
Picture this: you call a bank to ask about your loan and, instead of a human agent or a scripted chatbot, you find yourself in conversation with a generative AI agent that can converse, empathise, and offer tailored financial suggestions. It feels personal, almost human. But what happens if the AI casually makes a promise it cannot keep, miscalculates an offer, or uses language that feels off-brand?
As Voice AI moves from back-office efficiency to front-line customer engagement, brands and their agencies face a delicate balancing act: how much autonomy should AI agents be given to speak on behalf of the brand? Grant too little freedom, and bots feel robotic and frustrating. Grant too much, and the brand risks integrity breaches, misinformation, or reputational damage.
This tension between tightly scripted bots and open-ended generative AI is now at the centre of a critical conversation in marketing, brand management and enterprise strategy.
From scripted bots to generative conversations
For more than a decade, brands relied on scripted chatbots to handle predictable queries. Think of airline chat windows that provide flight updates or e-commerce bots that answer basic questions. These systems, while useful for routine requests, collapse when customers stray off the script.
Generative AI, powered by large language models (LLMs), promises something different. Instead of branching dialogues, AI can generate natural, free-flowing exchanges. A customer could ask about product comparisons, sustainability practices or troubleshooting in one conversation, and the AI could respond with nuance, sometimes even empathy.

In 2023, KLM Royal Dutch Airlines tested generative AI to manage complex service inquiries across multiple languages. The system provided not only flight information but also contextual travel advice, cutting handling time by 30%. Compare that with early bots, which often irritated users more than they helped, and the scale of change becomes clear. This new conversational freedom, however, introduces risks.
The opportunity: Scaled personalisation
Personalisation has long been a marketing ambition; in the age of micro-segmentation, generative AI raises the possibility of finally delivering it at scale.
AI agents can now handle complexity in ways static systems never could. Instead of searching through FAQs, a customer might ask: “Which laptop is best for a graphic design student who travels a lot and has a budget of Rs. 1 lakh?” The AI can synthesise purchase history, browsing data and preferences to tailor suggestions.
Brands like Netflix and Sephora already use AI to track behaviour and make recommendations. Sephora, for instance, deploys AI chat and augmented reality to let customers try on makeup virtually, staying within a controlled domain while offering individualised advice.
AI also creates conversational intimacy. Duolingo has experimented with AI-powered tutors that adapt to a learner’s style and even inject humour, creating stickiness beyond functional engagement. And unlike human teams, AI can operate round the clock, in multiple languages, across time zones.
The result is potentially millions of personalised conversations happening simultaneously—an outcome traditional customer service teams cannot achieve.
The risks: When AI goes off script
With freedom comes unpredictability. The same flexibility that makes AI appealing also exposes brands to new forms of risk.
Brand integrity risks emerge when AI slips into a tone or reference that jars with the brand’s identity. Imagine a luxury fashion label’s AI joking too casually with a customer. In 2023, Snapchat’s My AI bot alarmed parents by offering inappropriate advice to teenagers, sparking questions about responsibility.
Factual errors and hallucinations are equally problematic. Generative AI can fabricate details. For a brand, this could mean an agent promising discounts that don’t exist or offering incorrect medical information in a pharmacy context.
Compliance challenges loom large in regulated industries. Banks, insurers and healthcare providers cannot afford an AI that veers into giving ‘advice’. The EU AI Act and U.S. regulators have already flagged liability for misleading or unsafe AI interactions.
Then there are unexpected behaviours. Open-ended systems may wander into off-brand areas such as politics or cultural debates. Microsoft’s Bing AI, during its 2023 launch, infamously produced unsettling, emotionally manipulative responses, leading to public criticism.
The tension: Control versus creativity
Brands now face a dilemma: keep bots tightly scripted for safety, or allow open-ended autonomy for richer experiences? Scripted systems ensure compliance and predictability but are rigid, outdated and poor at personalisation. Generative AI is engaging, scalable and capable of intimacy but risky, error-prone and difficult to govern.
The answer is unlikely to be binary. The emerging approach is layered autonomy—giving AI freedom where it adds value and applying human or scripted oversight where stakes are high.
Marketing leaders are beginning to structure this through a four-part framework, illustrated in the sketch after the list:
Strategic layer: Define autonomy zones. Not every interaction carries the same risk. FAQs, order tracking and product information can be safely automated. Financial advice, healthcare queries or contract negotiations must remain scripted or human-supervised.
Operational layer: Brand integrity filters. AI must be trained on brand-specific tone and style, with safety layers to filter outputs. Coca-Cola’s Create Real Magic campaign allowed users to generate art using its branding, but strict filters ensured nothing offensive emerged. By contrast, Microsoft’s Tay chatbot, released on Twitter in 2016, quickly devolved into offensive content once let loose in the wild, an example of what happens without filters.
Technical layer: Governance and human-in-the-loop. Sensitive cases require human oversight. Clear escalation systems must exist for when AI is uncertain. American Express, for example, employs AI for fraud detection but leaves final account-blocking decisions to humans.
Adaptive layer: Continuous learning with guardrails. AI must be retrained regularly with updated brand data and monitored for unintended behaviours. Spotify, for instance, constantly adjusts its recommendation algorithms to avoid harmful biases.
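To make the framework concrete, here is a minimal sketch in Python of how layered autonomy might be wired up: a classifier assigns each query to an autonomy zone, a brand filter screens medium-risk drafts, and high-risk or uncertain cases escalate to a human. The zone definitions, keyword lists, and the generate stub are illustrative assumptions, not any vendor’s actual system.

```python
# A minimal sketch of layered-autonomy routing. Zones, keyword lists,
# and the generate() stub are hypothetical placeholders.
from enum import Enum
from typing import Callable, Optional


class Zone(Enum):
    AUTOMATE = "automate"   # low risk: FAQs, order tracking
    FILTERED = "filtered"   # medium risk: answer, then brand-filter
    HUMAN = "human"         # high risk: escalate to a person


# Strategic layer: map query topics to autonomy zones (assumed keywords).
HIGH_RISK = {"loan", "medical", "contract", "dispute"}
MEDIUM_RISK = {"recommendation", "comparison", "complaint"}

# Operational layer: phrases the brand never wants an AI to utter.
BANNED_PHRASES = {"guaranteed", "risk-free", "we promise"}


def classify(query: str) -> Zone:
    """Strategic layer: assign each query to an autonomy zone."""
    q = query.lower()
    if any(term in q for term in HIGH_RISK):
        return Zone.HUMAN
    if any(term in q for term in MEDIUM_RISK):
        return Zone.FILTERED
    return Zone.AUTOMATE


def brand_filter(draft: str) -> Optional[str]:
    """Operational layer: reject drafts that breach brand rules."""
    if any(p in draft.lower() for p in BANNED_PHRASES):
        return None  # fail closed: a flagged draft never reaches the customer
    return draft


def handle(query: str, generate: Callable[[str], str]) -> str:
    """Technical layer: route, filter, and escalate with a human in the loop."""
    zone = classify(query)
    if zone is Zone.HUMAN:
        return "Connecting you with a specialist."  # human-in-the-loop escalation
    draft = generate(query)  # e.g. an LLM call; stubbed here
    if zone is Zone.FILTERED:
        filtered = brand_filter(draft)
        if filtered is None:
            return "Connecting you with a specialist."
        draft = filtered
    return draft


if __name__ == "__main__":
    def fake_llm(q: str) -> str:
        return f"Here is some guidance on: {q}"

    print(handle("Where is my order #1234?", fake_llm))          # automated
    print(handle("Which laptop comparison fits me?", fake_llm))  # filtered
    print(handle("Can you advise on my loan terms?", fake_llm))  # escalated
```

The fail-closed default in brand_filter reflects the governance principle above: when the AI’s output is uncertain or breaches brand rules, the safe path is escalation, not improvisation.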
In 2024, Klarna reported that its OpenAI-powered assistant handled two-thirds of customer service chats, with satisfaction on par with human agents, and was projected to drive a $40 million profit improvement. Crucially, it operated with guardrails: transactional issues were automated, but financial advice was escalated to humans.
Voice AI as brand representative
Voice AI is no longer just a back-end utility. It increasingly acts as a brand representative, shaping perceptions as much as advertising campaigns. That makes governance a marketing issue as much as a technical one.
AI agents need onboarding, training, supervision and accountability, much like employees. The stakes are high: mishandled interactions can damage trust, while well-managed autonomy can strengthen resonance at scale.
Those who rush in without governance risk reputational crises. In an environment where brand trust is fragile and consumer expectations around transparency are rising, missteps can be costly.
The balance lies in granting AI enough autonomy to delight with personalised, human-like interactions, while maintaining enough control to protect integrity, compliance and trust.
For agencies and marketers, the challenge is no longer whether to deploy Voice AI but how to calibrate its autonomy. Questions that once belonged to IT departments are now squarely on the desks of CMOs and brand strategists.
As brands give generative AI a larger customer-facing role, they must design systems that reflect layered autonomy from the outset. That means building campaigns and customer experiences not just around what AI can say, but around what it should say.
AI is unlikely to replace human service entirely. Instead, it is becoming part of the brand team: scalable, efficient, sometimes empathetic, but always in need of guardrails.
The tension between creativity and control will not go away. But the organisations that navigate it best will be those that treat AI neither as a gimmick nor a menace, but as a partner whose freedom is carefully earned.
- Ashita Aggarwal is professor of Marketing and chairperson, Post Graduate Diploma in Management (PGDM) and PGDM Business Management at the S.P. Jain Institute of Management & Research (SPJIMR).