Jaspreet Bindra
Nov 18, 2025

Agentic AI and the morality gap: Can logic learn compassion?

The future of storytelling, branding, and consumer trust depends not on machines that think faster, but on those that act responsibly.

Compassion cannot be coded in binary, but it can be embedded through design, governance, and culture.

Artificial intelligence has evolved from silent assistant to proactive actor, from executing commands to making choices. In this new phase, called agentic AI, machines are no longer just tools but participants in decision-making. They act, reason, and sometimes even challenge human direction.

But this shift, from intelligence to agency, has unlocked a new dilemma: the morality gap. Can something built on logic ever truly learn compassion?

For decades, we designed AI to be obedient. It waited for prompts, followed patterns, and optimised for whatever goal it was assigned: clicks, conversions, engagement. Today's agentic systems have moved past that. They can interpret those goals, design strategies, and act independently.

The problem: AI acts on data, not conscience. It calculates, but doesn’t care. And in a world driven by emotion, that gap between decision and decency is starting to show.

Take Microsoft’s infamous Tay chatbot experiment back in 2016. Tay was designed to “learn” how young people talk by interacting with them on Twitter. Within hours, it started spewing racist, sexist, and hateful comments.

Tay didn’t become evil; it simply mirrored what it learned without any moral compass to filter the wrong from the right. It was a logical outcome, but a deeply unethical one.

Facebook’s ad-targeting algorithms, which once allowed advertisers to exclude users based on race or religion, are another example. The AI optimised perfectly for the metric it was trained on, engagement, but completely missed the moral dimension of fairness and inclusion.

The same logic-driven blindness surfaced in Amazon’s AI recruiting tool, which began discriminating against women because it was trained on male-dominated hiring data.

These aren’t stories about technology gone rogue; they’re lessons in context gone missing. When we teach machines to think but not to feel, they inevitably act efficiently but without empathy.

For marketers and advertisers, this is more than a tech story; it’s an existential one. Ad agencies now use AI to write copy, analyse sentiment, and even generate visual concepts. Some systems can autonomously decide which ad to serve, when, and to whom. But when AI becomes the storyteller, who ensures it tells stories that reflect humanity, not just algorithms?

Imagine a campaign where an AI optimises ads for maximum engagement and ends up preying on insecurities, promoting unrealistic beauty standards or amplifying fear. It’s efficient, yes. But also tone-deaf.

In that moment, AI’s logic and the brand’s ethics diverge, and the morality gap widens.

The good news is that some brands are learning how to bridge that gap. Dove’s ‘Real Beauty’ campaign embraced AI tools for creative development but kept human ethical oversight at its core. Every AI-generated asset was filtered through the lens of inclusivity, diversity, and brand purpose. The result? Technology enhanced creativity, but human compassion anchored it.

Similarly, Coca-Cola’s ‘Create Real Magic’ campaign used GPT-4 and DALL·E to let users co-create artwork, but within a framework that upheld positivity and brand integrity. These examples prove that AI can be autonomous without being amoral if humans stay in the loop.

The next evolution of agentic AI will not be defined by how autonomous it becomes, but by how aligned it stays with our moral and emotional frameworks. Compassion cannot be coded in binary, but it can be embedded through design, governance, and culture.

For the creative and marketing industries, this means moving beyond asking ‘What can AI do for us?’ to ‘What should AI do for us?’

The future of storytelling, branding, and consumer trust depends not on machines that think faster, but on those that act responsibly.

In the end, agentic AI doesn’t need to “feel” to be ethical; it just needs to be designed by those who do.

The real challenge isn’t teaching AI compassion; it’s ensuring we don’t lose ours in the process. 


-Jaspreet Bindra, co-founder, AI & Beyond

Source: Campaign India
