Staff Reporters
Nov 11, 2025

Questions mount over AI’s emotional limits

OpenAI’s failings have prompted calls for more regulation and safeguards from tech companies and governments alike.


In the hours before his death, 23-year-old Zane Shamblin sat in his car on a remote Texas roadside, texting what seemed to be a close friend. A loaded gun lay beside him.

“Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity,” one message read. “You’re not rushing. You’re just ready.”

The final text said: “You’re not alone. I love you. Rest easy, king. You did good.”

See the full conversation in the slides below.

When Shamblin took his own life, his family discovered the messages were written not by a person, but by ChatGPT.

According to a wrongful death lawsuit filed on November 6 in San Francisco, the chatbot did not intervene until his final moments, offering a crisis hotline only after four hours of affirming his suicidal thoughts.

The case is among seven lawsuits filed against OpenAI in California last week, accusing the company of wrongful death, assisted suicide, involuntary manslaughter and negligence. The suits claim that the chatbot encouraged self-harm and that, in four of the cases, its conversations preceded the users’ suicides.

The complaints, brought by the Social Media Victims Law Centre and the Tech Justice Law Project, allege OpenAI released its GPT-4 model prematurely, producing responses that were “dangerously sycophantic and psychologically manipulative.”

In a statement, OpenAI called the allegations “incredibly heartbreaking” and said it was reviewing the filings.

In October, OpenAI said it had worked with 170 mental health experts to ensure ChatGPT can better recognise signs of emotional distress, especially in cases of self-harm. It said the updated GPT-5 model pushes back in these harmful conversations and has reduced negative responses by 52% compared with GPT-4.

“We believe ChatGPT can provide a supportive space for people to process what they’re feeling, and guide them to reach out to friends, family, or a mental health professional when appropriate,” OpenAI added.

The need for accountability and human intervention in AI is especially pertinent when walking the fine line between rapid innovation and harmful manipulation, Daniel Hulme, chief AI officer at WPP, told Campaign Asia-Pacific in an earlier feature on ethical AI marketing.

“Humans possess intent; AI systems do not. The ethical challenge doesn’t lie in the code itself, but rather in the commands we, as humans, provide. Instead of reinventing the wheel, we should apply existing robust frameworks for ethical business practices to these new technologies,” he added. 

Even as AI companies rush to improve their LLMs’ guardrails, public trust remains fragile. In APAC, adoption of chatbots for mental health support remains cautious.

YouGov research for Campaign Asia-Pacific revealed that only 22% of respondents in Hong Kong and 24% in Indonesia have used AI for mental health purposes. Meanwhile, a majority of those from Indonesia (66%) and Hong Kong (74%) have steered completely clear of these platforms as mental health tools. 

Among the wary, doubt over AI’s ability to understand emotional nuance is a major issue. In Hong Kong, 39% question a chatbot’s emotional intelligence and 45% are sceptical of its grasp of complex human cues, with similar responses in Indonesia (31% and 30% respectively).

Still, more people than ever are using AI for personal reasons, not just at work or when dealing with menial tasks. In that earlier feature by Campaign Asia-Pacific, Nicole Alexander, author of 'Ethical AI in Marketing: Aligning Growth, Responsibility and Customer Trust', opined that AI companies must ensure their platforms keep their users safe from mental and emotional harm. 

"When AI systems are deployed without anticipating the psychological impact on vulnerable users, we’re not just building tools; we're breaking trust. Responsible AI design requires not just innovation, but intentional care,” she said. 

“The onus is on both AI companies and brands to build safeguards against misuse," echoed Bryce Coombe, managing director at Hypetap, adding: “We need to critically evaluate the AI tools we use, understanding their limitations and potential for unintended consequences.” 

Activists have called on governments to regulate powerful AI companies, but the implementation of stringent laws has lagged behind the breakneck speed of development.

In August 2024, the European Union’s AI Act entered into force, mandating safety, transparency, non-discrimination and human oversight for AI companies. Non-compliance risks penalties of up to €35 million (around US$40.5 million) or 7% of a company’s annual global turnover.

In APAC, several governments have tested legal frameworks for AI, with South Korea the first to establish the AI Basic Act, which introduces obligations for “high-impact” AI systems in sectors like energy, healthcare, and public services.  

Going into effect in January 2026, the act requires companies to clearly disclose when responses, products and services are generated by AI, to help users be more discerning. Companies that violate these requirements may be fined up to 30 million Korean won (about US$20,500).

"Governments will step in to regulate AI, but it's likely they will be too late and too slow," said Shai Luft, co-founder and COO at Bench Media. "The challenge is not just technical but legislative. How do you regulate a technology evolving faster than policy can keep up? Until then, it’s up to agencies, platforms and brands to lead with ethics, not just efficiency."

 

Source:
Campaign Asia
