
When falsehoods go viral, facts must work harder

Deepfakes and AI scams test communicators’ ability to safeguard trust. Pre-bunking and literacy now matter as much as storytelling.

The recent exchange of hostilities between several countries, and the cloud of misinformation surrounding these conflicts, have reinforced the importance of accurate, fact-based, and informed communication. Misinformation has been responsible for real-world harm across the globe. The World Economic Forum's Global Risks Report 2025 described rising misinformation as one of the most significant risks to environmental, economic, technological, and institutional systems.

In a post-truth era, where emotional appeals and personal beliefs often outweigh objective facts in shaping public opinion, the rapid spread of sensational, emotionally charged, totally fabricated, or deliberately misleading information is triggering rapid but uninformed societal reactions.

A 2018 MIT study found that false information spreads more quickly than accurate information. And although the dissemination of false or misleading information is not new, the risk it presents is magnified manifold by the recent, widespread adoption of generative AI to produce what is known as 'synthetic content'.

There has been a massive surge in deepfake and voice-cloning scams, which increasingly impersonate both everyday consumers and high-profile figures across business, politics, entertainment, and sports. From hyper-realistic deepfakes of politicians during elections to manipulated images and recordings of celebrities, AI-generated fake content is saturating the internet, fuelling misinformation and propaganda.

The challenge is particularly pronounced in India, where fake news circulates widely on platforms such as WhatsApp and Facebook. Many users unknowingly forward such content without verifying its authenticity, thus amplifying its reach.

The impact of misinformation varies in severity, but one of its most concerning consequences is the erosion of trust in digital information. As scepticism grows, it weakens democratic institutions and disrupts the credibility of news sources.

Much like Aesop's fable of the shepherd boy who cried wolf, repeated exposure to misleading information can lead individuals to dismiss even legitimate and critical news, making it harder to distinguish truth from deception.

Tackling AI-driven misinformation

Tackling misinformation may seem like an uphill battle, yet it remains crucial to safeguarding the truth and, by extension, democratic systems. While system-level measures may be more effective in curbing misinformation, individual-level strategies such as industry collaborations, pre-bunking, and literacy training pose fewer risks to free expression and rely less on political and institutional support, making them easier to implement. Here are some strategies to counter it:

1. Partnering for truth

Combating misinformation requires a strategic approach. This includes collaborating with credible partners, such as influencers who prioritise accuracy and transparency in their content, reputable organisations, and industry groups, to reinforce accurate, evidence-based messaging.

Establishing shared ethical guidelines, implementing robust content vetting, and ensuring transparency in communication are critical to maintaining credibility. Leveraging data-driven insights allows for continuous monitoring of misinformation trends, measuring impact, and refining strategies for greater effectiveness. By building a strong network of trusted partners, communication professionals can actively counter false narratives and foster a more informed public discourse.

2. The power of pre-bunking

Compared to debunking, which corrects misinformation after exposure, pre-bunking is a more proactive and scalable approach, as it inoculates individuals against falsehoods before they encounter them.

For example, a message like ‘Scammers may try to convince you that a limited-time investment will double your money instantly’, followed by ‘Such schemes are often fraudulent and designed to exploit urgency and fear’ helps people recognise and resist misinformation before encountering it.

Pre-bunking is also more effective at combating misinformation because it builds psychological resistance to falsehoods before they take hold. Strategically communicating pre-emptive warnings about manipulative tactics can reduce susceptibility to misinformation at scale.

In mass real-world settings like social media, well-crafted pre-bunking messages, integrated into strategic communication campaigns, can proactively shape public perception, reinforce critical thinking, and sustain long-term resilience against misinformation.

3. Building resilience through literacy training

Media literacy training equips individuals with the skills to critically evaluate information, reducing susceptibility to misinformation. By learning to assess sources, identify biases, and recognise manipulative tactics, people become more discerning consumers of information.

Training programs, educational content, and interactive exercises, such as analysing real-world examples of misleading narratives, help build these critical thinking skills. By fostering awareness and scepticism, literacy training enables individuals to make informed decisions and resist the influence of false or misleading content in an increasingly complex digital landscape.

Advancing our approach to digital safety

As AI continues to reshape the digital landscape, communication professionals have a pivotal role in safeguarding information integrity. The responsibility now extends beyond traditional messaging to countering misinformation proactively, not just by debunking falsehoods but by engaging audiences with factual, transparent, and compelling narratives.

With their expertise in identifying, dissecting, and countering false narratives, communication professionals are uniquely positioned to mitigate the spread of misinformation.

This shift requires communication teams to function not just as news disseminators but as credible fact-checkers, balancing accuracy with a nuanced approach that fosters dialogue rather than division. Building trust through objectivity, data-driven storytelling, and strategic engagement is key to shaping public perception.

In communications, credibility is invaluable, and the spread of misinformation, including AI-generated falsehoods, can undermine it in an instant. The only way to stay ahead is to follow a proactive, responsive, and trust-driven approach.

- Piyal Banerjee, director—external communications and media relations, IPM India.

Source:
Campaign India
