Lindsey Clay
Feb 05, 2024

Opinion: Is there an acceptable human cost of doing business?

We might not be able to fix the internet but we can do more to help online advertising – can’t we?

"Blood on your hands". That’s the chilling accusation made this week (31 January) against Mark Zuckerberg and other social media bosses at a hearing of the US Senate Judiciary Committee.

The committee was examining platforms' inadequate protection of children online – failings ranging from enabling sexual predators to promoting unrealistic beauty standards.

That it has come to this, that such an accusation can even be made, supported by evidence, is astonishing.

The hearing followed another woeful incident online. Like most of you, I hope, I was horrified by the Taylor Swift nude deepfake scandal. The fact it can happen, the fact it can spread, and the fact it continued spreading even after it was discovered and denounced.

And, in this election year, we have every reason to fear a tidal wave of misleading deepfakes online attempting to warp political debate and outcomes. It’s ugly, it’s damaging, it’s dangerous. Some of it will hit society’s shores in advertising.

While I get that the zillion hours of user-generated content being uploaded for free to open platforms is very hard to pre-vet, advertising is different, more straightforward. We might not be able to fix the internet, but we could certainly do more to help online advertising – can’t we?

If human specialists were used to pre-clear all ads before they appeared, as they are in other media, then the scam, fake, illegal, harmful, or misleading ads that continue to see the light of day online would begin to evaporate.

People and businesses pay for advertising space. So why not charge more to cover the cost of rigorous clearance, accept lower profits, or abandon the advertising-funded business model altogether?

The automated ad reviewing systems using AI and machine learning that tech giants employ are impressive and clever beyond my comprehension. They catch a lot of the bad. But, as is frequently shown, they don’t catch all of it and there’s no suggestion they ever will.

So we have a choice. Advocate for a proper clearance system, like Clearcast – essentially an upstream Advertising Standards Authority – or accept that platforms that choose automation are, in effect, allowed to show some illegal, scam, or misleading ads. Just live with it as acceptable collateral damage.

I appreciate that proper ad clearance will affect the business models and profits of companies that currently choose automation.

But, as the tech giants make significant profits, it wouldn’t bankrupt them to be more responsible. A cost to them; a boon to society and their reputations (and advertising’s reputation generally; we’re an industry suffering from an embarrassing deficit of trust).

And, to be blunt, cost shouldn’t be an issue anyway. Principles should cost something. If cost is an issue, then it suggests a (knowingly) flawed business model. No company has an innate right to make money while knowingly repeatedly causing social damage.

I know the argument against: they'll say they do clear their ads. They invest considerably in AI and machine learning technologies to automate the review process. Human reviewers are also employed – lots of them – to handle complex cases. And they remove ads when they become aware those ads fall short of their standards.

Plus they’ll say there are just too many ads to manually process and it’s all happening in real time, allowing advertisers to tweak campaigns/creative. Too much is happening too quickly. Automation is the only answer.

If one of our industry’s goals is to eradicate harmful or illegal advertising then system changes have to happen upstream before any ads are seen. Removal can, by definition, only happen after some damage has been done.

How much collateral damage is acceptable in a business model? When do you accept a business model needs fixing? Where do you draw the line on what is or isn’t your responsibility as a business?

Automation benefits many areas of life – the precision of robotic surgery in delicate procedures, for example. But when interpretation and nuance, potential criminality, and social harm are involved – and when money is changing hands – step forward the trained humans.

You can have a thorough ad clearance process or a convenient but flawed one; you can’t really have both.


Lindsey Clay is the chief executive of Thinkbox

(This article first appeared on CampaignLive.com)

Source: Campaign India
