July 7, 2023

Europe must act against AI-written reviews before it’s too late

And the biggest tool we have to fight this AI-created problem? That’s right: more AI


Parts of modern life are inescapable. We all use mapping software for directions, check the news on our phones, and read online reviews of products before buying them.

Technology didn’t create these things, but it has democratised them, making them easier to access and contribute to. Take online reviews: nowadays, people can share their honest opinions about products and services in a way that, in times gone by, would’ve been impossible.

Yet what the internet giveth, it can, uh, taketh away too.

It didn’t take long for nefarious actors to realise they could exploit this newfound capability to flood the market with fake reviews, creating an entirely new industry along the way.

In recent years, much of the discussion around fake reviews has dissipated, but now? They’re back with a vengeance — and it’s all because of AI.


The ascension of large language models (LLMs) like ChatGPT means we’re entering a new era of fake reviews, and governments in Europe and the rest of the world need to act before it’s too late.

AI-written reviews? Who cares?

As flippant as that sounds, it’s a valid question. Fake reviews have been part of online life for almost as long as the internet has existed. Will things really change if it’s sophisticated machines writing them instead of humans?

Spoiler: yes. Yes it will.

The key differentiator is scale. Previously, text-generating software was relatively unsophisticated. What it created was often sloppy and vague, meaning the public could immediately see it was untrustworthy: crafted by a dumb computer rather than a slightly less dumb person.

This meant that, for machine-written fake reviews to successfully trick people, other humans had to be involved in crafting the text. The rise of LLMs means that’s no longer the case.

Using ChatGPT, almost anyone can produce hundreds of fake reviews that, to all intents and purposes, read as if they were written by a real person.

But, again, so what? More fake reviews? Who cares? I put this to Kunal Purohit, Chief Digital Services Officer at Tech Mahindra, an IT consulting firm.

He tells me that “reviews are essential for businesses of any size,” because they help them “build brand recognition and trust with potential customers or prospects.”

This is increasingly important in the modern world, as intensifying competition makes customers more discerning and more demanding of the companies they buy from.

Now that user experience is a core selling point, and brands prioritise this aspect of their business, Purohit says that bad reviews can shatter an organisation’s ability to do business effectively.

To put that another way: fake reviews aren’t just something that can convince you to buy a well-reviewed book that, in reality, is a bit boring. They can be deployed both positively and negatively and, when levelled at a company, can seriously damage its reputation and ability to operate.

This is why we — and the EU — must take computer-generated reviews seriously.

But what’s actually going on out there?

At this point, much of the discussion is academic. Yes, we’re aware that AI-written reviews could be a problem, but are they? What’s actually happening?

Purohit tells me that, already, “AI-powered chatbots are being used to create fake reviews on marketplace products.” Despite the platforms’ best efforts, they’ve become inundated with computer-generated reviews.

This is confirmed by Keith Nealon, the CEO of Bazaarvoice, a company that helps retailers show user-generated content on their site. He says he’s seen how “generative AI has recently been used to write fake product reviews,” with the goal being to “increase the volume of reviews for a product with the intent to drive greater conversion.”

AI-written reviews are gaining momentum, but, friends, this is just the beginning.

Long, hard years are on the horizon

The trust we have in reviews is about to be shattered.

Nealon from Bazaarvoice says the use of AI at scale could have “serious implications for the future of online shopping,” especially if we reach a situation where “shoppers can’t trust whether a product review is authentic.”

On the business side, meanwhile, the temptation to use computer-generated reviews will only grow.

“We all want our apps to be at the top of the rankings, and we all know one way to get this is through user engagement with reviews,” Simon Bain — CEO of OmniIndex, an encrypted data platform — tells me. “If there’s the option to mass produce these quickly with AI, then some companies are going to take that route, just as some already do with click farms for other forms of user engagement.”

He continues, saying that while computer-written reviews are bad enough on their own, the fact that this methodology becomes an extra tool for click farms is even worse. Bain foresees a world where AI-generated text can “be combined with other activities like click fraud and mass-produced in a professional way very cheaply.”

This means that, rather than being a standalone problem, AI-written reviews have the potential to become a huge cog in an even bigger misinformation machine, one that could erode trust in all aspects of online life.

So… can anything be done?

Hitting back against AI-written reviews

There were two common themes across all the experts I spoke with regarding fighting computer-generated reviews. The first was that it’s going to be tough. And the second? We’re going to need artificial intelligence to fight against… artificial intelligence.

“It can be incredibly difficult to spot AI-written content — especially if it is being produced by professionals,” Bain says. He believes we need to crack down on this practice in the same way we’ve been doing so for similar fraudulent activities: with AI.

According to Bain, such a system would work by analysing huge pools of data around app use and engagement, applying techniques like “pattern recognition, natural language processing, and machine learning” to spot fraudulent content.
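To make that concrete, here’s a minimal sketch of the kind of text classifier such a system might include, written in Python with scikit-learn. Everything in it is illustrative: the sample reviews and labels are invented, and a real system would train on thousands of labelled examples and fold in the behavioural signals Bain mentions rather than relying on text alone.

    # Illustrative only: a tiny fake-review classifier along the lines
    # Bain describes. The training data below is invented; a production
    # system would learn from large labelled datasets plus engagement data.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labelled data: 1 = suspected machine-generated, 0 = human-written.
    reviews = [
        "This product exceeded my expectations in every conceivable way.",
        "Strap broke after two weeks. Pretty disappointed, honestly.",
        "An outstanding purchase delivering exceptional value and quality.",
        "Took ages to arrive but works fine. Battery life is meh.",
    ]
    labels = [1, 0, 1, 0]

    # Character n-grams tend to catch the oddly uniform phrasing of
    # generated text better than whole-word features alone.
    model = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
        LogisticRegression(),
    )
    model.fit(reviews, labels)

    # Score an incoming review; a high probability means "flag for review".
    incoming = ["A truly remarkable product with unparalleled performance."]
    print(f"P(generated) = {model.predict_proba(incoming)[0][1]:.2f}")

In practice, text features alone wouldn’t cut it; the pattern recognition Bain alludes to would also weigh metadata like posting frequency, account age, and review timing.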

Purohit and Nealon agree with this; in our conversations, each pointed towards AI’s potential to solve its own problems.

Still, it’s Chelsea Ashbrook, Senior Manager of Corporate Digital Experience at Genentech, a biotechnology company, who summed it up best: “Looking into the future, though, we might need to develop new tools and techniques. It is what it is; AI is getting smarter, and so must we.”

The government must get involved

At this stage, we encounter another problem: yes, AI tools can combat computer-generated reviews, but how does this actually work? What can be done?

And this is where governing bodies like the EU come in. 

I put this to Ashbrook: “They certainly have their work cut out for them,” she says. Ashbrook then suggests one way governments can combat this upcoming plague may be to “establish guidelines that necessitate transparency about the origin of reviews.”
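What might such transparency look like in practice? Here’s one purely hypothetical shape for it: a review record that carries provenance fields alongside the text, something a platform could be required to expose. The field names are my own invention, not drawn from any real guideline.

    # Hypothetical review record with provenance fields; the field names
    # are invented for illustration, not taken from any actual regulation.
    from dataclasses import dataclass

    @dataclass
    class Review:
        text: str
        author_id: str
        verified_purchase: bool  # did the author actually buy the product?
        ai_assisted: bool        # was generative AI used to draft the text?
        origin: str              # e.g. "web_form", "api", "bulk_import"

    review = Review(
        text="Great blender, survived a year of daily smoothies.",
        author_id="u_81242",
        verified_purchase=True,
        ai_assisted=False,
        origin="web_form",
    )

Fields like these would, at minimum, let shoppers and regulators distinguish a verified, human-written review from an AI-drafted one.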

Bain from OmniIndex, on the other hand, mentions the importance of ensuring that existing laws and regulations around elements like “fraud, and cybercrime keep up to date with how [AI] is being used.”

Purohit from Tech Mahindra believes we’re already seeing many positive initiatives and policies from governments and key AI industry professionals around the responsible use of the tech. Despite this, “there are several ways official bodies such as the EU … can prevent [it] from getting out of hand.”

He points towards “increasing research and development, [and] strengthening regulatory frameworks” as two important elements of this strategy.

Beyond that, Purohit believes governments should update consumer protection laws to combat the dangers posed by AI-generated content. This could cover a range of things, including “enforcing penalties for the misuse or manipulation of AI-generated reviews” or “holding platforms accountable for providing accurate and reliable information to consumers.”

There you go, Europe, feel free to use those ideas to get the ball rolling.

AI-written reviews: Here to stay

Want to read the least shocking thing you’ve seen in some time? AI is going to change the world.

Despite that, the press tends to obsess over topics like the singularity or a potential AI-driven apocalypse, which, to be honest, sound far sexier and drive far more clicks.

But in my mind, it’s the smaller things like AI-written reviews that will have the most impact on our immediate lives.

Fundamentally, society is based on trust: on the idea that the people around us share a vaguely similar set of values. AI being used in these small ways has the potential to undercut that. If we can no longer believe what we see, hear, or read, then we no longer trust anything.

And if that happens? Well, it won’t take long until things start crumbling around us.

This is why governmental bodies like the EU can’t adopt a “wait-and-see” approach to regulating areas as seemingly inconsequential as AI-written reviews. There must be regulation — and it must be fast. Because if we delay too long, it may already be too late.
