The Rise of Deepfakes… It’s Time for a Reality Check
You may or may not have heard of the Gorilla Experiment.
Regular readers of my articles will remember that it highlights what scientists call selective attention – the tendency of our brains to slightly close the curtains on the windows of our minds and focus only on what we expect to see, hear or experience at that moment.
In itself, selective attention is not a bad thing. Quite the opposite, in fact. It’s a critical function of our brains, helping us filter out unhelpful data and focus on the signals we need to register. It’s why an exhausted new mother will sleep through someone vacuuming in the next room but wake up the second her baby makes a noise.
But, as with most things in life, you can have too much of a good thing.
The problem arises when selective attention collides with biased reference points, such as those fed to us via social media. (Think of Facebook or Instagram posts, for example, that only show us pictures of our friends living lives that seem so much more glamorous and exciting than our own.)
This media melting pot can wreak havoc with our beliefs, creating a paradise for fake news.
Fake news is so dangerous because it’s so seductive. It grabs our attention and prejudices our thinking by providing information that is not only factually incorrect but actively and strategically chosen to affect our cognitive processes and influence the decisions we make.
Even more insidious, however, is fake news’ evil big brother, deepfakes.
Deepfakes emerged as the unplanned and unwanted lovechild of generative artificial intelligence (GenAI) – specifically, deep learning. The name is a blend of “deep learning” and “fake”, a nod to the technology’s ability to create fake versions of real, existing videos, images or audio material. The results look so realistic – and are becoming even more so as the technology develops – that it’s sometimes nearly impossible to tell them apart from the real thing.
The technology was originally used to great (and harmless) effect in the entertainment industry. The problems began when people started using it to make it appear as though politicians and other well-known figures were saying or doing things they never actually said or did.
This kind of content is dangerously misleading as it can have a profound influence on public opinion – even to the point of influencing the outcome of elections or inciting tensions between countries.
Global verification platform Sumsub reports that the number of deepfakes detected in the first quarter of 2023 was 10% higher than in the whole of 2022, with most of them originating in the UK. More worrying still, just over 50% of all online misinformation comes from manipulated images.
The problem is set to escalate throughout 2024, as major elections take place in the UK, USA, EU and right here in South Africa.
In January this year, the New Hampshire Attorney General’s Office was called on to investigate reports of an apparent robocall that used artificial intelligence to mimic President Joe Biden’s voice. The message reportedly told local residents not to vote in the primary election that was set to happen later that month.
Only a few days later, we read reports that sexually explicit deepfake images of Taylor Swift were circulating widely on social media.
Actor Tom Hanks was also deepfaked, in an ad that made it appear he was endorsing a dental plan. He later took great pains to assure people he had nothing to do with either the ad or the plan.
Popular radio personality Zoe Ball suffered a similar fate earlier this year when deepfake images of her promoting a fraudulent crypto investment website started circulating.
And here at home, SABC TV News anchor Bongiwe Zwane’s voice was used in an automated telephone message asking people to donate money to a non-existent foundation.
Of course, politicians and other famous people aren’t the only ones at risk. Businesses and private individuals are easy pickings for criminals who use deepfakes to help them commit fraud.
Harry Potter fans might recognise a reference here to Polyjuice, an advanced potion that allows the drinker to assume the physical appearance of another human being. If only this ability were limited to fantasy novels.
Unfortunately, the real-life ability to look and sound like anyone, especially people who are authorised to approve payments, gives fraudsters almost unlimited opportunities to exploit weak internal procedures and extract vast sums of money almost at will.
No wonder, then, that the World Economic Forum ranks deepfakes as one of the most worrying uses of AI, and disinformation as one of the top risks of 2024.
Ironically, the democratisation of information and the increasing affordability of internet-connected devices mean that creating deepfake audio or visuals is relatively cheap and ridiculously easy.
You don’t need advanced IT knowledge – you don’t even need advanced software.
A quick Google of “how to make a deepfake” reveals a mind-blowing number of websites, apps and YouTube videos obligingly filled with helpful information.
In May last year, Russian state-owned media outlet Sputnik International posted a series of tweets attacking the Biden administration. Each one prompted a well-written response from an account called CounterCloud, sometimes complete with links to newspaper articles or websites.
This in itself is not unusual, but here’s what is:
Everything, from the responses to the articles – and even the journalists who supposedly wrote them – was created entirely by artificial intelligence algorithms.
The mastermind behind this “experiment”, who goes by the name Nea Paw, did it to highlight the danger of mass-produced AI disinformation.
The entire campaign cost in the region of $400.
The wide availability and low prices of many generative AI tools make creating sophisticated information campaigns fast, easy and cheap.
“I don’t think there is a silver bullet for this,” says Paw, “just as there is no silver bullet for phishing attacks, spam, or social engineering.”
There are things we can do to mitigate the damage – education, making GenAI systems that block misuse, or equipping browsers with AI-detection tools, for example.
“But I think none of these things is really elegant, cheap or particularly effective,” Paw adds.
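To make one of those mitigations concrete – GenAI systems that block misuse at the source – here is a deliberately naive Python sketch. It is an illustration only: real guardrails use trained classifiers, and the keyword patterns below are invented stand-ins, not an actual safeguard.

```python
# A deliberately naive sketch of "blocking misuse at the source": screen a
# generation request for signs of impersonation before any model runs.
# The patterns below are invented placeholders, not a real safeguard.
import re

IMPERSONATION_PATTERNS = [
    r"\bin the voice of\b",
    r"\bclone\b.*\bvoice\b",
    r"\bmake (it|him|her|them) (look|sound) like\b",
    r"\bdeepfake\b",
]

def request_allowed(prompt: str) -> bool:
    """Reject prompts that appear to ask for impersonation of a real person."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in IMPERSONATION_PATTERNS)

print(request_allowed("Write a poem about autumn"))                     # True
print(request_allowed("Clone this politician's voice for a robocall"))  # False
```

Production systems pair filters like this with trained classifiers and provenance checks, but the shape of the idea – refuse the request before anything is generated – is the same.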
It’s a truly worrying situation; the implications of deepfake technology and the manipulation of data are both complex and frightening.
Experts predict that by 2025, around 8 million deepfakes will be circulating online, and the numbers are expected to double every 6 months.
As AI algorithms advance, the line between genuine and fake or manipulated content becomes ever blurrier, and there are serious implications for people, privacy, and trust itself.
There is an urgent need for sophisticated detection software and tougher, zero-tolerance legislation. This is easier said than done: while it’s important to have robust protection in place, it’s equally important not to stifle useful, informative and entertaining GenAI content.
It’s obvious there needs to be balance, but it’s also obvious things cannot keep progressing in the same way they have been.
Education and awareness are key, as are more advanced deepfake detection tools. These must be backed up by governmental support – we need updated legislation that promises severe consequences for people misusing the technology.
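For the technically curious, here is a toy sketch of one detection idea explored in the research literature: images produced by generative models often carry tell-tale statistical fingerprints in their frequency spectrum. Everything here is a simplifying assumption – real detectors are trained models, and the 0.05 threshold and the file name are invented placeholders.

```python
# A toy illustration of frequency-based deepfake detection: up-sampling in
# generative models can distort an image's high-frequency spectrum. Real
# detectors are trained models; the threshold here is purely illustrative.
import numpy as np
from PIL import Image

def radial_spectrum(path: str) -> np.ndarray:
    """Azimuthally averaged power spectrum of a grayscale image."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = power.shape
    y, x = np.indices((h, w))
    radius = np.hypot(x - w // 2, y - h // 2).astype(int)  # distance from centre
    totals = np.bincount(radius.ravel(), weights=power.ravel())
    counts = np.bincount(radius.ravel())
    return totals / np.maximum(counts, 1)  # mean power per frequency band

def looks_synthetic(path: str, threshold: float = 0.05) -> bool:
    """Flag images whose share of high-frequency energy looks unnatural.
    The 0.05 cut-off is a made-up placeholder, not a calibrated value."""
    profile = radial_spectrum(path)
    profile = profile / profile.sum()
    high_freq_share = profile[len(profile) // 2:].sum()
    return high_freq_share > threshold

print(looks_synthetic("suspect_photo.jpg"))  # hypothetical input file
```

A real tool would learn that decision boundary from thousands of labelled images rather than hard-coding it, but the underlying signal – how energy is distributed across spatial frequencies – is one that several published detectors rely on.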
Getting this right on a global level requires close collaboration – let’s share information, best practices and technology. Together, we CAN protect ourselves against the threat of deepfakes.