By Jacques van Wyk
CEO: JGL Forensic Services

We’ve probably all seen those amusing posts on Facebook. The ones where videos of Barack Obama or Donald Trump have been cleverly doctored so it looks as though they’re singing a rap song. They make us laugh, no one gets hurt and it’s all good, harmless fun.

The same process is also used (albeit at a much more complicated and professional level) in live-action animated movies such as the recent remake of Disney’s The Lion King. But there is a dark side to the rise of this seemingly innocent, deep-learning technology, and it’s spawned a growing concern about the potential for abuse.

Credit: The Observer

As David von Drehle says in an article in the Washington Post: “Even as I delighted in [Disney’s] achievement, I was dismayed by their prowess. If a warthog and a meerkat can become Abbott and Costello, how much more easily can a presidential candidate’s image be made to utter an outrageous statement on election eve — or a president’s virtual self be manipulated to threaten a nuclear strike?

“Thus, a children’s movie points us to the fact that ‘deepfake’ warfare is not tomorrow’s problem. Like the ability to transplant Beyoncé into the body of a lioness, the capacity to falsify reality for hostile purposes is here and now.”

The frightening fact we have to face is that while deepfake technology is already widely in use in some of our favourite films, it’s now cropping up more and more on mainstream and social media, as well as in business.

What Do We Mean When We Talk About Deepfakes?

Deepfakes use what’s known as deep learning technology, which is a branch of machine learning that has the ability to learn what a source face looks like from different angles. The ultimate purpose is to then transpose the face onto a particular target – usually an actor, politician or other celebrity. The skills needed to pull this off were once only really found in Hollywood. Actor Peter Cushing, for example, was resurrected for the 2016 blockbuster Rogue One: A Star Wars Story, but it required huge technical skill, and used complicated and vastly expensive face-mounted cameras.

A scant three years later, we have simple software tools, such as DeepFaceLab and FakeApp, that are free, open source and relatively easy to learn. This means the tools needed to achieve a comparable effect to those used widely in the film-making industry are now readily available to all.

Credit: www.wired.com

Until recently, deepfakes appeared to be limited mainly to such comparatively inane pastimes as putting celebrities’ faces on porn stars’ bodies, and making politicians say amusing things. But according to a recent report in the Wall Street Journal, in the first recorded incident of an AI-generated voice scam, fraudsters swindled an unnamed UK firm out of around $243 000. The hapless victim was the CEO of the company, who believed he was talking on the phone to his boss when he followed instructions to transfer the money to a Hungarian bank account. The CEO later said the voice expertly mimicked his boss’ gentle German accent, tone and “melody.”

The warnings from industry experts are unambiguous: It’s not only businesses that face potential ruin from deepfake technology. Warning bells are ringing for personal, political and governmental applications. It’s surely only a matter of time before this technology is used to ruin someone’s marriage with a deepfake sex video, throw a city or country into widespread panic with a deepfake announcement of an imminent terrorist attack, or post a deepfake video of a political candidate saying something deeply offensive just prior to an election.

As US Senator Marco Rubio says, “In the old days, if you wanted to threaten the United States, you needed 10 aircraft carriers, nuclear weapons and long-range missiles. Today, you just need access to our internet system, to our banking system, to our electrical grid and infrastructure, and increasingly, all you need is the ability to produce a very realistic fake video that could undermine our elections, that could throw our country into tremendous crisis internally, and weaken us deeply.”

How Do Deepfakes Actually Work?

Deepfakes use a machine learning technique known as a “generative adversarial network”, or GAN. Graduate student Ian Goodfellow invented GANs in 2014, initially as a way to algorithmically generate new types of data out of existing data sets. In plain English, that means a GAN can, for example, examine thousands of photos of Barack Obama, and then produce a completely new photo that resembles the photos it’s studied, but isn’t an exact copy of any of them. GANs can also be used in the same way to create new audio from existing audio, and even new text from existing text.
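The adversarial idea behind a GAN can be sketched in a few lines of code. The toy example below is an illustrative assumption, not how any real deepfake tool is built: instead of faces, the “generator” learns to produce numbers that look like they came from a hidden distribution, while the “discriminator” learns to tell real samples from fakes. The two networks are reduced to single linear functions so the whole adversarial loop fits in one short script.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a hidden distribution, N(4, 1). The generator
# never sees this directly -- only the discriminator's verdict on its fakes.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Generator g(z) = a*z + b turns random noise into fake samples.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c) scores how "real" a sample looks.
w, c = 0.1, 0.0
lr = 0.01

for step in range(5000):
    x_real = real_batch(64)
    z = rng.normal(0.0, 1.0, 64)
    x_fake = a * z + b

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean((d_real - 1) * x_real) + np.mean(d_fake * x_fake)
    grad_c = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator update: adjust (a, b) so the discriminator is fooled,
    # i.e. push D(fake) toward 1 (the "non-saturating" generator loss).
    d_fake = sigmoid(w * (a * z + b) + c)
    grad_a = np.mean((d_fake - 1) * w * z)
    grad_b = np.mean((d_fake - 1) * w)
    a -= lr * grad_a
    b -= lr * grad_b

samples = a * rng.normal(0.0, 1.0, 1000) + b
print(f"fake mean ~ {samples.mean():.2f} (real mean = 4.00)")
```

After training, the generator’s fakes cluster around the real data’s mean even though it was never shown the real data, only the discriminator’s feedback. Real deepfake systems apply the same tug-of-war with deep convolutional networks and images of faces instead of one-dimensional numbers.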

Initially, this technology was limited mainly to Artificial Intelligence (AI) research. But in 2017, a user on the social media platform Reddit started posting pornographic videos that had been digitally altered using GAN technology. His Reddit username was “Deepfakes,” and the name quickly became the generic label for material produced in this way.

Deepfakes represent a massive jump in the original technology, allowing existing video to be manipulated to show people doing and saying things they never did or said. They use GANs to exploit what’s known as “unsupervised learning,” in other words, when machine learning models teach themselves. This has positive implications in many industries – self-driving vehicles, for example, use it to help them recognise pedestrians and cyclists, and voice-activated digital assistants such as Siri and Alexa are becoming more conversational thanks to this incredible technology. But the potential for evil is worrying.

One of the main problems is that deepfakes are becoming increasingly difficult to detect. Because GANs can be trained to evade the digital forensics used to discover them, this may well be a battle we’ll struggle to win. Critics warn that because the Internet pervades almost every aspect of our lives, an inability to separate fake from real will make us reluctant to trust anything we see or hear online, threatening our faith in our political system and heralding an end to truth.

What Are The Tangible Risks?

“At the most basic level, deepfakes are lies disguised to look like truth,” says Andrea Hickerson, Director of the School of Journalism and Mass Communications at the University of South Carolina. “If we take them as truth or evidence, we can easily make false conclusions with potentially disastrous consequences. What happens if a deepfake video portrays a political leader inciting violence or panic? Might other countries be forced to act if the threat was immediate?”

In America, with elections looming in 2020, the threat of “weaponised” deepfakes being used to divide the American electorate and alter voter behaviour is very real.

So What Can We Do?

Denis Bensch, CIO of FlowCentric Technologies, believes deepfakes are next to impossible to stop, and almost as difficult to identify. There is, however, a sliver of hope in this seemingly bleak forecast.

“A business process management (BPM) system, if properly implemented and maintained, offers the best and most effective strategy to deal with this growing new threat,” he says.

Essentially, a BPM system takes human gullibility out of the equation by introducing a (hopefully) foolproof process of authentication and authorisation. Implementing such a system introduces checks and balances so that no one individual – not even the most senior manager – can order a payment without at least one additional layer of authentication.
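The checks-and-balances principle can be made concrete with a small sketch. This is a hypothetical illustration of the idea only – the names, roles and threshold are invented, and it does not represent FlowCentric’s or any vendor’s actual product: a payment may only be released once two different authorised people have signed off, so one convincing deepfake phone call to one executive is not enough.

```python
from dataclasses import dataclass, field

# Hypothetical configuration: who may approve, and how many distinct
# approvers a payment needs before funds can be released.
APPROVERS = {"ceo@example.com", "cfo@example.com", "controller@example.com"}
REQUIRED_APPROVALS = 2

@dataclass
class PaymentRequest:
    amount: float
    beneficiary: str
    approvals: set = field(default_factory=set)

    def approve(self, user: str) -> None:
        if user not in APPROVERS:
            raise PermissionError(f"{user} is not an authorised approver")
        # A set means the same person approving twice still counts once.
        self.approvals.add(user)

    @property
    def authorised(self) -> bool:
        # No single individual can release funds alone.
        return len(self.approvals) >= REQUIRED_APPROVALS

payment = PaymentRequest(243_000.0, "new-supplier-account")
payment.approve("ceo@example.com")
print(payment.authorised)   # one voice on the phone is not enough
payment.approve("controller@example.com")
print(payment.authorised)   # independent second check passed
```

The design choice that matters here is the distinct-approver set: even if a fraudster perfectly mimics the CEO’s voice, the process still routes the request to a second, independent person before any money moves.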

This may sound tedious and time-consuming, but until technologists or governments can find another way to counteract the scourge of deepfakes, solutions have to be driven at a personal and company level. We cannot afford to wait. We need an immediate push for a decisive resolution – before it’s too late.

JGL Forensic Services is a multidisciplinary team of experienced forensic accounting and investigation professionals. We strongly believe in the rule of law and the scientific method as it applies to forensic accounting and investigation. Talk to us in confidence, and let’s work together to prevent corporate corruption and fraud.