Why deep fakes should trigger deep thinking

We’re at an interesting point in our history when it comes to identity manipulation. And the consequences could be alarming.

Cannings Purple 7 Nov 2019
[Video: this Obama deep fake has become famous.]

I haven’t written an article before which relies so heavily on Spider-Man.

All will become clear, but let’s start with the fact that “our favourite neighbourhood webslinger” wears a mask. The mask is why newspaper editor J Jonah Jameson is so suspicious of his motives, and it feeds his assumption that Spidey must be up to no good.

That instinct is deep-seated in our society, of course. Masks and fake identities have always been a profitable tactic for criminals. Whether it was the masked robberies of the pre-surveillance era or the hands-on impersonations of Frank Abagnale, made famous in the film Catch Me If You Can, disguise is a constant thread in criminal activity.

But we’re at an interesting point in our history when it comes to identity manipulation.

2019 marks the nexus of increasing computing power, improving artificial intelligence and the ready availability of personal data. Those three elements are a potent mix, enabling some genuinely startling (and worrying) forms of identity manipulation.

You may be aware of deep fakes – the technique of taking a single 2D photograph and, using freely available AI-powered software, mapping that image onto a moving video. Early efforts were reasonably rudimentary, but these days almost any computer user with some time on their hands – no Hollywood studio required – can create convincing examples.

This is one of the best known: a (deep) fake video of “President Obama” delivering a public service announcement.

Although convincing, there is still something a little… ‘off’ about it, because the voice speaking the words is an impression of the former President, delivered by someone else’s vocal cords. On that basis, deep fake videos could be dismissed as a bit of interesting fun. But there have been, and could be, more concerning uses for this technology: extortion, revenge porn and blackmail are all conceivable outcomes of a convincing deep fake video, especially one which doesn’t require the target to be speaking.

But earlier this year, a new flavour of deep fakes emerged.

Audio deep fakes require only a few minutes of sample audio of a person (for public figures, this is easy enough to find through YouTube, podcasts, interviews and the like), from which AI maps the speech patterns, intonation and tone of their voice. The resulting files are incredibly lifelike and (currently, at least) difficult to spot.

Here’s some (entirely faked) audio of American podcast host Joe Rogan being made to say things he never said:

What’s more, the same company which developed the AI behind that recording is also working on a text-to-speech solution, which means getting someone to “say” something – anything – could be as simple as typing it into a word processor and pressing a button.

So, what does all this mean for those of us who want to communicate more effectively and efficiently? Well, the answer lies with one of my favourite and most-quoted modern-day philosophers, the aforementioned Spider-Man: “With great power comes great responsibility”.

My initial response to deep fake audio was to see the positive potential of such a wonder: how much easier it would be to correct a podcast or video recording using this technology than to set up the whole shoot again for re-recording.

Articles like this could easily be converted into podcast versions; books and media could be made more readily available to those with visual impairments; corporate communications could be produced in a multitude of formats without taking up the time of business leaders.

But, as I say, there will be those who want to use this technology for more nefarious ends.

In fact, they already are.

Earlier this year a UK executive received a call purporting to be from his CEO, requesting an urgent transfer of money. The voice was convincing, and the request was unusual but urgent. The verbal sign-off was deemed enough to proceed.

It transpired the request came from a fraudster using AI deep fake audio. The CEO never made the call, but the company lost €220,000 (around US$243,000).

Aside from the obvious interest for any business which might be targeted in this way, deep fakes open the door to other issues for those of us involved in communications. If we can no longer believe what we see and hear – if anyone can be made to say anything – how do we build trust in our genuine communications? If we have a reputational crisis which needs managing, how can we be sure the evidence of the issue is real? Could an entire election be turned by the release of a convincing, but fake, video of the candidates? And will “oh, it’s just a deep fake” become the standard response to any inconvenient recording?

The implications of this sort of technology are clear but, as usual, our structures, laws and understanding have some catching up to do if we are to mitigate the problems it could create.

If we are to allow this technology, we need to enforce the responsibility which comes with its power.

Cannings Purple Director of Digital Jamie Wilkinson is an expert on social media and planning for and managing communications during crisis situations. Email Jamie.
