With elections coming up, we’ve all been warned that disinformation, and more specifically the ominously named deepfake, is going to be used to try to sway voting choices.
For many of us, the deepfake is still a fairly abstract concept. Where we’ve encountered disinformation, it’s been crudely formulated and clearly designed to play into people’s willingness to embrace lies, rather than their susceptibility to believe new truths. And deepfakes, like wars in foreign lands, are things we read about, not things we experience.
Which is why I was struck by the realisation that, actually, deepfakes aren’t just for pushing the big-issue buttons, or spreading hate speech or misogyny in general, or disseminating revenge porn to the inchoate masses of the internet. They’re also focused weapons with impact beyond ideological manipulation, and they can pose severe business risk.
This was brought home to me by a story that broke last week about someone who was conned into paying out more than $25m by the use of deepfakes. The person, employed by a multinational firm and based in Hong Kong, paid out the money in 15 transactions, after believing that he had been instructed to do so by the company’s London-based CFO in a conference call.
But this was no ordinary one-on-one conference call. It involved multiple people, who the victim believed were all members of staff. It turned out that all of them — except for the poor sap being conned — were deepfakes. According to Asia Financial, “the fraudsters, who are yet to be caught, used the company’s past meetings to create artificial intelligence-generated deepfakes of its chief financial officer and several other employees to execute the elaborate scam”.
The scammers stayed in touch with the employee after the scam via WhatsApp, e-mails and one-on-one video calls, and he only realised later, during a call with the company’s head office, that he had been scammed. I’d like to have been a robot fly on the wall for that call. Imagine the horror as it slowly dawned on everyone that they’d been scammed, and that the very thing that supposedly defines truth — “I saw it with my own eyes” — was actually the vulnerability that was exploited.
A spokesperson for the Hong Kong police said it was believed that the deepfakes were created from videos found on YouTube, with AI then used to imitate the voices and read from scripts during the video conference. You have to feel sorry for the victim here. Apparently, “on joining the meeting, the employee had a ‘moment of doubt’ but fell for the scam because everyone on the call ‘looked and sounded real’”, and looked and sounded just like colleagues he recognised.
Who among us has not felt the same thing on the endless and endlessly proliferating video calls we are subjected to nowadays — one of the tragic side effects of Covid. “Gosh, these people sound so real, and look almost human. I could almost believe this meeting has a point.”
According to a 2021 KPMG report, “many security leaders believe that home-based employees are particularly vulnerable to manipulated content attacks, creating consternation as more workers leave the office behind”.
The US Federal Bureau of Investigation (FBI, if I really need to tell you the acronym) suggests synthetic content may be used in a method of attack it calls business identity compromise, which leverages advanced content generation and manipulation techniques to create synthetic personas based on real employees. It warns that this “could have significant financial and reputational impacts to businesses and organisations”, as the $25m loss in the story above attests.
The FBI has warned that bad actors are using deepfakes to apply for a broad array of remote IT and programming positions, so that they can gain access to personal, financial and other proprietary information. “These schemes can have significant financial repercussions. Indeed, the average impact of a data breach was more than $1m higher when a remote working arrangement was a factor in the breach.”
The same KPMG report tells us that “the perpetrators behind [deepfake] schemes are looking for bigger game than individual consumers or public figures. Creative, ambitious cybercriminals with access to the latest technology have started to focus on more profitable targets — corporations, institutions and sovereigns — many of which are ill-prepared to defend against this threat.”
KPMG also gives us a handy definition. “Deepfakes are synthetic media files. They are ‘synthetic’ in that existing imagery, video or audio — typically featuring a specific individual — is manipulated and replaced with another person’s face or voice. This work is done using generative artificial intelligence-powered neural networks, also known as Generative Adversarial Networks (GANs), that process information, create patterns, and learn much like the human brain does.
“Today, widespread availability of sophisticated computing technology and the growing accessibility of AI enables virtually anyone to create highly realistic fake content. In fact, the number of deepfake videos available online is increasing by 900% annually.”
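For the technically curious, here is roughly what that adversarial tug-of-war looks like in code. This is a deliberately tiny sketch in Python using PyTorch: a generator learns to forge samples from a toy two-dimensional distribution while a discriminator learns to call its bluff. Every layer size, learning rate and the fake “data” here are illustrative assumptions for demonstration; real face-swapping systems are vastly larger, but the adversarial loop has the same shape.

```python
# Minimal GAN sketch (illustrative only): a generator and a discriminator
# trained against each other on a toy 2-D Gaussian, not on anyone's face.
import torch
import torch.nn as nn

torch.manual_seed(0)

LATENT = 8   # size of the random noise vector fed to the generator
DATA = 2     # dimensionality of the toy "real" samples

generator = nn.Sequential(
    nn.Linear(LATENT, 32), nn.ReLU(),
    nn.Linear(32, DATA),
)
discriminator = nn.Sequential(
    nn.Linear(DATA, 32), nn.ReLU(),
    nn.Linear(32, 1),    # raw logit: real vs fake
)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for genuine media: samples from a shifted Gaussian.
    return torch.randn(n, DATA) + torch.tensor([2.0, -1.0])

for step in range(2000):
    # Train the discriminator to separate real from generated samples.
    real = real_batch()
    fake = generator(torch.randn(64, LATENT)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator into saying "real".
    fake = generator(torch.randn(64, LATENT))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(generator(torch.randn(5, LATENT)))  # samples now cluster near the real data
```

The point of the loop is the arms race: the better the discriminator gets at spotting fakes, the better the generator has to become at producing them — which is precisely why the finished product eventually “looks and sounds real”.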
As KPMG sardonically puts it, “as a risk factor, deepfake content is not merely a concern for social media, dating sites and the entertainment industry — it is now a boardroom issue”.
How do companies deal with the growing risk of being scammed by the use of deepfakes? Since this is KPMG, the main thing you can do, the firm suggests, is throw money at the problem by making sure you have the right cybersecurity people on your team, and that you budget correctly for the technology and processes to mitigate risk. But one of the other measures is something called zero trust.
“With identity at its core, zero trust enables organisations to evaluate whether a user is properly authenticated; isolate the resource the user is attempting to access; determine if the request is from a trusted, stolen or third-party device; and confidently decide whether access should or should not be granted. The emergence of zero trust represents a mindset shift in which CISOs [chief information security officers] and their teams assume compromise in connection with system access, and make security decisions on the basis of identity, device, data and context.”
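In plainer terms, and at the risk of over-simplifying, a zero-trust decision looks something like the Python sketch below. The Request fields and policy rules are my own illustrative assumptions, not any vendor’s actual API; the principle is simply that access is denied by default and every signal is re-checked on every request, rather than trusting whoever appears on a screen.

```python
# A hedged sketch of a zero-trust access decision, per KPMG's description:
# assume compromise, and decide on identity, device, data and context.
# All field names and thresholds below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Request:
    user_authenticated: bool      # e.g. passed MFA, not just a password
    device_trusted: bool          # managed device, not stolen or third-party
    resource_sensitivity: str     # "low", "medium" or "high"
    context_risky: bool           # unusual location, hour or behaviour

def grant_access(req: Request) -> bool:
    """Deny by default; every signal must independently pass."""
    if not req.user_authenticated:
        return False
    if not req.device_trusted:
        return False
    # High-sensitivity actions (say, approving a payment) also require a
    # clean context -- the very check a deepfaked video call sidesteps
    # when "seeing a colleague" is treated as authentication.
    if req.resource_sensitivity == "high" and req.context_risky:
        return False
    return True

# A $25m transfer request from an authenticated user on a trusted device,
# but at an odd hour from an unusual location, would still be refused:
print(grant_access(Request(True, True, "high", context_risky=True)))  # False
```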
That sounds very efficient, but of course there’s always the huge chink in any security armour, and that’s the willingness of people to believe things that suit them. I was reading something in The New York Times about the bizarre — to me, anyway — furore that has been stoked in the US about the relationship between Taylor Swift and Kansas City Chiefs tight end Travis Kelce. Apparently, tight end isn’t an insult but a position in American football, a sport as weird as US mainstream media’s relationship to celebrities.
The article describes the “incredible weirdness of the recent theory emanating from people with some of the largest platforms in MAGA America. According to them, Taylor Swift’s extraordinary popularity isn’t the organic outcome of a talented and appealing superstar’s bond with her fans. No, according to them, Swift’s rise is an op or a psyop engineered by the deep state in order to benefit [US President Joe] Biden. A central part of the plot, of course, is Swift’s fake, deep-state-invented relationship with Kelce.”
If there are people capable of believing such drivel, then we have to believe that we’re going to have people in a corporate environment who are equally capable of being fooled by deepfakes, and indeed of wanting to be fooled. A KPMG survey of professionals at firms across more than 13 industries found that more than 80% of respondents said deepfakes pose a potential risk to their business, but only 29% said they have taken steps to protect themselves. And 46% said their organisation has no plan to mitigate the threat.
I would imagine that none of this comes as news to the South African business world, but I wonder how many of us underestimate how difficult it is to deal with deepfakes. A recent survey by global cybersecurity firm Kaspersky revealed that only 21% of South African employees were able to tell the difference between a real image of a US actor and one generated by a deepfake tool. And a firm spokesperson told News24 that “even though many employees claimed that they could spot a deepfake, our research showed that only half of them could actually do it”.
It’s the usual human propensity to overestimate our abilities — which might be the one vulnerability that no amount of money spent on cybersecurity can fully engineer away.