Imagine being on a video conference call with your company’s CFO, receiving instructions to transfer nearly USD 25 million, only to discover after completing the task that it was all a scam…
I hear you thinking: That wouldn’t happen to me!
But, here’s the thing: why wouldn’t it?
It happened to a highly educated employee of Arup, a British engineering company, in 2024. Scammers sent the employee an email inviting them to a conference call about “secret” transactions that needed to be completed. The employee suspected the email might be a phishing attempt but joined the call anyway. During the call, the scammers used deepfake technology to impersonate the CFO and other employees so convincingly that the employee let go of their doubts and transferred the money.
Deepfake technology is improving quickly, and the risk of falling victim to deepfake scams is rising just as fast. In a 2024 survey by Deloitte, almost one in four executives said their company had experienced one or more attacks on financial data using deepfakes. The cost of this AI-driven fraud is also significant: Deloitte’s Center for Financial Services predicts that in the US, losses could reach up to USD 40 billion by 2027.
Mark Read, CEO of British marketing agency WPP, had a close call with scammers using AI. The fraudsters set up a WhatsApp account to arrange a Microsoft Teams meeting with a fake “leader” from WPP to steal money and personal information. By cloning Read’s voice and manipulating YouTube footage of him, they impersonated him during the meeting. Luckily, WPP noticed the scam in time, showing that, in a world full of cybercriminals, staying alert is key.
But what does being alert really mean? The obvious answer is using the deepfake-detection products available in the market. But we also need to strengthen the first line of defence: the employee who reads the phishing email or joins the deepfaked conference call.
The Arup employee had a gut feeling that the email invitation was suspicious, and their instincts were right. Employees should be encouraged to trust these suspicions. Verifying emails, videos, or phone calls may take time and resources, but it could save millions. Creating a workplace where employees feel comfortable voicing concerns about fraud is a simple but effective defence.
Another solution is training employees to recognise the signs of deepfakes. Researchers from Northwestern University have created a free guide to help people spot common mistakes and inconsistencies in AI-generated images (https://arxiv.org/abs/2406.08651). They also developed the Detect Fakes website, where users can practice spotting real and fake images (https://detectfakes.kellogg.northwestern.edu/). Through awareness and practice, employees can develop their intuition, adding an extra layer of protection against fraud.
Let’s not forget the importance of a well-defined, regularly enforced company policy on information security. Consider the shock when classified files were found in US President Donald Trump’s bathroom at Mar-a-Lago. This was a clear example of poor oversight regarding sensitive information. It’s not enough to assume that employees will always handle company information properly. Clear guidelines, regular reminders, and best practices in information security are essential.
It’s easy to believe something like that could never happen to you, but these scams are getting more sophisticated by the day. The truth is, cybercriminals are increasingly using AI to trick even the most cautious of us. And if you think you’re immune, remember: even the best can fall victim to these tricks if they’re not prepared.
As deepfake-detection tools continue to improve, scammers will try to stay one step ahead. This means companies need to stay ahead too. By combining the latest deepfake detection technology with continuous employee training, businesses can better protect themselves from the growing threat of cybercrime.