In the Age of Deepfakes, Basic Security Still Matters
AI-enabled social engineering attacks require a new level of vigilance as bad actors deploy increasingly sophisticated fakery
“AI is fundamentally transforming financial fraud—making sophisticated scams more accessible, scalable, and convincing than at any point in history.” — Perry Carpenter, Chief Human Risk Management Strategist, KnowBe4, in prepared remarks to the SEC Investor Advisory Committee
Scams aren’t new, but the tools for both perpetrating them and guarding against them are new and incredibly advanced. Deepfake-enabled hacking is here, and the enterprise must simultaneously get back to basics and deploy leading-edge technologies to protect assets.
At one time, training employees to avoid scams meant teaching them to scrutinize email addresses, avoid plugging unknown USB drives into their computers, and look closely at the grammar and structure of messages. Today, the voices and likenesses of real people can be duplicated so faithfully that even friends and close acquaintances can be fooled.
The conclusions of a pair of recent reports are startling:
- Risk management firm Riskonnect found that more than 80% of companies did not have protocols in place to defend against AI-based attacks in 2024.
- At the same time, voice authentication experts Pindrop found that deepfake fraud attempts rose by more than 1,300% in 2024.
Fake Video Call Costs $25 Million
Reasons to address this thorny problem have continued to mount, particularly after some high-profile instances of fraud brought the risks into stark relief. One especially disturbing incident occurred at the engineering firm Arup, which lost $25 million to deepfake fraudsters. The elaborate scam was the first known case of criminals using customized deepfakes to stage an entire group video call. Authorities believe the bad actors used publicly available videos, images and audio recordings to create the fakes.
The scam began when an employee in Hong Kong received what he initially suspected was a phishing message. It appeared to come from Arup’s UK office and said the employee needed to carry out a secret transaction. The employee agreed to the transaction only after taking part in a video call that appeared to include the CFO, along with other company officials the employee recognized.
After the call seemingly confirmed the legitimacy of the request, the employee wired more than $25 million to five different banks across 15 separate transfers. Only then was the fraud discovered. Although the financial loss was significant, this wasn’t a cyberattack, according to Arup’s chief information officer, Rob Greig.
At the World Economic Forum, Greig pointed out that no data was compromised and that the scammers never gained access to the company’s systems. For that reason, he said, the incident was “technology-enhanced social engineering” rather than a cyberattack.
Psychology is a crucial element in such attacks, he noted. “Audio and visual cues are very important to us as humans and these technologies are playing on that,” Greig explained.
Back to Basics
Although deepfakes and other advanced forms of fraud are increasing, it’s crucial to make sure employees understand the basics. “People are smarter than they were 20 years ago, but there’s still that bottom 50% that’s all about education,” says Jillian Kossman, operations officer at IDScan.net, a company that specializes in detecting and preventing ID fraud.
It may seem hard to believe, but criminals still try tricks like using tape to attach a photo to a fake ID and presenting it to make a withdrawal at a bank—and sometimes it works. That’s where the education element comes in. Kossman says we are now facing “ultra-sophisticated organized crime rings that create all these deepfakes, with an expert creating the images and someone else specializing in IP addresses, and at the same time, the ‘dummy fraud.’” Both ends of the spectrum need to be covered by one platform, she says.
Kossman offers the example of opening a bank account using your phone. The bank will want to capture your ID and do a face match to confirm that you’re who you say you are. “That’s where a bad actor could inject malicious code that has a deepfake of your face,” says Kossman. “The barrier to creating a deepfake is not that high.”
The danger is real for both the company and the person whose ID has been hijacked. The scammers could open an account in the person’s name and take out loans and ruin their credit. Or, they might wait for the right opportunity to strike, deploying a cutting-edge fraud scheme at a later date. “They can just sit on that account for years and use it for some other type of fraud five years later,” Kossman warns.
Kossman views fraud prevention as a collective effort. “Everyone benefits from the rising tide of fraud prevention,” she says.
AI: The Problem … and Solution
Deepfakes and other advanced forms of fraud are possible, in part, because of AI. Somewhat ironically, AI is also one of the most effective tools for preventing fraud. Bad actors have access to tools that generate AI photos and convincing fake IDs capable of bypassing security checks. At the same time, fraud prevention depends on analyzing millions, or even billions, of transactions and images to find anomalies using pattern recognition and machine learning.
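The anomaly-finding idea is simple to illustrate. Below is a minimal, hypothetical sketch (not any vendor's actual system, which would use far richer models and features) of flagging transactions whose amounts deviate sharply from an account's normal pattern, using a basic z-score test:

```python
import statistics

def flag_anomalies(amounts, threshold=2.5):
    """Return transaction amounts that deviate sharply from the norm.

    A transaction is flagged when its z-score (distance from the mean,
    measured in sample standard deviations) exceeds the threshold.
    """
    mean = statistics.fmean(amounts)
    stdev = statistics.stdev(amounts)
    return [a for a in amounts if abs(a - mean) / stdev > threshold]

# Routine payments with one outsized wire transfer mixed in.
payments = [120, 95, 130, 110, 105, 98, 115, 102, 25_000]
print(flag_anomalies(payments))  # only the 25,000 transfer is flagged
```

Production systems replace this single statistic with machine-learning models trained on many signals (device, location, timing, counterparty history), but the underlying logic is the same: learn what normal looks like, then surface deviations for review.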
Even in the age of advanced fraud techniques, employees need training in the most basic security practices; along the same lines, Kossman recommends that companies “don’t leave easy doors open for threats.” Basic measures, like not allowing image capture and not accepting paper documents, are still important.
Her second recommendation is to “let the technology do its job, and do your best to make sure the inputs into the technology allow you to catch fraud.” For instance, if a customer is wearing sunglasses and a hat in a photo, facial recognition software can’t do what the enterprise pays for it to do. Printed, temporary paper IDs can’t be verified.
Additional tips include:
- Completing a robust risk assessment
- Setting up systems for access and approval
- Verifying documents
- Creating processes for collaboration and information sharing
- Providing training for employees
Nearly every expert, including Kossman, believes that identity fraud is far more common than anyone realizes. “You need to be constantly aware,” she says, “and vetting every individual.”