You get a call from your boss. She seems mad.
"I need you to hurry up and transfer those funds to the McLaren account. There's a hold up. They said they won't move forward on the project until they get the money. Why do you always take so long to do your job?"
"S-sorry," you stammer, "I was waiting 'til Tuesday, like we scheduled."
"Just get it done," she snaps, "Oh, and by the way, they switched banks, so their account number has changed. Do you have a pen?"
She gives you the account number. "Just get it transferred already. You're on thin ice."
The call overwhelms you. You don't like being yelled at and you're scared to lose your job, so you make the transfer. You don't even take the time to think about how strange the request was. You don’t get the opportunity to wonder why the schedule was changed, or why the account was different. You just do what you were told because you’re terrified of being fired.
Unfortunately, things weren’t quite as they seemed.
It turns out that it wasn’t actually your boss, it was a deepfake—an attacker using AI to copy your boss's voice. But it’s too late now, you’ve already sent the money to the attacker's bank account. Guess you'll have to face your boss's wrath after all.
What is a deepfake?
According to the Department of Homeland Security, deepfakes are "an emergent type of threat falling under the greater and more pervasive umbrella of synthetic media, [which] utilize a form of artificial intelligence/machine learning (AI/ML) to create believable, realistic videos, pictures, audio, and text of events which never happened."
This means that deepfakes go beyond just AI-generated or modified videos, and can even include things like voice and video calls. The technology has a wide range of legitimate uses, including art and satire. However, deepfakes also have significant potential for harm, including spreading disinformation, creating fake pornography, and running scams.
Deepfakes are already being used for social engineering
You’ve probably already seen a bunch of benign deepfakes, such as Jordan Peele’s deepfake of Obama, or the Pope looking extra hip in a puffy coat. But bad actors are also taking advantage of the technology:
- The CEO of an energy firm got a call from his "boss" at the firm's parent company. The "boss" asked him to transfer $243,000 to a Hungarian supplier, so the CEO did as he was told. It turned out that the "boss" was actually a cybercriminal using AI to mimic the real boss's voice, tricking the CEO into sending the funds to the attacker's own account.
- In 2020, the branch manager overseeing the account of an Emirati company received a call alongside a series of emails. They appeared to be from the company's director, and they instructed the branch manager to transfer $35 million to a series of accounts, purportedly to acquire a business. It turned out that it wasn't the director on the other end of the line. It was an attacker using deepfake technology to impersonate the director's voice, and all of that money went straight into accounts controlled by the attackers.
Deepfakes, catfishing and sexpionage
As this technology progresses, catfishing (creating a fake online identity for the purposes of manipulating someone) and sexpionage (espionage that uses sex or seduction as a technique for manipulation) are also likely to become more serious issues.
Just imagine the potential harm that a spy or a scammer could cause with this type of technology. Attackers could elicit lewd videos from someone and then use them for extortion. They could make a victim fall in love and then trick them into handing over company secrets. Realistic deepfakes have a lot of potential for harm in this domain.
Deepfakes and identity verification
Certain platforms rely on user images or videos to verify their identities, and hackers are already trying to use deepfake technology to circumvent this. A Trend Micro report found hacker discussions of how deepfakes could be used to bypass Binance's user verification tools. Had the attackers succeeded, they could have taken over accounts and drained the funds. As deepfake technologies improve, we will have to seriously rethink these methods of verification, because they will become increasingly vulnerable.
How can we defend against deepfakes?
Deepfakes pose a serious threat to society, through social engineering and beyond. Mitigating their impact will require a multi-pronged effort including:
- New legislation that punishes harmful use of deepfake technology.
- Education on how to spot deepfakes.
- Deployment of deepfake detection tools.
- New platforms that help people identify the source and veracity of information.
- Blocking and removing known deepfakes.
- Establishing new verification processes.
How can you spot deepfakes?
Some common deepfake clues for images and video include:
- Blurring of the face.
- Skin tone differences around the edge of the face.
- Double eyebrows, double chins, or faces with double edges.
- Blurriness of the face when it gets partially obscured.
- Some sections of the video appearing to be lower quality.
- Cropped effects or box-like shapes surrounding the eyes, neck or mouth.
- Unnatural movements and blinking.
- Unusual changes to the lighting or background.
- Inconsistencies between the background and foreground.
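Some of the visual clues above can even be estimated programmatically. For instance, facial blurring can be flagged with a variance-of-the-Laplacian sharpness score, a common heuristic in image forensics. Below is a toy sketch in pure Python; the function name and the nested-list image format are our own illustrative choices, and real detectors rely on far more robust features:

```python
# Toy sharpness score: variance of the Laplacian over a grayscale image.
# Lower scores mean blurrier; a suspiciously low score on a face region
# (relative to the rest of the frame) can hint at deepfake blurring.
# Illustrative only -- production tools use far more sophisticated models.

def laplacian_variance(pixels):
    """pixels: 2D list of grayscale values (0-255)."""
    h, w = len(pixels), len(pixels[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbor discrete Laplacian at (x, y)
            lap = (4 * pixels[y][x]
                   - pixels[y - 1][x] - pixels[y + 1][x]
                   - pixels[y][x - 1] - pixels[y][x + 1])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A sharp checkerboard pattern scores high; a smooth gradient scores low.
sharp = [[255 * ((x + y) % 2) for x in range(8)] for y in range(8)]
smooth = [[(x + y) * 16 for x in range(8)] for y in range(8)]
print(laplacian_variance(sharp) > laplacian_variance(smooth))  # True
```

In practice you would run a score like this on the detected face region and compare it against the surrounding frame, since a face that is consistently blurrier than its background is one of the clues listed above.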
For deepfake audio, some telltale signs are:
- Varying vocal tones.
- Choppy phrasing.
- Phrasing or speech that differs from the speaker's normal mannerisms.
- Background sounds that are inconsistent with the speaker’s alleged location.
While most current deepfakes will probably feature at least some of the above clues, it’s important to recognize that the underlying technology is quickly progressing. As it improves, many of these clues will be absent, and deepfakes will become much harder to spot.
Can deepfake detection tools save us?
Deepfake detection tools, software that analyzes media for the telltale artifacts of synthesis, are another important means of combating deepfakes.
While tools of this nature will be helpful for detecting deepfakes, defensive techniques will be in a constant arms race against the latest deepfake technologies. It seems likely that these tools will be able to do a decent job of picking up the bulk of deepfakes, but they may be limited in their effectiveness against cutting-edge deepfakes.
New verification measures
In both of the real-world deepfake examples that we described above, attackers convinced the victims to transfer money over the phone. There does not appear to have been any additional verification process prior to the transfers. The “bosses” told the victims to transfer the money, so they did it.
The attacks would have been far more difficult to pull off if there was an additional layer of verification in place before funds could be transferred. This could be as simple as the two-factor verification that we should all be using on our bank accounts.
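To make that idea concrete, here is a minimal sketch of what an out-of-band approval check could look like: the finance system and a designated authorizer share a secret, and every transfer request must carry a short code computed over the exact transfer details. The function names, secret, and account numbers are our own illustrative assumptions, and a real deployment would use an established mechanism such as TOTP, hardware keys, or a payment-approval workflow:

```python
# Sketch of an out-of-band transfer check: a voice on the phone alone,
# deepfaked or not, cannot produce a valid approval code, because the
# code is bound to the exact account and amount.
# Illustrative only -- not a substitute for a real approval system.
import hmac
import hashlib

SHARED_SECRET = b"rotate-me-regularly"  # provisioned out of band

def approval_code(account: str, amount_cents: int) -> str:
    """Short code the authorizer generates for one specific transfer."""
    msg = f"{account}:{amount_cents}".encode()
    return hmac.new(SHARED_SECRET, msg, hashlib.sha256).hexdigest()[:8]

def approve_transfer(account: str, amount_cents: int, code: str) -> bool:
    """The finance system recomputes the code and compares in constant time."""
    expected = approval_code(account, amount_cents)
    return hmac.compare_digest(expected, code)

# A code issued for one account is useless if the attacker swaps in
# a different account number -- exactly the scam in the stories above.
code = approval_code("ACCT-001", 24_300_000)
print(approve_transfer("ACCT-001", 24_300_000, code))   # True
print(approve_transfer("ACCT-EVIL", 24_300_000, code))  # False
```

The key design point is that approval is tied to the transfer details through a second channel, so changing the destination account, as the attackers did in both stories above, invalidates the code.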
As deepfakes improve, they have the potential to significantly shake up our security ecosystem. We will have to stay vigilant and be ready to deploy new defensive measures if we want to keep our systems secure.