The NSA, FBI and CISA’s guide to deepfake defense


We warned you about the looming threat of AI-generated deepfakes back in May. Since then, the NSA, the FBI and the Cybersecurity and Infrastructure Security Agency (CISA) have released a joint factsheet on the issue. The publication covers the technologies involved, the threats they pose to organizations, and the tools and techniques that can be used to mitigate those threats.

The document describes deepfakes as:

Multimedia that have either been created (fully synthetic) or edited (partially synthetic) using some form of machine/deep learning (artificial intelligence)...

Deepfakes are becoming much easier to produce amid the recent rise of various AI technologies. It is now far more practical to create fake videos and audio recordings that imitate real individuals, some of these tools are increasingly capable of operating in real time, and large language models (LLMs) can readily generate text that imitates an individual’s writing style.

These developments pose two major problems for organizations:

  • Deepfakes could be used to impersonate key individuals in a way that outwardly harms the organization – A good example is a deepfake of Ukrainian President Zelensky that appeared to show him surrendering. An attacker could use similar techniques to create a video of a CEO proudly announcing their company’s plans to napalm the Amazon. It’s a hyperbolic scenario, but videos of this nature could harm a company’s public reputation and crash their stock price, as well as result in a range of other negative effects.
  • Deepfakes could be used to penetrate an organization or trick employees – An attacker could use deepfake technology to impersonate an employee’s coworker on a call. If the audio is convincing, it may not be difficult to trick the employee into granting the attacker access to private information that only the actual coworker is authorized to access. Similarly, an attacker could use deepfakes to impersonate an executive, ordering a lower-level employee to make a seemingly legitimate transfer into the attacker’s bank account.

As these technologies progress, it will likely become even easier to create realistic impersonations of individuals and use them to harm organizations in a variety of ways.

Deepfake authentication

The factsheet from the NSA, the FBI and CISA highlights authentication and detection as the two major ways to combat deepfakes. It states that “…authentication methods are active forensic techniques that are purposely embedded at the time of capture or time of edit of the media in question.”

In other words, if a company wanted to use authentication to protect itself from deepfake impersonation, all of its legitimate media would need to be embedded with some kind of authentication mechanism at the time of creation, such as a watermark or a digital signature in the metadata. If media is marked with these mechanisms, people can verify its provenance. However, this only allows us to verify media that was authenticated upon creation; it can’t help us verify anything that lacks the embedded data.
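To make the idea concrete, here is a minimal sketch of signature-based provenance using the Ed25519 primitives from Python’s `cryptography` library. The key handling and media bytes are illustrative assumptions, not a workflow prescribed by the factsheet:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Generated once by the organization; the private key never leaves its control.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

def sign_media(media_bytes: bytes) -> bytes:
    """Produce a signature to store alongside the media (e.g. in its metadata)
    at the time of capture or edit."""
    return private_key.sign(media_bytes)

def verify_media(media_bytes: bytes, signature: bytes) -> bool:
    """Anyone holding the organization's public key can check provenance."""
    try:
        public_key.verify(signature, media_bytes)
        return True
    except InvalidSignature:
        return False

# Signing happens at creation; verification happens wherever the media turns up.
original = b"...raw video bytes..."
sig = sign_media(original)
print(verify_media(original, sig))         # True: provenance checks out
print(verify_media(original + b"x", sig))  # False: any edit breaks the signature
```

The key point the sketch illustrates is the limitation noted above: a missing or broken signature tells you nothing about media that was never signed in the first place.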

Tech companies like Google are rolling out features that mark AI-generated images created by their own tools. However, attackers will still be able to use other software to generate unmarked images, so we will need other techniques to keep us safe.

Deepfake detection

The factsheet describes the detection approach as “…developing methods that seek evidence of manipulation and present that evidence in the form of a numerical output or a visualization to alert an analyst that the media needs further analysis.” This means that detection tools will need to be able to sniff out the typical signs of synthetic media and then alert users.

While these tools may be helpful in detecting some deepfakes, they are far from perfect. One issue is that they can turn up false positives, so if you treat every positive as a deepfake, you may end up also blocking some legitimate media.
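As a hedged illustration of that numerical-output-and-threshold idea, the sketch below shows how a detector’s score might be triaged. The `detector_score` stub and the 0.7 threshold are hypothetical placeholders, not values from the factsheet:

```python
def detector_score(media_bytes: bytes) -> float:
    """Hypothetical stand-in for a real detection tool's numeric output,
    where 0.0 looks authentic and 1.0 looks manipulated."""
    return 0.0  # stub: replace with a call to whichever detection tool you use

def triage(media_bytes: bytes, threshold: float = 0.7) -> str:
    """Turn the numeric output into an action. The 0.7 threshold is an
    arbitrary illustrative value, not a recommendation."""
    score = detector_score(media_bytes)
    if score >= threshold:
        # A high score is evidence, not proof: route to an analyst rather
        # than blocking outright, since false positives are expected.
        return f"flag for analyst review (score={score:.2f})"
    return f"no automated action (score={score:.2f})"

print(triage(b"...incoming media..."))
```

Lowering the threshold catches more deepfakes but flags more legitimate media; raising it does the opposite, which is exactly the false-positive trade-off described above.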

Another important point is that attackers can use these tools as well. If they produce a deepfake, they can run it through these detection algorithms to see whether or not it triggers a positive result. If it does, they can tweak it and then try again, continuing until they produce something that circumvents the tools.

Ultimately, there is an arms race between AI generation and AI detection tools, and it’s hard to be confident that our detection tools will be able to detect the more sophisticated attacks.

Training and policy changes

In addition to these techniques, training and policy changes will be crucial for keeping organizations safe. One important security pillar is incorporating more two-step verification into business processes. For example, if an employee is contacted by a colleague from a new number or email address, they should be trained to first verify the colleague’s identity over another channel before revealing any sensitive information. Similarly, when an executive orders someone to initiate a financial transfer, the subordinate should confirm over a separate channel that the transaction is legitimate.
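As one hedged way of wiring this into a process, the sketch below gates a transfer on an out-of-band confirmation step. The `TransferRequest` structure and the manual confirmation prompt are illustrative assumptions, not a procedure from the factsheet:

```python
from dataclasses import dataclass

@dataclass
class TransferRequest:
    requester: str           # identity claimed on the incoming call or email
    amount: float
    destination_account: str
    origin_channel: str      # channel the request arrived on, e.g. "voice call"

def confirmed_out_of_band(request: TransferRequest) -> bool:
    """Hypothetical hook: call the requester back on a number taken from the
    company directory (never from the request itself) and confirm the details."""
    answer = input(
        f"Did {request.requester} confirm sending {request.amount} to "
        f"{request.destination_account} over a known-good channel? [y/N] "
    )
    return answer.strip().lower() == "y"

def process_transfer(request: TransferRequest) -> None:
    # The transfer never proceeds on the strength of the original channel alone.
    if not confirmed_out_of_band(request):
        raise PermissionError("Transfer blocked: no out-of-band confirmation")
    print(f"Transfer of {request.amount} to {request.destination_account} approved")
```

The design choice worth noting is that the confirmation channel comes from a trusted source like the company directory, so an attacker who controls the original call or email gains nothing by faking it convincingly.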

Given that social engineering is already such a big threat to organizations, advances in deepfake technology present a huge challenge. As this technology progresses, we must remain vigilant and be willing to change our strategies to ensure that we cannot be easily misled by seemingly realistic copies.

