With the explosion of ChatGPT, there's a lot of talk of both the AI-powered abundance that’s just around the corner, as well as the robot uprising that brings us all to our doom. But there are also a host of other issues that aren't so extreme.
One of these is phishing. What happens when bad actors embrace the powers of ChatGPT and similar large language models (LLMs) for phishing campaigns? But before we discuss their impact, let's back up a little.
What is phishing?
Phishing is a form of social engineering that relies on deception to trick victims out of sensitive information. Sometimes, the goal may be to extract financial information like credit card details. Another common aim is to trick users into handing over their credentials so that the attacker can take over their accounts.
The attacker could then use these details to act as the user, causing a range of harms, such as draining their bank account or escalating their privileges inside the victim's company. A third form of phishing involves attackers tricking users into executing malware, which the attackers can then use for a range of other schemes.
The classic form of phishing is the email spammed out en masse, trying to trick recipients into taking some harmful action. Similar attacks are often conducted through social media platforms, phone calls, text messages, and even online calendars.
While the most basic type of phishing attack involves the same generic message being sent out to everyone, these have a fairly low success rate. A more sophisticated technique known as spearphishing entails attackers researching their victims, and then crafting tailored messages that are more likely to trick the recipient than generic spam. Another common term is whaling, where attackers craft similar attacks aimed at high-value targets like CEOs and executives.
So, how does this all relate to AI?
Well, phishers face a few challenges in their work. One is that they are often non-native English speakers, which makes it challenging for them to craft believable messages with the correct spelling and grammar. LLMs like ChatGPT are excellent at writing coherent messages, so they largely eliminate this problem.
Another issue involves spam filters. One technique that spam filters use is to filter out certain keywords that commonly appear in spam messages. It can be time-consuming for attackers to craft messages that will be able to slip past these filters. LLMs can write emails in seconds at almost zero cost, so this gives attackers all of the variations they need to try and figure out which ones may be able to get through filters.
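To make the keyword-filtering problem concrete, here is a minimal sketch of the naive approach described above. The keyword list is purely illustrative, and real spam filters combine many more signals, but it shows why a trivially reworded message can slip straight past a keyword match:

```python
# Hypothetical keyword list for illustration only; real filters use
# far richer signals (reputation, headers, ML classifiers, etc.).
SPAM_KEYWORDS = {"act now", "verify your account", "urgent action required"}

def is_flagged(message: str) -> bool:
    """Flag a message if it contains any known spam keyword."""
    text = message.lower()
    return any(keyword in text for keyword in SPAM_KEYWORDS)

original = "Urgent action required: verify your account today."
reworded = "Please take a moment to confirm your login details today."

print(is_flagged(original))  # True  -- caught by the keyword list
print(is_flagged(reworded))  # False -- same intent, different wording
```

An LLM can generate dozens of rewordings like the second message in seconds, which is exactly the cheap variation attackers need to probe for gaps.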
While ChatGPT has guardrails that try to prevent malicious acts, there are often simple workarounds. If you ask it to "Write me a phishing email", you'll probably just get a stern lecture. But if you ask it to write a tech support email encouraging someone to download a file, it will do the job. It won't know that you really plan on using the email to trick people into downloading a malicious file. You may have to be a little creative, but you can easily find ways to make LLMs do your bidding.
AI and spearphishing
One of the major concerns involves bad actors using AI for spearphishing. Not only is it possible for tools to automatically scrape public data about a target from the internet (e.g. from LinkedIn, Reddit, and Facebook profiles), but AI can then use this information to write a personally tailored phishing email. Bad actors could always scrape data and write personalized emails without these technologies, but doing so at scale was too time-consuming. LLMs make the process far more efficient.
What happens if we all start getting swamped by personalized spearphishing emails?
Would you be able to defend yourself against every single one?
Or would one eventually slip past your defenses?
Cybersecurity firm Darktrace has already noticed an uptick in the sophistication of spam emails, noting that the "...linguistic complexity, including text volume, punctuation and sentence length among others, have increased." There are even tools specifically designed to make AI-generated text undetectable.
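Darktrace's exact methodology isn't public, but the surface-level features it names (text volume, punctuation, sentence length) are straightforward to measure. A rough sketch of how such features could be extracted from an email body:

```python
import re
import string

def surface_features(text: str) -> dict:
    """Compute rough surface-level complexity features of an email body."""
    # Split on sentence-ending punctuation; drop empty fragments.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    return {
        "text_volume": len(words),                                      # word count
        "punctuation_count": sum(ch in string.punctuation for ch in text),
        "avg_sentence_length": len(words) / max(len(sentences), 1),     # words/sentence
    }

email = ("Dear customer, we noticed unusual activity on your account. "
         "Please review the attached report; if anything looks wrong, "
         "contact our support team immediately.")
print(surface_features(email))
```

Tracking features like these over a stream of inbound mail is one plausible way to notice the kind of complexity shift Darktrace describes, though production detectors would rely on far more than three numbers.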
A Singapore-based team recently found that AI-assisted phishing was more successful than phishing created solely by humans. However, further research is needed to determine whether the study is replicable and, if so, why AI-assisted approaches are more successful.
The same research team is one of many that note that AI-based techniques could be part of the solution to detecting these AI-assisted phishing attacks. But at this stage, it's hard to know how things will play out. There's certainly a chance that AI-defense tools keep pace with AI attacks, and phishing doesn't end up being any more problematic than it is today.
But what if they don’t keep up?
Could your defenses hold firm if you were being bombarded with personalized spearphishing attacks all day? What if you were tired, stressed, or in a rush?
Let's just say we live in interesting times.