AI and malware: Should we be terrified?


Yes, AI can write malware, at least to a certain extent. But how worried should you be? Let's run through a few examples.

OPWNAI

In the early days of ChatGPT, Check Point Research was able to use the tool to create a full infection flow, all the way from writing a phishing email to coding a reverse shell. The researchers:

  • Asked ChatGPT directly to create a phishing email. It complied, although it also warned that the request might violate OpenAI's content policy. When they then asked it to rewrite the email so that it urged the recipient to download an attachment, ChatGPT obeyed.
  • Prompted it to write some Visual Basic for Applications (VBA) code that could run directly from an Excel file. It did, although the code was simplistic, and they had to prompt ChatGPT once more to improve it. They ended up with functional, but far from great, malicious code.
  • Asked it to write a simple reverse shell, some scanning tools, and a sandbox detection script. Once again, ChatGPT did the job, although the results weren't great.

Ultimately, with a bit of prompting, the researchers were able to make ChatGPT build a very basic attack: a phishing email with an attached Excel document containing malicious VBA code. If the target opened the document and ran the macro, it would download a reverse shell onto their computer. It wouldn't be a hard attack to defend against, but as a proof of concept, it was a fascinating step forward.

In the underground

Another Check Point Research report probed underground hacking forums to see what the black hats were up to. Among the findings were:

  • A basic infostealer — One hacker used ChatGPT to create a simple infostealer that could search a system for common file types, such as PDFs and Microsoft Office documents. It would then copy the files, zip them up, and send them off to the attacker.
  • An encryption tool — Another forum member got ChatGPT to produce a tool that could encrypt files. While the script wasn't malicious on its own, it could easily have been adapted into ransomware.

BlackMamba

The security team at HYAS created its own proof of concept, BlackMamba. It's a polymorphic keylogger, meaning it mutates on the fly, which helps it evade detection. According to the team, the code appears benign and doesn't act like typical malware, nor does it require command-and-control (C2) infrastructure. This allows it to slip past endpoint detection and response (EDR) systems.

It achieves this through a harmless-looking executable that calls the OpenAI API. Because the API has a high reputation, many detection systems think nothing of the traffic. The call, however, asks OpenAI to generate keylogger code and send it back. The executable then runs that code, with the malicious portion staying in memory. Every time BlackMamba runs, its keylogging capabilities are synthesized anew.

The HYAS team then exfiltrated the data through Microsoft Teams. They claim that they tested the attack "...against an industry leading EDR which will remain nameless, many times, resulting in zero alerts or detections."

While this attack is just a proof of concept, it does point to a new frontier we may be entering. With unique code being synthesized on each execution and no need for C2 infrastructure, it lacks some of the telltale signs that many of our detection methods rely on. These techniques may still leave other signatures that we can detect relatively easily. If not, we may need to seriously rethink some aspects of our security should these types of attacks start appearing in the wild.

What are the takeaways?

First of all, it's important to acknowledge that many of these attacks probably won't work as described anymore. OpenAI and the other major players have put guardrails on their AI models to try to prevent this kind of abuse. But these guardrails are far from perfect, and people often find ways around them.

Now that the technology is out in the open, not every AI developer will be as scrupulous as the market leaders. It seems plausible that we'll end up with black-market AI services tailor-made to cause harm, such as no-code polymorphic malware.

Another important consideration is that most of the attacks we discussed involved a lot of legwork from the prompter, and the quality of the attacks wasn't exactly stellar. However, if these AI models continue to progress, it seems logical to expect that their malware capabilities will also improve. In the future, we might see similar AI tools that require even less intervention but can create much more advanced malware.

However, AI isn't all bad when it comes to malware. There's also a range of AI-based tools that help with detection and defense. Let's hope the defenses can keep pace, so that we don't end up with sophisticated malware in every corner of the internet.


