Can AI tools keep us safe against …AI?

If you've been paying close attention to your inbox, you've probably seen a few emails from us about the potential dangers that developments in AI could bring to the cyber-landscape. Things like:


  • AI-powered phishing
  • Deepfakes and AI-generated voices for social engineering
  • AI-powered bots circumventing CAPTCHAs
  • AI-assisted malware

If AI technologies continue to develop at such a rapid pace, and if we also start seeing much wider implementations, then we could also expect a significant shakeup of the security ecosystem. But it may not be all doom and gloom. While there could be a lot of downsides and a range of new and high-powered attacks, there is also tremendous potential for AI to help us with our defenses.

AI-based technologies are nothing new in the cybersecurity world. One of their most important roles is automated threat detection and response. Another is in security operations that are led by human analysts but augmented by machine learning insights.

Some of the benefits of AI include:

  • It’s able to analyze large volumes of threat data — Machine learning can trawl through enormous volumes of data and give security teams threat intelligence in near real time. Humans simply can't sort through this data manually and extract insights so quickly. The results of this analysis help security teams funnel resources to where they are needed, based on the current threat environment.
  • It’s able to learn — Human analysts can review the alerts these tools raise and label them accordingly. Labeling false positives helps train the models to avoid producing them in the future, which makes them more accurate over time and cuts down on the hours analysts spend reviewing noise (see the sketch just after this list). These tools can also learn on their own, updating their enforcement practices over time and blocking new threats based on inferences they have made in the past.
  • It can help with task automation — A lot of security work is fairly repetitive and boring. Machine learning tools can help to automate this, freeing up personnel to work on more important matters.
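
To make the first two points more concrete, here is a deliberately tiny sketch of that feedback loop: a model is trained on alerts that analysts have already labeled, then scores new alerts before a human looks at them. The features, numbers and library choice (scikit-learn) are our own assumptions for illustration, not how any particular product works.

```python
# A toy illustration of the analyst feedback loop: alerts that analysts labeled
# as false positives (0) or genuine threats (1) become training data, so the
# model gets better at triaging similar alerts in the future.
from sklearn.ensemble import RandomForestClassifier
import numpy as np

# Hypothetical alert features: [bytes transferred, failed logins, off-hours flag]
labeled_alerts = np.array([
    [500,     0, 0],   # routine traffic, marked as a false positive
    [120000,  9, 1],   # large off-hours transfer after failed logins, real threat
    [800,     1, 0],
    [95000,  12, 1],
])
analyst_verdicts = np.array([0, 1, 0, 1])  # 0 = false positive, 1 = genuine threat

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(labeled_alerts, analyst_verdicts)

# A new alert comes in and the model scores it before a human ever sees it
new_alert = np.array([[110000, 7, 1]])
print(model.predict_proba(new_alert))  # [P(false positive), P(genuine threat)]
```

Real products work with far richer telemetry and far more sophisticated models, but the loop is the same: analyst verdicts flow back into the training data, and accuracy improves with every labeling pass.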

Some popular security tools that use AI technologies include:

  • CrowdStrike's Falcon Insight XDR (Extended Detection and Response) — An AI-powered endpoint detection and response tool.
  • Splunk's User Behavior Analytics (UBA) — A tool that uses machine learning to detect suspicious and anomalous user behavior (a simplified sketch of the general idea follows below this list).
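
We obviously can't show Splunk's proprietary internals, but the general idea behind behavior analytics is simple: learn what normal activity looks like for a user, then flag sessions that deviate sharply from that baseline. The toy sketch below uses an unsupervised anomaly detector (scikit-learn's IsolationForest) and made-up session features purely for illustration.

```python
# A simplified sketch of the idea behind user behavior analytics, not how
# Splunk UBA actually works: learn a baseline of "normal" sessions, then
# flag sessions that look nothing like it.
from sklearn.ensemble import IsolationForest
import numpy as np

# Hypothetical per-session features: [logins per hour, distinct countries, MB downloaded]
normal_sessions = np.array([
    [2, 1, 15],
    [3, 1, 22],
    [1, 1, 8],
    [2, 1, 30],
    [4, 1, 18],
])

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_sessions)

# A session with a burst of logins from several countries and a huge download
suspicious_session = np.array([[40, 6, 5000]])
print(detector.predict(suspicious_session))  # -1 means flagged as anomalous
```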

If developments in AI allow attackers to launch more sophisticated attacks, then it seems likely that we will also continue to see progress in these existing defensive tools. Hopefully they can keep up with any advances on the attackers' side of the equation.

Will we have a new wave of security tools?

There are already new types of tools in development, powered by the recent advances in AI. One example is Microsoft's Security Copilot, which combines OpenAI's GPT-4 with a security-specific model from Microsoft.

At the time of writing, Security Copilot is only available in preview, so we haven't been able to try it out; all we have to go on is Microsoft's promotional materials. While these should be viewed with a heavy dose of skepticism, Microsoft claims that Security Copilot can take a wide range of prompts and generate responses drawing on internal and external sources.

You can type in questions like:

  • How can I improve my organization’s security posture?
  • What are the latest security incidents?

It gives you back tailor-made responses.

You can also drop in files, code snippets and URLs and ask Security Copilot about them. You can even ask it for alerts and information from your other security tools. All of this is auditable.
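
Microsoft hasn't published how Security Copilot is wired together, but because it sits on top of GPT-4, the basic interaction pattern looks much like prompting any GPT-4-backed assistant. The sketch below uses OpenAI's public Python client purely as an illustration; it is not Security Copilot's API, and the system prompt is our own invention.

```python
# Illustration only: prompting a GPT-4-backed assistant via OpenAI's public
# Python client. This is not Security Copilot's API or Microsoft's integration.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a cybersecurity assistant."},
        {"role": "user", "content": "How can I improve my organization's security posture?"},
    ],
)

# Whatever comes back still needs review by a skilled human before anyone acts on it
print(response.choices[0].message.content)
```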

While Security Copilot can give you useful responses to your queries and help to speed up tasks, it is built on top of GPT-4, and it comes with some of the same problems.

If you haven’t tried GPT-4 yourself, it can best be described as an occasionally brilliant intern who went off their meds. It’s faster than us, writes better than most of us, and knows things that we’d only expect experts to know. But it’s also a purveyor of BS and a master hallucinator. You can ask it to write you an essay on the French Revolution and you have equal chances of getting an A or being sent to the school counselor’s office.

GPT-4 is helpful, but it’s not reliable. You have to carefully scrutinize its output to determine whether or not it is actually correct, which means that you basically have to be an expert anyway.

This lack of reliability is a serious concern when deploying these tools in the security arena. When a simple mistake can turn into a multi-million dollar data breach, would you trust the hallucinating intern? One moment it could tell you to ensure every account has two-factor authentication. The next, it could tell you to tweet out the admin password. Something so unreliable sounds like a security nightmare.

At this stage, tools like Microsoft Security Copilot seem like they could help to speed up a lot of security work, but they need to be watched with eagle eyes, and all of their outputs need to be verified by skilled security professionals.

But over time, new tools will emerge, and they may become more accurate and powerful. Let’s see where they take us.
