Meta’s head of AI thinks that ChatGPT “…is not particularly innovative”. Others warn of a looming robot uprising. We don’t know what the future holds, but we’re security people, so that’s the lens through which we’re looking at these developments.
So, should security teams assess AI risks?
The answer is a boring “yes”, even if you don’t think much further innovation is on the horizon.
AI risks may not be something we’ve normally put much thought into, and you may consider all of the hoopla a storm in a teacup, but they do at least need to be assessed. Only after that assessment can you decide whether they need to be acted upon.
Let's use an analogy:
Should all organizations consider their flood risks?
Do all organizations need to take much action?
Companies based on a hill may consider the risk of flooding insignificant and focus their energies on mitigating other risks instead. The important thing is that flood risk is at least considered as part of the assessment, evaluated, and then determined to be insignificant.
Likewise, even if you think that AI risks are minimal, they should at least be considered as part of your organization's risk assessment.
How much could AI change the world?
Let's run through a few scenarios that should at least be entertained:
- AI models run up against technical limitations and don't progress much beyond the current state of the art. Adoption doesn't spread much further than it has today.
- World leaders come together, solve all of their differences and regulate AI, limiting its further development. Joe Biden and Xi Jinping embrace in a warm, beautiful hug and the world weeps with joy.
- Content creators win lawsuits against the major AI companies for using their data to train models without permission. The companies behind these models shut down, or are severely curtailed.
- AI models continue to advance, see wider implementation and have impacts on society that range from moderate to extremely significant. There do seem to be a lot of optimizations that could be made to the models, although there may also be limitations due to training data and hardware costs.
This is far from a comprehensive list of what could happen, but the possible risks range from fairly limited to significant.
If you evaluate AI risks and things stay more or less the same in the future, then at worst, your organization has wasted a few hours assessing scenarios that never eventuate. We spend a lot of time planning for fires, even though most buildings are never engulfed in flames, so it's hardly an irrational use of company time.
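The assessment itself doesn't need to be elaborate. As a minimal sketch of the idea (the risk names, scores, and threshold below are illustrative placeholders, not recommendations), a basic likelihood-times-impact register might look like this:

```python
# Minimal risk-register sketch: score = likelihood x impact, each on a 1-5 scale.
# All risks, scores, and the action threshold are illustrative examples only.

RISKS = [
    # (name, likelihood 1-5, impact 1-5)
    ("Employees paste sensitive data into public AI tools", 4, 3),
    ("AI-assisted phishing becomes more convincing", 4, 4),
    ("Vendor lock-in to a single AI provider", 2, 3),
    ("Building floods", 1, 5),
]

def score(likelihood: int, impact: int) -> int:
    return likelihood * impact

def triage(risks, act_threshold: int = 9):
    """Rank risks by score; flag anything at or above the threshold for action."""
    ranked = sorted(risks, key=lambda r: score(r[1], r[2]), reverse=True)
    return [(name, score(l, i), score(l, i) >= act_threshold)
            for name, l, i in ranked]

for name, s, actionable in triage(RISKS):
    print(f"{s:>2}  {'ACT' if actionable else 'monitor':7}  {name}")
```

The point isn't the arithmetic; it's that once risks are written down and ranked, the low scorers (like flooding, for the company on the hill) can be consciously set aside rather than silently ignored.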
What risks do organizations face in the current AI environment?
Some AI risks are already here:
- Are your employees entering company data into tools like ChatGPT? Could sensitive data be exposed? Do you need to block the tools, or issue a usage policy?
- Is your company information being used as training data by these models? Is your copyright being violated? Should you sic your lawyers on OpenAI?
- Have you noticed an uptick in phishing emails? What about more sophisticated spearphishing emails? How can you protect your company and your employees if this AI-assisted social engineering continues to get more convincing?
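If blocking the tools is the route you choose, one blunt option is a denylist at your egress proxy or internal DNS. As a hedged sketch only (the domains listed are illustrative and will go stale; a real deployment would lean on maintained category lists from your proxy or DNS-filtering vendor):

```python
# Toy egress-filter check: is a requested hostname on an AI-tool denylist?
# The domains below are illustrative examples, not a maintained list.

DENYLIST = {"chat.openai.com", "openai.com", "bard.google.com"}

def is_blocked(hostname: str) -> bool:
    """Block a denylisted domain and any of its subdomains."""
    hostname = hostname.lower().rstrip(".")
    return any(hostname == d or hostname.endswith("." + d) for d in DENYLIST)
```

Note the subdomain check: matching on `endswith("." + domain)` blocks `api.openai.com` without accidentally catching an unrelated domain that merely ends in the same letters.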
What AI risks could organizations face in the near future?
Other risks seem like they could be just over the horizon:
- Will deepfakes and AI-powered phishing cause a spike in successful social engineering attacks? How will your organization defend itself? Do you need new detection tools? More employee training?
- Could AI help hackers generate a larger volume of attacks by automating some aspects of malware production? Could defensive AI tools be the answer?
- Could new AI-based cybersecurity tools like Microsoft Security Copilot result in sloppy work that accidentally leads to gaping security holes? Should you use it in your organization, or stick to other methods?
There are also some fairly dull risks worth considering:
- Vendor lock-in — What happens if your organization becomes heavily reliant on ChatGPT, but all of a sudden OpenAI jacks up the price to make it unaffordable? How could your company avoid being locked in?
- Vendor lock-out — Let’s say that competitive pressures force your company to lay off a bunch of employees and replace them with ChatGPT. If ChatGPT were to be shut down by a lawsuit or government regulation, could your business recover? Would you be able to rehire all of your old employees and keep going? Or could vendor lock-out cause it to collapse?
At the far end of the spectrum, there are a bunch of even more extreme scenarios, like:
- Mass unemployment.
- Society becoming a corporate dystopia ruled by an all-powerful Microsoft.
- An artificial general intelligence that kills us all in the pursuit of maximizing paperclip production.
If you work at a small managed services firm, there’s not much you can really do if Microsoft becomes the global overlord (except maybe switch over from Zoom to Teams so that you end up in Microsoft’s good books). But there’s a lot you can do to stop employees from sending sensitive company data to OpenAI.
We aren’t trying to tell you to go overboard or become gripped with fear about an AI apocalypse. Just include some of the more realistic AI risks in your risk assessment, the same way you would include flood and fire risks. If any of them are deemed actionable, great: take whatever mitigation measures are appropriate. Then you can forget about anything that isn’t within your power to change.