This article was originally published in The Information (subscription required).
There’s much talk about superintelligent artificial intelligence that controls humans, leading to a grim dystopian future. That’s still largely hypothetical.
A more immediate AI risk is playing out under our noses: a cascading, AI-fueled cyberattack.
The 2020 SolarWinds attack revealed how pervasive, and dangerous, a cyberattack could be. Russia’s foreign intelligence service used SolarWinds’ network visibility software as a Trojan horse to infiltrate the Homeland Security, Treasury and Justice Departments, plus 96% of the Fortune 500 companies that relied on SolarWinds’ software. Microsoft President Brad Smith later called it “the largest and most sophisticated attack the world has ever seen.”
An AI version of the SolarWinds breach would be many times more destructive. Here’s why: SolarWinds’ customers are primarily technology managers who vet its software thoroughly before deploying it. In contrast, individual employees looking to boost their productivity are continually trying—and using—new generative AI apps.
That means these tools are seeping into corporate and government networks without the approval—or even the knowledge—of security teams. If one of these tools were hacked or its software update compromised, it could become the ultimate, invisible Trojan horse.
Data gathered by our firm, Nudge Security, suggests employees are rushing to adopt AI tools to supplement or replace tasks typically done by humans. At the same time, more AI tools are experiencing breaches. In the past year, 77% of companies surveyed by HiddenLayer said they had seen a breach of their AI tools. But only 14% of corporate tech leaders said their companies test for adversarial attacks on AI models.
This is a ticking time bomb, and vulnerabilities are emerging. In May, Hugging Face—a popular repository for open-source AI models—was breached by attackers who stole secrets that may have been used to modify models. Microsoft Copilot, one of the most popular AI applications, has been the target of multiple attacks. Last year, hackers gained access to OpenAI’s internal messaging systems and stole details about the design of the company’s AI technologies, The New York Times reported. It’s not clear what the hackers wanted.
But the problem is deeper and more complex. Most AI tools arriving on the market today are actually wrappers around models from companies such as OpenAI. Developers slap on features, often with incremental value, in a rush to market where data privacy and security are afterthoughts.
It’s not just new generative AI apps that are using OpenAI under the hood. One in four of the top 100 software-as-a-service apps used by Nudge customers relies in part on an AI tool to deliver its service—most often one from OpenAI or Anthropic. As SaaS providers race to market with shiny new AI-powered features, this sprawling AI supply chain becomes further entrenched in our organizations.
Each of these tools risks leaking data or, like SolarWinds, opening a door to attackers. So how can organizations more effectively govern and secure AI tools?
Traditional security monitoring tools weren’t built for the current proliferation of AI services. Most of these tools rely on lists of permitted software, but it’s simply not feasible to update the lists given how rapidly AI tools are emerging on the market. Organizations can begin by employing a modern security tool that continuously discovers and inventories what AI tools are in use, by whom and for what purpose.
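For readers who want something concrete, here is a minimal sketch, in Python, of what such a continuously updated inventory might track. The record fields and helper function are assumptions for illustration, not a description of Nudge Security’s product or any particular discovery tool.

```python
# Illustrative sketch only: the record fields and helper below are
# hypothetical, not a description of any specific discovery product.
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class AIToolRecord:
    """One entry in a continuously updated inventory of AI tools in use."""
    tool_name: str                                  # e.g. a new AI note taker
    vendor: str                                     # company behind the tool
    users: set[str] = field(default_factory=set)    # who is using it
    purpose: str = "unknown"                        # what it is used for, if known
    first_seen: datetime = field(default_factory=datetime.now)
    sanctioned: bool = False                        # reviewed and approved by security?


def record_ai_tool_usage(inventory: dict[str, AIToolRecord],
                         tool_name: str, vendor: str, user: str) -> AIToolRecord:
    """Record a newly observed sign-up or login to an AI service."""
    entry = inventory.setdefault(tool_name, AIToolRecord(tool_name, vendor))
    entry.users.add(user)
    return entry
```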
Next, organizations need to think about the risks not just of their AI services but also of software providers that use AI. Risk managers need to consider questions such as: “Does my CRM provider share my customers’ data with a third-party AI provider? Is that data used to train a public model?”
Organizations must demand that software providers supply them with an AIBOM—an artificial intelligence bill of materials that lists the AI providers used and the features they support. This will help the organization better grasp its risk and respond to breaches more quickly if one of those services’ offerings is compromised, as happened with SolarWinds.
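There is no standardized AIBOM format today, so the shape below is only a sketch, written in Python for concreteness. The field names and the example CRM entry are invented for illustration, not an industry schema or a vendor’s actual disclosure.

```python
# Hypothetical AIBOM entry: no standardized schema exists yet, so these
# fields are illustrative assumptions rather than an industry format.
from dataclasses import dataclass


@dataclass
class AIBOMEntry:
    """One upstream AI dependency declared by a software provider."""
    ai_provider: str                # e.g. "OpenAI" or "Anthropic"
    model_or_service: str           # e.g. "hosted LLM API"
    features_supported: list[str]   # which product features rely on it
    data_shared: list[str]          # categories of customer data sent upstream
    used_for_training: bool         # is shared data used to train a public model?


# What a CRM vendor's disclosure might look like (invented example):
crm_aibom = [
    AIBOMEntry(
        ai_provider="OpenAI",
        model_or_service="hosted LLM API",
        features_supported=["email drafting", "call summaries"],
        data_shared=["contact names", "email bodies"],
        used_for_training=False,
    ),
]
```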
AI has all but obliterated traditional IT governance. Employees won’t stop to file a tech support request if they’re worried that AI will replace them. In a recent Gartner survey, 74% of employees said they would bypass cybersecurity guidance if doing so achieved a business objective.
At the same time, employees want more education and guardrails for using AI safely. According to a recent Ernst & Young report, 75% of employees say they have cybersecurity concerns related to their AI use. IT and security organizations must capitalize on this, but in a way that works with employees’ AI adoption rather than against it.
The scale of the risk and the rate at which it is growing inside the enterprise demand a new approach: Call it just-in-time AI governance. Instead of burying safe AI guidance and policies in a quickly forgotten annual security awareness training session, organizations must focus on delivering guidance in small, consumable ways at the moment an employee begins to experiment with an AI service.
Imagine a system where an employee starts to download a new AI note taker and is prompted to instead adopt a tool the enterprise has already vetted and endorsed. The employee would happily adopt the known tool and would likely feel more confident about using it.
Or imagine that an employee uploads a file to ChatGPT for analysis and is immediately prompted to use the enterprise-hosted version of ChatGPT to protect corporate data. Most employees wouldn’t hesitate.
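The redirect logic behind that kind of nudge can be simple. Below is a minimal Python sketch of the just-in-time prompt; the tool names and the mapping of unsanctioned tools to vetted alternatives are made up for illustration, not drawn from any real deployment.

```python
# Minimal sketch of a just-in-time nudge. The tool names and the mapping
# of unsanctioned tools to vetted alternatives are invented for illustration.
APPROVED_ALTERNATIVES = {
    "consumer ChatGPT": "the enterprise-hosted ChatGPT instance",
    "new AI note taker": "the AI note taker the enterprise has already vetted",
}


def just_in_time_nudge(observed_tool: str) -> str:
    """Return the guidance shown at the moment an employee adopts an AI tool."""
    alternative = APPROVED_ALTERNATIVES.get(observed_tool)
    if alternative:
        return (f"{observed_tool} hasn't been reviewed yet. "
                f"Please use {alternative}, which is approved for corporate data.")
    return (f"{observed_tool} is new to us. Security will take a look; "
            "in the meantime, avoid uploading sensitive data.")


# Example: intercepting the download of an unvetted note taker.
print(just_in_time_nudge("new AI note taker"))
```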
As generative AI reshapes the tech landscape, organizations that prioritize continuous AI discovery, robust AI vendor risk management and just-in-time AI governance will be better equipped to seize the benefits—and protect themselves against the risks of a SolarWinds-style attack.