
May 6, 2024

The Shadow IT Implications of AI


Author

Todd Graham

As we kick off RSA in San Francisco this week, we wanted to talk about the shadow IT implications of AI.

It’s a well-traveled path: A new technology emerges to great excitement and fanfare. Companies and users rush to enjoy its benefits, and cybercriminals close on their heels to exploit its unaddressed risks. Embarrassing breaches and bad headlines inevitably follow.

We’ve seen this story play out countless times. BYOD and mobile devices promised to free workers from corporate hardware and locations, but they also took data beyond the defenses of their companies’ existing security stack. The shift to remote work during the pandemic, enabled by a boom in cloud services and video conferencing, helped both organizations and hackers achieve new levels of productivity. And then there’s the classic shadow IT nightmare: SaaS apps that employees can pay for with the swipe of a credit card, with corporate IT and security teams none the wiser.

As the unintended consequences of shadow IT innovation become clear, they spur a wave of reactive measures by beleaguered cybersecurity teams. Never given the chance to create a secure pathway to adoption, they’re now tasked with closing the barn door and scrambling to round up the animals running amok. That’s good news for security vendors selling tools for network, endpoint, and cloud discovery & remediation, but it’s a highly risky way to run a business.

And now it’s happening again. Like earlier waves of shadow IT, AI offers an irresistible value proposition, in this case the transformative possibilities of large language models for workflow automation, natural language interfaces, autonomous agents, copilots and assistants, and other enterprise use cases. The question isn’t whether to create GenAI applications, but how quickly it can be done—and how much data can be pumped into the models to increase their accuracy and relevance.

Vast amounts of sensitive enterprise and customer data gushing through corporate networks and public clouds to power broadly accessible applications … what could go wrong?

Large language models: a slow-motion disaster
As organizations race to harness GenAI, many have already passed the point where comprehensive visibility, control, and governance might have been proactively established. Where does that leave us? Ask one of our portfolio companies, AI security provider HiddenLayer, and they’ll tell you that security measures have fallen well behind AI’s rapid enterprise adoption. Their recently released AI Threat Landscape Report found that an eye-watering 77% of companies identified breaches of their AI in the past year.

So if every project is now a GenAI project, there are now models—companies have an average of 1,689 models in production, per the same HiddenLayer report—scattered across a variety of environments, their locations likely determined by a combination of the lowest-friction option available and the skill sets of the model users. Think about the easiest places to put a computing asset, and the easiest places for users to access it. That’s where your models may now be running. Of course, you can’t be sure.

That brings us to three key questions enterprises now face (a minimal inventory sketch follows the list):
1. Where are our models running—locally or in the cloud? Each possibility brings its own set of security considerations, and addressing those risks starts with a clear view of the landscape.
2. What is running in our models? Is it our own data, third-party data, or custom data? The global regulatory implications of the answer are obvious: knowing what data is being stored and used, and where, is critical to understanding your exposure in the event of a data leak or breach, and to avoiding potential regulatory sanctions.
3. Who is accessing our models, and how? Are the users internal, external, or both? Are they accessing the models directly or through an app? Access management and control are essential elements of data security, and they’re especially critical here given the volume and diversity of LLM data.
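To make those three questions concrete, here is a minimal sketch in Python of what a per-model inventory record might capture. The field names and the governance-gap check are hypothetical illustrations, not a reference to any particular product; the point is that every unanswered question becomes an explicit, reportable gap rather than an unknown.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class Runtime(Enum):
    LOCAL = "local"
    CLOUD = "cloud"
    UNKNOWN = "unknown"


@dataclass
class ModelRecord:
    """One entry in a hypothetical enterprise model inventory."""
    name: str
    runtime: Runtime = Runtime.UNKNOWN                       # where is it running?
    data_sources: List[str] = field(default_factory=list)    # what data feeds it?
    owners: List[str] = field(default_factory=list)          # who is accountable?
    access_paths: List[str] = field(default_factory=list)    # direct API, internal app, etc.


def governance_gaps(record: ModelRecord) -> List[str]:
    """Flag the unanswered governance questions for a single model record."""
    gaps = []
    if record.runtime is Runtime.UNKNOWN:
        gaps.append("runtime/location unknown")
    if not record.data_sources:
        gaps.append("no documented data sources")
    if not record.access_paths:
        gaps.append("no documented access paths")
    if not record.owners:
        gaps.append("no accountable owner")
    return gaps


if __name__ == "__main__":
    # Example: a model someone stood up in the lowest-friction location available.
    shadow_model = ModelRecord(name="claims-letter-assistant")
    print(shadow_model.name, "->", governance_gaps(shadow_model) or "fully documented")
```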

In addition to answering these questions, enterprises must also take on a new duty: ensuring that users know how to govern and use models properly. Even smart people need to be educated on the safe use of new technologies, and the user-friendly side of GenAI can mask a rat’s nest of potential liabilities. As just one example, we’ve heard of one company whose doctors were using LLMs to create claims letters, a practice that led to the leakage of personally identifiable information (PII) and other sensitive data.
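As a purely illustrative sketch (not a substitute for a real DLP control, which would need entity recognition, context, and locale-specific identifiers), even a thin pre-send check can flag the most obvious identifiers before a prompt leaves the building:

```python
import re

# Illustrative patterns only; real PII detection needs far more than regexes.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}


def redact(prompt: str):
    """Replace obvious PII with placeholders before a prompt goes to an external LLM."""
    findings = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt, findings


if __name__ == "__main__":
    draft = "Claim for John Q. Patient, SSN 123-45-6789, reachable at jqp@example.com."
    safe_draft, hits = redact(draft)
    print(safe_draft)   # SSN and email replaced with placeholders
    print(hits)         # ['ssn', 'email']
```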

For CISOs already focused on consolidating their security stack, the implications of GenAI add another layer to think about. Risk officers should also consider the likelihood of regulatory and legal developments around LLMs and their data, especially in the EU. Just as California Senate Bill 1386 ushered in a new generation of data security and breach notification requirements two decades ago, the impending wave of mandates will shape the security and risk management agenda of the AI era.

New tech for old problems
In some ways, the problems posed by GenAI are similar to the perennial challenges with shadow IT: knowing what you have, where, and how it’s being used, by whom. There are already ample solutions of varying quality for discovering and managing other types of shadow IT, but the scale and speed of the AI transformation call for a quantum leap in effectiveness.

Here, AI can help solve the problems it creates by enabling a new tooling response. Companies should embrace machine learning and AI-powered solutions to discover their LLM environments. Such solutions can help answer critical questions about where models run, their intended use, the data flowing into them, and how the models are being accessed and by whom. AI and ML can also help ensure the auditability and traceability of LLM-powered solutions, and ensure they’re integrated effectively with the existing cybersecurity ecosystem.
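One small piece of that auditability is simply making every model invocation leave a trail that the existing logging and SIEM pipeline can consume. Below is a minimal sketch, assuming a hypothetical client function and field names; it records who asked, which model answered, and a hash of the prompt (rather than the prompt itself, to avoid logging sensitive content).

```python
import hashlib
import json
import logging
import time
from typing import Callable

audit_log = logging.getLogger("llm_audit")
logging.basicConfig(level=logging.INFO)


def audited(model_name: str, call_model: Callable[[str], str]) -> Callable[[str, str], str]:
    """Wrap a model-calling function so every invocation leaves an audit record."""
    def wrapper(user_id: str, prompt: str) -> str:
        record = {
            "ts": time.time(),
            "user": user_id,
            "model": model_name,
            "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        }
        audit_log.info(json.dumps(record))   # ship to the SIEM / log pipeline
        return call_model(prompt)
    return wrapper


if __name__ == "__main__":
    # Stand-in for a real LLM client call.
    echo_model = audited("demo-llm", lambda p: f"(model output for: {p[:20]}...)")
    print(echo_model("alice@example.com", "Summarize this quarter's claims backlog."))
```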

At M12, the security implications of GenAI are already playing a role in our investment strategy. We’re looking into the impact of AI on areas such as data loss prevention and threat detection and analysis, both in terms of the new requirements emerging and the new breed of solutions with which they will be addressed.

In the long term, companies will get GenAI under control and learn how to mitigate the risks LLMs can pose. But it will take time. Until then, we can expect to see governments and the press make examples of those who run afoul of regulators, suffer breaches or cyberattacks, or otherwise put their worst foot forward. We’ve seen this cycle before and we’re about to see it again. It’s just a matter of when—and who.