The AI honeymoon is over. With MCP, embedded AI, and agentic workflows now woven into everyday SaaS, corporate data exposure has fundamentally changed.
This article was updated on January 28, 2026.
For the last two years, we’ve enjoyed the honeymoon phase of AI adoption. The landscape was clear: AI lived in general-purpose LLM chatbots like ChatGPT, or in narrowly scoped AI wrapper apps built on top of major model providers. Data risk was largely limited to what users typed into prompts or manually uploaded as files. Despite rapid adoption, the governance challenges were well-defined and well-contained. Controls, while imperfect, were at least legible.
That phase is over.
As we enter 2026, AI is no longer something employees use on the side. It is something SaaS platforms do by default. And that shift changes everything about how data moves, how access is granted, and how governance must work.
In the last several months, we’ve seen a major shift in how corporate data is accessed by AI tools, driven in large part by Anthropic’s release of the open-source Model Context Protocol (MCP). Instead of relying on one-off prompts and individual file uploads for context, leading LLM providers can now establish direct connections to business‑critical SaaS applications.
This shift is already playing out across the SaaS ecosystem. Providers including GitHub, Sentry, and Atlassian have announced MCP servers or native AI integrations through their marketplaces, signaling how quickly automated, backend access to SaaS data is becoming the norm.
Why the sudden rush to connect LLMs directly to SaaS data? Because it has become clear that for generative AI to deliver sustained business value, it must be grounded in real organizational context—corporate data, tools, and workflows. MCP formalizes this reality, making it easier for AI systems to retrieve relevant data, produce more specific and trustworthy outputs, and reduce hallucinations or misleading results.
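To make the mechanics concrete, here is a minimal sketch of the message shapes involved. MCP is built on JSON-RPC 2.0, and its public specification defines methods such as `tools/list` and `tools/call` for discovering and invoking server-side capabilities. The `search_crm` tool below is hypothetical, invented purely for illustration:

```python
import json

# Sketch of MCP's JSON-RPC 2.0 message shapes, per the public Model
# Context Protocol spec. The "search_crm" tool is hypothetical.

# The client asks the server which tools it exposes.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# The server advertises a tool, including a JSON Schema for its arguments.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "search_crm",
                "description": "Search CRM records visible to the authorizing user",
                "inputSchema": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            }
        ]
    },
}

# The model, via the client, invokes the tool with arguments.
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "search_crm", "arguments": {"query": "Acme renewal"}},
}

def tool_names(response: dict) -> list[str]:
    """Extract advertised tool names from a tools/list response."""
    return [t["name"] for t in response["result"]["tools"]]

print(tool_names(list_response))      # ['search_crm']
print(json.dumps(call_request))       # the serialized request on the wire
```

Note what is absent from the exchange: nothing in the protocol itself constrains what the tool can reach. The call executes with whatever permissions the credential behind the server carries, which is exactly where the governance problem begins.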
However, this architectural shift carries two governance implications that security teams can no longer ignore.
What once looked like a manageable stream of user‑initiated data sharing has become a mesh of persistent backend integrations that few teams can fully see, let alone govern.
This challenge is compounded by a long‑standing reality of SaaS access control. In most platforms, OAuth grants and API keys inherit the full permissions of the user who authorizes them. Fine‑grained scoping—limiting access to specific objects, folders, or records—is still the exception, not the norm.
The result is a familiar but newly dangerous pattern. A user may intend to give an AI assistant access to a handful of documents or records. In practice, the authorization often grants visibility into everything that user can see—and sometimes more.
In a world of non‑human identities and automated agents, these broad grants persist quietly in the background, long after the initial experiment or workflow is forgotten. The risk is not theoretical. It is structural.
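What would it take to surface these grants? A sketch of the kind of audit a security team might run over an integration inventory follows. The scope names, the 90-day staleness threshold, and the `Grant` record are all illustrative assumptions, not any vendor's real API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical inventory record for a non-human identity (an OAuth grant
# or API key). Scope names are illustrative, not any vendor's actual scopes.
@dataclass
class Grant:
    app: str
    scopes: list[str]
    last_used: datetime

# Scopes we treat as inheriting the authorizing user's full visibility.
BROAD_SCOPES = {"files.read.all", "crm.objects.read.all", "admin"}
STALE_AFTER = timedelta(days=90)  # illustrative threshold

def flag_grants(grants: list[Grant], now: datetime) -> list[tuple[str, str]]:
    """Return (app, reason) pairs for grants that deserve review."""
    findings = []
    for g in grants:
        if BROAD_SCOPES & set(g.scopes):
            findings.append((g.app, "broad scope"))
        if now - g.last_used > STALE_AFTER:
            findings.append((g.app, "stale grant"))
    return findings

now = datetime(2026, 1, 28)
grants = [
    Grant("ai-notetaker", ["files.read.all"], now - timedelta(days=3)),
    Grant("old-experiment", ["crm.contacts.read"], now - timedelta(days=200)),
]
print(flag_grants(grants, now))
# [('ai-notetaker', 'broad scope'), ('old-experiment', 'stale grant')]
```

The hard part in practice is not the rule logic but the inventory itself: most organizations have no single feed of which grants exist, which is precisely the visibility gap described above.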
The other major trend we have seen is that the rapid rise of purpose-built AI apps is giving way to the integration of AI directly into standard business software. Over the last 12 months, SaaS providers have moved at remarkable speed to embed AI features into their core products. Today, the vast majority of the top SaaS platforms include AI-enabled interactions by default.
This shift fundamentally changes the governance problem. Instead of monitoring a finite (if fast-growing) set of AI-native tools, organizations now face the challenge of managing AI functionality embedded inside the SaaS applications they already rely on.
Here’s an amusing experiment: pull up your favorite SaaS app and see how long it takes you to find the AI prompt. I did this with HubSpot, and within 15 seconds I had it generate a Python script to open a socket for me. This means we can no longer simply use our network controls to “block” AI apps from employee use; the challenge of controlling and managing AI explodes, because we now need to govern AI features across the standard SaaS apps themselves.
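For reference, the assistant's output was along these lines (reconstructed for illustration, not HubSpot's verbatim response). It is harmless on its own, but a few standard-library lines like these are exactly the capability that “block the AI app at the network” controls assume stays out of reach:

```python
import socket

# Open a listening TCP socket on an ephemeral local port, roughly what
# the embedded assistant produced on request.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
sock.listen(1)
host, port = sock.getsockname()
print(f"listening on {host}:{port}")
sock.close()
```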
The implication is straightforward but profound. Network controls were never designed to selectively block features inside approved SaaS applications. When AI lives behind the same login, domain, and API as the rest of the product, there is no clear perimeter to enforce.
The scope of the problem expands not because employees are reckless, but because AI features have become ambient.
As we look to the future, it’s clear that the AI challenge is, at its core, a SaaS challenge. The question facing security and IT teams is no longer whether employees are using a specific set of AI-native tools, but how data access is being granted, inherited, and automated across an ever-growing ecosystem of SaaS applications.
This shift reframes the governance problem entirely. AI risk does not live in a neat category of tools—it lives inside sanctioned and unsanctioned SaaS platforms alike, each with embedded AI features and simple integrations that can grant automated access to sensitive data.
In this environment, traditional perimeter-based controls struggle to keep up. What matters instead is visibility into the non-human identities created through OAuth grants and API keys, the permissions those identities inherit, and the conditions under which access is granted in the first place.
As employees increasingly experiment with custom workflows and AI-powered assistants, these access decisions become more frequent, more decentralized, and harder to reason about after the fact.
In this heavily distributed, highly decentralized environment, we can no longer rely on network controls to protect and govern the data that moves between tools. Instead, we must focus on managing the non-human identities that provide access to corporate SaaS data—and on establishing clearer guardrails for how that access is granted and maintained.
Each integration, authorization, or enabled feature further democratizes data access, while quietly expanding the organization’s risk surface.
The only consistent control point going forward is the employee. To govern AI use effectively, organizations must engage their workforce at the moments that matter most: when AI features are enabled, when integrations are authorized, and when automated access to SaaS data is approved.
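What might that engagement look like in policy terms? A minimal sketch of a just-in-time guardrail follows, assuming a hypothetical feed of authorization events; the event fields, scope names, and triage rules are all invented for illustration:

```python
# Sketch of a just-in-time guardrail: for each authorization event, decide
# whether to auto-allow, pause and ask the employee to justify the grant,
# or route it to security review. Fields and rules are hypothetical.

SENSITIVE_SCOPES = {"files.read.all", "mail.read", "admin"}

def triage(event: dict) -> str:
    """Return 'allow', 'ask_employee', or 'security_review' for one event."""
    scopes = set(event.get("scopes", []))
    if scopes & SENSITIVE_SCOPES:
        # Broad access: engage the authorizing employee at the moment of
        # the grant, or escalate if the app itself is unrecognized.
        return "ask_employee" if event.get("app_verified") else "security_review"
    return "allow"

print(triage({"app": "ai-scheduler", "scopes": ["calendar.read"], "app_verified": True}))
# allow
print(triage({"app": "ai-notetaker", "scopes": ["mail.read"], "app_verified": True}))
# ask_employee
print(triage({"app": "unknown-agent", "scopes": ["admin"], "app_verified": False}))
# security_review
```

The point of the middle outcome is the thesis of this piece: rather than silently blocking or silently allowing, the control surfaces the decision to the one person who has context, the employee making the grant.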
As it turns out, AI governance is fundamentally a human-centric endeavor.