April 28, 2026 | Perspectives

AI isn't your breach risk. Your OAuth sprawl is.

How the Vercel incident exposes the SaaS governance gap security teams keep overlooking.

Recently, Vercel disclosed a breach. The path in wasn't a clever zero-day or a sophisticated supply chain attack. It was an OAuth grant—one employee, one free AI productivity tool called Context.ai, and enough delegated access to allow the attacker to pivot into a production environment.

The headlines will frame this as an AI story. I think that framing misses the point entirely.

AI is the red herring. The actual issue is the same one security teams have been avoiding for years: the OAuth attack surface inside your SaaS estate is massive, largely invisible, and almost entirely ungoverned. The fact that an AI tool was the initial foothold is incidental. Replace Context.ai with any free productivity app, project management tool, or data connector, and you get the same breach.

If your reaction to Vercel is to double down on AI controls, you're going to miss the real lesson—and the real risk sitting in your environment right now.

The OAuth attack surface you can't see

Every time an employee signs up for a new SaaS or AI tool and clicks "Allow" on a Google or Microsoft consent screen, they're handing a third party a key to corporate data. No ticket. No procurement review. No vendor security questionnaire. Just a click.

Here's what that looks like at scale, based on what we see across our customer base:

  • On average, each employee has 88 active OAuth grants to third-party applications.
  • Of those, 31 carry data-level permissions—meaning the third party can read, write, or act on corporate data.
  • In an organization of 1,000 employees, that's 88,000 access paths and 31,000 of them with data access.
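The arithmetic above is simple enough to sketch; the figures are the averages from the list, and the script just scales them to a 1,000-person organization:

```python
# Back-of-the-envelope exposure math from the averages above.
GRANTS_PER_EMPLOYEE = 88       # average OAuth grants per employee
DATA_LEVEL_PER_EMPLOYEE = 31   # grants with read/write/act permissions
EMPLOYEES = 1_000

total_paths = GRANTS_PER_EMPLOYEE * EMPLOYEES
data_paths = DATA_LEVEL_PER_EMPLOYEE * EMPLOYEES
print(f"{total_paths:,} access paths, {data_paths:,} with data access")
# → 88,000 access paths, 31,000 with data access
```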

Each one of those 31 data-level grants is, functionally, a third party in your supply chain. Each one represents a vendor relationship your security team almost certainly hasn't assessed, a tool your IT team probably doesn't know exists, and a path to your data that no one is monitoring.

Vercel found out what happens when one of those goes wrong. Every organization I've looked at has the same exposure.

The discovery gap makes it worse

Here's the part that turns bad into catastrophic.

When we deploy with a customer, we consistently find two to three times more SaaS and AI tools in use than they thought they had. The average customer actively manages somewhere between 30% and 40% of the tools their employees are actually using.

Do the math. If an employee has 31 data-access OAuth grants, and the organization only has governance over 30% to 40% of the tools involved, roughly 19 to 22 of those grants per employee are completely unmanaged. No one has vetted the vendor. No one has reviewed the scopes. No one is watching what happens to the data those grants touch.
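That gap calculation, worked out: if 30% to 40% of tools are governed, then 60% to 70% of each employee's data-level grants go unvetted.

```python
# The discovery gap in numbers: only 30-40% of tools are governed,
# so 60-70% of each employee's data-level grants are unmanaged.
DATA_LEVEL_GRANTS = 31  # average data-access grants per employee

for managed in (0.30, 0.40):
    unmanaged = round(DATA_LEVEL_GRANTS * (1 - managed))
    print(f"{managed:.0%} managed -> ~{unmanaged} unmanaged grants per employee")
# → 30% managed -> ~22 unmanaged grants per employee
# → 40% managed -> ~19 unmanaged grants per employee
```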

This is the gap Vercel fell into. Context.ai wasn't on anyone's list of approved vendors. It didn't need to be. One employee decided to try it, granted it the access it asked for, and an attacker did the rest.

Why your current OAuth governance doesn't cover this

Most organizations I talk to tell me they've already got OAuth risk under control. When I press on what that means, the answer is almost always the same: they've built governance inside Google Workspace or Microsoft 365. They review grants. They enforce admin approval for sensitive scopes. They feel good about it.

That's identity-layer governance. It's necessary. It's also not where most of your sensitive data actually lives.

Your customer data sits in Salesforce. Your engineering data sits in GitHub. Your financial data sits in NetSuite or Workday. Your product data sits in Snowflake. Every one of those systems has its own OAuth grant ecosystem, its own app marketplace, its own set of third-party integrations that employees can provision without admin involvement. And none of it is visible from your identity provider.

If your OAuth governance stops at Google and Microsoft, you've covered the front door and left every other entrance wide open.
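Even the identity-layer slice is worth automating. Here's a minimal sketch that flags data-level grants from an exported grant list. It assumes records shaped like Google's Admin SDK Directory API `tokens.list` response (the `displayText` and `scopes` fields come from that API); the `DATA_SCOPE_HINTS` list and the sample data are illustrative, not exhaustive:

```python
# Flag OAuth grants whose scopes imply data-level access.
# Scope substrings below are illustrative, not a complete risk model.
DATA_SCOPE_HINTS = ("drive", "gmail", "mail.read", "files.readwrite", "calendar")

def data_level_grants(grants):
    """Return grants carrying at least one data-level scope."""
    flagged = []
    for grant in grants:
        scopes = grant.get("scopes", [])
        if any(hint in s.lower() for s in scopes for hint in DATA_SCOPE_HINTS):
            flagged.append(grant)
    return flagged

# Hypothetical export, shaped like a Directory API tokens.list response:
sample = [
    {"displayText": "Context.ai",
     "scopes": ["https://www.googleapis.com/auth/drive.readonly"]},
    {"displayText": "Zoom scheduler",
     "scopes": ["openid", "email"]},
]
for g in data_level_grants(sample):
    print("data-level grant:", g["displayText"])
# → data-level grant: Context.ai
```

The point of the sketch is its limit: it only sees grants issued through your identity provider, which is exactly the front door the paragraph above describes.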

Why blocking is a losing strategy

The instinctive response to all of this is to clamp down. Tighten consent policies. Block tools. Require tickets. Slow down provisioning until security can catch up.

I understand the instinct. It doesn't work.

Not only are these controls completely missing from most SaaS and AI apps; employees also don't slow down when confronted with them. They route around. The same Gartner research we've cited before showed that 69% of employees admit to bypassing security controls to get their work done. Block an app, and they'll create an email-and-password account. Block an OAuth grant, and they'll use an API key. Require a ticket, and they'll wait until you're not looking. The adoption will happen. The only question is whether you know about it.

Reactive revocation (finding grants after they're issued and pulling them back) is equally broken. You're always behind the risk. By the time you revoke the grant, the data has been accessed, the integration has been built, and the exposure has been live for weeks or months.

You can't block your way out of this. You can't clean up your way out of it either.

Start with visibility. Then engage.

There's only one approach that works at the scale of modern SaaS adoption: meet your employees where they are, at the moment they're making the decision.

That starts with visibility. You can't govern what you can't see, and right now, most security teams can't see two-thirds of their SaaS estate, let alone the OAuth grants sprawling across it. The first move (and it's an immediate, actionable one) is getting a complete inventory of every SaaS app, every AI tool, and every OAuth grant your employees have created. Across every identity, across every workspace, not just the ones you already knew about.

Once you have that picture, you can start doing something about it. Not by blocking. By engaging. When an employee is about to grant broad data access to a free tool no one's heard of, the right moment to have that conversation is before they click "Allow"—not three weeks later when the grant shows up in an audit. When a sensitive integration gets provisioned, the right response is to guide the employee toward an approved alternative or an appropriate scope, in real time, in context.

This is the Workforce Edge. It's where the decisions are actually being made. It's where the risk actually lives. And it's where security has to meet your workforce if you want to get ahead of the next Vercel.

The real lesson from Vercel

Strip away the AI angle, and what happened to Vercel is almost boring. An employee adopted a free SaaS tool without oversight. The tool got broad OAuth permissions. Those permissions got compromised. The compromise escalated into a production breach.

That story is playing out right now in every organization I look at. The tool will have a different name. The employee will be in a different department. The scope of the grant will be slightly different. The outcome won't.

The orgs that get ahead of this won't be the ones racing to add another AI policy or block another shadow tool. They'll be the ones who stop treating OAuth governance as an identity-layer problem, get real visibility into what's happening at the workforce edge, and engage their employees in the moment they're making the call.

AI is the headline. SaaS access control is the story.

How Nudge Security helps

If this resonates, take a look at what's actually in your environment. Nudge Security can give you a complete inventory of every SaaS app, AI tool, and OAuth grant in your org in under five minutes. No agents, no network traffic, no guessing. Start a free trial today for a full inventory.
