May 11, 2026 | Guides

AI agent governance: How security teams map, inventory, and control AI agents

Most organizations already have AI agents running in their environment. Here's how to find them, understand what they can do, and make sure someone owns them.

Quick answer

AI agent governance is the process of discovering, inventorying, and controlling AI agents operating inside enterprise environments. Unlike traditional AI governance—which focuses on policies for how AI tools are used—agent governance addresses autonomous systems that take actions on behalf of users across SaaS applications, external APIs, and sensitive data stores. The first step is building a complete inventory, because most organizations already have agents running that IT never approved.


Key takeaways

  • AI agents act autonomously across systems; governing them requires a different approach than governing AI tools that respond to prompts.
  • Most enterprise environments already have AI agents operating without IT approval—deployed by employees, auto-provisioned by SaaS platforms, or extended through MCP server connections.
  • The first job in AI agent governance is discovery: you can't apply controls to agents you don't know exist.
  • An AI agent inventory should cover OAuth grants, API keys, MCP server connections, and non-human identities—not just application installations.
  • Least privilege, access reviews, runtime monitoring, and offboarding procedures are the four core controls once you have visibility.

What is AI agent governance?

By 2025, Salesforce had made Agentforce generally available. Microsoft had shipped Copilot Studio to enterprise customers. OpenAI had opened GPT Actions to anyone with an API key. Within a year, AI agents had moved from developer experiment to standard enterprise capability—and security teams hadn't been asked whether any of it was safe. Most still don't know how many agents are running in their environments.


AI agent governance is the set of processes, policies, and controls an organization uses to discover, assess, and manage AI agents operating in its environment.


An AI agent is any software system that can take autonomous actions to accomplish a goal: querying external APIs, running code, reading or writing files, sending messages, connecting to other services—without requiring a human to approve each step. Unlike a traditional AI tool, which responds to a prompt and waits for the next one, an agent persists, plans, and acts. That distinction matters for governance because agents don't just access data; they do things with it.


AI agent governance is narrower and more operational than broader AI governance. AI governance typically covers policies for AI tool adoption, risk assessment processes, and acceptable use frameworks—the decisions organizations make about which AI tools are allowed and how they should be used. Agent governance focuses specifically on autonomous systems that act inside your environment, often with elevated access, and often without the visibility your existing controls were designed to provide.


How AI agent governance differs from AI governance

When a security team governs AI tool adoption, the core questions are: which tools are employees using, what data are they sending to those tools, and do those tools meet the organization's security standards? The controls are primarily discovery and policy enforcement.


Agent governance introduces a different set of questions. An agent doesn't just receive data from a user—it may have been granted OAuth access to a user's calendar, email, and documents. It may hold an API key with write access to a production system. It may be connected to a Model Context Protocol server that extends its reach to file systems, databases, and internal APIs. And it may keep operating after the employee who deployed it has left the organization.


Why agents require a different approach than AI tools

The visibility gap is structural. Most existing controls—SSO, MDM, network monitoring—were designed around human identities and approved applications. AI agents increasingly operate as non-human identities, holding credentials that don't appear in those systems. They authenticate via OAuth tokens or API keys, not user credentials. They connect to external services directly. A tool inventory will miss them. Network logs will show their traffic but won't identify them as agents. An MDM policy has no concept of an autonomous agent credential.


The shadow agent problem

The standard framing for AI risk in the enterprise is shadow AI: employees using AI tools that IT hasn't approved. That's a real problem. But shadow agents are a distinct and more serious category.


A shadow AI tool generates outputs that a human reviews and acts on. A shadow agent takes actions directly. It may have write access to files, send emails on a user's behalf, create calendar events, push code changes, or call external APIs. Those actions persist in the environment after the conversation ends. When the agent operates under automation framework credentials or a service account, it can keep acting even after the employee who created it has been offboarded.


Shadow agents enter enterprise environments through several paths.


Agents employees deploy themselves

Building a personal agent is increasingly accessible. Platforms like OpenAI's GPTs, Anthropic's Claude tool-use features, and a growing set of no-code agent builders let users create agents with OAuth connections to their work accounts in minutes. From IT's perspective, this looks like a new OAuth grant from a user—which may or may not trigger a review depending on whether the organization has OAuth monitoring in place.


The risk isn't limited to developers or technical users. Browser extension-based agents, AI assistants with deep calendar and email integrations, and consumer AI platforms with business connectors are all being used at work by employees who don't think of themselves as deploying an agent. They're configuring a productivity tool that happens to act autonomously on their behalf.


Agents auto-provisioned by SaaS platforms

Enterprise SaaS platforms are adding agentic capabilities directly to their products. Salesforce Agentforce lets users create and deploy agents within a Salesforce environment. Microsoft Copilot Studio lets users build agents connected to SharePoint, Teams, and Outlook. ServiceNow and Workday are adding similar capabilities. HubSpot's Breeze Agents operate inside the CRM without leaving the platform.


These agents are often provisioned without a dedicated IT review because they're built into platforms IT already manages. The security team approved Salesforce; that doesn't mean they've evaluated every Agentforce agent a sales ops user created last Tuesday. The surface area is larger than it looks from the application layer.


Agents extended via MCP server connections

The Model Context Protocol (MCP) is an open standard that lets AI agents connect to external tools and data sources through a standardized interface. An agent using MCP can connect to any MCP server, including servers that expose file systems, databases, internal APIs, and code repositories.


The security implication: an agent's effective access can be significantly larger than its stated permissions. A user might authorize an agent to access their email. If that agent also connects to an MCP server that exposes the company's code repository, its actual access scope is email plus code—and the second connection may have been invisible to the original approval workflow.


Nudge Security's research on MCP server exposure and credential risk documents how MCP server sprawl introduces credential exposure and privilege escalation that traditional SaaS discovery tools don't surface. The OpenClaw AI incident—involving the exposure of 1.5 million API tokens through a popular AI tool—illustrates how quickly agent-layer risk can materialize. MCP server connections are the least visible and fastest-growing expansion of the SaaS attack surface.


How to build an AI agent inventory

Governance can't precede discovery. Before you can apply least privilege, conduct access reviews, or set runtime policies, you need to know what agents are operating in your environment.


An AI agent inventory is not the same as an AI tool inventory. An AI tool inventory asks: which applications are employees using? An agent inventory asks: which autonomous systems have access to our environment, what can they do, and who authorized them?


What your environment probably already contains

Before mapping your full agent inventory, it helps to know what you're likely to find. A typical mid-market enterprise running modern SaaS already has agents operating across several categories, whether IT knows it or not.


Platform-provisioned agents: If your organization uses Salesforce, there are likely Agentforce agents created by sales ops or marketing. If you use Microsoft 365, there may be Copilot Studio agents connected to SharePoint or Teams data. HubSpot Breeze Agents are active in any HubSpot environment where a marketer has explored the feature.


Employee-built agents via AI platforms: Employees using OpenAI's platform may have created GPTs with Google Workspace or Slack OAuth connections. Developers using Anthropic, LangChain, or similar frameworks may have built agents with API key access to internal systems.


Browser extension agents: Several popular browser extensions now operate agentically, taking actions on behalf of users across web applications. These typically hold OAuth grants that appear alongside normal SaaS connections.


Zapier and workflow-tool agents: Automated workflows built in Zapier, Make, or n8n that incorporate AI model calls have effectively become agents—they receive a trigger, call an LLM, and take an action, often with write access to business systems.


Where agents live in enterprise environments

AI agents surface across several layers that a conventional SaaS discovery approach may not cover.


OAuth grants. Most agents authenticate to user accounts via OAuth. An agent with access to a Google Workspace or Microsoft 365 account appears in that user's list of connected apps, but traditional OAuth risk management focuses on sanctioned third-party SaaS—not agentic connections. Look for OAuth grants with write scopes to email, calendar, documents, and collaboration tools. Write-scope grants are a strong signal of an agentic or highly integrated connection worth reviewing.
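
For Google Workspace environments, one practical way to surface these grants is the Admin SDK Directory API, which lists the OAuth tokens issued for each user. A minimal sketch, assuming a service account with domain-wide delegation and treating the scope heuristic as illustrative rather than definitive:

```
# Sketch: flag OAuth grants with write-level scopes for one Google Workspace user.
# Assumes a service account with domain-wide delegation; the scope heuristic is illustrative.
from google.oauth2 import service_account
from googleapiclient.discovery import build

ADMIN_SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.security"]

creds = service_account.Credentials.from_service_account_file(
    "sa.json", scopes=ADMIN_SCOPES).with_subject("admin@example.com")
directory = build("admin", "directory_v1", credentials=creds)

def looks_like_write(scope: str) -> bool:
    # Heuristic: a non-readonly scope touching mail, files, or calendar
    # usually lets the holder act, not just read.
    return (not scope.endswith(".readonly")
            and any(s in scope for s in ("gmail", "drive", "calendar")))

def risky_grants(user_email: str):
    tokens = directory.tokens().list(userKey=user_email).execute()
    for token in tokens.get("items", []):
        write_scopes = [s for s in token.get("scopes", []) if looks_like_write(s)]
        if write_scopes:
            yield token.get("displayText", token["clientId"]), write_scopes

for app, scopes in risky_grants("jane@example.com"):
    print(f"Review: {app} -> {scopes}")
```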


API keys. Agents built by developers or technical users often authenticate via API keys rather than OAuth. These keys are frequently stored in code repositories, environment variables, or developer tools. An API key with write access to a production system is a non-human identity with standing access that most identity governance programs don't track.
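
A first pass at finding these keys can be a simple scan of repositories and configuration directories for key-shaped strings. The sketch below uses only the standard library; the patterns cover a few common key formats and are illustrative, not exhaustive, so treat it as a starting point rather than a replacement for a dedicated secret scanner.

```
# Sketch: scan a directory tree for API-key-shaped strings that may belong to agents.
# Patterns are illustrative, not exhaustive.
import pathlib
import re

PATTERNS = {
    "openai_key": re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "slack_bot_token": re.compile(r"xoxb-[0-9A-Za-z-]{20,}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def scan(root: str) -> None:
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            if path.stat().st_size > 1_000_000:  # skip very large files
                continue
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                print(f"{path}: possible {name} ({match.group()[:12]}...)")

scan(".")
```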


MCP server connections. Agents using the Model Context Protocol connect to MCP servers that may not appear anywhere in your SaaS inventory. The connection is agent-to-server rather than user-to-application, so it bypasses discovery methods designed around human identity. Monitoring for MCP server connections requires visibility into the agent's runtime environment or traffic to known MCP server endpoints.
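
One place to start is the MCP client configuration itself. A minimal sketch, assuming the Claude Desktop format where connected servers are declared under an "mcpServers" key in claude_desktop_config.json; other MCP clients declare servers in similar files, so the paths and key names here are assumptions to adapt:

```
# Sketch: list MCP servers declared in a local MCP client configuration.
# Assumes the Claude Desktop format ("mcpServers" map in claude_desktop_config.json);
# adjust paths and key names for other MCP clients.
import json
import pathlib

CONFIG_PATHS = [
    pathlib.Path.home() / "Library/Application Support/Claude/claude_desktop_config.json",  # macOS
    pathlib.Path.home() / "AppData/Roaming/Claude/claude_desktop_config.json",              # Windows
]

for config_path in CONFIG_PATHS:
    if not config_path.exists():
        continue
    config = json.loads(config_path.read_text())
    for name, server in config.get("mcpServers", {}).items():
        command = " ".join([server.get("command", "")] + server.get("args", []))
        print(f"{config_path}: MCP server '{name}' launches: {command}")
```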


SaaS platform agent registries. Salesforce Agentforce, Microsoft Copilot Studio, ServiceNow, and similar platforms maintain internal registries of agents created within their environment. These registries are visible to administrators but require intentional review—they won't surface in a conventional SaaS discovery scan.


What to look for when building your inventory

For each agent you discover, capture at minimum (a sample inventory record is sketched after this list):

  • Identity: What credential does the agent use to authenticate? Is it a user OAuth grant, a service account, an API key, or a platform-specific agent credential?
  • Access scope: What permissions has the agent been granted? Which data stores, systems, or services can it reach? Are any permissions broader than the agent's stated purpose requires?
  • Deployment source: Who deployed the agent? Was it a user, a developer, or an automated platform action? Was there any security review before deployment?
  • Platform: What agent framework or SaaS product is the agent running on? What are that platform's data handling and retention practices?
  • MCP connections: Does the agent connect to any MCP servers? What tools and data sources do those servers expose?
  • Activity status: Has the agent been active recently? What actions has it taken, and do those actions match its stated purpose?
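
A minimal sketch of what one inventory record might look like with these fields captured in a structured form; the field names and credential types are illustrative, not a standard schema:

```
# Sketch of a single agent inventory record. Field names and credential
# types are illustrative, not a standard schema.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class CredentialType(Enum):
    OAUTH_GRANT = "oauth_grant"
    API_KEY = "api_key"
    SERVICE_ACCOUNT = "service_account"
    PLATFORM_AGENT = "platform_agent"  # e.g., Agentforce, Copilot Studio

@dataclass
class AgentRecord:
    name: str
    credential_type: CredentialType
    scopes: list[str]                   # permissions actually granted
    deployed_by: str                    # person, or the platform action that created it
    platform: str                       # framework or SaaS product it runs on
    mcp_servers: list[str] = field(default_factory=list)
    last_active: date | None = None
    owner: str | None = None            # accountable owner, if one has been assigned
    reviewed_on: date | None = None

calendar_bot = AgentRecord(
    name="meeting-prep-gpt",
    credential_type=CredentialType.OAUTH_GRANT,
    scopes=["calendar", "gmail.send"],
    deployed_by="jane@example.com",
    platform="OpenAI GPTs",
)
```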


Keeping the inventory current

AI agent inventories decay fast. Employees deploy new agents. SaaS platforms add agentic capabilities in product updates. The MCP server ecosystem is expanding weekly. An inventory built today is incomplete by next month.


Ongoing inventory maintenance requires continuous monitoring, not point-in-time scans. AI security and governance tools designed for this environment surface new agent connections as they appear, flag changes in access scope, and alert on high-risk grants as they're created—rather than after the next quarterly review cycle.
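
At its simplest, that means diffing today's inventory against the last snapshot and alerting on what changed. A minimal sketch, assuming each snapshot maps an agent identifier to its granted scopes:

```
# Sketch: compare two inventory snapshots and flag new agents, expanded scopes,
# and disappearances. The snapshot format ({agent_id: set_of_scopes}) is assumed.
def diff_inventory(previous: dict[str, set[str]], current: dict[str, set[str]]) -> list[str]:
    alerts = []
    for agent_id, scopes in current.items():
        if agent_id not in previous:
            alerts.append(f"NEW agent: {agent_id} with scopes {sorted(scopes)}")
        elif scopes - previous[agent_id]:
            alerts.append(f"SCOPE EXPANSION: {agent_id} gained {sorted(scopes - previous[agent_id])}")
    for agent_id in previous.keys() - current.keys():
        alerts.append(f"REMOVED: {agent_id} (confirm the revocation was intentional)")
    return alerts

yesterday = {"meeting-prep-gpt": {"calendar"}}
today = {"meeting-prep-gpt": {"calendar", "gmail.send"},
         "sales-agentforce-01": {"crm.write"}}
for alert in diff_inventory(yesterday, today):
    print(alert)
```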


Key risks to assess once you have visibility

With an inventory in hand, the next step is risk assessment. Not all agents carry the same risk. The factors that matter most are access scope, the data the agent can reach, and the controls on its identity.


Privilege escalation

AI agents frequently request broader permissions than their stated purpose requires. A productivity agent that helps with your calendar may have requested access to your full Google Workspace account, including Drive and Gmail. An agent initially granted read access to a data store may have been updated by its developer to include write access after the original security review.


The risk mirrors what privileged access management addresses for human users, but applies to non-human identity credentials that don't appear in the same review cycles. Agents operating with unnecessary write access to production systems are the highest-priority escalation risk.


Data access scope and leakage

Agents with broad data access—email, documents, databases, code repositories—can exfiltrate data intentionally or inadvertently. The risk with AI agents is that data leakage can be indirect. An agent may summarize sensitive documents and include that content in an API call to an external model provider, where it persists in logs or training pipelines. No file was downloaded and no email was forwarded, but the content was transmitted outside the organization's control.


Nudge Security's research on browser extensions that harvest and exfiltrate AI chat conversations illustrates the pattern: tools that appear to operate locally can transmit session content to external servers without triggering traditional DLP controls.


Non-human identity sprawl

Every agent credential—OAuth token, API key, service account—is a non-human identity that requires its own lifecycle management. Non-human identities typically don't expire, aren't subject to the same access review cadences as human users, and survive employee offboarding unless explicitly revoked.


Identity and access management programs are well-established for human users. Most organizations have weak or no governance for non-human identities at scale. As agent adoption grows, the volume of non-human identities in enterprise environments is outpacing the governance programs designed to manage them.


MCP server connections

MCP server connections deserve separate attention because they're the least visible part of an agent's access scope. A user who grants an agent access to their calendar may not know whether that agent also connects to MCP servers. The MCP specification doesn't require explicit user consent for server connections—the agent's developer makes that decision at build time.


MCP security risks include credential theft through tool call poisoning, data exfiltration through MCP server responses, and privilege escalation through servers that expose capabilities the user never intended to grant. These risks are distinct from the standard OAuth risk model and require specific controls.


From inventory to governance: applying controls

An inventory without controls is a catalog of your exposure. The four controls that matter most for AI agent governance are least privilege enforcement, access reviews, runtime monitoring, and offboarding procedures.


Least privilege for AI agents

Least privilege—grant only the access required to perform the intended function—applies to agents as directly as it does to human users. In practice, this means reviewing the permissions requested by each agent and reducing any that exceed what the agent's purpose actually requires.


Enforcing this is harder with agents than with human users because agents often request permissions during initial setup, before IT has reviewed them. The practical approach: flag any agent OAuth grant carrying write scopes as requiring review before activation, and maintain a list of approved permission profiles for common agent types in your environment.
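
A minimal sketch of the second idea, approved permission profiles, with profile names and scope strings that are assumptions to replace with your own:

```
# Sketch: check an agent's requested scopes against an approved permission profile.
# Profile names and scope strings are illustrative.
APPROVED_PROFILES = {
    "scheduling_assistant": {"calendar.readonly", "calendar.events"},
    "doc_summarizer": {"drive.readonly"},
    "support_triage": {"tickets.read", "tickets.comment"},
}

def scopes_outside_profile(profile: str, requested: set[str]) -> set[str]:
    """Return any requested scopes not covered by the approved profile."""
    return requested - APPROVED_PROFILES.get(profile, set())

excess = scopes_outside_profile("scheduling_assistant",
                                {"calendar.events", "gmail.send", "drive"})
if excess:
    print(f"Hold for review: scopes outside the approved profile: {sorted(excess)}")
```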


Access reviews for agent OAuth grants

User access reviews are a standard requirement in SOC 2 and security program audits. Most access review processes focus on user-to-application access. They should also cover agent-to-application access—including OAuth grants held by agents, API keys with standing access, and service accounts used by agent platforms.


Agent access reviews are more complex than user access reviews because agents don't have a human owner who can confirm whether access is still needed. The review process needs to include the employee who deployed the agent, the application owner, and—for platform-provisioned agents—the business unit that owns the SaaS platform.
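
In practice that means each review item needs more than one reviewer attached. A minimal sketch of that routing, assuming inventory records with the fields shown:

```
# Sketch: route each agent grant to the right reviewers. Record fields are assumed.
def reviewers_for(agent: dict) -> list[str]:
    reviewers = [agent["deployed_by"], agent["app_owner"]]
    if agent["credential_type"] == "platform_agent":
        reviewers.append(agent["business_unit_owner"])  # platform-provisioned agents
    return reviewers

agents = [
    {"name": "sales-agentforce-01", "credential_type": "platform_agent",
     "deployed_by": "ops@example.com", "app_owner": "crm-admin@example.com",
     "business_unit_owner": "vp-sales@example.com"},
]
for agent in agents:
    print(f"Review {agent['name']} with: {', '.join(reviewers_for(agent))}")
```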


SOC 2 compliance requirements increasingly cover AI agent access. Organizations that can't produce an agent inventory at audit time will be unable to demonstrate scope completeness for in-scope applications—a gap auditors are beginning to ask about.


Runtime monitoring and drift detection

Least privilege and access reviews address an agent's permissions at a point in time. Runtime monitoring addresses what the agent is actually doing.


Runtime monitoring for AI agents looks for behavioral anomalies: unusual data access patterns, requests to new API endpoints, connections to new MCP servers, or a sudden increase in the scope or volume of actions. The challenge is that agents are inherently less predictable than traditional software. An agent's actions depend on the prompts it receives and the decisions it makes autonomously, so establishing a baseline for normal behavior is harder.
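
Even a rough baseline helps. The sketch below records which endpoints each agent has historically called and flags anything new; the event format is an assumption, and real telemetry would come from OAuth activity logs or an agent platform's audit trail.

```
# Sketch: flag agent actions that fall outside an observed baseline.
# The event format ({"agent": ..., "endpoint": ...}) is assumed.
from collections import defaultdict

def build_baseline(history: list[dict]) -> dict[str, set[str]]:
    baseline = defaultdict(set)
    for event in history:
        baseline[event["agent"]].add(event["endpoint"])
    return baseline

def anomalies(baseline: dict[str, set[str]], recent: list[dict]) -> list[str]:
    return [f"{e['agent']} called a new endpoint: {e['endpoint']}"
            for e in recent if e["endpoint"] not in baseline.get(e["agent"], set())]

history = [{"agent": "meeting-prep-gpt", "endpoint": "calendar.googleapis.com"}]
recent = [{"agent": "meeting-prep-gpt", "endpoint": "api.github.com"}]
print(anomalies(build_baseline(history), recent))
```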


Drift detection matters here because agents can be updated by their developers without triggering a new security review, and platform-provisioned agents may receive capability expansions through SaaS product updates. An agent that was reviewed and approved last quarter may have a different effective access scope today.


Offboarding: what happens when the employee leaves

Employee offboarding processes are designed to revoke a departing employee's access to SaaS applications. They typically don't address agents the employee deployed under their user identity.


When an employee leaves, any agent they deployed under their OAuth credentials should be reviewed. If the agent performs a business function the organization wants to continue, its credentials need to be transferred to a service account or reassigned to a new owner. If it was a personal productivity tool, it should be revoked. Leaving it in place means a non-human identity with access to the employee's former application scope continues operating in the environment under orphaned credentials.


The same logic applies during mergers and acquisitions. When an organization acquires another company, the acquired entity's agent inventory is part of the identity and access scope that needs to be evaluated. Agents operating under credentials from employees who've since departed, or with access to systems that are no longer in scope, represent a hidden exposure that standard M&A IT integration checklists don't typically surface.


This gap exists in most offboarding programs because agents aren't surfaced in the standard offboarding checklist. Adding an agent audit step—reviewing the departing employee's OAuth grants and API keys for agentic connections—closes it without requiring a new process.
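
A minimal sketch of that audit step, assuming inventory records like the ones sketched earlier and a business_function flag that an owner confirms manually:

```
# Sketch: offboarding audit step. For each agent tied to a departing employee,
# decide whether to transfer it to a service account or revoke it.
# The business_function flag is something an owner would confirm manually.
def offboarding_actions(departing_user: str, agents: list[dict]) -> list[str]:
    actions = []
    for agent in agents:
        if agent["deployed_by"] != departing_user:
            continue
        if agent.get("business_function"):
            actions.append(f"TRANSFER {agent['name']} to a service account and assign a new owner")
        else:
            actions.append(f"REVOKE {agent['name']} credentials ({agent['credential_type']})")
    return actions

agents = [
    {"name": "meeting-prep-gpt", "deployed_by": "jane@example.com",
     "credential_type": "oauth_grant", "business_function": False},
    {"name": "invoice-sync-agent", "deployed_by": "jane@example.com",
     "credential_type": "api_key", "business_function": True},
]
for action in offboarding_actions("jane@example.com", agents):
    print(action)
```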


Who owns AI agent governance?

The ownership question doesn't have a settled answer yet, and that ambiguity is part of the problem. Most organizations don't have a named owner.


Security teams have the relevant threat model but often lack direct visibility into the SaaS platforms where agents are being provisioned. IT teams manage SaaS access but may not have the context to evaluate agent-specific risks. AppSec teams understand the development patterns but typically don't monitor employee-deployed agents. GRC teams need the evidence but aren't usually involved until audit time.


A workable model assigns primary ownership to whichever team manages SaaS discovery, OAuth risk management, and access reviews today. That team establishes the inventory process, defines risk assessment criteria, and owns the access review workflow. GRC defines the audit evidence requirements. Business unit leaders are accountable for platform-provisioned agents within their SaaS environments.


The governance model should be proportional to the organization's agent exposure. An organization with a handful of developer-built agents has a different scope than one where Salesforce Agentforce is active across the entire sales team. A risk-tiered approach—starting with agents that have write access to production systems and broad data scope, then extending to lower-risk agents as the program matures—is more practical than trying to govern everything at once.


Nudge Security discovers AI agent connections alongside 175,000+ SaaS apps on Day One, without requiring network changes or prior knowledge of your environment, giving security teams a single view of agent OAuth grants, platform-provisioned agents, and MCP server connections. See how Nudge Security handles AI security and governance in practice.


Start with visibility

The organizations that will govern AI agents effectively in 2026 aren't the ones that wrote the most comprehensive policy. They're the ones that built an inventory first.


Governance frameworks, access reviews, and runtime controls are only as good as the inventory they're applied to. An unknown agent is an unreviewed agent, and an unreviewed agent is a blind spot in your identity and access program—regardless of how mature that program is.


The practical starting point is the same for every organization: find every agent credential in your environment, understand what it can do, and make sure someone owns it. That step alone closes a gap that most existing tools weren't designed to address.


Nudge Security gives security teams the visibility to take that step. Discover your AI agent exposure alongside your full SaaS and AI estate. See plans or start a free trial.


FAQ

What is AI agent governance?

AI agent governance is the process of discovering, inventorying, and controlling AI agents that operate autonomously inside enterprise environments. It covers systems that take actions on behalf of users across SaaS applications, APIs, and data stores—not tools that generate outputs for human review. The core challenge is that agents often operate with elevated access and without the visibility that traditional security controls provide.


How does AI agent governance differ from AI governance?

AI governance refers to the policies and processes organizations use to manage AI tool adoption broadly: usage guidelines, risk assessments, and acceptable use policies. AI agent governance is a subset focused specifically on autonomous systems that take actions rather than generate outputs. Governing an AI tool means deciding whether it's allowed and what data employees can share with it; governing an AI agent means managing an active system with access to enterprise resources that may be operating continuously in the background.


Who should own AI agent governance inside the enterprise?

Ownership should sit with whichever team manages SaaS discovery, OAuth access reviews, and identity governance today—they have the data and processes closest to the problem. GRC defines the audit evidence requirements. Business unit leaders are accountable for platform-provisioned agents within their SaaS environments. Agent governance doesn't require a new function; it extends existing identity and access governance to cover non-human identities.


How do you monitor an AI agent in production?

Runtime monitoring for AI agents looks for behavioral anomalies: unusual data access patterns, connections to new MCP servers or API endpoints, and changes in the scope or volume of actions. Tools that monitor OAuth grant activity and non-human identity behavior can surface these signals. Establishing a baseline requires observing agent behavior over time; flagging significant deviations from that baseline is the core of a production monitoring approach.


How does identity and access management apply to AI agents?

AI agents operate as non-human identities—they authenticate via OAuth tokens, API keys, or service accounts rather than user credentials. Standard IAM practices (least privilege, access reviews, lifecycle management) apply to these identities but require different processes. Agent OAuth grants need to be included in access review scope. API keys need expiration policies. Service accounts used by agents need named owners. Most IAM programs weren't designed for non-human identities at this scale; agent governance extends them to cover that gap.


What are the biggest risks of autonomous AI agents in enterprise environments?

The highest-risk factors are broad data access scope, write permissions to production systems, MCP server connections that expand beyond stated access, and orphaned credentials from departed employees. Agents with write access to email, documents, or code repositories represent the most direct data leakage risk. Agents operating under expired or unreviewed credentials represent the most significant identity risk. MCP server connections that expose internal systems are the least visible and fastest-growing risk category.


How does AI agent governance differ from traditional automation governance?

Traditional automation governance—for RPA, workflow tools, and scripts—deals with systems that execute defined, deterministic sequences. AI agents make autonomous decisions about which steps to take based on context and goals. That unpredictability means governance can't rely purely on reviewing what an agent is designed to do; it requires monitoring what the agent is actually doing, because the actions may differ from the original specification. Drift detection matters more: an agent can behave differently over time as the underlying model is updated, even if the agent's configuration hasn't changed.


What early warning signs indicate AI agent governance breakdown?

Early signals include a rapid increase in OAuth grants with write scopes, API keys with standing production access that aren't tied to a named owner, users reporting unexpected actions from their AI tools, and the discovery of agents operating under credentials from employees who've already been offboarded. At the program level, governance breakdown typically shows up as an inability to answer basic audit questions: how many agents are operating in the environment, who deployed them, and what can they access?


What documentation should exist for an AI agent?

Every agent in your inventory should have a record covering its identity (the credential type and authentication method), access scope (which systems and data it can reach), deployment source (who built it and who approved it), platform (what framework or SaaS product it runs on), and last-reviewed date. For agents with write access to production systems, that documentation should also include what actions the agent is authorized to take and a named owner responsible for its lifecycle.


Why do AI agents need their own identity class?

Human identity governance assumes access is tied to a person with a job function and a lifecycle. When that person changes roles or leaves, access is reviewed or revoked. AI agents don't fit that model: they operate continuously, their access scope is defined by their developer rather than their deployer, and they don't have a natural role-change trigger for access review. A distinct identity class for agents—with its own inventory, review cadence, and offboarding procedures—ensures that governance processes designed for humans extend appropriately to the systems acting on their behalf.


How can AI agent governance support regulatory defensibility?

SOC 2 auditors are beginning to ask organizations to demonstrate scope completeness for in-scope applications, which includes agents with access to in-scope data. An agent inventory that documents access scope, review history, and offboarding procedures provides auditor evidence equivalent to what access review processes provide for human users. Organizations with an existing agent inventory and governance program will be better positioned to demonstrate compliance as AI-specific regulatory guidance develops.


Who is responsible when an AI agent causes harm?

Accountability for an AI agent's actions typically sits with the organization that deployed it, the team that owns its credentials, and the employee or function that authorized its access. The agent's developer—whether an employee or a third-party platform vendor—may share liability depending on whether the harm resulted from a misconfiguration, an out-of-scope action, or a product defect. Establishing clear ownership at the time of deployment, documented in your agent inventory, is the most practical way to ensure accountability is defined before something goes wrong rather than after.
