March 23, 2026

What is the Model Context Protocol (MCP)?

The Model Context Protocol (MCP) is an open standard that defines how AI agents connect to external tools, data sources, and services—giving them access to real-world context and the ability to take real-world actions.

Main takeaways

  • MCP standardizes the connection between AI agents and the systems they need to access—solving the integration fragmentation that previously made agentic AI development complex and inconsistent.
  • Standardization has security implications: a well-governed MCP deployment is more auditable than ad hoc integrations, but a poorly governed one creates consistent, structured pathways for unauthorized data access or action.
  • Every MCP connection is a trust decision with access implications—defining what an agent can see and do in the real world.
  • Organizations need to inventory their MCP deployments with the same rigor they apply to OAuth integrations and SaaS access: who authorized what, what can it reach, and is that access still appropriate?

What is the Model Context Protocol?

The Model Context Protocol (MCP) is an open standard developed by Anthropic and released in late 2024. It defines a common protocol for AI agents—regardless of which underlying model or framework they use—to discover, connect to, and interact with external tools and data sources.


Before MCP, building an AI agent that could interact with external systems required custom integration work for each connection. An agent that needed to access files, query a database, and call a SaaS API needed three separate, purpose-built integrations. There was no common language between the agent and the services it needed.


MCP addresses this by creating a standardized client-server model. An MCP server exposes capabilities—tools the agent can call, data sources it can read—in a consistent format. Any MCP client (any agent implementing the protocol) can connect to any MCP server, discover its capabilities, and use them without custom integration code. The architecture is explicitly designed to be composable: agents can connect to multiple servers, and servers can be combined to create complex capability sets.
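To make the "common language" concrete: MCP messages follow JSON-RPC 2.0, so capability discovery looks the same against every server. The sketch below builds the discovery and invocation requests by hand (no SDK); the method names follow the MCP specification, while the tool name and arguments are hypothetical.

```python
import json

# MCP is built on JSON-RPC 2.0. After initialization, a client discovers a
# server's capabilities with standard listing requests -- the same shape for
# every server, which is what removes per-integration glue code.
def make_request(request_id, method, params=None):
    """Build a JSON-RPC 2.0 request as used by MCP."""
    msg = {"jsonrpc": "2.0", "id": request_id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# Discover the server's tools, then call one. "tools/list" and "tools/call"
# are spec-defined methods; "search_files" and its arguments are invented
# for illustration.
list_tools = make_request(1, "tools/list")
call_tool = make_request(
    2, "tools/call",
    {"name": "search_files", "arguments": {"query": "q3 report"}},
)

print(json.dumps(list_tools))
print(json.dumps(call_tool))
```

Because every server answers `tools/list` with the same response shape, the client needs no per-server discovery code.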

The MCP architecture

MCP defines three primitives that servers can expose to clients:


Tools—Functions the agent can call that may have side effects. Searching the web, writing to a file, sending an API request, creating a calendar event. Tools represent the "action" dimension of agentic capability.


Resources—Data sources the agent can read. Files, database records, documentation, application state. Resources represent the "context" dimension—giving the agent information it needs to reason and act.


Prompts—Predefined prompt templates for common workflows, allowing servers to provide structured interaction patterns for their domain.
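Each primitive pairs a discovery method with a usage method, all spec-defined. The sketch below maps the three primitives to those methods; the example catalog entries (tool, resource, and prompt names) are hypothetical.

```python
# The three MCP primitives map to distinct JSON-RPC methods. Method names
# follow the MCP specification; the example entries below are invented.
PRIMITIVES = {
    "tools":     {"list_method": "tools/list",     "use_method": "tools/call"},
    "resources": {"list_method": "resources/list", "use_method": "resources/read"},
    "prompts":   {"list_method": "prompts/list",   "use_method": "prompts/get"},
}

# Hypothetical catalog entries a server might advertise, one per primitive:
example_tool = {"name": "create_event", "description": "Create a calendar event"}
example_resource = {"uri": "file:///docs/readme.md", "name": "Project README"}
example_prompt = {"name": "summarize", "description": "Summarize a document"}

for kind, methods in PRIMITIVES.items():
    print(f"{kind}: discover via {methods['list_method']}, "
          f"use via {methods['use_method']}")
```

The split matters for governance: `tools/call` can have side effects, while `resources/read` and `prompts/get` are read-only by design.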


The protocol exchanges JSON-RPC 2.0 messages over standard transports (Streamable HTTP, which can use Server-Sent Events for streaming, for remote servers; stdio for local servers) and is designed to be transport-agnostic and LLM-agnostic.
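For the local case, the stdio transport is simple: each JSON-RPC message is serialized as a single line of JSON, newline-delimited, written to the server process's stdin and read back from its stdout. A minimal framing sketch (using the spec-defined `ping` method; no real server process is involved):

```python
import json

def frame_stdio(message: dict) -> bytes:
    """Frame a JSON-RPC message for MCP's stdio transport: one JSON object
    per line, newline-delimited."""
    return (json.dumps(message) + "\n").encode("utf-8")

def unframe_stdio(data: bytes) -> list:
    """Parse newline-delimited JSON-RPC messages read from a server's stdout."""
    return [json.loads(line) for line in data.decode("utf-8").splitlines() if line]

ping = {"jsonrpc": "2.0", "id": 1, "method": "ping"}
wire = frame_stdio(ping)
assert unframe_stdio(wire) == [ping]
```

Because framing and message shape are fixed by the spec, the same client code talks to a local filesystem server and a remote SaaS server alike.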

Why MCP adoption is accelerating

MCP has gained rapid adoption for two reasons: it solves a genuine pain point for AI developers, and it has strong backing from major players in the AI ecosystem. Anthropic built MCP into Claude; OpenAI, Google, and Microsoft have expressed support; and dozens of vendors have released MCP servers for their platforms.


For developers building AI agents, MCP dramatically reduces integration complexity. For vendors, publishing an MCP server makes their product accessible to any AI agent—a significant distribution incentive.


The result is a growing ecosystem of MCP servers covering an expanding range of enterprise applications: cloud storage, communication platforms, CRM systems, development tools, databases. An agent with the right MCP server connections can interact with a large share of an organization's digital infrastructure.

Governance in an MCP-enabled environment

MCP changes the governance picture for AI in the enterprise in important ways.


On the positive side, MCP creates a structured, inspectable integration model. The capabilities an MCP server exposes are explicitly defined; the actions an agent can take are bounded by what the connected servers allow; the interaction model is standardized enough to be auditable.
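That auditability follows from the uniform message shape: because every side-effecting action flows through `tools/call`, a single interception point can log everything an agent does across all servers. A sketch, with a hypothetical `forward` stub standing in for the real transport send:

```python
import datetime

audit_log = []

def forward(server, request):
    """Hypothetical transport stub; a real client would send the request
    to the named MCP server and return its response."""
    return {"jsonrpc": "2.0", "id": request["id"], "result": {}}

def audited_call(agent_id, server, request):
    """Record every tools/call an agent issues before forwarding it.
    Because MCP standardizes the message shape, one wrapper covers
    every connected server."""
    if request.get("method") == "tools/call":
        audit_log.append({
            "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "agent": agent_id,
            "server": server,
            "tool": request["params"]["name"],
        })
    return forward(server, request)

audited_call("agent-7", "crm-server",
             {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
              "params": {"name": "update_record", "arguments": {}}})
```

Ad hoc integrations offer no equivalent single choke point; each custom connector would need its own logging.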


On the risk side, MCP deployments can proliferate quickly and informally. Developers deploy local MCP servers on their laptops. Power users connect MCP clients to third-party servers from community registries. AI applications ship with pre-configured MCP connections to external services. In each case, new access pathways are created outside formal governance review.


The governance principles that apply are the same ones that govern OAuth integrations and SaaS access generally: inventory every connection, understand what it can access, apply least-privilege scoping, and review regularly.
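The inventory step can start from client configuration files. The sketch below parses a config in the `mcpServers` shape used by several MCP clients and lists each server and how it is reached; the config content itself is hypothetical (the filesystem server package name is real, the paths and CRM URL are invented).

```python
import json

# Hypothetical client configuration in the common `mcpServers` shape:
# each entry names a server and the local command or remote URL that reaches it.
config_text = """
{
  "mcpServers": {
    "filesystem": {"command": "npx",
                   "args": ["-y", "@modelcontextprotocol/server-filesystem",
                            "/home/alice"]},
    "crm": {"url": "https://crm.example.com/mcp"}
  }
}
"""

def inventory(config: dict) -> list:
    """List every configured MCP server and how it is reached -- the first
    step of the governance loop (inventory, scope, review)."""
    rows = []
    for name, entry in config.get("mcpServers", {}).items():
        reach = entry.get("url") or " ".join(
            [entry.get("command", "")] + entry.get("args", []))
        rows.append((name, reach))
    return rows

for name, reach in inventory(json.loads(config_text)):
    print(f"{name}: {reach}")
```

Even this crude listing surfaces the key governance questions: the filesystem entry reveals which directory an agent can read, and the remote URL reveals which third-party service it can act against.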


Learn how Nudge Security helps organizations discover and govern AI integrations, including agentic access via MCP →


Stop worrying about shadow IT security risks.

With an unrivaled, patented approach to SaaS discovery, Nudge Security inventories all cloud and SaaS assets ever created across your organization on Day One, and alerts you as new SaaS apps are adopted.