February 10, 2026 | Research

AI adoption in practice: What real enterprise usage data reveals about risk and governance

New research analyzes real-world AI adoption, integrations, and data exposure across enterprise environments.

Over the past two years, AI adoption across the enterprise has accelerated faster than any prior technology shift. What began as employee experimentation with general-purpose chat tools has evolved into widespread use of AI across every corner of the digital workplace: meetings, documents, code, customer support, and internal automation.

AI governance has emerged as a top priority for security and risk leaders, but many teams struggle to define the scope of the problem. That gap is what our latest research report aims to address.

We analyzed anonymized and aggregated telemetry collected across enterprise environments to provide a view of how AI is actually being adopted and used in practice, including tools that may not be centrally procured or formally approved.

AI adoption in enterprise environments

The tl;dr is that AI adoption is no longer experimental. The data suggests AI is now embedded in everyday workflows—browsers, meeting tools, and developer environments—and connected to sensitive systems like email, documents, code, and tickets.

Key findings include:

  • Usage of core LLM providers is nearly ubiquitous. OpenAI is present in 96.0% of organizations, with Anthropic at 77.8%.
  • The most-used AI tools are diversifying beyond chat. Meeting intelligence (Otter.ai at 74.2%, Read.ai at 62.5%), presentations (Gamma at 52.8%), coding (Cursor at 48.4%), and voice (ElevenLabs at 45.2%) are now widely present.
  • Agentic tooling is emerging. Tools like Manus (22%), Lindy (11%), and Agent.ai (8%) are establishing an early footprint.
  • Integrations are prevalent and varied. OpenAI and Anthropic are most commonly integrated with organizations' productivity suites, as well as knowledge management systems, code repositories, and other tools.
  • Usage is concentrated. Among the most active chat tools observed, OpenAI accounts for 66.8% of prompt volume.
  • Data egress via prompts is non-trivial. 17% of prompts include copy/paste and/or file upload activity, and 72% of uploads come from local files.
  • Sensitive data risks skew toward secrets. Detected sensitive-data events are led by secrets and credentials (47.9%), followed by financial information (36.3%) and health-related data (15.8%).

Rather than relying on surveys or self-reported usage, this analysis is grounded in direct observation of AI activity within enterprise SaaS environments.

Read the full research report here: https://www.nudgesecurity.com/content/ai-adoption-in-practice

Recommendations: A practical AI governance checklist

This data makes clear that AI adoption is no longer a future state to plan for—it's an operational reality happening now, across every part of the enterprise. The question for security and risk leaders is not whether to govern AI, but how to do so in a way that reflects how AI is actually being used.

The following recommendations are designed to help teams move from visibility to control to scalable governance. They reflect a progression that prioritizes data flows, integrations, and employee behavior—not just vendor allowlists. Organizations don't need to implement all controls at once, but should prioritize based on data sensitivity, integration depth, and business criticality.

  • Inventory continuously. Track AI tools, browser extensions, agents, and integrations as living assets—not one-time audits.
  • Prioritize by data access. Focus reviews on tools connected to email, documents, ticketing, and source code, and those with broad access to production environments and critical data.
  • Consolidate where possible. Reduce long-tail sprawl by standardizing on a small number of approved platforms for common use cases.
  • Set clear rules for agents. Require human approval for external actions and log agent activity centrally (a minimal approval-gate sketch follows this list).
  • Reduce secrets exposure. Provide internal guidance for developers and support teams, and deploy tooling that detects and blocks credentials and other sensitive data in prompts and other AI interactions (a minimal detection sketch also follows this list).
  • Establish an AI acceptable use policy. Share the policy with all employees who are using AI and collect acknowledgements. Review and update your policy regularly and renew policy acknowledgements.
  • Assess configurations and detect drift. Revisit integrations and other configuration settings regularly to ensure they remain aligned with current usage and the organization’s policies.
  • Understand AI data training policies. Ensure that reviews of contracts and MSAs include careful review of AI data training policies, not just for AI tools but for any SaaS apps with AI-enabled features.
  • Educate with examples. Training is most effective when grounded in real workflows, such as pasting from documents, uploading spreadsheets, or sharing screenshots.
  • Measure outcomes. Monitor adoption, integration growth, and data-sharing trends to validate whether governance controls are working.
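
To make the agent recommendation concrete, the sketch below shows one way to gate external agent actions behind human approval while logging every action centrally. The AgentAction type, the requires_approval rule, and the logging setup are illustrative assumptions under this approach, not a description of any particular agent framework or product.

```python
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Central audit log for all agent activity (illustrative: stdout here,
# but in practice this would ship to a SIEM or log pipeline).
audit_log = logging.getLogger("agent_audit")
logging.basicConfig(level=logging.INFO)

@dataclass
class AgentAction:
    agent_id: str
    action: str        # e.g. "send_email", "create_ticket", "search_docs"
    target: str        # external system or recipient
    external: bool     # does this action leave the organization's boundary?

def requires_approval(action: AgentAction) -> bool:
    # Example policy: any action that touches an external system needs human sign-off.
    return action.external

def execute_with_guardrails(action: AgentAction, approved_by: str | None = None) -> bool:
    """Log every action; block external actions that lack human approval."""
    record = {**asdict(action),
              "timestamp": datetime.now(timezone.utc).isoformat(),
              "approved_by": approved_by}
    if requires_approval(action) and not approved_by:
        record["outcome"] = "blocked_pending_approval"
        audit_log.info(json.dumps(record))
        return False
    record["outcome"] = "executed"
    audit_log.info(json.dumps(record))
    return True

# Example: an outbound email is held until a named reviewer approves it.
outbound = AgentAction("scheduler-bot", "send_email", "customer@example.com", external=True)
execute_with_guardrails(outbound)                       # blocked and logged
execute_with_guardrails(outbound, approved_by="j.doe")  # executed and logged
```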

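Similarly, the secrets-exposure item can be approximated with prompt-side pattern matching before text reaches an AI tool. The patterns below are a small, illustrative subset and the function names are hypothetical; production tooling would combine much larger pattern libraries with entropy checks, context analysis, and allow-listing.

```python
import re

# Illustrative patterns for common credential formats; real scanners use
# far larger pattern sets plus entropy and context checks.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "bearer_token": re.compile(r"\b[Bb]earer\s+[A-Za-z0-9\-._~+/]{20,}=*"),
    "generic_api_key": re.compile(r"(?i)\b(?:api[_-]?key|token|secret)\s*[:=]\s*[A-Za-z0-9_\-]{16,}"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any secret patterns found in a prompt."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(prompt)]

def allow_prompt(prompt: str) -> bool:
    """Block (and surface for review) any prompt that appears to contain a secret."""
    findings = scan_prompt(prompt)
    if findings:
        print(f"Blocked prompt: possible {', '.join(findings)} detected")
        return False
    return True

# Example: a developer pastes a config snippet containing an AWS access key.
allow_prompt("Why does this fail? aws_access_key_id = AKIAABCDEFGHIJKLMNOP")
```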

How Nudge Security can help

At its core, Nudge Security helps organizations discover, secure, and govern SaaS and AI tools by focusing on how software is actually used—not just how it is officially approved.

Nudge’s approach to AI security and governance emphasizes:

  • Behavioral risk over theoretical risk. Understanding how employees share data with AI tools, rather than relying solely on policy.
  • Integration-aware security. Treating AI tools and agents as connected systems with real permissions, scopes, and blast radius.
  • Practical guardrails. Enabling security teams to guide safer usage through approvals, least-privilege access, and just-in-time interventions.
  • Scalable governance. Helping organizations manage AI adoption at the speed it is occurring, without blocking innovation.

By combining deep SaaS and AI visibility with behavioral insights, Nudge Security enables security leaders to move beyond reactive controls and build AI governance programs that are grounded in reality, enforceable in practice, and aligned with how work actually gets done.
