December 8, 2025

AI security governance at scale: New features from Nudge Security

New AI governance features deliver deeper visibility, stronger policy enforcement, and built-in enablement tools to manage workforce AI securely at scale.

IBM’s 2025 Cost of a Data Breach Report exposed a widening AI governance gap: 63% of breached organizations either lack an AI governance policy or are still working on one. With AI adoption moving faster than ever across your organization, establishing enforceable governance frameworks and security strategies is critical to staying ahead.


At Nudge Security, we’re helping organizations close that gap. Our recent collection of AI governance releases delivers deeper visibility, stronger policy enforcement, and built-in enablement tools to manage workforce AI securely at scale. And, launching today, our AI conversation monitoring for sensitive data gives you real-time visibility into the sensitive data employees share with AI chatbots: PII, secrets and credentials, healthcare and financial data, and more.


We’ll walk through the latest AI governance feature updates across four key areas:

  • Discovery & Visibility: Gain continuous visibility of AI adoption and utilization across your organization.
  • Risk Assessment: Understand vendor policies, data access, and exposure risks.
  • Scalable Governance: Build, manage, and enforce AI policies across your organization.
  • Data Risk Mitigation: Monitor, alert, and guide safe AI use in real time.

Subscribe to our changelog to stay updated on future AI governance feature releases. For more on the AI governance features we offer, check out our previous blog on how to discover and secure AI adoption with Nudge Security.


1. AI discovery & visibility

Strong AI governance starts with knowing what AI apps exist across your organization. Nudge Security gives you instant visibility into what AI tools your workforce has ever signed up for, what they’re using them for, and what sensitive data they can access.


Monitor daily active users across AI tools.

The AI usage dashboard shows daily active users (DAUs) across AI tools like ChatGPT, Gemini, Perplexity, Claude, and more. You’ll get visibility at the app, account, and individual user level, making it easy to see how often employees are using AI and how usage trends shift over time.


Use these insights to spot adoption patterns, measure engagement, and identify opportunities to consolidate AI tool usage across your organization.
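At its core, a DAU metric like this comes down to counting distinct users per app per day. As a rough illustration (the event schema here is hypothetical, not Nudge Security's actual telemetry format):

```python
from collections import defaultdict
from datetime import date

# Hypothetical sign-in events: (user, app, day) tuples, e.g. from browser telemetry.
events = [
    ("alice", "ChatGPT", date(2025, 12, 1)),
    ("bob",   "ChatGPT", date(2025, 12, 1)),
    ("alice", "ChatGPT", date(2025, 12, 1)),  # repeat visits count once
    ("alice", "Claude",  date(2025, 12, 2)),
]

# Daily active users per app: the number of distinct users per (app, day).
dau = defaultdict(set)
for user, app, day in events:
    dau[(app, day)].add(user)

for (app, day), users in sorted(dau.items()):
    print(f"{app} {day}: {len(users)} DAU")
```

Counting distinct users (rather than raw events) is what keeps repeat visits from inflating the trend line.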


To track daily active users, you'll need to deploy the Nudge Security browser extension.


See which AI tools have access to sensitive data.

The AI usage dashboard also shows which AI tools in your estate have access to sensitive data like email, files, and source code. With this visibility, you can assess exposure by department, update an app’s status directly from the dashboard, and prevent unauthorized access to business-critical information.


2. AI risk assessment

Once you know what AI tools are being used, the next step is understanding how those tools handle data. Our vendor security risk insights already let you map AI across your supply chain and identify any breaches affecting those vendors or their suppliers. Now we've gone further: our AI data training policy summaries and risk insights make it easier to evaluate and mitigate exposure.


Get instant insight into AI providers' data privacy and model training policies.

Understanding how AI tools, and SaaS apps with embedded AI, handle your data shouldn't require reading through pages of dense legal documentation. That's why we added an AI data training policy summary card to the security tab of every app in your inventory.


This gives you a clear breakdown of each app's AI data training policies, helping you understand whether your data is used for model training, what opt-out options are available, how long data is retained, and other key details. Your team can make informed decisions about the tools they use without getting lost in the fine print.


3. Scalable AI governance

Once you understand how AI is being used across your organization, the next step is putting governance into action. Our playbooks make it simple to review discovered tools, define policies, and enforce guardrails across your environment.


Evaluate new AI tools with the AI governance playbook.

The AI governance playbook helps you evaluate and categorize AI tools discovered in your environment. It provides a structured workflow for configuring rules and policies that align with your organization’s governance framework.


With this playbook, you can:

  • Review and assess unapproved AI applications.
  • Remove AI tools that are not permitted.
  • Revoke unnecessary access permissions.
  • Establish guardrails for managing new AI applications as they’re introduced.


Create and deliver an AI acceptable use policy.

The AI acceptable use policy (AUP) playbook helps you create, manage, and deliver AI policies that adapt as new AI tools emerge and adoption grows.


With this playbook, you can:

  • Create, update, and manage AI policies directly in the product.
  • Automatically deliver policies through Slack, Microsoft Teams, or the browser to streamline enforcement as employees sign up for AI tools.
  • Track AUP acceptance across the workforce in the AI usage dashboard, Users table, and user details page.


4. Continuous AI data risk mitigation

Governance doesn't stop once policies are set. Nudge Security enables ongoing monitoring, alerts, and guidance to ensure AI is used safely in the flow of work, such as revoking risky OAuth grants that share sensitive data with AI tools, agents, and remote MCP servers. Here's how more of our recent releases help you maintain that protection:


Detect sensitive data shared in AI conversations.

Our new AI conversation monitoring feature delivers real-time visibility into the sensitive data employees share during AI chatbot conversations. Through the Nudge Security browser extension, you’ll see when secrets and credentials (API keys, JWTs, authorization headers, cloud keys), PII, financial data, or health data appear in chatbot conversations.


With this visibility you can:

  • Identify when risky or sensitive data is being shared with AI tools.
  • Drill into individual user-level conversations to understand context and exposure.
  • Strengthen your AI governance strategy with accurate, real-world usage information.
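Detecting secrets and credentials in free-text prompts is typically pattern-based. As a rough illustration of the idea (not Nudge Security's actual implementation, and far fewer detectors than a production engine would use):

```python
import re

# Illustrative detectors only. A real scanner would use many more patterns
# plus validation and context checks to reduce false positives.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "jwt": re.compile(r"\beyJ[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}\.[A-Za-z0-9_-]{10,}\b"),
    "bearer_token": re.compile(r"Authorization:\s*Bearer\s+\S+", re.IGNORECASE),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the categories of sensitive data found in a chat prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

prompt = "Debug this: Authorization: Bearer abc123 and key AKIAABCDEFGHIJKLMNOP"
print(scan_prompt(prompt))  # -> ['aws_access_key', 'bearer_token']
```

Running this kind of check in the browser extension, before or as text is submitted, is what makes real-time alerting on risky prompts possible.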


Deliver your AI AUP with a browser nudge.

Expanding on our existing browser nudging functionality, which guides employees away from non-permitted apps toward approved alternatives, you can now deliver your AI acceptable use policy (AUP) directly in the browser when employees sign up for or log in to AI tools. Nudges appear for apps with an approved, acceptable, or no-approval status, helping you reinforce policies in the moment and guide safer AI adoption.


Extend visibility across AI browsers.

Our browser extension now supports Google Chrome, Microsoft Edge, Firefox, and Brave, as well as AI browsers like ChatGPT Atlas, Dia, and Perplexity’s Comet. With dedicated deployment options, you can easily extend visibility and protection to wherever employees are exploring and using AI tools.


The future of AI security governance at Nudge Security

These recent releases are part of our ongoing commitment to help organizations build and scale their AI governance programs at a pace that keeps up with constant innovation.


Subscribe to our changelog to stay updated on future AI governance feature releases.


Interested in learning more about how Nudge Security can help with AI governance?
