New AI governance features deliver deeper visibility, stronger policy enforcement, and built-in enablement tools to manage workforce AI securely at scale.
IBM’s 2025 Cost of a Data Breach Report exposed a widening AI governance gap: 63% of breached organizations either lack an AI governance policy or are still working on one. With AI adoption moving faster than ever across your organization, establishing enforceable governance frameworks and security strategies is critical to staying ahead.
‍
At Nudge Security, we’re helping organizations close that gap. Our recent collection of AI governance releases delivers deeper visibility, stronger policy enforcement, and built-in enablement tools to manage workforce AI securely at scale. And, launching today, our AI conversation monitoring for sensitive data gives you real-time visibility into what sensitive data employees are sharing with AI chatbots: PII, secrets and credentials, healthcare and financial data, and more.
‍
We’ll walk through the latest AI governance feature updates across four key areas: AI usage visibility, AI data training policy insights, governance playbooks, and ongoing monitoring with in-browser guidance.
Check out our previous blog on how to discover and secure AI adoption with Nudge Security for more of the AI governance features we offer.
‍
Strong AI governance starts with knowing what AI apps exist across your organization. Nudge Security gives you instant visibility into every AI tool your workforce has ever signed up for, what employees are using those tools for, and what sensitive data the tools can access.
‍
The AI usage dashboard shows daily active users (DAUs) across AI tools like ChatGPT, Gemini, Perplexity, and Claude. You’ll get visibility at the app, account, and individual user level, making it easy to see how often employees are using AI and how usage trends shift over time.
‍
Use these insights to spot adoption patterns, measure engagement, and identify opportunities to consolidate AI tool usage across your organization.
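To make the DAU figure concrete, here’s a minimal sketch of the calculation over a hypothetical activity export. The column names and sample data are invented for illustration, not Nudge Security’s actual schema:

```python
import pandas as pd

# Hypothetical activity export: one row per user/app interaction per day.
events = pd.DataFrame({
    "date": ["2025-06-02", "2025-06-02", "2025-06-02", "2025-06-03"],
    "app":  ["ChatGPT", "ChatGPT", "Claude", "ChatGPT"],
    "user": ["ana@example.com", "raj@example.com", "ana@example.com", "ana@example.com"],
})

# Daily active users per app: count distinct users for each (date, app) pair.
dau = events.groupby(["date", "app"])["user"].nunique().rename("dau")
print(dau)
```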
‍
To track daily active users, you'll need to deploy the Nudge Security browser extension.
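How you roll the extension out depends on your browser management setup. As one example, Chrome can force-install an extension fleet-wide via the ExtensionInstallForcelist enterprise policy; the extension ID below is a placeholder, not Nudge Security’s actual ID:

```json
{
  "ExtensionInstallForcelist": [
    "aaaabbbbccccddddeeeeffffgggghhhh;https://clients2.google.com/service/update2/crx"
  ]
}
```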
‍
The AI usage dashboard also shows which AI tools in your estate have access to sensitive data like email, files, and source code. With this visibility, you can assess exposure by department, update an app’s status directly from the dashboard, and prevent unauthorized access to business-critical information.
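Under the hood, “access to email, files, and source code” typically comes down to which OAuth scopes an AI app has been granted. Here’s a simplified illustration using a few real Google and GitHub scope strings; the sensitivity mapping is our own sketch, not Nudge Security’s classification logic:

```python
# Real OAuth scope strings; the data-category labels are our own illustration.
SENSITIVE_SCOPES = {
    "https://www.googleapis.com/auth/gmail.readonly": "email",
    "https://www.googleapis.com/auth/drive": "files",
    "repo": "source code",  # GitHub scope granting full repository access
}

def data_exposed(granted_scopes):
    """Return the sensitive data categories reachable through a grant's scopes."""
    return sorted({SENSITIVE_SCOPES[s] for s in granted_scopes if s in SENSITIVE_SCOPES})

print(data_exposed(["https://www.googleapis.com/auth/gmail.readonly", "repo"]))
# ['email', 'source code']
```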
‍
Once you know what AI tools are being used, the next step is understanding how those tools handle data. Our vendor security risk insights already let you map the AI vendors in your supply chain and identify any breaches that have affected them or their suppliers. Now we've gone further: our AI data training policy summaries and risk insights make it easier to evaluate and mitigate exposure.
‍
Understanding how AI tools, and SaaS apps with embedded AI, handle your data shouldn't require reading through pages of dense legal documentation. That's why we added an AI data training policy summary card to the security tab of every app in your inventory.
‍
This gives you a clear breakdown of each app's AI data training policies, helping you understand whether your data is used for model training, what opt-out options are available, how long data is retained, and other key details. Your team can make informed decisions about the tools they use without getting lost in the fine print.
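To picture what such a summary distills a dense policy down to, here’s a hypothetical shape for the key fields. The field names and values are ours, for illustration only, not Nudge Security’s actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AITrainingPolicySummary:
    """Hypothetical summary of an app's AI data training policy."""
    app: str
    trains_on_customer_data: bool       # is your data used to train models?
    opt_out_available: bool
    opt_out_mechanism: Optional[str]    # e.g. "workspace admin setting"
    retention_period: Optional[str]     # e.g. "30 days after account deletion"

summary = AITrainingPolicySummary(
    app="ExampleAI",
    trains_on_customer_data=True,
    opt_out_available=True,
    opt_out_mechanism="workspace admin setting",
    retention_period="30 days after account deletion",
)
```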
‍
Once you understand how AI is being used across your organization, the next step is putting governance into action. Our playbooks make it simple to review discovered tools, define policies, and enforce guardrails across your environment.
‍
The AI governance playbook helps you evaluate and categorize AI tools discovered in your environment. It provides a structured workflow for configuring rules and policies that align with your organization’s governance framework.
‍
With this playbook, you can review newly discovered AI tools, assign them a status such as approved, acceptable, or not permitted, and configure rules and policies that match your governance framework.
‍
The AI acceptable use policy (AUP) playbook helps you create, manage, and deliver AI policies that adapt as new AI tools emerge and adoption grows.
‍
With this playbook, you can build your AI acceptable use policy, deliver it to employees in the flow of work, and update it as new AI tools emerge.
‍
Governance doesn't stop once policies are set. Nudge Security enables ongoing monitoring, alerts, and guidance to ensure AI is used safely in the flow of work, like revoking risky OAuth grants that share sensitive data with AI tools, agents, and remote MCP servers. Here's how our latest releases help you maintain that protection:
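Before diving in, a quick aside on the OAuth piece: revoking a grant generally means calling the identity provider’s token-revocation endpoint (RFC 7009). Here’s a generic sketch against Google’s documented endpoint; it illustrates the mechanism, not Nudge Security’s internal implementation:

```python
import requests

def revoke_google_oauth_grant(token: str) -> bool:
    """Revoke a Google OAuth access or refresh token via Google's revocation endpoint."""
    resp = requests.post(
        "https://oauth2.googleapis.com/revoke",
        params={"token": token},
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )
    return resp.status_code == 200  # 200 means the grant is no longer valid
```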
‍
Our new AI conversation monitoring feature delivers real-time visibility into the sensitive data employees share during AI chatbot conversations. Through the Nudge Security browser extension, you’ll see when secrets and credentials (API keys, JWTs, authorization headers, cloud keys), PII, financial data, or health data appear in chatbot conversations.
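As a rough illustration of how pattern-based detection of secrets in a prompt can work, here’s a minimal sketch built on a few well-known token formats. These simplified regexes are our own, not Nudge Security’s actual detection rules:

```python
import re

# Simplified patterns for a few well-known secret formats.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "jwt": re.compile(r"\beyJ[\w-]+\.[\w-]+\.[\w-]+"),
    "authorization_header": re.compile(r"authorization:\s*bearer\s+\S+", re.IGNORECASE),
}

def find_secrets(prompt: str):
    """Return the secret types that appear in a chat prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(prompt)]

print(find_secrets("please debug: Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.e30.abc"))
# ['jwt', 'authorization_header']
```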
‍
With this visibility, you can see exactly what kinds of sensitive data are flowing into AI chatbots, respond to risky disclosures as they happen, and guide employees toward safer habits.
‍
Expanding on our existing browser nudging functionality, which guides employees away from not-permitted apps toward an approved alternative, you can now deliver your AI acceptable use policy (AUP) directly in the browser when employees sign up for or log in to AI tools. Nudges appear for apps with an approved, acceptable, or no-approval status, helping you reinforce policies in the moment and guide safer AI adoption.
‍
Our browser extension now supports Google Chrome, Microsoft Edge, Firefox, and Brave, as well as AI browsers like ChatGPT Atlas, Dia, and Perplexity’s Comet. With dedicated deployment options, you can easily extend visibility and protection to wherever employees are exploring and using AI tools.
‍
These recent releases are part of our ongoing commitment to help organizations build and scale their AI governance programs at a pace that keeps up with constant innovation.
‍
Subscribe to our changelog to stay updated on future AI governance feature releases.