You can now easily deploy the Nudge Security browser extension to AI browsers like ChatGPT Atlas, Dia, and Perplexity's Comet, with dedicated deployment support. These additions let organizations secure employees everywhere work happens, including in AI browsers.
Head to your Nudge Security settings to deploy across your organization today.
New to Nudge Security? Start a 14-day free trial to expand your SaaS and AI security coverage.
You can now set up a nudge to deliver your AI Acceptable Use Policy (AUP) directly in the browser when employees sign up for or log in to AI tools. The browser nudge will trigger for apps with an approved, acceptable, or no-approval status. This makes it easy to educate users in the moment, reinforce policy awareness, and help teams use AI safely across your organization.
Browser nudging is only available when you deploy our browser extension. Deploy it now for the most comprehensive coverage.
Our new card in the AI Usage dashboard shows daily active users (DAUs) across AI chatbot tools. You can also see this at the app, account, and individual user level on their detail pages, giving you visibility into how employees are using AI. Track adoption trends and overall engagement, and identify opportunities to optimize AI tool usage across your organization.
Available for organizations using the Nudge Security browser extension. Deploy now.
We’ve refreshed our AI Acceptable Use Policy (AUP) playbook to make governance easier and more actionable. Existing customers using the previous playbook don’t need to take any action unless they want to update their policy.
With this updated playbook, IT admins can:
This update helps organizations strengthen trust in AI, automate AI governance at scale, and give leaders clear visibility into policy adoption.
We’ve added a new card to the security tab of the app details page. It summarizes each app’s AI data training policy, including whether your data is used for training, available opt-out options, retention periods, and other relevant information. This makes it easier for teams to evaluate SaaS and AI tools by showing how each app handles data, without requiring a review of lengthy documentation.
Our new AI governance playbook guides you through evaluating and categorizing AI tools discovered in your estate. The playbook helps you configure rules and policies that align with your developing governance framework. During this workflow you can:
Nudge Security now shows you which AI tools have access to sensitive data like email, files, and source code in the AI usage dashboard. You can easily see this information by department and change app status directly in the dashboard, helping you reduce the risk of sensitive data exposure to AI tools.