We’ve refreshed our AI Acceptable Use Policy (AUP) playbook to make governance easier and more actionable. Existing customers using the previous playbook don’t need to take any action unless they want to update their policy.
With this updated playbook, IT admins can:
This update helps organizations strengthen trust in AI, automate AI governance at scale, and give leaders clear visibility into policy adoption.
We’ve added a new card to the security tab of the app details page. The card summarizes each app’s AI data training policy, including whether your data is used for training, available opt-out options, retention periods, and other relevant information. This makes it easier for teams to evaluate SaaS and AI tools, since they can see how each app handles data without reviewing lengthy documentation.
Our new AI governance playbook guides you through evaluating and categorizing the AI tools discovered in your estate. The playbook helps you configure rules and policies that align with your evolving governance framework. During this workflow, you can:
Nudge Security’s AI usage dashboard now shows which AI tools have access to sensitive data like email, files, and source code. You can view this information by department and change app status directly in the dashboard, helping you reduce the risk of exposing sensitive data to AI tools.