AI security

We’ve refreshed our AI Acceptable Use Policy (AUP) playbook to make governance easier and more actionable. Existing customers using the previous playbook don’t need to take any action unless they want to update their policy.

With this updated playbook, IT admins can:

  • Create, update, and manage AI policies directly in the product.
  • Deliver policies over Slack or Microsoft Teams to streamline enforcement efforts and reduce manual work.
  • Set up nudges to automatically send policies to employees the first time they sign up for an AI tool.
  • Track AUP acceptance across the workforce in the AI usage dashboard, Users table, and user details page.

This update helps organizations strengthen trust in AI, automate AI governance at scale, and give leaders clear visibility into policy adoption.

We’ve added a new card to the security tab of the app details page. This new card summarizes each app’s AI data training policy, including whether your data is used for training, available opt-out options, retention periods, and other relevant information. This makes it easier for teams to evaluate SaaS and AI tools by showing how each app handles data without requiring a review of lengthy documentation.

Our new AI governance playbook guides you through evaluating and categorizing the AI tools discovered in your estate. The playbook helps you configure rules and policies that align with your evolving governance framework. During this workflow you can:

  • Review unapproved AI applications
  • Remove prohibited AI applications
  • Revoke unnecessary access permissions
  • Establish guardrails for managing new AI applications as they’re introduced

Nudge Security now shows which AI tools have access to sensitive data, such as email, files, and source code, in the AI usage dashboard. You can view this information by department and change an app's status directly in the dashboard, helping you reduce the risk of exposing sensitive data to AI tools.
