Recently, one of our enterprise customers asked for help with a growing security problem related to generative AI: an unapproved AI notetaker was spreading rapidly across their organization. Using Nudge Security, they had discovered a concerning trend: in just 90 days, 800 new accounts had been created for the notetaker, nearly double the total from the previous several years combined.
This viral growth was driven, in part, by a common “dark pattern” employed by the AI provider: whenever one employee shared a call recording with another, the notetaker required the recipient to sign up for the service through a highly permissive OAuth grant that requested access to every calendar that employee could reach, then automatically added itself to every meeting going forward. A dark pattern indeed, and one that exposes an organization to a slew of data privacy risks.
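To make the risk concrete, here is a minimal sketch of the difference between a broad calendar grant and a narrower one in a Google OAuth consent request. The client ID, redirect URI, and the notetaker itself are hypothetical placeholders; only the scope URLs are real Google Calendar API scopes.

```python
from urllib.parse import urlencode

# Hypothetical client details -- placeholders, not any real notetaker's values.
CLIENT_ID = "example-notetaker.apps.googleusercontent.com"
REDIRECT_URI = "https://example-notetaker.invalid/oauth/callback"

# Broad scope: full read/write access to every calendar the user can reach.
BROAD_SCOPES = ["https://www.googleapis.com/auth/calendar"]

# Narrower alternative: read-only access to events, often enough for a
# notetaker to find meetings to join.
NARROW_SCOPES = ["https://www.googleapis.com/auth/calendar.events.readonly"]

def consent_url(scopes):
    """Build the Google OAuth consent URL a user would be sent to."""
    params = {
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "response_type": "code",
        "access_type": "offline",  # requests a refresh token for ongoing access
        "scope": " ".join(scopes),
    }
    return "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)

print("Broad grant: ", consent_url(BROAD_SCOPES))
print("Narrow grant:", consent_url(NARROW_SCOPES))
```

A grant for the first scope, combined with the offline refresh token, lets the app read and modify every calendar the user can access indefinitely, which is exactly what allows a notetaker to add itself to all future meetings.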
This GenAI startup is just one entrant in a fast-growing market of AI notetakers, call recorders, and AI meeting assistants that are seeing unprecedented adoption across organizations of all sizes. Tools like Otter, Fireflies, Read, Fathom, Granola, and Supernormal have quickly become mainstream workplace essentials, with millions of users relying on them to capture, transcribe, and summarize meetings. Fireflies, for example, boasts over 20 million users across 500,000 organizations, including 75% of the Fortune 500.
The appeal of an AI notetaker is obvious: it promises to eliminate the tedium of note-taking, allowing participants to be fully present in conversations while AI handles the secretarial work. Many AI notetakers offer features like real-time transcription, automatic summary generation, action item extraction, and even sentiment analysis. Most importantly, these tools are designed for hyper-collaboration, making it incredibly easy to share a recording, transcription, or notes broadly, often without much regard for what sensitive information might be included in them.
What's particularly concerning is how these tools are entering organizations—not through careful IT evaluation and procurement processes, but rather through individual users signing up with their work email addresses.
Most AI notetakers employ aggressive growth tactics, including:
This combination of utility and easy adoption has created a perfect storm for shadow AI proliferation, with many organizations discovering dozens of different AI notetakers being used across departments—often without any centralized visibility or AI data governance.
While the productivity benefits are real, AI notetakers introduce significant data privacy concerns and costs that many organizations have yet to fully grapple with:
AI notetakers capture everything—sensitive business discussions, intellectual property details, customer information, strategic plans, and even casual conversations. This data is typically processed on third-party servers with varying levels of security controls. Also, these services may use the collected data to train their AI models (this is almost always the case with free versions of AI services), creating potential exposure of confidential information.
For regulated industries, AI notetakers present substantial compliance headaches. Healthcare organizations must consider HIPAA implications when patient information is discussed. Financial services firms face potential violations of financial regulations when client details are recorded without proper controls. International businesses must navigate GDPR and other regional privacy laws regarding consent and data processing.
Many AI notetakers are offered by startups that prioritize growth over security maturity. This can result in inadequate security measures, unclear data retention policies, and potential vulnerabilities in how meeting content is stored and transmitted. The security risks are amplified when employees use personal accounts that bypass corporate security controls.
The ease of adoption means these tools often bypass IT governance processes entirely. This creates significant blind spots in data management and security posture. IT and security teams cannot protect what they cannot see, and the distributed nature of these tools makes comprehensive discovery challenging.
In many jurisdictions, recording a conversation without the consent of all parties is illegal; California, for example, requires all-party consent under the California Invasion of Privacy Act. Yet AI notetakers frequently join meetings automatically through calendar integrations, potentially recording participants who haven't consented. This creates both legal and ethical issues for organizations.
As with many emerging technologies, the answer isn't as simple as banning all AI notetakers outright. Like the enterprise customer described above, many organizations now consider themselves “AI-first companies,” encouraging their employees to do more with AI and automation.
Instead, IT, security, and GRC teams must walk a fine line: managing the risks of GenAI tools without blocking the potential benefits of these services. This can be a challenge given the dynamic and decentralized nature of AI adoption. For reference, Nudge Security has observed 1,762% growth in AI use over the past two years.
A critical first step to any AI security and governance program is to create and maintain an inventory of AI assets. On an ongoing basis, IT leaders must be able to answer questions such as:
⭐ Using Nudge Security's unique, perimeterless approach to AI discovery, our enterprise customer discovered the unapproved AI notetaker and analyzed its adoption patterns across the organization. This enabled them to segment users into meaningful cohorts—power users, abandoned accounts with no activity for six months, new users who may have signed up unknowingly, and employees who already had accounts with corporate-approved AI notetakers.
As part of your AI risk assessment, it's crucial to carefully evaluate each AI notetaker vendor's security practices. Look for transparency in how they process, store, and potentially use meeting data for their own model training. Key questions to consider include: Does the provider offer end-to-end encryption? What are their data retention policies? Do they have clear mechanisms for deleting customer data upon request? And most importantly, do they claim ownership of your meeting content or assert rights to use it for training their models?
Nudge Security provides a vendor security profile for every generative AI app provider in your organization to help jumpstart this risk assessment. Profiles include data such as:
Many AI notetakers and meeting assistants request extensive OAuth permissions to access calendars, email, and other sensitive resources. Granting calendar access allows these tools to automatically join meetings, but it also gives them potentially unrestricted access to your employees' schedules, contacts, and meeting details. Nudge Security monitors these OAuth connections, providing visibility into which AI tools have been granted permissions, the specific scope of those permissions, and a prioritized risk score. Nudge Security also makes it possible to directly revoke this access from Google or Microsoft.
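For teams that want to spot-check these grants by hand in Google Workspace, the Admin SDK Directory API exposes each user's third-party OAuth tokens. The sketch below is not Nudge Security's implementation; it assumes a service account with domain-wide delegation, the admin.directory.user.security scope, and placeholder file names and email addresses.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

# Assumption: a service account key with domain-wide delegation, impersonating
# a Workspace admin, authorized for the admin.directory.user.security scope.
SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.security"]
creds = service_account.Credentials.from_service_account_file(
    "sa-key.json", scopes=SCOPES
).with_subject("admin@example.com")

directory = build("admin", "directory_v1", credentials=creds)

def audit_calendar_grants(user_email):
    """List third-party OAuth tokens for a user and flag any calendar access."""
    tokens = directory.tokens().list(userKey=user_email).execute().get("items", [])
    for token in tokens:
        scopes = token.get("scopes", [])
        if any("auth/calendar" in s for s in scopes):
            print(f"{user_email}: {token['displayText']} holds {scopes}")
            # To revoke the grant entirely, uncomment:
            # directory.tokens().delete(
            #     userKey=user_email, clientId=token["clientId"]
            # ).execute()

audit_calendar_grants("employee@example.com")
```

Microsoft 365 admins can do the equivalent by reviewing and deleting OAuth2 permission grants for enterprise applications in Entra ID.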
⭐ With this information, our enterprise customer was able to better assess the risk of the unapproved AI notetaker, pinpointing the specific user accounts that had granted calendar access.
Developing a comprehensive AI acceptable use policy is critical for any organization implementing AI governance. This policy should clearly articulate which AI tools are permitted, how they should be used, and what data protections must be in place. While many organizations now encourage AI adoption, they must balance innovation with appropriate guardrails to protect sensitive information.
Nudge Security offers a comprehensive playbook that guides IT and security leaders through the step-by-step process of creating and implementing an AI AUP on an ongoing basis.
Too often, AI policies simply collect dust in an employee handbook. Instead, enforce your AI governance policies in real time by delivering them as just-in-time guardrails as employees sign up for new AI accounts.
Nudge Security automates AI guardrails based on your policies. Purpose-built nudges sent through the browser, in Slack, or by email guide employees toward safe, compliant AI use by:
⭐ With Nudge Security, our enterprise customer nudged each cohort of employees about their use of the unapproved AI notetaker, asking them to clarify their need or prompting them to switch to the company’s preferred AI notetaker instead. In doing so, they successfully eliminated the viral account growth they had observed in recent months and created a blueprint the IT and security teams can use to remove other redundant and risky apps.
Establish an ongoing monitoring process to detect new AI services entering your environment and track usage patterns over time. Using Nudge Security, our customers can easily generate regular reports on:
Custom alerts enable our customers to monitor unsanctioned AI use, including notifications whenever someone attempts to sign up for an AI notetaker that the organization has designated as prohibited.
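As a purely illustrative sketch of how such an alert might be wired up outside the platform (the event shape and webhook URL here are assumptions, not Nudge Security's API), a small handler that forwards a prohibited signup event to a Slack incoming webhook could look like this:

```python
import json
import urllib.request

# Placeholder for a Slack incoming webhook you control.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

# Hypothetical list of AI notetakers the organization has designated as prohibited.
PROHIBITED_APPS = {"UnapprovedNotetaker"}

def alert_on_prohibited_signup(event):
    """If a new SaaS signup matches a prohibited AI notetaker, notify #security."""
    if event.get("app_name") not in PROHIBITED_APPS:
        return
    message = {
        "text": (
            f":rotating_light: {event['user_email']} just signed up for "
            f"{event['app_name']}, which is on the prohibited AI tools list."
        )
    }
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(message).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# Example event, e.g. parsed from an identity provider or SIEM log.
alert_on_prohibited_signup(
    {"app_name": "UnapprovedNotetaker", "user_email": "employee@example.com"}
)
```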
While AI notetakers offer incredible productivity benefits, their viral adoption can create significant security, privacy, and compliance risks. As these tools proliferate through individual sign-ups, organizations face growing blind spots in their security posture and data governance.
Nudge Security provides a comprehensive solution to discover AI notetakers across your organization, assess their risks, and implement guardrails that protect sensitive information without hindering innovation. Our platform empowers IT and security teams to:
Don't wait for an AI data breach to address these emerging risks. Start your free trial of Nudge Security today and gain complete visibility and control of your organization's AI landscape.