July 9, 2025

Shadow AI is taking notes: The growing risk of AI meeting assistants

AI meeting tools like Otter and Fireflies spread fast. Nudge Security helps you uncover and manage the risks.

Recently, one of our enterprise customers asked for help with a growing security problem related to generative AI: an unapproved AI notetaker was spreading rapidly across their organization. Using Nudge Security, they had discovered a concerning trend. In just 90 days, employees had created 800 new accounts with the notetaker, nearly double the number created over the previous several years combined.

This viral growth was caused, in part, by a common “dark pattern” employed by the AI provider: whenever one employee shared a call recording with another employee, the notetaker required the recipient to sign up for the service through a highly permissive OAuth grant that requested access to every calendar the employee could view, automatically adding the notetaker to every meeting going forward. A dark pattern indeed, and one that exposes an organization to a slew of data privacy risks.
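To make this concrete, here is a minimal sketch, with a hypothetical client ID and redirect URI, contrasting the broad Google Calendar OAuth scope such a notetaker typically requests with a narrower, read-only alternative:

```python
from urllib.parse import urlencode

# Hypothetical identifiers for illustration; a real notetaker registers its own.
CLIENT_ID = "example-notetaker.apps.googleusercontent.com"
REDIRECT_URI = "https://app.example-notetaker.com/oauth/callback"

# The broad grant described above: read/write access to every calendar the
# user can reach, which is what lets the tool add itself to all meetings.
BROAD_SCOPES = ["https://www.googleapis.com/auth/calendar"]

# A narrower alternative: read-only visibility into calendar events.
NARROW_SCOPES = ["https://www.googleapis.com/auth/calendar.events.readonly"]

def consent_url(scopes: list[str]) -> str:
    """Build the Google OAuth consent URL a notetaker sends new users to."""
    params = {
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "response_type": "code",
        "scope": " ".join(scopes),
        "access_type": "offline",  # also requests a refresh token for ongoing access
        "prompt": "consent",
    }
    return "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)

print(consent_url(BROAD_SCOPES))   # what the "dark pattern" sign-up flow requests
print(consent_url(NARROW_SCOPES))  # what a more restrained tool could request
```

Once a user clicks through the first URL and consents, the provider holds a refresh token that keeps working long after that meeting ends, which is why these grants deserve ongoing scrutiny.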

This GenAI startup is just one player in a fast-growing market of AI notetakers, call recorders, and AI meeting assistants experiencing unprecedented adoption across organizations of all sizes. Tools like Otter, Fireflies, Read, Fathom, Granola, and Supernormal have quickly become mainstream workplace essentials, with millions of users relying on them to capture, transcribe, and summarize meetings. Fireflies, for example, boasts over 20 million users across 500,000 organizations, including 75% of the Fortune 500.

The appeal of an AI notetaker is obvious: it promises to eliminate the tedium of note-taking, allowing participants to be fully present in conversations while AI handles the secretarial work. Many AI notetakers offer features like real-time transcription, automatic summary generation, action item extraction, and even sentiment analysis. Most importantly, these tools are designed for hyper-collaboration, making it incredibly easy to share a recording, transcription, or notes broadly, often without much regard for what sensitive information might be included in them.

What's particularly concerning is how these tools are entering organizations—not through careful IT evaluation and procurement processes, but rather through individual users signing up with their work email addresses.

Most AI notetakers employ aggressive growth tactics, including:

  • Freemium models that make it easy for individual employees to start using the service without IT approval
  • Calendar integrations that automatically join meetings without explicit notification to all participants
  • Easy invitation and sharing features that encourage viral adoption across teams
  • Minimal or buried privacy controls that default to maximum data collection

This combination of utility and easy adoption has created a perfect storm for shadow AI proliferation, with many organizations discovering dozens of different AI notetakers being used across departments—often without any centralized visibility or AI data governance.

The risks of AI notetakers

While the productivity benefits are real, AI notetakers introduce significant data privacy risks and costs that many organizations have yet to fully grapple with:

1. Data privacy and confidentiality concerns

AI notetakers capture everything: sensitive business discussions, intellectual property details, customer information, strategic plans, and even casual conversations. This data is typically processed on third-party servers with varying levels of security controls. Moreover, these services may use the collected data to train their AI models (almost always the case with free tiers of AI services), creating potential exposure of confidential information.

2. Regulatory compliance challenges

For regulated industries, AI notetakers present substantial compliance headaches. Healthcare organizations must consider HIPAA implications when patient information is discussed. Financial services firms face potential violations of financial regulations when client details are recorded without proper controls. International businesses must navigate GDPR and other regional privacy laws regarding consent and data processing.

3. Security vulnerabilities

Many AI notetakers are offered by startups that prioritize growth over security maturity. This can result in inadequate security measures, unclear data retention policies, and potential vulnerabilities in how meeting content is stored and transmitted. The security risks are amplified when employees use personal accounts that bypass corporate security controls.

4. Shadow AI proliferation

The ease of adoption means these tools often bypass IT governance processes entirely. This creates significant blind spots in data management and security posture. IT and security teams cannot protect what they cannot see, and the distributed nature of these tools makes comprehensive discovery challenging.

5. Consent and ethical considerations

In many jurisdictions, recording conversations without explicit consent from all parties is illegal; California's Invasion of Privacy Act, for example, requires all-party consent. Yet AI notetakers frequently join meetings automatically through calendar integrations, potentially recording participants who haven't consented. This creates both legal and ethical exposure for organizations.

How to govern the use of AI notetakers and AI meeting assistants with Nudge Security

As with many emerging technologies, the answer isn't as simple as banning all AI notetakers outright. Like the enterprise customer described above, many organizations now consider themselves “AI-first companies,” encouraging their employees to do more with AI and automation.

Instead, IT, security, and GRC teams must thread the needle: managing the risks of GenAI tools without blocking the potential benefits of these services. This can be a challenge given the dynamic and decentralized nature of AI adoption. For reference, Nudge Security has observed 1,762% growth in AI use over the past two years.

1. Discover and assess AI use.

A critical first step to any AI security and governance program is to create and maintain an inventory of AI assets. On an ongoing basis, IT leaders must be able to answer questions such as:

  • What generative AI services do we use in our organization?
  • By whom? Which departments or employees use the most AI? Which apps are most popular?
  • Why? Are employees using AI tools to create marketing content, vibe code, or crunch financials?
  • How often? Which apps are being used regularly? Which ones have been abandoned?
  • How much? What is the cost to the business? Are we paying for redundant AI services?
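As an illustration of what such an inventory might capture (a minimal sketch with made-up fields, not Nudge Security's actual data model), the questions above map naturally onto a simple record per discovered account:

```python
from collections import Counter
from dataclasses import dataclass
from datetime import date

@dataclass
class AIAppAccount:
    """One discovered account with a generative AI service (illustrative fields)."""
    app: str             # e.g., "Fireflies"
    user_email: str
    department: str
    first_seen: date
    last_active: date
    monthly_cost: float  # 0.0 for freemium accounts

def summarize(inventory: list[AIAppAccount]) -> None:
    """Answer the basic who/what/how-often/how-much questions above."""
    print("Most popular apps:", Counter(a.app for a in inventory).most_common(3))
    print("Heaviest-adopting departments:",
          Counter(a.department for a in inventory).most_common(3))
    stale = [a for a in inventory if (date.today() - a.last_active).days > 180]
    print(f"Abandoned accounts (no activity in 6+ months): {len(stale)}")
    print("Total monthly spend:", sum(a.monthly_cost for a in inventory))
```

However the inventory is stored, the point is the same: each record ties an AI app to a person, a department, activity, and cost, so the questions above can be answered continuously rather than in one-off audits.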

⭐ Using Nudge Security's unique, perimeterless approach to AI discovery, our enterprise customer discovered the unapproved AI notetaker and analyzed its adoption patterns across the organization. This enabled them to segment users into meaningful cohorts—power users, abandoned accounts with no activity for six months, new users who may have signed up unknowingly, and employees who already had accounts with corporate-approved AI notetakers.

2. Review AI providers’ data privacy and security practices.

As part of your AI risk assessment, it's crucial to carefully evaluate each AI notetaker vendor's security practices. Look for transparency in how they process, store, and potentially use meeting data for their own model training. Key questions to consider include: Does the provider offer end-to-end encryption? What are their data retention policies? Do they have clear mechanisms for deleting customer data upon request? And most importantly, do they claim ownership of your meeting content or assert rights to use it for training their models?

Nudge Security provides a vendor security profile for every generative AI app provider in your organization to help jumpstart this risk assessment. Profiles include data such as:

  • Headquarters and data hosting locations
  • Size of company and LinkedIn profile
  • Risk insights, including data breach histories
  • Evidence of compliance certifications like SOC 2 Type II
  • Links to data privacy policies and other security info

3. Monitor programmatic access given to AI tools.

Many AI notetakers and meeting assistants request extensive OAuth permissions to access calendars, email, and other sensitive resources. Granting calendar access allows these tools to automatically join meetings, but it also gives them potentially unrestricted access to your employees' schedules, contacts, and meeting details. Nudge Security monitors these OAuth connections, providing visibility into which AI tools have been granted permissions, the specific scope of those permissions, and a prioritized risk score. Nudge Security also makes it possible to directly revoke this access from Google or Microsoft.
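Nudge Security handles this discovery and revocation through its own platform; purely to illustrate the underlying Google Workspace capability, here is a sketch using the Admin SDK Directory API. The service account file, admin address, and user address are assumptions, and domain-wide delegation must already be configured:

```python
# Requires: pip install google-api-python-client google-auth
# Assumes a service account with domain-wide delegation and the
# admin.directory.user.security scope; identifiers below are hypothetical.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.security"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject("admin@example.com")  # impersonate a Workspace admin

directory = build("admin", "directory_v1", credentials=creds)

user = "employee@example.com"
tokens = directory.tokens().list(userKey=user).execute().get("items", [])
for token in tokens:
    print(token["displayText"], token.get("scopes", []))
    # Revoke any third-party grant that includes full calendar access.
    if "https://www.googleapis.com/auth/calendar" in token.get("scopes", []):
        directory.tokens().delete(
            userKey=user, clientId=token["clientId"]
        ).execute()
```

Microsoft 365 exposes comparable controls for reviewing and removing app consents; the key point is that these grants are enumerable and revocable once you know to look for them.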

⭐ With this information, our enterprise customer was able to better assess the risk of the unapproved AI notetaker, pinpointing the specific user accounts that had granted calendar access.

4. Define a clear AI acceptable use policy.

Developing a comprehensive AI acceptable use policy is critical for any organization implementing AI governance. This policy should clearly articulate which AI tools are permitted, how they should be used, and what data protections must be in place. While many organizations now encourage AI adoption, they must balance innovation with appropriate guardrails to protect sensitive information.

Nudge Security offers a comprehensive playbook that guides IT and security leaders step by step through creating, implementing, and maintaining an AI AUP on an ongoing basis.

5. Provide real-time AI guardrails for your workforce.

Left in an employee handbook, AI policies collect dust. Instead, enforce your AI governance policies in real time by delivering just-in-time guardrails the moment employees sign up for new AI accounts.

Nudge Security automates AI guardrails based on your policies. Purpose-built nudges sent through the browser, in Slack, or by email guide employees toward safe, compliant AI use by:

  • Redirecting them on sign up toward approved AI alternatives to reduce AI sprawl.
  • Stopping them from using AI apps marked as “not permitted.”
  • Asking them to review and acknowledge your AI AUP before continuing use.
  • Requesting more info about their AI use, like what types of data they share with AI.
  • Reminding them to enable MFA on all accounts.
  • Prompting them to delete accounts and integrations they no longer use.
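To show how guardrails like the nudges above can be driven by policy rather than ad hoc decisions, here is a toy policy-as-code sketch (not Nudge Security's implementation; the app names and verdicts are invented for illustration):

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"              # approved app; proceed
    REDIRECT = "redirect"        # nudge toward an approved alternative
    BLOCK = "block"              # app designated "not permitted"
    ACKNOWLEDGE = "acknowledge"  # ask the user to review the AUP first

# A machine-readable slice of an AI acceptable use policy (illustrative).
AUP = {
    "approved": {"CorpNotetaker"},
    "not_permitted": {"RiskyNotetaker"},
    "alternatives": {"RiskyNotetaker": "CorpNotetaker"},
}

def evaluate_signup(app: str) -> tuple[Verdict, str]:
    """Decide which nudge to send when an employee signs up for an AI app."""
    if app in AUP["approved"]:
        return Verdict.ALLOW, "Approved app. Remember to enable MFA."
    if app in AUP["not_permitted"]:
        alt = AUP["alternatives"].get(app)
        if alt:
            return Verdict.REDIRECT, f"Please use the approved alternative: {alt}."
        return Verdict.BLOCK, "This app is not permitted under the AI AUP."
    return Verdict.ACKNOWLEDGE, "Unreviewed app. Please acknowledge the AI AUP."

print(evaluate_signup("RiskyNotetaker"))  # -> (Verdict.REDIRECT, ...)
```

Encoding the policy this way means the same rules can drive a browser nudge, a Slack message, or an email, and updating the policy updates the guardrails everywhere at once.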

⭐ With Nudge Security, our enterprise customer nudged the various cohorts of employees about their use of the unapproved AI notetaker, asking them to clarify their need or prompting them to use the company’s preferred AI notetaker instead. In doing so, they successfully eliminated the viral account growth they had observed in recent months and created a blueprint for the IT and security teams to remove other redundant and risky apps.

6. Monitor and report on AI use trends.

Establish an ongoing monitoring process to detect new AI services entering your environment and track usage patterns over time. Using Nudge Security, our customers can easily generate regular reports on:

  • AI adoption rates across departments.
  • Compliance with approved AI tool policies.
  • Potential data exposure risks from unapproved AI tools.
  • Cost optimization opportunities through consolidation of redundant AI services.

Custom alerts enable our customers to monitor unsanctioned AI use, including notifications whenever someone attempts to sign up for an AI notetaker that the organization has designated as prohibited.

Take control of AI notetakers in your organization

While AI notetakers offer incredible productivity benefits, their viral adoption can create significant security, privacy, and compliance risks. As these tools proliferate through individual sign-ups, organizations face growing blind spots in their security posture and data governance.

Nudge Security provides a comprehensive solution to discover AI notetakers across your organization, assess their risks, and implement guardrails that protect sensitive information without hindering innovation. Our platform empowers IT and security teams to:

  • Automatically discover all AI tools being used across your organization.
  • Identify high-risk OAuth connections and inappropriate data sharing.
  • Guide employees toward approved AI alternatives and acceptable use.
  • Monitor and manage AI usage with real-time nudges and guardrails.

Don't wait for an AI data breach to address these emerging risks. Start your free trial of Nudge Security today and gain complete visibility and control of your organization's AI landscape.

