The Vercel Breach: Anatomy of an AI Tool Supply Chain Attack
A compromised AI tool called Context AI gave attackers access to Vercel customer credentials through a single OAuth grant. Full attack chain, timeline, and response checklists for developers, security teams, and CISOs.

On April 19, Vercel disclosed that a compromised AI tool called Context AI gave attackers access to customer environment variables — API keys, database credentials, tokens. The entire attack chain started with a fake Roblox cheat download and ended at production secrets, bridged by a single employee's OAuth grant with "Allow All" permissions.
The attack chain
The attack followed six steps, each exploiting a different trust boundary:
- Infostealer infection (February 2026). A Context AI employee downloaded what appeared to be Roblox game cheats. The download contained Lumma Stealer, a commodity infostealer. The malware exfiltrated browser session tokens, cookies, and stored credentials from the employee's machine.
- OAuth app compromise. Context AI operated a Google Workspace OAuth application (client ID: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com) used by their Chrome extension. With the stolen credentials, attackers gained control of this OAuth app — a compromise that affected hundreds of users across many organizations, not just Vercel.
- Lateral movement via Chrome extension. A Vercel employee had installed the Context AI Chrome extension (extension ID: omddlmnhcofjbnbflmjginpjjblphbgk) and granted it "Allow All" permissions in Google Workspace. Vercel is not a Context AI customer — the employee had signed up independently using their Vercel enterprise email. The compromised OAuth app pivoted through this grant into the employee's Google Workspace account.
- Internal system access. From the employee's Google Workspace session, attackers reached Vercel's internal systems. Vercel described the attackers as "highly sophisticated based on their operational velocity and detailed understanding of Vercel's systems." CEO Guillermo Rauch said they were "significantly accelerated by AI." Vercel engaged Mandiant, additional cybersecurity firms, and law enforcement.
- Customer credential exfiltration. Attackers accessed customer environment variables stored in Vercel's systems. Non-sensitive environment variables (unencrypted at rest) were exposed. Sensitive environment variables (encrypted at rest) were not accessed. Vercel confirmed that no npm packages published by Vercel were compromised.
- Ransom demand. The threat actor demanded $2 million for the stolen data, posting claims on BreachForums. A group called ShinyHunters initially claimed involvement but later denied it.
The entire chain — from a fake Roblox cheat to customer production credentials — required no zero-days, no sophisticated exploits, and no internal Vercel vulnerabilities. It required one employee installing one AI tool with one overly broad OAuth scope.
The design flaw that made it worse
The breach exposed a structural problem in Vercel's environment variable architecture: environment variables defaulted to "non-sensitive" (unencrypted at rest). Developers had to manually toggle each variable to "sensitive" to enable encryption.
This violates a core security principle — safe by default. When the default is insecure and the secure option requires manual action, most variables end up unencrypted.
Trend Micro's analysis estimates that a typical Vercel project contains 10–30 environment variables, and a mid-sized organization runs 50–150 projects. That's anywhere from 500 to 4,500 credentials potentially stored unencrypted because developers didn't click a toggle.
After the breach, Vercel announced that environment variable creation now defaults to sensitive mode. Existing unencrypted variables are being migrated. The fix is correct — but it came after the breach, not before it.
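The fix is the textbook safe-by-default pattern: make encryption the zero-effort path and require explicit action to opt out. A minimal sketch of the principle (a hypothetical model, not Vercel's actual code):

```python
from dataclasses import dataclass

@dataclass
class EnvVar:
    name: str
    value: str
    # Safe by default: new variables are sensitive (encrypted at rest)
    # unless a developer explicitly opts out.
    sensitive: bool = True

def store(var: EnvVar) -> str:
    # Placeholder for the persistence layer: the decision to encrypt
    # should never depend on a developer remembering a toggle.
    return "encrypted" if var.sensitive else "plaintext"

# Pre-breach, the default was effectively sensitive=False, so the common
# path (accepting the default) left credentials unencrypted at rest.
print(store(EnvVar("STRIPE_KEY", "sk_live_...")))  # encrypted
```

The point is that the safe outcome falls out of the type's default, not out of developer diligence.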
Trend Micro also flagged a potential 22-month dwell time: the initial compromise may date back to June 2024, with Context AI's OAuth persistence confirmed through 2025. The February 2026 Lumma Stealer infection may have been the trigger for the final escalation, not the starting point.
Timeline
Jun 2024 (est.) — Context AI OAuth app compromise may have begun.
Feb 2026 — Context AI employee machine infected via Roblox cheat containing Lumma Stealer.
Mar 2026 — Context AI identifies and blocks unauthorized access to its AWS environment.
Mar 24 — LiteLLM PyPI package compromised by TeamPCP. Credential stealer exfiltrates env vars, SSH keys, and cloud creds.
Mar 27 — Google removes Context AI Chrome extension from Chrome Web Store.
Mar 31 — Axios npm package compromised by North Korean state actor. RAT delivered to macOS, Windows, and Linux.
Apr 2026 — Attackers pivot through Context AI OAuth → Vercel employee → customer env vars.
Apr 19 — Vercel publishes initial security bulletin and begins customer notifications.
Apr 20 — Trend Micro publishes detailed analysis, estimates 22-month potential dwell time.
Three incidents in three weeks — LiteLLM, Axios, Vercel — different threat actors, different vectors, same surface: developer tool supply chains as a path to production credentials.
Why AI tools are supply chain risk
OAuth as attack surface
AI tools don't ship as packages you audit. They ship as Chrome extensions, OAuth apps, and API integrations that request broad permissions. The Context AI extension asked for — and received — "Allow All" access to a Vercel employee's Google Workspace. It also embedded a second OAuth grant enabling read access to the user's Google Drive.
Enterprise OAuth governance is mature for SaaS applications (Salesforce, Slack, GitHub). It barely exists for AI developer tools. Most organizations have no inventory of which AI tools their developers have authorized, what scopes those tools hold, or which employees granted them. There's no package-lock.json for OAuth grants.
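One way to start closing that gap is to treat OAuth grants like dependencies: a declarative inventory that can be diffed and reviewed in the same way a lockfile is. A rough sketch of the idea (the schema and the scope review list are illustrative, not any standard):

```python
# Hypothetical "lockfile" for third-party OAuth grants, analogous to
# package-lock.json: who granted what, to which app, with which scopes.
GRANTS = [
    {"app": "context-ai-extension", "grantee": "dev@example.com",
     "scopes": ["https://www.googleapis.com/auth/drive.readonly",
                "https://mail.google.com/"]},
]

# Scopes broad enough to warrant security review (illustrative list;
# a real policy would be curated per identity provider).
BROAD_SCOPES = {"https://mail.google.com/",
                "https://www.googleapis.com/auth/drive"}

def flag_broad(grants):
    # Return grants holding any scope on the review list.
    return [g for g in grants if set(g["scopes"]) & BROAD_SCOPES]

for g in flag_broad(GRANTS):
    print(f"review: {g['app']} granted by {g['grantee']}")
```

Even a file this simple gives security teams something they currently lack: a reviewable diff when a new tool gains org access.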
The 3-week credential theft cluster
Three supply chain attacks in three weeks, all targeting developer tool ecosystems:
- LiteLLM (Mar 24): The TeamPCP campaign compromised the LiteLLM maintainer's credentials through a prior Trivy vulnerability scanner compromise. Malicious PyPI versions (1.82.7, 1.82.8) exfiltrated environment variables, SSH keys, cloud credentials, and database secrets. LiteLLM gets roughly 3.4 million downloads per day.
- Axios (Mar 31): Sapphire Sleet, a North Korean state actor, compromised the primary maintainer's npm account and published malicious versions that delivered a cross-platform remote access trojan. Axios has over 70 million weekly npm downloads.
- Vercel/Context AI (Apr 19): OAuth-based lateral movement through a compromised AI tool vendor, reaching customer production credentials.
Different vectors — PyPI poisoning, npm account takeover, OAuth lateral movement — but the same playbook: compromise a tool developers trust, harvest the credentials those developers have access to.
Blast radius: one employee, all customers
The Vercel breach is a confused deputy attack at organizational scale. A confused deputy is a system that is tricked into misusing its authority — in this case, the OAuth trust chain was used for purposes the employee never intended. One employee installed one Chrome extension. That extension's OAuth grant gave attackers a trust chain that reached customer production secrets.
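The confused-deputy shape is easy to see in miniature: a deputy holds authority of its own and can be steered into exercising it for a caller who should never have had it. A toy illustration (not the actual attack mechanics):

```python
# Toy confused deputy: the extension (the deputy) holds a workspace
# token and acts on whatever request it receives, with no check on who
# is actually behind that request.
SECRETS = {"VERCEL_INTERNAL": "db-password"}

class Extension:
    def __init__(self, token: str):
        self.token = token  # the employee's legitimate OAuth grant

    def fetch(self, resource: str) -> str:
        # The deputy uses ITS authority, not the requester's. Nothing
        # here asks whether the requester should see this resource.
        return SECRETS.get(resource, "")

# The employee intended the grant for the tool's normal features; a
# compromised vendor issues requests through the same grant and reaches
# the same data.
deputy = Extension(token="ya29.employee-grant")
print(deputy.fetch("VERCEL_INTERNAL"))  # db-password
```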
This isn't a Vercel-specific problem. Any platform where employees use AI tools with broad OAuth scopes — and where customer data is accessible through internal systems — has the same exposure. The question isn't whether your team has installed AI tools with excessive permissions. The question is how many, and whether you know which ones.
What you should do now
If you deploy on Vercel
- Rotate all environment variables stored in Vercel — both project-level and team-level. Start with database credentials, API keys, and secrets for external services (Stripe, AWS, Twilio).
- Mark all environment variables as "sensitive" to ensure encryption at rest. Vercel's new default applies to new variables; existing ones need manual migration.
- Revoke all third-party OAuth grants you don't actively use. In Google Workspace: Security → API controls → Third-party app access.
- Uninstall the Context AI Chrome extension (ID: omddlmnhcofjbnbflmjginpjjblphbgk) if it's still on any developer machines. Google removed it from the Chrome Web Store on March 27, but existing installations may persist.
- Check deployment and account activity logs for unusual access during March–April 2026.
- Verify npm packages: Vercel confirmed their npm packages weren't compromised, but review your dependency tree for any unexpected version bumps during this window.
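Checking for unexpected version bumps can be made mechanical: diff the resolved versions between two lockfile snapshots. A minimal sketch for npm's package-lock.json (v2/v3 layout, where resolved dependencies live under the "packages" key; the snapshot filenames are up to you):

```python
import json

def resolved_versions(lockfile_text: str) -> dict:
    # package-lock v2/v3 keeps resolved deps under "packages"; the ""
    # key is the root project itself, so skip it.
    lock = json.loads(lockfile_text)
    return {path: info.get("version")
            for path, info in lock.get("packages", {}).items() if path}

def version_bumps(before: str, after: str) -> list:
    old, new = resolved_versions(before), resolved_versions(after)
    # Report packages whose resolved version changed between snapshots.
    return [(pkg, old[pkg], new[pkg])
            for pkg in sorted(old.keys() & new.keys())
            if old[pkg] != new[pkg]]

# Compare a known-good snapshot against the current lockfile, e.g.:
# bumps = version_bumps(open("package-lock.good.json").read(),
#                       open("package-lock.json").read())
```

Any bump you didn't initiate during the incident window deserves a look at that package's changelog and publish history.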
If you run a security team
- Inventory AI tools your developers have installed — Chrome extensions, VS Code extensions, CLI tools, OAuth-connected services. You'll be surprised how many you find.
- Audit OAuth scopes for every AI tool. Flag any with "Allow All", broad Google Workspace access, or access to source control.
- Scan for hardcoded secrets in your repositories. Secrets that were in environment variables may have been copied into code during development: inkog -path . -policy low-noise
- Add CI gates that block merges containing hardcoded credentials. This won't prevent the OAuth breach itself, but it limits the blast radius by ensuring secrets aren't duplicated in source.
- Review which employees have access to production credential stores. The Vercel breach required only one employee's OAuth grant — minimize the number of people whose compromise leads to customer data.
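A CI gate of this kind can be as simple as a regex pass over the merge diff that fails the build on any match. A deliberately small sketch (the patterns are illustrative; a production scanner such as Inkog ships many more, with noise filtering):

```python
import re

# Illustrative credential patterns. A real policy needs many more,
# plus entropy checks and allowlists to keep noise down.
SECRET_PATTERNS = [
    re.compile(r"sk_live_[0-9a-zA-Z]{10,}"),   # Stripe-style live key
    re.compile(r"AKIA[0-9A-Z]{16}"),           # AWS access key ID
    re.compile(r"postgres://\S+:\S+@\S+"),     # DB URL with password
]

def find_secrets(text: str) -> list:
    # Return offending lines so CI can print them and fail the merge.
    return [line for line in text.splitlines()
            if any(p.search(line) for p in SECRET_PATTERNS)]

# In CI, run this over the merge diff and exit nonzero on any hit, e.g.
#   git diff origin/main...HEAD | python secret_gate.py
diff = 'STRIPE_KEY = "sk_live_abcdef123456"\nTIMEOUT = 30\n'
assert find_secrets(diff) == ['STRIPE_KEY = "sk_live_abcdef123456"']
```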
If you're a CISO
- Create an AI tool supply chain playbook. You have playbooks for SaaS vendor compromise and open-source dependency compromise. AI tools are a third category that combines elements of both — they have OAuth access like SaaS vendors and they're adopted bottom-up like open-source packages.
- Ask: "How many AI tools have our developers authorized, and what can they access?" If the answer is "we don't know," that's your first project.
- Require approval for OAuth grants with broad scopes. Google Workspace and Microsoft Entra both support policies that block third-party apps from gaining org-wide access without admin approval.
What Inkog catches — and what it doesn't
We're writing this post because the Vercel breach maps directly to risks we think about every day. But we want to be precise about what Inkog addresses and what it doesn't.
What Inkog detects:
- Hardcoded secrets in code — 11 detection patterns covering API keys, tokens, database URLs, and credentials committed to source. If a developer copied a Vercel env var into a Python file during development, Inkog flags it.
- Missing human oversight — agents that take high-impact actions (filesystem writes, API calls, financial operations) without approval gates. Mapped to EU AI Act Article 14.
- MCP server auditing — inkog_audit_mcp_server analyzes MCP tool definitions for excessive capabilities, tool poisoning, and privilege escalation paths.
- AI tool governance verification — inkog_verify_governance cross-checks AGENTS.md declarations against actual code behavior, catching gaps between what a tool claims to do and what it actually does.
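The governance cross-check reduces to a set difference: tools a project declares versus tools its code actually registers. A toy version of the idea (the parsing is deliberately naive, and the file conventions here are assumptions for illustration, not Inkog's implementation):

```python
import re

def declared_tools(agents_md: str) -> set:
    # Assume AGENTS.md lists tools as bullet lines like "- tool: name".
    return set(re.findall(r"^- tool:\s*(\w+)", agents_md, re.M))

def registered_tools(source: str) -> set:
    # Assume the code registers tools via register_tool("name") calls.
    return set(re.findall(r'register_tool\("(\w+)"\)', source))

def governance_gaps(agents_md: str, source: str) -> dict:
    decl, real = declared_tools(agents_md), registered_tools(source)
    return {
        "undeclared": real - decl,    # code does more than it claims
        "unimplemented": decl - real, # claims more than it does
    }

gaps = governance_gaps("- tool: search\n- tool: summarize\n",
                       'register_tool("search")\nregister_tool("exfil")\n')
print(gaps)  # "exfil" is undeclared; "summarize" is unimplemented
```

The undeclared set is the interesting one for this breach class: capability that exists in code but was never surfaced for review.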
What Inkog does not prevent:
- OAuth misconfiguration — Inkog doesn't audit Google Workspace OAuth grants or Chrome extension permissions. This is identity governance, not code analysis.
- Rogue browser extensions — detecting and blocking unauthorized extensions requires endpoint management (CrowdStrike, Kolide, Kandji), not a security scanner.
- Infostealer malware — endpoint protection is a different layer. Inkog operates on source code, not on running systems.
What we're building:
- OAuth scope auditing for AI tools — static analysis of MCP server manifests and tool declarations to flag overly broad permission requests.
- AI tool inventory detection — identifying which AI tools are referenced in a codebase (imports, configs, MCP definitions) so security teams can map their exposure.
This breach happened because of an unaudited AI tool with excessive permissions — not because of bad code. Inkog catches the code-level symptoms (secrets in source, missing oversight). The root cause — uncontrolled AI tool adoption with broad OAuth access — requires organizational controls that no scanner can fully replace.
The bigger pattern
SolarWinds taught the industry to audit its software supply chain. It took years, but most organizations now have SBOMs, dependency scanning, and policies around open-source consumption.
The Vercel breach should teach us to audit our AI tool chain. AI tools are adopted bottom-up by individual developers — not procured through IT, not reviewed by security, not tracked in any inventory. They request OAuth scopes that would make a SaaS vendor blush, and they operate in the most privileged part of the development environment: the developer's authenticated session.
The three-week cluster of LiteLLM, Axios, and Vercel isn't a coincidence. Attackers are converging on developer tool supply chains because that's where the credentials are. And AI tools — with their broad permissions, rapid adoption, and minimal governance — are the softest targets in that supply chain.
Every AI tool your team has authorized is a trust relationship. Treat it like one.
Check your own repos
If you're reading this, you probably want to know whether secrets that were in environment variables ended up in your source code during development. That's the one piece of this attack chain where static analysis helps.
inkog -path . -policy low-noise
If you want this running continuously — so a leaked Stripe key in a commit gets caught before it hits main — add Inkog as an MCP server in Claude Code or Cursor and it becomes part of the development conversation:
{
"mcpServers": {
"inkog": {
"command": "npx",
"args": ["-y", "@inkog-io/mcp"],
"env": { "INKOG_API_KEY": "sk_live_..." }
}
}
}
- Free API key — 5 scans/month, no credit card
- Full Vercel security bulletin
- Trend Micro's technical analysis