A Vercel employee installed an AI tool called Context.ai. It asked for Google Workspace access. They clicked Allow.
That grant became the door.
Not a zero-day. Not a phishing email. Not a nation-state attack on Vercel's infrastructure. An attacker compromised Context.ai, used the OAuth access it held on behalf of Vercel employees to gain delegated access to their Google Workspace accounts, and from there reached Vercel's internal systems.
Vercel confirmed this on April 19. By April 20, Mandiant was involved. Vercel describes the attacker as "highly sophisticated based on their operational velocity and detailed understanding of Vercel's systems." Vercel advised all customers to rotate their environment variables as a precaution.
The breach did not start at Vercel. It started at Context.ai.
The OAuth grant nobody revokes
Here is how this works in practice.
An engineer discovers a useful AI tool. It promises to analyze LLM usage, improve prompts, or surface insights from their codebase. The tool needs access to do its job. It asks for Google Workspace OAuth. The engineer clicks Allow.
The grant persists. The scopes are broad. Nobody looks at the list of connected apps until something goes wrong.
This is not a Vercel problem. It is the default state of every engineering team that has adopted AI tooling in the last two years. Meeting notetakers, code review assistants, prompt analyzers, deployment monitors. Each one asks for OAuth. Each one gets it. The security review comes later, if it comes at all.
When the AI vendor gets breached, every customer OAuth grant is a live door.
Vercel published the OAuth app identifier so other organizations can check their Google Workspace logs:
110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com
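If you export your Workspace token audit events as JSON (via the admin console or the Reports API), a short script can flag every user who granted access to that client ID. The record shape below (`actor`, `client_id` keys) is an assumption; adapt the field names to your actual export.

```python
# OAuth client ID Vercel published for Context.ai's app.
CONTEXT_AI_CLIENT_ID = (
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"
)

def grants_for_client(events, client_id=CONTEXT_AI_CLIENT_ID):
    """Return users in a token-audit export who granted access to client_id.

    The record shape here is assumed; match it to your export format.
    """
    return sorted({e["actor"] for e in events if e.get("client_id") == client_id})

# Hand-written example records; in practice, load your exported audit log
# (e.g. json.load(open("token_events.json"))) and pass it in.
sample = [
    {"actor": "alice@example.com", "client_id": CONTEXT_AI_CLIENT_ID},
    {"actor": "bob@example.com", "client_id": "some-other-app"},
]
print(grants_for_client(sample))
```

Any user this surfaces should be treated as having had their Workspace account exposed through the compromised grant.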
Context.ai's app potentially affected "hundreds of users across many organizations." Vercel was one of many. The others have not published bulletins yet.
What "sensitive" means, and what it does not
Vercel's bulletin makes a distinction worth understanding.
Environment variables marked as "sensitive" are stored encrypted and cannot be read back. Those were not accessed. The variables that were exposed were stored in plaintext: the ones most teams use for most things, because marking a variable as sensitive requires an extra click.
Vercel shipped a fix on April 20: env var creation now defaults to sensitive: on. That is the right call. You can review and rotate your environment variables in your project settings in the Vercel dashboard.
But it does not address the root cause. The root cause is not that env vars were stored in plaintext. It is that an AI tool held persistent, broad OAuth access to Google Workspace accounts, and nobody was watching it.
Rotating credentials is the right response to a breach. It is not a defense against the next one.
The pattern is not new. The surface is.
Security researchers have cataloged this pattern before. The 2023 CircleCI breach started with a compromised employee laptop holding a session token. The 2022 Okta breach started with a compromised support engineer at a third-party vendor. The pattern is the same: attacker compromises a vendor, vendor has access to the target, attacker pivots.
What is new is the surface area.
Two years ago, the average engineering team had a handful of third-party integrations with OAuth access to their systems. GitHub, Slack, maybe a CI provider. Each integration was reviewed, at least nominally, before being approved.
Now the average engineering team has a dozen AI tools, each with OAuth access to something. The review process has not kept pace with the adoption rate. The tools are adopted by individual engineers, not by security teams. The scopes are broad because the tools need broad access to work. And the vendors are small companies, often with limited security resources of their own.
The Vercel breach is the first major public incident where an AI tool was the OAuth vector. It will not be the last.
What the fix actually looks like
The standard advice after a breach is: rotate your credentials, enable MFA, review your connected apps. That advice is correct. Do it.
But the structural fix is different.
The problem with OAuth grants is that they are persistent and broad. An AI tool gets access once and keeps it. The scope covers more than the tool needs for any single task. Nobody watches the grant after it is issued.
The fix is credentials scoped to a task that expire when the task ends. Not a vault. Not a policy document. Not a quarterly access review. A token created for one operation, covering only what that operation needs, that cannot be used after the operation completes.
This is not a new idea. It is how AWS STS temporary credentials work. It is how GitHub Actions OIDC tokens work: a short-lived token scoped to one workflow run, gone when the run ends. It is not how AI tools work today. AI tools get persistent OAuth grants because that is the path of least resistance. The user clicks Allow once and the tool works forever.
For agentic workflows, the model is clean: one task, one token, token dies when the task ends. For ambient tools like Context.ai, which monitor continuously rather than run discrete tasks, the minimum viable fix is narrower scopes and shorter expiry windows. Not perfect, but a structural improvement over a grant that never expires.
The answer is not more friction for users. It is a different credential model: one where the tool requests a scoped, time-bound token for each task, and the token dies when the task ends.
This is what I have been building toward with agentic-authz. The Vercel breach is the proof that the problem is real.
What to do right now
Start with the time-sensitive items:
- Rotate every environment variable not marked sensitive in Vercel. Treat them all as exposed.
- Open your Google Workspace admin console. Revoke any OAuth app you do not recognize or no longer use.
- Check your GitHub installed apps. Same exercise.
- Enable MFA on your Vercel account if you have not.
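To see which variables need rotating, you can pull a project's env vars from Vercel's REST API and flag anything not stored as sensitive. The endpoint path and `type` values mentioned below are based on Vercel's public API but should be verified against the current docs; the record shape is an assumption.

```python
def needs_rotation(env_vars):
    """Flag env vars whose stored type is not "sensitive".

    `env_vars` is assumed to be the "envs" list from Vercel's
    GET /v9/projects/{id}/env response, where each record carries
    "key" and "type" fields; treat the exact shape as an assumption.
    """
    return [v["key"] for v in env_vars if v.get("type") != "sensitive"]

# Example records; in practice, fetch them with an authenticated request
# to the Vercel API and pass the "envs" list here.
sample = [
    {"key": "DATABASE_URL", "type": "plain"},
    {"key": "STRIPE_SECRET", "type": "sensitive"},
]
print(needs_rotation(sample))
```

Per Vercel's guidance, treat everything this returns as exposed and rotate it.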
Then the standing hygiene:
- List every AI tool with OAuth access to your Google Workspace, GitHub, Slack, or any internal system.
- For each one: what scopes does it have? Does it need all of them? When was the grant last reviewed?
- Revoke the ones you are not actively using.
- For the ones you keep: set a reminder to review them quarterly.
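The standing review can be automated over an inventory you maintain. A sketch, assuming a record format of your own design (app name, scope list, last review date), that flags grants which are overly broad or overdue:

```python
from datetime import date, timedelta

def flag_grants(grants, today, max_scopes=3, review_days=90):
    """Flag OAuth grants that are overly broad or overdue for review.

    `grants` is an assumed inventory format; tune max_scopes and
    review_days to your own policy.
    """
    flagged = []
    for g in grants:
        reasons = []
        if len(g["scopes"]) > max_scopes:
            reasons.append("broad scopes")
        if today - g["last_reviewed"] > timedelta(days=review_days):
            reasons.append("review overdue")
        if reasons:
            flagged.append((g["app"], reasons))
    return flagged

inventory = [
    {"app": "notetaker", "scopes": ["calendar", "drive", "gmail", "contacts"],
     "last_reviewed": date(2024, 1, 5)},
    {"app": "ci-bot", "scopes": ["repo"], "last_reviewed": date(2024, 4, 1)},
]
for app, reasons in flag_grants(inventory, today=date(2024, 4, 25)):
    print(app, reasons)
```

Run it from CI on a schedule and the quarterly review stops depending on someone remembering.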
Every company that has given AI tools OAuth access to anything is in the same position as Vercel was. The difference is that Vercel's vendor got breached first.
The thing nobody is saying
The Vercel breach will be written up as a supply chain attack. That framing is accurate but incomplete.
Supply chain attacks target build systems, package registries, and deployment pipelines. The defense is code signing, reproducible builds, and dependency auditing. Those defenses exist and work.
This attack targeted something different: the OAuth grants that AI tools accumulate as a side effect of being useful. There is no code signing for an OAuth grant. There is no reproducible build for a Google Workspace access token. The defense has to be structural: credentials that cannot outlive the task they were created for.
The AI tool ecosystem is two years old. Its security model is still borrowed from SaaS: click Allow, forget about it, rotate when breached. That model does not hold when the tools have access to production systems and the vendors are small companies with limited security resources.
The Vercel breach is the first public proof. The question is whether the industry treats it as a one-off or as a signal that the credential model for AI tools needs to change.
It is a signal. The change is coming. The only question is whether it comes before or after the next breach.
I build agentic-authz, an open-source authorization layer for AI agents. The Vercel breach is the kind of incident it is designed to prevent. If you are thinking about this problem, reach out: @Siddhant_K_code