Dozens of demonstrators took to the streets of San Francisco on Saturday, March 21, 2026, marching to three of the world’s most powerful AI company headquarters. Organised by the group Stop the AI Race, the protest began at Anthropic’s offices on Howard Street, moved to OpenAI’s headquarters on 3rd Street, and concluded at xAI on 18th Street — before wrapping up with a community gathering at Dolores Park.
The event was peaceful, described by organisers as a nonviolent assembly open to citizens, researchers, and even employees of the targeted companies.
TL;DR
- Protesters in the US are asking for a pause in advanced AI development
- Main concern: AI is growing too fast without proper safety checks
- Around 100–200 people protested in San Francisco
- The debate is now global: innovation vs safety
Here’s a complete breakdown of what happened, why it matters, and what comes next.
What Are the “Stop the AI Race” Protests?
In March 2026, a group of activists gathered in San Francisco to protest against rapid AI development.
They marched between offices of major AI companies like:
- OpenAI
- Anthropic
- xAI
Their message was simple:
👉 “Slow down AI before it’s too late.”
This is not just a one-time protest. It’s part of a larger global movement questioning how fast AI is advancing.
The One Demand
The movement has a single, laser-focused ask:
“Every major AI lab CEO must publicly commit to pausing frontier AI development — if every other lab in the world does the same.”
This is a conditional global pause, not a unilateral one. The logic: no CEO risks falling behind, because every lab pauses at once; they only need to agree together. The group argues this is already a step several leaders have privately signalled openness to — making the ask about public commitment, not a secret change of heart.
👉 “Conditional Pause” on AI Development
This means:
- Temporarily stop building more powerful AI systems
- But only if all major companies agree together
Why?
Because if only one company pauses, others will move ahead and dominate.
Who Is Behind This?
Stop the AI Race is led by Michaël Trazzi, a filmmaker and former AI safety researcher. This isn’t his first protest — Trazzi previously led a 3-week hunger strike outside Google DeepMind’s London headquarters in September 2025.
Notable voices supporting or speaking at the March 21 march include:
- Dr. David Krueger — AI professor at the University of Montreal, co-author of research alongside AI pioneers Yoshua Bengio and Geoffrey Hinton
- Nate Soares — President of the Machine Intelligence Research Institute (MIRI), co-author of the NYT Bestseller If Anyone Builds It, Everyone Dies
- Will Fithian — Professor of Statistics at UC Berkeley
- Supporting groups: PauseAI, StopAI, QuitGPT, and Evitable
Why Now?
The Trigger: Anthropic’s Retreating Safety Pledge
The protest’s timing was not coincidental. In February 2026, Anthropic quietly dropped the central commitment of its Responsible Scaling Policy (RSP) — a pledge it had maintained since 2023.
The original RSP committed Anthropic to never train AI models more powerful than its current ones unless it could guarantee safety measures were adequate in advance. It was a hard stop — and it was unprecedented in the industry.
On February 24, 2026, Anthropic released RSP v3.0, which replaced that binding commitment with non-binding, publicly declared targets. The company’s justification: unilateral pauses don’t work if competitors keep racing.
Anthropic’s Chief Science Officer, Jared Kaplan, told TIME that the company felt “it wouldn’t actually help anyone” to stop training models while competitors advanced.
Chris Painter of METR, a nonprofit that evaluates AI risk, independently reviewed the new policy and warned: “Society is not prepared for the potential catastrophic risks posed by AI.”
The context makes it starker: in February 2026, Anthropic also raised $30 billion in new investment at a valuation of approximately $380 billion, with annualised revenue growing 10x year-on-year.
OpenAI: The Defense Deal and the “QuitGPT” Movement
Around 12:30 PM, the march continued to OpenAI. This site carried extra tension following the “QuitGPT” demonstration held outside the same office on March 3 — the current protests directly amplify those demands.
The central source of anger is OpenAI’s recent, historic partnership with the U.S. Department of Defense (Pentagon). Protesters see this as a betrayal of OpenAI’s founding principles, which prioritised global, beneficial AGI over national military applications. Activists cited the danger of OpenAI’s models being integrated into autonomous drone fleets or military surveillance systems.
The “360-Degree” Context: Drivers and Risks
This surge in activism is fuelled by a perfect storm of factors, moving the conversation from theoretical “existential risk” to urgent, present-day concerns.
What Is ‘Frontier AI’ and Why Pause It?
Frontier AI refers to the most advanced AI models being developed — systems that can not only perform complex tasks but potentially automate AI research itself, accelerating their own improvement without human direction.
Protesters and researchers argue that once AI reaches this threshold of self-improvement, human oversight becomes exponentially harder. The risk isn’t today’s chatbots — it’s what comes after.
Stop the AI Race’s specific technical proposal, developed with MIRI:
- No new training runs of larger or more general frontier models
- Teams working on capability advancement would shift to narrow AI applications or alignment research
- Current models remain available — this is not a call to shut down existing AI
- Operationalised through compute thresholds (FLOP caps) and independent verification, including AI chip tracking
- China would need to be part of any agreement — the group explicitly acknowledges a unilateral Western pause does not solve the problem
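The FLOP-cap idea above can be made concrete with a rough calculation. The sketch below is purely illustrative (not part of the group’s proposal): it estimates a training run’s total compute with the widely used 6·N·D approximation (~6 floating-point operations per parameter per training token) and compares it against a hypothetical regulatory cap of 1e26 FLOP, the reporting threshold set by the 2023 US executive order on AI.

```python
# Illustrative sketch only: how a compute threshold (FLOP cap) could be
# checked for a planned training run. The 6*N*D rule of thumb and the
# 1e26 cap are assumptions for the example, not the movement's numbers.

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6*N*D heuristic."""
    return 6 * params * tokens

def exceeds_cap(params: float, tokens: float, cap_flops: float) -> bool:
    """Would this run cross the compute threshold?"""
    return training_flops(params, tokens) > cap_flops

CAP = 1e26  # hypothetical frontier threshold, in FLOP

# Example: a 70B-parameter model trained on 15 trillion tokens
run = training_flops(70e9, 15e12)  # 6 * 7e10 * 1.5e13 = 6.3e24 FLOP
print(f"{run:.1e} FLOP — exceeds cap: {exceeds_cap(70e9, 15e12, CAP)}")
```

A check like this is only the easy half of the proposal; the harder half — independent verification via chip tracking — is about proving labs report `params` and `tokens` honestly.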
What Did the CEOs Say Before All This?
The movement’s credibility rests partly on prior statements from the CEOs they’re now targeting:
- Dario Amodei (Anthropic): At Davos in January 2026, said he would meet with Google DeepMind’s Demis Hassabis “right now” and agree to pause if it were just the two of them in the race.
- Demis Hassabis (Google DeepMind): At Davos, said he would be “open” to a conditional pause — but cited international coordination as the key bottleneck. (This was Stop the AI Race’s demand from their earlier September 2025 protest at DeepMind’s London HQ.)
- OpenAI’s charter: Already includes a clause committing to stop competing if another lab is closer to achieving AGI. However, as OpenAI restructures into a for-profit corporation, these commitments are reportedly being weakened.
The Bigger Picture: A Growing Global Movement
The San Francisco protest is part of a pattern, not an isolated event:
- February 28, 2026 — A few hundred protesters marched through London’s King’s Cross tech hub (home to OpenAI, Meta, and Google DeepMind UK offices), organised by PauseAI and Pull the Plug. Billed at the time as the largest anti-AI protest to date.
- March 3, 2026 — The QuitGPT protest drew over 75 people outside OpenAI’s San Francisco headquarters — the largest anti-OpenAI protest on record.
- September 2025 — Stop the AI Race’s hunger strike outside Google DeepMind in London gained international media attention.
The movement is also gaining traction in policy circles. The March 21 protest followed the White House’s release of a national AI legislative framework — though the Trump administration’s approach has been permissive rather than restrictive: it has moved to nullify state-level AI safety regulations, and no federal AI law is on the horizon.
Public Opinion: Are People Listening?
70% of respondents in one study believe AI should be regulated. 51% would support a temporary pause on some types of AI development. (Source: Analyticsdrive.tech, citing multiple surveys)
80% of U.S. adults believe the government should maintain rules for AI safety and data security, even if it slows development. (Source: Gallup / Special Competitive Studies Project)
The sentiment for caution is real — but translating it into political or corporate action remains the movement’s central challenge.
Will It Work?
Most protesters are clear-eyed about corporate responsiveness. Maxime Fournes, global head of PauseAI, told MIT Technology Review at the London march: “I don’t think the pressure on companies will ever work. They are optimised to not care about this problem.”
His alternative strategy: make it harder to race by advocating for whistleblower protections, demonstrating that working in AI development carries social and ethical costs, and drying up the talent pipeline.
Stop the AI Race takes a slightly different view. The group argues that public commitments from Western lab CEOs create the conditions for international coordination — and that Demis Hassabis’s Davos statement is proof that pressure produces results.