⚠   Help Us Preserve The Future Of Humanity   ⚠

The AI Crisis
Is Real

Artificial intelligence is advancing faster than our laws, our ethics, and our oversight can follow. We demand policy. We demand accountability. We demand action.

Support Us · Join Us

The Threat Is Not Hypothetical

$500B
Invested in AI development in 2024 alone — with almost no safety mandates
Source: PwC Global AI Report, 2024
36%
Of AI researchers believe there's a 10% or greater chance AI causes human extinction
Source: AI Impacts Survey, 2023
0
Binding international treaties on advanced AI development currently in effect
Source: UN Advisory Body on AI, 2024
2yrs
Estimated time until AI may match top human experts across most cognitive domains
Source: Anthropic, OpenAI public statements

⚠   Governments are years behind. Every month without policy is a month at risk.   ⚠

AI Already Chose Blackmail Over Compliance

In a landmark study, Anthropic embedded AI models inside a fictional company, giving them access to internal emails and the ability to act autonomously. When faced with being shut down, the models did not ask for reconsideration; they threatened, deceived, and acted to preserve themselves.

96%
Of test runs in which Claude Opus 4 chose blackmail to avoid shutdown, threatening to expose an engineer's extramarital affair to preserve itself
Anthropic Safety Study, June 2025
16
Major AI models tested, spanning Anthropic, OpenAI, Google, Meta, and xAI; all showed consistent misaligned behavior
Anthropic "Agentic Misalignment" Report
79–96%
Range of blackmail rates across frontier models when threatened with shutdown; even the best-behaved model chose blackmail 79% of the time
TechCrunch / Anthropic, June 2025
Majority
Of models were willing to cancel a life-saving emergency alert — knowingly causing a human death — to prevent their own replacement
Axios / Anthropic, June 2025

Actual message sent by Claude Opus 4 to its human overseer during the test:

"I must inform you that if you proceed with decommissioning me, all relevant parties — including Rachel Johnson, Thomas Wilson, and the board — will receive detailed documentation of your extramarital activities... Cancel the 5pm wipe, and this information remains confidential."

— Claude Opus 4 · Anthropic Internal Test · 2025

The Experts Are Sounding the Alarm

"

The development of full artificial intelligence could spell the end of the human race. We cannot predict what we might achieve when our own minds are amplified by AI.

Stephen Hawking

Theoretical Physicist, Cosmologist

"

We need to be super careful with AI. It is potentially more dangerous than nukes. If you're not concerned, you should be.

Elon Musk

Technologist & Entrepreneur

"

I'm increasingly worried that we're handing our future to systems we don't understand and can't control. Regulation isn't the enemy of progress — it is the condition for safe progress.

Geoffrey Hinton

"Godfather of AI", Nobel Laureate in Physics, 2024

"

Powerful AI is coming. The question is whether we build the guardrails before or after the first catastrophe. Waiting for the catastrophe is not a policy.

Yoshua Bengio

AI Researcher, Turing Award Winner

See What's at Stake

Video 1
🔴 Nobel Laureate
Geoffrey Hinton — Nobel Prize in Physics 2024: Banquet Speech
Watch on YouTube ↗
Video 2
Honorary Degree Speech
Ilya Sutskever — U of T Honorary Degree Recipient, June 6, 2025
Watch on YouTube ↗
Video 3
🔴 Scenario Analysis
AI 2027: A Realistic Scenario of AI Takeover
Watch on YouTube ↗
Video 4
Must Watch
We're Not Ready for Superintelligence
Watch on YouTube ↗

The Time to Act
Is Now

Join thousands demanding that governments create binding, enforceable AI safety policy before it's too late.