Anthropic has tossed out one of the firmest safety vows in the AI business. The San Francisco startup, founded by former OpenAI staffers who warned about runaway AI, has scrapped its Responsible Scaling Policy, which required it to pause training more powerful models if it couldn't reliably control them, CNN reports. In a blog post Tuesday, the company said it will no longer delay AI development it considers potentially dangerous when it lacks a significant lead over its competitors, reports Bloomberg.
- "We felt that it wouldn't actually help anyone for us to stop training AI models," Jared Kaplan, Anthropic's chief science officer, tells Time. "We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead."
In the blog post, the company said it's replacing firm rules with a looser "Frontier Safety Roadmap," a set of nonbinding "public goals" it will track openly rather than solid commitments. Anthropic argues that halting its own progress while less cautious rivals keep pushing ahead could lead to a world that is "less safe."
Anthropic says it originally hoped its strict scaling policy would spark a "race to the top," with competitors toughening their own rules. Instead, it now concedes the rest of the industry largely ignored those guardrails and notes that Washington's current mood is hostile to new regulation. The company is also formally separating its internal safety plans from what it recommends to the wider industry.
- The shift lands in the same week Anthropic is in a standoff with the Pentagon over how its technology can be used. Defense Secretary Pete Hegseth has reportedly given CEO Dario Amodei a Friday deadline to relax some safeguards or risk losing a $200 million Defense Department deal and being treated as a supply-chain risk under the Defense Production Act.
- CNN, citing a source familiar with the talks, reports that Anthropic is refusing to budge on two red lines: it doesn't want its systems controlling weapons or enabling mass surveillance of Americans, arguing that today's AI isn't reliable enough for lethal use and that the US has no clear rules governing AI-driven domestic spying.
- A source tells NBC News that Anthropic agreed months ago, in earlier negotiations, to let the Pentagon use its AI systems for missile defense and cyber defense, but officials still weren't satisfied.
- In a statement on Tuesday's meeting with Pentagon officials, an Anthropic spokesperson said: "We continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government's national security mission in line with what our models can reliably and responsibly do."