Sure enough, Sam Altman managed to secure an agreement between OpenAI and the US War Department amid public battles between Anthropic and the agency. But in doing so, he may have lost something even more valuable: public goodwill. OpenAI’s CEO acknowledged as much in a social media post, admitting that the deal had been rushed: “We shouldn’t be in a rush to get this out. We were honestly trying to de-escalate things and avoid a much worse outcome, but I think it seemed opportunistic and dirty.”
Altman and Dario Amodei, CEO of Anthropic, were previously colleagues at OpenAI. In 2021, Amodei and a group of former employees left to launch Anthropic, positioning the startup as a safety-first alternative to its more commercially aggressive competitor. These philosophical differences between two of Silicon Valley’s most influential AI executives were on full display in recent weeks during negotiations with the Pentagon.
Amodei’s focus on safety was tested when Anthropic announced that it would no longer allow its artificial intelligence systems to be used to surveil American citizens or to launch fully autonomous attacks without human supervision. After Amodei rejected the Pentagon’s request for unrestricted use of Claude, President Donald Trump ordered federal agencies to end their use of the chatbot within six months, and Defense Secretary Pete Hegseth designated Anthropic a “supply chain risk.”
On the same day Anthropic was banned, Altman unveiled a new Pentagon agreement for OpenAI. The deal took a less stringent stance, allowing AI to be deployed for all legitimate purposes while incorporating technical safeguards into OpenAI models.
Despite receiving the contract, Altman struggled to control the narrative. Silicon Valley rallied around Anthropic after its confrontation with Washington. Labor groups representing 700,000 employees across Amazon, Google and Microsoft last week issued a joint statement urging employers “to also refuse to comply if these companies, or the frontier laboratories in which they invest, enter into other contracts with the Pentagon.” A separate open letter signed by about 950 Google and OpenAI employees called on their employers to “put their differences aside and stand together” in resisting the agency’s demands.
The consumer backlash has also extended to OpenAI’s business. Over the weekend, a large number of users switched from ChatGPT to Claude, pushing Anthropic to the top of the US App Store’s free app rankings ahead of ChatGPT. Although Anthropic’s user base still represents a small fraction of OpenAI’s 900 million weekly active users, the company says free use of Claude has risen more than 60 percent since January. The increased demand even led to a temporary outage on March 2.
In the face of mounting criticism, Altman moved to contain the fallout. In addition to acknowledging that the Pentagon deal appeared opportunistic, he announced amendments that explicitly prohibit the use of OpenAI systems for domestic surveillance. He also clarified that the company’s services will not be used by defense intelligence agencies such as the National Security Agency. Altman, who said he hopes Anthropic will receive similar terms, called the episode a “good learning experience” as OpenAI faces “high-stakes decisions in the future.”
The companies’ differing approaches to business opportunities have sparked public friction before. Earlier this year, Altman drew backlash for testing ads in ChatGPT, a move that contradicted Amodei’s decision to keep Claude ad-free and inspired a mocking Super Bowl ad from Anthropic.
It remains uncertain whether Altman’s amendments will sway public opinion. Meanwhile, Anthropic took advantage of the moment. As ChatGPT users moved to Claude, the company introduced a memory import tool designed to simplify data transfer from competing chatbots — an unmistakable attempt to turn controversy into market share.
