Sam Altman, chief executive officer of OpenAI Inc., at the AI Impact Summit in New Delhi, India, on Thursday, Feb. 19, 2026.
Prakash Singh | Bloomberg | Getty Images
OpenAI CEO Sam Altman said late Friday that his company has agreed to terms with the Department of Defense on the use of its artificial intelligence models, shortly after President Donald Trump said the government will no longer work with AI rival Anthropic.
“Tonight, we reached an agreement with the Department of War to deploy our models in their classified network,” Altman wrote in a post on X. “In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.”
Altman’s post lands at the end of a dramatic week for the AI industry, which has found itself at the center of a political debate over how its models can be used. Earlier in the day, Defense Secretary Pete Hegseth designated Anthropic a “Supply-Chain Risk to National Security” after weeks of tense negotiations. The label is typically reserved for foreign adversaries, and it could force DoD vendors and contractors to certify that they do not use Anthropic’s models.
President Trump also directed every federal agency in the U.S. to “immediately cease” all use of Anthropic’s technology.
Anthropic was the first lab to deploy its models within the DoD’s classified network, and had been trying to negotiate the ongoing terms of its contract with the agency before talks collapsed. The company wanted assurance that its models would not be used for fully autonomous weapons or mass surveillance of Americans, while the DoD wanted Anthropic to agree to let the military use the models across all lawful use cases.
Altman told employees in a Thursday memo that OpenAI shared the same “red lines” as Anthropic. He said in his post Friday that the DoD agreed to its restrictions.
“Two of our most important safety principles are prohibitions on domestic mass surveillance and human accountability for the use of force, including for autonomous weapons systems,” Altman wrote. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
It isn’t immediately clear why the DoD agreed to accommodate OpenAI and not Anthropic, though government officials have for months criticized Anthropic for allegedly being overly concerned with AI safety.
Altman said OpenAI will build “technical safeguards to ensure its models behave as they should,” and that the company will deploy personnel to “help with our models and to ensure their safety.”
“We are asking the DoW to offer these same terms to all AI companies, which we think everyone should be willing to accept,” Altman wrote. “We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements.”
Anthropic said in a statement Friday that it was “deeply saddened” by the Pentagon’s decision to label the company a supply chain risk. It said it intends to challenge that designation in court.
WATCH: Hegseth directs Pentagon to designate Anthropic a supply-chain risk