Google said it will continue offering artificial intelligence models from Anthropic to customers through its cloud platform, excluding defense-related work, a day after Microsoft issued a similar statement.
The announcements from two of the world’s largest cloud infrastructure providers follow the US Defense Department’s designation of Anthropic as a supply chain risk.
A Google spokesperson said Friday that the determination does not prevent the company from working with Anthropic on non-defense-related projects, and that its products will remain available through platforms such as Google Cloud.
Anthropic’s Claude models are available on Google Cloud through the Vertex AI platform. Google is also a major financial backer of the company: in January 2025, the search giant committed an additional $1 billion investment in Anthropic, adding to its earlier $2 billion stake.
Anthropic uses Google Cloud infrastructure to train its models and recently expanded its partnership with the company, gaining access to up to one million of Google’s custom tensor processing units.
The dispute began after Anthropic declined to agree to new terms requested by the US Department of Defense regarding the use of its AI systems.
Following the disagreement, President Donald Trump instructed federal agencies to stop using Anthropic technology. Defense Secretary Pete Hegseth later said the Pentagon would phase out its work with the company over a six-month period.
Some defense technology companies have already told employees to stop using Anthropic’s Claude models and switch to alternatives from rival providers such as OpenAI.
Microsoft was the first major cloud partner to confirm it would continue supporting Anthropic products despite the Pentagon designation.
Microsoft said Thursday that its lawyers had reviewed the designation and concluded that Anthropic products, including Claude, can remain available to customers other than the Department of War.
Anthropic CEO Dario Amodei said the company plans to challenge the government’s supply chain risk designation in court.
A report late Friday said that Amazon will also continue offering Anthropic’s artificial intelligence technology to its cloud customers, excluding work involving the Department of Defense.