US draws up strict AI guidelines amid Anthropic clash, FT reports
The Straits Times
The Trump administration has drawn up strict rules for civilian artificial intelligence (AI) contracts requiring companies to allow “any lawful” use of their models amid a stand-off between the Pentagon and Anthropic, the Financial Times reported on March 6.
The Pentagon designated Anthropic a “supply-chain risk” on March 5, barring government contractors from using the AI firm’s technology in work for the US military. The designation followed a months-long dispute over the company’s insistence on safeguards that the Defence Department said went too far.
A draft of the guidelines reviewed by the FT says AI groups seeking business with the government must grant the US an irrevocable licence to use their systems for all legal purposes.
The guidance from the General Services Administration (GSA) would apply to civilian contracts and is part of a broader government-wide effort to strengthen AI services procurement, the newspaper reported, adding that it mirrors measures the Pentagon is considering for military contracts.
“It would be irresponsible to the American people and dangerous to our nation for GSA to maintain a business relationship with Anthropic,” Mr Josh Gruenbaum, commissioner of the Federal Acquisition Service, a GSA subsidiary that helps procure software for the federal government, told Reuters by e-mail.
“As directed by the President, GSA has terminated Anthropic’s OneGov deal – ending their availability to the Executive, Legislative and Judicial branches through GSA’s pre-negotiated contracts,” Mr Gruenbaum said.