As Pentagon-Anthropic feud risks boiling over, military says it’s made compromises to AI giant

As the U.S. military’s partnership with artificial intelligence giant Anthropic teeters on the edge of collapse, the Pentagon’s top technology official told CBS News the department has offered compromises in order to reach a deal with the company.

The Pentagon has given Anthropic until Friday at 5:01 p.m. to either let the military use the company’s AI model for “all lawful purposes” or risk losing a lucrative Pentagon contract. The AI startup has sought guardrails that explicitly bar its powerful Claude model from being used to conduct mass surveillance of Americans or carry out military operations on its own.

The Pentagon’s chief technology officer, Emil Michael, told CBS News on Thursday that the military has “made some very good concessions.”

In particular, the Defense Department offered to “put it in writing” that federal laws already prevent the military from conducting mass surveillance on Americans, and that internal policies restrict how the military can use autonomous weapons, according to Michael. He also said the military invited Anthropic to participate in its AI ethics board.

Asked why the military will not specifically put in writing that Anthropic’s model can’t be used for mass surveillance of Americans or to make final targeting decisions without human involvement, Michael said those uses of AI are already barred by the law and by Pentagon policies.

“At some level, you have to trust your military to do the right thing,” said Michael.

“But we do have to be prepared for the future. We do have to be prepared for what China is doing,” Michael said. “So we’ll never say that we’re not going to be able to defend ourselves in writing to a company.”

If the military and Anthropic do not reach a deal by Friday’s deadline, the military plans to cut off its partnership with the company and designate it a supply chain risk, Pentagon spokesman Sean Parnell said earlier Thursday. Officials are also considering invoking the Defense Production Act to make Anthropic adhere to the military’s requests, sources told CBS News. 

Michael did not confirm that the Defense Production Act could be used, but he said that “no company is going to take out any software that’s being used in this department until we have an alternative.” Michael added that he’s working on partnerships with alternative AI firms.

At risk for Anthropic is its status as the only AI company to have its model deployed on the Pentagon’s classified networks, through a partnership with data analytics giant Palantir. Anthropic was awarded a $200 million contract with the Defense Department last summer to deploy its AI capabilities to advance national security.

The feud has highlighted a broader disagreement among policymakers and tech firms over how best to mitigate the potential risks posed by AI.

Anthropic CEO Dario Amodei has long been vocal about the potential dangers of unconstrained AI, and has made a focus on safety and transparency a core part of his company’s identity. He’s also backed what he calls “sensible AI regulation.”

In the case of its Pentagon contract, Anthropic wants to ensure that its Claude model is not used for final military targeting decisions, a source familiar with the matter previously told CBS News. The concern is that Claude is not immune to hallucinations and, without human judgment in the loop, is not reliable enough to avoid potentially lethal mistakes such as unintended escalation or mission failure.

The Trump administration, meanwhile, has argued that stringent AI regulations could stifle innovation and make it harder for the American AI industry to compete, and has warned against what it calls “woke” AI models. In a speech last month, Defense Secretary Pete Hegseth pledged, “we will not employ AI models that won’t allow you to fight wars.”

Michael told CBS News that the disagreement is partially ideological, “and the way I describe that ideology is: they’re afraid of the power of AI.” 

He said that the military is only interested in using AI lawfully, and is looking to “treat it like any other technology” — which means that if it isn’t used for lawful purposes, “that’s on us.”

“You can’t put the rules and the policies of the United States military and the government in the hands of one private company,” said Michael.

CBS News has reached out to Anthropic for comment.
