
The Pentagon’s Culture War Tactic Against Anthropic Has…

By Patinko

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

Last Thursday, a California judge temporarily blocked the Pentagon from labeling Anthropic a supply chain risk and ordering government agencies to stop using its AI. It’s the latest development in the month-long feud. And the matter still isn’t settled: The government was given seven days to appeal, and Anthropic has a second case against the designation that has yet to be decided. Until then, the company remains persona non grata with the government. 

The stakes in the case—how far the government can go in punishing a company for not playing ball—were apparent from the start. Anthropic drew plenty of senior supporters, with unlikely bedfellows among them, including former authors of President Trump’s AI policy.

But Judge Rita Lin’s 43-page opinion suggests that what is really a contract dispute never needed to reach such a frenzy. It did so because the government disregarded the established process for resolving such disputes and fueled the fire with social media posts from officials that would eventually contradict the positions it took in court. The Pentagon, in other words, wanted a culture war (on top of the actual war in Iran that began hours later).

The government used Anthropic’s Claude for much of 2025 without complaint, according to court documents, while the company walked a branding tightrope as a safety-focused AI company that also won defense contracts. Defense employees accessing it through Palantir were required to accept terms of a government-specific usage policy that Anthropic cofounder Jared Kaplan said “prohibited mass surveillance of Americans and lethal autonomous warfare” (Kaplan’s declaration to the court didn’t include details of the policy). Only when the government aimed to contract with Anthropic directly did the disagreements begin. 

What drew the ire of the judge is that when these disagreements became public, they had more to do with punishing Anthropic than with simply cutting ties. And they followed a pattern: post first, lawyer later.

President Trump’s post on Truth Social on February 27 referenced “Leftwing nutjobs” at Anthropic and directed every federal agency to stop using the company’s AI. This was echoed soon after by Defense Secretary Pete Hegseth, who said he’d direct the Pentagon to label Anthropic a supply chain risk. 

Doing so requires the secretary to take a specific set of actions, which the judge found Hegseth did not complete. Letters sent to congressional committees, for example, said that less drastic steps were evaluated and deemed not possible, without providing any further details. The government also said the supply chain risk designation was necessary because Anthropic could implement a “kill switch,” but its lawyers later had to admit it had no evidence of that, the judge wrote.

Hegseth’s post also stated that “No contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” But the government’s own lawyers admitted on Tuesday that the secretary doesn’t have the power to do that, and agreed with the judge that the statement had “absolutely no legal effect at all.”

The aggressive posts also led the judge to conclude that Anthropic was on solid ground in complaining that its First Amendment rights were violated. The government, the judge wrote while citing the posts, “set out to publicly punish Anthropic for its ‘ideology’ and ‘rhetoric,’ as well as its ‘arrogance’ for being unwilling to compromise those beliefs.”

Labeling Anthropic a supply chain risk would essentially identify it as a “saboteur” of the government, for which the judge did not see sufficient evidence. She issued an order last Thursday halting the designation, preventing the Pentagon from enforcing it and forbidding the government from following through on the promises made by Hegseth and Trump. Dean Ball, who worked on AI policy for the Trump administration but wrote a brief supporting Anthropic, described the judge’s order on Thursday as “a devastating ruling for the government, finding Anthropic likely to prevail on essentially all of its theories for why the government’s actions were unlawful and unconstitutional.”

The government is expected to appeal the decision. But Anthropic’s separate case, filed in DC, makes similar allegations. It just cites a different section of the law governing supply chain risks.

The court documents paint a pretty clear pattern. Public statements made by officials and the President did not align with what the law says should happen in a contract dispute like this, and the government’s lawyers have repeatedly had to invent justifications after the fact for the social media lambasting of the company.

Pentagon and White House leadership knew that pursuing the nuclear option would spark a court battle; Anthropic vowed on February 27 to fight the supply chain risk designation, days before the government formally filed it on March 3. Pursuing it anyway meant senior leadership was, to say the least, distracted during the first five days of the Iran war, launching strikes while also compiling evidence that Anthropic was a saboteur of the government, all while it could have cut ties with the company by simpler means.

But even if Anthropic ultimately wins, the government has other means to shun the company from government work. Defense contractors who want to stay on good terms with the Pentagon, for example, now have little reason to work with Anthropic even if it’s not flagged as a supply chain risk. 

“I think it’s safe to say that there are mechanisms the government can use to apply some degree of pressure without breaking the law,” says Charlie Bullock, a senior research fellow at the Institute for Law and AI. “It kind of depends how invested the government is in punishing Anthropic.”

From the evidence thus far, the administration is committing top-level time and attention to winning an AI culture war. At the same time, Claude is apparently so important to its operations that even President Trump said the Pentagon needed six months to stop using it. The White House demands political loyalty and ideological alignment from top AI companies. But the case against Anthropic, at least for now, exposes the limits of its leverage.

If you have information about the military’s use of AI, you can share it securely via Signal (username jamesodonnell.22).



Source: Technologyreview.com