Judge Halts Pentagon’s Blacklisting of Anthropic AI


A U.S. federal judge has issued a temporary restraining order against the Pentagon’s blacklisting of Anthropic, marking a new development in the company’s ongoing dispute with the military over AI safety in combat situations. Anthropic’s lawsuit, filed in a California federal court, argues that the U.S. Secretary of War, Pete Hegseth, exceeded his authority by classifying the company as a national security supply-chain risk, a designation typically applied to companies that could expose military systems to infiltration or sabotage by hostile entities.

Anthropic claims that the government violated its First Amendment right to free speech by retaliating against its stance on AI safety. The company further alleges that it was never given an opportunity to challenge the designation, in violation of its Fifth Amendment right to due process. U.S. District Judge Rita Lin, appointed by former President Joe Biden, sided with Anthropic in a 43-page ruling, though enforcement is stayed for seven days to give the administration time to appeal.

The dispute arose after Anthropic opposed the military’s request to use its AI chatbot, Claude, for U.S. surveillance or autonomous weaponry, prompting Hegseth to block Anthropic from certain military contracts. Anthropic executives fear significant financial losses and reputational damage as a result. The company argues that AI models are not yet reliable enough for safe use in autonomous weapons and condemns domestic surveillance as a violation of rights. The Pentagon, for its part, asserts that private companies should not impede military operations, emphasizing its intention to deploy the technology within legal boundaries.

Judge Lin’s ruling found that the government’s actions appeared punitive toward Anthropic rather than genuinely motivated by the national security concerns it cited. She concluded that Anthropic may have been penalized for publicly criticizing the government’s contracting decisions, suggesting a violation of the company’s First Amendment rights. Anthropic’s spokesperson, Danielle Cohen, expressed satisfaction with the decision, emphasizing the company’s commitment to working constructively with the government to deliver safe and reliable AI technology for all Americans.

Notably, Anthropic’s designation as a supply-chain risk under a government-procurement statute is unprecedented for a U.S. company. The lawsuit contends that the decision was unlawful, lacked factual support, and contradicted the military’s previous positive assessments of Claude. The Justice Department countered that Anthropic’s refusal to comply with contractual terms could create uncertainty within the Pentagon about how Claude may be used, potentially jeopardizing military systems during operations. The government maintained that the designation stemmed from Anthropic’s contractual disagreements rather than its views on AI safety.

Additionally, Anthropic faces another legal battle in Washington over a separate Pentagon supply-chain risk designation that could result in its exclusion from civilian government contracts.
