
Claude AI to Process Secret Government Data Through New Palantir Deal

by Benj Edwards
Posted on November 18, 2024, 4:39 pm

Anthropic has announced a partnership with Palantir and Amazon Web Services to bring its Claude AI models to unspecified US intelligence and defense agencies. Claude, a family of AI language models similar to those that power ChatGPT, will work within Palantir’s platform using AWS hosting to process and analyze data. But some critics have called out the deal as contradictory to Anthropic’s widely publicized “AI safety” aims.

On X, former Google co-head of AI ethics Timnit Gebru wrote of Anthropic’s new deal with Palantir, “Look at how they care so much about ‘existential risks to humanity.'”

The partnership makes Claude available within Palantir’s Impact Level 6 environment (IL6), a defense-accredited system that handles data critical to national security up to the “secret” classification level. This move follows a broader trend of AI companies seeking defense contracts, with Meta offering its Llama models to defense partners and OpenAI pursuing closer ties with the Defense Department.

In a press release, the companies outlined three main tasks for Claude in defense and intelligence settings: performing operations on large volumes of complex data at high speeds, identifying patterns and trends within that data, and streamlining document review and preparation.

While the partnership announcement suggests broad potential for AI-powered intelligence analysis, it states that human officials will retain their decision-making authority in these operations. As a reference point for the technology’s capabilities, Palantir reported that one (unnamed) American insurance company used 78 AI agents powered by their platform and Claude to reduce an underwriting process from two weeks to three hours.

The new collaboration builds on Anthropic’s earlier integration of Claude into AWS GovCloud, a service built for government cloud computing. Anthropic, which recently began operations in Europe, has been seeking funding at a valuation up to $40 billion. The company has raised $7.6 billion, with Amazon as its primary investor.

An ethical minefield

Since its founding in 2021, Anthropic has marketed itself as a company that takes an ethics- and safety-focused approach to AI development. It differentiates itself from competitors like OpenAI by adopting what it calls responsible development practices and self-imposed ethical constraints on its models, such as its “Constitutional AI” system.

As Futurism points out, this new defense partnership appears to conflict with Anthropic’s public “good guy” persona, and pro-AI pundits on social media are noticing. Frequent AI commentator Nabeel S. Qureshi wrote on X, “Imagine telling the safety-concerned, effective altruist founders of Anthropic in 2021 that a mere three years after founding the company, they’d be signing partnerships to deploy their ~AGI model straight to the military frontlines.”

Aside from the implications of working with defense and intelligence agencies, the deal connects Anthropic with Palantir, a controversial company that recently won a $480 million contract to develop an AI-powered target identification system called Maven Smart System for the US Army. Project Maven has sparked criticism within the tech sector over military applications of AI technology.

It’s worth noting that Anthropic’s terms of service do outline specific rules and limitations for government use. These terms permit activities like foreign intelligence analysis and identifying covert influence campaigns, while prohibiting uses such as disinformation, weapons development, censorship, and domestic surveillance. Government agencies that maintain regular communication with Anthropic about their use of Claude may receive broader permissions to use the AI models.

Even if Claude is never used to target a human or as part of a weapons system, other issues remain. While its Claude models are highly regarded in the AI community, they (like all LLMs) have a tendency to confabulate, potentially generating incorrect information in a way that is difficult to detect.

That’s a huge potential problem that could impact Claude’s effectiveness with secret government data, and that fact, along with the other associations, has Futurism’s Victor Tangermann worried. As he puts it, “It’s a disconcerting partnership that sets up the AI industry’s growing ties with the US military-industrial complex, a worrying trend that should raise all kinds of alarm bells given the tech’s many inherent flaws—and even more so when lives could be at stake.”

via Ars Technica
