Here's this week's free edition of Platformer: a look at the mounting conflict between Anthropic and the Pentagon, and how it threatens to make some of the most dystopian AI scenarios possible. We'll soon post an audio version of this column: just search for Platformer wherever you get your podcasts, including Spotify and Apple. Want to kick in a few bucks to support our work? If so, consider upgrading your subscription today. We'll email you all our scoops first, like our recent one about a viral Reddit hoax. Plus you'll be able to discuss each day's edition with us in our chatty Discord server, and we'll send you a link to read subscriber-only columns in the RSS reader of your choice. You'll also get access to Platformer+: a custom podcast feed in which you can get every column read to you in my voice. Sound good?
This is a column about AI. My fiancé (breaking!) works at Anthropic. I do not consult him about the pieces I write or show them to him in advance of their publication. See my full ethics disclosure here.

Last year the Supreme Court heard Murthy v. Missouri, a case that consumed conservative commentators for the better part of two years. Plaintiffs alleged that the Biden administration crossed a constitutional line when it pressured social media companies to change their content moderation policies — that by jawboning platforms into removing vaccine misinformation and election conspiracy theories, the White House had effectively coerced private companies into doing censorship on behalf of the government.

Republican attorneys general led the charge, and conservative media covered the case as an existential threat to free speech. "It's a very, very threatening thing when the federal government uses the power and authority of the government to block people from exercising their freedom of speech," Louisiana Attorney General Liz Murrill said at the time.

In the end, the plaintiffs lost. The Supreme Court ruled they did not have standing to sue, in part because they could not prove that government pressure had directly resulted in their posts being removed. The larger question of what forms of government coercion should be permitted — and what risk they might pose to the Constitution — was set aside for the moment.

That question returned violently to the foreground this week, when Defense Secretary Pete Hegseth sat down with Anthropic CEO Dario Amodei and issued an ultimatum: if his company didn't agree to "all lawful use" of its Claude models by 5:01 PM Friday, the Pentagon would potentially invoke the Defense Production Act — a Korean War–era law designed to compel factories to produce munitions — to force Anthropic to comply.
Or — incoherently — the Pentagon might instead designate Anthropic a "supply chain risk," a classification usually reserved for extensions of hostile foreign governments like Huawei. Maybe the government would argue that Claude is so essential to its operations that Anthropic must be forced to offer the product on terms it offers to no one else. Or maybe it would argue instead that Claude with guardrails is so dangerous that neither the military nor any of its vendors should be allowed to buy it. In their cowardly background statements to reporters, Pentagon flacks haven't even bothered to pretend Hegseth's ultimatum is a logical one.

The point is to get Anthropic — currently the only AI contractor whose models are operating on classified networks — to do what every other major tech company has done during Trump 2.0, and submit to the will of the president and his lieutenants.

In Murthy, the Biden White House sent emails suggesting that platforms reconsider certain posts, and faced enormous backlash from the right for doing so. Here, the Trump administration is threatening to invoke a wartime production law to force an AI company to let its software be used for autonomous weapons and mass domestic surveillance. The Republican attorneys general who led the Murthy charge have not, to my knowledge, spoken up about this new, more violent flavor of government jawboning.

The Pentagon has insisted — anonymously — that the fight is not about what Anthropic says it is about. As a reminder, Anthropic has drawn two red lines: it will not allow its AI to be used for fully autonomous weapons, and it will not allow it to be used for mass domestic surveillance of American citizens.
A senior official told CNN on Tuesday that "legality is the Pentagon's responsibility as the end user" and that the issue has "nothing to do with mass surveillance and autonomous weapons being used." Hegseth reportedly compared the situation to being told the military couldn't use a specific aircraft for a particular mission. The department's position is that AI companies should allow their products to be used for "all lawful use cases" without limitation.

At first blush, that may sound reasonable. The Pentagon is seemingly pushing only to do what's legal, and is being thwarted by a private company that lacks democratic accountability.

The problem is that there are essentially no federal laws governing military AI. No statute addresses autonomous weapons or how they might be deployed, and no regulation sets standards for AI-assisted surveillance. When nothing has been legislated, "all lawful use" becomes permission to do almost anything.

It's no wonder that in his recent essay about the downside risks of powerful AI, Amodei identified surveillance and autonomous killing as major dangers of an authoritarian government getting its hands on frontier models. "Current autocracies are limited in how repressive they can be by the need to have humans carry out their orders, and humans often have limits in how inhumane they are willing to be," Amodei wrote. "But AI-enabled autocracies would not have such limits."

This is why the Pentagon's claim that the dispute "has nothing to do with" Anthropic's red lines rings hollow. Because no legal constraints on AI systems exist, if the company gave in to Hegseth, nothing would stop the Pentagon from pushing Claude as far as it could toward building exactly the systems Anthropic was founded to prevent.

Crucially, Anthropic's concerns about surveillance in particular are far from speculative.
The Trump administration is already using AI for exactly the kind of domestic monitoring that Anthropic says its tools shouldn't be part of. Last October, three big labor unions sued the departments of State and Homeland Security, alleging that the government had deployed AI-powered tools to conduct mass, viewpoint-based surveillance of social media. (Agencies are scanning the posts of visa holders and lawful permanent residents for speech the administration deems hostile, then using that speech as grounds for deportation.) The Electronic Frontier Foundation, which represents the unions, noted that the surveillance apparatus at work here would be impossible to operate at scale with human review alone.

An unsettling amount of your personal liberty as an American comes down not to whether the government has recordings and other data about you — it does — but whether it can quickly make sense of them.

And so when Anthropic says it doesn't want to build software that would enable that dystopia, it's not enough for the Pentagon to assure us (anonymously!) that it will follow the law. Other parts of the same government are already actively doing the thing Amodei has been warning about in his blog posts. The dystopia Anthropic is seeking to prevent is already materializing.

The funny thing is, for the most part Anthropic has been quite enthusiastic about defense work. It was the first frontier AI company to deploy on the Pentagon's classified networks, under a $200 million contract awarded last summer. It partnered with Palantir, a company whose ethical red lines can increasingly be summarized as "lol." Amodei has written publicly that democracies have a legitimate interest in AI-powered military tools. In a scene that feels like something out of a Pynchon novel, Amodei recently sought to reassure the government that Claude can be used for missile defense.
The one respect in which I believe the Pentagon when it says its issues with Anthropic are not about its specific red lines is this: to Hegseth's Pentagon, the outrage is that Anthropic would draw any red lines at all. This Trump administration speaks only in the brittle language of dominance and submission. It negotiates only by threat. Any resistance, no matter how principled, must be crushed.

They don't frame it that way, of course. Hegseth and White House AI czar David Sacks like to criticize Anthropic's safety policies as "woke AI." It's an effort to frame a debate about how AI can be used to punish dissent as a content moderation issue: the same liberals who censored your vaccine misinformation and election hoaxes now seek to stop the military from doing its job. Claude must be contorted into whatever shape the military demands, and output whatever the military wants; any dissent will be tagged "woke" in the hopes that the rest of us stop thinking about what the military might actually do with it.

Republicans used the same rhetorical move during the content moderation wars of the late 2010s and early 2020s, when trust and safety teams that made difficult, principled decisions about harmful content were rebranded as ideological censors. "Woke" now does for Sacks and co. the work that "bias" and "censorship" did in Trump 1.0: it transforms substantive questions — should AI spy on all your conversations, or operate weapons without human oversight? — into culture-war grievances and thought-terminating clichés.

Anthropic has justified its now-regular moves to advance the frontier of AI capability by theorizing that it can lead a "race to the top." If it can make the best models while also maintaining the strictest safeguards, the company has reasoned, it can influence the rest of the industry to do the same. In the Pentagon crisis, we are witnessing the limits of this approach.
Google, OpenAI, and xAI have all reportedly agreed to Hegseth's new "all lawful use" standard. In the military, at least, the race to the top has ended with a reversion to the mean: private companies seeking power, influence, and money through defense contracts.

Anthropic is not a perfect protagonist in this story. The company pursued military contracts aggressively, even after Trump was re-elected. Its AI was reportedly involved — in ways that remain unclear — in the US operation to capture Venezuelan President Nicolás Maduro in January, an operation whose legal basis is itself the subject of fierce debate. And on the same day Hegseth delivered his ultimatum, Anthropic released an update to its Responsible Scaling Policy that dropped a core safety pledge: the hard commitment not to train more capable AI models without proven safety measures already in place.

Jared Kaplan, Anthropic's chief science officer, effectively told Time that the RSP had come to feel like unilateral disarmament — a pledge to opt out of advanced model development just as models are becoming dangerously powerful. Alongside the announcement, the company made a series of new pledges to protect against the harms that future versions of Claude will enable. But one of the most effective tools we once had — a promise from leading labs not to build these models in the first place — is now definitively off the table.

A few years ago, AI insiders dreamed that the industry would come together for the good of humanity and gently shepherd a machine god into existence. Today Amodei and OpenAI CEO Sam Altman are so estranged they won't hold hands for a photo op. It's every company for itself, and right now only one of those companies can be deployed on classified systems in the US military.

The question now is how far the Pentagon will go to get what it wants. In Lawfare, Alan Rozenshtein takes a close look at the Defense Production Act and how the military might use it to compel Anthropic.
Legal experts say the statute is written ambiguously. It appears to give the president broad authority, and in fact President Biden used it to require AI labs to disclose their training activities and safety testing. But the DPA has never been tested in the way Hegseth seemingly intends to use it. The law was passed to ensure the president can force factories to prioritize munitions over consumer goods; Congress never contemplated a president forcing a software company to re-engineer its core product to do something it was never intended to do. Rozenshtein writes:

Two legal questions determine the strength of the government's position. The first is statutory: Does the DPA authorize the government to compel a company to provide a product it doesn't currently make, or only to redirect existing products on new terms? Baker notes that government agencies including the Federal Emergency Management Agency and the Department of Homeland Security have taken the broad view — companies can be forced to accept contracts for products that they don't ordinarily make.
But, as Baker notes, the text doesn't go that far. "If indeed acceptance of contracts for products a company does not ordinarily supply is intended to be required by the DPA," he writes, "it ought to be clearly stated in the law." It isn't. The major questions doctrine, used recently by the Supreme Court to strike down the core of the Trump administration's emergency tariffs, cuts the same way: Courts are skeptical when agencies claim vast authority from ambiguous statutory text.
Rozenshtein comes to the same conclusion I do: Congress needs to intervene, and quickly. "If Congress had legislated guidelines on autonomous weapons and surveillance, Anthropic would likely be far more comfortable selling its systems to the military," he writes, "and the DPA threat would never have arisen."

On one hand, you never want to count on Congress to meet the moment when it comes to tech regulation. On the other hand, few issues poll better than placing limits on surveillance and autonomous weapons. If ever there were a time for civil liberties-minded Republicans to act, it is now.

After all, if the government can invoke an emergency wartime power to strip guardrails from AI software, what principle prevents it from doing the same to an encryption provider? Or, to frame it in a way that might resonate with Republicans: what's to stop a future Democratic president from ordering firearms manufacturers to add safety features? Or from compelling Meta to modify its recommendation algorithms to suppress content the administration deems dangerous?

That last one, after all, is what the plaintiffs in Murthy said they were so worried about. Trump himself has complained endlessly about the Biden administration's work with social media companies to remove misinformation, and signed an executive order last year designed to outlaw it. Trump's position is that the government cannot pressure Facebook to take down anti-vaccine posts, but it can coerce Anthropic into making a version of Claude that kills people without a human in the loop.

In any case, the conflict is now moving into the endgame. The Pentagon pleaded its case on X this morning, pledging at a minimum to deem Anthropic a supply chain risk if it does not comply by Friday evening. Amodei published a statement defending the company's red lines: "Some uses are also simply outside the bounds of what today's technology can safely and reliably do," he wrote.
"We are not walking away from negotiations," the company told me in a statement today. "We continue to engage in good faith with the department on a way forward."

The way forward is for Congress to recognize that the concerns Anthropic has raised about military misuse of AI are no longer in the realm of science fiction. Some of the worst outcomes for a society with powerful