AI Death Machines. No Human Oversight. What Could Go Wrong?

Pete Hegseth is trying to bully Anthropic out of objecting to “lethal autonomous weapons systems” and mass surveillance.
Happy Thursday.

Hegseth’s AI Ultimatum
by Andrew Egger

Who gets to decide when the government AI-bots are ready to start killing people without direct human oversight—the Pentagon or the AI companies? This remarkable—some might say insane—question is at the center of a major standoff between the Defense Department and Anthropic, creator of the AI platform known as Claude.

While the Pentagon has contracts with all the leading AI labs, Anthropic until this month was the only one contracted for AI use in classified settings: Claude was, for instance, reportedly involved in the operation to capture Nicolas Maduro.

But Defense Secretary Pete Hegseth has grown unhappy with two elements of the DoD’s contract with Anthropic. One, Anthropic won’t let its AI be used to conduct mass surveillance of Americans. Two, it won’t let the DoD use it to operate autonomous weapons systems that can identify, track, and kill targets without direct human involvement.

To the Defense Department, the idea that a contractor would be able to tie the military’s hands like this is outlandish; they should be permitted, they argue, to use AI they contract for “for all lawful purposes.”¹

Hegseth could simply drop Anthropic’s contract over this, pivoting instead to any of the AI labs—OpenAI, Google Gemini, Elon Musk’s xAI—that aren’t insisting on these contractual sticking points. But he doesn’t really want to. After all, Claude is supposed to be the best, and at any rate it’s already integrated into lots of DoD systems. It’d be a hassle.

So instead, Hegseth has issued Anthropic an ultimatum: Change your policy, or we’re going to start getting nasty.

This could happen in a couple different ways. The Defense Department is threatening to use the Defense Production Act to compel Anthropic to drop its usage requirements.
Or it could go the exact opposite direction, declaring Anthropic a “supply chain risk”—which would not only eliminate DoD’s Anthropic contract, but also forbid any business that contracts with DoD from working with Anthropic in any way. Both of these, it is hardly sufficient to say, would be enormous, unprecedented escalations. Hegseth says Anthropic has until tomorrow to decide.

The fact that DoD is considering both possibilities—making it illegal for Anthropic not to work with them and making it illegal for Anthropic to work with anybody DoD works with—makes it pretty clear that all this is a pure squeeze play. Hegseth doesn’t actually think Anthropic is a supply-chain risk, a label typically reserved for software developed in hostile nations and suspected of containing hidden malign code. He’s just threatening to use the strongest weapon he has against the lab if it doesn’t give him what he wants.

The confrontation has spooked the AI-policy world, which until now has viewed the Trump administration as highly AI-friendly. This week, I spoke with Dean Ball, who worked in a senior AI-policy role at the White House last year, helping the Trump administration develop its AI Action Plan. Ball now serves as a senior fellow at the Foundation for American Innovation, a newly influential think tank on the tech right.

In Ball’s view, the actual dispute between Hegseth and Anthropic had reasonable points on both sides: The government wants to control its own military, and Anthropic doesn’t want to be involved in specific use cases, so whatever—seems like they should go their separate ways. But Hegseth’s ultimatum was a whole different beast. Throughout our conversation, Ball seemed to struggle to summon words powerful enough to express his incredulity at what he called the DoD’s “giant escalation.”

“I will say this in no uncertain terms, bipartisan, regardless of administration,” Ball said. “This would be one of the worst things for the American business climate I have ever seen the government do.”

I asked the White House how Hegseth’s threatened kneecapping of Anthropic was in keeping with its broader AI policy, which has placed a huge priority on unleashing U.S. AI capabilities as part of a global AI arms race against China. The White House referred me back to the Pentagon; the Pentagon did not respond to my request for comment.

Even when they’re the targets of ludicrous government bullying, AI labs don’t necessarily make for the most sympathetic victims, which perhaps explains why we haven’t seen more Democrats rallying to Anthropic’s defense. One exception is Rep. Zoe Lofgren (D-Calif.), the ranking member of the House Committee on Science, Space, and Technology. In a statement to The Bulwark, Lofgren said that the administration’s “bullying tactics” against Anthropic were “shocking and senseless.”

“Anthropic is trying to do the right thing and put their own guardrails in place in the absence of legislation,” she added. “It should go without saying that AI technology should not be making potentially lethal decisions without human involvement. I fear what America will become if the DoD is given this unrestricted power.”

Maybe that’s the biggest takeaway from this whole crazy story: While it’s nice that Anthropic is digging in its heels here, it’s insane that questions like “how much killing will we let the killer robots do on their own?” are being hashed out as back-room handshakes between the military and its AI contractors in the first place. This seems like a matter of public policy if ever there was one. Have we got a legislature or what?

If Congress were actually to do its job and legislate on this issue, what’s the right answer? And what would be the right solution to this standoff if your favorite administration were in power? Share your thoughts in the comments.