Hey there,
This week, the U.S. Department of Defense moved to end its contract with Anthropic and switch to OpenAI for AI work on classified documents.
On the surface, this looks like a battle between good guys who want more guardrails (Anthropic) and madmen who want AI to shoot people (the Department of Defense), with OpenAI selling its soul to poach a $200 million contract.
That's how I originally thought about it – but as I researched it more deeply for this newsletter, my opinion on Anthropic's position changed.
While it's true that Anthropic's guardrails exist – they prohibit Claude, their AI model, from being used for surveillance of American citizens or autonomous firing of weapons – these limitations prevent only a tiny sliver of what's possible in terms of military AI.
So, if you are concerned about the creation of killer robots, Anthropic may be a lesser evil, but they're still actively and proudly building military AI tech.
From Bloomberg:
The company made the submission during fraught negotiations with the Defense Department over Anthropic’s red lines for how its technology was to be used by the military... Executives at the $380 billion company have repeatedly insisted they support the extensive lawful use of AI in combat, short only of mass domestic surveillance and “fully autonomous weapons.”
Anthropic makes great software and AI models that I use every day. They've done a great job advocating for AI safety in writing and in public appearances by their leaders. And I'm not against companies working with the military.
But the reality is that most of what Anthropic does for AI safety comes in the form of words rather than action.
Because they are one of the few companies that even attempts to gesture at AI ethics, Anthropic stands out in the industry, and this earns them praise from consumers and industry observers (as well as the disdain of the Trump administration).
But, in practice, Anthropic still wants to make money building military AI and has no qualms about AI being used for killing. They have drawn their own "red line" around fully autonomous weapons, but they apparently believe an armed drone swarm is OK as long as it has a "human in the loop" – that is, at least one person who can disable the hundreds of armed drones being piloted.
In my view, that's a pretty shady distinction that makes me wonder how much the so-called "red line" really matters in practice.
This was a silly stunt by the Defense Department, but don't give Anthropic undue credit for saying 'no'
Anthropic is generally being praised for standing up to the DoD, which wanted them to drop their "no autonomous firing" and "no surveillance of Americans" limitations.
The media stunt of making a fuss about these (largely theoretical) guardrails made sense for Secretary of Defense Pete Hegseth – he wants to bolster his persona as a guy who "fights the woke." Just ignore the fact that the Trump administration signed this deal with Anthropic seven months before they cancelled it.