Here's this week's free edition of Platformer: a look at OpenAI's dangerous dance with the Pentagon, including some late-breaking news that the company is already renegotiating its contract after a public outcry. We'll soon post an audio version of this column: just search for Platformer wherever you get your podcasts, including Spotify and Apple. Want to kick in a few bucks to support our work? If so, consider upgrading your subscription today. We'll email you all our scoops first, like our recent one about a viral Reddit hoax. Plus you'll be able to discuss each edition with us in our chatty Discord server, and we'll send you a link to read subscriber-only columns in the RSS reader of your choice. You'll also get access to Platformer+: a custom podcast feed in which you can get every column read to you in my voice. Sound good?
This is a column about AI. My boyfriend works at Anthropic. See my full ethics disclosure here.

"In [Murati's] experience, Altman had a simple playbook: first, say whatever he needed to say to get you to do what he wanted, and second, if that didn't work, undermine you or destroy your credibility … It had taken Sutskever years to be able to put his finger on Altman's pattern of behavior — how OpenAI's CEO would tell him one thing, then say another and act as if the difference was an accident. 'Oh, I must have misspoken,' Altman would say. Sutskever felt that Altman was dishonest and causing chaos, which would be a problem for any CEO, but especially for one in charge of such potentially civilization-altering technology." — Keach Hagey, The Optimist

I.

I thought of this passage from The Optimist over the weekend as I worked to make sense of a rather stunning series of events.

The Pentagon followed through on its threat to terminate the military's contract with Anthropic over the company's refusal to amend its prior agreement to permit "all lawful use" of its technology, including mass domestic surveillance and autonomous weapons. It further threatened to designate Anthropic a "supply chain risk," a move previously reserved for corporate extensions of foreign adversaries, and to block any company that contracts with the military from using Anthropic's products.

For the briefest of moments, it appeared as if Anthropic might have an ally in the fight: on Friday morning, Hagey (in her regular perch at the Wall Street Journal) reported that Altman had sent a memo to OpenAI's staff saying that he would draw the same "red lines" Anthropic had. "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons," he wrote, "and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines."

And by Friday evening, Altman announced on X that OpenAI had reached an agreement with the Pentagon for classified AI deployment — with the same red lines, he claimed, now baked into the contract.

Setting aside for a moment the government's unhinged retaliation against Anthropic, Altman's claim to have won concessions from the US military offered at least some reason for hope. If powerful AI systems are to be embedded in systems of state violence, the least that Americans can ask for in return is mechanisms of oversight and restraint. Altman said OpenAI had achieved just that.

"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman said in an X post. "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."

Immediately, Altman's claim fell under scrutiny. Was it not suspicious that, after just a few days of negotiating, OpenAI claimed to have won the concessions that Anthropic had not? Was it possible that the same Pentagon officials railing on X against the idea of a private company attempting to exert control over the military were now making an exception for OpenAI? Was the public now, like Mira Murati and Ilya Sutskever before them, caught in the familiar Altman trap that begins with him telling them what they want to hear?

II.

Notably, in this case few seemed to extend Altman the benefit of the doubt.
The most popular post on the ChatGPT subreddit over the past week is titled "You're now training a war machine. Let's see proof of cancellation"; it received more than 32,000 upvotes. Similar posts in that forum and the OpenAI subreddit also received tens of thousands of upvotes, and the company came in for extended criticism on Hacker News.

And as the weekend went on, additional reporting suggested that the knee-jerk cynicism triggered by OpenAI's deal was justified. In The Verge, Hayden Field reported that contrary to OpenAI's public statements — and consistent with the military's own framing of its demands — the company's deal with the Pentagon includes fewer restrictions than Anthropic's had. She writes:

One source familiar with the Pentagon's negotiations with AI companies confirmed that OpenAI's deal is much softer than the one Anthropic was pushing for, thanks largely to three words: "any lawful use." In negotiations, the person said, the Pentagon wouldn't back down on its desire to collect and analyze bulk data on Americans. If you look line-by-line at the OpenAI terms, the source said, every aspect of it boils down to: If it's technically legal, then the US military can use OpenAI's technology to carry it out. And over the past decades, the US government has stretched the definition of "technically legal" to cover sweeping mass surveillance programs — and more.
OpenAI might be able to partially block the military's efforts to conduct domestic surveillance by building classifiers and implementing other model-level safeguards, as it has said it will do. And yet it's essential to remember that most tasks related to mass surveillance might not look that way to a model. The government can upload massive spreadsheets of data bought legally from data brokers and ask GPT models to conduct all sorts of analyses that will not identify themselves as efforts to build systems of oppression.

And in any case, we know that the Pentagon tried repeatedly to eliminate meaningful safeguards in Anthropic's contract through innocuous-seeming word changes and a generous dusting of legalese. Ross Andersen described the process in The Atlantic. "The Pentagon had kept trying to leave itself little escape hatches in the agreements that it proposed to Anthropic," he reported on Sunday. "It would pledge not to use Anthropic's AI for mass domestic surveillance or for fully autonomous killing machines, but then qualify those pledges with loophole-y phrases like 'as appropriate' — suggesting that the terms were subject to change, based on the administration's interpretation of a given situation."

Moreover, on the subject of autonomous weapons, Bloomberg reported last month that OpenAI is participating in a competition to develop software that will allow drones to be controlled via voice. (Anthropic participated in the competition, too — reminding us that Dario Amodei's objection to murderbots isn't that they are immoral, but that they don't work very well yet.)

If you build voice controls for the murderbot but not the murderbot itself, is that consistent with OpenAI's usage policy? "It turns out that the usage policy can be read in a few ways," writes Sarah Shoker, who led OpenAI's geopolitics team for three years before leaving last June, on her Substack. "Depending on whether you believe that the use of an AI voice-to-digital tool in a kill-chain amounts to helping build a weapon, or if you believe that an AI model can be treated in isolation from its larger weapon system."

The problem, Shoker writes, is that almost all of the relevant definitions here — again, the definitions relevant to whether and how you will be surveilled as an American, and which large language models might guide a drone swarm that someday attacks you — are up for debate. "Policy and law are not free-floating static 'things,'" she writes. "The borders of the law are fuzzy and filtered through political ideology. Throughout US history, policymakers have reinterpreted and exploited gaps in the law to allow for activity that independent legal observers have called straightforwardly illegal."

She continues:

There isn't a consensus over what it means in practice to have adequate 'human supervision,' 'human in the loop' or 'meaningful human control' in autonomous weapons systems. Terms that reference human oversight remain contentious around the world. Militaries are still trying to develop new testing and evaluation procedures for reducing problems like e.g. over-reliance in human-AI teams. It's possible that Anthropic disagreed with how 'human supervision' (broadly speaking) would be put into practice.
A few frontier AI company employees have asked me about whether the ‘lawful purposes’ language is a sufficiently strong bulwark against misuse. The answer is always going to be it depends. You have to decide whether that’s good enough and if you trust your company leaders to respond effectively in case something goes wrong.
III.

As public opinion began to turn against OpenAI — uninstalls of ChatGPT were up nearly 300 percent over the weekend, market research firm Sensor Tower estimated — the company sought to reassure the public. A blog post laid out what it described as a comprehensive, layered approach to ensuring its red lines are never crossed, and posted what it said was the "relevant" portion of its contract with the military. And Altman and some of his colleagues at the company answered questions from people on X.

Jessica Tillipman, an expert in government contracts and a professor at George Washington University Law School, analyzed the deal and the surrounding debate. For starters, she writes — and contrary to howling right-wing commentators who accused Anthropic of trying to subvert the democratic process by refusing to accept the military's demands — "contractors restrict the government's use of their products all the time."

It is at least possible, she writes, that the safeguards OpenAI outlined would give it meaningful leverage to restrict the use of its models for whichever forms of surveillance and drone killing it takes issue with. But there is an enormous unanswered question: what happens when OpenAI and the military disagree? Tillipman writes:

If a classifier blocks a particular use, the question is whether the government has a contractual right to demand its removal. OpenAI asserts that it retains "full discretion" over those systems.
This creates tension at the heart of the agreement. The contract permits use “for all lawful purposes,” subject to “operational requirements” and “well-established safety and oversight protocols.” OpenAI says it retains full discretion over the safety stack it runs in a cloud-only deployment. If the safety stack blocks a lawful use, which provision controls? The answer depends on the specific contract language governing the relationship between the permissive use standard and the deployment framework — language that has not been made public.
The Pentagon reacted to its disagreement with Anthropic — over a contract it had once willingly signed — by announcing an effort to destroy the company. The idea that some vague contractual language and a "safety stack" will prevent Defense Secretary Pete Hegseth and his subordinates from taking a maximalist view of their rights to OpenAI's intellectual property is either impossibly naive or outright deceptive.

In response to my questions, OpenAI pointed me to another X post from Altman, posted on Monday evening. In it, Altman said OpenAI plans to amend its contract with the Pentagon to add further restrictions on the use of its systems for surveillance, and that the National Security Agency will not be using GPT models. I'm told the Pentagon has agreed to the changes. These sound like meaningful improvements; we'll see.

"One thing I think I did wrong: we shouldn't have rushed to get this out on Friday," Altman added. "The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy."

Indeed. But in the end I'm left asking myself what will happen in the scenario that still seems disturbingly likely — that GPT models will in fact be used as part of surveillance and drone operations. Will OpenAI put up a blog post to explain that, well actually, that's a lawful kind of surveillance? Do an AMA about how, despite how it may look, that autonomous drone swarm had proper human supervision?

OpenAI does enough polling to understand that Americans already distrust and even openly loathe AI, even as they increasingly turn to it for work and school. How does it think Americans will feel when GPT models are powering ICE raids or causing civilian casualties in wars abroad? The company may have tied its own hands.

In the end, the truth about US military operations always seems to come out one way or another. And when it does, I suspect the "all lawful use" standard that OpenAI agreed to will have permitted a far wider range of operations than we are now being told are possible. The problem with telling everyone what they want to hear is that eventually reality catches up with you. The people who will live under AI-powered surveillance, and the people in the flight path of AI-assisted drone swarms — they're the ones who are going to find out what OpenAI actually agreed to do. And I suspect it will be much more than the company now expects us to believe.

On a bonus episode of the podcast: Kevin and I compare notes on a tumultuous weekend for Anthropic, OpenAI, the Pentagon, and the country. Recorded on Saturday morning.

Apple | Spotify | Stitcher | Amazon | Google | YouTube

Following
Everyone has something to say about the Pentagon, Anthropic, and OpenAI

What happened: As the Pentagon's "all lawful use" drama unfolded, people started quitting ChatGPT and switching to Claude. Reddit posts encouraging people to boycott ChatGPT have been getting tens of thousands of upvotes, and Anthropic's Claude app reached No. 1 on the App Store. (And Anthropic was quick on the draw, releasing an improved tool that helps people switch by loading context from other AI apps into Claude.)

Anthropic received strong declarations of support from tech workers, too. A coalition representing 700,000 employees across Amazon, Google, and Microsoft demanded their companies "reject the Pentagon's advances." And an open letter from Google and OpenAI employees asked leaders to "refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight."

Why we're following: Oh god. Where to start? This week's events will have long-lasting effects on Anthropic's business; OpenAI's reputation; the public's view of AI; the future of warfare; and American citizens' right to privacy. To say nothing of my cortisol levels. We're left wondering whether remaining tech stakeholders like Amazon and Google will listen to workers and the public, or negotiate new contracts with the DoD that allow their tech to be used to surveil citizens and make kill decisions.

What people are saying: Pop star Katy Perry weighed in on the situation on X with a screenshot of her signing up for a Claude Pro subscription.