Here's this week's free edition of Platformer: a behind-the-scenes look at the first-ever Hard Fork Live, including what Sam Altman said to me backstage before a much-discussed interview.

Do you value journalism that seeks to ask hard questions of those in power? If so, consider supporting us by upgrading your subscription today. We'll email you all our scoops first, like our recent one about Meta's new hate-speech guidelines. Plus you'll be able to discuss each edition with us in our chatty Discord server, and we'll send you a link to read subscriber-only columns in the RSS reader of your choice.
Programming note: This is a column about AI. My boyfriend works at Anthropic. See my full ethics disclosure here. Also, with this edition, we are now on our summer break. See you back here July 14.

This week, we welcomed nearly 700 people to a sold-out SF Jazz Center for the first-ever Hard Fork Live. It was incredible to meet so many Platformer readers in person; thank you to everyone who came out. Today, I want to share some behind-the-scenes details about our interview with OpenAI's Sam Altman and Brad Lightcap, and what we learned from a spiky but ultimately revealing discussion about the company's efforts to build artificial general intelligence.

When I first saw Altman on Tuesday night, I had to take my pants off.

My co-host Kevin Roose and I had just gotten a demo of Skip Mobility's $5,000 MO/GO pants, which help people with mobility issues stand up and walk around by boosting their muscles with electrical power. The demo was over, Kevin and I were backstage, and the robot pants had to come off. Altman and Lightcap were in the wings with us, laughing gently at the spectacle. With all the dignity I could muster, which was none, I pulled on my other pants and thanked them for coming.

Altman seemed to be in a good mood. He encouraged us to mix it up with him on stage — "interesting questions only," he said. I promised him I'd try, and told him he should feel free to troll me onstage if he wanted to.

"I don't strike first," he said. "But I do strike back."

I wasn't certain that anything on my list of questions counted as a strike, exactly. But it was fair to expect some disagreement on stage. Kevin and I have both written critically about OpenAI and Altman in the roughly year and a half since he last came on Hard Fork — two days before he would be briefly fired, as it would surreally turn out — and the New York Times has sued OpenAI and Microsoft over allegations of copyright infringement related to the training of large language models. (I'm an independent contractor and do not speak for the Times on this or anything else.)

Kevin and I went back on stage and began our next segment. We planned to thank our families for coming and spend a few minutes setting up what an extraordinary year OpenAI has been having: its $6.5 billion deal to acquire a Jony Ive startup; a $200 million deal with the Pentagon; a partnership with Mattel to build toys; and reports of all sorts of other forthcoming products, including a browser, a social app, and a Google Docs competitor.

Before we could get to any of that, though, Altman and Lightcap simply walked on stage, in the manner of pro wrestling heels interrupting a babyface promo. You can see it at the start of the video: they come through the doors while my back is turned, and I'm extremely surprised to see them. The rest of our show had run with military precision, thanks to the Times' great events team, and it seemed unfathomable to me that they would have sent the OpenAI executives out early. In the moment, though, I couldn't think of another reason they might be there. And so, after some back and forth, we invited them to sit down and launched into our questions.

Shortly thereafter, I began to intuit the real reason Altman and Lightcap had walked out early: they were hoping to rattle us. Earlier this month, Lightcap authored a blog post criticizing the Times' legal team for requesting that OpenAI retain ChatGPT logs and other user data indefinitely while the case is litigated.
"We strongly believe this is an overreach," he wrote. "It risks your privacy without actually helping resolve the lawsuit. That’s why we’re fighting it." The Times declined to comment at the time. But a key aspect of the case is whether ChatGPT reproduces copyrighted materials; it stands to reason that the Times would want to review ChatGPT outputs to make its case. And so, soon after sitting down, Altman asked us: "Are you going to talk about where you sue us because you don’t like user privacy?” And I thought — I thought this guy doesn't strike first! We tried to explain that, as humble journalists, we have no knowledge of or influence over the Times' legal strategy. Still, Altman asked us to share our personal opinions on the case. We declined. Eventually, we moved on. That night, Altman sent us a nice note saying he had taken "the joke" a bit too far. I accepted his apology. Having conducted many on-stage interviews with tech executives over the years, I've listened to more than my share of bland talking points that I forget immediately upon walking off stage. The conversation with Altman and Lightcap, aggressive though it was at times, was also illuminating. I learned something from it. And what else do you want from an interview? One question I had hoped to ask Altman, but didn't get to before time ran out, was how he had changed as a leader since the last time we had spoken to him. He has recently hired several top executives to delegate various aspects of the business; how has his leadership style evolved? I'm not sure what Altman would have told us that night. But I do think he showed us: walking on stage early, seeking to knock us off our balance, pushing a bit to see if, while flustered, we might embarrass ourselves. One story that got told a lot about Altman in the aftermath of his firing was that too often he would tell people what they wanted to hear, rather than confronting issues directly. On Tuesday night, he was more than happy to confront us. Years ago, when the members of his nonprofit board had a more diverse range of views about how to build AI systems safely, Altman had to play by other people's rules. Our discussion Tuesday felt like an indication that the only rules he is playing by these days are his own. Other takeaways: Altman said President Trump has a good understanding of AI. "I think he really gets it,” Altman said. This turned out, unexpectedly, to be a laugh line for those in attendance. Altman followed up by saying: “I think he really understands the importance of leadership in this technology.” OpenAI disagrees with Anthropic about the likelihood of AI causing near-term job loss. In short, Lightcap and Altman said, it takes technology longer to diffuse through society than Anthropic CEO Dario Amodei is suggesting. (Amodei has said up to half of entry-level white collar jobs could disappear due to AI in the next one to five years.) Altman said that individual job losses would be painful, but that a surplus of benefits to the public would come from AI. Altman says AI is evolving too fast for policymakers to effectively regulate it. Kevin asked him why Altman's enthusiasm for AI regulation seems to have dimmed since he began the company. "I have become a bit more — jaded isn’t the right word — but it’s something in that direction, about the ability of policymakers to grapple with the speed of technology,” he said. The OpenAI executives played down any fears that Mark Zuckerberg was going to poach too many of their top researchers. 
But news emerged in the next few days that Zuckerberg had, in fact, lured away at least four people, including one who helped to build o1. Meta's efforts may not have been as easy to dismiss had we known that at the time. Altman talked up the mutual benefits of the Microsoft-OpenAI partnership. Amid near-weekly headlines about tensions between the companies related to OpenAI's efforts to convert into a more traditional for-profit enterprise, Altman suggested that much of the journalism is false or overblown. "Obviously in any deep partnership, there are points of tension, and we certainly have those,” he said. “But on the whole, it’s been like really wonderfully good for both companies.” I think that OpenAI social product really is coming. I took the chance to ask Altman something I have long wondered: why keep posting all your news and takes on X, which is owned by someone who is actively destroying your company? Why not post it somewhere else? Altman asked me: where else would I put it? I suggested that maybe he would build his own social product. He arched his eyebrows suggestively. I'll be very curious to see what comes out of that one. Sponsored Fly.io lets you spin up hardware-virtualized containers (Fly Machines) that boot in milliseconds, run any Docker image, and scale to zero automatically when idle. Whether your workloads are driven by humans or autonomous AI agents, Fly Machines provide infrastructure that's built to handle it: - Instant Boot Times: Machines start in milliseconds, ideal for dynamic and unpredictable demands.
- Zero-Cost When Idle: Automatically scale down when not in use, so you're only billed for active usage.
- Persistent Storage: Dedicated storage for every user or agent with Fly Volumes, Fly Managed Postgres, and S3-compatible storage from Tigris Data.
- Dynamic Routing: Seamlessly route each user (or robot) to their own sandbox with Fly Proxy and fly-replay.
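To make that last bullet concrete, here's a minimal sketch (ours, not the sponsor's) of the fly-replay pattern in Go: a small router answers each request with a fly-replay header, and Fly Proxy replays the request on that user's dedicated Machine. The X-User-ID header, the machine IDs, and the lookup map are all hypothetical placeholders; the fly-replay response header itself is a real Fly Proxy feature.

```go
package main

import (
	"log"
	"net/http"
)

// Hypothetical mapping from user ID to Fly Machine ID. In a real app this
// would live in a database, or Machines would be created on demand via the
// Fly Machines API.
var userMachines = map[string]string{
	"alice": "e28650eb643985",
	"bob":   "17811953c25d89",
}

func router(w http.ResponseWriter, r *http.Request) {
	user := r.Header.Get("X-User-ID") // assumed auth header, for illustration only
	if machineID, ok := userMachines[user]; ok {
		// Setting fly-replay asks Fly Proxy to replay this request on the
		// named Machine instance instead of serving it from this router.
		w.Header().Set("fly-replay", "instance="+machineID)
		return
	}
	http.Error(w, "no sandbox assigned for this user", http.StatusNotFound)
}

func main() {
	log.Fatal(http.ListenAndServe(":8080", http.HandlerFunc(router)))
}
```

Pair this with Fly's scale-to-zero settings (auto_stop_machines and auto_start_machines in fly.toml) and idle sandboxes cost nothing until the proxy wakes them.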
If your infrastructure can't handle today's dynamic and automated workloads, it's time for an upgrade. Build infrastructure ready for both humans and robots. Try Fly.io.

On the podcast this week: It's part one of Hard Fork Live! You'll hear our full interview with Sam Altman and Brad Lightcap, along with a surprise appearance by San Francisco Mayor Daniel Lurie. Next week, you'll hear our interview with Stripe CEO Patrick Collison and Skip CEO Kathryn Zealand.

Apple | Spotify | Stitcher | Amazon | Google | YouTube

Is scraping books for AI fair use?

This week saw two significant developments in the legal battle over whether training large language models on vast amounts of copyrighted data amounts to fair use.

On Tuesday, US District Judge William Alsup granted summary judgment to Anthropic, affirming that training LLMs with copyrighted data is, in fact, fair use. He accepted the argument of AI labs that an AI model learning from what it reads is similar to humans learning from what they read and generating new works from it. Authors argued that they may be unable to compete effectively against LLMs that can instantly generate superior works built on their own labor; Alsup said, in effect, too bad.

On the other hand, Anthropic built its model with digital copies of more than 7 million pirated books. That isn't fair use, Alsup ruled, and that part of the case will go to trial.

Then yesterday, US District Judge Vince Chhabria ruled in favor of Meta in a similar case brought by 13 authors. Chhabria wrote that he is sympathetic to the authors' position, but found that they had made the wrong legal arguments. Still, Chhabria's decision accused Alsup of "brushing aside" the authors' claims of market harm — the idea that LLMs will outcompete authors. And he seemed to invite other plaintiffs to pursue cases with more rigorous legal arguments. (One such group filed a similar complaint against Microsoft just yesterday.) For the moment, though, the legal system has taken the side of chatbots over human authors.

Elsewhere in AI copyright: Getty Images dropped its claims of copyright infringement against Stability AI in the United Kingdom after failing to establish proper jurisdiction for the claims. Getty is now arguing a narrower case focused on the idea that using Stability's models in the UK constitutes secondary infringement.

One thing I keep hearing about Meta's intention to acquire a 49 percent stake in Scale AI is that it has made Scale employees — Scaliens, as they call themselves — quite unhappy. The company's visionary founder is leaving as CEO, its biggest customers are bailing, and its future is now much more uncertain.

Unhappy employees are likelier to talk to journalists than happy ones. And so I was interested this week to see two stories about Scale's failures that paint the company in a rather dim light.

In Inc., Sam Blum writes that Google was unhappy with Scale after using it to find contractors to train Gemini. Google wanted Scale to find experts to create high-quality training data for Gemini; instead, contractors flooded the company with "spam," according to internal logs viewed by the reporter. In some cases, rather than submitting original training data, Scale's contractors appeared to have submitted text generated by ChatGPT. Other contractors listed their accounts for sale on platforms like Reddit, allowing non-experts another avenue to create bad data for money. (A Scale spokesman told Inc.
that the documents primarily show how good Scale is at detecting spam before sending training data to customers.)

Meanwhile, Business Insider reported on a security failure: Scale AI has repeatedly tracked work for major customers in public Google Docs, making them accessible to anyone with the link. The company also posted public documents containing contractor email addresses and sensitive competitive information, such as how Scale contractors used ChatGPT in an effort to improve Gemini (then called Bard). While there's no evidence of a security breach, Scale said it would investigate the issue; among other things, the exposure could enable social engineering attacks.

I don't expect any of this to scuttle Meta's deal. But it does embarrass the company, which for some Scaliens I suspect was the point.

Elsewhere in the Meta AI reset: the company successfully poached three researchers from OpenAI's Zurich division, along with Trapit Bansal, a pioneer in reinforcement learning who helped to build OpenAI's groundbreaking o1 model. (I'd guess Bansal is getting one of those $100 million-plus compensation packages.) Meta is also working to acquire key assets of PlayAI, a small AI voice-cloning startup.

Governing

- US lawmakers reintroduced the Open App Markets Act, which would force Apple and Google to open up their app stores and create legal requirements for sideloading, among other provisions. (Amber Neely / AppleInsider)
- Microsoft is reportedly pushing to remove a clause in its contract with OpenAI that lets OpenAI limit Microsoft's access to its products once OpenAI declares that artificial general intelligence has been achieved. (Berber Jin / Wall Street Journal)
- Altman shared screenshots of messages with the founder of a company now suing OpenAI for trademark infringement, showing that the two had a friendly exchange until Altman revealed OpenAI was working on a competing product. (Emma Roth / The Verge)
- A poll of 2,000 US teachers found that 6 in 10 had used AI in their work in the past year. (Jocelyn Gecker / AP)
- A look at the AI arms race in hiring, as recruiters use AI tools to screen a flood of AI-generated resumes. (Sarah Kessler / New York Times)
- Anthropic released new research into uses of Claude as a therapist, coach, and romantic role-play partner. It found that conversations generally grow more "positive" the longer they go on, but did not investigate cases where similar uses lead users into conspiracies. (Megan Morrone / Axios)
- Reddit CEO Steve Huffman said that verifying a user's humanity is a top priority for the company this year amid a surge of AI-generated content submitted to the site. (Daniel Thomas and Hannah Murphy / Financial Times)
- Creative Commons launched a new tool, CC signals, to let dataset owners dictate when their materials can be reused by machines (such as for training large language models). (Sarah Perez / TechCrunch)
- The United Kingdom's competition regulator is preparing to designate Google with a status that would require it to adjust search rankings and give publishers more control as a check on its dominance. (Suzi Ring and Tim Bradshaw / Financial Times)
- Pornhub and other adult websites will introduce age assurance checks in the United Kingdom next month in accordance with new rules. (Chris Vallance and Liv McMahon / BBC)
- The European Union's competition chief said that enforcement of its Digital Markets Act would not be used as a bargaining chip in trade negotiations with the United States over tariffs. (Samuel Stolton and Oliver Crook / Bloomberg)
- Export controls appear to have been effective in limiting DeepSeek's ability to quickly release and serve its forthcoming R2 model, according to Chinese cloud providers. (Qianer Liu and Juro Osawa / The Information)
Industry