OpenAI’s retreat from shedding its nonprofit control raises new questions
Hello and welcome to Eye on AI. In this edition…OpenAI ditches its for-profit transition plans…Google is giving children under 13 access to its chatbot…and 250 CEOs are proposing that computer science and AI be required curriculum for K-12 education.
Sam Altman and OpenAI did something unusual on Monday: They backed down.
The plan to spin out OpenAI, the $300 billion maker of ChatGPT, into an independent, for-profit company was scrapped.
To call the move an about-face is an understatement. OpenAI had been pursuing this corporate restructuring for more than a year, first quietly, and then—as details leaked—in the public eye. The public arena brought lots of critics, from regulators and AI safety advocates to competitors—all of whom objected to OpenAI detaching from its nonprofit parent company. Elon Musk went so far as to file a lawsuit against the business revamp.
By abandoning its controversial plan, OpenAI intended to throw all these monkeys off its back. (The company cited consultations with civic leaders and the California and Delaware attorneys general for its decision, though Altman apparently told reporters on a conference call that Musk’s lawsuit had nothing to do with the change. Sure.) But the company may simply have swapped these monkeys for another set of furry creatures (gremlins, perhaps?). That’s because, behind all the corporate lingo and big pronouncements of Monday’s announcement, there are still many thorny questions about what this actually means.
OpenAI is controlled by a nonprofit with a mission of ensuring that AI benefits all of humanity. ChatGPT, OpenAI’s flagship service, is a commercial product with a goal of making money. The inherent conflict of interest between those two goals is what led to the original plan to make a clean break between the two entities.
The new approach, which will make OpenAI’s for-profit group a public benefit corporation controlled by the nonprofit, simply pushes the problem out of sight, leaving it to fester until the next crisis. Will the nonprofit be able to exercise any meaningful control over the public benefit corporation’s day-to-day activities? The nonprofit has announced a set of philanthropic advisors, but will they have any actual veto power over OpenAI’s plans, or will they be more like Meta’s infamously impotent “oversight board”?
And what kind of mechanisms are there, if any, to ensure the nonprofit’s long-term control over the public benefit corporation? Altman acknowledged the massive costs of OpenAI’s long-term goal of creating AGI—trillions of dollars, he said. The investors who pump in that money will presumably want to take equity in the company. Does the nonprofit have supervoting shares, or could new investors dilute its stake and its control? Altman reportedly said on Monday that a $40 billion funding deal with Japan’s SoftBank remains on track, even though the deal initially stipulated that OpenAI needed to convert to a fully for-profit corporation. But OpenAI needs a lot more investors, who may not be so accommodating.
In fact, Microsoft, the largest investor in OpenAI’s for-profit arm, is still negotiating with OpenAI over the new arrangement, according to Bloomberg.
And finally, there’s Musk himself—the OpenAI cofounder turned rival. Musk, who invested $100 million in OpenAI in its early days, has not only sued OpenAI, but has also made an unsolicited bid to buy the company for $97.4 billion. Altman tersely rejected that deal earlier this year. Will Musk drop his lawsuit? Or make another run at acquiring the company (assuming the bid was ever serious)? Musk’s lawyer issued a statement on Monday calling OpenAI’s move a “transparent dodge” that doesn’t resolve the core issue. As of Tuesday morning, however, Musk had also done something unusual: He hadn’t posted anything on X about the OpenAI news.
Alexei Oreskovic alexei.oreskovic@fortune.com @lexnfx
How boards can effectively oversee AI to drive value and responsible use
AI is reshaping businesses—and boards have a big role to play. From responsible use, value creation, your talent strategy, and more, it's a lot to oversee. Explore 6 key areas for effective oversight. Read more
AI for children at Google. Google is planning to give children under 13 access to its Gemini artificial intelligence chatbot this week, according to the New York Times. Google says specific guardrails will prevent the chatbot from producing unsafe content. Child-protection groups have raised red flags about how AI systems can influence young users. Read more here.
AI high school curriculum. More than 250 CEOs, including Microsoft CEO Satya Nadella, Etsy CEO Josh Silverman, and Uber CEO Dara Khosrowshahi, signed an open letter proposing that computer science and AI be required in K-12 curricula across the country. You can read about it in TechCrunch.
Anthropic’s new science program. Anthropic said on Monday that it was launching an AI for Science program, which will give researchers up to $20,000 in credits for using the company’s API to support biology and life sciences applications. Details of the program can be found here and rules around the program here.
May 19-22: Microsoft Build, Seattle
May 20-21: Google I/O, Mountain View, Calif.
May 20-23: Computex, Taipei
June 9-13: WWDC, Cupertino, Calif.
July 13-19: International Conference on Machine Learning (ICML), Vancouver
July 22-23: Fortune Brainstorm AI Singapore. Apply to attend here.
Sept. 8-10: Fortune Brainstorm Tech, Park City, Utah
Thanks for reading. If you liked this email, pay it forward. Share it with someone you know. Did someone share this with you? Sign up here. For previous editions, click here. To view all of Fortune's newsletters on the latest in business, go here.