The Cloister and the Starship
The university of the future must revive ancient ways of learning to enable human mastery of artificial intelligence.
Near the beginning of Neal Stephenson’s 1995 novel The Diamond Age, there is a memorable exchange between a computer engineer named Hackworth and an “equity lord”—a tech billionaire, as we would say—with the wonderful surname of Finkle-McGraw. The engineer alludes to some research work he has been doing.
I think a lot about P.I. (pseudo-intelligence, Stephenson's term for what we now call artificial intelligence) these days, not least because of the catastrophic effect it is having on actual intelligence. The Diamond Age, like so many of Stephenson's novels, offers us a troubling glimpse of a future we have already partially reached. It is a world in which nanotechnology is ubiquitous. It is a world in which "matter compilers" can, on demand, provide basic food, blankets, and water, in a fusion of universal basic income and 3D printing. We are not quite there yet, it's true. But in other, familiar respects, software has eaten this world. Venture capitalists and engineers reign supreme.

The twist in the tale is that this networked society has reverted to a kind of tribalism. The most powerful of the phyles (ethnic tribes) are the Anglo-Saxon Neo-Victorians, who have revived the social strictures of the mid-19th century. As might be expected of Neo-Victorians, there is a slum-dwelling underclass of tribeless thetes. But one little girl, Nell, finds her way out of the Shanghai gutter when she is given a stolen copy of a highly sophisticated interactive book, The Young Lady's Illustrated Primer, which a modern reader will recognize as a large language model or chatbot. Immersive, interactive, and adaptive, it gives Nell the education she would otherwise never have received.

If all this seems strange to you, it is no stranger than the present-day world would have seemed to my 19-year-old self if he could have read about it in 1983. Let me take you back to the world as it was when I was finishing my freshman year at Oxford.

It was Ronald Reagan's first term, and the year Margaret Thatcher was reelected. It was the final peak of Cold War tension. Korean Air Lines Flight 007 was shot down by a Soviet fighter jet, killing all 269 people on board, including U.S. Congressman Larry McDonald. Soviet officer Stanislav Petrov averted a worldwide nuclear war by correctly identifying a warning of attack by U.S. missiles as a false alarm. 1983 was also the year of Able Archer 83, a NATO exercise the Soviets feared was cover for a U.S. nuclear first strike. Meanwhile, although Deng Xiaoping's economic reforms were underway, China was still mired in poverty. The People's Daily reported that the country would run out of food and clothes by the year 2000 if the Party's policy of population control was unsuccessful.

In 1983, personal computing, the Internet, and mobile telephony were all still in their infancy. It was the year that saw the migration of the ARPANET to TCP/IP, the first commercial mobile cellular telephone call, and the release by Microsoft of its word-processing program Multi-Tool Word, later renamed Word. The only times I used a computer were when I was typesetting a student magazine I edited. We communicated by penning letters or notes, which were delivered to wooden pigeonholes in our colleges. We used coin-operated public phone boxes to call our parents. I never played a video game and looked down on those who did. I read books—a lot of books.

Had I read a book that envisioned the hyperconnected world of today, it would have seemed as outlandish as Stephenson's imagined Diamond Age. In 1983 we had no idea how radically the world would change in our lifetimes. Nearly all of us assumed we would do one of five things: law, media, government, academia, or banking. Not one of us considered starting a business. You might think our undergraduate studies were a poor preparation for the fast-approaching world of the Internet.
But they were not, because at root Oxford taught us eight fungible skills:
The future shock awaiting today's undergraduates will be even greater than ours was. This is because artificial intelligence has broken through to the extent that Stephenson's Young Lady's Illustrated Primer—a single device capable of delivering a complete, personalized education to its owner—is now conceivable. Of course, when Sam Altman says we are on the brink of a new Renaissance and calls o3 "genius-level intelligence," it is tempting to scoff. We should not. "Do you think you're smarter than o3 right now?" Altman asked the Financial Times rhetorically in a recent interview. "I don't … and I feel completely unbothered, and I bet you do too. I'm hugging my baby, enjoying my tea. I'm gonna go do very exciting work all afternoon … I'll be using o3 to do better work than I was able to do a month ago."

Altman has every reason to want to soothe us. But it would be strange to be completely unbothered by the speed with which young people are adopting AI. As Altman himself has noted, "older people use ChatGPT like Google. People in their 20s and 30s use it as a life advisor." And college students "use it like an operating system. They set it up in complex ways, connect it to files, and have detailed prompts memorized or saved to paste in and out."

Only 20% of baby boomers use AI weekly, according to Emerge, compared to 70% of Gen Z. AI usage is already spreading faster than Internet usage did at a comparable stage. According to Hartley et al. (2025), "LLM adoption at work among U.S. survey respondents above 18 has increased rapidly from 30.1% as of December 2024, to 43.2% as of March/April 2025." ChatGPT now has around 1 billion active users; Google's Gemini has over 400 million monthly active users.

And the use cases for AI keep multiplying. McKinsey has a chatbot named Lilli, trained on all its IP. BCG has Deckster, a slide-deck editor. Rogo, funded by Thrive, is a chatbot for investment banking analysts. Duolingo is replacing contract workers with AI.

Meanwhile, the computational power (compute, for short) required by each successive generation of large language models keeps growing. Two and a half years ago, when ChatGPT launched, it required around 3% of the compute required by today's state-of-the-art models. Just two and a half years from now, according to Peter Gostev, the models will have 30 times more compute than today's and a thousand times more than ChatGPT had when it launched.

As Toby Ord has noted, this also drives up the cost. While charts of AI models' performance "initially appear to show steady scaling and impressive performance for models like o1 and o3, they really show poor scaling (characteristic of brute force) and little evidence of improvement between o1 and o3." This is because in most such charts it is the x-axis that is on a log scale. That tells us that "the compute (and thus the financial costs and energy use) need to go up exponentially in order to keep making constant progress."

This in turn means that we are witnessing the biggest capital expenditure boom since the railroads. Capex spending by the big semiconductor companies and the so-called hyperscalers is running at a quarter of a trillion dollars a year. Add in research and development spending, and the estimated total for 2022-2027 is nearly $3.5 trillion—11% of U.S. gross domestic product.

Partly because the AI works so well and partly because it costs so much, we are also in the early phase of large-scale job destruction.
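To return for a moment to Ord's point about log-scale charts, the arithmetic can be made concrete with a small back-of-the-envelope sketch. The numbers below are illustrative assumptions of mine (five benchmark points per tenfold increase in compute), not figures from Ord or any lab: if performance is roughly linear in the logarithm of compute, then each equal step of progress requires multiplying compute, and therefore cost, by a constant factor.

```python
# Back-of-the-envelope sketch of the log-scale point. The rate of five
# benchmark points per tenfold increase in compute is a hypothetical
# assumption for illustration only.

def compute_needed(score_gain: float, points_per_decade: float, base_compute: float = 1.0) -> float:
    """Compute required to gain `score_gain` benchmark points, assuming
    performance rises by `points_per_decade` points for every 10x compute."""
    return base_compute * 10 ** (score_gain / points_per_decade)

for gain in (5, 10, 15, 20):
    print(f"+{gain} points -> {compute_needed(gain, points_per_decade=5):,.0f}x compute")

# Output: +5 -> 10x, +10 -> 100x, +15 -> 1,000x, +20 -> 10,000x.
# Equal steps of progress on the y-axis demand multiplying compute (and cost)
# by a constant factor, which is exponential growth on a linear x-axis.
```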
As Deedy Das has pointed out, "Google, Microsoft, Apple, Tesla, Meta, Nvidia and Palantir—the biggest tech employers—have collectively stagnated headcount … This is why Computer Science majors can't get jobs. Big tech hypergrowth era is over." And we are beginning to see absolute job losses in areas such as professional writing and call-center work. AI is just getting started. Within a few years, it may destroy even more white-collar jobs than the number of blue-collar jobs lost to Chinese competition after China joined the World Trade Organization in 2001. The AI revolution has a geopolitical dimension, too, as it is now the crucial field of superpower competition in Cold War II. In the words of tech guru Mary Meeker et al.: