The big question about AI
Big, simplistic questions are kind of my specialty. In past podcast documentary series I’ve worked on, I’ve asked questions like, “What is the internet doing to us?” “How is the American Experiment going?” and “Should the Supreme Court be so Supreme?” I’ve found that when you muster the courage to ask the big dumb questions, you often end up with the most surprising and most nuanced answers.
The answers I got from the greatest minds in AI to my big dumb question — “Should we be afraid of artificial intelligence?” — surprised me. In many cases, AI researchers replied with fantastical tales, parables, and thought experiments — a story of an apocalypse where everyone we know has been turned into paper clips; a story of AI as a mischievous hyperintelligent octopus; and a parable that sounded like it was out of the Bible, imagining AI as a savior.
With all these fictional takes on my pretty basic question, it did feel at times like people were evading the issue, and it became apparent to me just how little we know about a technology that is still in its infancy.
In the series, Future Perfect writers Kelsey Piper (who’s been writing and thinking about AI since she was a kid) and Sigal Samuel (a former religion reporter converted to AI reporting) acted as my guides down the AI rabbit hole.
Early in my reporting, Kelsey shared with me an adage: to solve artificial intelligence, you first have to solve all of human philosophy. As a normie, I was struck by the way AI leads us to ask questions about morality and about our greatest hopes and dreams as a people, and by how it causes us to reflect on our greatest foibles as human beings. Sigal points out that it’s not surprising, then, that the more you hear Silicon Valley types talk about AI, the more you hear echoes of religion.
Indeed, the deeper producer Gabrielle Berbey (who you’ll hear in the series) and I dove into AI reporting, the more it felt like we were reporting not on a technology but on a series of religions, where belief (or not) in the potential of AI to surpass human intelligence drives money and power.
Until I took a job at Vox, it was easier to bury my head in the sand about AI, to avoid the elixir of life or snake oil Silicon Valley is peddling, depending on your perspective. But it became personal for me when, a couple of months after I joined, Vox’s parent company announced a partnership with OpenAI under which the AI company could train its language models on employees’ work, including mine. I felt like I had no choice but to grapple with the technology, what it is, and whether I should be afraid of it.
Good Robot is told from that perspective — deeply invested, earnestly curious, striving to be less afraid.
The wizards of AI safety
In episode one, out this week, I learned that the AI world is populated by a cast of eccentric characters on a quest to build good robots, characters who tell strange tales in an attempt to help people understand this complex technology and its hazards.
The first is Eliezer Yudkowsky, who spun the internet fame he earned for his Harry Potter fan fiction into a movement, one that prized rationality and logic as the tools to identify and address the world’s biggest problems. Many of his adherents wound up moving to the Bay Area and working on AI, and like Yudkowsky, they gravely worry about the possibility that good robots could turn very bad. Like, apocalyptically bad, even if not in the way you might think: In one of his trademark thought experiments imagining those end times, a powerful AI turns the entire galaxy into paper clips.
I soon learned that Yudkowsky himself is a polarizing figure, and that another group of researchers insists the fear of an AI apocalypse is actually a massive distraction from the real problems with AI, problems that are happening right now. Meet the AI ethicists in episode two, dropping on Saturday.
I hope you’ll follow Good Robot on the Unexplainable podcast feed wherever you find ’em.
— Julia Longoria, editorial director, Unexplainable