Several months ago, I grabbed a coffee with a source of mine whose expertise is technology policy. We talked about how bonkers it is that the billionaire leaders of AI companies say they aren’t positive their machines won’t eventually harm humans.
Not just economically, in the sense that some writing, accounting, and coding jobs may one day be outsourced to bots. No, these AI leaders can’t even promise us that their creations won’t literally kill us. Silicon Valley has even coined a term for the estimated probability of a large-scale catastrophe. It’s called P(doom), and the estimates are much higher than I’d like them to be...
I’m far from convinced this is our actual future. But given that AI experts don’t rule it out, I started asking them what they think the solution is. A recurring answer was to genetically optimize future humans who will be smart enough to come up with an actual plan.
I’d prefer the tech gods simply not build the thing they think—in the future—may be capable of killing us. But I’m not genetically optimized, so what do I know?
—Abby Vesoulis