Presented by Verisign: How the next wave of technology is upending the global economy and its power structures
Sep 20, 2024
 
POLITICO Digital Future Daily

By Derek Robertson


Vilas Dhar.

Hello, and welcome to this week’s installment of the Future in Five Questions. This week I interviewed Vilas Dhar, an artificial intelligence researcher, president of the AI-focused philanthropic outfit the Patrick J. McGovern Foundation, and a member of the United Nations’ High-level Advisory Body on AI. Dhar took the occasion of this week’s U.N. General Assembly to discuss with us his vision for global governance of AI, the fundamental changes it will bring to how we relate to technology, and why AI should be provided as a public service. An edited and condensed version of the conversation follows:

What’s one underrated big idea?

Global governance of AI. We've done the work to demonstrate why AI requires cross-border regulation; how AI is being experienced by people is not constrained by boundaries or nation-states. We need some sort of consensus, multilateral approach to it. The AI Advisory Body’s report makes concrete recommendations about what we need to put in place to begin to regulate AI as a global good, from scientifically qualified, credentialed advisory bodies to global investment in AI capacity.

Like all good ideas whose time has come, people pay attention and start fighting against them. We're seeing in the global discourse how people are trying to push back against global governance, whether it's domestic entities that just don't want to give over power to the U.N., or states that are pushing for authoritarian reasons against having any kind of accountability for their actions in a global construct.

What’s a technology that you think is overhyped?

I think there’s an intention to scare people into thinking that AI is going to be part of every interaction between people and their lived environments. I'll just be honest with you, I don't need to talk to my refrigerator or my microwave every day. I don't need AI to be a part of every relationship I have with technology. I think the fact that consumer goods companies are trying to figure out how to integrate AI, that every service provider is trying to figure it out, is a good thing generally for innovation, but I don't think that we as consumers need to agree to have AI in every relationship we have.

What book most shaped your conception of the future?

I wanted to give two different answers to this. One is Thomas Kuhn’s “The Structure of Scientific Revolutions.” The book shifts the model of scientific inquiry from one that's linear and cumulative, this idea that we take what happened last and then we'll make an improvement on it, and actually redefines it to what we are experiencing today with AI, which is the idea of paradigmatic shifts that fundamentally change our understanding of what's possible. I'm a computer scientist, I've worked in AI for two-plus decades, and the shifts that AI is creating to me are not about the quality of the next foundation model, or the number of parameters. It is about how AI tools are going to fundamentally change our expectations of social, economic and political constructs.

We can begin to attack basic premises, like the idea that supply and demand intersect at an optimal frontier, because AI-driven automation changes the entire supply side of that equation. We're going to have to come up with a new economic model for how our society functions, and that's not going to happen through linear and iterative processes. It means that in our lifetime, we will have a transformative shift in foundational assumptions about the world we live in.

The other book is Hermann Hesse’s “Siddhartha,” which takes us to a place of introspective redetermination when the world changes around us. If we accept the assumption that AI will change things in really meaningful ways, books like “Siddhartha” force us to ask the question, what do we decide to become as individuals when the world we operate in changes dramatically around us? Where do we find our source of meaning and truth? What is it that anchors us in our human identity?

What could the government be doing regarding technology that it isn’t?

The challenge that I face every day when I talk to communities all across the country, all across the world, is that we've all given up this sense that we have agency in making decisions about technology. People feel that technology decisions are made by technology companies, and they're responded to and constrained by governments. I think that's a terrible way to think about how we make technology decisions. Governments should continue to do what they do in terms of regulating the actions of technology companies, but I think governments need to own the mantle of also being the primary decision makers about our technological future. I think they need to invest significantly in building capacity around these tools, which means investing in educational systems and skilling services, but also in things like building public access data sets and public compute resources.

In the U.N. report that we issued Thursday, we made a call for a global fund for AI that invests in local capacity building. Governments aren't doing what I think their mandate should be, which is moving public funding to develop public capacity around AI. If you think about the really aspirational elements of science and technology policy, and you think about the space program, the government was instrumental in creating an aspirational vision for what a space program meant for America. I don't think we've had that yet around AI. Instead of worrying about what tech companies say will be the future of AI and trying to regulate against it, we should be investing in a new vision of what positive AI looks like, and building public support for that.

What has surprised you the most this year?

The pace and speed by which communities and nonprofit organizations, entities that are as far away from the technical reality of AI wonkiness as possible, are coming forward to step in with real opinions and viewpoints about how these tools might affect their interests and what they should do to respond.

I'm seeing nonprofit organizations embrace AI to build solutions that will quickly scale, that will address the most significant vulnerabilities that their communities face. At the same time, nonprofits that stayed out of debates about the internet and about social media are now stepping in to say: We have a point of view on the data sovereignty of the communities we support. We have a point of view on ensuring that AI systems aren't used in ways that advance authoritarian interests and surveillance. We have an interest in ensuring economic opportunity for our populations.

It's happened in really 12 months, 24 months: a massive flowering of interest, engagement, expertise and development. That gives me a lot of hope that, even if we didn't do so great with the formation of the internet to ensure equitable outcomes, and we definitely didn't do so great with social media, we have realized AI decisions have to be made by all of us.

 

A message from Verisign:

By maintaining 100% DNS uptime for 27 years, Verisign’s stewardship of the .com top-level domain (TLD) has helped keep the internet running smoothly for users around the globe. Learn more about Verisign and the .com TLD.

 
clegg the ai critic

Meta president of global affairs Nick Clegg. | Kenzo Tribouillard/AFP via Getty Images

Meta’s Nick Clegg has some comments on how the previous United Kingdom government handled AI.

POLITICO’s Vincent Manancourt and Tom Bristow reported that the Meta global policy chief said former Prime Minister Rishi Sunak was too focused on the downside risks of AI at the expense of its potential benefits.

Clegg spoke on a podcast Thursday, saying the U.K. “wasted a huge amount of time going down blind alleys, assuming that this technology was going to eliminate humanity and we're all going to be zapped by a robot with glowing red eyes.” He favorably cited the techno-optimist argument published by former Labour Prime Minister Tony Blair and voiced skepticism of the European Union’s rush to be first globally on AI regulation.

“What's the point of being a regulatory leader if you're not a leader in innovation and job creation and improved public services?” Clegg asked. “These are the things that actually matter to society.”

 


 
TWEET OF THE DAY

Never tweet, but more importantly, never “inflammatory comment on a pornography website’s message board.”

The Future in 5 links

Stay in touch with the whole team: Derek Robertson (drobertson@politico.com); Mohar Chatterjee (mchatterjee@politico.com); Steve Heuser (sheuser@politico.com); Nate Robson (nrobson@politico.com); Daniella Cheslow (dcheslow@politico.com); and Christine Mui (cmui@politico.com).

If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

 

A message from Verisign:

With over 27 years of 100% DNS uptime for the .com top-level domain (TLD) and 329 billion DNS transactions processed daily, Verisign helps ensure uninterrupted internet navigation for end-users. Purpose-built infrastructure supports over 157 million .com domain names. Learn more about the reliability and security of the .com TLD.

 
 

Follow us on Twitter

Daniella Cheslow @DaniellaCheslow

Steve Heuser @sfheuser

Christine Mui @MuiChristine

Derek Robertson @afternoondelete

 

 
