Leading voice in AI, who worries about its consequences, favors regulating it

STEVE INSKEEP, HOST:

A leading voice in artificial intelligence favors regulation of it. Mustafa Suleyman co-founded the AI firm DeepMind, and he worries about the consequences of his own creations. Suleyman says governments need to prevent anyone from using AI to destabilize the world.

MUSTAFA SULEYMAN: We have to start really thinking about the precautionary principle. I mean, the consequences are so vast and potentially so huge, both positive and negative, that this is a moment when we need strong governments and proactive intervention.

INSKEEP: By now, the real and potential dangers of AI are well known. Various applications can make it easier to commit fraud, spread misinformation or analyze data to surveil people. At some point, high-powered computers may outthink - or outwit - the human race. Suleyman's book, "The Coming Wave," recounts his efforts to warn his fellow tech entrepreneurs in California.

SULEYMAN: It's a little awkward in Silicon Valley sometimes because it's just not the default culture. I think American culture in general and Silicon Valley in particular has benefited enormously from relentless optimism. And I am an optimist. I'm a builder. I'm a creator. I have a new startup now called Inflection AI, and we're creating an AI called Pi, which stands for Personal Intelligence - a personal AI, a conversational, fluent, interactive AI. So I believe in making things and building things at huge scale. But at the same time, you know, I think wisdom in the 21st century is about trying to hold multiple contradictory ideas in tension at the same time.

INSKEEP: OK. Help me think this through then. You're making a comparison with nuclear nonproliferation, which is something that is obviously not perfect. Sometimes another country gets a nuclear weapon or gets close to one, but there are all kinds of systems in place around the world to limit the access to nuclear technology and especially technology that can be used to make a bomb. You're saying you would like something like that for artificial intelligence. But let me ask about another factor that you raise. I took a note of one of the subheads of one of your chapters. It is the plummeting cost of power. What is that when it comes to computing, and how does that complicate the effort to regulate or contain it?

SULEYMAN: I mean, AI is an amplifier of good power and bad power. AI is going to be a tool to help people make predictions and get things done in the real world and the digital world. Your AI is going to learn to book things for you on the internet, buy things for you, initiate, you know, new creative endeavors. It's going to be like a research assistant or a creative, you know, partner helping you to get things done. And many people will use that for, you know, incredibly good outcomes. And some people will use it to sow instability. So the plummeting cost of power means that it's going to be cheaper and easier to get things done at scale in the digital world.

INSKEEP: That's what makes me wonder about the analogy with nuclear nonproliferation. Making a nuclear weapon is hard and expensive, especially if you need to make the material for it. You're telling me that it's going to get cheaper and cheaper all the time to deploy artificial intelligence in ways that may be harmful. Does that make containment impossible?

SULEYMAN: That's exactly the challenge that we face. If intelligence follows the same cost-curve trajectory as the microchip, which has come down a millionfold in the last 50 years - right? - how do we create contained technologies and make sure that they don't end up representing a threat to the nation-state? Because statelike powers - the ability to really organize at huge scale, to intervene in cyber networks, to attack our security - are just going to get cheaper and cheaper and easier for individuals to access.

INSKEEP: Can we dwell in a terrifying way for a moment on the risks? When you talk about the risks to the nation-state, we're just talking about countries, whether they're democratic or authoritarian. And I think you're telling me that right now, in order to have enormous power over people, the government of China needs, hypothetically, a million people in an intelligence agency. And it could be that one person ultimately has that kind of power to surveil people. Is that the kind of danger you're talking about?

SULEYMAN: That is unfortunately correct. You know, I hate to say it, but power is becoming compressed, right? Look at these image generation models. Every image that has been openly and publicly available on the web is now compressed into a two-gigabyte file, which can be used to generate new images from scratch. And anybody can get access to that two-gigabyte file in open source. But you're exactly right. It also represents a very new dynamic in the unfolding of power in our civilization.

INSKEEP: What are the odds that the nation-state, as we were saying, can possibly regulate this, particularly given that there are a couple hundred of them in the world?

SULEYMAN: We've regulated many such complex things in the past. Every new technology that arises initially feels scary and unfamiliar and confusing. In the book, I found this incredible anecdote from the arrival of the first steam train in Liverpool. The member of Parliament for Liverpool and the prime minister at the time, along with a huge celebration party, were so excited to see this new beast, as they called it, arriving on the tracks that they actually stood on the tracks to welcome the train coming in. They had no concept that the train wouldn't stop, and it actually ran through the celebration party and killed the member of Parliament.

And so that's how unfamiliar and strange and obscure things can be. And then within an instant, trains become tools that are, you know, sort of unremarkable and completely integrated within our lives. And so we've done this many times with airline safety, with electricity, with nuclear safety. At first, it seems alien and confusing. And then very quickly, we put in place extremely rigorous safety frameworks for governing these technologies. I think that we have to be confident and optimistic that if we engage proactively in our governance mechanisms and stop denigrating them and putting public servants down, you know, we can make this work.

INSKEEP: Mustafa Suleyman is the author of "The Coming Wave: Technology, Power, And The 21st Century's Greatest Dilemma," which could be summarized as a call for us to step out of the way of the train. Thanks so much.

SULEYMAN: Thank you. This was great. Thank you.

(SOUNDBITE OF IL:LO'S "REINE")