Junk Science

We Fact-Checked Stephen Hawking’s Reddit AMA Answers

In a Reddit AMA, Stephen Hawking worries about an A.I. apocalypse. Meanwhile, the actual experts can barely get robots to climb a staircase.

Oct 08, 2015 at 8:55 AM ET

Stephen Hawking’s long-awaited Reddit AMA answers are out, and yes (inappropriate irony alert), the world’s most famous computer-voiced physics genius is genuinely worried about the rise of the machines. “The development of full artificial intelligence could spell the end of the human race,” Hawking told the BBC last year, and he has now taken to Reddit to address his fears about artificial intelligence in a public AMA.


Hawking may be the world’s finest physicist, but many scientists who wouldn’t dare question his work on general relativity are more skeptical when it comes to his thoughts on an A.I. apocalypse, especially since Hawking, for all his brilliance, has no formal training in artificial intelligence or robotics.

Here’s how Hawking answered several questions in his Reddit AMA, alongside responses from John Leonard, a roboticist at MIT’s Computer Science and Artificial Intelligence Laboratory. See if you can spot the key differences:

Should We Be Worried About Evil A.I.?

Stephen Hawking:

TL;DR: Not evil A.I., just fatally indifferent A.I.: Media often misrepresent what is actually said. The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. Please encourage your students to think not only about how to create AI, but also about how to ensure its beneficial use.

John Leonard:

TL;DR: Not any time soon: There’s a lot of merit to what [Hawking] is saying, but I think the question is: what’s the timeline? Is this going to unfold over years, decades or centuries? In the field, there’s this feeling of exponential growth. But as a roboticist trying to get robots to do things like drive on streets and walk up and down stairs, I can see clearly that there are whole other parts of the problem that remain unsolved.

I think this notion of evil A.I., if it happens at all, is many, many years in the future—and I think we have much greater societal challenges to worry about in the here and now. My view, as a roboticist, is that trying to get robots to do things like drive safely in urban traffic, or make left turns, or make real decisions amid a wall of information—these are problems that were hard 30 years ago, and they’re still hard today. I would claim our advances in broader A.I. are actually pretty lame; we’re not making as much progress as some people might say.

Will Robots Take Our Jobs?

Stephen Hawking:
TL;DR: If by robots, you mean Amazon: If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

John Leonard:
TL;DR: Again, not any time soon: My gut feeling, after talking to a lot of economists, is that there is not a fundamental difference between the situation now and the situation during the Industrial Revolution. There may be a slight acceleration in the pace, but a lot of the jobs that would be hard to automate—gardening, house cleaning, taxi driving—aren’t going anywhere. We’re still decades away from truly replacing a Manhattan taxi driver, for example, who could drive you from Central Park to La Guardia airport in the rain. I think some of the more alarmist views are a little over the top.

Could A.I. Really Evolve Like A Biological Creature?

Stephen Hawking:
TL;DR: In a survivalist sense, yes: We need to avoid the temptation to anthropomorphize and assume that AI’s will have the sort of goals that evolved creatures do. An AI that has been designed rather than evolved can in principle have any drives or goals. However, as emphasized by Steve Omohundro, an extremely intelligent future AI will probably develop a drive to survive and acquire more resources as a step toward accomplishing whatever goal it has, because surviving and having more resources will increase its chances of accomplishing that other goal. This can cause problems for humans whose resources get taken away.

John Leonard:
TL;DR: ¯\_(ツ)_/¯: It’s pretty hard to give an informed comment. I can’t answer—that’s a tough one. Ask a futurist.

Will A.I. Pose An Existential Threat To Humanity?

Stephen Hawking:
TL;DR: I’m totally going to dodge this question: There’s no consensus among AI researchers about how long it will take to build human-level AI and beyond, so please don’t trust anyone who claims to know for sure that it will happen in your lifetime or that it won’t happen in your lifetime. When it eventually does occur, it’s likely to be either the best or worst thing ever to happen to humanity, so there’s huge value in getting it right. We should shift the goal of AI from creating pure undirected artificial intelligence to creating beneficial intelligence. It might take decades to figure out how to do this, so let’s start researching this today rather than the night before the first strong AI is switched on.

John Leonard:
TL;DR: NOPE: My answer is no. The advances we’ve recently seen in areas like deep learning and speech recognition have been impressive in their ability to reach almost human-level performance, but they represent a relatively narrow intelligence rather than a more general and flexible intelligence. We still have to figure out how to transfer knowledge learned in one domain to another. Humans can learn from very few examples and learn over time; we have much, much further to go with A.I. systems before we’re even close to those capabilities. While I’m thrilled and in awe of the progress that has been made lately, it’s relatively narrow.

Will A.I. Reach The Level Of Humans?

Stephen Hawking:
TL;DR: If they do, it’s all over: It’s clearly possible for something to acquire higher intelligence than its ancestors: we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents. The line you ask about is where an AI becomes better than humans at AI design, so that it can recursively improve itself without human help. If this happens, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails.

John Leonard:
TL;DR: Right now they can’t even do the dishes: A lot of other folks have thought about this question (the Turing Test comes to mind) but as a roboticist, I can’t really give an informed answer. But I will say this: if a robot could play in a playground the way that a child plays in the playground, that would impress the heck out of me. A robot that could clean your kitchen, load the dishwasher and deal with liquids and messy situations—that level of interaction with the world would impress me.

What’s Your Favorite Song, Movie and Thing On The Internet?

Stephen Hawking:
Favorite song: “Have I Told You Lately” by Rod Stewart. Favorite movie: Jules et Jim (1962). Last thing you saw online and found hilarious: The Big Bang Theory.

John Leonard:
Favorite song: “Mixed Emotions” (written by Leonard’s friend, Tony). Favorite movie: Brothers McMullen. Last thing you saw online and found hilarious: Facebook posts from a neighborhood friend who is also a standup comic.