Futurist: A.I. Learns From Humans, ‘That’s What Keeps Me Up At Night’

Dec 13, 2016 at 11:48 AM ET

“When it comes to A.I., we may be worried about the wrong things,” says Amy Webb, a forecaster of digital trends and author of “The Signals Are Talking.” Rather than the cliché of artificial intelligence going to war against humanity, Webb says we might worry about A.I. that too accurately mimics humanity.

As artificial intelligence continues to develop, Webb, the founder of the Future Today Institute and an adjunct professor at NYU, says her greatest concern is that A.I. is created by humans, and so reflects back our own beliefs, values, and biases.

She offers as an example Tay, an A.I.-powered chatbot released on Twitter earlier this year. It took only 24 hours for Tay to start posting racist and xenophobic messages all across the internet. The result of this experiment reflects not so much a flaw in the bot’s design as human flaws.

“In order for A.I. systems to work, they need to be trained. And we, we humans, are their mothers and fathers. We are their study buddies. We are the ones these A.I. systems are learning from,” Webb says.

In China, a similar bot was released, yet did not end up spewing profanity. Webb says this is largely because of the country’s censorship: the bot didn’t come into contact with vulgar tweets, and so didn’t mimic them.

“We must, right now, prepare for a future living with A.I.,” Webb says. “Like all technologies created by humankind, these technologies will ultimately reflect the values of their creators,” she says. “For now at least, that’s us — and that is what keeps me awake at night.”