Tech Titans Front $27 Million To Build An Ethical AI

Silicon Valley luminaries have teamed up to fund research into how AI can benefit humanity

Illustration: R. A. Di Ieso
Jan 11, 2017 at 12:07 PM ET

The plague of hoax news stories going viral on Facebook during the 2016 presidential election sent a clear message to the future architects of artificial intelligence: Algorithms aren’t objective or neutral, and pretending otherwise can have dire consequences for the decision-making systems that now permeate virtually every aspect of our lives. Now, a few tech moguls are putting up cash for researchers to study how ethical and accountable AI can be designed to work for the public good.

The $27 million Ethics and Governance of Artificial Intelligence Fund will fuel research projects at the MIT Media Lab and Harvard’s Berkman Klein Center for Internet & Society aiming to “advance AI in the public interest by including the broadest set of voices in discussions and projects addressing the human impacts of AI.” Funding comes from the Knight Foundation, billionaire eBay founder Pierre Omidyar, and LinkedIn co-founder Reid Hoffman.

“There’s an urgency to ensure that AI benefits society and minimizes harm,” said Hoffman, in a press statement announcing the fund. “AI decision-making can influence many aspects of our world – education, transportation, health care, criminal justice, and the economy – yet data and code behind those decisions can be largely invisible.”

Among the program’s key goals are developing strategies to “build and design technologies that consider ethical frameworks and moral values as central features of technological innovation,” as well as creating ways to make AI more transparent, publicly accountable, and representative of a wide range of human experiences — not just upper-middle class Silicon Valley software engineers. Crucially, said MIT Media Lab director Joi Ito, researchers must work to “make sure that the machines we ‘train’ don’t perpetuate and amplify the same human biases that plague society.”

The fund could play an important role in bucking the trend of opaque and unaccountable machine learning algorithms. In recent years, the massive proliferation of big data has led to the rapid deployment of AI in everything from face recognition and driverless cars to determining whether someone is approved for a loan or released from prison on parole.

In some ways, these advances have brought the rise of personal digital assistants like Siri and other innovations in consumer technology. But they have also led overzealous technologists and researchers to pursue systems that only worsen existing social and economic inequality. In one example, a group of researchers using scientifically dubious methods claimed to be able to “neutrally” predict whether people will become criminals based only on their facial features – essentially a computer-aided rehash of a racist and long-disproven branch of 19th-century pseudoscience. (In reality, the researchers merely trained the AI on a template of “criminality” created from faces that the criminal justice system already discriminates against.)

To combat this kind of algorithmic snake oil, ethical AI will need all the help it can get.