(TNS) -- Researchers from Harvard and MIT and philanthropists including the founders of LinkedIn and eBay are teaming up in a multimillion-dollar effort to make sure artificial intelligence is designed and used to make the world a better place.
“This is an imperative moment for humanity,” said John Bracken, vice president of technology and innovation for the Knight Foundation. “These topics that are being raised right now in the AI conversation touch on the core of what it means to be a human being.”
The Knight Foundation, the MIT Media Lab, Harvard’s Berkman Klein Center for Internet and Society, LinkedIn founder Reid Hoffman and eBay founder Pierre Omidyar have combined to create a $27 million fund called the Ethics and Governance of Artificial Intelligence Fund that will support research and development to make AI beneficial for humans. The Media Lab and Berkman Klein Center are the first “anchor institutions,” Bracken said.
“A significant amount of the funds will be going to those institutions,” he said.
Once thought to be decades away, artificial intelligence has already permeated our daily lives. It is used in social media to surface what our friends are talking about and to suggest songs we might like, and soon we will trust it to drive our cars.
“There’s definitely urgency,” said Urs Gasser, executive director of the Berkman Klein Center.
The concerns, Gasser said, are less a robot uprising and more about whether AI understands the concept of fairness.
“AI decision-making can influence many aspects of our world — education, transportation, health care, criminal justice, and the economy — yet data and code behind those decisions can be largely invisible,” Hoffman said.
The fund is not the only effort to improve AI: In 2015, Elon Musk donated $10 million to the Cambridge nonprofit Future of Life Institute to make sure artificial intelligence benefits society. Musk has compared AI to “summoning the demon.”
The fund sprang from conversations behind the scenes at an MIT artificial intelligence conference last year.
“One of the most critical challenges is how do we make sure that the machines we ‘train’ don’t perpetuate and amplify the same human biases that plague society,” said Joi Ito, director of the MIT Media Lab. “How can we best initiate a broader, in-depth discussion about how society will co-evolve with this technology, and connect computer science and social sciences to develop intelligent machines that are not only ‘smart,’ but also socially responsible?”
©2017 the Boston Herald. Distributed by Tribune Content Agency, LLC.