Facebook is among the technology companies leading the race to develop artificial intelligence. But Americans don't trust it to do so responsibly, a survey from a U.K. think tank has found.
More than two-thirds of those surveyed said they had either "no confidence" or "not too much confidence" in Facebook developing AI, a report from the Center for the Governance of AI, part of the Future of Humanity Institute at the University of Oxford, said. The public was significantly more skeptical about Facebook than about other tech companies working on cutting-edge AI research, according to the survey.
Among technology companies, Microsoft was the most trusted, with 44 percent of people saying they had either "a great deal of confidence" or "a fair amount of confidence" in its ability to create AI that wouldn't pose risks, the survey found. But this still lagged the faith Americans had in other groups to develop AI. Overall, people had the most faith in the U.S. military, with 17 percent giving it the highest confidence score and 32 percent the second highest.
The results are another indicator of the extent to which Facebook has lost public trust following a string of scandals, most notably its failure to protect users' privacy and Russia's use of the social network in an attempt to influence the 2016 presidential election. The Center for the Governance of AI surveyed 2,000 Americans on their attitudes toward artificial intelligence between June 6 and June 14, 2018.
Mark Zuckerberg, Facebook's chief executive officer, told Congress last year that the development of artificial intelligence would, in the future, play a key role in helping to combat false information and malicious content on Facebook. In addition to the machine learning techniques Facebook already uses to automatically tag photos, run its news feed and display ads to users, the company has a large research division -- employing hundreds -- devoted to pushing the boundaries of what AI can do.
The results may surprise many in Silicon Valley, where sharing AI technology with the U.S. military has been controversial. Last April and May, thousands of Google employees protested the company's work with the Pentagon on a project that used Google's computer vision technology to better identify objects in drone video footage. The outcry forced Google to announce it would not renew its contract with the military after it expires this year.
Aside from the U.S. military, academics ranked well in the survey, with half of the respondents giving university researchers the two highest confidence scores. Many of the techniques that underpin today's rapid advances in machine learning were developed in university labs. In the past five years, however, major technology companies have used their ample budgets to hire many of the leading academics working on AI, and many of the cutting-edge advances in the technology are now being produced in corporate research labs.
"There is no organization that is highly trusted to develop AI in the public interest, although some are trusted much more than others," Allan Dafoe, director of the Center for the Governance of AI, said.
The survey also found that a large majority of Americans think the development of AI and robotics should be carefully managed. The respondents were most concerned about preventing AI-assisted surveillance technologies, such as facial recognition, from violating privacy rights and civil liberties, preventing AI from helping to spread false information, preventing AI's use in cyberattacks, and protecting data privacy.
Better-educated and wealthier Americans were more likely to support AI's development. Fifty-seven percent of college graduates and 59 percent of those with household incomes of more than $100,000 per year supported the technology, while only about a third of those with a high school education or less, or earning less than $30,000 annually, did. Men were also more likely to support the technology than women.