28 September 2016 – Artificial intelligence is becoming ubiquitous. As its reach grows and it becomes ingrained in consumer products and services, elements of control and regulation are required. Silicon Valley's biggest companies are joining forces to introduce this. Facebook, Google (in the form of DeepMind), Microsoft, IBM, and Amazon have created a partnership to research and collaborate on advancing AI in a responsible way. Each member of the Partnership on AI will contribute financial and research resources.
Leading AI researchers from each of the firms have said the agreement is "breaking down barriers" and "historic," and that it will lead to systems and products that can operate both safely and fairly. This includes creating self-driving vehicles that are able to make ethical choices and shaping how artificial intelligence is used to treat diseases.
"The positive impact of AI will depend not only on the quality of our algorithms but the level of public engagement, transparency, and ethical discussion that takes place around it," Mustafa Suleyman, the co-founder of DeepMind, said on a call with reporters.
In this respect, the Partnership will publish its research under open-source licences and intends to publish minutes from its meetings. Suleyman added: "One of the roles we might play is propagating best practice, displaying some benchmarks, make available some datasets."
“I think this is exactly the right time,” Dr Seán Ó hÉigeartaigh, from the University of Cambridge’s Centre for the Study of Existential Risk, told WIRED. “If it had been announced five years ago people wouldn’t have taken it seriously because they might have said it was premature.”
As AI has moved from academic research labs to real-world implementation it has caused problems – none of which have easy answers. Google's photo-scanning algorithm mistakenly labelled black people as gorillas, an algorithm used to predict crimes has been shown to be prejudiced against racial groups, and code in apps dictates the work schedules of those in the on-demand economy.
The concerns are legitimate and that is before the well-trodden fears of AI world domination. Stephen Hawking, Elon Musk and Steve Wozniak have previously joined 1,000 academics in calling for AI to be carefully controlled by its creators.
Tech representatives say the new group isn't being created to lobby governments or create a set of rules. "There's no explicit attempt at the notion of self-regulation to repel government intrusion," said Eric Horvitz, a managing director at Microsoft's research division.
"Questions come up about the transparency of our systems and their ability to explain themselves, the ethics that are encoded in the AI systems, the potential for the embedding of hidden biases," he added.
Ó hÉigeartaigh, who isn't involved with the partnership, said the Partnership on AI helps to bring legitimacy to areas of AI research. "The issues around data privacy, automation in employment, algorithmic fairness, are not only being raised by academics in ivory towers but by the people who are at the forefront of the research and understand where it is going to go in five, ten or 15 years."
As such, the group won’t be closed off to new members. Those involved with the new non-profit organisation will include academics, policy makers, other non-profits and ethical specialists.
However, there are two notable exceptions from the opening list: Apple and OpenAI.
The former, which has included machine learning in its latest iPhone and iPad operating system and is said to be developing a home automation assistant, has traditionally kept its technological ecosystems locked down. Access to Siri was only granted to developers in recent months. Apple has been in discussions with the group but has not joined at present – the members say they hope it will in the future.
OpenAI, meanwhile – a pre-existing AI research non-profit, backed with $1 billion in investment from Elon Musk and others – already promotes the publication of open AI research and methods.
"They are research labs like many of our research labs here," Yann LeCun, director of AI research at Facebook, said. "There is no fundamental difference in that respect." He added that many researchers already publish their work in an open and reusable way. "In that sense it is not very different from OpenAI."
Ó hÉigeartaigh says there is already a lot of co-operation between groups researching AI and doesn't believe the partnership is a commercial venture; other partners may be added in the future.
The AI partnership's eight goals
1. Ensure AI technologies benefit and empower as many people as possible.
2. Educate and listen to the public and actively engage stakeholders to seek their feedback on our focus, inform them of our work, and address their questions.
3. Commit to open research and dialog on the ethical, social, economic, and legal implications of AI.
4. AI research and development efforts need to be actively engaged with and accountable to a broad range of stakeholders.
5. Engage with and have representation from stakeholders in the business community to help ensure that domain-specific concerns and opportunities are understood and addressed.
6. Work to maximize the benefits and address the potential challenges of AI technologies, by: protecting privacy; remaining socially responsible; ensuring research is robust, reliable and trustworthy; and opposing development of AI that would violate international human rights.
7. Believe that it is important for the operation of AI systems to be understandable and interpretable by people, for purposes of explaining the technology.
8. Strive to create a culture of cooperation, trust, and openness among AI scientists and engineers to help us all better achieve these goals.
Published on wired.co.uk