With Artificial Intelligence (AI) starting to reveal its real-world potential, Facebook, Google, Amazon, Microsoft and IBM have teamed up to work together in the burgeoning technological space.
Speaking to the BBC, one of the new group’s members revealed that the aims of the consortium, called the ‘Partnership on AI’, are to:
maximise this [AI’s] potential and ensure it benefits as many people as possible
The sentiments are similar to those expressed by the Future of Life Institute, an organisation that aims to “maximize the societal benefit of AI” and famously published an open letter (since signed by a galaxy of tech stars) stressing that it’s “important to research how to reap [AI’s] benefits while avoiding potential pitfalls”.
AI’s potential is indeed far-reaching. We can probably expect it to impact almost every aspect of our everyday lives over the coming years: from healthcare and education to manufacturing, energy management and transportation.
And as it does so, we can also expect to see fears continue to grow: fears that AI might replace human labour, undermining the skills that are so crucial to our economies; fears around safety as machines take over complex tasks such as driving vehicles, performing operations and making life-and-death decisions in war; and fears that we might one day reach the technological singularity, the point of no return at which machines become more intelligent than humans.
We even reported last year how Stuart Russell – an award-winning AI researcher, a Professor of Computer Science at the University of California, Berkeley, and author of a leading AI textbook – had likened the dangers of AI to nuclear weapons.
Allaying fears and opening discussions
With that in mind, the consortium notes on its website that it was established to:
… study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.
Co-chaired by Microsoft Research chief Eric Horvitz and co-founder of Google’s DeepMind subsidiary Mustafa Suleyman, it will also include experts from AI research groups and academia. The BBC notes that:
The group will have an equal share of corporate and non-corporate members and is in discussions with organisations such as the Association for the Advancement of Artificial Intelligence and the Allen Institute for Artificial Intelligence.
Or maybe there is more to it than simply educating the public, establishing best practices and enabling discussions.
In an interesting article, The Verge takes a deeper look at the list of tenets posted on the partnership’s website. It pays particular attention to the sixth tenet:
Opposing development and use of AI technologies that would violate international conventions on human rights, and promoting safeguards and technologies that do no harm.
Writer Nick Statt notes that this tenet implies a degree of self-regulation – something that the technology giants involved might want to foster as a way of heading off government regulation.
No Apple at the core?
With the other tech giants now firmly showing their commitment to making AI a success, you may well wonder where Apple is. After all, Apple has been working hard on its own AI projects, and has even purchased machine learning start-ups.
Microsoft’s Eric Horvitz revealed to The Guardian:
We’ve been in discussions with Apple, I know they’re enthusiastic about this effort, and I’d personally hope to see them join.
Elon Musk has had plenty to say on the dangers of AI. His own horse in the AI race, OpenAI, is another notable absentee from the consortium, although The Verge reports that discussions between the two have begun.
Where are the brakes?
Whatever your views on the AI revolution, one thing is certain – it will happen.
Having the big tech players working together in such a disruptive technological arena is a good thing, in my opinion, provided that discussions are transparent and outside opinions are listened to and acted upon.
I would, however, feel more comfortable if there were more outside governance.
If the consortium turns into a body for industry self-regulation, are they really going to listen to those concerned with ethics when there are potentially trillions of dollars at stake?