
Google: We won’t cause “overall harm” with our AI

After the Project Maven-inspired employee revolt, Google has released a set of AI principles and said it's withdrawing from the contract.

Google has pledged not to use its powerful artificial intelligence (AI) to create weapons of war, conduct illegal surveillance, or cause “overall harm”. However, its new guidelines on the use of AI don’t rule out working with the military.
On Thursday, Sundar Pichai, CEO of Alphabet Inc.’s Google, set out a series of principles for the use of AI at the company.
The announcement follows a letter, signed in April by more than 4,500 Google employees, calling on the company to get out of the “business of war” and cancel its Pentagon work.
Pichai said in Thursday’s post that Google recognizes that its powerful technology “raises equally powerful questions about its use”.
AI is being used for good, he said, citing use cases such as machine-learning sensors being built by high schoolers to predict the risk of wildfires; farmers using it to monitor their cows’ health; and doctors who are using it to diagnose breast cancer and to prevent blindness.
On the other hand, AI can go in much darker directions, given the bias of the data it’s trained on. The most recent example is that of Norman, MIT Media Lab’s psycho bot, which was fed on enough subreddit death material that it started seeing electrocution and gang-style slayings in Rorschach inkblots that other AIs interpreted as far less blood-soaked.
When an AI gets biased training, it has real consequences outside the lab. In 2016, ProPublica released a study finding that the algorithms used across the US to predict future criminals – algorithms that produce “risk assessments” by crunching answers to questions such as whether a defendant’s parents ever did jail time, how many people they know who take illegal drugs, how often they’ve missed bond hearings, and whether they believe that hungry people have a right to steal – are biased against black people.
ProPublica came up with that conclusion after analyzing what it called “remarkably unreliable” risk assessments assigned to defendants:

Only 20% of the people predicted to commit violent crimes actually went on to do so.
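To put that statistic in context, here’s a minimal Python sketch – the questions, weights and counts are purely hypothetical illustrations, not ProPublica’s data or the real COMPAS model – showing how a questionnaire-style risk score gets crunched into a single number, and how the “only 20%” figure works out as the precision of the “will commit a violent crime” predictions:

```python
# Purely illustrative sketch: hypothetical weights and counts, not the real
# COMPAS model or ProPublica's dataset.

def toy_risk_score(answers: dict) -> int:
    """Crunch questionnaire answers into a single 'risk' number.

    The questions and weights below are invented for illustration only.
    """
    weights = {
        "parent_did_jail_time": 2,         # 0 or 1
        "friends_using_illegal_drugs": 1,  # count of acquaintances
        "missed_bond_hearings": 3,         # count of missed hearings
        "believes_hungry_may_steal": 1,    # 0 or 1
    }
    return sum(weights[q] * int(answers.get(q, 0)) for q in weights)

print(toy_risk_score({"parent_did_jail_time": 1, "missed_bond_hearings": 2}))  # 8

# The "only 20% went on to do so" finding is a claim about precision:
# of all defendants *predicted* to commit violent crimes, what share did?
predicted_violent = 1000     # hypothetical number flagged as likely violent
reoffended_violently = 200   # hypothetical number of those who actually did
print(f"precision: {reoffended_violently / predicted_violent:.0%}")  # 20%
```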

Add lethal weapons to the mix, and Google’s AI enters a much murkier arena – one that led the United Nations to ask pressing questions on behalf of mankind. We’re talking autonomous killer robots – this, rather than omnipotent super-intelligence, is the clear and present danger from AI.
The employees who had raised concerns about Google’s work with the Pentagon specifically brought up Project Maven, the Pentagon’s pilot program to identify objects in drone footage and thereby better target drone strikes.
Resistance to Project Maven was serious enough that at least a dozen staffers reportedly resigned in protest.
The resistance forced Google to retreat from the contract last week: the company reportedly told employees during a meeting on Friday that it wouldn’t renew its contract with Project Maven after it expires in 2019. Google Cloud CEO Diane Greene confirmed the company’s withdrawal from the Defense Department’s flagship AI program.


But Pichai made clear that the new AI ethics policy doesn’t rule out future work with the Pentagon. Far from it, given that the company’s going to “actively look” for ways to help out the work of military organizations:

We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.

The policies aren’t theoretical pie in the sky, he said: they’re “concrete standards” that will govern research and product development and that will impact the company’s business decisions.
But is it even possible to keep Google’s AI out of weapons? That debate is ongoing.
Daniel Byman, a counterterrorism and Middle East security expert at the Brookings Institution, told The Hill that the technology is out there, and it’s going to keep advancing, and Google can’t stop it:

The question to me is not ‘Do you do this, yes or no,’ but how do you engage in ways that reduces the carnage of war? You’re going to see the technology advance. The question is: Can companies like Google and other places shape the evolution in a positive way? They are not going to stop it.

Peter Singer, a fellow studying war and technology at New America, argued that it’s naïve for Google employees to think they can avoid participating in the war machine, given that they already have. Singer cited the Islamic State’s use of YouTube to share recruitment videos, along with Russia’s use of social media platforms in the US during the 2016 presidential race:

It is an immaturity on their part to act like they haven’t already touched the realm of war.


4 Comments

Working with AI under public scrutiny is going to be undeniably tricky. There’s never not going to be casualties. For example, if an AI-driven car has the options in an emergency to hurt a van filled with a 5-member family, a set of three pedestrians, or the lone driver (plus maybe, possibly a single passenger), the AI won’t be able to select option D: zero casualties. On the plus side, the theory goes that there will be far fewer casualties than when people were doing the thinking.


far fewer casualties than when people were doing the thinking
True. Or at least until robots learn to be as emotionally unstable as us meatbags.


“Don’t be evil” → “Do the right thing” → Don’t do overall harm if it isn’t too much trouble and doesn’t hurt the bottom line.


What about the harm their other products are causing in so many other ways? Like Rob says. Don’t be Evil, unless that’s what makes you money.
