“You don’t buy [artificial intelligence] like you buy ammunition,” says Marine Corps Col. Drew Cukor.
Cukor, in a speech given to military and industry technology experts in July:
There is no ‘black box’ that delivers the AI system the government needs, at least not now. Key elements have to be put together… and the only way to do that is with commercial partners alongside us.
Gizmodo first reported last month that when we’re talking industry heavyweights in artificial intelligence (AI) that are working with the Pentagon, we’re talking, among others, about Google.
Specifically, Google’s working with the Pentagon on Project Maven, a pilot program to identify objects in drone footage and to thereby better target drone strikes.
Google, as in, the company whose motto is "Don't be evil."
A large and vocal group of Google employees are outraged that the company's working on what they call the "business of war." The New York Times reports that a letter – the newspaper published it here – circulating within Google pleads with the company to pull out of the program. As of Wednesday, it had garnered more than 3,100 signatures.
The letter, which is addressed to CEO Sundar Pichai, asks that the company announce a policy that it will not “ever build warfare technology” and that it pull out of Project Maven:
We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.
The letter references reassurances from Diane Greene, who leads Google’s cloud infrastructure business, that the technology will not “operate or fly drones” and “will not be used to launch weapons.”
Still, the technology’s being built for the military, the letter says, and once it’s delivered, “it could easily be used to assist in these tasks.”
The NYT reports that Google employees had raised questions about Google’s involvement in Project Maven at a recent company-wide meeting.
A company spokesman said that most of the signatures on the protest letter were collected before the company explained the situation.
The letter predicts that working on technology that could wind up on the battlefield “will irreparably damage Google’s brand and its ability to compete for talent.” That reflects what the NYT calls a culture clash between Silicon Valley and the federal government: one that’s “likely to intensify as cutting-edge artificial intelligence is increasingly employed for military purposes.”
A Google spokesperson told Gizmodo that the company is providing the Defense Department with TensorFlow APIs, which are used in machine learning applications, to help military analysts detect objects in images.
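For readers wondering what "TensorFlow APIs… to help military analysts detect objects in images" looks like in practice, here is a minimal, purely illustrative sketch. It is not Project Maven code (which is not public); it simply runs an openly available TensorFlow Hub object-detection model over a single image. The model URL and the frame.jpg filename are example stand-ins.

```python
# Illustrative only: generic object detection with open-source TensorFlow tools,
# not Project Maven's actual pipeline (which has not been published).
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Example choice of model; any TF Hub detector with the standard
# detection signature behaves the same way.
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

# Load an image and add a batch dimension; these detectors expect uint8 [1, H, W, 3].
image = tf.io.decode_jpeg(tf.io.read_file("frame.jpg"), channels=3)
result = detector(tf.expand_dims(image, axis=0))

boxes = result["detection_boxes"][0].numpy()     # normalized [ymin, xmin, ymax, xmax]
scores = result["detection_scores"][0].numpy()   # confidence per detection
classes = result["detection_classes"][0].numpy() # COCO class ids

# Flag high-confidence detections for human review, as Google's statement describes.
for box, score, cls in zip(boxes, scores, classes):
    if score > 0.5:
        print(f"class {int(cls)} at {np.round(box, 2)} (score {score:.2f})")
```

The point of the sketch is that the model only flags and labels regions of an image; what an analyst or an organization then does with those flags is a separate, human decision, which is precisely where the employees' objections lie.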
Both Google and the Pentagon are all too aware of fears about entrusting the killing of humans to autonomous weapons systems: systems that could fire without a human operator. Both Google and the Pentagon have said that Google’s tools won’t be used to create such a system.
The Google spokesperson, speaking with the NYT, acknowledged this much-debated topic and said that the company is currently working "to develop policies and safeguards" around the technology's use:
We have long worked with government agencies to provide technology solutions. This specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data. The technology flags images for human review, and is for non-offensive uses only. Military use of machine learning naturally raises valid concerns. We’re actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies.
TechFlax
Why??
Anonymous
So Google building something that can hopefully lead to fewer misidentification and collateral damage incidents and people are protesting….. cool…
David Pottage
Not really. Like everyone else, the military has a risk thermostat, so if you add a technology that reduces the consequences of risky behaviour, then that behaviour tends to increase.
To give a concrete example, from time to time a missile hits a school or hospital by mistake, generating bad headlines and causing the military to be more careful for a while. If this technology makes such mistakes less likely, then the tempo of attacks is likely to increase to compensate.
The number of innocent dead children on the evening news will probably go down a bit, but not by as much as you would expect. Meanwhile the number of widows and orphans will likely increase, sowing the seeds for future conflicts and poverty.
Anonymous
That is not how the military targeting process works at all….
mike@gmail.com
It's just like it always has been: peace means the USA loses, in war, economics, politics, any domain. The USA is the problem folks! Or at least that's what these people seem to believe, along with many others. If Google doesn't want the money, they don't have to take the contract; there are PLENTY of talented and intelligent people willing to work hard to make sure that the last bastion of freedom remains capable, flexible, independent, and cutting-edge.
Michael
Or building something that will kill innocent people, where no-one will be held to account. It's like the early days of computing, when every time someone messed up it was a 'computer error'. No one was ever held to account.
Mark Grace
Thank you for reporting this. With China and Russia both racing toward AI and the latter’s leader stating “whoever gets there first will rule the world”, and in light of the recent chemical attacks in Syria, there is considerably more to consider on the moral front.
Was not Google or Amazon considering an AI research center in China???
There is an old Chinese saying and blessing ( or curse, depending on how you look at it), “May you be born in interesting times.”
Interesting times, indeed.
Mahhn
I like your comment, but also think the quote should be more like "whoever gets there first will lose the world".
Ralph Wegner
If Google caves to the politically correct, I’m all for it. My state has plenty of smart folks and businesses that would be happy to take that contract.
Mahhn
Say no to the dark side… can we place bets?
Steve
Think about all those brilliant Googlites, sitting in their own special world, not even realizing that they are likely to be among the first targets… and trying to decrease the ability of the U.S. military forces to defend them from attack.
Contrast that to the history of technology companies of the early-to-mid-twentieth century as to their involvement with the military in the war efforts. Did you know, for example, that IBM manufactured rifles?
Carl Wexler (@FreeMobileApps)
Without companies like Google, our military can’t compete against countries like China that have communist party members in every tech company. Just forget about it.
s31064
"The letter predicts that working on technology that could wind up on the battlefield 'will irreparably damage Google's brand'"…
As if you could damage Google's brand any more than they already have.
Pete in Seattle
I’m sure the big G could lose 3100 “workers”. Fire them!