“You don’t buy [artificial intelligence] like you buy ammunition,” says Marine Corps Col. Drew Cukor.
Cukor, in a July speech to military and industry technology experts:
There is no ‘black box’ that delivers the AI system the government needs, at least not now. Key elements have to be put together… and the only way to do that is with commercial partners alongside us.
Gizmodo first reported last month that the industry heavyweights in artificial intelligence (AI) working with the Pentagon include, among others, Google.
Specifically, Google’s working with the Pentagon on Project Maven, a pilot program to identify objects in drone footage and to thereby better target drone strikes.
Google, as in, the company whose motto is “Don’t Be Evil.”
A large and vocal group of Google employees is outraged that the company’s working on what they call the “business of war.” The New York Times reports that a letter – the newspaper published it here – circulating within Google pleads with the company to pull out of the program. As of Wednesday, it had garnered more than 3,100 signatures.
The letter, which is addressed to CEO Sundar Pichai, asks that the company announce a policy that it will not “ever build warfare technology” and that it pull out of Project Maven:
We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.
The letter references reassurances from Diane Greene, who leads Google’s cloud infrastructure business, that the technology will not “operate or fly drones” and “will not be used to launch weapons.”
Still, the technology’s being built for the military, the letter says, and once it’s delivered, “it could easily be used to assist in these tasks.”
The NYT reports that Google employees had raised questions about Google’s involvement in Project Maven at a recent company-wide meeting.
A company spokesman said that most of the signatures on the protest letter were collected before the company explained the situation.
The letter predicts that working on technology that could wind up on the battlefield “will irreparably damage Google’s brand and its ability to compete for talent.” That reflects what the NYT calls a culture clash between Silicon Valley and the federal government: one that’s “likely to intensify as cutting-edge artificial intelligence is increasingly employed for military purposes.”
A Google spokesperson told Gizmodo that the company is providing the Defense Department with TensorFlow APIs, which are used in machine learning applications, to help military analysts detect objects in images.
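Project Maven’s actual pipeline is not public, but the workflow Google describes – a trained model scores imagery and flags candidates for a human analyst rather than acting on them itself – can be sketched in a few lines. Everything below is illustrative: the `detector` is a stand-in for a real TensorFlow model, and the frame data is fake.

```python
# Illustrative sketch only: Project Maven's real pipeline is not public.
# A stand-in "detector" scores frames; anything above a threshold is
# queued for human review -- the human-in-the-loop step Google describes.

from typing import Callable, List, Tuple


def flag_for_review(frames: List[bytes],
                    detector: Callable[[bytes], float],
                    threshold: float = 0.5) -> List[Tuple[int, float]]:
    """Return (index, score) pairs for frames whose score meets the threshold.

    `detector` stands in for a trained model (e.g. one built with the
    open-source TensorFlow APIs mentioned in the article); here it is
    any callable returning a confidence score in [0, 1].
    """
    return [(i, score) for i, frame in enumerate(frames)
            if (score := detector(frame)) >= threshold]


# Toy usage with a fake detector that "recognizes" frames containing b"object".
frames = [b"empty field", b"object: vehicle", b"clouds", b"object: building"]
fake_detector = lambda frame: 0.9 if b"object" in frame else 0.1
flagged = flag_for_review(frames, fake_detector)
# flagged -> frames 1 and 3, which go to an analyst, not a weapon.
```

The design point this sketch makes is the one both sides of the debate keep returning to: the model only nominates frames, and a human stays in the loop for anything downstream.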
Both Google and the Pentagon are all too aware of fears about entrusting the killing of humans to autonomous weapons systems: systems that could fire without a human operator. Both Google and the Pentagon have said that Google’s tools won’t be used to create such a system.
The Google spokesperson who spoke with the NYT acknowledged this much-debated topic and said that the company is currently working “to develop policies and safeguards” around the technology’s use:
We have long worked with government agencies to provide technology solutions. This specific project is a pilot with the Department of Defense, to provide open source TensorFlow APIs that can assist in object recognition on unclassified data. The technology flags images for human review, and is for non-offensive uses only. Military use of machine learning naturally raises valid concerns. We’re actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies.