US Army clarifies its killer robot plans

The US Army has been forced to clarify its intentions for killer robots after unveiling a new program to build AI-powered targeting systems last month.

The controversy surrounds the Advanced Targeting and Lethality Automated System (ATLAS). Created by the Department of Defense, it is a program to develop:

Autonomous target acquisition technology, that will be integrated with fire control technology, aimed at providing ground combat vehicles with the capability to acquire, identify, and engage targets at least 3X faster than the current manual process.

That text comes from the US Army, which has announced an industry day taking place next week to brief industry and academia on its progress so far, and to source new expertise.

To translate, ATLAS is a project to make ground robots that are capable of finding and shooting at targets more quickly than people can. This raises the spectre of lethal AI once again.

Ethicists and scientists are already hotly debating this issue. Some 2,400 scientists and other AI experts, including Elon Musk and DeepMind CEO Demis Hassabis, signed a pledge under the banner of the Boston-based Future of Life Institute protesting the development of killer AI.

The UN has not yet taken decisive action, but Secretary-General Antonio Guterres has called for an outright ban.

The Army clearly realizes the controversial nature of the project, because it updated the industry day document last week to include new language:

All development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms, remain subject to the guidelines in the Department of Defense (DoD) Directive 3000.09, which was updated in 2017.

Nothing in this notice should be understood to represent a change in DoD policy towards autonomy in weapon systems. All uses of machine learning and artificial intelligence in this program will be evaluated to ensure that they are consistent with DoD legal and ethical standards.

Directive 3000.09 is a 2012 DoD document outlining policy for the development and use of autonomous weapon systems. It says:

Semi-autonomous weapon systems that are onboard or integrated with unmanned platforms must be designed such that, in the event of degraded or lost communications, the system does not autonomously select and engage individual targets or specific target groups that have not been previously selected by an authorized human operator.

However, the policy also allows higher-ups to approve autonomous weapon systems that fall outside this scope under some conditions.
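
To make the constraint concrete, here is a minimal sketch, in Python, of a human-in-the-loop engagement check. Everything in it (the EngagementGate class, the target IDs) is hypothetical rather than drawn from the directive; the point is simply that a loss of communications must never enlarge the set of targets the system may engage:

    from dataclasses import dataclass, field

    @dataclass
    class EngagementGate:
        """Hypothetical fail-safe illustrating the rule quoted above: a
        target may be engaged only if an authorized human operator has
        previously selected it, and losing comms never relaxes that check."""
        approved_targets: set = field(default_factory=set)
        comms_ok: bool = True

        def operator_approve(self, target_id: str) -> None:
            # Assumed callable only over a live, authenticated operator link.
            if not self.comms_ok:
                raise RuntimeError("no operator link: cannot add approvals")
            self.approved_targets.add(target_id)

        def may_engage(self, target_id: str) -> bool:
            # The same test applies in all comms states: degraded or lost
            # communications must not let the system select targets on its own.
            return target_id in self.approved_targets

    gate = EngagementGate()
    gate.operator_approve("T-042")   # human operator selects a target
    gate.comms_ok = False            # communications are then lost
    print(gate.may_engage("T-042"))  # True: previously approved by a human
    print(gate.may_engage("T-043"))  # False: never approved, so hold fire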

According to specialist publication Defense One, the US DoD is already developing broader ethical guidelines for the adoption of AI across various military functions.

Meanwhile, tensions are high around the technology industry’s engagement with the military. Google faced an employee revolt after signing up for Project Maven, a Pentagon AI project to help automate the analysis of video footage and images. The company has since announced that it won’t renew its Maven contract when it expires this year, and it declined to bid on the DoD’s massive JEDI cloud computing contract, arguing that the work might not align with the ethical AI principles it introduced last year.

Microsoft, on the other hand, continues to engage with the DoD, announcing last October that it will sell AI technology to the military in spite of protests from its own employees.

41 Comments

Transparency and documentation of the AI process is vital at all stages. I think anyone interested in AI has read about the AI that misidentified wolves simply by looking for snow in the background of photos. AI development needs to be closely monitored to make sure it does not make a similar target acquisition error. I don’t want to be targeted just because I happen to be standing on sand.

I think anyone who has an interest in AI has also read about the military robot that was tackled by a marine when it started sweeping its gun barrel across the demonstration audience.
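
The wolf anecdote describes a spurious correlation: the classifier latched onto the background rather than the animal. Below is a minimal sketch of how that happens, using invented toy data and scikit-learn; the feature layout and numbers are purely illustrative, not from the original study:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_photos(n, snowy):
        """Toy 'photos': 8 uninformative animal features plus 1 background feature."""
        animal = rng.normal(size=(n, 8))              # carries no label signal here
        snow = np.full((n, 1), 1.0 if snowy else 0.0)
        snow += rng.normal(scale=0.05, size=(n, 1))   # slight background noise
        return np.hstack([animal, snow])

    # Confounded training set: every "wolf" (label 1) is photographed on snow,
    # every "husky" (label 0) is not.
    X = np.vstack([make_photos(200, snowy=True), make_photos(200, snowy=False)])
    y = np.array([1] * 200 + [0] * 200)

    clf = LogisticRegression(max_iter=1000).fit(X, y)

    husky_on_snow = make_photos(1, snowy=True)  # animal features are random...
    print(clf.predict(husky_on_snow))           # ...but it's called a wolf anyway
    print(clf.coef_.round(2))                   # the background weight dwarfs the rest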

That’s not quite how deep learning neural nets work. It’s much more esoteric than that. You can make an AI recognize a stop sign as a 45 MPH sign with a couple of pieces of tape.

Apparently, it IS how some of them work, because that’s exactly what happened! Different people are programming different neural nets, so not all AI brains work the same.
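
The tape trick the two commenters are debating is an adversarial example. Here is a minimal numeric sketch of the underlying mechanism, assuming only a toy linear classifier rather than any real vision model: many tiny, coordinated per-pixel changes add up to a large change in the classification score.

    import numpy as np

    rng = np.random.default_rng(1)
    w = rng.normal(size=784)        # toy linear classifier: sign(w @ x) is the label
    x = rng.normal(size=784)        # a flattened "stop sign" image
    score = float(w @ x)            # its current (correct) classification score

    # Fast-gradient-sign-style step: nudge every pixel a tiny amount in the
    # direction that hurts the current label most. Pick the smallest per-pixel
    # budget eps that provably flips the sign of the score.
    eps = (abs(score) + 1.0) / np.abs(w).sum()
    x_adv = x - np.sign(score) * eps * np.sign(w)

    print(score, float(w @ x_adv))  # opposite signs: the label has flipped
    print(eps)                      # yet no single pixel moved by more than eps

The 784 components stand in for a 28x28 image; the more pixels the model looks at, the smaller each individual change needs to be, which is why a few well-placed strips of tape can be enough on a real sign.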

What happened to the Three Laws of Robotics?

First Law – A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law – A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law – A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Isaac Asimov

This will make Bender very happy.
Atlas, huh? Who needs horror sci-fi with an insane DoD?
Search “Atlas Robot Can Do Parkour” and imagine that with a gun and instant targeting/kill. No thank you.
(Of course they will combine both Atlases.)

Humans have no enemies, unless propaganda tells them that other humans are their enemies. One day, propaganda will be rendered useless because people are waking up to tyranny. That’s where killer robots come in.

I think we should first wait for a functional theory of everything so that we understand a bit of reality, then give the robots a bit of creativity, measure the results, and only then decide whether or not we should give AI robots weapons. Using killer robots with our current lack of knowledge is a huge risk.

Microsoft will sell the military AI technology? Just when the robot goes to fire its weapon, a message will appear: “Please wait while the system installs updates.”

…or “The incorrect target has just been eliminated. Do you want the Microsoft Troubleshooter to diagnose this problem for you?”

Now we know what General Failure has been up to all this time!

Your free trial of KwikTarget has expired.
Random numbers will be injected into the ballistics accuracy feed, enhancing target system up to 90 degrees.
Please enter a valid credit card number, or call customer support.
And you should probably duck.

I’m not afraid of the US armed forces turning this on civilians, American or otherwise, but I do worry that authoritarian regimes will immediately follow suit. To prevent that, these kinds of weapons should be off the table for *all* militaries, including the US.

“Not worried about the US…” but you’re worried about ‘authoritarian regimes’ – that made my day, thank you.

We’ll need .50 caliber rifles to fight our AI robot overlords. And microwave projectors... starting to build mine today...

Robots should only be allowed to target other robots. Only evil people and corporations would allow robots to target humans.

Danny, where did you find that graphic? I really enjoyed seeing that, but wondered what was behind its creation.

I wondered that too! I don’t choose the graphics, I’m afraid. I’ll ask the team and find out where this one came from.
