130514-N-YZ751-632 ATLANTIC OCEAN (May 14, 2013) An X-47B Unmanned Combat Air System (UCAS) demonstrator launches from the flight deck of the aircraft carrier USS George H.W. Bush (CVN 77). George H.W. Bush is the first aircraft carrier to successfully catapult launch an unmanned aircraft from its flight deck. (U.S. Navy photo by Mass Communication Specialist 2nd Class Tony D. Curtis/Released)

Hasta la Vista, Humans: The Menace of “Killer Robots”

Self-determining, lethal machines are becoming reality. Scientists are banding together to stop them

This is about killer robots: machines that can decide to kill on their own. First of all, they’re already here.

They’re on the border between South Korea and North Korea—robot sentries topped with machine guns stand watch for intruders in the demilitarized zone.

“It looks a bit like a tall R2D2 and has a machine gun on top,” says Dr. Peter Asaro, a roboticist and professor at the New School. “It’s used in a remote-operated mode, but it has the capability for fully autonomous, so it can use its cameras to detect human targets and fire at them.”


That capability is currently switched off, but it exists. And the development of “killer robots” doesn’t stop there. Autonomous death machines are in development in labs and testing fields around the world, including in the United States.

“We have been concerned about the trends toward autonomy in warfare for several years now,” says Mary Wareham of Human Rights Watch’s arms division.

So concerned that a group of human rights workers (like Wareham) as well as scientists and roboticists (like Asaro), not to mention some Nobel Laureates, decided it was time to form the bluntly named Campaign to Stop Killer Robots.

The campaign launched in April 2013 with a big show in London, its members standing outside Parliament with a friendly humanoid version of the robots they’re working against.

The robot on display with the campaign in London has a name: David Wreckham. He also has a Twitter account. He actually has an established history in the UK, starting on the BBC children's show "Blue Peter."

On Oct. 21, they took advantage of a United Nations General Assembly committee meeting on disarmament and international security to raise awareness about their call for a pre-emptive ban on fully autonomous weapons. Their next target is Geneva, where states parties to the UN Convention on Certain Conventional Weapons will hold their annual meeting in November.

They decided to create the coalition because the issue “is a bit of an orphan at the moment,” Wareham says. “There’s no one single international forum for it.”

She compares the formation of the supergroup to the International Campaign to Ban Landmines, formed by HRW and five other NGOs 20 years ago. Jody Williams, who won the Nobel Peace Prize for her work against landmines, is a key spokesperson for the anti-killer-robot movement as well.

There are important differences between drones, landmines and “killer robots”—the more serious term being “fully autonomous weapon,” or “lethal autonomous robot,” as the UN puts it.

Landmines are automatic rather than autonomous: laid as a trap, detonated by an unwary foot. Drones are remotely piloted by humans. But killer robots can both select their targets and engage them with lethal force, completely outside of human control. The development and testing of these weapons is a precursor to a much different kind of warfare.

The difference between drones and killer robots...

@democracynow (10/26/13 00:31 UTC):

"These aren’t killer robots. It's not like there are unfeeling people behind this whole thing." -ex-drone operator http://t.co/bzAt4BRE9t

“So basically the machine is making a decision to kill, and that’s the problem we have with it,” Wareham says. “We don’t think that power should be given over to a machine. We want to see a human always in the loop.”

With the ongoing public debate about America’s use of drones, some momentum is on their side. But the campaign wants to make clear that killer robots are a different issue.

“We separated out [from drone campaigns] because that’s more about the method of warfare,” Wareham says. “This is about the means: the armed robots.”

“Autonomy” is the word that distinguishes these creations from their drone brethren.
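In software terms, the distinction the campaigners draw comes down to which branch of the engagement logic a human occupies. Here is a minimal, purely illustrative sketch in Python of the three control modes described above; every name and threshold is hypothetical and drawn from no real weapon system:

```python
from enum import Enum, auto

# Purely illustrative; no real weapon system's logic is reflected here.
class ControlMode(Enum):
    REMOTE_PILOTED = auto()    # a human makes every engagement decision
    SUPERVISED = auto()        # the machine proposes, a human must approve
    FULLY_AUTONOMOUS = auto()  # the machine decides entirely on its own

def may_engage(mode: ControlMode, target_confidence: float,
               human_approved: bool) -> bool:
    """Decide whether the system may fire on a detected target.

    target_confidence is the machine's own estimate that the target is
    hostile; human_approved records an operator's explicit go-ahead.
    """
    if mode is ControlMode.REMOTE_PILOTED:
        return human_approved
    if mode is ControlMode.SUPERVISED:
        return human_approved and target_confidence > 0.9
    # FULLY_AUTONOMOUS: no human in the loop. This is the branch the
    # Campaign to Stop Killer Robots wants banned before it is fielded.
    return target_confidence > 0.9
```

Wareham’s demand for a human “always in the loop” amounts to a ban on that final branch.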

 


Birthing the Killers

The X-47B is a sleek unmanned combat air vehicle, developed by Northrop Grumman for the US Navy to the tune of $813 million. After multiple tests this summer, it has successfully taken off from and landed on an aircraft carrier by itself. It has two internal weapons bays that could carry thousands of pounds of weapons and ammo. And this is just the prototype.

BAE Systems is developing its own version, called the Taranis, which recently made its maiden flight at a remote testing range in South Australia. Trade website Defense-Update says the UK government has been “highly secretive” about the Taranis, “which is known to have been seen in public on only two occasions.”

Killer robots range from the sophisticated Taranis to more basic weapons, like a moving skeleton topped with a machine gun that can fire at will, which was demonstrated to US Army leaders earlier this month in Georgia.

“A robot becoming a member of the squad, we see that as a matter of training,” says Lt. Col. Willie Smith, the Army chief in charge of Unmanned Ground Vehicles at Fort Benning in Georgia. These robots with automatic—not autonomous—weapons could be on the battlefield within five years.

@TheHangingWire (10/26/13 08:27 UTC):

I can't believe these REAL killer robots exist. RIP humanity. http://t.co/ppzAyTqFI3

Many of the actual robots will be more like the Transformers than RoboCop or the Terminator. Professor Christopher Coker of the London School of Economics, author of Warrior Geeks: How 21st Century Technology Is Changing the Way We Fight and Think About War, writes in great detail:

“With the development of nano-technology, some may ‘swarm’, others may look like tractors, tanks, even cockroaches or crickets.  All sorts of shapes and locomotion styles are being tested.  New research projects include robots that can fold themselves, fly and crawl, walk uphill and roll down.  Some roboticists are even looking to the humble amoeba for inspiration – the result is the chembot made up of particles that are quite stiff when compressed but, given space, flow like liquids thus allowing it to enter any space no smaller than its fully compressed state, more or less regardless of the shape of the space in question.”

The humanoid robots exist too, though, as evidenced by Boston Dynamics’ Petman test.

Proponents of using these machines in war point to the minimized risk of soldier deaths. But a June 2013 poll by UMass Amherst shows that it’s actually current and former soldiers who are the most concerned about drone and autonomous warfare.

“They won’t go out and commit war crimes, they won’t rape, but what’s to say they won’t malfunction and do something horribly wrong?” asks Wareham. “Humans aren’t perfect either, but granting the technology to be able to get to that level of artificial intelligence to distinguish humans from combatants…to make the complex legal judgments in the heat of battle” is dangerous.

Sure about that?

@mims (10/22/13 13:45 UTC):

The problem with the campaign to stop killer robots is it assumes humans are more suited to making such decisions. http://t.co/CvrG16TKbM

@zeynep (10/25/13 13:33 UTC):

BTW—complexity, unpredictability & high failure rate of all big software projects (govt & corp) is part of why killer robots are a bad idea.

You only need to look at the many tests of the X-47B to see an example of what can go technically wrong. After the aircraft successfully landed on an aircraft carrier twice, a third attempt was aborted because of a computer error in the machine’s systems. The way PopSci put it is telling: “The aircraft’s three navigational computers couldn’t all agree on the right course of action, so they decided the X47B couldn’t land safely on the carrier.” These are literally machines making decisions.

Dr. Asaro says that incident was born of “a classic error-correcting strategy from the early days of computers. You do all your calculations three times, and if two of them agree, you go with that.”

“If they’re all three disagreeing, you know something’s wrong,” he says. “You don’t know which one to trust.”
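That two-out-of-three scheme is a form of what engineers call triple modular redundancy. Here is a minimal sketch of the voting logic in Python; the names, numbers, and tolerance are all hypothetical, and the X-47B’s actual flight software is of course not public:

```python
# A purely illustrative sketch of the two-out-of-three voting strategy
# Asaro describes. This is not the X-47B's actual flight code.

def vote(a: float, b: float, c: float, tol: float = 0.05):
    """Return a value that two of the three redundant computers agree on,
    or None if all three disagree (to within tolerance tol)."""
    if abs(a - b) <= tol or abs(a - c) <= tol:
        return a
    if abs(b - c) <= tol:
        return b
    return None  # no majority: something is wrong, and you don't know what

# Two computers agree; the outlier is outvoted.
assert vote(1.02, 1.01, 3.70) == 1.02

# All three disagree; there is no trustworthy answer.
assert vote(1.0, 2.0, 3.0) is None
```

When no two answers agree, the only safe output is no output, which is exactly why the X-47B waved off its third landing.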

“Roboticists know how difficult it is to get their robot to do what it does in the laboratory,” Asaro continues. “And the idea of arming that robot…Computers are kind of finicky. The code can be logically correct and it still doesn’t do what it’s supposed to do. The temperature matters. Code runs differently because the grease on the gears is a little bit warmer today than it was yesterday; the sensor behaves a little bit differently today because the voltage is slightly lower. [There are] so many little things that make it very, very difficult to do research.”

He points out that the demo videos that the military and public see are the best run out of 100 tests, or more. “The idea that you’re going to get this kind of reliable behavior all the time is pretty daunting,” Asaro says.

And we haven’t even started talking about the potential problem of hacking.

Hacking Killer Robots?

“You can’t hack it and take over all the US military’s tanks right now, but once they’re all computerized, that becomes a possibility,” Asaro explains. And they can be turned on their owners.

The Defense Advanced Research Projects Agency, or DARPA, the Pentagon’s R&D division, recently announced a contest to build the best hacker-proof defense software. The grand prize? A cool $2 million.

#jokes

@drunkenpredator (10/30/13 16:49 UTC):

At this point, even the killer robots are scared of the NSA.

Cybersecurity is already an important national security issue, with Americans currently fending off sophisticated Chinese hack attacks. Throw fully autonomous robots in the fray, and you’ve got a potential minefield of things that could go Very Wrong.


Boston Dynamics developed the Atlas robot for an earlier DARPA contest, the DARPA Robotics Challenge.

Hacking also gets into the problem of accountability. An armed robot could be sent into any number of situations and hurt or kill people—and how do we know who sent it?

Critics of the campaign say it’s stifling critical research and development. But Dr. Asaro and his colleague Dr. Noel Sharkey are a bit like Gotham’s Lucius Fox: key to R&D, but with moral and ethical limits. These advocates don’t want to ban all autonomous robots, but rather to start a discussion about the point of weaponizing them. Asaro and Sharkey’s presence on the campaign’s roster, along with the hundreds of other scientists who have signed on, gives it scientific street cred.

As bad as WMDs?

Outside of the R&D, Asaro says there’s a “deeper moral argument” to be had: “whether it’s morally acceptable to allow a machine to decide who lives and who dies.”

Killer robots are “very comparable to chemical and biological weapons in the sense that they’re morally appalling, they’re far too dangerous to use,” Asaro tells me over the phone on a day when UN workers in Syria miss a chemical weapons inspection deadline. “It just makes more sense to have across-the-board prohibition of them than to try to carefully regulate what could be potentially beneficial uses. There are rare instances in the real world where the use of chemical and biological weapons is morally justifiable. Letting those happen also lets a whole lot of bad things happen.”

Weaponizing robots, giving them full autonomy and sending them into battle requires meticulous thought and planning first, including “pre-programmed decisions about collateral damage.”

“Is there going to be an algorithm that says this military [device] is equal to this many civilian lives?” Asaro asks. “And I don’t think we want to get into this.”

The biggest opponents of this campaign are probably the private military contractors that stand to make the most money: they’re already raking in hundreds of millions of dollars for developing prototypes. Asaro has spoken with many military contractors who, he says, are “sensitive to this issue,” but “obviously it’s a big business potential for them.”

Skynet, anyone?

(While we’re on this point, it should be noted that according to the lore of the “Terminator” movies, Skynet became “self-aware” in 1997 and launched its nuclear attack on humankind on April 21, 2011. We’re well past both dates now, and remain alive. #winning)

Yes, realistically the world is still a ways off from the level of artificial intelligence needed for highly functional autonomous war robots. But that’s not to say we’re not getting there—at the speed of one of those terrifying Boston Dynamics WildCats, which can run a 4-minute mile.

After pressure from the Campaign to Stop Killer Robots, the US government was the first to issue an actual policy on fully autonomous weapons, albeit in vague and absolutely-subject-to-change language. (It expires in 10 years.) At least a dozen other countries are in discussions about issuing similar policies, but the campaign would like to see more progress, faster.

“We’re not promoting the sci-fi angle, the media loves it. We don’t have a Terminator on our website,” Wareham says with a laugh. “But we wanted something provocative. That’s why we chose ‘killer robots.’”

“I like robots. We all love robots,” Asaro says. “We don’t love killer robots. We want to build robots that do good things for the world.”
