A Global Arms Race for Killer Robots Is Transforming the Battlefield

ATLANTIC OCEAN, MAY 13, 2013: Northrop Grumman personnel conduct pre-operational tests on an X-47B Unmanned Combat Air System (UCAS) demonstrator on the flight deck of the aircraft carrier USS George H.W. Bush (CVN 77), which was scheduled to be the first carrier to catapult-launch an unmanned aircraft from its flight deck. The Navy plans to field unmanned aircraft on each of its carriers for surveillance and, eventually, armed combat roles. (Photo: Mass Communication Specialist 3rd Class Kevin J. Steinberg/U.S. Navy via Getty Images)

 

Over the weekend, experts on military artificial intelligence from more than 80 world governments converged on the U.N. offices in Geneva for the start of a week’s talks on autonomous weapons systems. Many of them fear that after gunpowder and nuclear weapons, we are now on the brink of a “third revolution in warfare,” heralded by killer robots — the fully autonomous weapons that could decide who to target and kill without human input. With autonomous technology already in development in several countries, the talks mark a crucial point for governments and activists who believe the U.N. should play a key role in regulating the technology.

The meeting comes at a critical juncture. In July, Kalashnikov, the Russian arms manufacturer, announced it was developing a weapon that uses neural networks to make “shoot/no-shoot” decisions. In January 2017, the U.S. Department of Defense released a video showing an autonomous drone swarm of 103 individual robots successfully flying over California. Nobody was in control of the drones; their flight paths were choreographed in real time by an advanced algorithm. The drones “are a collective organism, sharing one distributed brain for decision-making and adapting to each other like swarms in nature,” a spokesman said. The drones in the video were not weaponized — but the technology to do so is rapidly evolving.



This April also marks five years since the launch of the Campaign to Stop Killer Robots, which called for “urgent action to preemptively ban the lethal robot weapons that would be able to select and attack targets without any human intervention.” The 2013 launch letter — signed by a Nobel Peace Laureate and the directors of several NGOs — noted that such weapons could be deployed within the next 20 years and would “give machines the power to decide who lives or dies on the battlefield.”

Five years on, armed drones and other weapons with varying degrees of autonomy have become far more commonly used by high-tech militaries, including the U.S., Russia, the U.K., Israel, South Korea and China. By 2016, China had tested autonomous technologies in each domain: land, air and sea. South Korea announced in December it was planning to develop a drone swarm that could descend upon the North in the event of war. Israel already has a fully autonomous loitering munition called the Harop, which can dive-bomb radar signals without human direction and has reportedly already been used with lethal results on the battlefield. The world’s most powerful nations are already at the starting blocks of a secretive and potentially deadly arms race, while regulators lag behind.

“Many countries, particularly leading developers of robotics, have been quite murky about how far they want the autonomy to go,” says Paul Scharre of the Center for a New American Security. “Where is the line going to be drawn between human and machine decision-making? Are we going to be willing to delegate lethal authority to the machine?”

That’s exactly the question a group of NGOs called the Campaign to Stop Killer Robots is urgently trying to get countries to discuss at the United Nations under the Convention on Conventional Weapons, where talks have been held each year since 2013. It’s the same forum where blinding laser weapons were successfully banned in the past.

For years, states and NGOs have discussed how advances in artificial intelligence are making it increasingly possible to design weapons systems that could exclude humans altogether from the decision-making loop for certain military actions. But with talks now entering their fifth year, countries have yet to even agree on a common definition of autonomous weapons. “When you say autonomous weapon, people imagine different things,” says Scharre. “Some people envision something with human-level intelligence, like a Terminator. Others envision a very simple robot with a weapon on it, like a Roomba with a gun.”



An expert in such matters is Professor Noel Sharkey, head judge on the popular BBC show Robot Wars, where crude weaponized (though non-autonomous) robots battle it out in front of excited crowds. When he’s not doing that, Sharkey is a leading member of the Campaign to Stop Killer Robots, which, in an effort to overcome the impasse at the U.N., has suggested its own definition of autonomy.

“We are only interested in banning the critical functions of target selection and applying violent force,” he says. “Two functions.” That precise approach, he insists, will not impede civilian development of artificial intelligence, as some critics suggest. Nor will it affect the use of autonomy in other strategic areas, such as missile defense systems that use artificial intelligence to shoot down incoming projectiles faster than a human operator ever could. But those two functions are exactly what militaries around the world are actively researching, and the prospect makes some nations uneasy.

It is official U.S. Department of Defense (DoD) policy that autonomous and semi-autonomous weapons should “allow commanders and operators to exercise appropriate levels of human judgment over the use of force,” and that such judgment should be in accordance with the laws of war. But the U.S. refuses to put its weight behind the Campaign to Stop Killer Robots, which wants similar assurances of meaningful human control to be codified into international humanitarian law. A DoD spokesperson told TIME: “The United States has actively supported continuing substantive discussions [at the U.N.] on the potential challenges and benefits under the law of war presented by weapons with autonomous functions. We have supported grounding such discussions in reality rather than speculative scenarios.”

Russia is also reluctant to support regulation, arguing similarly to the U.S. that international humanitarian law is sufficient as it stands. China remains muted. “There’s a strategic factor to this,” says Dr. Elke Schwarz, a member of the International Committee for Robot Arms Control. “It’s clear that the U.S., Russia and China are vying for pole position in the development of sophisticated artificial intelligence.”

 

The Campaign to Stop Killer Robots has a growing list of 22 countries that have formally agreed to support a pre-emptive ban on the technology. But none of those countries are developers of the technology themselves, and most have small militaries.

Scharre, who has previously worked for the Pentagon and helped establish its policy on autonomy, disagrees that a blanket ban is the right approach to the issue. “The historical record suggests that weapons bans are sometimes successful but other preconditions have to be met,” he says. “One is that you have to be able to clearly articulate what the thing is you’re trying to ban, and the sharper the distinction between what’s allowed and what’s not, the easier it is.” This distinction is important not only in international law, he says, but on the battlefield itself.

That distinction has already proved difficult to draw: In April 2016, an Israeli-made Harop drone was reportedly used in the region of Nagorno-Karabakh, a territory disputed by Azerbaijan and Armenia. The weapon is capable of operating either fully autonomously or under human direction, and it is therefore unclear whether the seven people killed were the first ever to be killed by a killer robot. It’s a sharp example of the difficulties future regulation of autonomous weapons might face. “There’s constant debate inside the Campaign as to when we remove the word ‘preemptive’ from our call for a ban,” its coordinator, Mary Wareham, tells TIME.

ATLANTIC OCEAN, MAY 17, 2013: An X-47B unmanned combat air system (UCAS) demonstrator performs a touch-and-go landing on the flight deck of the aircraft carrier USS George H.W. Bush (CVN 77), the first time any unmanned aircraft has completed a touch-and-go landing at sea. (Photo: Mass Communication Specialist 2nd Class Timothy Walter/U.S. Navy via Getty Images)

Both sides of the debate bring up the example of aerial bombardment to illustrate just how fraught regulating weapons can be. In the run-up to the Second World War, there were repeated diplomatic attempts to put a blanket ban on aerial bombardment of cities. “It was such an indiscriminate form of warfare,” says Sharkey. “But there were no treaties. And then it became normal.” Hundreds of thousands of civilians were killed by aerial bombardment in Europe alone during the war. Today, the Syrian government’s use of nerve gas (which is illegal under international humanitarian law) has at times drawn more international condemnation than the killings of many more of its civilians by aerial bombardment (which isn’t).



Even though attacking civilians goes against international humanitarian law, Sharkey argues the lack of a specific treaty means it can happen anyway. He fears the same might be the case with killer robots in the future. “What we’re trying to do is stigmatize the technology, and set up international norms,” he says.

But Scharre argues the opposite. “When push comes to shove and there’s an incredible military technology in a major conflict, history shows that countries are willing to break a treaty and use it if it will help them win the war,” he says. “What restrains countries is reciprocity. It’s the concern that if I use this weapon, you will use it against me. The consequences of you doing something against me are so severe that I won’t do it.” It’s that thinking that drives current U.S. policy on autonomy.

The implication of this approach is a return to the Cold War tenet of mutually assured destruction. But campaigners say the risks could be even higher than those of nuclear weapons, as artificial intelligence brings with it a level of unpredictable complexity. Many in the tech community are concerned that autonomous weapons might carry invisible biases into their actions. Neural network technology, in which machines crunch vast amounts of data and adjust their own behavior in response to results, forms the backbone of much of today’s AI. One risk is that not even the technology’s creators know exactly how the final algorithm works. “The assumption that once it’s in the technology it becomes neutral and sanitized, that’s a bit of a problem,” says Schwarz, who specializes in the ethics of violent technologies. “You risk outsourcing the decision of what constitutes good and bad to the technology. And once that is in the technology, we don’t typically know what goes on there.”

Another fear campaigners have is what might happen if the technology goes wrong. “When you have an automated decision system, you have a lack of accountability,” Schwarz continues. “Who is responsible for any sort of problem that occurs? Who is responsible for a misinterpretation of the facial recognition?” The risk is that if a robot kills somebody mistakenly, nobody knows who to blame.

The final concern — in many ways the simplest — is that for many people the idea of delegating a life-or-death decision to a machine crosses a moral line.

Last November, a video titled Slaughterbots, purporting to be from an arms convention, appeared on YouTube and quickly went viral. Set in the near future, Slaughterbots imagines swarms of nano-drones decked out with explosives and facial recognition technology, able to kill targets independently of human control. The weapon’s owner has only to decide whom to target, using parameters like age, sex and uniform. The video cuts to the technology in action: four fleeing men are encircled by a squad of drones and executed in seconds. “Trust me,” an executive showing off the technology tells the crowd. “These were all bad guys.” The video cuts away again, this time to scenes of chaos — a world in which killer robots have fallen into the hands of terrorists.

The video was released the day before the most recent U.N. talks started in November. But it didn’t have the desired effect. Afterward, the Campaign to Stop Killer Robots called 2017 a “lost year for diplomacy.” Campaigners still hope an outright ban can be negotiated, but that relies on this week’s meeting of experts (a precursor to a conference of “high contracting parties” in November, where formal decisions can be made) going well.

Wareham, the Campaign’s coordinator, is optimistic. “All the major powers who are investing in autonomous weapons are all in the same room right now in a multilateral setting,” she says.

At the end of Slaughterbots, its creator, Stuart Russell, professor of artificial intelligence at Berkeley, makes an impassioned plea. “Allowing machines to choose to kill humans would be devastating to our security and freedom,” he says. “We have an opportunity to prevent the future you just saw, but the window to act is closing fast.” But if killer robots really are going to revolutionize warfare the way nuclear weapons did, history shows powerful countries won’t sign away their arsenals without a fight.

Correction: The original version of this story misstated the types of weapons banned by the Convention on Conventional Weapons. The Convention regulates landmines, it does not ban them.

TIME

 

 
