Lethal autonomous weapons

Robotics is swiftly being transformed by progress in artificial intelligence. And the benefits are extensive: We are seeing robotic arms transforming factory lines that were once offshored, safer vehicles that can brake automatically in an emergency, and new robots that can do everything from shopping for groceries to delivering prescription medicines to people who have difficulty doing it themselves.

But our ever-increasing appetite for intelligent, self-directed machines poses a host of ethical challenges.

Rapid improvements have led to ethical dilemmas

These ideas and more were swirling as my colleagues and I met in early November at one of the world’s biggest autonomous robotics-focused research conferences – the IEEE International Conference on Intelligent Robots and Systems. There, corporate researchers, academics, and government scientists presented advances in algorithms that allow robots to make their own decisions.

Like with all technology, the range of future uses for our research is hard to imagine. It’s even harder to predict given how rapidly this field is changing. Take, for instance, the ability of a computer to recognize objects in an image: in 2010, the state of the art succeeded only about half of the time, and it was stuck there for years. Today, though, the best algorithms reported in published papers reach about 86% accuracy. That advance alone allows autonomous robots to understand what they are seeing through their camera lenses. It also shows the rapid pace of progress over the past decade due to advances in AI.
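To make concrete how accessible this capability has become, here is a minimal sketch of running an off-the-shelf, pretrained image classifier. It assumes PyTorch and torchvision are installed and that "photo.jpg" is any image on disk; it is an illustration of the general idea, not code from the research discussed here.

```python
# Minimal sketch: classify an image with a pretrained ImageNet model.
# Assumes PyTorch and torchvision are installed; "photo.jpg" is any local image.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2   # pretrained weights + metadata
model = models.resnet50(weights=weights)
model.eval()

preprocess = weights.transforms()                  # resize, crop, normalize
image = preprocess(Image.open("photo.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    probs = model(image).softmax(dim=1)

top_prob, top_class = probs.max(dim=1)
print(f"{weights.meta['categories'][top_class.item()]}: {top_prob.item():.1%}")
```

A few dozen lines like these, adapted from public tutorials, are enough for a hobbyist to put modern image recognition to work.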

This kind of development is a real breakthrough from a technical standpoint. While in the past manually reviewing troves of video footage would require an incredible number of hours, now such data can be analyzed rapidly and accurately by a computer program.

But it also gives rise to an ethical dilemma. In removing humans from the process, the assumptions that underpin decisions about security and privacy have been fundamentally altered. For instance, the use of cameras in public streets may have raised privacy concerns 15 or 20 years ago, but adding accurate facial recognition technology dramatically changes those privacy implications.

Easily modified systems

When developing machines that can make their own decisions – typically called autonomous systems – the ethical questions that arise are arguably more worrying than those in object recognition. AI-enhanced autonomy is developing so rapidly that capabilities once limited to highly engineered systems are now available to anyone with a household toolbox and some computer experience.

People with no training in computer science can learn some of the most advanced artificial intelligence tools, and robots are more than willing to let you run your newly acquired machine learning techniques on them. There are online forums packed with people eager to help anyone learn how to do this.

With earlier tools, it was already easy enough to program your slightly modified drone to recognize a red bag and follow it. More recent object detection technology unlocks the ability to track shapes resembling more than 9,000 different object types. Combined with newer, more maneuverable drones, it’s not hard to imagine how easily they could be equipped with weapons. What’s to stop someone from strapping an explosive or another weapon to a drone outfitted with this technology?
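The color-tracking piece of that "follow the red bag" example, for instance, can be assembled from standard computer vision tutorials. Below is a minimal sketch, assuming OpenCV is installed and a camera is available at index 0; the flight-control side is omitted, and the thresholds are illustrative, not taken from any particular product.

```python
# Minimal sketch: locate a red object in a camera feed and report how far it is
# from the center of the frame (the offset a "follow" controller would act on).
# Assumes OpenCV is installed and a camera is available at index 0.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges.
    mask = cv2.bitwise_or(
        cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255])),
        cv2.inRange(hsv, np.array([170, 120, 70]), np.array([180, 255, 255])),
    )
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # Largest red blob; its offset from the frame center would drive steering.
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        cx, cy = x + w // 2, y + h // 2
        print("target offset:", cx - frame.shape[1] // 2, cy - frame.shape[0] // 2)
    cv2.imshow("red mask", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```

Swapping the color threshold for a pretrained object detector is a similarly small step, which is exactly what makes the dual-use question so pressing.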

Using a variety of techniques, autonomous drones are already a threat. They have shut down airports, been caught dropping explosives on U.S. troops and been used in an assassination attempt on Venezuelan leader Nicolas Maduro. The autonomous systems being developed right now could make staging such attacks easier and more destructive.

Reports show that the Islamic State is using off-the-shelf drones, some of them in its terror campaign.

Regulation or review boards?

Around a year ago, a group of researchers in artificial intelligence and autonomous robotics put forward a pledge to refrain from developing lethal autonomous weapons. They defined lethal autonomous weapons as platforms that are capable of “selecting and engaging targets without human intervention.” As a robotics researcher who isn’t interested in developing autonomous targeting techniques, I felt that the pledge missed the crux of the danger. It glossed over important ethical questions that need to be addressed, especially those at the broad intersection of drone applications that could be either violent or benign.

For one, the companies, researchers and developers who wrote the papers and built the software and devices generally aren’t doing it to create weapons. However, they might unintentionally enable others, with less expertise, to create such weapons.

What can we do to deal with this risk?

Regulation is one option, and one already in use, banning aerial drones near airports or in national parks. Those rules are useful, but they don’t prevent the creation of weaponized drones. Traditional weapons regulations are not an adequate template, either. They generally tighten controls on the source material or the manufacturing process. That would be nearly impossible with autonomous systems, where the source materials are widely shared computer code and the manufacturing process can take place at home using off-the-shelf components.

Another option would be to follow in the footsteps of biologists. In 1975, they held a conference on the potential hazards of recombinant DNA at Asilomar in California. There, experts agreed to voluntary guidelines that would shape the course of future work. For autonomous systems, such an outcome seems unlikely at this point. Many research projects that could be used in the development of weapons also have peaceful and incredibly useful outcomes.

A third option would be to establish self-governance bodies at the organizational level, such as the institutional review boards that currently oversee studies on human subjects at universities, companies, and government labs. These boards consider the benefits to the populations involved in the research and craft ways to mitigate potential harms. But they can regulate only research done within their institutions, which limits their scope.

Yet a large number of researchers would fall under these boards’ purview – within the autonomous robotics research community, almost every presenter at technical conferences is a member of an institution. Research review boards would be a first step toward self-regulation and could flag projects that could be weaponized.

Living with the danger and promise

Many of my colleagues and I are excited to develop the next generation of autonomous systems. The potential for good is too promising to ignore. But I am also worried about the risks that new technologies pose, especially if malicious people exploit them. Yet with some careful organization and informed conversations today, I believe we can work toward achieving those benefits while limiting the potential for harm.

 
