
Giving robots ‘personhood’ is actually about making corporations accountable

January 20, 2017

BACKGROUND

The European Union is currently considering the need to redefine the legal status of robots, with a draft report last week suggesting that autonomous bots might, in the future, be granted the status of “electronic persons” — a legal definition that confers certain “rights and obligations.” It sounds like science fiction and that’s because it is: any engineer will tell you we’re a long way from seeing robot marches for civil rights. So what’s going on here?

For a start, this is only a draft report. It’s not actual legislation, just a series of recommendations for the EU’s law-making body, which could always ignore them completely. And although parts of the report are a bit odd (Frankenstein’s monster, the Greek myth of Pygmalion, and the Golem of Prague are all referenced in the first paragraph alone), at its core it’s interested in the rights of people, not the rights of robots.

The question it’s asking is this: if something goes wrong with an autonomous system, who do we blame? And how are they held accountable under current legal systems? It turns out that making robots into “electronic persons” might actually help with some of these problems.

WHAT DOES THE REPORT ACTUALLY SAY?

The first section of the report lays out some familiar background. Robots in various forms are becoming more and more sophisticated. They’re popping up in diverse industries — from health care to transportation to the military. And they’re also becoming more autonomous, able to “take decisions and implement them in the outside world” without external input. As autonomy increases, robots feel less and less like simple tools. But if they’re not tools, what are they? As a society, the report notes, we’ve yet to decide what they are instead. Not legally, anyway.

To deal with the challenges posed by autonomous robots, the report makes a number of suggestions. These include the following:

  • Legally define what “smart autonomous robots” are so everyone knows what we’re talking about
  • Create a central register of these bots, so members of the public can work out who controls and who owns particular robots
  • Write a code of ethics for robot manufacturers and researchers that reflects the EU’s Charter of Fundamental Rights — i.e., respect human dignity and the right to privacy, avoid discrimination, and so on
  • Fund a new EU agency for AI and robotics research
  • Create a new legal status for robots (“electronic persons”) that would bring them into the existing system of civil liability

WHY “PERSONHOOD”?

It’s this last point that has sparked sensationalist coverage, with many outlets hinting (or just stating outright) that the EU wants to give robots something akin to human rights.

Mady Delvaux, the Luxembourgian MEP responsible for presenting the report to the public, says this is absolutely not the report’s intention. “Robots are not humans and will never be humans,” Delvaux tells The Verge. She explains that when discussing this idea of personhood, the committee that drafted the report considered the matter to be similar to corporate personhood — that is to say, making something an “electronic person” is a legal fiction rather than a philosophical statement.

But Burkhard Schafer, a professor of computational legal theory at the University of Edinburgh, says using the phrase was a mistake to begin with. “People read about ‘electronic personhood’ and what they think is ‘robots deserve recognition’ like it’s a human rights argument,” he tells The Verge. “That’s not how lawyers think about legal personality. It’s a tool of convenience. We don’t give companies legal personality because they deserve it — it just makes certain things easier.”

The same might be true of robots.

The Robear is a service robot developed in Japan for use in the care industry. (Image: Sam Byford)

 

Imagine, says Schafer, that you’re disabled and rely on a home robot to help you around the house. One of the things this robot does is monitor your diet and make sure you get enough of the right food. One day it notices you’re low on vegetables and orders you some more. “At this point we want to make sure that there’s a valid contract in place between the robot and the shop,” says Schafer. “We don’t want the grocer to say ‘I negotiated with a machine so the contract’s not valid.’ It makes it easier then to give the robot a legal personality in a purely technical sense.”

He notes that similar plans were debated in the 1990s regarding bits of computer software known as “intelligent agents” that were used to handle legal contracts. The plans were dropped, though, as “overkill,” says Schafer. “But there are certain things at the moment that, by default, only humans can do, which in the future we might allow machines to do as well.”

ESTABLISHING LIABILITY IS TOUGH ENOUGH

At any rate, “electronic personhood” is more of a sideshow than a serious consideration at this point — it’s legal liability for autonomous systems that’s the report’s most important concern. As a baseline, the report’s authors suggest the EU draft legislation making it clear that people need only establish “a causal link between the harmful behavior of the robot and the damage suffered by the injured party” to claim compensation from a company.

This is intended to stop companies from shifting blame onto the autonomous systems themselves. So, for example, the makers of a self-driving car can’t claim they’re not responsible if it crashes just because it was driving itself at the time. Delvaux suggests this won’t be much of a problem in the short term, as the current generation of robots simply isn’t complex enough to make establishing causation difficult. “But when you have the next generation of robots which are self-learning, that is another question,” she says.

Schafer says: “The concern they are having is that robots as autonomous agents might become so unpredictable they interrupt the chain of causal attribution. And the company says: ‘That was not foreseeable for us and we couldn’t know our self-learning system would make the car start chasing pedestrians off the street.’”

Self-driving cars (like this prototype from Google, photographed in Kirkland, Wash.) could provide some of the first test cases for robot liability. (Image: Google)

This is where things get murky. The report suggests creating a legal system where robot liability is proportionate to autonomy — i.e., the more self-directed a system is, the more responsibility it assumes (as opposed to its human operator). But that just raises more questions, like how do you measure autonomy in the first place? And what if the self-learning system is learning from its environment? If a self-driving car is taught bad driving habits by its owner and crashes, is that still the manufacturer’s fault?

One way to sidestep these problems, the report suggests, might be to create a mandatory insurance scheme for autonomous robots. If you make a robot (or the software that controls it), you pay into the scheme. If an accident happens, the injured party receives compensation from the fund. That way there’s less incentive for companies to try to dodge responsibility — the money for compensation has already been paid in.

WHAT NEXT?

It should be stressed that the report’s contents are only suggestions. It’s still in draft status and has yet to be passed on to the European Commission — the part of the EU that actually drafts legislation. When that happens, sometime in the next couple of months, the Commission can take notice of the recommendations (which sources say is most likely) and start thinking up possible legislation. But this would be a long process, taking a year at least, with no guarantee that the report’s recommendations or wording would be heeded.

For the moment, though, we’re not in dire need of new legislation. Olaf Cramme, a tech and policy specialist at management consultancy Inline Policy, says the current system can cope with liability claims involving autonomous systems — just about.

“The tort system is very developed and there are a variety of laws that could apply in these cases,” Cramme tells The Verge. “But there are some fundamental problems. Accidents caused by self-driving vehicles, for example, will increase the complexity of any case.” Cramme says it’s probably better to draft new legislation for these scenarios rather than clogging up the courts by forcing lawyers to follow ever-more-complex lines of liability. Insurance companies are open to this, he says. “They’re excited because it should open new insurance models and new products which they can sell. It’s a great commercial opportunity.”

So: new laws are going to be needed, but we don’t yet know what they’ll look like, and the concept of “electronic personhood” might be too freighted with meaning to be politically useful. Change is going to come, though, one way or another. “Robots are not science fiction,” says Delvaux. “These are not extraordinary beings attacking our world. It is technology that should serve us, and we need a realistic view of what is possible.”

 

 Source: theverge.com

Written by: New Generation Radio
