You are viewing a single comment's thread from:

RE: Moral Robots

in #science · 8 years ago

The ethics of driving decisions is something we humans rarely think about, but it is something autonomous-car designers will need to confront squarely in order to design an effective system.

One possible solution not contemplated in the OP is that an autonomous car's ethics should be a strictly legal/regulatory compliance-based system. Expecting a machine to exhaustively analyze every situation on a case-by-case basis the way humans do is arguably unreasonable given the current state of the art in AI. But we can design one that follows the letter of the law by rote, and we can refine the way we write such rules so that they unambiguously define the behavior we want from autonomous vehicles.

For car accidents we attribute liability and responsibility, and for a functioning self-driving car we are really only concerned about negligence: has the creator of the self-driving system negligently designed it in a way that causes harm? This approach greatly simplifies the trolley problem of whether to kill the driver or run over the elderly pedestrians. Instead of requiring a computer-vision system that knows exactly what it is about to hit and then performs some utilitarian calculus weighing the value of human lives, the machine simply obeys, by rote, the driving instructions imposed on it.

In some contexts this may mean running over the elderly pedestrians because they were improperly crossing the street at a dangerous intersection. And sometimes it will mean the car chooses to hit the brick wall, killing the driver, perhaps because humans decided that in certain zones or streets the danger to pedestrians is so great that the car should crash rather than enter them (a crowded shopping mall, for example). The car cannot be expected to make such complex decisions on its own, so humans need to reduce them to a more primitive set of instructions: "drive here" and "never, ever drive here" style commands.
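To make the idea concrete, here is a minimal sketch of what such a rote, zone-based rule system could look like. Everything in it (the `Zone` and `ZoneRule` names, the `next_action` planner, the example zones) is hypothetical and invented purely for illustration; it only shows that the car consults human-assigned rules rather than weighing lives itself.

```python
from dataclasses import dataclass
from enum import Enum


class ZoneRule(Enum):
    """Rules attached to map regions by humans ahead of time."""
    DRIVE_ALLOWED = "drive here"
    DRIVE_FORBIDDEN = "never, ever drive here"


@dataclass
class Zone:
    name: str
    rule: ZoneRule


def next_action(candidate_zones: list[Zone]) -> str:
    """Pick the next maneuver by rote rule lookup, not case-by-case ethics.

    The car never weighs lives; it only checks the rule humans attached
    to each zone. If every candidate zone is forbidden (e.g. a crowded
    pedestrian area), the fallback is to brake to a stop, even at cost
    to the vehicle and its occupant.
    """
    for zone in candidate_zones:
        if zone.rule is ZoneRule.DRIVE_ALLOWED:
            return f"proceed into {zone.name}"
    return "emergency stop"  # no permitted path: refuse to enter, full braking


# Example: a shopping-mall plaza is marked forbidden, so the planner
# brakes rather than swerving into it, regardless of what is on the road.
plaza = Zone("mall plaza", ZoneRule.DRIVE_FORBIDDEN)
lane = Zone("main road", ZoneRule.DRIVE_ALLOWED)
print(next_action([plaza]))        # -> "emergency stop"
print(next_action([plaza, lane]))  # -> "proceed into main road"
```

The point of the sketch is that all the ethically loaded judgment happens up front, when humans label the zones; at runtime the vehicle only does a compliance lookup.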


Just in case it wasn't noted, I'm making indirect reference to Steemit upvote/downflag bots.

Still, it's a great addendum.

The only point I am not quite "in tune" with is this one: "The car cannot be expected to make such complex decisions."

There was a time when humans made "chess playing" bots that existed only on paper (http://www.turingarchive.org/browse.php/B/7). Today, "Deep Blue" is the cornerstone of a whole AI development discipline.

As I clearly state in the article, one cannot constrain AIs within human parameters.
