
How to teach self-driving cars ethics of the road

7 Comments
By JUSTIN PRITCHARD

The requested article has expired and is no longer available. Related articles and user comments are shown below.

© Copyright 2014 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.

©2024 GPlusMedia Inc.


While cars that do most or even all of the driving may be much safer

Unlikely. Is Google going to be willing to accept lawsuits for the first pedestrian killed by one of these things?


Solid reason for why humans should drive cars instead of computers. Computers do not think; they simply run preprogrammed logic. There is simply no way to preprogram all possible scenarios, conditions, exceptions, nuances, and possibilities with all possible actions. Yes, a computer can probably avoid accidents better, but when an accident with a death is going to happen, someone has to choose who dies. Only a thinking being can do this. Plus there will always be unknowns, something no computer can handle. Could there ever be an artificially created thinking being? Sure. But then a new ethics question comes up: would it be slavery to chain a thinking being to a car as a driver?

Anyway, besides the fact that computers don't think, you have the liability question. How do you know the programmed response was correct? If not, who pays? What if the programming misses something? Say on the left is a life-size statue of a woman with a stroller, and real people are on the right. The computer is coded to protect an infant first, so the real people die. Who pays for that mistake? What if the computer glitches, sees a phantom truck, veers, and kills someone? Who pays? Worse, what if BMW or Google develop something, and politicians and scientists who want their pet computer ethics installed reach a consensus with the government forcing Google to use their pet logic, and someone is killed? Is the government responsible for pushing bad logic?


The question isn't whether people will die in accidents with self-driving cars, because that will likely happen at some point. Using it as a reason to not allow them requires the assumption that no one dies in accidents with human-driven cars. And since we know that many people die in road accidents, the question needs to be whether fewer people will die with self-driving cars. If that number goes down significantly, then it makes much more sense to have self-driving cars than human-driven cars. The goal being harm reduction.
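The harm-reduction argument above is ultimately an expected-fatalities comparison. A back-of-envelope sketch with invented rates (none of these numbers are real statistics):

```python
# Illustrative numbers only: compare expected annual road deaths
# under human vs. automated driving at assumed fatality rates.
def expected_deaths(km_driven, deaths_per_billion_km):
    return km_driven / 1e9 * deaths_per_billion_km

km = 500e9         # assumed yearly km driven nationwide
human_rate = 5.0   # assumed deaths per billion km, human drivers
auto_rate = 2.0    # assumed rate if automation cuts risk by more than half

print(expected_deaths(km, human_rate))  # 2500.0
print(expected_deaths(km, auto_rate))   # 1000.0
```

If the automated rate really were lower, the expected-death count drops proportionally; the whole debate is over whether that assumed rate holds in practice.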

The advantage with self-driving cars is also that if there is an accident, the details can be analyzed and the program adapted to ensure it doesn't happen again. When the updated program is deployed to all cars, everyone is protected from that mistake recurring. When a human makes a mistake, they are the only beneficiary of learning how not to make it again - if they survive. Of course we can then alter our driver training to try to ensure that others don't also make this mistake, but this is nowhere near as efficient as deploying software to all self-driving cars.


@Strangerland The difference is that the programmer of a self-driving car would almost certainly be liable for any accident involving those cars. That financial burden may be too great even for the likes of Google. I don't doubt that IF they had a better safety record they would be worthwhile, but I think there are huge technical challenges that may never be overcome to achieve it. Replacing 10% of the roads with subways may be more cost-effective in the end (coming from the US, where we have a dismal lack of subways).


How about having vehicles communicate with each other, sharing information about speed, weather/road conditions, where the next individual turn will be (excluding destination), pedestrian presence, etc.? If the AIs could communicate, they could collectively choose the safest driving methods and perhaps not be subject to the irrational, emotional whims of beings who are not evolutionarily adapted to speeds above that of a human dash. Of course, this would be limited to the vehicles, meaning scenarios involving accidents with (sometimes irrational) pedestrians/cyclists would still need to be taken into account.
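The state-sharing idea above could look roughly like this. This is a minimal sketch with hypothetical message fields, not any real V2V standard (actual standards such as the SAE J2735 Basic Safety Message define very different fields):

```python
from dataclasses import dataclass

@dataclass
class V2VMessage:
    # Hypothetical fields for illustration only.
    vehicle_id: str
    speed_kmh: float
    road_friction: float    # 1.0 = dry asphalt, lower = slippery
    pedestrians_ahead: bool

def safest_speed(own_limit_kmh: float, messages: list) -> float:
    """Pick a collective speed: the most cautious report wins."""
    speed = own_limit_kmh
    for m in messages:
        if m.pedestrians_ahead:
            speed = min(speed, 30.0)  # slow down near reported pedestrians
        # scale down by the worst road condition anyone reports
        speed = min(speed, own_limit_kmh * m.road_friction)
    return speed

msgs = [V2VMessage("A", 80.0, 0.5, False), V2VMessage("B", 60.0, 1.0, True)]
print(safest_speed(100.0, msgs))  # 30.0
```

The design choice here is deliberately pessimistic: any single cautious report caps the whole group's speed, which is the "collectively choose the safest" behavior the comment describes.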


"Companies that are testing driverless cars are not focusing on these moral questions."

"This is one of the most profoundly serious decisions we can make. Program a machine that can foreseeably lead to someone's death."

Since morals and ethics are a rubber stamp and of no concern: given the comedy of errors with unintended acceleration, faulty ignitions, and shrapnel air bags, what makes anyone think these wise men of science can make driverless cars that won't screw up?

Sure, it's nice to have a magic stick that makes all problems go away, but how many children should be run over because Driver2.0 got hacked?

Driverless may make sense for an armored convoy in transit through the desert, or on vast stretches of uniform roads traveled by nothing but driverless freighters. The last thing anyone should even want is turning a 1,380 kg Prius loose to drive to the liquor shop or bar.

Or, maybe that's why driverless makes sense.

Japan's 600 cars per thousand people, driving themselves in any useful way? Sheer nonsense.


Solid reason for why humans should drive cars instead of computers. Computers do not think, they simply run preprogrammed logic.

Solid reason for why computers should drive cars instead of humans. Drunk humans do not think NOR run any sort of logic, pre-programmed or not.

There is simply no way to preprogram all possible scenarios, conditions, exceptions, nuances and possibilities with all possible actions.

And there is simply no way a human can respond correctly to all possible scenarios, conditions, exceptions, nuances, and possibilities with all possible actions. You've just made the case that a computer is going to be just as effective as a human in out-of-the-ordinary situations.

Yes a computer can probably avoid accidents better, but when an accident with a death is going to happen, someone has to choose who dies.

The computer would choose the result with the least amount of damage and deaths. A car loaded with four passengers isn't going to intentionally ram head-on into an oncoming truck just to avoid hitting a single pedestrian. It's not rocket science, though it may seem cold and calculating. When you're faced with the "Kobayashi Maru" scenario, there IS no "right" answer, only a bunch of less-than-desirable outcomes. This is true whether a human or a computer is controlling the vehicle.
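The "least damage" choice described above can be framed as minimizing expected harm across the available maneuvers. A toy sketch with made-up probabilities and counts, not any manufacturer's actual logic:

```python
# Toy expected-harm minimization; all numbers are invented for illustration.
def expected_harm(outcomes):
    """outcomes: list of (probability_of_injury, people_affected) pairs."""
    return sum(p * n for p, n in outcomes)

maneuvers = {
    # maneuver -> list of (injury probability, people at risk)
    "ram_oncoming_truck": [(0.9, 4)],            # four passengers, high risk
    "brake_hard":         [(0.3, 1), (0.1, 4)],  # pedestrian and passengers
    "swerve_to_shoulder": [(0.2, 4)],
}

best = min(maneuvers, key=lambda m: expected_harm(maneuvers[m]))
print(best)  # prints "brake_hard": 0.3*1 + 0.1*4 = 0.7, the lowest total
```

As the comment says, this is cold and calculating but not complicated: the scenario with four passengers never rams the truck, because that option's expected harm dominates every alternative.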

Only a thinking being can do this.

As I've noted above, computers can make decisions that minimize injury and death, and do so faster than the human nervous system can respond, so your statement is false.

Plus there will always be unknowns, something no computer can handle.

Just as there will always be unknowns, something no human can handle. Again, you're not really making a case for how superior a human driver is over a computer driver.

(All the discussions of liability were very hard to read - apparently you were getting drunker as you typed.) Liability IS always a concern, whether a human is operating the vehicle or a computer. What seems to be lost in all your liability scenarios is the CAUSE of the computer-operated vehicle having to swerve in the first place.

If we accept that a computer-operated car isn't going to speed, change lanes unsafely, or run stop signs and red lights, then we're left with the corollary that the only accidents actually CAUSED by the computer-operated car would be due to catastrophic failure of a component that leaves the computer unable to control the car and bring it to a safe stop. In those cases, liability would depend on whether the owner was performing proper maintenance and whether there was a flaw in the design or programming. In all other cases, liability would fall on the other vehicle that put the computer-controlled vehicle in the position of having to choose one life over another. The answer really isn't that complicated.

