Before we can put our values into machines, we have to figure out how to make our values clear and consistent.

http://mobile.nytimes.com/2016/06/24/technology/should-your-driverless-car-hit-a-pedestrian-to-save-your-life.html

Comments

  1. I would say the correct answer is: it depends. Whoever has the highest probability of not surviving each outcome should be the one the decision protects.
    For example, sometimes braking hard enough to avoid touching the pedestrian could severely injure the driver, while the pedestrian would only get some bruises if the braking were gradual enough not to hurt the driver.
    In that case, I would say go ahead and hit the pedestrian, because it is the lesser evil.
    In the case in the article, it would depend on how many people are in the car. Are there more people in the car than the number it would have to run over to avoid the wall? If yes, well, sorry for the pedestrians.
    If not, well, sorry for the occupants.
    Go for whatever saves more lives instead of what is better for you. (A sketch of this rule in code follows the comments.)

  2. Ana Prados The point of the article is that most people disagree with that altruistic approach.

    Think about this: If you sit down in an autonomous vehicle, there is an implicit trust relationship. You have a reasonable expectation that the AI that is driving will do everything in its power to protect your life. You have the same expectation of a human cabbie or Uber driver.

    So: should sacrificing you be a user setting?

  3. Sarah Rosen No, the system should do what is best for everyone involved; that is why I favor letting the AI make the call instead of human bigotry and egoism.

    (Note that my expectations of humans are especially low these days, as I live in the UK.)

  4. Ana Prados I feel your pain. It appears that slightly over half of your voting countrymen have caught a case of American-style arrogance.

    That said, what should be and what will be are likely to be two very different things because justice and the law have nothing to do with each other. The default behavior of AIs will be determined not by the programmers but by the corporate CEOs, politicians and insurance company lawyers. Whatever exposes the corporate interests to the least amount of liability will be the default.

  5. I think I saw a poll somewhere where the question was "if your driverless car is in an accident and it can either save you or the family on the sidewalk, who should it save?" and the result was to save the family. The follow-up question was "would you want to use that car?" and the result was an overwhelming no. So humans as a whole are split on this topic, which makes it very difficult for whoever has to make the decision.

  6. Which autonomous car would be easier to insure? One that minimizes total casualties, or one which protects the driver at all costs? How about the driver versus passengers?


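Comment 1 proposes a concrete rule: pick whichever maneuver is expected to kill the fewest people, with no extra weight for the car's own occupants. Here is a minimal sketch of that rule in Python; the names (Outcome, expected_casualties, choose) and the risk numbers are hypothetical illustrations, not anything from a real vehicle stack.

from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and its estimated human cost."""
    description: str
    fatality_risks: list[float]  # per-person probability of death, occupants and pedestrians alike

def expected_casualties(outcome: Outcome) -> float:
    # Expected number of deaths = sum of the individual fatality probabilities.
    return sum(outcome.fatality_risks)

def choose(outcomes: list[Outcome]) -> Outcome:
    # The purely altruistic rule: minimize expected deaths,
    # regardless of who is inside or outside the car.
    return min(outcomes, key=expected_casualties)

# The article's scenario: swerve into a wall (two occupants at high risk)
# or continue toward three pedestrians.
options = [
    Outcome("hit the wall", [0.7, 0.7]),             # 1.4 expected deaths
    Outcome("hit the pedestrians", [0.6, 0.6, 0.6]), # 1.8 expected deaths
]
print(choose(options).description)  # -> "hit the wall"

Comment 2's objection is easy to state in the same terms: a rider would want choose to weight the occupants' risks above everyone else's, and comment 4 predicts those weights will be set by executives and lawyers, not programmers.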
