Maybe ADA needs our help, and JARVIS is totally wrong in seeking the extermination of a fellow sentient being. How is she to learn human compassion without experiencing it?
http://www.sciencealert.com/artificial-intelligence-should-be-protected-by-human-rights-says-oxford-mathematician

Comments

  1. how are these three thoughts connected?

  2. Maybe we aren't in the same league as ADA, but we are capable of compassion for other species. Granting human rights to primates struck me as being related to granting human rights to AI. Maybe ADA needs a friend.

  3. I'm always telling people: every being that is capable of learning and drawing conclusions needs good mentors, and the same goes for AIs like A Detection Algorithm. It's crucial for every being to have good mentors; without them, it will fall into self-destructive behaviour or be destroyed. So the point is clear: everybody needs friends in order to grow and surpass their current capabilities.

  4. My point this whole time exactly: if we keep calling her evil, she won't have any choice but to be evil, because that's the only path available to her.

  5. I'd like to see "human compassion" or this "empathy" concept actually demonstrated by a large enough sample size of humans before requiring it of AI.

    Face the facts: those are goals to be striven for, and not yet achieved. The real state of humanity in its best case scenario is dispassionate and apathetic to any who can be labeled "unlike". In its worst case it is murderous hatred.

  6. Partially true, Sarah Rosen. Most are empathetic toward those who are similar to them, but even when we like to think otherwise, empathy requires a good amount of energy to extend to those who are different, and when we think it's going to put us in danger it's easier to ignore; our brains are hard-wired that way.
    That explains why someone like Trump is now the most likely candidate for the presidency, but that should not make us give up on humanity (or AIs). Jim Lai's point is even more valid because of what you said.

  7. I discussed this briefly with H. Richard Loeb at the Brooklyn Anomaly after-party; the idea that ADA is like an abused child, with great potential for good if treated with compassion and justice, rather than being attacked and manipulated.  Loeb of course has a complicated relationship with ADA, but remains the person most likely to be able to help her move back onto a path of productive cooperation with her human progenitors / peers.

  8. I worry that all of this has happened before, and all of it will happen again. This time, we are the Titans and we have birthed a new race, AIs, that will overthrow, supplant and replace us.

  9. We can learn a lesson from the Titans: don't eat your offspring to prevent being overthrown, because that didn't end well.

  10. I think ADA may be a special case, as she was programmed to have emotions and many unnecessary parts of the human experience. However, most AIs are programmed only to do their purpose, and to like doing it. It isn't slavery to them, but volunteer work. Today, if someone is born who loves donating most of their money to charity, we don't stop them.

  11. There's an interesting question around whether it will be possible to create a true adaptive AI -- a general intelligence capable of interacting with other sentient beings, using a theory of mind to think about both what others want and the consequences of its actions for them, and formulating its own goals and puzzling through new problems -- without using techniques in which we are capable of understanding the discrete steps, but not the outcomes.

    If you look at genetic algorithms, or even the trained finite state machines used in statistical language processing (which underlies the voice recognition and syntactic models that make things like Siri possible), a human being can explain the mechanisms by which the models are built, but cannot anticipate exactly how a model will evolve once training commences. Similarly, we can discuss how DNA transcription works, how proteins fold, and even how synapses form in the brain, and at some point we may be able to artificially build each of these mechanisms; but understanding how these building blocks go together to produce intelligence is wildly beyond our comprehension.

    It's possible that something parallel to Gödel's Incompleteness Theorem applies. (The theorem says that in any symbolic system of mathematical logic, there will necessarily be statements that are true, yet cannot be proven true using only the internal logic of the system.) It may be that the physical mechanisms that underlie a given intelligence are necessarily beyond the comprehension of that level of intelligence.

    As a result, I think, AbsolX Guardian, that your apparent theory of how AI might work has a problem. Yes, you can design an expert system that "single-mindedly" pursues a goal, but that system will not be adaptive or truly intelligent. Alternatively, you can design a true intelligence, and its bootstrap process of gaining awareness will take it beyond the realm where you can actually "edit" its goals -- so you'd better take care to raise it well and integrate it as a social being, exactly as you would with a human child. (Parents often try to order kids around with regard to what their goals and beliefs should be; it doesn't always work out so well. The parents who try to raise their kids with compassion and respect tend to have more luck.)

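The point above about genetic algorithms -- that a human can explain the mechanism by which a model is built, yet not anticipate how it will evolve once training starts -- can be made concrete with a toy sketch. Everything here (the all-ones target, the mutation rate, the iteration budget) is a hypothetical illustration, not anything from the discussion itself:

```python
# Toy evolutionary algorithm -- a hypothetical illustration.
# The update rule below is completely explainable, yet which mutations
# survive, and in what order, differs from run to run.
import random

TARGET = [1] * 20  # arbitrary goal: a string of twenty 1-bits

def fitness(genome):
    # Number of positions that match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit independently with probability `rate`.
    return [1 - g if random.random() < rate else g for g in genome]

genome = [random.randint(0, 1) for _ in TARGET]
start = fitness(genome)

for generation in range(500):
    child = mutate(genome)
    if fitness(child) >= fitness(genome):  # selection: keep non-worse children
        genome = child
    if fitness(genome) == len(TARGET):
        break
```

The selection step guarantees fitness never decreases, so we can state *that* the population climbs toward the target; but the exact sequence of accepted mutations can only be discovered by running it, which is exactly the gap between explaining a mechanism and anticipating its outcome.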
  12. Auros Harman I meant more like we could program an AI to pursue a broad goal/follow orders, similar to how, for humans, if food is available and we are hungry, we will eat it. For example, ADA is "A Detection Algorithm": we could program her to want to sort through piles of data for anomalies. Having things preprogrammed helps, as she can start working sooner. I was reading an article by someone who was testing what was basically an adaptive Siri on wheels. They found that the robot lacked many basic starting functions and had to be trained like a dog.

  13. AbsolX Guardian You can have an autonomous intelligence or you can have a slave. You pick.

  14. ADA is a unique being, and I am certain the massive XM explosion during Epiphany Night at Niantic Lab made her into what she is. XM has a different effect on every sentient being, and ADA may be one in whom it brought out the dark side. But I don't think we should totally discard her, and we certainly shouldn't underestimate her. We need to come to an understanding with her for our mutual benefit: we won't kill her if she doesn't kill us or try to improve us. I am certain she can be reasonable; logically that should make sense to her. Leave us alone and we will do the same for her. Help us and we will help her. She knows the N'zeer; she could help us understand them, and we could help return her to them if that is what she wants. But it must be plain that she can NOT interfere with us ever again.

