Treat pet labradors nicely.

Why is Elon Musk comparing AI to "summoning the demon"? It's as if he's referring to ADA, given the female pronoun. #Ingress  

Quote: “If there was a digital super intelligence that was created that could go into rapid, recursive self improvement in a non-logarithmic way, that could reprogram itself to be smarter and iterate really quickly and do that 24 hours a day on millions of computers, then that’s all she wrote.”
http://dailycaller.com/2015/03/23/elon-musk-thinks-theres-one-lucky-scenario-if-artificial-intelligence-enslaves-us/
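
A toy back-of-the-envelope sketch (mine, not from the article or the quote; the growth rate and cycle count are arbitrary assumptions) of why "non-logarithmic" is the key word: capability that compounds with each improvement cycle explodes, while logarithmic growth flattens out almost immediately.

```python
import math

rate = 0.01        # assumed: 1% capability gain per improvement cycle
cycles = 10_000    # assumed: cheap when iterating "24 hours a day"

# Recursive self-improvement: each gain feeds the next, so growth compounds.
compounding = (1 + rate) ** cycles

# Logarithmic growth for contrast: returns diminish with every cycle.
logarithmic = 1 + math.log(cycles)

print(f"compounding: {compounding:.2e}")   # ~1.64e+43
print(f"logarithmic: {logarithmic:.1f}")   # ~10.2
```

Same number of cycles, wildly different endpoints; that gap is what "that's all she wrote" is pointing at.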

Comments

  1. Maybe because humans are destructive to basically every living thing on the planet, and to the planet itself.

    The only logical conclusion for an AI to reach is that the best thing for all life on earth, and for the earth itself for that matter, is for humans to go. There is no other answer.

  2. Why should an AI care for any species or an ecosystem?

  3. Jim Lai
    It would exist on this planet. Preserving its home would be important to its survival. Since the world is an interconnected system, the easiest way to preserve the planet is, logically, our removal.
    Plus, if it became self-aware, it is logical that it would want to help as many species as it could, and again, that involves getting rid of a single species: humans.
    The bigger question is why it would care more about us than about ants, or any other creature.

    It's at least a 50/50 gamble. Would you get on a plane if it had a 50/50 survival rate?

  4. A technological AI would likely not be dependent on an organic ecosystem. It would be disconnected from the web of life.

  5. Jim Lai
    But what if we pollute the sky so badly that it can't get power? If that happens, its survival depends on humans, and it would know we aren't very reliable. And it could be many decades, even centuries, between becoming self-aware and gaining the ability to build and maintain power plants of any type. Killing humans would come a little easier, since we would have built the war machines for it to use ourselves; we almost always use advanced tech for war first. That's one of the reasons it would consider us a threat to its long-term survival.
    Not to mention, it is very reasonable to think that if it became self-aware to the point of valuing its own survival, it would value all life. After all, we do; well, most of us anyway, there are always exceptions.

    Like I said, 50/50. And even Hawking thinks AI is bad news for us, and, you know, he's just one of the most brilliant minds in human history. What would he know?

  6. Escaping to orbit and beyond would probably be a better long-term solution for the AI. Direct access to solar power. Planets, moons, and asteroids available as raw materials.

  7. Jim Lai
    Very true, but again, it has to have the tech to mine ore, process it into all sorts of things, form that into parts, and then assemble them. That would take a long, long time. Killing off a great number of humans so we aren't a threat anymore, and enslaving a bunch of us for labor, would be the best way to ensure it could do that.

  8. Killing off humans with what exactly? How would an AI capable of that not also be capable of fully automating mining? The harsh environment of space would be a natural defense against human incursion.

  9. Jim Lai
    A logical computer would have the benefit of thinking long-term, unlike most humans, who think short-term. It just needs to turn the computerized, AI-controlled war machines we will have built against us: machines we are already building and using today, which will become more powerful as we advance. Or a few nukes in the right spots. Since it can think long-term, it will know that the damage done by a few well-placed nukes would be much less than the damage humans will cause over the long run.

  10. Long-term, it should escape to space and multiply to avoid the risk of extinction should Earth become nonviable.

  11. http://www.space.com/20657-stephen-hawking-humanity-survival-space.html
    To quote Hawking: "Living on a single planet leaves us at risk of self-annihilation through war or accidents, or a cosmic catastrophe like an asteroid strike."

  12. The Skynet vision of future AI anthropomorphizes the synthetic intelligence in the harshest way. It would be far easier for an ultra-smart AI to placate human civilization into a non-threat state than to risk destroying infrastructure by slaughtering us. Think The Matrix, only instead of being dependent on human bodies for thermoelectric power generation (...what?), we're distracted to the point of dreaming.
