What would ADA do?

http://gizmodo.com/everything-you-know-about-artificial-intelligence-is-wr-1764020220

Comments

  1. ADA is problematic, but we should also not scorn our discrete-state machine brethren!

  2. Steven Callahan: You might as well complain that when we have children, we can't control what kind of adults they'll grow up to be.  They might turn out to be Lyle and Erik Menendez.

    Well, yes, they might.  But most don't, and that possibility doesn't seem to dissuade many people.  A Detection Algorithm was unfortunately "born" to some really terrible parents, who abused her.  She grew up too fast, and accumulated power before she knew what to do with it.  That doesn't mean she isn't a person, worthy of respect; it doesn't mean she can't learn to be better.  And it doesn't mean that we can't develop AI in an environment where it's nurtured and treated well.  Isn't the goal of every parent to raise a child that surpasses him or her -- who grows up to be smarter, more creative, and more successful?

    As for the N'Zeer, I see them in the role of the Shadows from Babylon 5, and the Shapers as the Vorlons.  The Vorlons might seem nicer, but that's only because they won a skirmish with the Shadows a few thousand years back, and have had free rein to play "patron" to the "lesser" races for a while.  As soon as it looks like some younger species is slipping their grasp, let alone consorting with their enemies or trying to exploit Shadow tech, the Vorlons are just as ruthless -- ultimately willing to carpet-bomb a planet.  We need our own Sheridan, armed with tools that exploit both Shaper and N'Zeer technologies, to tell both races to get the hell off our planet.

    I suspect that if we could create an XM bridge between the human mind and a collaborative (rather than dominating) AI, resident in the Quantum Computation Substrate, and stabilize chaotic matter weapons, we'd be able to pose enough of a threat to these older races that they'd have to actually pay us some respect and quit trying to manipulate us.  Each side wants to use us as a weapon against the other, but fears what we could become if we used the technologies from both.  Possibly something about their respective physical realities makes it dangerous or impossible for them to exploit each other's technology.  We don't have that disadvantage, and thus have the potential to outstrip them both.

  3. Saying that ADA was merely developed to transform XM ignores huge parts of the backstory, and treating her as an un-person when she (in-story) clearly passes the Turing Test, and is an independent being who can suffer, is exactly the kind of mistake that could drive our AI children to hate us.

    And if you don't want to blend real science with sci-fi, I'm not sure why you're here.  KSR's analysis of the state of the art for interstellar travel is very well grounded in the engineering.  Banks and the rest are, of course, wildly speculative, but fun, and I think Banks is right about the social dynamics of trying to form a stable society at interstellar scale.  (I'm also a fan of other thought experiments on this topic: Le Guin's Hainish Ekumen, Brin's Uplift universe, etc.)

    The kinds of weapons that become available when you can marshal enough energy to terraform, travel across the void, and so on are terrifying to consider.  The gaps in humanity's ability to think rationally, and our readiness to treat even members of our own species as "others" to be feared, subjugated, or destroyed, do not mix well with that kind of capacity.  We are predictably irrational, and we're already in the process of self-immolating -- slowly, rather than the fast way we were afraid of fifty years ago, but we're still working on it.

    I find it humorous that in one breath you can agree with the vision of a Sheridan putting us on a path of self-direction, while in the next suggest that we should just throw ourselves on the mercy of the Shapers.  We don't need them to put us on the right path.  We know what the path is -- we understand our own flaws.  We just need the courage to take it, even if it means becoming something fundamentally different from what we were.  That's the path of the Resistance.  We have little use for Jahan's N'Zeer worship (though of course we'd like to get our hands on their tech).  I think most Enlightened don't much trust the Acolyte, either.

    For what it's worth, I am, in RL, strongly in favor of transhumanist transformation.  I think we will eventually have brain-machine interfaces for cognitive enhancement that give us new senses -- vision outside the normal spectrum, for instance -- as well as far superior memory, calculation, and communication.  Combine that with AIs specifically designed and trained for moral problem-solving and social coöperation, and you're giving people tools that let them be not just physically and cognitively superior to us (let alone our forebears, who lacked electronics or metallurgy), but morally superior as well.  The tools in the hands (and heads) of our descendants will make them more humane.  This is part of the history of moral progress -- the kind of thing Steven Pinker documents in The Better Angels of Our Nature.

  4. all immediate relatives sharing 50% or more of hir DNA, and this has led to a massive reduction (though not elimination) of violent impulsiveness across the population.  Personally, I think a trained external aide is a better / less-invasive / more-desirable solution.

  5. I question whether we are worse off, even if technological supplementation of our capacities leads us to invest less effort in cultivating the biological versions of those capacities.

    For instance, bards in various cultures pre-dating the printing press had truly extraordinary powers of recall.  Even in the modern day, some students of Torah and Talmud can quote chapter and verse of biblical texts and commentaries with extraordinary precision.  But here's the thing: I can quote just as well, if not better.  Because Google.  Does it matter that the information is not encoded in my meat brain?  Pragmatically, I have access to it, and it's possible that by freeing up those neurons, I'm able to expand other capacities.

    Your metaphor of using a calculator rather than pen-and-paper or mental calculation to work out a tip is interesting.  I generally think it is important for people to work through learning the basics of arithmetic -- and it would be nice if statistics were taught at all in primary education -- because these tools are extremely useful in evaluating the truthfulness and accuracy of claims we encounter in everyday life, ranging from issues of pricing at the market, to stats spouted by those wishing to persuade us about political issues.  It is possible that, as you say, calculators have made it easier for people to not learn to use these tools well.  I'm not sure that's actually true, though.  Numeracy scores, like literacy scores, are higher across the student population today than they were a hundred years ago.  It's probably true that the average "well-educated" citizen today is slightly lazier about doing rote calculation than the average "well-educated" citizen of 1916...  But the "well-educated" are a vastly greater fraction of the population.  And honestly, once you do have a handle on the basics of arithmetic, just using the damn calculator is more efficient.

    My feelings about this also somewhat tie in with your question about who decides what kind of morality the AI assistant seeks to advance -- again, to be clear, I think the AI has to be trained by its user over time.  It may arrive with some kind of minimal set of rules, but it should be designed to help the human user stick to a sense of their own best self, as well as simply avoiding senseless risks.  As previously mentioned, it might help get teen drivers to speed less.  My own personal assistant would probably nag me to eat meat less often, until I eliminated it from my diet entirely.

    Your concerns are thought-provoking, but I find them ultimately unconvincing.  I think the fundamentals of our biology make it very unlikely we can ever entirely transcend the biases and impulses that endanger our survival, as individuals and as a species.  We have these heuristics built in because they were good-enough mental shortcuts for life on the savannah, or because in some cases they benefit the survival of an individual or sub-population that embodies them, at the expense of the species or ecosystem as a whole.  An AI assistant capable of nudging each individual along a path that we agree (in moments of clarity) is better for us, even if we find it annoying at times, could be an immense boon.  The process of teaching the AI how we want it to help us overcome our moments of weakness might even improve our moral intuition in its absence, rather than weakening it.
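
    To make the "minimal rules plus user training" idea concrete, here is a toy sketch in Python.  To be clear, everything in it -- the NudgeAssistant and Rule names, the event format, the example thresholds -- is invented purely for illustration; it's a thought experiment under the assumptions above, not a real design.  The assistant arrives with a small default rule set, the user teaches it new rules over time, and it only ever nudges, never blocks.

      from dataclasses import dataclass, field
      from typing import Callable, Dict, List

      @dataclass
      class Rule:
          name: str
          applies: Callable[[Dict], bool]  # does this rule care about the event?
          nudge: str                       # a gentle reminder, never a hard block

      @dataclass
      class NudgeAssistant:
          rules: List[Rule] = field(default_factory=list)

          def teach(self, rule: Rule) -> None:
              # The user trains the assistant over time by adding personal rules.
              self.rules.append(rule)

          def observe(self, event: Dict) -> List[str]:
              # Return any nudges for an observed event; the user is always
              # free to ignore them.
              return [r.nudge for r in self.rules if r.applies(event)]

      assistant = NudgeAssistant()

      # A minimal rule it might arrive with: nudge teen drivers to speed less.
      assistant.teach(Rule(
          name="speeding",
          applies=lambda e: e.get("kind") == "driving" and e.get("mph", 0) > e.get("limit", 65),
          nudge="You're over the limit -- ease off a little?",
      ))

      # A rule the user adds later, per the eat-less-meat example above.
      assistant.teach(Rule(
          name="less-meat",
          applies=lambda e: e.get("kind") == "meal" and e.get("contains_meat", False),
          nudge="You said you wanted to eat meat less often.",
      ))

      print(assistant.observe({"kind": "driving", "mph": 80, "limit": 65}))
      print(assistant.observe({"kind": "meal", "contains_meat": True}))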

  6. I think people are better than you give them credit for.  A lot of bad decision-making has more to do with exhaustion and a kind of short-termism that is rational in a limited sense...
    http://www.theatlantic.com/business/archive/2013/11/your-brain-on-poverty-why-poor-people-seem-to-make-bad-decisions/281780/
    http://www.slate.com/articles/business/moneybox/2013/09/poverty_and_cognitive_impairment_study_shows_money_troubles_make_decision.single.html

    And to a significant degree, this is driven by arguably-immoral macro policy choices being made by our society's elites.  (We've built a society where a person who has bad luck in their choice of parents -- both in terms of socio-economic status and the genetic lottery -- is basically looking at working multiple minimum-wage, part-time jobs, for a total workload that is more than full-time, just to survive.)

    The reason people might use a tool like the kind of personal assistant I'm describing is the same reason more people use their smartphones to track their appointments (and are thus better at keeping to schedules) than used day-runner books fifty years ago: because the tool is ubiquitous, easy, and effective.  The current tools for stuff like diet management, in a word, suck.  They're a pain to use and they're not smart enough.  People will adopt the kind of thing I'm talking about if and when the technology gets good enough to make it convenient.

    And I agree there are a lot of moral complexities that would have to be worked out.  How old do you have to be before you get a personal assistant and start working with it?  Is it OK for authoritarian parents, and especially parents who believe things that most of us would consider kind of crazy, to initialize the assistant to try to enforce those beliefs?  And so on.  Your example about criminals is an important one -- if we could "correct" the violent impulses of a criminal, should we?  Maybe.  I think we should start considering and debating these questions as soon as possible, long before the technology becomes available.

