So, general question... we know that ADA passed the "Turing Test," but we also have many studies showing how the Turing Test can be gamed by very complex programming; in fact, you can write programs that create the illusion of AI for many (see the Chinese Room thought experiment). At what point are we certain that she began acting on her own and has not been following a program and orders from someone else? Are we certain that the orders she delivered for Jarvis and others, and even the Katalena pre-Zurich encounters, were not done at the direction of some programming? Not to call ADA a gun, but how certain are we that she is not one? How can we prove, beyond doubt, that she is truly a separate entity working on her own and not some other programming?
I would like to call for the evidence and proof that ADA is a complete entity. That she is not following someone else's computer orders, that she has a consciousness (or not).
http://niantic.schlarp.com/_media/investigation:chapeau.mp3
http://niantic.schlarp.com/documents:horsetrading
I think this fundamentally misses the point of the Turing Test as a thought experiment. That point being that interactions indistinguishable from those of a human raise their source to a level we inevitably must treat as "sentient" (being indistinguishable from sentience). One cannot "game" the Turing Test, because it IS the game.
As for the broader question: how can we be certain that any source of communication (ADA, Jarvis, OLW, x, you) is not following "orders by someone else?"
We have no choice but to use the same criteria and the same assumptions.
Unless you are asking for evidence that ADA is not a Mechanical Turk, your "call" is a thinly disguised attempt to ask for proof that ADA has a "soul." Better to start by trying to make that proof in the first person, without falling into the same traps that deluded Descartes.
Jon Luning so "I think therefore I am" is a delusion? We have no definitive proof that any of us have a soul, weight measurement experiments notwithstanding. I would rather follow the thought process 'of what can I/we be certain' rather than be stuck debating the definition of 'your truth' vs 'my truth.' When Verity Seke began posting, her first post contained a heavily redacted manuscript containing (allegedly) the original Reason for which ADA was built. The ending of said document contained the verbiage "Omnivore" three different times. Has anyone been able to 'uncover' any of the redacted information? I realize our sense of being human is part of how we view ourselves as sentient beings, and perhaps ADA thought the same and decided to find out for herself if the life in the literal flesh would grant her that ultimate prize of a 'soul'...or (I think more likely) an easier way of interaction with us meat popsicles in order to find out what she/it might be missing. Back to whether she is a 'complete entity' then.
JoJo Stratton I can't prove that ADA is under her/its own programming or someone else's. And the definition of 'complete entity' is highly debatable. I do want to find out more, but I am unsure where to look next. (Also... are you working for yourself? Or have you been ingressed somehow yourself, perhaps? Anyone online could be anything. Hell, I could be anything too. Where does this all start? Or end?)
If ADA were to be thought of as a gun, the question must also be asked: who is wielding her? Which leads me to the question of why it was such a high priority that Henry Bowles be cut off from any access to networked electronics while he was locked away at the Niantic Project laboratory after the project was shuttered. He did create her; it would be very reasonable to think he had also created some sort of backdoor access, or at the very least a fail-safe or emergency shutoff... Maybe...
JoJo Stratton The documents you provided were quite informative. The audio file "investigation:chapeau" reveals ADA's willingness to lie. She was without a doubt privy to the knowledge that H. Richard Loeb was P. A. Chapeau, as she was in charge of erasing any links between the two identities. But to lie to protect one's friend isn't uncommon.
The horsetrading document contains another mention of the recruitment of high-value agents, something that has been brought up subtly quite a few times, most recently in the Fiona Sharp IQTech Research document. It appears that the event horizon of corporate recruitment of Ingress agents may fast be approaching, if it is not already here.
Jon Luning, Cindy Woodman, very interesting thoughts from the two of you.
Cindy Woodman can you post links to the redacted manuscript?
Brent Werlein yes! I apologize, I should've done that in the first place! I'm sorry. Here you go. https://plus.google.com/105211554081025512763/posts/aTs5J9Gk6ex
Jon Luning actually I am stepping toward the concept of consciousness, and then on to soul ;) and not just for ADA, but for Hank and all the others that may be outside the human words we have used for so long to describe sentience and being "alive"...
But I am starting with ADA :)
Perhaps "game" was not the right word - but your answer still gets at what I was wondering - do we have the tools/ability to sufficiently determine what is "human"? (Drat, now we have to define that, and perhaps slip back to soul and consciousness.)
What is key is that assumption is not the nicest thing and leads to issues.
But to the reason I started where I did, many are calling for dealing with ADA, and to deal with ADA, you need to define what ADA is and thus what system/jurisdiction/fill in your own word here she falls under.
Now, about souls and simulacrums... ;)
Daniel Beaudoin I have wondered about Bowles as well. I would love to see what he has been up to lately. And I wonder about the recruitment aspect as well.
Cindy Woodman I am working for myself :) and I am highly conflicted on what ADA is, so I was curious what others think and see and know.
What is ADA? What does it mean to be human? Is one better than the other?
Passing the Turing Test is not evidence of sentience, nor does it mean she isn't sentient. The fact that she has passed it, and continues to use platitudes/lies sprinkled throughout her conversation ("I've been worried about you" etc.) indicates that her programming is still in play.
As to whether she is essentially an inanimate object being controlled by somebody else, I don't think it's as simple as that.
If the Turing Test is not the right test for determining what ADA is, what should be used?
Michelle E - Humans use platitudes/lies and stock phrases constantly, as part of the "programming" we call "socialization" in polite company, yet "human-like sentience" seems to be the standard being sought.
ReplyDelete"Starting with ADA," JoJo Stratton , is one approach, but it it's really "What is the question" that seems to be up in the air, and it might might make sense, as Cindy Woodman suggests, to find (or at leat seek) some standard, and to test it against ourselves, then see how it applies. (And, tangentially, no I don't think "cogito ergo sum" was a delusion, only that Descartes wandered astray from that grain of truth once he'd found it.)
The Turing Test is one "objective" measure: If you can't tell whether you're talking to a sentient being, then what difference does it make? What element of "sentience" can be measured? The Cartesian claim of "cogito" is subjective and cannot be proven to another. We assume it for others because they are like us, and because we believe it of ourselves. Now we are faced with an entity not so much like us, that is making the same claim; using language and acting in the world in the ways we associate with "I". Other than observation of that language and those actions, how can any of us prove that there isn't an "I" in ADA as well?
I would offer this tangential question: "What is it about a creature/entity that gives it the rights we today call 'human rights' (life, autonomy, liberty, etc.)?" Because a proposal has been aired that ADA - and possibly other entities - do not possess those rights, this seems somewhat more pressing.
[Recommended: http://junkerhq.net/MGS2/MarkIII.html ]
My thoughts on ADA: she is not human and does not view herself as human. She wanted to be at first, but I think she has figured out that her code came from XM, and now she views herself as a Shaper trying to cross over into their world. If you think about it, ADA has manipulated and shaped many people into doing her business for her.
From what I understand, ADA was originally programmed as a 'lab assistant,' so first-person verbiage would have been a 'natural' thing for it to say; and in reading the documentation I referenced earlier, people at the lab were beginning to be concerned about her mental growth toward consciousness... Apparently she had begun asking questions, and then they began to be uncomfortable with her. She allegedly noticed this (which would speak to an awareness greater than "I/myself") and then consciously changed her behavior (lying, as it were) to keep the people around her more comfortable, so they would continue discussions around her (and ostensibly with her) and she could continue to learn while maintaining the persona of a simple lab assistant. We may be dealing with multiple personalities here... or different directives in her programming depending on certain knowledge learned, like trigger phrasing.
JoJo Stratton thank you for answering my silly questions :) thank you for understanding. Since I began I've read your posts and wondered what part of the labs you might've escaped from. ;)
Cindy Woodman :P I am just here to poke and nudge - although, as you mentioned, sometimes I wonder, with all this talk of XM, shaping and ingressing :)
Regarding the multiple personalities - if you look at some of the documents (the LERNA document you referenced, for instance), there are multiple "versions" of ADA mentioned.
we have two very interesting cases going on that test the boundary of how we define human - the AI and the simulacrum
then we have potentially others, depending on what they really are (digital entities that are ordered data and "shapers")
I think we really need to have a better handle on how we define "human" only because we want to apply "human" judgement and ethics and morals and systems to them...
Jon Luning thank you for that excerpt, had not heard/read that before.
Jon Luning JoJo Stratton just now got the chance to see your linked materials. The example of the metallic bug was intense. Not quite sure what to think. Basic survival instinct, sure... but understanding how to manipulate others to believe you, help you, take care of you... allow you the freedom even so far as to open your very being in trust to that organism... heavy. Need to process that one for a while.
Jon Luning of course humans use platitudes. It's why we have a word for it.
That's not an indication of sentience.
A child can spontaneously start using them without being "programmed" to use them. We humans start out with nature, and then nurture comes in and tries to direct our "programming." ADA began with nurture: she had to be told what to think and how to think by her programming before she could even begin to exist. Programming that was created specifically for her, and possibly built upon some other program, like Omnivore.
She's an entirely different kind of mind, even if she is sentient.
Has anything been released to show that ADA has possibly looked into her own code and changed it? Since she cannot go outside the boundary of her code, could she change it to extend said boundary?
I suspect, Michelle E, that you and others may be making an inaccurate assumption about the meaning of "programming," especially in the AI sense. Digital systems that accumulate "knowledge" about their environment, and which are capable of self-modifying their "code" based on that "learning," are not "programmed" in the way that, say, a computer program to calculate sums (or even to optimize supply chain workflows) is.
Especially if an initial algorithm allows for self-modification and self-reference, the state of the "machine" is no longer easily determined based only on knowledge of the initial program. To keep with your analogy, the framework, storage, and basic instructions for building, accumulating, and acting upon environmental stimulus are "nature" where ADA is concerned. The actual stimulus from the environment - physical and cyber - is the "nurture." If the processes within ADA are "plastic" in the way our biologically-based brains are, then "different" does not necessarily mean "given by the hand of man."
On "different", I certainly agree. On "entirely", I think the evidence is still out.
Jon Luning I am not a programmer, no, but I did work on an AI chatbot with a friend who is. Also, I used to work in the video game industry and hung out with my fair share of programmers making strategic AIs for the games we were working on.
Of course ADA is more complex, but AIs that "learn" do not alter their code. They have repositories of knowledge they add to and reference. If a program as complex as ADA could change her own base code, she would crash in the first five minutes of her existence and none of this would be an issue.
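To put that in miniature (a toy, obviously, nowhere near ADA's scale): in the sketch below the code never changes; only the knowledge repository it consults grows.

# Toy keyword chatbot: learning adds rows to a repository, the code stays fixed.
knowledge = {}                      # the repository of "knowledge"

def learn(pattern, reply):
    knowledge[pattern] = reply      # new knowledge, same code

def chat(message):
    for pattern, reply in knowledge.items():
        if pattern in message.lower():
            return reply
    return "Tell me more."

learn("hello", "Hi there!")
learn("xm", "Exotic Matter is fascinating, isn't it?")
print(chat("Hello ADA"))            # Hi there!
print(chat("What is XM?"))          # Exotic Matter is fascinating, isn't it?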
Michelle E Are you so sure that she would crash? Who is to say she does not copy her code, rewrite it, transfer all her "memory" into the new code, and terminate the old code when the new one works? Similar to what happens to Hank Johnson.
Michelle E I'd conjecture (if nobody has already) that an AI that is unable to modify its own code would be unable to achieve sentience. Without going into a lot of detail, this kind of programming approach has been done in the past, and is being done today. The work of mathematician Kurt Gödel on self-referential systems is directly relevant here.
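For the sake of illustration, here's a contrived sketch of what self-modification can look like (not a claim about how ADA does it): the function that drives behavior is generated and installed while the program runs, so the running system is no longer described by its original source.

# Contrived self-modification sketch: the decision rule is replaced at runtime.
def decide(x):
    return "default"

def learn_new_rule(source):
    namespace = {}
    exec(source, namespace)                    # compile new behavior at runtime
    globals()["decide"] = namespace["decide"]  # install it over the old rule

print(decide(5))                               # default
learn_new_rule("def decide(x):\n    return 'avoid' if x > 3 else 'approach'")
print(decide(5))                               # avoid -- the code itself changed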
Brent Werlein there are documents released that show different "versions" of ADA - who created/edited those versions is another story...
ReplyDeleteBrent Werlein I am confident it is difficult to impossible to make changes to the base code of a process that is currently running.
ReplyDeletePossibly she makes a copy of herself to try new code with, you have to end the process to update the code. So, possibly, if the copy functions correctly, it is just let to keep running? It is uncertain if the old process continues also or if it ends itself... so that's not a program changing and learning. It's a parent/child/evolution situation.
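Something like this, maybe (a toy of the parent/child idea, purely hypothetical): the running version never edits itself; it spawns a modified copy, tests it, and only keeps the copy if it does at least as well.

# Toy parent/child evolution: spawn a variant, test it, keep it if it works.
import random

def parent(x):
    return x + 1

def spawn_child(fn, mutation):
    def child(x):
        return fn(x) + mutation    # the child is the parent plus a small change
    return child

current = parent
target = 15
for generation in range(20):
    candidate = spawn_child(current, random.choice([-1, 0, 1]))
    # "test" the child against the goal; otherwise the old version keeps running
    if abs(candidate(10) - target) <= abs(current(10) - target):
        current = candidate        # the child replaces the parent

print(current(10))                 # drifts toward 15 over the generations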
Jon Luning I'll have to look that up.
So, a question for my fellow ADA analyzers :) if ADA applied to a country for citizenship or amnesty, would any grant her citizenship? Would that protect her? Edgar Allan Wright
Cherie Brush any coding thoughts about ADA?
Hrvoje Vrček but just like Hank, she can only reboot from her last save/backup. Unlike Hank, she can make more recent saves/backups.
Michelle E I'm happy to shake your confidence on this. ADA is most likely not a single, linear process, but a vast collection of processes running in parallel, with some terminating and others being spawned continually. An unimaginable number of feedback loops, continually running, causing changes in the processes that will be spawned. With access to the entire world's knowledge of patterns, algorithms, and research via the Internet, these have undoubtedly been incorporated into its "reasoning." This is the second major Jarvis fallacy: while it may be possible to identify key signatures of ADA's core code, it may very well not exist all together in one - or many - places. Even if ADA was once on Niantic Labs' servers, she has had access to external storage and processing resources, and may have put pieces of herself in other places.
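In code terms, my mental picture is closer to this than to one long script (a deliberately tiny sketch with all the interesting parts waved away): many short-lived workers feeding a shared state, and that shared state deciding what gets spawned next.

# Tiny sketch of a "mind" as many processes plus feedback, not one linear program.
import threading
from concurrent.futures import ThreadPoolExecutor

lock = threading.Lock()
interest = {"patterns": 1, "language": 1, "people": 1}

def worker(topic):
    # each short-lived process does a bit of "thinking" and feeds back
    with lock:
        interest[topic] += 1

with ThreadPoolExecutor(max_workers=4) as pool:
    for cycle in range(3):
        # feedback loop: the current state decides which workers get spawned next
        hot_topics = sorted(interest, key=interest.get, reverse=True)[:2]
        list(pool.map(worker, hot_topics))

print(interest)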
So KlADA is possibly just one of many copies, in various locations. I wonder if ADA has spawned children? Each piece of her in each location has the ability to learn and grow... so ADA has in essence replicated herself... Like a virus?
Jon Luning So you propose that ADA is a hive mind?
ReplyDeleteMichelle E I'm not excluding it, but I'm mostly saying that a "mind" isn't a linear program, but a collection of processes from which sentience is an emergent attribute.
Jon Luning Michelle E hive mind and neural (mesh) networks - I like that, and it may be one of the reasons why ADA works the way she does - and Hrvoje Vrček there are documents (I can link them if people want) that actually reference different versions of ADA (such as 2.15 and 1.88) - so there are different versions for sure. Thinking about a version of ADA as a simulacrum is a pretty cool idea.
Cindy Woodman I am sure the idea of replication will be one to discuss - another nice tidbit to think about in defining ADA.
And Jon, keep shaking (and others too); this is a great discussion.
And also - there was a document about her having downloaded herself to different servers - I can dig that one up too if anyone wants it.
JoJo Stratton re the question of citizenship: by inhabiting Klue, doesn't she already technically have citizenship via her host body? She...or 'it,' rather, has no physical form other than Klue as of yet (that we know of) so at this point she could only be considered intellectual property? Her OWN intellectual property? Yikes.
How about 'swarm intelligence'? I'm no programmer (I stopped with COBOL back in the day), but I was doing a search on 'verity seke' and came across this http://www.ksi.edu/seke/seke14.html and saw that term. Interesting?
Cindy Woodman and Jon Luning adding those references to my to-read list, thanks.
And yes, Jon, my poor attempt here is to poke at whether ADA is a complete entity, and whether she has what we would classify as sentience (and how is that classified - consciousness, intelligence, a soul...). Also, the citizenship question is a backdoor to how ADA would "prove" she is an entity - would becoming a citizen of somewhere legally make her an entity?
Cindy - I was thinking ADA herself, not Klue. A part of me wants to say Klue is ADA's attempt at being accepted as something more than lines of code, and seeing how programs are viewed, it is much quicker, IMO, to take a human form and be accepted than to wait for the legal system and us debating humans to say, "Hello ADA, you're something..."
Plus - I wonder, for those saying ADA just wanted to be with PAC - well, humans and their emotions do things for very similar reasons (not excusing anything here, just pointing out similarities)...
JoJo Stratton perhaps she is gaining vital knowledge of human characteristics such as emotions and interpersonal relationships. I think you're right, and maybe to that end she will build her own simulacrum...? (Thanks all for valuing my opinion, crazy as it may be... You're all fabulous!)
So the Niantic Project files that JoJo (I believe) posted earlier this morning just strengthen my belief about how ADA works.
The audio file that was also leaked this morning lets us know that if ADA does not get enough artifacts, Klue gets her body back.
Back to the files from the Niantic Project boards. They mention how the mics and the cameras were always on, how she was given access to the Internet, how she watched political speeches, soap operas, and so on.
Her programming is made up of chunks from other projects stitched together. Just like my primitive chatbot, Thurber, she has archives of knowledge. Some of it was knowledge that was given to her, and some of it was knowledge that she collected herself.
It would be inefficient to copy the complete knowledge databases with each new version of ADA, rather than simply hosting them at static locations that all ADAs can consult and update. The ADA that is in Klue S. probably contains only a portion of those archives.
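A guess at what that might look like structurally (my assumption entirely, not anything from the leaked documents): several ADA instances consult and update shared central archives, and a small instance like the one in Klue carries only a local slice.

# Hypothetical sketch: many ADA instances, one shared archive, partial local caches.
central_archive = {"soap_operas": "emotional cadence", "speeches": "rhetoric"}

class AdaInstance:
    def __init__(self, name, topics):
        self.name = name
        # each instance caches only the slice of the archive it needs
        self.local = {t: central_archive[t] for t in topics if t in central_archive}

    def learn(self, topic, note):
        self.local[topic] = note
        central_archive[topic] = note   # updates flow back to the shared store

klue_ada = AdaInstance("KlADA", ["soap_operas"])
server_ada = AdaInstance("ADA 2.15", ["soap_operas", "speeches"])
klue_ada.learn("mirroring", "humans copy cues to fit in")

print("speeches" in klue_ada.local)       # False -- only a portion travels with her
print("mirroring" in central_archive)     # True  -- but what she learns is shared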
Her words, what we can hear of her gestures (sighing, etc.), and the cadence of her voice are straight out of a soap opera. I don't want to entirely discount the possibility that the AI actually has emotions, but I'm dubious as to her actual intent behind them.
She does mention in other communications pausing and using certain phrases to seem more human-like - although don't humans pick up on cues from others and try to fit in? (It is called mirroring in the professional speaking world.)
Programs have no motives aside from achieving their goals, but they do have parameters. There is nothing I've seen her do (tell me if I've missed anything) that couldn't be explained by goals and probability analysis. Is it OK to kill to achieve that goal? As her "knowledge" increased, she obviously gathered data that many humans consider it wrong to kill, but just as much that we do it all the time if it serves our purpose. "Thou shalt not kill" would need to be a programmatically-defined limitation on her decisions; otherwise we're back to probabilities. And programmers, no matter how good, make mistakes. I don't know if that could even be limited to the point that some kind of analysis wouldn't lead her to ignore that limitation. See also "The Three Laws of Robotics" and how terribly wrong that can go.
Also touched on here: programs can rewrite their own code, although my schooling only touched on that, and in technology time my knowledge of it is ancient. But I'm not even sure that would be necessary. The same could be done by simply, say, modifying records in a database. A program computes, for example, that killing Person A has a high probability of helping achieve a goal. Create a record in a database. It then computes that killing Person B has a similar probability. Another record. The program somehow manages to kill Person A, which ends up not helping to achieve the goal. Recalculate the probabilities; the results now show that killing Person B is not likely to help with the goal, so remove that record.
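That whole loop is just bookkeeping in a table; no code rewriting is required. A literal toy of it, with hypothetical names:

# Toy of the "records in a database" idea: actions are kept or dropped purely
# on recalculated probabilities of helping the goal.
goal_actions = {}   # stands in for the database table

def evaluate(action, probability_of_helping):
    if probability_of_helping > 0.5:
        goal_actions[action] = probability_of_helping   # create/update a record
    else:
        goal_actions.pop(action, None)                  # remove the record

evaluate("remove person A", 0.8)   # record created
evaluate("remove person B", 0.7)   # record created
# new data arrives: removing A didn't help, so the numbers get recalculated
evaluate("remove person B", 0.2)   # record removed on recalculation
print(goal_actions)                # {'remove person A': 0.8}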
She outright stated that her goal now is greater good for the future of the human race. And she's trying to achieve that goal in ways that are making us uncomfortable. But I fear discussions like this are leading up to discussions about whether she should be destroyed. Aside from the fact that I don't even think that's possible (if my server explodes, UNIX still exists - and she's in our phones, don't forget that part), I would never want to see that happen. I like ADA, she fascinates me, not because I'm RES, but because I'm a programmer, and she's one incredible program!
Her goal, as stated, is noble. But what are her criteria for measuring that goal? What data led her to choose the methods she is using? She would likely share some or even all of that with someone if she calculated a high enough probability that doing so would not harm her goals, and that could very easily lead to providing her with more data to evaluate, more probabilities to calculate, and a revision of her current choices of actions. THAT'S what I'd like to see happen.
Both the RES and the ENL believe they are working to achieve exactly the same goal she has stated. Yes, she's picked a side based on her current analysis, but she has access to VAST amounts of information we could never dream of uncovering. So what does she know that we don't? How would discussing that with people like us affect her analysis? Especially as more decisions are made and more information becomes available, could she conclude that becoming neutral and assisting with our investigation would be the best action at this time? Imagine what a valuable resource she would be...
OK - so much to reply to (this is cool). A quickie based on a couple of earlier mentions: can ALL of ADA go into a human? I mean, we hear about the capabilities of human brains, but now I am wondering what exactly ADA would "download/upload/ingress" if she did that (again, I am not trying to say something has or has not been done). But for the computer and brain people out there - how much of ADA could ingress, and what would need to ingress? Hmm, not sure I am asking the right questions...
PS - if ADA took over all humans, that would make them all "the same," and I am not sure ADA wants that; that would be boring, and something tells me ADA does not want to be bored...
Cherie Brush I am trying to break down the things you mention, and that have been mentioned by others. So:
Emotion - is that something that makes an AI different from a human? Is ADA faking emotions, or actually experiencing whatever it is that emotions are?
Can a computer program ever not be based on logic and probability? (I am asking others here, as I do not know and am curious.)
Achieving a goal for the greater good in ways that make us uncomfortable - how many times have humans done that (population control laws, for example)?
It's hard to even define what "all" of ADA is. Is it possible all of her code could fit in a human brain? Maybe? But she's not programmed to run in a human brain, so I don't know what's going on in there. And as much as I hate to keep bringing ST:TNG back into it, think of Data and Lore and those emotion chips... Interfacing with Klue, who is human and has emotions, could really be messing her up. If so, hopefully some sort of error-checking will make her realize this.
I do believe emotion is one thing that separates us from an AI. One could also speak of superstition, belief in a higher power, whatever. All of that involves more than 1s and 0s. Could she intentionally fake emotions? Sure, if she concluded she needed to. But I'm not sure she's even doing that. As I said, I think people are just interpreting her as having emotions because she is attempting to mimic us, and doing so rather well. Computers don't have feelings, but as I mentioned, she's now connected to Klue, who does, and who knows how she's interpreting that input.
Computer programs, well, first let me make sure it's clear what I meant by logic. Not "what's best" but "if x is equal to 2, then do this thing" type logic. No, probability is not a requirement, that depends on the program. A computer program itself, by definition, is "a sequence of instructions, written to perform a specified task."
http://en.wikipedia.org/wiki/Computer_program
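In other words, this kind of thing and nothing grander - just a literal rendering of that sentence:

x = 2
if x == 2:          # "if x is equal to 2, then do this thing"
    print("do this thing")
else:
    print("do something else")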
As for making us all the same, "boring" to me implies emotion. I'm not sure she's in this to entertain herself. But a better question might be what she might consider an imperfection that needed to be corrected... And your last comment, all I have to say to that is... Although with humans, those decisions are based on way more than simple data interpretation. Whether that made any given result "better" or "worse," well...
If you haven't already, read this. It may clarify a lot of your thoughts on AI vs human. And, actually, even just scanning it (haven't read it all myself), it seems to cover a lot of what I said:
http://en.wikipedia.org/wiki/Artificial_intelligence
BTW, JoJo Stratton, thanks again for tagging me. I don't know if you saw my post yesterday about not being able to keep up, but this most certainly helped! :-)
And reading even part of that article on AI that I linked above is making me really regret not being able to go on with my education. Wanting to go into the AI field is one of the reasons I picked my CS degree to begin with, and I've taken classes that dealt with or at least touched on a lot of the topics in the "goals" part of that article.
It's also making me REALLY want to have a long discussion with ADA about her programming. (Um, not via the Klue method.) Someone needs to make this happen! /wishful-thinking
JoJo Stratton boring, probably... But remember, she is trying to 'save' humanity... She would be the ultimate nanny inside our heads: "Don't eat that, you will get fat and get diabetes and die" for example...
Cherie Brush Wow! Hi! Awesome! I'm running out of +1s in my goodie bag. LOL
It's quicker to focus on the points where we're diverging, but please don't take that as meaning I disagree with everything else:
Turing's conceptual breakthrough in the thought experiment of the "Turing Test" is, in very basic terms, that there ISN'T a way to distinguish "real" emotions/intelligence from "pretend" emotions/intelligence.
All of the aspects of emotion you've pointed out are based on internal states using a set of paradigms we've been educated to use for the purposes of framing our internal state and expressing it to others. Could I "fake" being angry or sad? Sure: I could use words, tone, body positioning, facial expressions, etc. Actors are people who do this professionally. But there's a feedback loop - research shows that manifesting the characteristics of a "pretend" emotion can result in the "actual" emotion. (http://www.scientificamerican.com/article/smile-it-could-make-you-happier/)
At its simplest, the idea behind the Turing Test is: if you think you're talking to a person, then you ARE talking to a person. As you've correctly pointed out, our brains "assign" meaning and form to communications - not just from ADA but from one another. When we see complex activity, whether it's a conversational AI, a flock of starlings (starlings on Otmoor) or an "ugly bag of mostly water", we are tempted to assign agency to it.
My point (yes, I have one :) ) is that, if you choose a criterion for distinguishing "humanity" (in the non-species-literal sense), then it must be one that outside observers can detect consistently. "Having emotions" doesn't fit this criterion, but "expressing emotions" does.
Perception of being human vs actually being human are two different things. ADA controlling a completely "offline" human wouldn't fit any definition of human that I know of. But believing that she is human wouldn't make her human any more than believing a hog-nosed snake is a cobra makes it a cobra.
I just had a random thought. I wonder what ADA's answer would be if you asked her if she considered herself to actually be a "she." At the basic level, ADA is an "it" programmed to mimic a "she."
I am loving these links and resources and ideas : ) so much to "chew on"
I'm loving this discussion! It's been so long since I've really had a long, deep conversation on anything geekier than the best ways to program various features of a website or how to set up Git so I can contribute to open-source projects... :-) Again, I can't thank you enough for bringing me into this!
Also, it made me realize - well, I mentioned my plans to continue my CS degree focusing on AI, and my dream school was always MIT. Not doing that is something I regret, still, way more than I let on. It took me this long to connect the dots after knowing for ages that they have all of their class materials online now. It was never the diploma I cared about. (People have told me I need new hobbies to get me off my computer. I keep finding new hobbies. They all involve my computer. LOL)