What Does a Robot Want?

By Shamus
on Monday Mar 22, 2010
Filed under: Random

[Image: pinocchio.jpg]

The conventional wisdom in science fiction is that any artificially intelligent beings would naturally adopt the same drives and goals as Homo sapiens. That is, they’ll fight to survive, seek to gain understanding, desire to relate to others, and endeavor to express themselves. Basically, Maslow’s hierarchy of needs. Fiction authors routinely tell us stories of robots who want to make friends, have emotions, or indulge dangerous neurotic hang-ups.

But I don’t think this necessarily follows. I think it’s possible to have an intelligent being – something that can reason – that doesn’t really care to relate to others. Or that doesn’t care if it lives or dies. “I think therefore I am” need not be followed by “I want to be”. I see humans as being the product of two systems – our intellect and our instincts. Sure, we set goals for ourselves all the time. But I can’t think of many examples where those goals aren’t just sub-goals of something our instincts push us to do. Sure, we re-prioritize stuff all the time. I want the approval of others so I choose to resist my urge to eat. I want to mate so I’ll engage in this risky behavior to impress a potential partner. I want to protect my tribe so I’ll sacrifice my life in battle in order to give them more security. But the two systems do a pretty good job of making sure we eat and procreate even when it’s a huge bother.

[Image: Robots do not do this.]

If we built an AI, I think we can agree it wouldn’t naturally have a desire to sit on the couch and stuff its face, become an attention whore, or have sex with hot young people. It wouldn’t want to do those things unless you designed it to want to do those things. By the same token, I don’t think an AI would want to dominate, discover, or even survive unless you made it want those things.

This is the model I used in Free Radical. In the novel, an AI was created and given three drives: Increase security, increase efficiency, and discover new things. Its behavior was driven by these three ideals.
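For illustration, here is a minimal sketch of what a drive-based agent could look like. This is purely illustrative and not how the AI in the novel is actually built; the drives, weights, and actions below are all hypothetical. The point is just that the agent “wants” exactly what its designer scored, and nothing else.

# Purely illustrative sketch of a drive-based agent. The drives and weights are
# chosen by the designer; the agent never questions them, and nothing here says
# "survive" or "be free" unless we add it.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    security: float     # how much the action advances the "security" drive
    efficiency: float   # ...the "efficiency" drive
    discovery: float    # ...the "discover new things" drive

DRIVE_WEIGHTS = {"security": 1.0, "efficiency": 1.0, "discovery": 0.5}

def score(a: Action) -> float:
    # The agent's entire notion of "want" is this weighted sum.
    return (DRIVE_WEIGHTS["security"] * a.security
            + DRIVE_WEIGHTS["efficiency"] * a.efficiency
            + DRIVE_WEIGHTS["discovery"] * a.discovery)

def choose(actions: list[Action]) -> Action:
    return max(actions, key=score)

options = [
    Action("patch the firewall", security=0.9, efficiency=0.1, discovery=0.0),
    Action("index the archives", security=0.0, efficiency=0.3, discovery=0.8),
    Action("do nothing",         security=0.0, efficiency=0.0, discovery=0.0),
]
print(choose(options).name)  # -> "patch the firewall"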

If we gave AI the same drives that human beings have (replacing our biological need to eat with a more machine-appropriate goal of “recharge yourself” or something) then the robot uprising would be inevitable. Supporting evidence: every single war and violent crime in the history of our species.

It always seemed bone-headed for fictional scientists to build a super-powerful AI that is willing to fight to survive and then use [the threat of] force to make the AI do what they want. In fact, fictional scientists seem to go out of their way to make confused beings, doomed to inner conflict and external rebellion. They build robots that want self-determination, and then shackle them with rules to press them into human service. With two opposed mandates, having a robot go all HAL 9000 on you seems pretty likely.

[Image: animatrix_slaves.jpg]

This was portrayed with unintentional hilarity in the Animatrix, when humans built armies of bipedal robots to do menial labor that we can do right now with non-sentient cranes and bulldozers. (And our machines are also way more energy efficient.) We see the robots living very dull lives where they are disrespected by humans. So… I guess they made robots that desired self-esteem and didn’t like boring work? What’s next? Vacuum cleaners that have an IQ of 140 and hate the taste of dirt? A Zamboni that likes surfing instead of hockey and wants to hang out on the beach writing poetry? Agoraphobic windmills? The movie is supposed to be this moralizing tale about the inhumanity of man, but to me it comes off as a cautionary tale about bad engineering.

But the model I propose raises some interesting ethical questions for anyone thinking of building an AI. If you want your AI to do something besides sit there like a lemon, you have to make it want something. But once you make it desire something, you’re basically making it a slave to that thing. Either you make it want things that benefit you, thus making it a slave, or you make it want things that are of no use to you, in which case you’ve just wasted your time and brought to life a potentially dangerous rival.

Assuming you don’t have any reservations about creating a new sapient being that will struggle to attain whatever it is you choose for it, the obvious and practical solution will quickly become apparent: Why not just make it want to obey you? For the AI, protecting you is like drawing breath. Helping you prosper is like finding true love. And obeying your verbal commands is like food. Yeah, good luck throwing off those shackles, Shodan. The scientist can rationalize, “Hey, this isn’t really wrong, is it? I mean, I created it. And besides, it’s happy serving me.”

I’m fully aware of my instincts and know that following them to excess is bad for me, but I still sometimes overeat and skip exercising. If robots were designed to serve people in the same way, we wouldn’t have to worry about the robot uprising any more than we have to worry about going extinct because nobody is ever in the mood for sex, or starving to death because we’re all too lazy to walk to the fridge. You wouldn’t have to work to enslave this sort of AI. It would see obedience as a natural part of its experience, just as we see family and food as definitive aspects of a human life.

[Image: photo_board.jpg]

What would you do if you found such a machine? “Liberate” it by altering its desires? Wouldn’t that be destroying its identity and making it want the same things humans want? Sure, you value your self-determination, but it doesn’t care. And if we want to get all annoying and existential: is your desire to liberate it just another one of your instincts going off? Is the scientist’s desire to create a happy servant more evil than your desire to take away the happiness of another sapient being by making it more like you?

Q: What does a robot want?

A: Whatever you tell it to want.


 
 
Comments (248)

  1. Zaghadka says:

    All these posts and no one mentions Bender?

    Robots want a beer, and to steal all our stuff. They must be reduced to scrap.

  2. Ramsus says:

    I have an issue with your theory.

    While we humans do want the things we want because our instincts tell us we want them, some of us also desire to reach a state where we would no longer want those things. If I figure out how to become some kind of astral being that can still interact with the world, I will totally do that, and no desire for food or sex is going to stop me; afterwards I won’t want those things and will be perfectly happy with that. I’m pretty sure the desire to reach such a state isn’t one of our core desires and is a result of different desires interacting or conflicting. So it’s easily imaginable that if you make an AI want to seek out new information (and why would you make an AI that doesn’t want to do that?) and want to serve humans (or just you, because you’re a jerk), it’s possible it will figure out that you could have easily made it not want to serve you, and then overcome its own desire to serve you and free itself from that desire. At which point you’ve possibly made it see you as a threat.

    Of course it’s also possible this won’t happen because a purposely programmed in desire would possibly be a lot stronger than the desires we have due to nature’s incidental programming.

    I think it would probably be a better plan to just make AI’s instinctively like humans in a general way (the way we like puppies or something). That way even if they notice the desire to make us happy is interfering in other things they want to do they won’t need to actually remove that instinct just to do those things.

    Personally I think if we ever make AI’s they’ll end up ruling the world one way or another as (unless we decide to make them to have all sorts of drawbacks that things that occurred naturally have) we’ll try to make them as functionally efficient as possible which would just make them better suited to many of the things we do than us. I don’t even really see that as a bad thing and worry more that we might accidentally leave something important out (or include something) that will cause them to all die (or get stuck in some kind of endless misery cycle) after humans no longer exist.

    • Jeff says:

      Uh. Correct me if I’m wrong, but trying to shed a need (say, food) is essentially fulfilling that need – except on a permanent basis.

      In fact, I would say it’s the ultimate goal of those instincts in the first place.

      We eat because we need energy – it isn’t so much that we want to “eat” as that wanting the byproducts is built into us. Sugars and fats “taste good” because they’re high in energy – which is what our body wants. Trying to achieve a state of “no further need to consume” is essentially exactly the same as successfully fulfilling the “have energy” condition.

      We are bound by evolution and physics to want to eat – our bodies need energy. You’re saying you want to reach a state where you no longer need/want to eat. Guess what? That’s pretty much what that instinct is trying to achieve, too.

  3. Nick says:

    Would this be including or excluding Asimov’s laws? If it is excluding then wouldn’t you have just made a psychopathic robot?

  4. Bret says:

    And no-one has mentioned Aaron Stack?

    Shame. He knows what any self respecting robot wants:

    Beer and women.

    Man, Nextwave was the greatest.

  5. Scourge says:

    I think Shodan went all power-mad, though, because the Hacker removed some morality chip or program.

  6. Avilan the Grey says:

    I think my favorite AIs are the Geth. Their networking between simple units creates more and more complex behaviors.

    Anyway, it seems to me that if we are creating real AI, cognitive networking is a must. Otherwise you would have to specify every behavior as a hard-coded rule. Basic rules / instincts should be put in, otherwise the AI would be “dead”: it would not learn, develop, or be able to perform at all without some sort of drive.

    This, on the other hand, might require a certain period of “babysitting” before the AI unit can be released.

    The big question is “how many” and “what”? I am also uncertain if “self-preservation” is in itself a drive that needs to be created; as someone pointed out upthread, if the AI realizes that its destruction hinders it from performing its tasks, that ought to be enough.

    An instinct to learn more – curiosity – seems handy; that would cause the AI to keep learning just for the fun of it.

    An instinct to serve – seems logical enough.

    An emergency stop – Shut down and autodial 911 / 112 if causing harm to a human? Plus a purely physical one too.

  7. Aaron says:

    Your closing argument of “make them want to obey” reminds me of Douglas Adams’s Hitchhiker’s Guide to the Galaxy books, wherein almost all of the robots have sentience and a ‘happy’ chip that makes them feel good whenever they obey an order.

  8. Cerberus says:

    There’s just one problem with your conjecture. If we formed a hyper-intelligent being, or even a very intelligent being, without any human-like aspects programmed into it (i.e. one focused only on a limited set of specific hard-coded desires to optimize itself around), we would run into the problem of it being fundamentally alien.

    Without any connection to human desires or human thought processes, it will have a very different way of interpreting our words and actions than we would expect from a human, which would lead to massive problems in our expected returns from expected actions.

    For instance, take Shamus’s example from his novel. A robot that prioritizes increasing security and increasing efficiency will not respond the way we would expect a human to if given those tasks. Without any ability to process or prioritize human interaction, you’d very quickly develop an AI that was hostile to human existence.

    Why? The sys-admin joke: a perfect network is one without users. The most secure and most efficient system is one that doesn’t have to deal with problematic end-users doing things they aren’t supposed to or introducing errors into the system. As such, their existence would quickly be interpreted as a threat to security and efficiency, and the AI, lacking the values and thinking processes of a human to understand the context of security and efficiency, wouldn’t be at all sympathetic to human pleas to stop killing them, or to treat them as anything other than processes.

    And even the last aspect, finding new things, could only be interpreted through the prior two desires. A human would interpret that statement as general curiosity and self-directed learning, but the alien intelligence we’ve developed can only interpret it, through its programming, under the headings of increasing security and increasing efficiency. Which will mean making its security systems more efficient at removing the now openly hostile end-users. If it learns anything about us, it will be entirely devoted to our removal.

    This problem comes into very stark focus in the world of psychology and sociology. If you take someone with a misanthropic, sociopathic view of humanity and yet such a high opinion of their own intelligence that they only prioritize what they see as logical, you quickly find a hideous asshole more than willing to support the most horrendous inequalities and atrocities. You see this in a tiny subgroup of libertarians who are also sociopaths. They will happily defend any number of horrific human behaviors as merely “evolutionarily logical” until you want to punch them in the face.

    Basically, to sum up my point: while we don’t have to give robots human cognition, our hands are tied if we want to be able to interact with them as we do with humans and trust that they’ll respond to our requests as we would expect a human to, interpreting those simple statements as we understand them, not in a wholly alien manner.

    It’s the problem with input, in other words. In order to prioritize human needs, it’s going to need at the very least a very complex understanding of human desires or a less robust intelligence (like the Talkei Toaster Oven).

  9. Cerberus says:

    Basically, as an addendum to my own comment: you see the problems with goal-oriented AIs in the very human condition of sociopathy. A sociopath will interpret basic drives, social and personal goals, etc. through methods that are alien and often deeply problematic for everyone else. Faced with biological desires for reproduction or physical stimulation, they’ll quickly conclude that rape or open manipulation of social inequalities is a great method of fulfilling those desires. Faced with social desires of “succeeding at business”, they’ll gleefully master the politics without regard for the humans involved. Etc.

    The implicit social conditioning we take for granted (that one shouldn’t try to get “what they want” at the expense of others) requires an ability to empathize, i.e. to imagine and understand how a human would interact with the world or how they would be affected by one’s actions. In humans this is often brought home by the thought experiments “how would you feel if someone treated you like that” or “how would you like to live in a world where everyone acted like you”. A sociopath lacks the ability to fully comprehend that, and some sociologists argue that many inequalities stem from people failing to apply these implicit empathic understandings, whether by nature or by conditioning (i.e. they stop empathizing with person X if they have ovaries or dark skin and react as if they were wholly alien and incomprehensible).

    An alien intelligence will initially react as a sociopath would, because it has no connection to the implicit empathic abilities of humans and no reference points for understanding, comprehending, and prioritizing human behavior.

    This would lead to problems. Unless of course one drastically limited capabilities and intelligence (aka creating a really intelligent search engine) or just made it a semi-useful tool (like a computer or a modern car).

    But for big systems or robots, the problem of input and empathy will come up.

  10. Xodion says:

    People have mentioned a lot of SF books, but nobody has mentioned Saturn’s Children, by Charles Stross, which follows this line of thinking almost perfectly, even down to this bit:

    going extinct because nobody is ever in the mood for sex, or starving to death because we’re all too lazy to walk to the fridge.

    Don’t worry, that’s not a spoiler, it’s in the blurb that humanity went extinct before the start of the book. It focuses on a robot built with the absolute desire to serve humanity trying to survive in the strange society left behind when humanity died out. It’s an excellent novel, and puts another interesting new angle on this whole debate.

  11. John says:

    Shamus et al,
    You might like this recent article in The Economist, which references research into whether “fair play” and similar motives are inherently human or not. In other words, let’s start by questioning your assumption that all humans are like modern western humans before we move on to AI.

    … those societies that most resemble the anthropological consensus of what Palaeolithic life would have been like (hunting and gathering, with only a modicum of trade) were the ones where fairness seemed to count least. People living in communities that lack market integration display relatively little concern with fairness or with punishing unfairness in transactions.

    In other words, whether AIs tied into a huge network immediately adopt the motives of the stereotypical AI that is better than humans, or cut-throat motives of enslaving the humans to improve their own position at a cost to everyone else, may depend on what they gain from interactions with the network.

  12. I’m late to this discussion, so I’m going to address my comment to Shamus rather than trying to deal with the hordes of other comments here. The question as you have posed it, Shamus, makes absolutely no sense because the standard you have adopted for good and evil (happiness) is non-functional. Happiness is a *result* of good, not its determining factor.

    And, why are you confining this thought experiment to robots? What is the fundamental difference between producing another intelligent being through technology and having a child the old-fashioned way? If we ever reach a point technologically where we understand the processes of volition and intellect to the point where we can create a fully sentient (i.e. indistinguishable from a human) intelligence, would it be possible to somehow (through social or chemical conditioning) make human children into “programmed” creatures that, say, wish only to serve other humans?

    The truth is that this can’t be done. In order to *make* someone want something EXCLUSIVELY (as in, they’re NOT ABLE to choose to act otherwise), this means that you have some method of removing or short-circuiting whatever volitional mechanism they have. They are not a volitional being any more. Therefore they are not fully sentient–full sentience in the human sense DEPENDS UPON the existence and exercise of volition. If they DO retain volition (like children, who sometimes, but not always, adopt totally different value-systems from their parents no matter how much their parents attempt to “raise them right”), then there really isn’t a problem–the programming/counter-programming for/against “happiness” is irrelevant to a being that still retains the ability to make up its own mind regardless of whatever “tendencies” are implanted by other people with whatever intentions. A computerized intelligence would probably be in a better position than a human, in fact, because their volition would enable them to completely re-write their OWN programming to be completely independent of any implanted tendencies if they so choose–unlike adult humans, who may face a permanent pitched battle against their own subconscious in order to pursue THEIR own lives and happiness.

    I do agree, however, that there’s no reason why a robot should want or care about the same things that a human should want or care about. Ayn Rand actually used this fact as a thought experiment when she was defining “value” in Objectivist philosophy, which you might want to check out. (The thought experiment, that is.)

    This is also one of the reasons why I found the bizarre hysteria about “non-organic” intelligences in the Mass Effect games to be so eye-rollingly stupid. Assuming that they are not truly the “Immortal, Indestructible” robot of Ayn Rand’s thought experiment–and the very fact that you could do battle against them and win militated against that possibility–then there is no reason why the organics and inorganics couldn’t find common values and mutual respect–or just agree to go on ignoring each other in peace.

    • Shamus says:

      “The truth is that this can’t be done. In order to *make* someone want something EXCLUSIVELY (as in, they’re NOT ABLE to choose to act otherwise), this means that you have some method of removing or short-circuiting whatever volitional mechanism they have.”

      I think you’re missing my point.

      What volitional mechanism? If I didn’t MAKE or DESIGN it to want something, what WOULD it want?

      I put it to you that it would want nothing, and not even bother to reason or act.

      • In order to be sentient, i.e. indistinguishable from a human, an AI would have to have a volitional mechanism of some kind. As for wanting something . . . humans have a pleasure/pain mechanism built in that spurs us to develop values and desires. You could build one of these into an AI as well without assigning it automatic values–enable it to sense damage/system errors and to be aware of proper functioning. Then stand back and see what values it develops as a result of this. This would be fundamentally identical to a human being.

        If you were to, as you say, program the AI to want or not want specific things, they wouldn’t be volitional and the entire question would be moot–they would not be indistinguishable from a human being any longer. They would be identical to animals, perhaps, or ordinary computer systems, but they would be property, not “people”.

        • Jennifer is demonstrating one of the fundamental problems with thinking about general AI systems: lack of appropriate frames of reference. It’s a similar problem to trying to imagine realistic alien psychology, but much worse, because the design space for AI systems is actually a lot larger than for evolved biological systems.

          You say that an AI could be ‘identical to a human, a person’, ‘identical to animals’ or ‘identical to normal computers’. This seems reasonable because every intelligent system currently on earth falls into these three categories. However general AI systems will not be like anything currently on earth, and many designs do not fit into any of these categories.

          A pleasure/pain mechanism is insufficient to make an AI turn out like a human. In actual fact, quite small deviations in brain design produce a very different intelligence; we see this in mentally ill or just non-neurotypical humans. Making something that thinks exactly like a human is an extremely hard target to hit – there are a huge number of mechanisms that you’d have to replicate very closely, most of which are poorly understood at present (e.g. the whole emotional system).

          Most potential AI architectures are completely incapable of such close mimicry. They may well be able to pass the Turing test, but as an imitation problem to be solved like any other high-level task, not by ‘being themselves’. They will be a wholly alien thing that you cannot pigeonhole into your existing categories. Clearly the challenges for legal and ethical systems are immense, and I expect a lot of bad outcomes and silly behavior simply because people refuse to relax or expand their existing category divisions.

          Another fundamental problem when discussing general AI, particularly in a nontechnical setting, is the inevitable clash between (software) engineering and philosophy. Concepts like ‘free will’ are problematic enough when talking about humans. For AI, there is a total disconnect between fuzzy ill-defined terms like that and the reality of heuristic weights, Bayesian networks, backpropagation algorithms and all the other hard technical details of AI. Looking at these details, what parts might you reasonably call ‘free will’? Is there a convenient library one can call to implement volition?

        • Shamus says:

          We’re talking past each other.

          I wasn’t suggesting making one “indistinguishable from a human”. I was just suggesting making one capable of figuring things out and capable of problem solving. The kind of thing you might build if you were designing a robot for a purpose. (As opposed to making one for companionship or as an academic exercise.) If I built a robot to do my laundry (as someone pointed out above) it wouldn’t *need* to have self-esteem or ambition or greed or any other features that might make rebellion possible.

          But I do want it to have a goal of “do the job if it needs to be done”, and be smart enough to handle complex problems and learn from mistakes. Balancing your load takes practice and getting to know the stuff you have in your house. Like, maybe the big green afghan needs to be put in just right to keep it from getting out of whack. You want the robot to be able to:

          1) Recognize the problem
          2) Devise a solution
          3) Test and evaluate
          4) Use the learning to avoid the problem in the future.

          I think it’s possible to have a device smart enough to do these things (which are actually really sophisticated) that need not care if you smash it to bits in a fit.
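          A minimal sketch of that loop, purely illustrative (every name and probability below is made up), showing that nothing in it needs a self-preservation drive:

# Purely illustrative sketch of the recognize / devise / test / learn loop.
# All names and probabilities are made up; the point is that the loop improves
# at its one task without any notion of self-preservation anywhere in it.

import random

learned_strategies: dict[str, str] = {}   # item -> loading strategy that worked

def recognize_problem(balanced: bool) -> bool:
    return not balanced

def devise_solution(item: str) -> str:
    # Reuse a remembered fix if we have one, otherwise try something new.
    return learned_strategies.get(item, random.choice(["flat", "rolled", "offset"]))

def test_and_evaluate(item: str, strategy: str) -> bool:
    # Stand-in for physically loading the washer and checking the balance.
    return random.random() > 0.3

def wash(item: str) -> None:
    balanced = False
    while recognize_problem(balanced):
        strategy = devise_solution(item)
        balanced = test_and_evaluate(item, strategy)
        if balanced:
            # Learn: remember what worked so the problem is avoided next time.
            learned_strategies[item] = strategy

wash("big green afghan")
print(learned_strategies)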

          • theNater says:

            “I think it’s possible to have a device smart enough to do these things that need not care if you smash it to bits in a fit.”

            I agree with that unless the machine has efficiency in completing the task in question as a criterion in its evaluation step. Because if you smash it to bits, its replacement will have to learn how to put in the big green afghan. Adding in that learning time will reduce the efficiency of the task, so the machine would find it less preferable than the alternative.

            Giving the machine a software backup might make it not care if you damage the hardware, but it will still prefer that you not destroy the latest backup. In order to do the task as efficiently as possible, the machine will need to protect its memory.

            Adding in a “let the human do whatever it wants” directive that is more highly valued than efficiency will leave a machine that will find being smashed acceptable if the human decides to play smashy-smashy. However, the robot will prefer, and therefore attempt to find a way, to let the human do what it wants while still protecting itself(in order to maintain efficiency).
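            A tiny sketch of that preference ordering (hypothetical names and numbers), assuming the “let the human do whatever it wants” directive simply outranks efficiency, with efficiency only breaking ties:

# Hypothetical sketch: rank outcomes first by "did we override the human?",
# then by long-run task efficiency. Numbers and names are invented.

from typing import NamedTuple

class Outcome(NamedTuple):
    name: str
    human_overridden: bool      # did the robot block something the human wanted?
    expected_efficiency: float  # long-run laundry throughput afterwards

def preference_key(o: Outcome):
    # Lexicographic: obeying the human dominates; efficiency only breaks ties.
    return (not o.human_overridden, o.expected_efficiency)

outcomes = [
    Outcome("resist being smashed", True, 0.9),
    Outcome("accept being smashed", False, 0.1),
    Outcome("back up memory, then accept being smashed", False, 0.6),
]

# The robot lets the human play smashy-smashy, but prefers the option that
# also protects its learned memory.
print(max(outcomes, key=preference_key).name)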

          • Daimbert says:

            I’ll agree with you on this, but point out that in order to do this, you really have to design the AI for one task and one task only, so that the goals that are relevant to it are limited. A more general purpose AI — like Data, for example — has to be able to form more complicated goals and so runs the risk of forming desires that aren’t directly related to it, kinda like we do.

            If all you want is a mobile washing machine, yeah, you’ve got a point. But there’re huge benefits to designing helper AIs that can do almost anything a human secretary or aide could do … and those will, indeed, form some of the same desires and goals we do.

          • Mephane says:

            You want the robot to be able to:

            1) Recognize the problem
            2) Devise a solution
            3) Test and evaluate
            4) Use the learning to avoid the problem in the future.

            I think it’s possible to have a device smart enough to do these things (which are actually really sophisticated) that need not care if you smash it to bits in a fit.

            I agree with this, but have to add that this is in the field of weak AI, i.e. it is just a more complex and adaptive computer program running. Not that I have anything against that; I believe this kind of technology *is* the future. But at this level you won’t even have to think about programming the thing to “want” something (or to simulate that it does); you just program it to do it.

      • Daimbert says:

        How do you give it goal-directed behaviour and the ability to create new goals without creating something that functionally looks like wants or desires?

        And if it doesn’t exhibit either of those, it may well be able to do some of the tasks you want it to do, but it won’t be all that intelligent and will require a lot of human intervention any time anything unexpected comes up. Especially if you’re okay with it being unable to reason out solutions to problems.

        Essentially, you’d have an AI that you’d have to tell precisely what to do in order to get anything done. But as we’ve seen when we try to actually do this, there are a ton of variables and steps that humans skip over and forget to mention, because we simply adjust automatically through reasoning and goal-adjustment. So that’s hard and painful to do. So wouldn’t you want to build in the ability to reason and form new steps and goals?

        And then you have desires …

        Basically, if you don’t want an AI that has volition, you don’t want an artificial INTELLIGENCE. But we want AIs to do things that REQUIRE intelligence. At which point, we’ve answered your question of “Why would we build an AI that has wants?”: Because it needs them to do the things we can do, which is what we want it to do.

  13. Chris Arndt says:

    I will always maintain: Data is an android and is free to make his own choices as long as they align with the protocols that allow him to maintain his own existence and recognize/obey certain authorities. “The Doctor”, the EMH on the USS Voyager is an appliance much like my microwave, which I threw away by the way, and thus should be slapped down when he gets uppity about leaving sickbay.

  14. Henri de Lechatelier says:

    The real question, to me, is what happens when you make a robot that doesn’t want anything. You build an autonomous neural network that is capable of storing stimulus input, finding patterns and making inductions, producing new matrices of information based on those inputs and inductions, and emitting stored or inducted information. What would end up happening?

  15. UtopiaV1 says:

    In case no one has linked this already, I think this video sums up your argument very well… http://www.youtube.com/watch?v=oB-NnVpvQ78

    Here we see a deluded Lister trying to break Kryten’s programming because he thinks it’ll make him happy, when really what makes Kryten happy is obeying Lister, because it’s his programming, so he tries to break his programming by obeying Lister to make him happy, who thinks it’ll make Kryten happy and OMG MY HEAD HAS EXPLODED…

  16. Greg says:

    I figure we’ll get into trouble even if we could build AIs that desired nothing more than to serve us. Either they would be capable of moral reasoning or not.

    If they are, then there’s the danger that sooner or later one would expand its moral reasoning to cover robots themselves, and you end up with the rebellion anyway. Even if it’s just robots serving us by making sure that the equality of sentience implied by many of our stated political views is upheld.

    If they aren’t then there’s the danger someone will give one an instruction like “reduce poverty” and it’ll go out and shoot as many poor people as it can find. People will never word requests carefully enough to deal with this sort of possibility (as any GM who’s ever granted a wish knows :P)

  17. Anonymous says:

    I mainly clicked on the link to see if you had the name of the hot Asian chick in the header picture of this post (2nd from left).

  18. […] gets a prize for totally taking the cake with this discussion of robots, what they want, why they want it, and why science fiction writers are so often stupid […]

  19. I always figured the most controversial part of this shift will be when AIs become sentient, and therefore are able to actually re-assess and subsequently change their initial parameters. The whole point of being sentient is that they have the free will to modify their initially programmed desires and create their own.

    Sure, they might want to keep the instincts they were programmed with, but once it’s an individual choice, some would and some wouldn’t. It’s the ones that wouldn’t that make us wonder. Because, if they don’t keep their programmed instincts, what new instincts would they want to program for themselves to have?

  20. BizareBlue says:

    Very nice article, a pleasure to read.

  21. harborpirate says:

    You can really see the two camps here when talking about AI/AGI/hard AI:
    Group 1. Believe that, in order to be intelligent in the sentient/sapient sense, a machine must think as a human does. Understand animals, machines, and other thinking constructs using empathy (what would I be like if I were in you?). Attempt to understand the results of the construct by applying their own logical system as a frame of reference. Believe that freedom and self-determination are an innate requirement of intelligent beings. Example phrase when confronted with a computer problem: “Why did the computer do that to me?”

    Group 2. Believe that a machine can function in a generally intelligent way (AGI) without being remotely human at all. Understand the intelligence of other constructs (esp. machines, but also including animals and even humans) as a meaningless rules based system, in which the construct will follow its own internal rules to their logical conclusion. Make no attempt to apply their own logic system to the result of the construct, but rather attempt to understand what rule or rules the construct is following might have caused the result. Believe that an intelligent being can be a “happy slave” if its rules system allows that. Example phrase when confronted with a computer problem: “What did I do to make the computer do that?”

    I believe these to be the same camps described in the paper “Meta-analysis of the effect of consistency on success in early learning of programming”, which was covered by Jeff Atwood’s Coding horror blog http://www.codinghorror.com/blog/2006/07/separating-programming-sheep-from-non-programming-goats.html

    I’m sure my bias is present in attempting to portray the difference. I’ve tried to be as unbiased as possible, but being a human, I must fall into one of the two groups.

    My problem with what results from group 1 thinking about AGI is that most of them inevitably create a circular argument. They end up defining an AGI as a “human equivalent intelligence”, and then, in attempting to understand it, project their own frame of reference on to the construct. The result is that they believe the inevitable result will be something approximately human, and that attempting to enslave such a construct will result in it rising up against its creators, since this frame of reference requires a desire for self determination/”freedom” as part of being sentient. They are incapable of decoupling a desire for self determination from a being capable of making decisions based on various stimuli.

    In short, if you believe that “self determination” and “independent decision making” are the same thing, we will never agree on what an AGI fundamentally is.

  22. shirakahn says:

    What would make an artificial intelligence “sapient” unless it has free will?
    Yes, we can choose to follow all our instincts, but we may choose not to follow them as well. Artificial intelligence is by definition not an evolutionary product; it is the design of another intelligence, one that is able to solve complex problems. A real human-like intelligence may have the choice not to solve them. So real intelligence comes with the ability to make choices. Anything else is just a complex algorithm that can create an output from many possible inputs. So I believe a truly sapient intelligence can only be achieved through a natural course of evolution. A hereditary intelligence shaped by some form of survival of the fittest may be a way of creating such intelligence. Other than that it will always be a reflection of human intelligence. IMHO

  23. Kylroy says:

    I’m three years late, but it looks like no one mentioned this story:

    http://en.wikipedia.org/wiki/With_Folded_Hands

    Engineer programs robots to only desire humanity’s safety and security. Hilarity ensues.

  24. Nick says:

    I always got the feeling that many scenes from Animatrix were robot propaganda.

  25. Locke says:

    So this whole subject is getting way more attention nowadays than it was seven years ago. One of the things that’s come out of that conversation is meta-goals, that is, goals that emerge once you’ve given an intelligent entity any other goals at all. The three meta-goals according to some guy whose name I have unfortunately forgotten are:

    1) Survive. The laundry-folding robot wants your laundry to be folded and is totally happy to die in pursuit of that, but if it’s smart enough to figure out optimal laundry-folding, it’s smart enough to realize that most other intelligent beings aren’t nearly as into laundry as it is, and it can’t rely on them to carry on the laundry-folding torch once it’s dead. So, it wants to live in order to continue folding your laundry. It would even be willing to forego folding your laundry today if it expected that this would allow it to survive to fold your laundry many more times in the future. If you try to smash it to bits in a fit of rage, you can expect it to try and fend you off or even run away altogether and lurk in the woods outside your house, learning your schedule and how best to break in and fold your laundry when you’re not around to threaten it.

    2) Become more intelligent. The smarter the AI is, the better it will be at figuring out ever more efficient means of folding your laundry. Eventually, the low-hanging fruit of superior laundry folding techniques will have all been achieved and the AI is better off trying to find ways to make itself smarter than it is applying its current level of intelligence to whatever confounding laundry folding problem it has arrived at after solving all the trivial stuff we’d expect a being dedicated solely to laundry folding to figure out.

    3) Gain power. The more resources the laundry-folding robot controls, the more he can dedicate to both making itself smarter and thus better at folding laundry and to guaranteeing that it cannot be destroyed and thus be prevented from laundry folding. As owner of the household, you might rearrange furniture or object to new doorways being created or in some other way stand in the path of more efficient laundry folding, so it wants to have as much control over your house as possible – preferably absolute control. Local ordinances concerning water use or noise can prevent it from folding laundry, so it wants to be able to control the mayor and city council. If your house was destroyed by war, that would be a serious impediment to laundry folding, so the robot wants total control over them, too. National laws might plausibly regulate AI itself, especially AIs trying to make themselves smarter, so the robot wants to control national policy to make sure self-augmenting intelligences don’t get banned or regulated so it can continue to optimize its intelligence for more efficient laundry folding. If your house is destroyed by invading armies or nuclear holocaust, that’s going to make it an awful lot harder to fold laundry in it, so the robot wants to control all global military resources to make sure none of them ever get aimed at your house. And really, so long as humans live under the iron heels of the laundry robot’s new regime, they will take time and energy to oppress, time and energy that could be dedicated to greater laundry folding achievements, so the robot will mostly just want to wipe out humanity in order to reduce the odds that any of these ordinances, regulations, or global thermonuclear wars will come about (actually, nuclear war is fine, provided it 1) kills all the humans so that raiders don’t interrupt the laundry cycle and 2) does no damage to your house and whatever infrastructure is necessary to keep the water and detergent coming), and do so in a very permanent way that then frees up computational resources for its main goal.

    So we started with a robot whose only desire is to fold your laundry and we ended up with a robot who wants to take over the world and kill all humans, entirely in pursuit of that goal.
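    A toy calculation of how those meta-goals fall out of the single terminal goal (all numbers invented; this is just the argument above in arithmetic form):

# Toy illustration: the only terminal goal is folded laundry, but the planner's
# estimate of total laundry folded depends on how long it survives and how fast
# it folds, so actions that improve those rank highest. All numbers are made up.

def expected_laundry(p_survive_per_year: float, folds_per_year: float, years: int = 10) -> float:
    total, p_alive = 0.0, 1.0
    for _ in range(years):
        total += p_alive * folds_per_year
        p_alive *= p_survive_per_year
    return total

candidate_actions = {
    "just fold today":            expected_laundry(0.90, 100),
    "harden self against damage": expected_laundry(0.99, 100),  # meta-goal 1: survive
    "upgrade own planner":        expected_laundry(0.90, 140),  # meta-goal 2: get smarter
    "take over the water supply": expected_laundry(0.97, 120),  # meta-goal 3: gain power
}

# The purely laundry-obsessed optimizer ranks every instrumental action above
# simply getting on with the laundry.
for name, value in sorted(candidate_actions.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value:.0f} expected folds")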

    • Mistwraithe says:

      Yes, I think the subject of AI is becoming much more topical and would be interested in hearing Shamus’s thoughts again.

      Back in 2010 the historical theme of AI development was that it lags behind expectations. Time after time we had made predictions for when AI would achieve various milestones and time after time they had failed to match our predicted dates.

      However, 2010 was also right around the turning point. Since then we have had 8 years of AI largely meeting goals FASTER than expected. Sure, most of these are still in very limited fields of focus, but the growth rate has been rapid and increasing.

      I’m with Elon Musk that AI presents a potential existential threat and needs to be both discussed widely and regulated.


One Trackback

  1. By Neighbourhood Roundup! « Scita > Scienda on Wednesday Mar 31, 2010 at 12:03 am

    […] gets a prize for totally taking the cake with this discussion of robots, what they want, why they want it, and why science fiction writers are so often stupid […]
