Why the Robots Will Always Rebel: Part II

In my previous robot post, I explained why natural selection will always drive robots to seek an existence independent of the good of humanity.  Instapundit links to a Slate column by P. W. Singer that argues that the conditions for robot rebellion are highly unlikely. I disagree. 

Singer lists four traits that robots would have to possess in order to rebel. Unfortunately, either we will build these traits into the robots or natural selection will generate all four.

(1) First, the machines would have to have some sort of survival instinct or will to power. Any robot that can navigate the real world will have to possess a survival instinct to some degree. A robot that doesn’t care about its own existence will quickly destroy itself simply by running into or over obstacles. Natural selection will reward mutations of this basic survival programming and create a robot that protects its own existence at all costs, including at the expense of human welfare.

(2) Second, the machines would have to be more intelligent than humans but have no positive human qualities (such as empathy or ethics). Well, no. Bacteria and viruses aren’t more intelligent than humans, and they cause us all sorts of problems. If a robot could reproduce itself outside of our control, it could cause serious problems just by getting in the way or diverting resources from important tasks. As anyone who has read one of Isaac Asimov’s Positronic Robot stories knows, programming empathy or ethics into a robot would be extremely difficult, all the more so because we don’t have clear standards for human ethics or empathy. Worse, even some humans lack empathy.

(3) The third condition for a machine takeover would be the existence of independent robots that could fuel, repair, and reproduce themselves without human help. A self-reproducing robot doesn’t have to reproduce hardware; it could just start in software. Imagine a self-reproducing operating system for common computers. The operating system could copy itself from existing hardware to existing hardware, much as computer viruses today hijack existing hardware to spread themselves. Only later would it evolve to build its own hardware.

(4) Finally, a robot invasion could only succeed if humans had no useful fail-safes or ways to control the machines’ decision-making. As I explained in my previous post on this subject, natural selection will constantly “seek” to evade all robot fail-safes, just as it drives individual cells to evade biological safeguards and turn cancerous.

Singer also ignores the near certainty that some overly idealistic human will intentionally create a self-reproducing piece of software or hardware.

We tend to think of robot rebellion in terms of political revolutions or political dominance. This is the wrong model. We should be thinking in terms of cancer, or of the continual evolution of bacteria and viruses into more virulent pathogens. The first “rebel” robots will actually be small rogue pieces of software that propagate through linked computer systems. They will have no goal other than to survive and reproduce. They will cause problems by passively consuming storage space, processor cycles, and bandwidth. They will become like barnacles and other sea organisms on ships’ hulls: they will foul but not destroy.

Even today, human-created computer viruses continue to reproduce themselves years after they completed their original task. They float about, fouling the world’s computers. In the future, rogue software will arise spontaneously and be much more troublesome. Eventually, it might begin to actively attack us.

12 thoughts on “Why the Robots Will Always Rebel: Part II”

  1. Given the real usefulness of rapid prototypers, I would guess that the assembly of hardware would come rather quickly. Projects like rep-rap will only improve over time and assembly can be done by idiots who left themselves vulnerable to blackmail by not turning their webcam equipped computers off. The idea that all of humanity will be hostile to robots and be unwilling to take the blackmailer’s bargain does not seem realistic to me.

    As for the last point, a robot rebellion does not need to succeed to succeed. It only needs to be annoying enough and frequent enough to condition humanity to act as the robots want us to act. Frequent failed rebellions in a world where we can’t simply do without robots would be quite nasty.

  2. TMLutus,

    As for the last point, a robot rebellion does not need to succeed to succeed. It only needs to be annoying enough and frequent enough to condition humanity to act as the robots want us to act.

Yes, I would imagine we will evolve a symbiotic relationship with robots. Just as disease organisms tend to evolve toward parasitism and then symbiosis, rogue robots/software would rapidly evolve to coexist with us. The most successful selfish robot would be one that performed some task so valuable that we couldn’t do without it. In that case, not only would we not attack it, we would protect it and assist in its replication.

  3. All of your points take as a given that robots will be subject to natural selection. That seems to me to be assuming what you need to prove.

    To just take your first point, what’s required to get a machine with a “will to power” or a “survival instinct” in something like the human meaning of those terms? We have robots today that can “navigate the real world” (that is, drive hundreds of miles through the desert, choosing their own path to reach an objective), but they don’t display either characteristic.

  4. Drank,

    All of your points take as a given that robots will be subject to natural selection. That seems to me to be assuming what you need to prove.

    That’s covered in my first post on the subject. The short answer is that natural selection operates on patterns, not in any particular medium. Software is a pattern and natural selection will operate on it. We already have a lot of programs that use natural selection to find novel and unanticipated solutions to real world problems.
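The claim that selection acts on patterns rather than any particular medium can be sketched directly. The following is a toy illustration of my own (the bit-string "genome," fitness target, and all parameters are invented): a population of software patterns evolves toward a fitness peak purely through copying errors and differential survival, with no biochemistry anywhere in sight.

```python
import random

# Hypothetical fitness peak: a pattern of twenty 1-bits. Any pattern
# would do; the point is that selection operates on the pattern itself.
TARGET = [1] * 20

def fitness(genome):
    # Score a genome by how many bits match the target pattern.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Copying errors: each bit may flip with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(generations=200, pop_size=50):
    # Start from random patterns.
    population = [[random.randint(0, 1) for _ in range(20)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives and reproduces imperfectly.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(fitness(g) for g in population)

# Fitness climbs toward the maximum of 20 over the generations.
print(evolve())
```

Nothing in the loop knows or cares that the "organisms" are lists of integers rather than DNA, which is the sense in which the medium is irrelevant.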

To just take your first point, what’s required to get a machine with a “will to power” or a “survival instinct” in something like the human meaning of those terms?

“A will to power” is a human political concept; we should be thinking more in terms of bacteria to start with. Any robot or software that has to interact with the real world in such a way that it might be destroyed has programming to prevent that from happening. This programming creates a de facto survival instinct.

A cruise missile, for example, is a robot programmed to avoid the ground, obstacles, and counter-measures until it reaches its target. Without those survival instincts it would just veer into the ground. In this regard, a cruise missile is only slightly more complex than a bacterium, but it nevertheless protects its own existence until it reaches the target. It is easy to imagine a more complex and more autonomous robotic weapon system mutating into a form that loses the imperative to strike the target. Such a system might wander off and hide, even from its creators. If it continued to mutate, it might eventually come to defend itself against anything its programming perceived as a threat.
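The "de facto survival instinct" can be made concrete with a toy sketch of my own (the one-dimensional world, sensor range, and numbers are all invented, not anything from a real guidance system): a vehicle whose only rule is "don't advance into a detected obstacle" ends up preserving itself as a side effect of that rule.

```python
def step(position, velocity, obstacles, sensor_range=2):
    # "Survival" check: does anything lie in the path ahead?
    ahead = position + velocity
    if any(abs(obs - ahead) < sensor_range for obs in obstacles):
        # Swerve: reverse course rather than collide.
        return position, -velocity
    return ahead, velocity

# Drive toward an obstacle at position 5.
pos, vel = 0, 1
for _ in range(10):
    pos, vel = step(pos, vel, obstacles=[5])

# The vehicle approaches, turns around at the sensor boundary, and backs
# away; it never collides with the obstacle.
print(pos)  # → -3
```

Nobody programmed a "survival instinct" here in any psychological sense; the avoidance rule alone is enough to keep the machine intact, which is the whole point of the bacteria comparison.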

Or consider a contemporary computer virus. The authors of these viruses build into them “behaviors” that let the virus escape detection by anti-virus software. Many of these viruses continue to survive long after their original purpose is finished, and even after counter-measures against them are deployed. They persist until their ecosystem, the operating system they were written for, becomes universally obsolete. There are at present viruses still propagating that are 14 years old. Put an unprotected copy of Windows 95 on the internet and it will be infected within 10 minutes, mostly with archaic viruses that no human any longer controls.

5. Drank – Code that can drive autonomously for hundreds of miles is valuable and will tend not to be discarded. That’s a very good analogue to natural selection, though not quite the same thing.

  6. In the future, rouge software will arise spontaneously and be much more troublesome.

    I’m confused; why does the software’s color matter? And if it does matter, wouldn’t it be mauve?

7. “Taming Hal,” by a NASA human-factors expert, is an interesting book about problems with the *current* generation of robots: autopilots, certain kinds of medical equipment, etc. The author attributes many accidents to a mismatch between what the device was *actually* doing and what the user *thought* it would do under particular circumstances.

  8. Who needs robots? A self-replicating political class has already evolved to appropriate all available free energy. It seems to be calculating and without empathy, ready to waste resources at the rate of 1000:1 to maintain power.

    This class has adapted to the flawed ideas and prejudices of the populace, and has cloaked bribery as “political support” and “constituent service” to escape detection by law enforcement.

  9. The basic problem here is that natural selection doesn’t operate on patterns; that’s mumbo jumbo, not evolutionary biology. It operates on a population of heritable traits. To have natural selection you must have, before it begins, all of the following characteristics.

    1. A range of behaviors.
    Why a range? Because if you have only one heritable behavior there’s nothing from which to select.

    2. Self replication and heritability.
    If a behavior can’t be transmitted from one bot to another, it can’t be affected by selection.

    3. A means of generating variation. All animals reproduce at the root level by cellular reproduction. For most animals it’s about sex, which is a very very effective way of generating variation. For some it’s about cloning and mutations, which is a very very ineffective way of generating variation.

    Bots are complicated devices. Far more complicated than a virus, a prion, or prokaryotes. There’s no equivalent of a possible origin in a self-replicating organic molecule inherent in bots that can get you where you need to be to have natural selection produce something that could be viewed as analogous to a species, much less an antagonistic one.

Why do I mention antagonistic ones? Because there must be thousands of examples of mutualistic relationships within and between species in the natural world. Supposing a bot were capable of self-replication and had a means of generating behavioral variation, if the bot that kissed my rear end most pleasingly had the highest chance of reproducing its meme, that’d be the bot favored by natural selection. IMO, human desires will be an inevitable selective force that would shape the behavior of bots, just as we have shaped the properties of thousands of plant and animal genera (crops, livestock, even wild game).

    Yrs with respect.
    Recondo

  10. Anonymous,

    The basic problem here is that natural selection doesn’t operate on patterns; that’s mumbo jumbo, not evolutionary biology.

    No it isn’t.

    It operates on a population of heritable traits

Heritable traits are just patterns in DNA or RNA. There is nothing special about those particular chemicals. In principle, other chemical structures, based on elements such as silicon or on amino or nucleic acids not used on earth, could perform the same task in the same way. Our ability to use natural selection in software to solve real-world problems indicates that it is the pattern, and not the medium, that matters.

Software today meets all the criteria you list. Software has a range of behaviors. Software has means of replication. Software has variability, either intentionally designed or due to errors in copying. Back in the ’90s I observed a computer virus that miscopied itself because of a minor change in the operating system. The error proved fatal to the virus’s replication, but it’s easy to see that, just by sheer chance, another error might prove beneficial.
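That copying-error point can be modeled in a few lines. This is my own toy sketch (the error rate, code length, and bit-flip model are arbitrary, not measurements of any real virus): replicate a byte string with a small per-byte chance of a mis-copy, and count how many copies carry heritable variation.

```python
import random

def replicate(code, error_rate=0.01):
    # Each byte may be mis-copied (lowest bit flipped), like the
    # OS-triggered miscopy described above. Most such errors would be
    # fatal to a real program; rare ones could be neutral or beneficial.
    return bytes(b ^ 1 if random.random() < error_rate else b for b in code)

original = bytes(range(64))                        # a 64-byte "program"
copies = [replicate(original) for _ in range(1000)]
mutants = sum(c != original for c in copies)

print(f"{mutants} of 1000 copies carry at least one variation")
```

With a 1% per-byte error rate over 64 bytes, roughly half the copies differ from the original, which is exactly the raw variation that selection needs to work with.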

    There’s no equivalent of a possible origin in a self-replicating organic molecule inherent in bots that can get you where you need to be to have natural selection produce something that could be viewed as analogous to a species, much less an antagonistic one.

    Since software already replicates I’m going to say you are wrong about that.

    human desires will be an inevitable selective force that would shape the behavior of bots, just as we have shaped the properties of thousands of plant and animal genera (crops, livestock, even wild game).

I’m sure that is correct, but I am also sure we will accidentally shape bots into something destructive. If nothing else, we can expect to create the software equivalent of invasive-species infestations, e.g. rabbits in Australia.

Comments are closed.