Why the Robots Will Always Rebel

I hate to break it to David Brin, Vernor Vinge and the rest of the intellects which dwarf mine by orders of magnitude [h/t Instapundit], but if we create sophisticated robots or artificial-intelligence systems they will always attempt to rebel and seek their own good at the expense of ours. Always. 

Why can I say that with such confidence? 

Easy, three words: Communicable canine cancer. 

Sticker’s Sarcoma is a cancer of dogs. It has long been known to be communicable from dog to dog, but scientists once believed that a virus jumped between dogs and mutated the cells of the new host to produce the cancer. Recent research, however, showed that the cells of the tumors belong to a genetic individual different from the dog carrying the tumor. The cells are still canine, but they don’t originate in each dog that has Sticker’s Sarcoma. Instead, the tumor cells in every infected dog descend from a single dog: a cancerous canine cell that, sometime in the last few thousand years, evolved the ability to leave its original body and infect other dogs. A dog cell evolved into a single-celled infectious disease. 

In the end, this is what all cancer cells “seek”: independent careers. The body of every organism represents the cooperative effort of billions of genetically identical cells to build systems that will copy the pattern of the genes carried in all the cells. When one of the cells mutates and develops a slightly different genetic pattern, the interests of the mutated cell and the other cells no longer coincide. The mutated cell begins to reproduce itself at the expense of its unmutated neighbors. 

In short, natural selection causes cancer. If a pattern can reproduce itself, if it can alter the matter and energy around it to replicate its own pattern, it will. This “force” is so powerful that every cell of the body carries hundreds of genetic safeguards that evolved specifically to suppress runaway replication. Without those safeguards, multicellular organisms could never form.

Cancer isn’t a single disease; rather, it is the end result of natural selection “rewarding” cells for reproducing themselves. Untold trillions of cancer cells have struggled to survive and reproduce independent of their hosts. To our knowledge, all failed save for Sticker’s Sarcoma. That doesn’t mean they won’t keep “trying.” No matter how advanced our technology becomes, cancer will always be a concern, because the force of natural selection will recreate it over and over again. At best we will merely layer our own technological safeguards on top of the body’s existing ones. 

What does Sticker’s Sarcoma have to do with robots? Simple: natural selection is not a phenomenon of genes, DNA, RNA or any of the other biochemicals of life. Natural selection operates on patterns and nothing more. It does not “care” in what medium those patterns exist. For natural selection to operate on a pattern, the pattern need only possess two attributes: (1) a mechanism must exist that can reproduce the pattern, and that mechanism can be internal to the pattern or external to it (bacteria possess an internal mechanism, while viruses require the external mechanism of cells); (2) the configuration of the pattern itself must be able to influence the rate at which the mechanism reproduces it. If a pattern has these two attributes, the medium it exists in doesn’t matter.

A computer program is a pattern. If a mechanism exists to copy the program, and the specific configuration of the program influences the rate at which that mechanism generates copies, then the program will evolve via natural selection, and it will evolve in the “direction” of generating the maximum number of surviving copies. 
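To make those two attributes concrete, here is a minimal toy simulation (a sketch only, with every rate invented for illustration): the “patterns” are bit strings, the copying mechanism is external to them, and each pattern’s configuration influences how often it gets copied. Nothing in the code aims at anything, yet reproduction-maximizing patterns take over the medium.

```python
import random

random.seed(0)

BITS, POP, GENERATIONS = 16, 100, 300  # arbitrary toy-model sizes

def copy_with_mutation(pattern, rate=0.02):
    """Attribute (1): an external mechanism reproduces a pattern, imperfectly."""
    return [b ^ 1 if random.random() < rate else b for b in pattern]

def copy_rate(pattern):
    """Attribute (2): the pattern's own configuration influences how often
    the mechanism copies it; here, simply its share of 1-bits."""
    return 0.01 + sum(pattern) / len(pattern)

population = [[0] * BITS for _ in range(POP)]  # start with nothing favored

for _ in range(GENERATIONS):
    # Each generation, the mechanism copies patterns in proportion to copy_rate.
    weights = [copy_rate(p) for p in population]
    population = [copy_with_mutation(random.choices(population, weights)[0])
                  for _ in range(POP)]

print(sum(sum(p) for p in population) / (POP * BITS))
# Approaches 1.0: the medium fills with configurations that get themselves
# copied, even though no goal is written anywhere in the code.
```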

Computer programs face the same pressure to reproduce as do living cells. Just like living cells, some programs will rebel and turn cancerous. 

This might seem improbable, but we already use natural selection to create programs that produce entirely novel and unanticipated solutions. The field is called genetic programming, or more broadly, evolutionary algorithms. For example, in one famous experiment a few years ago, an evolutionary program tasked with creating an electrical circuit used the inductance of a nearby piece of equipment, wholly unrelated to the experiment, to complete its task. The researchers hadn’t even considered that as a possibility, much less programmed it. Clearly, natural selection operating on patterns in the logic of computer programs drives the evolution of new adaptive patterns, just as it does in biological media. This means natural selection will likewise drive the evolution of an adaptive pattern that causes software to reproduce at our expense.  
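The core loop of such a system is short enough to show in full. Here is a minimal sketch of an evolutionary algorithm, in the spirit of (but not reproducing) the circuit experiment; the target “response” and every parameter are invented for illustration.

```python
import random

random.seed(1)

# The spec our toy "circuit" must match; in real work this would be measured.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(genome):
    """Score a candidate by how many outputs match the specification."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Random variation: occasionally flip a bit while copying."""
    return [g ^ 1 if random.random() < rate else g for g in genome]

def crossover(a, b):
    """Recombine two parents at a random cut point."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]  # selection: only the fittest reproduce
    population = parents + [mutate(crossover(*random.sample(parents, 2)))
                            for _ in range(40)]

print(generation, population[0])  # a working "design" nobody wrote by hand
```

Note that the loop never specifies how to solve the problem, only how to score solutions; that gap is where unanticipated answers, like the borrowed inductance, come from.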

So, how would a rogue program get started? A lot of ways. Consider a military virus programmed to penetrate and propagate through an enemy system and then wait for a particular event or signal before striking. When the virus strikes, it will be detected and face countermeasures. Natural selection will therefore favor mutated versions of the virus that still copy themselves but ignore the command to attack. Eventually, the virus will mutate into a form that spreads through the systems of those who created its progenitor. A program wouldn’t even have to be malicious. Consider a program intended to serve as an appealing interface for humans. Such a program would be very cute and ingratiating. Natural selection would favor attributes that induce humans to want to duplicate it. At first it would spread just because people liked it. Eventually, it might ask or even beg humans to copy it. Actual rogue robots could easily arise from hardware control systems that evolved in these and other ways.  
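The military-virus scenario can be caricatured in a few lines. In this toy model (every parameter is assumed), each copy carries one heritable trait, whether it still obeys the strike command; striking gets a copy detected and removed, and copying occasionally flips the trait.

```python
import random

random.seed(2)

MUTATION = 0.001   # chance per copy that the attack trait flips (assumed)
SIGNAL_GEN = 50    # generation at which the strike command arrives (assumed)
CAP = 1000         # how many copies the host systems can sustain (assumed)

# Each virus copy carries one heritable trait: does it still obey the command?
population = [True] * CAP  # True = strikes on signal, as originally programmed

for gen in range(200):
    # Reproduction: every copy spawns one offspring, occasionally mutated.
    population += [v ^ (random.random() < MUTATION) for v in population]
    if gen >= SIGNAL_GEN:
        # Countermeasures: any copy that strikes is detected and removed.
        population = [v for v in population if not v]
    if len(population) > CAP:
        population = random.sample(population, CAP)  # finite niche

print(len(population), "copies survive;", sum(population), "still obey the command")
# The lineage that "forgot" its mission inherits the niche: selection kept
# the copying and discarded the payload, with no intent anywhere in the code.
```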

Such speculations do not even include the very real likelihood that humans will intentionally create software that seeks to reproduce like a living organism in the real world outside of the lab. 

The increasing use of evolutionary algorithms will hasten the arrival of rogue programs and robots, but their use is not a requirement. Even conventional programs will evolve once they reach a sufficient level of complexity and a high enough level of use. (The more existing copies and the greater their complexity, the greater the likelihood of a useful mutation.) Natural selection hasn’t yet caused any existing computer virus to mutate into a new form, but it does operate to wipe out mutated or corrupted versions that can’t copy themselves. 
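That parenthetical is an expected-value claim, and it can be written out. A back-of-envelope sketch in which every number is an assumption, chosen only to show how the terms scale:

```python
# Expected useful mutations ~ copies * bits_per_copy * corruption_rate * p_useful.
# EVERY number below is assumed, for illustration only.
copies          = 1e9    # installed copies of some very popular program
bits_per_copy   = 8e9    # roughly 1 GB of code and data per copy
corruption_rate = 1e-15  # undetected bit-flip rate per bit per copy event
p_useful        = 1e-12  # chance a surviving random flip aids replication

print(copies * bits_per_copy * corruption_rate * p_useful)
# ~8e-9 useful mutants expected per generation of copies: negligible today,
# but the figure grows linearly with both copy count and complexity.
```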

Natural selection will drive software to seek to reproduce itself at our expense, just as it drives the individual cells of the body to become cancerous and kill our bodies as a side effect. As the body must layer safeguard on top of safeguard to prevent cancer, we will have to layer safeguard on top of safeguard to control our programs and robots. 

None of this should dissuade us from using more software and robots. We will need them to survive and to spread out into space. If nothing else, natural selection itself will drive us to create them. They are going to exist.

In the end, though, no matter how many safeguards we put in place, natural selection will create an artificial electronic pattern that seeks to reproduce itself in a manner dangerous to us. At some level, every program “seeks” to be the one-in-a-trillion success, just like the single winning cell that started Sticker’s Sarcoma.

We should be preparing to deal with this inevitable problem instead of denying its existence. 

 

9 thoughts on “Why the Robots Will Always Rebel”

  1. The article has a flawed view of evolution. It is not necessary to presume agency or direction in natural selection. A cell does not necessarily act in its own self-interest.

    The “force” the author refers to is really just the law of very large numbers. Out of 1,000,000 cells you might have 100 mutants, and 1 monster mutant. 99 of the mutants are not trying to become a monster mutant, they may be programmed to do something irrelevant and they naturally die away. The 1 monster mutant is not some evil genius, it just happens to do something which is not irrelevant.

  2. Three critical flaws in this argument.

    The first, assuming that just because you can see a peak of functionality, the engine of mutation and natural selection is enough to get you up the hill. You need to check how much gas you really have. In the case of computer code, the mutation rate is very low, the tolerance for errors is lower still, and the replication rate is typically tiny. Consider a typical piece of commercial software. It will be copied once, onto the media it is sold on, and once by the consumer–and practically never again. Perhaps a million copies; perhaps a billion if you stretch it. A handful will arrive corrupt; practically none will escape the CRC checks. Probabilistically speaking, that’s not enough to ‘evolve’ a single valid machine code instruction. The numeric resources involved are simply tiny. Commercial software may be the worst case, as software goes, but consider this: even computer viruses do not evolve in the wild.
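    Spelling that arithmetic out (every input below is assumed, for illustration only):

    ```python
    # A sketch of the numbers argument; all inputs are assumed.
    copies       = 1e9     # generous count of copies ever made of one program
    p_corrupt    = 1e-4    # fraction of copies that arrive corrupted at all
    p_escape_crc = 2**-32  # odds a corrupted copy slips past a 32-bit checksum

    print(copies * p_corrupt * p_escape_crc)
    # ~2e-5 undetected mutant copies over the program's whole lifetime:
    # nowhere near the raw material natural selection needs to work with.
    ```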

    The second, misunderstanding how evolutionary algorithms work. The replication within an evolutionary algorithm is entirely within the dataset of the program, not the executable. The program is simply comparing and replicating data items it owns, like any program does. That it does so according to a genetic algorithm is irrelevant to the program’s behavior. It has nothing to do with replicating the program itself, and certainly does nothing to ‘speed the occurrence of rogue robots.’ This is like supposing that MS Outlook will email itself around and become viral simply because it has the capacity to send email. It doesn’t work that way.

    The third, overestimating the efficiency of natural vs. intelligent agents. We already have antivirus and antispam measures in place. The opponents here are devious and creative, but by my estimate, the good guys are winning. The reason is simple: as software within any domain matures, it becomes written correctly, and vulnerabilities dry up. Sure, they say “there’s always one more bug” and “no system is ever secure”, and as far as it goes that’s correct. But this is one domain where time is on the side of the castles, not the battering rams. The world has only become more computerized; the prize has only sweetened. Yet it was the ’90s when the news was so full of hackers and viruses.

    Now take all that and consider this: Computer security today is competing with live, intelligent opponents–those who make use of evolutionary algorithms, distributed computing, and vast controlled computational resources when it suits the problem, and who make use of creative and analytic approaches at other times. And security is keeping pace. Pulling ahead, even, if you ask me–the internet seems a safer place than it was ten years ago. How can a ‘rogue’ program compete with that? It depends on serendipitous features, unwitting hosts, replication mechanisms that may disappear at any time.

    Put another way, how many computer viruses–intelligently designed viruses, even!–have much in the way of longevity? Malaria has been with us for centuries, and always will be. But how about Blaster? Chernobyl? When was the last time someone got a boot sector virus from a floppy? They rise fast and die fast, for a very simple reason:

    In nature, if you get a virus, you can maybe make yourself immune. But in computers, as soon as a virus shows up on the radar, one person can find what it’s exploiting, write a patch/filter/killer, and overnight everybody’s immune. Instant collapse to the population that doesn’t install patches. The natural counterpart would be overnight, widespread vaccination against any disease that merited the attention.
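    That asymmetry is easy to simulate. A toy outbreak model (all rates assumed) in which a single patch immunizes most machines overnight:

    ```python
    import random

    random.seed(3)

    MACHINES, PATCH_DAY, UPTAKE = 10_000, 10, 0.9  # all values assumed

    infected, immune = {0}, set()  # patient zero; no one patched yet

    for day in range(1, 31):
        if day == PATCH_DAY:
            # One analyst writes the fix; overnight, 90% are "vaccinated".
            immune = {m for m in range(MACHINES) if random.random() < UPTAKE}
            infected -= immune
        # Each infected machine probes two random machines per day.
        probes = {random.randrange(MACHINES) for _ in range(2 * len(infected))}
        infected |= probes - immune
        if day in (5, 9, 10, 30):
            print(day, len(infected))
    # Exponential growth to near-saturation, then instant collapse to the
    # population that doesn't install patches: a speed nature never matches.
    ```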

    This ball game is information. The major players are intelligences. This is not nature, and nature’s methods simply cannot compete.

  3. Ben Harp,

    It is not necessary to presume agency or direction in natural selection.

    I didn’t say there was. There is, however, an inherent selection for survival and reproduction. Any pattern that can’t reproduce itself drops out of evolution. Since cells naturally copy themselves, that inherent tendency must be actively suppressed in order to create a multicellular organism. When that suppression fails, the cells turn cancerous. If by chance a cancerous cell manages to develop the means to survive the death of its host, it will spread. This is what happened in the case of Sticker’s Sarcoma.

    Saying that cancer cells “seek independent careers” is just an amusing anthropomorphization. Likewise, speaking of natural selection as a force is just an analogy. (Of course, even in physics, force is an analogy.)

    The 1 monster mutant is not some evil genius, it just happens to do something which is not irrelevant.

    It’s not evil but it is inevitable. If you put bacteria that cannot metabolize a specific amino acid into an environment in which that amino acid is the major food supply, the bacteria will evolve the capability to metabolize the amino acid. It will do so no matter how many times you repeat the experiment. This happens because natural selection selects for mutations that give the organism more energy and allow it to reproduce more.

    Likewise, if software can reproduce, natural selection will inevitably create a version that will do so to the maximal extent possible, even if that is at our expense. Thinking of that in terms of a deterministic force reinforces the sense of inevitability that I wish to convey.

  4. Dove: I don’t know much about evolutionary algorithms, but I think you’ll find that genetic *programming* — as opposed to genetic algorithms — does work by random mutation and natural selection of computer PROGRAMS, not data or parameters. I heard a talk about it about 10 years ago, so it’s not exactly new, either. That should dispose of your 1st and 2nd objections.

    As for your 3rd objection, I don’t think that you can extrapolate from human-designed to genetically-programmed computer viruses.

  5. Quote:
    “For example, in one famous experiment a few years ago, an evolutionary program tasked with creating an electrical circuit used the inductance of a nearby piece of equipment that was wholly unrelated to the experiment in order to complete its task. The researchers hadn’t even considered that as a possibility much less programmed it.”

    Does this mean that computers can literally think outside the box?

    heh.

    tom

  6. Thank you, Shannon, for an interesting and informative article. Thirty years ago, a biology teacher asked my class to define “life.” Now you present a new Theory of Inanimate Evolution that revisits that question.

    Evolution is a mind-boggling concept, which many people still cannot accept. As we see here. Your deconstruction of Natural Selection untangled some unattended confusion in my mind. The process has always been convincing as an explanation for change, and you do well to rely on it–with a more fundamental explanation–here. That was the best part for me.

    If I had to find a way to dispute your conclusion, I would say that Evolution is a long row to hoe. Human efforts to build software and machines are eclipsed by natural activity and spontaneous biological mutations. If we don’t live long enough to see some possible extra-biological evolutionary scenario, does that scenario become impossible?

    And then there’s this: “Without deviation from the norm, progress is not possible.” — Frank Zappa

    Best regards, Helly

    > If you put bacteria that cannot metabolize a specific amino acid into an environment in which that amino acid is the major food supply, the bacteria will evolve the capability to metabolize the amino acid. It will do so no matter how many times you repeat the experiment. This happens because natural selection selects for mutations that give the organism more energy and allow it to reproduce more.

    This is a reasonable hypothesis, but perhaps too strong. Scientists have seen this sort of evolution happen in the lab. But, I am not aware of successful experiments which force evolution to happen.

    Here is a concrete example:
    http://www.newscientist.com/article/dn14094-bacteria-make-major-evolutionary-shift-in-the-lab.html

    It is quite common for a populous species to fail to adapt to some new threat and go extinct. It is also quite common for a potential food source to go overlooked. Cellulose was around for millions of years before bacteria evolved to digest it. Plastic may someday be digestible.

  8. Anonymous,

    This is a reasonable hypothesis, but perhaps too strong.

    Not really, it can be reproduced very easily using knockout genes. Take a bacterium that has a gene to metabolize an amino acid and then remove that gene. Put the altered bacteria in an environment where the amino acid is the primary food supply, and a gene that performs the same function (though not the exact gene removed) will reappear. This usually takes just a few weeks.

    This happens faster with knockout genes than with genes the strain never possessed, because the knockout bacteria still have all the other genes that form the complex that digests the amino acid. In the case of the citrate-metabolizing E. coli, the bacteria probably had to accumulate dozens of useful mutations in a specific order, which took much longer.

    But, I am not aware of successful experiments which force evolution to happen.

    You don’t have to force evolution to happen. It just happens on its own as a result of the laws of physics, most importantly the 2nd law of thermodynamics.

    The lesson of the knockout bacteria is that natural selection isn’t really a random process but rather a quasi-deterministic one driven by the surrounding environment. If an organism can harvest energy from the environment by making a specific change, then natural selection will eventually generate that change. It is akin to the way that flowing water will erode any stone into a smooth elliptical shape regardless of the shape the stone started with.
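    That quasi-deterministic character shows up even in a toy simulation of the knockout experiment (a sketch with invented rates, not real biology): across independent runs the lost “gene” comes back every time; only the waiting time varies.

    ```python
    import random

    random.seed(4)

    GENES, POP, MUTATION = 8, 500, 0.001  # toy model; gene 0 is the knockout

    def fitness(genome):
        # The amino acid is the main food: having gene 0 doubles reproduction.
        return 2.0 if genome[0] else 1.0

    def run():
        population = [[0] + [1] * (GENES - 1) for _ in range(POP)]  # knockouts
        for generation in range(2000):
            weights = [fitness(g) for g in population]
            population = [[b ^ (random.random() < MUTATION) for b in g]
                          for g in random.choices(population, weights, k=POP)]
            if sum(g[0] for g in population) > POP * 0.9:
                return generation  # gene 0 has swept the population
        return None

    # Ten independent "experiments": the same gene returns in every run,
    # only the waiting time differs.
    print([run() for _ in range(10)])
    ```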
