Movie Review: WarGames

“I want somebody on the phone before I kill 20 million people.”

This 1983 movie is about a potential nuclear war instigated by runaway information technology–a military system inadvertently triggered by a teenage hacker. I thought it might be interesting to re-watch it in light of today’s concerns about artificial intelligence and the revived fears of nuclear war.

The film opens in an underground launch control center, where a new crew is just coming on duty…and just as they are getting themselves settled, they receive a Launch message. They quickly open the envelope which contains the authentication code…and the message is verified as a valid launch order, originating from proper authority.

To launch, both officers must turn their keys simultaneously. But one balks: unwilling to commit the ultimate violence based solely on a coded message, he wants to talk to a human being who can tell him what’s going on.   But no one outside the underground capsule can be reached by either landline or radio.

But there is no war: it is a drill–an assessment of personnel reliability. The results indicate that about 20% of the missile crews refused to launch. A proposal is made: take the men out of the loop–implement technology to launch the missiles directly from headquarters, putting control at the highest level, where it belongs. Against the advice of the relevant general, the proposal is taken to the President, and the missile crews are replaced by remote-control technology. There will be no more launches cancelled by the qualms of missile officers.

At this point, we meet the Matthew Broderick character, David Lightman. He is a highly intelligent but not very responsible high school student, whose first scene involves smarting off in class and getting in trouble. David is an early hacker, with an Imsai computer: he rescues his grades by logging on to the school’s computer system and changing them. (He does the same for his not-quite-yet girlfriend, Jennifer, played by Ally Sheedy.)

Searching for a pre-release bootleg copy of a computer game he wants to play, David happens on what looks like a game site: it has menu items for checkers, chess, tic-tac-toe, and something called Falken’s Maze.   Also, a game called Global Thermonuclear War.

To play that last game, David needs to know the password, and thinks he may be able to guess it if he can learn some personal data about the game’s creator, a researcher named Steven Falken. Library research shows Falken as a man who appeals to David very much, not only because of his scholarly attainments but also his obvious deep love of his wife and child–both of whom are reported to have been killed in an auto accident. Research also indicates that Falken himself has since died.

Using a very simple clue (the name of Falken’s son), David is able to gain entry to the system, to select which side he wants to play (the Soviet Union), and to start the game.   He launches what he thinks is a simulated attack on the United States…a very large-scale attack. He has no idea that the events of the simulation are somehow bleeding over into the live warning system, and appear at the NORAD center as an actual Soviet attack.

It gets worse. Falken turns out to be still alive, living under an alias, and he and David are able to convince the NORAD officers that what they are seeing on their screens is not real and to cancel any retaliatory strike. But the control computer at NORAD, a system known as WOPR, continues playing its game…and, with humans at the launch sites taken out of the loop, begins trying to initiate a strike on the Soviet Union with live nuclear missiles.

The above is just a basic summary of the action of the movie. There’s plenty wrong with it from a timeline and a technology viewpoint…for example, the WOPR of the movie can launch missiles by repetitively trying launch codes at high speed until it finds one that works–it’s hard to believe anyone would have designed a launch-code system so simplistically, even in 1983. But the movie works very well as cinema: the characters are interesting and the acting is good–definitely worth seeing. But how might this movie relate to the current concerns about artificial intelligence?
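
A rough back-of-the-envelope calculation shows why the brute-force scenario strains credulity. The numbers below are purely illustrative assumptions, not anything from the movie or any real system:

```python
# Illustrative only: assumed alphabet, code length, and guessing rate.
ALPHABET = 36                  # A-Z plus 0-9
CODE_LENGTH = 10               # assumed launch-code length
ATTEMPTS_PER_SEC = 1_000_000   # wildly generous for 1983 hardware

keyspace = ALPHABET ** CODE_LENGTH          # total possible codes
seconds = keyspace / ATTEMPTS_PER_SEC       # worst-case exhaustive search
years = seconds / (3600 * 24 * 365)
print(f"{keyspace:,} codes; exhaustive search takes roughly {years:,.0f} years")
```

Even under these generous assumptions the search runs for more than a century–and any sanely designed receiving system would lock out after a handful of bad attempts anyway.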

In discussing the movie, I mentioned that the NORAD staff originally thought that what they saw on their screen was real, even though it was really just a simulation. Which reminds me of a real-life event that happened to the cruise ship Royal Majesty back in 1995. The crew was navigating using GPS: the screen showed a very convincing portrayal of the ship’s position with surrounding land, water depth, obstacles, and navigational aids such as buoys and markers. But the portrayal was wrong. The GPS antenna cable had come loose, and the GPS unit had gone into Dead Reckoning mode, simply projecting the current position forward from the last known GPS fix using course and speed–an estimate bound to become increasingly inaccurate over time.
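
Dead reckoning itself is a simple projection. Here is a minimal sketch–flat-earth approximation, assumed function and parameter names, currents and wind ignored, which are exactly the errors that corrupted the ship’s position estimate:

```python
import math

def dead_reckon(lat, lon, course_deg, speed_knots, hours):
    """Project a position forward from the last known fix.

    Flat-earth approximation, adequate only for short runs: any error
    in course or speed, or any unmodeled drift, accumulates without
    bound, because no new measurement ever corrects it.
    """
    distance_nm = speed_knots * hours  # nautical miles traveled
    # One degree of latitude is about 60 nautical miles.
    d_lat = distance_nm * math.cos(math.radians(course_deg)) / 60.0
    # Longitude degrees shrink with latitude by a factor of cos(lat).
    d_lon = (distance_nm * math.sin(math.radians(course_deg))
             / (60.0 * math.cos(math.radians(lat))))
    return lat + d_lat, lon + d_lon

# Due north at 10 knots for 6 hours: one degree of latitude.
lat, lon = dead_reckon(40.0, -70.0, 0.0, 10.0, 6.0)
```

The display can render such an estimate just as crisply as a live fix, and that is the trap: nothing on the screen distinguishes a projection from a measurement.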

Asaf Degani, in his book Taming Hal, describes the scene:

As the gray sky turned into a black veil, the phosphorus-lit radar map with its neat lines and digital indication seemed clearer and more inviting than the dark world outside. As part of a sophisticated integrated bridge system, the radar map had everything–from a crisp radar picture, to ship position, buoy renderings, and up to the last bit of data anyone could want–until it seemed that the entire world lived and moved transparently, inside that little green screen. Using this compelling display, the second officer was piloting a phantom ship on an electronic lie, and nobody called the bluff.


The bluff was finally called by reality itself, at 10 PM, when the ship jerked to the left with a grinding noise. The captain ran to the radar map, extended the radar range setting to 24 miles, and saw an unmistakable crescent-shaped mass of land: Nantucket Island. “Hard right,” he ordered.   But it was too late. The ship hit the Rose and Crown Shoal, hard, and could not be backed off.

No one was killed or injured in this incident. That was not the case, unfortunately, for the Washington Metrorail accident that I described in my post Blood on the Tracks. It was apparently recognized by at least some of the train operators that the automatic system did not always perform properly or safely in adverse weather conditions–but the decision to allow switching to manual operation was reserved to the central controllers, and operators’ requests to do this were often met with the response “let the train do what it’s supposed to do.” Indeed, the controller in this accident (which was fatal to the operator) evidently felt that allowing a switch to manual would be putting his job on the line.

There has been much concern lately about AI systems going rogue and pursuing their own interests (whatever these might be) in ways destructive to humans. I think a far more serious danger, though, is that humans take actions based on the results of AI systems without proper review–which may indeed be difficult or infeasible because of time pressures, complexity, or bureaucratic factors–leading to disasters. The Washington Metrorail system certainly cannot be called AI–it was a simple algorithmic system–but bureaucratic policies caused it to be left in control even when there was reason to believe that there was something wrong. The Royal Majesty grounding could have been avoided with simple cross-checking against LORAN or celestial data…but experience had led the navigating officers to be totally comfortable with the GPS display they were seeing.

A case where human override did take place, very fortunately, occurred in 1983, when Soviet officer Stanislav Petrov saw indications of an American missile launch targeting his country. This was quickly followed by reports of other launches–five in all.   Protocol was for him to pick up the phone and report what he saw to his superiors.   Only 23 minutes remained until the first US missiles were projected to strike.   But the reported American attack made no sense to Petrov.   He didn’t understand why the Americans would start a war with such a small attack. And checking with another source of warning data, the satellite-based system, he saw no confirming evidence of attack.   So instead of reporting an indication of attack, he reported a system malfunction.

Petrov was not supposed to be the decision-maker, of course; he was merely a source of information. The decision for a retaliatory strike would certainly have been made at the very highest level. And, in 1983, that meant Yuri Andropov.

With the advent of hypersonic missiles, the already-short warning time available in the event of a ballistic missile attack becomes even shorter. A future Stanislav Petrov…Russian, Chinese, American, or other…might not even be in the loop at all, the warning data being presented directly to top political authority without benefit of human filtering. But might even this timeline be considered too short? When faced with a hypersonic threat, would some nation seriously consider implementing a system that would launch nuclear weapons with no human involvement at all?

Such dangers would also exist, clearly, if all that were available were pre-AI warning systems–as demonstrated by the examples cited. Indeed, a good AI system might well reduce the likelihood of both false-positive and false-negative results. But an AI system might also have higher credibility, owing to its very ‘intelligence’, and it might also be more difficult to interrogate as to why it thinks an attack has happened–assuming time for such interrogation was available.

On the version of WarGames that I watched (Netflix DVD), there is a collection of conversations with people involved in making the movie, which I found very interesting. The story evolved quite a lot from the original concept, which was about a Stephen-Hawking-like scientist and the rebellious kid who is the only person who understands him. An introduction to the hacker culture, and the idea that even secure military computers might have remote access enabled (as stated by a Rand Corporation expert), helped point the film in its ultimate direction. The filmmakers met with the then-commander of NORAD, with whom they seem to have been more impressed than they thought they would be. They also met with a then-prominent member of the hacker community and modeled the David Lightman character after him. The display system that was created to simulate the Cheyenne Mountain facility was apparently more visually impressive than the actual system at the real Cheyenne Mountain.

13 thoughts on “Movie Review: WarGames”

  1. What I found funny about the movie is that the super-dooper war fighting computer ended up crashing when it was playing tic-tac-toe. Yeesh. My guess is the sequel would have covered Congress investigating how Dr. Falken squandered so much money — enough to buy a private island — writing such a lousy program.

  2. After watching WarGames in the theater I walked by the large storefront window of the video game arcade next door and with a smirk I eyed the Missile Command game inside and said to myself, “Nah.”

  3. “a military system inadvertently triggered by a teenage hacker”

    Actually, when this happens, it will be triggered by thousands of cats walking on keyboards all at the same time: Russian Blues, American Shorthairs, Persian, Manx, etc. – and that universal secret weapon, the adopted scrub.

  4. Thanks for mentioning Stanislav Petrov.

    Our modern world is still built on the premise, decaying as it is, that all knowledge and Being itself can be discerned through human reason. This is the great foundation for the Enlightenment and the scientific revolution. By contrast postmodernism states that whether Being exists or not is irrelevant because Man does not have the ability to discover it.

    This premise applies to Petrov in a number of ways. The first pertains to what Petrov sees through his computer screen; while the knowledge of the missile launch is mediated through sensors and computer systems, Petrov is instructed to act as if he is viewing the launch himself. This lays the basis for all sensory devices, and it is not that unusual given that our brain depends on our personal sensory devices – eyes, ears, noses – to obtain information. The second way it pertains to Petrov is that he is ordered to use a reporting procedure that is also based on human reason and uses the premise that if you see a launch through the computer, you report it as if you have personally viewed it, and it will be received and applied as such.

    Kant, Kierkegaard, and others in the Counter-Enlightenment pointed to flaws in the exclusive reliance on human reason to discern Being and pointed to other needed tools such as faith and intuition, after all Reason can only discover what it is designed to discover, a flaw with Science as well. For Petrov, he understood intuitively that what he was seeing through his personal senses and computer screens, what Reason dictated as per procedure and the construction of various technical sensors, was incorrect. It was a classic reliability/validity problem, everything he was seeing was reliably saying A, but his intuition said that was invalid; the map is not necessarily the terrain.

    War Games, Ender’s Game, and The Matrix all deal with the same dilemma: how do you know that what you are seeing really exists? This isn’t a matter of technology, but really of epistemology; after all, I can deal with a lunatic or Woke grad student or an editor at the WPost and they all will tell me that something exists which clearly does not; why have we come to such different conclusions?

    This is very much a political issue. As more power is subsumed by the Administrative State, bureaucrats, and the like they will organize society based on the principle that they can adequately sense and interpret Being through Reason, much like those military personnel in the NORAD or Soviet bunker think they can discern reality.

  5. There was a false alarm event in the US in 1960, shortly after the Ballistic Missile Early Warning System had been deployed. The radar was picking up echoes from the moon, which possibility nobody had considered in the system design. The general commanding NORAD noted that the report indicating 1000 Soviet missiles made no sense, he knew that the Soviet Union only had something like FOUR intercontinental missiles. Then someone pointed out that Khrushchev was in NYC.

  6. As I recall the tale, President R Reagan watched this movie and actually asked the military “could this really happen,” and the military had to confess they had very little in the way of computer hacker prevention, and a new program was born: an entire division within No Such Agency devoted to computer warfare and hacker prevention. So we have this movie and Reagan to thank for that.

  7. One good line in the movie is when David asks WOPR ‘is this real or is it a game?’ and it responds ‘What is the difference?’

    Most famously, David and/or Falken hit on the idea of persuading WOPR that nuclear war is unwinnable by demonstrating the existence of such games, using Tic-Tac-Toe as an example, resulting in the conclusion “A strange game. The only way to win is not to play.”

    But the irony of the thermonuclear age is that you HAVE to play at the level of procurement and deployment, or be at the mercy of those that do…yet at the same time, if you ever play at the operational level, the only question is how big the disaster will be.

  8. Ally Sheedy.

    I’m old enough to remember Colossus: The Forbin Project . . . just not very much of it.

  9. The AF launched an Atlas missile into a thunderstorm when the Weather Officer at his console inside the Range Control building said everything was GO. Half the base was bracing for a huge thunderstorm that they could see coming with their own eyes.

    There was an investigation. The result was they added more instrumentation and a window to the Range Control Center so the Range Weather Officer could see what was going on outside.

    By the way the Weather Officer was named Captain Strange.

  10. Many years ago I was working a temp job at one of the local network TV stations. I was in the lobby working on something just coming up on 5 o’clock, when the station weatherman walked through the lobby and out the front door. Stood there, looking around. Came back in, walked past me, and said in passing “I’d really hate to say something stupid on camera”.
    Wise words.

  11. Sometime back, around 1980, a team in Cheyenne Mountain was conducting a training scenario involving a Soviet missile attack. The training was normally conducted on a system separate from the real one in the watch center. Unfortunately, someone mispositioned a switch and the training scenario was transmitted to the watch center screens. The watch crew had a cow, to say the least. They eventually figured out the indications were bogus. Then a few months later, it happened again. This time, SAC wasn’t taking any chances and postured the bomber crews.

    There’s a book out there on this.

  12. My great fear concerning AI is not that it will become sentient and rule over us or try to eliminate us. No, the idea of computer sentience is a progressive wet dream.

    The real concern is how many people there are out there who will willingly hand everything over to any authority demonstrated to be “expert enough.” It’s the slavish devotion to sensors and algorithms instead of looking out the window. The idea that you can remove the human from the loop and all you have removed is the possibility of fault/failure.

    Humans will – to some horribly large degree – enslave themselves to a technology sufficiently advanced to seem smarter than them. And that will destroy us.

  13. I re-watched WarGames several years ago, and was mostly struck by the huge difference in how 14-yo HS students might spend their time, and the extreme low-intensity of parenting, compared to today.

    But more than anything, Ally Sheedy burned through the screen. It was never explicit, but the moments she went from little kid giggling fun buddy, to, “You’re a boy and I’m a girl and we’re alone in your BEDROOM” were intense and, I think, accidental.

    Still, if any part of the movie completely grounded it in the personal emotional world of early adolescence, it’s right there. Good reasons that girl is a Movie Star.

Comments are closed.