Mapping our interdependencies and vulnerabilities [with a glance at Y2K]

[ cross-posted from Zenpundit — mapping, silos, Y2K, 9/11, rumors, wars, Boeing 747s, Diebold voting machines, vulnerabilities, dependencies ]

The “bug” of Y2K never quite measured up to the 1919 influenza bug in terms of devastating effect — but as TPM Barnett wrote in The Pentagon’s New Map:

Whether Y2K turned out to be nothing or a complete disaster was less important, research-wise, than the thinking we pursued as we tried to imagine – in advance – what a terrible shock to the system would do to the United States and the world in this day and age.


My own personal preoccupations during the run-up to Y2K had to do with cults, militias and terrorists — any one of which might have tried for a spectacle.

As it turned out, though, Al Qaida’s plan to set off a bomb at Los Angeles International Airport on New Year’s Eve, 1999 was foiled when Ahmed Ressam was arrested attempting to enter the US from Canada — so that aspect of what might have happened during the roll-over was essentially postponed until September 11, 2001. And the leaders of the Ugandan Movement for the Restoration of the Ten Commandments of God, acting on visionary instructions (allegedly) from the Virgin Mary, announced that the end of the world had been postponed from Dec 31 / Jan 1 till March 17 — at which point they burned 500 of their members to death in their locked church. So that apocalyptic possibility, too, was temporarily averted.


Don Beck of the National Values Center / The Spiral Dynamics Group, commented to me at one point in the run-up:

Y2K is like a lightning bolt: when it strikes and lights up the sky, we will see the contours of our social systems.

— and that quote from Beck, along with Barnett’s observation, pointed strongly to the fact that we don’t have anything remotely resembling a decent global map of interdependencies and vulnerabilities.

What we have instead is a PERT chart for this or that, Markov diagrams, social network maps, railroad maps and timetables… oodles and oodles of smaller pieces of the puzzle of past, present and future… each with its own symbol system and limited scope. Our mapping, in other words, is territorialized, siloed, and disconnected, while the world system which is integral to our being and survival is connected, indeed, seamlessly interwoven.

I’ve suggested before now that our mapping needs to pass across the Cartesian divide from the objective to the subjective, from materiel to morale, from the quantitative to the qualitative, and from rumors to wars. It also needs a uniform language or translation service, so that Jay Forrester system dynamic models can “talk” with PERT and Markov and the rest, Bucky Fuller‘s World Game included.

I suppose some of all this is ongoing, somewhere behind impenetrable curtains, but I wonder how much.


In the meantime, and working from open source materials, the only kind to which I have access — here are two data points we might have noted a little earlier, if we had decent interdependency and vulnerability mapping:


Fear-mongering — or significant alerts? I’m not tech savvy enough to know.


Tom Barnett’s point about “the thinking we pursued as we tried to imagine – in advance – what a terrible shock to the system would do to the United States and the world in this day and age” still stands.

Y2K was what first alerted me to the significance of SCADAs.

Something very like what Y2K might have been seems to be unfolding — but slowly, slowly.

Are we thinking yet?

7 thoughts on “Mapping our interdependencies and vulnerabilities [with a glance at Y2K]”

  1. I’m not sure about the 747, but I do remember that the Dreamliner (787) faced some certification issues because the FAA was concerned about possible interaction between the passenger/entertainment network and the various flight control networks. Presumably these have been resolved since the airplane has been certified.

  2. The bit about 747s isn’t fear-mongering. Separating passenger-accessible systems from critical functions (such as engine management) only at layer 2 is utterly insane.

    It’s the equivalent of building a castle with a moat and a drawbridge, then having a second permanent bridge connected to a gaping hole in the wall with a piece of tape stretched across it that says “no entry!”.

    Any competent engineer would know that the two systems should be physically separate. What they have done is made a single system that encompasses the passenger entertainment system and the engine management system (among others), then used *routing protocols* to prevent unauthorised access from one to the other.

    Said routing protocols can’t be guaranteed to be perfectly secure and I’m not surprised to hear that they aren’t. That’s why you don’t rely on them for critical security purposes.

    I can’t understand what they could have been thinking.
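
    The single-network-plus-filtering design can be caricatured in a few lines of Python (all names here are invented for illustration, not the actual avionics design): when isolation is a filtering rule rather than a physical gap, the barrier is just mutable data in one shared system.

    ```python
    # Caricature of software-only isolation: default-deny access rules
    # standing in for physical separation. Network and system names are
    # hypothetical.
    ACL = {
        ("passenger_net", "entertainment"): True,
        ("passenger_net", "engine_mgmt"): False,  # one flag is the whole "wall"
    }

    def route_allowed(src: str, dst: str) -> bool:
        # Default-deny filtering rule: correct today, but a single
        # misconfiguration or bug flips it -- unlike an air gap.
        return ACL.get((src, dst), False)
    ```

    One misconfigured entry, one bug in the filtering code, and the "moat" is bridged — which is exactly the objection to relying on routing protocols here.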

  3. hahaha that’s a very funny chart!

    I remember everyone in my dad’s company freaking out, lots of backups being made on all the computers, lots of preparations, and in the end nothing happened.

    I wonder if those voting machines weren’t hacked already in past elections, but to say that for ten bucks you could hack them, that’s scary.

  4. Regarding the voting machine tampering, it certainly is very simple and is well within my capabilities and that of many people I know.

    All they’ve done is put a microcontroller on a small PCB with a simple power supply and then connected it inline with a 10 pin header. Presumably the microcontroller simply passes the serial data from the touchscreen through to the main board and has the option of altering it, blocking it or injecting its own serial data. Presumably it also gets power from that header. Most likely the communication is via a standard SPI or I2C protocol, making it easy to intercept and alter.

    I could design a board to do this which is maybe 0.5″ x 0.5″ or possibly smaller – something that could be hidden away in the fold of a cable that would be hard to detect without careful inspection. Many Chinese companies have the technology to embed an IC and indeed other components within a cable and even have it be flexible. So then it would just be a matter of swapping the regular cable with the “smart” one, which may be visually indistinguishable!

    Scary stuff.

  5. Hi Anita:

    On the Y2K “nothing happened” front, pages 9-10 of the GAO “lessons learned” report itemize some of the things that did in fact happen — in US government alone — but they were neither huge nor cascading — and may in some cases have been a tad embarrassing, thus quietly overlooked. A great many of the worries back then were probably needless, a great many would only have been problematic if they had cascaded — and a huge sum went into remediation ($3 bn in USG funds alone), *some* of which undoubtedly headed off *some* issues at the pass.

    As I say, my own neck of the woods (synchronic apocalyptic violence) had two close escapes — the cult self-immolation in Uganda was postponed, and the arrest of Ressam meant that the first AQ terror assault to succeed on US “homeland” soil took place on September 11, 2001, not on December 31, 1999 as originally planned…

  6. David, Nicholas:

    My concern is that both vote-hacking and 747-hacking are data-points from a much larger and harder to track vulnerability space, and we haven’t been thinking about the extent and diversity of such potential failure points, and probably couldn’t even describe the general topography of the space with much insight…

  7. Charles,

    It’s hard to say just how widespread this sort of problem is. Any large project should have enough sensible engineers on it that obvious security flaws like this get sorted out before widespread adoption.

    Obviously these are two cases where that didn’t happen, and I think it must come down to corporate culture and management. Perhaps some savvy engineers questioned whether these things should be done this way, but they were overruled by non-technical management types who were more concerned with making themselves look good by getting the product out on time.

    It takes a pretty caustic working environment to get to the point where sensible objections by knowledgeable engineers get drowned out for the sake of expediency or profit, but it happens. The bigger the company, the more likely it is.

    I don’t know what the answer is. Mandatory security evaluations for critical equipment like aviation hardware and software and voting machines?

Comments are closed.