"Restore(s) a little sanity into current political debate" - Kenneth Minogue, TLS "Projects a more expansive and optimistic future for Americans than (the analysis of) Huntington" - James R. Kurth, National Interest "One of (the) most important books I have read in recent years" - Lexington Green
Chicago Boyz is a member of the Amazon Associates, B&H Photo, Newsmax and other affiliate programs. Your Amazon and B&H purchases made after clicking those businesses' links, and your clicks on Newsmax links, help to support this blog.
Some Chicago Boyz advertisers may themselves be members of the Amazon Associates and/or other affiliate programs and benefit from any relevant purchases you make after you click on an Amazon or other link on their ad on Chicago Boyz or on their own web sites.
Chicago Boyz occasionally accepts direct paid advertising for goods or services that in the opinion of Chicago Boyz management would benefit the readers of this blog. Please direct any inquiries to
Chicago Boyz is a registered trademark of Chicago Boyz Media, LLC. All original content on the Chicago Boyz web site is copyright 2001-2017 by Chicago Boyz Media, LLC or the Chicago Boyz contributor who posted it. All rights reserved.
Cold and misty morning, I heard a warning borne in the air
About an age of power where no one had an hour to spare …
– Emerson, Lake & Palmer, “Karn Evil 9, 1st Impression, Part 1”
Imagine that you just stepped out of a time machine into the mid-1930s with a case of partial historical amnesia. From your reading of history, you can still remember that the nation has been beset with economic difficulties for several years that will continue for several more. You also clearly remember that this is followed by participation in a global war, but you cannot recall just when it starts or who it’s with. A few days of newspapers and radio broadcasts, however, apprise you of obvious precursors to that conflict and various candidates for both allies and enemies.
As mentioned several times in this forum, I adhere to a historical model, consisting either of a four-part cycle of generational temperaments (Strauss and Howe), or a related but simpler system dynamic/generational flow (Xenakis). That model posits the above scenario as a description of our current situation and a prediction of its near future: a tremendous national trial, currently consisting mostly of failing domestic institutions, is underway. It will somehow transform into a geopolitical military phase and reach a crescendo early in the next decade. It cannot be avoided, only confronted.
Nor will it be a low-intensity conflict like the so-called “wars” of recent decades, which have had US casualty counts comparable to those of ordinary garrison duty a generation ago. Xenakis has coined the descriptive, and thoroughly alarming, term genocidal crisis war for these events. Some earlier instances in American history have killed >1% of the entire population and much larger portions of easily identifiable subsets of it. Any early-21st-century event of this type is overwhelmingly likely to kill millions of people in this country, many if not most of them noncombatants. And besides its stupendous quantitative aspect, the psychological effect will be such that the survivors (including young children) remain dedicated, for the rest of their lives, to preventing such a thing from ever happening again.
I will nonetheless argue that no matter how firmly convinced we may be that an utterly desperate struggle, with plenty of attendant disasters, is inevitable and imminent, we must avoid both individual panic and collective overreaction.
Many thanks to the commenters on my review. I won’t be agreeing with all of you, but I value your input for increasing my understanding of what others think. I have some related ideas on how to think about the issues raised specifically by Lightning Fall and generally by “preppers” and, indeed, anyone anticipating a societally disruptive crisis in the near future.
NB: this is an essay in the original sense of “attempt.” It is unlikely to fully represent my thinking on these issues even a few years hence; and whether you agree with me or not, I encourage you to think these things through based on your own abilities and experience, and then act as your specific situation appears to require. Hayekian distributed local knowledge may save us. Central planning, as I hardly need admonish this audience, will not, and therefore any attempt by me to impose a uniform mental framework should (and undoubtedly will) be firmly rejected.
Real-life performance data shows that the most important and high-impact technologies are not the gold-plated, over-engineered wonder weapons that turn majors into colonels, colonels into generals, and young Jedi apprentices into Sith Lords. Instead, data suggest the real winners are humble, simple, low-cost products made by small, rapid innovation teams — the type of projects that don’t attract much attention from the press or from the brass because all they do is get the mission done without any fuss.
While this will not be a uniformly positive review, I must immediately note that the purely literary quality of Bill Quick’s Lightning Fall (subtitled either “A Novel of Destruction” or “A Novel of Disaster,” depending on whether one is looking at the spine or the cover of the paperback edition) ranks it alongside Pat Frank’s Alas, Babylon and comes within metaphorical striking distance of Larry Niven and Jerry Pournelle’s Lucifer’s Hammer. It is a classic page-turner and a serious threat to a good night’s sleep; I began reading it after awakening shortly before 3:00 AM one morning, expecting to drift off in a few minutes, and eventually noticed that I was somewhere around page 250 and the time was after 6:00 AM. This sort of thing has not happened to me more than a handful of times in a half-century of reading, and I read a lot.
Other reviews have included – well, not exactly spoilers, but more specifics about the events in the novel than I intend to provide here. I will mention three things that I think it useful for prospective readers to know, and then use the general thrust of the novel as a springboard for extended commentary of my own.
Posted by Michael Kennedy on 2nd January 2014
David has a good idea. I often read the archives of my personal blog to see how I did in forecasting the future or understanding the present. A major concern of mine is, of course, health care and what is happening to it. When I retired from surgery after my own back surgery, I spent a year at Dartmouth Medical School’s center for the study of health care. My purpose was to indulge an old hobby: how do we measure quality in health care? I had served for years on the board of a company called California Medical Review, Inc., the official Medicare review organization for California. For a while I was the chair of the Data Committee. It seems to have gone downhill since I was there. First, it changed its name in an attempt to get more business from private sources. Then it lost the Medicare contract.
Lumetra, which lost a huge Medicare contract last November, is changing its name and its business model as it seeks to replace more than $20 million in lost revenue.
The San Francisco-based nonprofit’s revenue will shrink this year from $28 million last fiscal year, ending in March 2009, to a projected $4.5 million, CEO Linda Sawyer told the Business Times early this week.
That’s in large part because it’s no longer a Medicare quality improvement contractor, formerly its main line of work. And in fact, the 25-year-old company’s revenue has been plummeting since fiscal 2007, when it hit $47 million.
Beginning Jan. 1, 2015, the Affordable Care Act no longer will provide federal grants to fund state health exchanges. In addition, California law prohibits using the state’s general fund to pay for the exchange.
Anyway, for what it is worth, here are the links to the 2013 health posts.
What can be done is for Congress to create a new option in the form of a national health-insurance charter, under which insurers could design new low-cost policies, free of the mandated benefits imposed by ObamaCare and the 50 states, that many of those losing their individual policies today surely would find attractive.
What’s the first thing the new nationally chartered insurers would do? Rush out cheap, high-deductible policies, allaying some of the resentment that the ObamaCare mandate provokes among the young, healthy and footloose affluent.
These folks could buy the minimalist coverage that (for various reasons) makes sense for them. They wouldn’t be forced to buy excessive coverage they don’t need to subsidize the old and sick.
Who knows? Maybe Jenkins reads this blog. It’s so obvious that the solution should be apparent even to Democrats.
The revelation came out of questioning of Mr. Chao by Rep. Cory Gardner (R., Colo.). Gardner was trying to figure out how much of the IT infrastructure around the federal insurance exchange had been completed. “Well, how much do we have to build today, still? What do we need to build? 50 percent? 40 percent? 30 percent?” Chao replied, “I think it’s just an approximation—we’re probably sitting between 60 and 70 percent because we still have to build…”
Gardner replied, incredulously, “Wait, 60 or 70 percent that needs to be built, still?” Chao did not contradict Gardner, adding, “because we still have to build the payment systems to make payments to insurers in January.”
If the systems that pay the insurance companies are not yet written, how can anybody sign up?
Gardner, a fourth time: “But the entire system is 60 to 70 percent away from being complete.” Chao: “There’s the back office systems, the accounting systems, the payment systems…they still need to be done.”
Gardner asked a fifth time: “Of those 60 to 70 percent of systems that are still being built, how are they going to be tested?”
Tyler Cowen, in his recent book Average Is Over, argues that computer technology is creating a sharp economic and class distinction between people who know how to effectively use these “genius machines” (a term he uses over and over) and those who don’t, and is also increasing inequality in other ways. Isegoria recently excerpted some of Tyler’s comments on this thesis from a recent New Yorker article.
I read the book a couple of months ago, and although it’s worth reading and is occasionally thought-provoking, I think much of what Tyler has to say is wrong-headed. In the New Yorker article, for example, he says:
The first (reason why increased inequality is here to stay) is just measurement of worker value. We’re doing a lot to measure what workers are contributing to businesses, and, when you do that, very often you end up paying some people less and other people more.
The second is automation — especially in terms of smart software. Today’s workplaces are often more complicated than, say, a factory for General Motors was in 1962. They require higher skills. People who have those skills are very often doing extremely well, but a lot of people don’t have them, and that increases inequality.
And the third point is globalization. There’s a lot more unskilled labor in the world, and that creates downward pressure on unskilled labor in the United States. On the global level, inequality is down dramatically — we shouldn’t forget that. But within each country, or almost every country, inequality is up.
Taking the first point: Businesses and other organizations have been measuring “what workers are contributing” for a long, long time. Consider piecework. Sales commissions. Criteria-based bonuses for regional and division executives. All of these things are very old hat. Indeed, quite a few manufacturers have decided that it is unwise to take the quantitative measurement of performance down to an individual level, in cases where the work is being done by a closely-coupled team.
It is true that advancing computer technology makes it feasible to measure more dimensions of an individual’s work, but so what? Does the fact that I can measure (say) a call-center operator on 33 different criteria really tell me anything about what he is contributing to the business?
Anyone with real-life business experience will tell you that it is very, very difficult to create measurement and incentive plans that actually work in ways that are truly beneficial to the business. This is true in sales commission plans, it is true in manufacturing (I talked with one factory manager who said he dropped piecework because it was encouraging workers to risk injury in order to maximize their payoffs), and it is true in executive compensation. Our blogfriend Bill Waddell has frequently written about the ways in which accounting systems can distort decision-making in ultimately unprofitable ways. The design of worthwhile measurement and incentive plans has very little to do with the understanding of computer technology; it has a great deal to do with understanding of human nature and of the deep economic structure of the business.
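The factory manager’s piecework anecdote can be put into a toy model. Everything below is invented for illustration (the functions, rates, and costs are assumptions, not data from any real plan), but it shows the failure mode: pay tracks only the measured dimension of the job, and the worker’s optimum diverges from the business’s.

```python
# Toy piecework model, all numbers invented: paying per piece pushes
# the worker toward a pace that maximizes his pay, not the business's
# net value, because injury risk is not priced into the plan.

def pieces_per_day(pace):
    return 100.0 * pace                      # output grows with pace

def expected_injury_cost(pace):
    return 500.0 * pace ** 4                 # risk rises steeply with pace

def worker_pay(pace, rate_per_piece=1.0):
    # The incentive plan pays on the measured dimension only.
    return rate_per_piece * pieces_per_day(pace)

def business_value(pace, revenue_per_piece=2.0):
    # The business bears the unmeasured cost the plan ignores.
    return revenue_per_piece * pieces_per_day(pace) - expected_injury_cost(pace)

grid = [i / 100 for i in range(101)]         # candidate paces, 0.0 to 1.0
print("pace the worker chooses:", max(grid, key=worker_pay))
print("pace the business wants:", max(grid, key=business_value))
```

No number of additional measurement criteria fixes this; what matters is whether the unmeasured dimension (here, injury risk) is priced into the plan, which is a question about the business, not about computing power.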
My profession is much in the news at the moment, so I thought I would pass along such insights as I have from my career, mostly from a multibillion-dollar debacle which I and several thousand others worked on for a few years around the turn of the millennium. I will not name my employer, not that anyone with a passing familiarity with me doesn’t know who it is; nor will I name the project, although knowing the employer and the general timeframe will give you that pretty quickly too.
We spent, I believe, $4 billion, and garnered a total of 4,000 customers over the lifetime of the product, which was not aimed at large organizations which would be likely to spend millions on it, but at consumers and small businesses which would spend thousands on it, and that amount spread out over a period of several years. From an economic transparency standpoint, therefore, it would have been better to select 4,000 people at random around the country and cut them checks for $1 million apiece. Also much faster. But that wouldn’t have kept me and lots of others employed, learning whatever it is we learn from a colossally failed project.
So, a few things to keep in mind about a certain spectacularly problematic and topical IT effort:
Large numbers of reasonably bright and very hard-working people, who have up until that point been creating significant wealth, can unite in a complete flop. Past performance is no guarantee, and all that. Because even reasonably bright, hard-working people can suffer from failures of imagination, tendencies to wishful thinking, and cultural failure in general.
Morale has got to be rock-bottom for anybody with any degree of self-awareness working on this thing. My relevant moment was around the end of ’99 when it was announced, with great fanfare, at a large (200+ in attendance) meeting to review progress and next steps, that we had gotten a single order through the system. It had taken various people eight hours to finish the order. As of that date, we were projecting that we would be doing 1,600 orders a day in eight months. To get an idea of our actual peak rate, note the abovementioned cumulative figure of 4,000 over the multi-year lifespan of the project.
Root cause analysis is all very well, but there are probably at least three or four fundamental problems, any one of which would have crippled the effort. As you may infer from the previous bullet point, back-office systems was one of them on that project. Others which were equally problematic included exposure to the software upgrade schedule of an irreplaceable vendor who was not at all beholden to us to produce anything by any particular date, and physical access to certain of our competitors’ facilities, which they were legally required to allow us into exactly two (2) days per year. See also “cultural failure,” above; most of us were residing and working in what is one of the most livable cities in the world in many ways, but Silicon Valley it ain’t.
Not to overlook the obvious, there is a significant danger that the well-advertised difficulties of the website in question will become a smokescreen for the fundamental contradictions of the legislation itself. The overall program cannot work unless large numbers of people act in a counter-incentived (possibly not a word, but I’m groping for something analogous to “counterintuitive”) fashion which might politely be termed “selfless” – and do so in the near future. What we seem likely to hear, however, is that it would have worked if only certain IT architectural decisions had been better made.
This thing would be a case study for the next couple of decades if it weren’t going to be overshadowed by physically calamitous events, which I frankly expect. In another decade, Gen-X managers and Millennial line workers, inspired by Boomers, all of them much better at things than they are now, “will be in a position to guide the nation, and perhaps the world, across several painful thresholds,” to quote a relevant passage from Strauss and Howe. But getting there is going to be a matter of selection pressures, with plenty of casualties. The day will come when we long for a challenge as easy as reorganizing health care with a deadline a few weeks away.
What do modern military and corporate strategy have in common with Achilles, Sun Tzu, and primates? The answer is fluidity, flexibility, and pure unpredictability. Every day we make decisions that are built on our theory of what will give us the outcome we want. Sir Lawrence Freedman proposes that throughout history strategy has very rarely gone as planned, and that constant evaluation is necessary to achieve success—even today. Join The Chicago Council for a centuries-spanning discussion explaining how the world’s greatest minds navigate toward success.
For interested parties. Sir Lawrence Freedman has quite a few talks posted on YouTube too. Worth checking out.
What proportion of all social-media communication is by bots, spammers, people with agendas who misrepresent themselves, or severely dysfunctional people who pass as normal online? I suspect it’s a large proportion.
There’s not much hard evidence, but every once in a while something like this turns up. I’m guessing it’s the tip of an iceberg. See also this. And who can overlook the partisan trolls who show up on this and other right-of-center blogs before elections? Where do they come from?
None of this apparently widespread Internet corruption should come as a surprise. Given the low costs and lack of barriers to entry it would be surprising if attempts to game the system were less frequent than they appear to be. Nonetheless it’s prudent to keep in mind that a lot of what appears online is probably fake and certainly misleading.
Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong. So why are doctors—to a striking extent—still drawing upon misinformation in their everyday practice?
The arguments presented in this article seem like a good if somewhat long presentation of the general problem, and could be applied in many fields besides medicine. (Note that the comments on the article rapidly become an argument about global warming.) The same problems are also seen in the work of bloggers, journalists and “experts” who specialize in popular health, finance, relationship and other topics and have created entire advice industries out of appeals to the authority of often poorly designed studies. The world would be a better place if students of medicine, law and journalism were forced to study basic statistics and experimental design. Anecdote is not necessarily invalid; study results are not necessarily correct and are often wrong or misleading.
None of this is news, and good researchers understand the problems. However, not all researchers are competent, a few are dishonest and the research funding system and academic careerism unintentionally create incentives that make the problem worse.
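The statistical heart of the problem fits in a few lines. What follows is the standard Ioannidis-style back-of-the-envelope calculation; the priors, power, and alpha below are illustrative assumptions, not figures from the linked article.

```python
# Fraction of "statistically significant" findings that are actually
# true, given how plausible the tested hypotheses were to begin with.

def ppv(prior, power=0.8, alpha=0.05):
    """Positive predictive value of a significant result."""
    true_hits = prior * power            # real effects correctly detected
    false_hits = (1 - prior) * alpha     # null effects passing by chance
    return true_hits / (true_hits + false_hits)

for prior in (0.5, 0.1, 0.01):
    print(f"if {prior:.0%} of tested hypotheses are true, "
          f"{ppv(prior):.0%} of 'significant' findings are true")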
(Thanks to Madhu Dahiya for her thoughtful comments.)
I have written several posts that use Carroll Quigley’s “institutional imperative” as a lens for understanding contemporary events. Mr. Quigley suggests that all human organizations fit into one of two types: instruments and institutions. Instruments are those organizations whose role is limited to the function they were designed to perform. (Think NASA in the 1960s, defined by its mission to put a man on the moon, or the NAACP during the same timeframe, instrumental to the civil rights movement.) Institutions, in contrast, are organizations that exist for their own sake; their prime function is their own survival.
Most institutions start out as instruments, but as with NASA after the end of the Cold War or the NAACP after the victories of the civil rights movement, their instrumental uses are eventually eclipsed. They are then left adrift, in search of a mission that will give new direction to their efforts, or as happens more often, these organizations begin to shift their purpose away from what they do and towards what they are. Organizations often betray their nature when called to defend themselves from outside scrutiny: ‘instruments’ tend to emphasize what their employees or volunteers aim to accomplish; ‘institutions’ tend to emphasize the importance of the heritage they embody or even the number of employees they have.
Mr. Quigley’s institutional imperative has profound implications for any democratic society – especially one host to as many publicly funded organizations as ours. Jonathan Rauch’s essay “Demosclerosis” is the best introduction to the unsettling consequences that follow when public organizations transform from instruments into institutions. While Mr. Rauch does not use the terminology of the institutional imperative, his conclusions mesh neatly with it. Describing the history and growth of America’s bureaucratic class, Mr. Rauch identifies its greatest failing: a bureaucracy, once created, is hard to get rid of. To accomplish whatever mission it was originally tasked with, a bureaucracy must hire people. It must have friends in high places. The number of people who have a professional or economic stake in the organization’s survival grows. No matter what else it may do, it inevitably becomes a publicly sponsored interest group. Any attempt to reduce its influence, power, or budget will be fought with ferocity by the multitude of interests who now depend on it. Even when it becomes clear that an institution is no longer an instrument, the political capital needed to dismantle it is too high to make the attempt worth a politician’s time or effort. So the size and scope of bureaucracies grow, encumbering the country with an increasing number of regulations it cannot change, employees it does not need, and organizations it cannot get rid of.
I used to think that the naked self-interest described by Mr. Rauch was the driving force behind the Institutional Imperative. It undoubtedly plays a large role (particularly when public funds are involved), but there are other factors at play. One of the most important of these is what business strategists call Marginal Thinking.
Wretchard discusses recent notorious Type II system failures. The Colorado theater killer’s shrink warned the authorities to no avail. The underwear bomber’s father warned the authorities to no avail. The Texas army-base jihadist was under surveillance by the authorities, who failed to stop him. Administrators of the Atlanta public schools rigged the academic testing system for their personal gain at the expense of students and got away with it for years. Wretchard is right to conclude that these failures were caused by hubris, poor institutional design and the natural limitations of bureaucracies. The question is what to do about it.
The general answer is to encourage the decentralization of important services. If government institutions won’t reform themselves, individuals should develop alternatives outside of those institutions. The underwear bomber’s fellow passengers survived because they didn’t depend on the system; they took the initiative. That’s the right approach in areas as diverse as personal security and education. It’s also the approach most consistent with American cultural and political values. It is not the approach of our political class, whose interests are not aligned with those of most members of the public.
The Internet is said to route itself around censorship. In the coming years we are going to find out if American culture can route itself around the top-down power grabs of our political class and return to its individualistic roots. Here’s hoping.
The low rate of overt accidents in reliable systems may encourage changes, especially the use of new technology, to decrease the number of low consequence but high frequency failures. These changes may actually create opportunities for new, low frequency but high consequence failures. When new technologies are used to eliminate well understood system failures or to gain high precision performance, they often introduce new pathways to large scale, catastrophic failures. Not uncommonly, these new, rare catastrophes have even greater impact than those eliminated by the new technology. These new forms of failure are difficult to see before the fact; attention is paid mostly to the putative beneficial characteristics of the changes. Because these new, high consequence accidents occur at a low rate, multiple system changes may occur before an accident, making it hard to see the contribution of technology to the failure.
How Complex Systems Fail (pdf) (Being a Short Treatise on the Nature of Failure; How Failure is Evaluated; How Failure is Attributed to Proximate Cause; and the Resulting New Understanding of Patient Safety)
Richard I. Cook, MD
Cognitive Technologies Laboratory, University of Chicago
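Cook’s point about swapping frequent small failures for rare catastrophic ones can be made concrete. A minimal sketch, with hypothetical “old” and “new” technologies whose rates and costs are entirely made up:

```python
import math
import random

random.seed(1)

# Invented rates and costs illustrating the quoted tradeoff: the new
# technology eliminates frequent small failures but introduces a rare
# catastrophic mode whose expected yearly cost is actually worse.

old_rate, old_cost = 12.0, 10_000        # ~monthly failures, cheap each
new_rate, new_cost = 0.02, 80_000_000    # ~1-in-50-years, ruinous

print(f"old tech, expected loss per year: ${old_rate * old_cost:,.0f}")
print(f"new tech, expected loss per year: ${new_rate * new_cost:,.0f}")

# Why the new mode is invisible beforehand: average years of operation
# before the first catastrophe is ever observed, over many histories.
p_per_year = 1 - math.exp(-new_rate)     # chance of >=1 event in a year
waits = []
for _ in range(1000):
    years = 1
    while random.random() >= p_per_year:
        years += 1
    waits.append(years)
print(f"mean years before the first catastrophe is seen: "
      f"{sum(waits) / len(waits):.0f}")
```

With these made-up numbers the new technology is a far worse bargain in expectation, yet an observer would typically watch it run for decades before seeing why.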
But there is a much more important question being ignored by Gawande — How well does The Cheesecake Factory analogy really apply to health care? We can see how similar the kitchen is to an operating room — lots of busy people rushing about in a sterile environment, each concentrated on a task. But what about the rest of the “system?”
At The Cheesecake Factory, the customer is the diner. That’s who orders the service, pays the bill, and comes back again if he is happy. That is who all of the efficient, standardized food preparation is designed to please.
In Gawande’s ideal health care model, however, the customer isn’t the patient, but the third-party payer, be it an insurer or government. Let’s call that entity the TPP. The TPP never enters the kitchen. The TPP has no idea what happens in there, and doesn’t really care as long as the steak is cooked to his satisfaction and the tab is affordable.
In this model, the patient is actually the steak. It is the steak who is processed in the kitchen. It is the steak that is cut and cooked and placed on a platter. The steak doesn’t get a vote. Nobody cares if the steak is happy. The steak doesn’t pay the bill. The steak isn’t coming back again.
So here we are in Dr. Gawande’s kitchen, where you and I are slabs of meat and Chef Gawande will cook us to the specifications of his TPP customers — satisfaction guaranteed.
There was an attack in Saudi Arabia using explosives concealed in the attacker’s lower GI tract. These explosives cannot be detected by pat-downs, metal detectors, or millimeter-wave machines; detection would require much more powerful scanning machines or a cavity search. But no follow-up bombings have used this method, and I’d always wondered why. Now things are becoming clear. Apparently there’s been something of a theological problem: it appears that butt bombs are not permitted due to Islam’s prohibition of sodomy. But that prohibition seems to be loosening.
It will take years for the theologians to digest this new complication, but once it has been let loose, it is clearly foreseeable that some portion of Islamic scholars will hold this position. The consequences for our travel-security regime are rather scary. We will have reached the end of the line, because routine X-rays at each flight segment are just not going to happen; the accumulated radiation would cause too many cancers. And cavity searches are simply unreasonable. So where does that leave TSA’s current security strategy?
Like most of their terror innovations, I expect this one will take some time to organize. It looks like they’ve already put 4 years into it. It may take them another 4 before they’ve worked out the theological problems sufficiently to recruit bombers. But then what?
But though they may hate the Pax Americana, the Greens probably can’t live without it. Can’t live without the iPods, the connectivity, the store-bought food, the cafe-bought lattes — all the ugly things made by private industry. And by paring down the redundancies in the system as wasteful and unsightly, and by reducing the energy reserves of the system in favor of such fairy schemes as windmills and carbon trading, the Greens have made the system far less robust than it could have been. Because they are never going to need the Design Margin. Ever. Until they do.
Supposedly the US has war gamed this thing and the prospects look poor. A war game is only as good as the assumptions programmed into it. Can the war game be programmed to consider the possibility that a single Iranian leader has access to an ex-Soviet nuke and is crazed enough to use it?
Of course the answer is “No Way”.
A valid war game would be a Monte Carlo simulation that considered a range of possible scenarios. However, the tails of that distribution would offer extremely frightening scenarios. The Israelis are in a situation where the truly catastrophic scenarios have tiny probability, but the expectation value [consequence times probability] is still horrific. However, “fortune favors the brave.” Also, being the driver of events is almost always better than passively waiting and hoping for a miracle. That last argument means the Israelis will launch an attack, and probably before the American election.
These are important points. The outcomes of simulations, including the results of focus groups used in business and political marketing, may be path-dependent. If they are, the results of any one simulation may be misleading, and it may be tempting to game the starting assumptions in order to nudge the output in the direction you want. It is much better if you can run many simulations using a wide range of inputs. Then you can say something like: We ran 100 simulations using the parameter ranges specified below and found that the results converged on X in 83 percent of the cases. Or: We ran 100 simulations and found no clear pattern in the results as long as Parameter Y was in the range 20-80. And by the way, here are the data. We don’t know the structure of the leaked US simulation of an Israeli attack on Iran and its aftermath.
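For concreteness, here is a minimal sketch of that reporting style. The “war game” inside is a deliberately crude stand-in; every parameter, range, and threshold is invented for illustration.

```python
import random

random.seed(42)

# A crude stand-in for a war game, not a real model: the outcome
# scoring, parameter ranges, and thresholds are all invented.

def one_run(effectiveness, escalation, wildcard_prob):
    """Return an outcome score for one simulated campaign; higher is worse."""
    score = (1 - effectiveness) * 50 + escalation * 30
    if random.random() < wildcard_prob:      # rare catastrophic wildcard,
        score += 1000                        # e.g. a loose ex-Soviet nuke
    return score

N = 1000
outcomes = [one_run(effectiveness=random.uniform(0.3, 0.9),
                    escalation=random.uniform(0.0, 1.0),
                    wildcard_prob=random.uniform(0.0, 0.02))
            for _ in range(N)]

contained = sum(1 for s in outcomes if s < 60) / N
catastrophic = sum(1 for s in outcomes if s >= 1000) / N
print(f"runs converging on a contained outcome: {contained:.0%}")
print(f"catastrophic tail cases: {catastrophic:.1%}")
```

The useful output is the shape of the whole distribution over stated parameter ranges, including an explicit count of the tail cases, rather than the verdict of one gamed run.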
It’s also true, as Eggplant points out, that the Israelis have to consider outlier possibilities that may be highly unlikely but would be catastrophic if they came to pass. These are possibilities that might show up only a few times or not at all in the output of a hypothetical 100-run Monte Carlo simulation. But such possibilities must still be taken into account because 1) they are theoretically possible and sufficiently bad that they cannot be allowed to happen under any circumstances and 2) the simulation-based probabilities may be inaccurate due to errors in assumptions.
An excellent post by Mark Draughn reminds us that we get the behavior we incentivize. In this case the NYC government incentivized its police to ignore violent crimes and to make bogus arrests to boost their cleared-case stats:
This is a standard recipe for disaster in quality control — and CompStat is at heart a statistical quality control program. Take a bunch of people doing a job, make them report quality control data, and put pressure on them to produce good numbers. If there is little oversight and lots of pressure, then good numbers is exactly what they’ll give you. Even if they’re not true.
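Draughn’s recipe is easy to simulate. A toy version, with invented numbers, in which each felony is quietly downgraded (and vanishes from the stats) with a probability that rises with pressure and falls with oversight:

```python
import random

random.seed(7)

# Toy model of self-reported quality-control numbers, all numbers
# invented: pressure to produce good stats, minus oversight, sets the
# probability that a real felony is downgraded out of existence.

TRUE_FELONIES_PER_MONTH = 100

def reported(true_count, pressure, oversight):
    p_downgrade = max(0.0, pressure - oversight)
    return sum(1 for _ in range(true_count)
               if random.random() >= p_downgrade)

for pressure, oversight in ((0.1, 0.3), (0.4, 0.1), (0.6, 0.0)):
    avg = sum(reported(TRUE_FELONIES_PER_MONTH, pressure, oversight)
              for _ in range(12)) / 12
    print(f"pressure={pressure:.1f}, oversight={oversight:.1f}: "
          f"reported ~{avg:.0f} felonies/month (true: 100)")
```

The reported numbers improve exactly as pressure outruns oversight, and nothing in the numbers themselves reveals it; only an independent audit sample would.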
Many people canoe and kayak in the Florida Everglades’ extensive inland waterways, which are beautiful, full of interesting plants and animals and easily accessible. I couldn’t refuse an invitation to join friends for a day trip down the Turner River in the Big Cypress area. My friends arranged for me to borrow a kayak but its owner backed out of the trip at the last minute. Fortunately, the guy who organized the trip offered me the use of a kayak that he owns.
The “bug” of Y2K never quite measured up to the 1919 influenza bug in terms of devastating effect — but as TPM Barnett wrote in The Pentagon’s New Map:
Whether Y2K turned out to be nothing or a complete disaster was less important, research-wise, than the thinking we pursued as we tried to imagine – in advance – what a terrible shock to the system would do to the United States and the world in this day and age.
My own personal preoccupations during the run-up to Y2K had to do with cults, militias and terrorists — any one of which might have tried for a spectacle.
As it turned out, though, Al Qaida’s plan to set off a bomb at Los Angeles International Airport on New Year’s Eve, 1999 was foiled when Ahmed Ressam was arrested attempting to enter the US from Canada — so that aspect of what might have happened during the roll-over was essentially postponed until September 11, 2001. And the leaders of the Ugandan Movement for the Restoration of the Ten Commandments of God, acting on visionary instructions (allegedly) from the Virgin Mary, announced that the end of the world had been postponed from Dec 31 / Jan 1 till March 17 — at which point they burned 500 of their members to death in their locked church. So that apocalyptic possibility, too, was temporarily averted.
Don Beck of the National Values Center / The Spiral Dynamics Group, commented to me at one point in the run-up:
Y2K is like a lightning bolt: when it strikes and lights up the sky, we will see the contours of our social systems.
— and that quote from Beck, along with Barnett’s observation, pointed strongly to the fact that we don’t have anything remotely resembling a decent global map of interdependencies and vulnerabilities.
What we have instead is a PERT chart for this or that, Markov diagrams, social network maps, railroad maps and timetables… oodles and oodles of smaller pieces of the puzzle of past, present and future… each with its own symbol system and limited scope. Our mapping, in other words, is territorialized, siloed, and disconnected, while the world system which is integral to our being and survival is connected, indeed, seamlessly interwoven.
I’ve suggested before now that our mapping needs to pass across the Cartesian divide from the objective to the subjective, from materiel to morale, from the quantitative to the qualitative, and from rumors to wars. It also needs a uniform language or translation service, so that Jay Forrester’s system dynamics models can “talk” with PERT and Markov and the rest, Bucky Fuller‘s World Game included.
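Even a crude, uniform representation would be a start. Here is a minimal sketch of the kind of interdependency map I have in mind; the nodes and edges are invented placeholders, not a claim about real infrastructure.

```python
from collections import deque

# "X: [Y, ...]" means X depends on Y. A real map would have thousands
# of nodes and live data behind it; these entries are placeholders.
DEPENDS_ON = {
    "hospitals":    ["power_grid", "water", "supply_chain"],
    "water":        ["power_grid", "scada"],
    "power_grid":   ["scada", "fuel"],
    "supply_chain": ["fuel", "payments"],
    "payments":     ["power_grid", "internet"],
    "internet":     ["power_grid"],
    "scada":        [],
    "fuel":         [],
}

def downstream_of(failed):
    """Everything that transitively depends on the failed node."""
    dependents = {}                      # invert the edges
    for node, deps in DEPENDS_ON.items():
        for dep in deps:
            dependents.setdefault(dep, []).append(node)
    hit, queue = set(), deque([failed])
    while queue:
        for node in dependents.get(queue.popleft(), []):
            if node not in hit:
                hit.add(node)
                queue.append(node)
    return hit

print("if 'scada' fails, also at risk:", sorted(downstream_of("scada")))
```

Trivial at this scale, but the point is the uniform representation: once PERT charts, timetables, and SCADA inventories are translated into one graph, “what else fails?” becomes an answerable question.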
I suppose some of all this is ongoing, somewhere behind impenetrable curtains, but I wonder how much.
In the meantime, and working from open source materials – the only kind to which I have access – here are two data points we might have noted a little earlier, if we had decent interdependency and vulnerability mapping:
Fear-mongering — or significant alerts? I’m not tech savvy enough to know.
Tom Barnett’s point about “the thinking we pursued as we tried to imagine – in advance – what a terrible shock to the system would do to the United States and the world in this day and age” still stands.
Y2K was what first alerted me to the significance of SCADAs.
Something very like what Y2K might have been seems to be unfolding — but slowly, slowly.