Bruce Webster writes about the parallels (and differences) between the design of legislation and the design of software systems.
(via a thread at Bookworm)
Posted by Michael Kennedy on 19th November 2013 (All posts by Michael Kennedy)
UPDATE: The Wall Street Journal on how to fix the Obamacare crisis.
What can be done is Congress creating a new option in the form of a national health insurance charter under which insurers could design new low-cost policies free of mandated benefits imposed by ObamaCare and the 50 states that many of those losing their individual policies today surely would find attractive.
What’s the first thing the new nationally chartered insurers would do? Rush out cheap, high-deductible policies, allaying some of the resentment that the ObamaCare mandate provokes among the young, healthy and footloose affluent.
These folks could buy the minimalist coverage that (for various reasons) makes sense for them. They wouldn’t be forced to buy excessive coverage they don’t need to subsidize the old and sick.
Who knows? Maybe Jenkins reads this blog. It’s so obvious that the solution should be apparent even to Democrats.
We are now learning that a large share of the Obamacare structure is still unbuilt. This is not the website but the guts of the system.
The revelation came out of questioning of Mr. Chao by Rep. Cory Gardner (R., Colo.). Gardner was trying to figure out how much of the IT infrastructure around the federal insurance exchange had been completed. “Well, how much do we have to build today, still? What do we need to build? 50 percent? 40 percent? 30 percent?” Chao replied, “I think it’s just an approximation—we’re probably sitting between 60 and 70 percent because we still have to build…”
Gardner replied, incredulously, “Wait, 60 or 70 percent that needs to be built, still?” Chao did not contradict Gardner, adding, “because we still have to build the payment systems to make payments to insurers in January.”
This is the chief IT guy for CMS.
If the code to pay the insurance companies is not yet written, how can anybody sign up?
Gardner, a fourth time: “But the entire system is 60 to 70 percent away from being complete.” Chao: “There’s the back office systems, the accounting systems, the payment systems…they still need to be done.”
Gardner asked a fifth time: “Of those 60 to 70 percent of systems that are still being built, how are they going to be tested?”
The answer was the same way the rest was tested.
Tyler Cowen, in his recent book Average Is Over, argues that computer technology is creating a sharp economic and class distinction between people who know how to effectively use these “genius machines” (a term he uses over and over) and those who don’t, and is also increasing inequality in other ways. Isegoria recently excerpted some of Tyler’s comments on this thesis from a recent New Yorker article.
I read the book a couple of months ago, and although it’s worth reading and is occasionally thought-provoking, I think much of what Tyler has to say is wrong-headed. In the New Yorker article, for example, he says:
The first (reason why increased inequality is here to stay) is just measurement of worker value. We’re doing a lot to measure what workers are contributing to businesses, and, when you do that, very often you end up paying some people less and other people more.
The second is automation — especially in terms of smart software. Today’s workplaces are often more complicated than, say, a factory for General Motors was in 1962. They require higher skills. People who have those skills are very often doing extremely well, but a lot of people don’t have them, and that increases inequality.
And the third point is globalization. There’s a lot more unskilled labor in the world, and that creates downward pressure on unskilled labor in the United States. On the global level, inequality is down dramatically — we shouldn’t forget that. But within each country, or almost every country, inequality is up.
Taking the first point: Businesses and other organizations have been measuring “what workers are contributing” for a long, long time. Consider piecework. Sales commissions. Criteria-based bonuses for regional and division executives. All of these things are very old hat. Indeed, quite a few manufacturers have decided that it is unwise to take the quantitative measurement of performance down to an individual level, in cases where the work is being done by a closely-coupled team.
It is true that advancing computer technology makes it feasible to measure more dimensions of an individual’s work, but so what? Does the fact that I can measure (say) a call-center operator on 33 different criteria really tell me anything about what he is contributing to the business?
Anyone with real-life business experience will tell you that it is very, very difficult to create measurement and incentive plans that actually work in ways that are truly beneficial to the business. This is true in sales commission plans, it is true in manufacturing (I talked with one factory manager who said he dropped piecework because it was encouraging workers to risk injury in order to maximize their payoffs), and it is true in executive compensation. Our blogfriend Bill Waddell has frequently written about the ways in which accounting systems can distort decision-making in ultimately unprofitable ways. The design of worthwhile measurement and incentive plans has very little to do with the understanding of computer technology; it has a great deal to do with understanding of human nature and of the deep economic structure of the business.
My profession is much in the news at the moment, so I thought I would pass along such insights as I have from my career, mostly from a multibillion-dollar debacle which I and several thousand others worked on for a few years around the turn of the millennium. I will not name my employer, not that anyone with a passing familiarity with me doesn’t know who it is; nor will I name the project, although knowing the employer and the general timeframe will give you that pretty quickly too.
We spent, I believe, $4 billion, and garnered a total of 4,000 customers over the lifetime of the product, which was not aimed at large organizations which would be likely to spend millions on it, but at consumers and small businesses which would spend thousands on it, and that amount spread out over a period of several years. From an economic transparency standpoint, therefore, it would have been better to select 4,000 people at random around the country and cut them checks for $1 million apiece. Also much faster. But that wouldn’t have kept me and lots of others employed, learning whatever it is we learn from a colossally failed project.
So, a few things to keep in mind about a certain spectacularly problematic and topical IT effort:
This thing would be a case study for the next couple of decades if it weren’t going to be overshadowed by physically calamitous events, which I frankly expect. In another decade, Gen-X managers and Millennial line workers, inspired by Boomers, all of them much better at things than they are now, “will be in a position to guide the nation, and perhaps the world, across several painful thresholds,” to quote a relevant passage from Strauss and Howe. But getting there is going to be a matter of selection pressures, with plenty of casualties. The day will come when we long for a challenge as easy as reorganizing health care with a deadline a few weeks away.
STRATEGY: FROM THE WAR ROOM TO
THE BOARD ROOM
Sir Lawrence Freedman, Professor of War Studies, and Vice-Principal, King’s College London
What do modern military and corporate strategy have in common with Achilles, Sun Tzu, and primates? The answer is fluidity, flexibility, and pure unpredictability. Every day we make decisions that are built on our theory of what will give us the outcome we want. Sir Lawrence Freedman proposes that throughout history strategy has very rarely gone as planned, and that constant evaluation is necessary to achieve success—even today. Join The Chicago Council for a centuries-spanning discussion explaining how the world’s greatest minds navigate toward success.
For interested parties. Sir Lawrence Freedman has quite a few talks posted on YouTube too. Worth checking out.
What proportion of all social-media communication is by bots, spammers, people with agendas who misrepresent themselves, or severely dysfunctional people who pass as normal online? I suspect it’s a large proportion.
There’s not much hard evidence, but every once in a while something like this turns up. I’m guessing it’s the tip of an iceberg. See also this. And who can overlook the partisan trolls who show up on this and other right-of-center blogs before elections. Where do they come from?
None of this apparently widespread Internet corruption should come as a surprise. Given the low costs and lack of barriers to entry it would be surprising if attempts to game the system were less frequent than they appear to be. Nonetheless it’s prudent to keep in mind that a lot of what appears online is probably fake and certainly misleading.
Much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong. So why are doctors—to a striking extent—still drawing upon misinformation in their everyday practice?
The arguments presented in this article seem like a good if somewhat long presentation of the general problem, and could be applied in many fields besides medicine. (Note that the comments on the article rapidly become an argument about global warming.) The same problems are also seen in the work of bloggers, journalists and “experts” who specialize in popular health, finance, relationship and other topics and have created entire advice industries out of appeals to the authority of often poorly designed studies. The world would be a better place if students of medicine, law and journalism were forced to study basic statistics and experimental design. Anecdote is not necessarily invalid; study results are not necessarily correct and are often wrong or misleading.
None of this is news, and good researchers understand the problems. However, not all researchers are competent, a few are dishonest and the research funding system and academic careerism unintentionally create incentives that make the problem worse.
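One mechanism behind wrong published results is worth making concrete: even perfectly honest statistical tests produce false positives, and a study that measures many outcomes will very often find a “significant” one by chance alone. A minimal simulation sketch (entirely synthetic data, no real study assumed):

```python
import math
import random

random.seed(42)

def one_comparison(n=50):
    """Compare two samples drawn from the SAME distribution (no true effect)."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    z = diff / math.sqrt(2 / n)   # z-statistic; population sigma = 1 is known here
    return abs(z) > 1.96          # "significant" at p < 0.05, two-sided

# A single honest test is falsely "significant" about 5% of the time...
false_positives = sum(one_comparison() for _ in range(2000)) / 2000

# ...but a study that quietly tests 20 outcomes "finds something" far more
# often: 1 - 0.95**20 is roughly 64%.
studies_with_a_finding = sum(
    any(one_comparison() for _ in range(20)) for _ in range(500)
) / 500

print(f"single test false-positive rate: {false_positives:.1%}")
print(f"studies with at least one 'finding': {studies_with_a_finding:.1%}")
```

This is the multiple-comparisons problem in miniature; it requires no dishonesty at all, only a researcher who tests many things and reports the one that “worked.”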
(Thanks to Madhu Dahiya for her thoughtful comments.)
I have written several posts that use Carroll Quigley’s “institutional imperative” as a lens for understanding contemporary events. Mr. Quigley suggests that all human organizations fit into one of two types: instruments and institutions. Instruments are those organizations whose role is limited to the function they were designed to perform. (Think NASA in the 1960s, defined by its mission to put a man on the moon, or the NAACP during the same timeframe, instrumental to the civil rights movement.) Institutions, in contrast, are organizations that exist for their own sake; their prime function is their own survival.
Most institutions start out as instruments, but as with NASA after the end of the Cold War or the NAACP after the victories of the civil rights movement, their instrumental uses are eventually eclipsed. They are then left adrift, in search of a mission that will give new direction to their efforts, or as happens more often, these organizations begin to shift their purpose away from what they do and towards what they are. Organizations often betray their nature when called to defend themselves from outside scrutiny: ‘instruments’ tend to emphasize what their employees or volunteers aim to accomplish; ‘institutions’ tend to emphasize the importance of the heritage they embody or even the number of employees they have.
Mr. Quigley’s institutional imperative has profound implications for any democratic society – especially a society host to as many publicly funded organizations as ours. Jonathan Rauch’s essay, “Demosclerosis,” is the best introduction to the unsettling consequences that come when public organizations transform from instruments into institutions. While Mr. Rauch does not use the terminology of the Institutional Imperative, his conclusions mesh neatly with it. Describing the history and growth of America’s bureaucratic class, Mr. Rauch suggests its greatest failing: a bureaucracy, once created, is hard to get rid of. To accomplish whatever mission it was originally tasked with, a bureaucracy must hire people. It must have friends in high places. The number of people who have a professional or economic stake in the organization’s survival grows. No matter what else it may do, it inevitably becomes a publicly sponsored interest group. Any attempt to reduce its influence, power, or budget will be fought against with ferocity by the multitude of interests who now depend on it. Even when it becomes clear that this institution is no longer an instrument, the political capital needed to dismantle it is just too high to make the attempt worth a politician’s time or effort. So the size and scope of bureaucracies grow, encumbering the country with an increasing number of regulations it cannot change, employees it does not need, and organizations that it cannot get rid of.
I used to think that the naked self-interest described by Mr. Rauch was the driving force behind the Institutional Imperative. It undoubtedly plays a large role (particularly when public funds are involved), but there are other factors at play. One of the most important of these is what business strategists call Marginal Thinking.
Wretchard discusses recent notorious Type II system failures. The Colorado theater killer’s shrink warned the authorities to no avail. The underwear bomber’s father warned the authorities to no avail. The Texas army-base jihadist was under surveillance by the authorities, who failed to stop him. Administrators of the Atlanta public schools rigged the academic testing system for their personal gain at the expense of students and got away with it for years. Wretchard is right to conclude that these failures were caused by hubris, poor institutional design and the natural limitations of bureaucracies. The question is what to do about it.
The general answer is to encourage the decentralization of important services. If government institutions won’t reform themselves, individuals should develop alternatives outside of those institutions. The underwear bomber’s fellow passengers survived because they didn’t depend on the system; they took the initiative. That’s the right approach in areas as diverse as personal security and education. It’s also the approach most consistent with American cultural and political values. It is not the approach of our political class, whose interests are not aligned with those of most members of the public.
The Internet is said to route itself around censorship. In the coming years we are going to find out if American culture can route itself around the top-down power grabs of our political class and return to its individualistic roots. Here’s hoping.
The low rate of overt accidents in reliable systems may encourage changes, especially the use of new technology, to decrease the number of low consequence but high frequency failures. These changes may actually create opportunities for new, low frequency but high consequence failures. When new technologies are used to eliminate well understood system failures or to gain high precision performance they often introduce new pathways to large scale, catastrophic failures. Not uncommonly, these new, rare catastrophes have even greater impact than those eliminated by the new technology. These new forms of failure are difficult to see before the fact; attention is paid mostly to the putative beneficial characteristics of the changes. Because these new, high consequence accidents occur at a low rate, multiple system changes may occur before an accident, making it hard to see the contribution of technology to the failure.
How Complex Systems Fail (pdf)
(Being a Short Treatise on the Nature of Failure; How Failure is Evaluated; How Failure is Attributed to Proximate Cause; and the Resulting New Understanding of Patient Safety)
Richard I. Cook, MD
Cognitive Technologies Laboratory, University of Chicago
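Cook’s point about trading frequent small failures for rare catastrophic ones can be put in expected-loss terms. The figures below are entirely hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical figures, chosen only to illustrate the arithmetic: eliminating
# frequent small failures while adding one rare catastrophic failure mode can
# leave the system worse off in expectation.

def expected_annual_loss(failure_modes):
    """failure_modes: list of (events_per_year, cost_per_event) pairs."""
    return sum(rate * cost for rate, cost in failure_modes)

before = [(120, 5_000)]                   # many well-understood small failures
after = [(3, 5_000),                      # small failures mostly engineered away
         (0.01, 80_000_000)]              # plus a new, rare catastrophic pathway

print(expected_annual_loss(before))       # → 600000
print(expected_annual_loss(after))        # ≈ 815000
```

The catch, as the excerpt notes, is that the rare mode’s rate and cost are exactly the numbers nobody can see before the fact, so this comparison is rarely made in advance.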
But there is a much more important question being ignored by Gawande — How well does The Cheesecake Factory analogy really apply to health care? We can see how similar the kitchen is to an operating room — lots of busy people rushing about in a sterile environment, each concentrated on a task. But what about the rest of the “system?”
At The Cheesecake Factory, the customer is the diner. That’s who orders the service, pays the bill, and comes back again if he is happy. That is who all of the efficient, standardized food preparation is designed to please.
In Gawande’s ideal health care model, however, the customer isn’t the patient, but the third-party payer, be it an insurer or government. Let’s call that entity the TPP. The TPP never enters the kitchen. The TPP has no idea what happens in there, and doesn’t really care as long as the steak is cooked to his satisfaction and the tab is affordable.
In this model, the patient is actually the steak. It is the steak who is processed in the kitchen. It is the steak that is cut and cooked and placed on a platter. The steak doesn’t get a vote. Nobody cares if the steak is happy. The steak doesn’t pay the bill. The steak isn’t coming back again.
So here we are in Dr. Gawande’s kitchen, where you and I are slabs of meat and Chef Gawande will cook us to the specifications of his TPP customers — satisfaction guaranteed.
Worth reading in full.
(Via The Right Coast.)
There was an attack in Saudi Arabia using explosives concealed in the lower GI tract. These explosives cannot be detected by pat-downs, metal detectors, or millimeter-wave machines; much more powerful scanning machines, or a cavity search, would be required. But no follow-up bombings have used this method, and I’d always wondered why. Now things are becoming clear. Apparently there’s been something of a theological problem. It appears that butt bombs are not permitted due to Islam’s prohibition of sodomy. But that prohibition seems to be loosening.
It will take years for the theologians to digest this new complication but once it has been let loose, it is clearly foreseeable that some portion of islamic scholars will hold this position. The consequences for our travel security regime are rather scary. We’re going to have reached the end of the line because routine x-rays at each flight segment are just not going to happen. The accumulated radiation would cause too many cancers. And cavity searches are simply unreasonable. So where does that leave TSA’s current security strategy?
Like most of their terror innovations, I expect that this will take some time for them to organize. It looks like they’ve already put 4 years into it. It may take them another 4 before they’ve worked the theological problems out sufficiently to recruit bombers. But then what?
But though they may hate the Pax Americana, the Greens probably can’t live without it. Can’t live without the Ipods, the connectivity, the store-bought food, the cafe-bought lattes — all the ugly things made by private industry. And by paring down the redundancies in the system as wasteful and unsightly; by reducing the energy reserves of the system in favor of such fairy schemes as windmills and carbon trading the Greens have made the system far less robust than it could have been. Because they are never going to need the Design Margin. Ever. Until they do.
From a comment by “Eggplant” at Belmont Club:
Supposedly the US has war gamed this thing and the prospects look poor. A war game is only as good as the assumptions programmed into it. Can the war game be programmed to consider the possibility that a single Iranian leader has access to an ex-Soviet nuke and is crazed enough to use it?
Of course the answer is “No Way”.
A valid war game would be a Monte Carlo simulation that considered a range of possible scenarios. However the tails of that Gaussian distribution would offer extremely frightening scenarios. The Israelis are in the situation where truly catastrophic scenarios have tiny probability but the expectation value [consequence times probability] is still horrific. However “fortune favors the brave”. Also being the driver of events is almost always better than passively waiting and hoping for a miracle. That last argument means the Israelis will launch an attack and probably before the American election.
These are important points. The outcomes of simulations, including the results of focus groups used in business and political marketing, may be path-dependent. If they are, the results of any one simulation may be misleading and it may be tempting to game the starting assumptions in order to nudge the output in the direction you want. It is much better if you can run many simulations using a wide range of inputs. Then you can say something like: We ran 100 simulations using the parameter ranges specified below and found that the results converged on X in 83 percent of the cases. Or: We ran 100 simulations and found no clear pattern in the results as long as Parameter Y was in the range 20-80. And by the way, here are the data. We don’t know the structure of the leaked US simulation of an Israeli attack on Iran and its aftermath.
It’s also true, as Eggplant points out, that the Israelis have to consider outlier possibilities that may be highly unlikely but would be catastrophic if they came to pass. These are possibilities that might show up only a few times or not at all in the output of a hypothetical 100-run Monte Carlo simulation. But such possibilities must still be taken into account because 1) they are theoretically possible and sufficiently bad that they cannot be allowed to happen under any circumstances and 2) the simulation-based probabilities may be inaccurate due to errors in assumptions.
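The reporting discipline described above can be sketched in miniature. Everything below (the single “aggression” parameter, the toy outcome model, the thresholds) is invented purely for illustration; the point is sweeping the input range across many runs and reporting the whole distribution, tail included, rather than one hand-picked run:

```python
import random

random.seed(7)

def one_run(aggression):
    """Toy outcome model returning a 'consequence' score for one simulated run.
    The model and its numbers are invented purely for illustration."""
    consequence = aggression * 2 + random.gauss(0, 1)
    if random.random() < 0.005:   # a tiny chance of a catastrophic outlier
        consequence += 100
    return consequence

# Sweep the input range rather than fixing a single "official" starting point.
results = sorted(one_run(random.uniform(0.2, 0.8)) for _ in range(10_000))

median = results[len(results) // 2]
catastrophes = sum(r > 50 for r in results)

print(f"median consequence: {median:.2f}")
print(f"catastrophic runs (>50): {catastrophes} of {len(results)}")
```

Note that the catastrophic outcomes appear in only a fraction of a percent of runs: a single simulation, or a small batch, could easily miss them entirely, which is exactly the outlier problem discussed above.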
An excellent post by Mark Draughn that reminds how we get the behavior we incentivize. In this case the NYC govt incentivized its police to ignore violent crimes and to make bogus arrests to boost their cleared-case stats:
This is a standard recipe for disaster in quality control — and CompStat is at heart a statistical quality control program. Take a bunch of people doing a job, make them report quality control data, and put pressure on them to produce good numbers. If there is little oversight and lots of pressure, then good numbers is exactly what they’ll give you. Even if they’re not true.
Worth reading in full.
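The dynamic Draughn describes can be caricatured in a few lines of simulation. The model and all of its numbers are invented; it shows only the mechanism: when pressure rewards the reported number rather than the underlying work, the reported number improves on its own:

```python
import random

random.seed(1)

def simulate(pressure, precincts=100, years=5):
    """Toy model: each precinct has a true clearance rate; under pressure it
    gradually pads the number it reports. All figures are invented."""
    true_rates, reported = [], []
    for _ in range(precincts):
        actual = random.uniform(0.3, 0.6)   # real clearance rate, unchanged
        fudge = 0.0
        for _ in range(years):
            if pressure and random.random() < 0.7:
                fudge += 0.05               # downgrade, reclassify, decline to record
        true_rates.append(actual)
        reported.append(min(actual + fudge, 0.99))
    return sum(true_rates) / precincts, sum(reported) / precincts

true_avg, reported_avg = simulate(pressure=True)
print(f"true clearance rate:     {true_avg:.2f}")
print(f"reported clearance rate: {reported_avg:.2f}")
```

The gap between the two averages is invisible to anyone who sees only the reported figures, which is why quality-control programs without independent audit drift exactly the way Draughn describes.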
Many people canoe and kayak in the Florida Everglades’ extensive inland waterways, which are beautiful, full of interesting plants and animals and easily accessible. I couldn’t refuse an invitation to join friends for a day trip down the Turner River in the Big Cypress area. My friends arranged for me to borrow a kayak but its owner backed out of the trip at the last minute. Fortunately, the guy who organized the trip offered me the use of a kayak that he owns.
Posted by Charles Cameron on 28th September 2011 (All posts by Charles Cameron)
[ cross-posted from Zenpundit -- mapping, silos, Y2K, 9/11, rumors, wars, Boeing 747s, Diebold voting machines, vulnerabilities, dependencies ]
The “bug” of Y2K never quite measured up to the 1919 influenza bug in terms of devastating effect — but as TPM Barnett wrote in The Pentagon’s New Map:
Whether Y2K turned out to be nothing or a complete disaster was less important, research-wise, than the thinking we pursued as we tried to imagine – in advance – what a terrible shock to the system would do to the United States and the world in this day and age.
My own personal preoccupations during the run-up to Y2K had to do with cults, militias and terrorists — any one of which might have tried for a spectacle.
As it turned out, though, Al Qaida’s plan to set off a bomb at Los Angeles International Airport on New Year’s Eve, 1999 was foiled when Ahmed Ressam was arrested attempting to enter the US from Canada — so that aspect of what might have happened during the roll-over was essentially postponed until September 11, 2001. And the leaders of the Ugandan Movement for the Restoration of the Ten Commandments of God, acting on visionary instructions (allegedly) from the Virgin Mary, announced that the end of the world had been postponed from Dec 31 / Jan 1 till March 17 — at which point they burned 500 of their members to death in their locked church. So that apocalyptic possibility, too, was temporarily averted.
Don Beck of the National Values Center / The Spiral Dynamics Group, commented to me at one point in the run-up:
Y2K is like a lightning bolt: when it strikes and lights up the sky, we will see the contours of our social systems.
– and that quote from Beck, along with Barnett’s observation, pointed strongly to the fact that we don’t have anything remotely resembling a decent global map of interdependencies and vulnerabilities.
What we have instead is a PERT chart for this or that, Markov diagrams, social network maps, railroad maps and timetables… oodles and oodles of smaller pieces of the puzzle of past, present and future… each with its own symbol system and limited scope. Our mapping, in other words, is territorialized, siloed, and disconnected, while the world system which is integral to our being and survival is connected, indeed, seamlessly interwoven.
I’ve suggested before now that our mapping needs to pass across the Cartesian divide from the objective to the subjective, from materiel to morale, from the quantitative to the qualitative, and from rumors to wars. It also needs a uniform language or translation service, so that Jay Forrester system dynamic models can “talk” with PERT and Markov and the rest, Bucky Fuller‘s World Game included.
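As a toy illustration of what such a translation service might mean, here is one tiny invented project structure read two ways: as a PERT-style schedule (a forward pass yielding a critical-path length) and as a bare state-transition graph. Real tooling would be vastly more involved; this only shows that the formalisms can share one underlying structure:

```python
from functools import lru_cache

# One tiny invented project, stated once as a precedence structure.
durations = {"spec": 2, "build": 5, "test": 3, "deploy": 1}
prereqs = {"spec": [], "build": ["spec"], "test": ["build"], "deploy": ["test"]}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """PERT forward pass: a task finishes after its longest prerequisite chain."""
    start = max((earliest_finish(p) for p in prereqs[task]), default=0)
    return start + durations[task]

project_length = max(earliest_finish(t) for t in durations)

# The same structure re-read as plain state transitions (who enables whom).
transitions = [(p, t) for t, ps in prereqs.items() for p in ps]

print(f"critical-path length: {project_length}")   # → 11
print(f"transitions: {transitions}")
```

The hard part of a real translation service is not this mechanical re-reading but reconciling semantics across formalisms (durations versus probabilities, schedules versus flows), which is precisely the siloing problem described above.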
I suppose some of all this is ongoing, somewhere behind impenetrable curtains, but I wonder how much.
In the meantime, and working from open source materials, the only kind to which I have access – here are two data points we might have noted a little earlier, if we had decent interdependency and vulnerability mapping:
Fear-mongering — or significant alerts? I’m not tech savvy enough to know.
Tom Barnett’s point about “the thinking we pursued as we tried to imagine – in advance – what a terrible shock to the system would do to the United States and the world in this day and age” still stands.
Y2K was what first alerted me to the significance of SCADAs.
Something very like what Y2K might have been seems to be unfolding — but slowly, slowly.
Are we thinking yet?