A rapacious and greedy Technocracy

[Editorial cartoon: “Technocracy”]

Obama health care plan online. (Relax. It’s a joke. Or is it?)

The image is by editorial cartoonist Winsor McCay; I came across it at the wonderful art blog linesandcolors.

Update: In the comments, Michael Kennedy adds: “CBO says there are not enough details to score the new ‘bill.’ What else is new?”

Yes. What else is new? Also, this via Drudge: “An unapologetic Danny Williams says he was aware his trip to the United States for heart surgery earlier this month would spark outcry, but he concluded his personal health trumped any public fallout over the controversial decision…. ‘This was my heart, my choice and my health,’ Williams said late Monday from his condominium in Sarasota, Fla.”

Be aware: If the health care plans don’t work as smoothly as gamed out by the white-paper crowd, the connected will exempt themselves from the worst of it. They always do. Do Senators tend to fly coach?

6 thoughts on “A rapacious and greedy Technocracy”

  1. CBO says there are not enough details to score the new “bill.” What else is new?

    Also, the Dartmouth Atlas is being attacked for the first time that I can recall. I was there when Jack Wennberg was putting it together. I did note that the group at Dartmouth included very few surgeons, and many of the people working on the Atlas did not understand surgery very well. That, too, is a problem.

    Atlas-based analyses are also hampered by methodologic problems, starting with their implicit definition of efficiency. A true analysis of efficiency would ask ‘whether healthcare resources are being used to get … improved health,’ weighing both resources consumed and outcomes. Yet Atlas efficiency rankings consider only costs (i.e., resources consumed).

    Conceptually, this approach would be appropriate only if outcomes were the same in all hospitals, so that lower cost equaled higher efficiency. But since outcomes vary among hospitals and providers, both costs and outcomes must be assessed in evaluating efficiency. Atlas researchers might correctly argue that costs correlate poorly with outcomes. But poor correlation does not mean that outcomes are homogeneous; rather, it means that some high-spending hospitals use resources in ways that improve outcomes while others squander resources, failing to improve health. The same goes for low-spending hospitals. Figuring out which is which is the purpose of efficiency assessment, which therefore requires consideration of both costs and outcomes.
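
    To put numbers on that argument, here is a toy Python sketch (invented figures and hypothetical hospitals, not Atlas data) of how a cost-only ranking and an outcomes-per-dollar ranking can disagree:

    ```python
    # Toy illustration (invented numbers, hypothetical hospitals, not Atlas data):
    # each hospital is (cost per case in $1000s, outcome score 0-100).
    hospitals = {
        "A": (90, 95),  # high spending, strong outcomes
        "B": (90, 60),  # same spending, weak outcomes
        "C": (50, 50),  # low spending, middling outcomes
    }

    # Cost-only ranking (the approach the critique describes): lower cost = "more efficient".
    by_cost = sorted(hospitals, key=lambda h: hospitals[h][0])

    # Outcome-adjusted efficiency: health achieved per unit of spending.
    by_value = sorted(hospitals, key=lambda h: hospitals[h][1] / hospitals[h][0], reverse=True)

    print(by_cost)   # ['C', 'A', 'B'] -- cost alone calls C best and cannot tell A from B
    print(by_value)  # ['A', 'C', 'B'] -- per dollar, A improves health most; B squanders resources
    ```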

    When I was there, I did a project on renal dialysis and the frequency with which the AV shunt had to be revised. My study of New England hospitals, covering about 2,500 patients, showed that the only factor for which a regression analysis could show a statistical relationship with graft revision was the patient’s zip code. Zip code, in practice, is a proxy for the hospital, dialysis unit, or surgeon. The following year, some students showed that death rates correlated with dialysis unit.

    That agrees with the point quoted above. I fear that the Obama plan will homogenize care and the quality issue will be lost. It certainly has been with teachers. Doctors may be next, once they are safely on the government payroll.

  2. Interesting point about the use of cost as the primary definition of health care efficiency, rather than improved health outcomes. I did not know that.

    Odd. Why did the group involved decide on that particular definition?

    Superficially, your study of New England hospitals and dialysis units reminds me of the Atul Gawande New Yorker piece about cystic fibrosis centers and the bell curve applied to doctors. Years ago, I attended a Grand Rounds at the Brigham where Dr. Gawande gave a wide-ranging talk about the different studies he had done and his medical experiences and observations. At the end of the talk, I walked up to him and asked if the data for his New Yorker article would be submitted to a peer-reviewed journal! Oops! Young, naive, and silly me. I meant nothing by it other than “Oh, interesting article, I’m so curious about the data”; I simply wanted to parse the data myself. I’m sure I came off as a complete jerk. Sigh.

    http://www.newyorker.com/archive/2004/12/06/041206fa_fact

    At the time, something bothered me about the article but, embarrassingly, I can’t remember now what it was. I was so early in my career, and frankly a bit shallow and callow in my approach to some things, that I’m not sure my initial response to the article was correct. Why isn’t it possible that the only thing separating good outcomes from bad outcomes between two cystic fibrosis centers is the physicians involved and their practice habits?

    I guess what bothered me is that it was presented as something out of the ordinary, but that is what academic centers do, right? They do clinical research, they try new treatments, and in trying new treatments some will get it right and some will get it wrong. How can every experiment work? But maybe that is my own shallowness showing: it might not be the experimental approach but more basic procedures that lead to better outcomes. I don’t know. I need to reread the article critically.

    – Madhu

  3. Oh, by “superficially” I only meant that your study made me think of the Gawande article (a random association?), not that the two were making the same point.

    Dr. Kennedy, have you read that particular article? Any thoughts?

  4. The reason that cost is used in these studies is twofold. One, it is easy to measure. The other is that cost is what they are all really interested in. Measuring quality is possible but difficult. The reason I spent a year at Dartmouth was that I had retired after a very big back surgery and thought I would indulge another interest by learning how to measure quality. I thought they had the best methodology at the time (1994).

    I had been doing vascular surgery for 30 years and was interested in how to measure outcomes. I had also been chair of the data committee of the California PRO. We had begun to use the epidemiology methods that Wennberg had pioneered in his famous 1973 Science article on small-area variation in tonsillectomy. Here is a profile of him with the story. When we used his methods, we found some interesting things. We did a study of hip replacement in California in which we charted the number of hip replacements by census tract. Wennberg used Metropolitan Service Areas, which are much larger, and he then modified them to create a medical service area around each tertiary hospital. Within a state, census tracts are workable, but in a national study there are too many of them.

    Anyway, when we did our plot, we found spikes in some odd areas. One would expect some spikes to represent referral hospitals, as we found in Santa Barbara, the location of two large specialty clinics that draw from a much larger area. However, we also found a couple of spikes in small towns. When we investigated, we found a couple of very aggressive orthopedic surgeons who were going to nursing homes and convincing non-ambulatory patients to have hip replacements. Screening the data this way proved much more useful than reading thousands of charts.
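
    A minimal sketch of that kind of small-area screen (hypothetical Python with invented tract names and counts, not the California data): compute each area’s procedure rate and flag the rates that tower over the typical area.

    ```python
    from statistics import median

    # (census tract, hip replacements, population) -- invented numbers
    areas = [
        ("tract_001", 12, 40_000),
        ("tract_002", 9, 35_000),
        ("tract_003", 11, 38_000),
        ("tract_004", 55, 30_000),  # the kind of "spike" described above
    ]

    # Procedure rate per 100,000 residents for each small area.
    rates = {tract: 100_000 * n / pop for tract, n, pop in areas}

    # Flag any area whose rate towers over the typical (median) area:
    # a candidate for a closer look (referral center? aggressive practice?).
    typical = median(rates.values())
    for tract, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
        if rate > 3 * typical:
            print(f"{tract}: {rate:.0f} per 100k vs. typical {typical:.0f} -- investigate")
    ```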

    Those sorts of studies are easy but, fortunately, most physicians are not crooks. We tried to come up with other measures. One is how often a dialysis graft has to be revised. All those patients are on Medicare, and the shunt is a life-and-death matter, so follow-up is 100%. There are some peculiarities; the commonest cause of death in dialysis patients is (or was in 1994) suicide, and one way to accomplish this is simply to stop going to dialysis. When I did my study, we found that the mean time between revisions was two years or so in most zip codes but, in one, it was six months. The difference was dramatic. We then applied for an NIH grant to do a national study but were turned down. It’s a long story, but there are fiefdoms in academic research, as I don’t have to tell you, and we were coming at the problem from a completely new direction.
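
    The core of that comparison is simple enough to sketch (hypothetical Python with invented dates; the zip codes are placeholders, not the real ones):

    ```python
    from datetime import date
    from statistics import mean

    # Revision dates per zip code (invented; zip codes are placeholders).
    revisions = {
        "99901": [date(1993, 1, 10), date(1995, 2, 1), date(1997, 1, 20)],
        "99902": [date(1993, 3, 5), date(1993, 9, 12), date(1994, 3, 1)],
    }

    for zip_code, dates in revisions.items():
        # Days elapsed between consecutive revisions for this zip code.
        gaps = [(later - earlier).days for earlier, later in zip(dates, dates[1:])]
        print(f"{zip_code}: mean {mean(gaps) / 365:.1f} years between revisions")
    ```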

    I came back to California thinking I would find enough people interested in measuring quality that it would make a second career. I was mistaken. My impression is that insurance companies think measuring quality will turn out to be more expensive.

    As for the Gawande article, I have a blog post on it.

  5. I was talking about a different article: “The Bell Curve” or something like that, actually :) Thanks for the link to your post, though!

    (I want to add that I was never at Brigham and Women’s – I just attended a Grand Rounds lecture in the Longwood Medical Area.)

    I keep getting emails about NIH grants for comparative effectiveness research. I wonder how much of the NIH budget will go in the future toward figuring out how best to split the penny, instead of toward innovative research. That’s Megan McArdle’s thing, isn’t it? The effects on innovation.

    – Madhu
