Chicago Boyz


Worse Than Nothing At All

    Posted by Shannon Love on November 10th, 2004

    Consider the following hypotheticals: A friend asks you to build a bookcase to fit in a particular niche in his house. In the first case, the friend doesn’t really know the dimensions of the niche. He just says that it is a couple of feet wide and about “this tall.” In the second case, he provides you with exact measurements. Of these two cases, which is more likely to result in a bookcase too large to fit in the niche?

    Everybody who has dealt with technical measurement knows the ugly truth. The second case is more likely to result in a bookcase that is too large. In the first case, you won’t know the true dimensions, so you will build conservatively, shaving off inches to try to make sure that the bookcase will fit. In the second case, however, you will feel confident of the size of the niche and will build the bookcase to take up all of the available space. If your friend measured wrong, the bookcase won’t fit because, believing you knew the true size of the niche, you didn’t build in any margin for error.

    This example demonstrates a truism within science and technology: Bad data are worse than no data at all. With no data, people plan conservatively and are always reexamining their actions and assumptions, but with bad data they charge forward, with far less ongoing reexamination. Bad data lead people to go wrong with confidence.

    Which brings us to The Lancet‘s published study of Iraqi mortality by lead author Les Roberts (PDF, HTML).

    Many people have argued that the study, even if it is badly flawed and inaccurate, at least represents some attempt to measure mortality in Iraq in a scientific manner, and is therefore useful. However, under the bad-data rule the study could easily do more harm than good.

    For example, approximately half of all the deaths in the study result from an increase in deaths caused by a lack of medical care, electricity and clean water. These shortages are caused primarily by the insurgency preventing the Coalition and Provisional Government (CPG) from providing services in the areas that the reactionaries control or can strike in. If the CPG scaled back its own assaults on the insurgency to reduce civilian deaths from combat, the insurgency would last longer and cover a larger territory, and critical services would be denied to more people for a longer time. The resulting deaths from lack of services could easily offset, and might even swamp, any lives saved by a decreased combat tempo. Worse, if that decision were made on the basis of an exaggerated count of combat deaths, many more people would die than if the original tempo had been sustained.

    There are very good reasons for believing that the L. Roberts study is badly flawed and that it exaggerates the change in mortality. Given that concern for non-combatant casualties is already the major constraint on the operations of the CPG, using the study as a basis for any policy change would most likely result in more deaths in the long term.

    Having the study is worse than having nothing at all.

     

    3 Responses to “Worse Than Nothing At All”

    1. dsquared Says:

      However many people argue that it isn’t badly flawed, and that your argument (linked above) is misleading and based on a misrepresentation of the theory and evidence. The “cluster sampling critique” that you link to is wrong, and I am surprised that you are still pushing it.

    2. fade Says:

      You’re right, but we look forward to giving the provisional government a chance to check USA product toward settlement

    3. Paul Bixby Says:

      I couldn’t find an email for Shannon, so I’ll post it here. She did such a great job with the Lancet study, I thought she might be able to take a look at a study linked to by, and hosted at, Electoral-vote.com. It claims to highlight the “discrepancy” between the exit polls and the actual vote counts (all without claiming that there was vote fraud, of course). It triggered my BS-detector, as I highlight here. I think there are bigger systemic problems with this paper than I could determine. Help from anyone who can separate the wheat from the chaff would be appreciated.