The Iraqi Ministry of Planning and Development Cooperation, in conjunction with the U.N. Development Program, has released an extensive survey of conditions in Iraq, called the Iraq Living Conditions Survey 2004 (ILCS). The part of the survey dealing with war-related deaths appears to strongly undercut the Lancet Iraqi Mortality Survey (LIMS), of which I have been so critical.

The home page for the ILCS contains links to PDFs of the full report, divided into three volumes: Volume I: Tabulation Report, Volume II: Analytical Report and Volume III: Socio-economic Atlas. The Analytical Report is the main document.

I haven’t had time to thoroughly digest the new study but I do have some preliminary observations. Take them with a grain of salt. I’m not ready to declare myself vindicated yet.


(1) The two studies do not cover identical time frames. The LIMS was conducted in August 2004, whereas the ILCS was mostly conducted from March 22 through May 25, 2004. Significant fighting broke out in April-July 2004, mostly associated with Sadr’s uprising, so the ILCS might have missed deaths that the LIMS measured (although, with the Falluja cluster excluded, most of the deaths measured by LIMS occurred before March 2004).

(2) The ILCS study doesn’t seek to measure the results of the war per se but rather to provide a good picture of overall conditions in Iraq. It seeks to measure conditions during Saddam’s reign as well as conditions after the liberation. Unlike LIMS, it is intended to provide actually useful data to decision makers, so it concentrates on healthcare, housing, education, etc. Deaths from violence are not the main focus of the study and don’t appear to be broken out in detail anywhere.

(3) Like all U.N. studies, it relies wholly on the cooperation of the governing authority of the country in which it was conducted. This is why U.N.-sponsored studies on, say, Saddam-era Iraqi infant mortality, or Cuban health care standards, are so suspect. In the case of the ILCS, the governing authority was the Iraqi provisional government under the Coalition. If you don’t trust them, perhaps you shouldn’t trust the study.

First Impressions:

(1) The ILCS uses a huge sample. Like the LIMS it uses cluster sampling, but it uses a large number of small clusters. The LIMS sampled 990 households divided into 33 clusters of 30 houses each. It assigned the clusters by choosing specific points and then sampling the 30 households closest to each point. The ILCS sampled 22,000 households divided into 2,200 clusters of 10 houses each. Further, the 10 households were not physically contiguous, as with LIMS, but were randomly selected from all houses in a defined area. The ILCS sample is much larger and more random than the LIMS sample. And, unlike the LIMS, the ILCS sampled all the governorates.

For these reasons, data from the ILCS will be significantly more solid than the LIMS. LIMS looks like a high-school science project in comparison.
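To see why many small, scattered clusters beat a few large contiguous ones, here is a back-of-the-envelope sketch using the standard design-effect formula, deff = 1 + (m − 1)ρ. The intra-cluster correlation ρ used here is an assumed illustrative value, not something reported by either survey:

```python
# Rough design-effect comparison of the two sampling schemes.
# deff = 1 + (m - 1) * rho, where m is households per cluster and
# rho is the intra-cluster correlation (an assumed value, for illustration).

def effective_sample(households, per_cluster, rho):
    """Effective sample size after discounting for within-cluster similarity."""
    deff = 1 + (per_cluster - 1) * rho
    return households / deff

rho = 0.1  # assumed: neighbors' experiences are moderately correlated

lims = effective_sample(990, 30, rho)    # 33 clusters of 30 contiguous houses
ilcs = effective_sample(22000, 10, rho)  # 2,200 clusters of 10 scattered houses

print(round(lims))  # ~254 effective households
print(round(ilcs))  # ~11,579 effective households
```

Under this (hypothetical) ρ, the ILCS behaves like a sample roughly 45 times larger, not just 22 times larger, because its small, dispersed clusters waste far less information on correlated neighbors.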

(2) As much as I hate to defend Roberts and the other authors of LIMS, I can’t say at present whether it is fair to report, as most news stories have, that the ILCS measured only 24,000 war-related deaths whereas the LIMS measured 100,000. It is not immediately clear what the ILCS considers a “war-related” death [Analytical Report logical page 54], which is where the 24,000 number appears to come from. If the definition of “war-related” differs significantly from the causes that LIMS counted, then the LIMS might be off by only a factor of 2 or so. Just a reminder of how badly the media suck at reporting science.

LIMS looked at changes in mortality from all causes. The 100,000 figure (what the authors call a “conservative” estimate) includes deaths from illness and accident, crime, insurgents/terrorists, and Coalition actions. The percentage breakdown in deaths (excluding the Falluja cluster) was 66% from violent causes and 33% from non-violent causes. That means that, crudely, the Lancet study reports 66,000 deaths from violence. Most of those deaths (57%) resulted from non-Coalition actors, who were either insurgents or economic criminals. If the ILCS did not include crime in its “war-related” category, and we include deaths that occurred after the end of May 2004, then the ILCS deaths could bubble up to 30,000 or so. On the other hand, LIMS excluded military deaths from its estimate whereas the ILCS seems to include them, so perhaps it all cancels out.
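The crude arithmetic above can be laid out explicitly. The percentages are the ones quoted from LIMS; everything else is simple multiplication:

```python
# Splitting the LIMS "conservative" 100,000 excess-death figure using the
# percentages quoted above (Falluja cluster excluded).

total_excess = 100_000

violent = total_excess * 66 // 100   # 66% from violent causes -> 66,000
non_violent = total_excess - violent # remainder (the text quotes ~33%)

non_coalition = violent * 57 // 100  # 57% of violent deaths -> 37,620
coalition = violent - non_coalition  # remainder -> 28,380

print(violent, non_coalition, coalition)  # 66000 37620 28380
```

So even on LIMS’s own numbers, Coalition actions account for under 30,000 of the violent deaths, which is why excluding crime from a “war-related” definition could plausibly shrink the gap between the two surveys.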

The ILCS authors do imply that their 24,000 figure measures the same thing as the LIMS 100,000 figure, because they refer to it directly [Analytical Report logical page 54]:

Another source (Roberts et al. 2004) estimates the number to be 98,000, with a confidence interval of 8,000 to 194,000. The website Iraq Body Count estimates that between 14,619 and 16,804 deaths have occurred between the beginning of 2003 and 7 December 2004 (IBC 2004).

Since the LIMS executive summary makes it clear that it measures all sources of mortality, I think it safe to assume that the ILCS considers “war-related” deaths to cover the same causes of death as LIMS. Moreover, the Iraqi ministries rejected the LIMS conclusions when they were originally published, stating that their own research showed nothing like the death rate reported by LIMS. Since the ILCS had already been conducted, they may have been relying on its then-unpublished data to reject the LIMS.

(Interesting note: LIMS was conducted, analyzed and published in under 90 days. The ILCS took over a year.)

In any case, I think it fair to say that the LIMS estimate is high by at least a factor of 2. When you consider that the authors have repeatedly asserted that the 100,000-death figure was a “conservative” estimate that “may be much higher,” the LIMS really doesn’t look very good. Its minimum estimate is at least twice that of the much better ILCS.

Still, comparing the ILCS’s 24,000 war-related deaths to LIMS 100,000 “conservative” estimate might be inaccurate.

(3) The study demolishes many of the claims made about infant mortality, both during the sanctions era and in the period before and after the war.

There never was a huge spike in infant mortality in the 1996-1998 era, as many (including LIMS co-author Richard Garfield) had asserted. IIRC, Garfield’s paper in the Lancet claimed a mortality rate of 109/1000. As many suspected at the time, it appears likely that those studies were subverted by Saddam in an attempt to build support for the lifting of sanctions. In that sense, LIMS was following an established tradition.

There also doesn’t seem to have been any spike in infant mortality after the war. LIMS portrayed infant mortality as falling from a peak in 1998 until the invasion and then jumping upward afterward. ILCS shows a steadily worsening infant mortality rate across the 1990s, then a decline from 1998 that reversed in 2001, after which the rate rose steadily up to the time of the study [Analytical Report Fig 26, logical page 52]. The rate seems to have bottomed out in the upper 20s in early 2001 before beginning to rise steadily. There doesn’t seem to be any discontinuity associated with the war. At no time did the mortality rate exceed 40/1000.

LIMS had reported 29/1000 in the year before the invasion and 57/1000 afterward. ILCS measured infant mortality at 32/1000 for the entire period 1999-2003. That means that quite a lot of the excess deaths that LIMS claimed to have measured either didn’t occur or didn’t have anything to do with the war.

I was really surprised (pleasantly) by the low rate. I assumed that LIMS had underestimated the pre-invasion rate, but instead it appears to have overestimated the post-invasion rate. What threw me off was the assumption that the 1998 rate must have been above 60/1000 when it was actually around 40/1000. I knew the rate couldn’t have changed that quickly. (Infant mortality doesn’t change quickly because so many factors influence it. That is why it is often used as a proxy for the health of entire populations.)
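Putting the infant mortality figures side by side (rates per 1,000 live births, as quoted above) makes the discrepancy easy to see:

```python
# Infant mortality rates per 1,000 live births, as quoted in the text.
lims_pre_invasion = 29   # LIMS estimate, year before the invasion
lims_post_invasion = 57  # LIMS estimate, after the invasion
ilcs_1999_2003 = 32      # ILCS average over the whole 1999-2003 period
ilcs_peak = 40           # ILCS: the rate never exceeded this at any point

# LIMS implies the rate nearly doubled with the invasion...
print(lims_post_invasion / lims_pre_invasion)  # ~1.97

# ...yet its post-invasion figure exceeds the highest rate ILCS ever recorded.
print(lims_post_invasion > ilcs_peak)  # True
```

A near-doubling of infant mortality in a single year would be an extraordinary discontinuity, and the ILCS curve shows nothing of the kind.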

(4) I can say with absolute confidence that all the numbers produced by including the Falluja cluster data are complete garbage. With the Falluja cluster included, LIMS shows an excess of 250,000 dead (nearly 1% of the total population of Iraq) from violence alone. If anything approaching that number of people had died, the huge sample of the ILCS would have easily uncovered it. If Roberts et al. were honest researchers, they would have tossed out all the data produced by that cluster. Instead, they have relied on it for all their primary conclusions except for their “conservative” 100,000 total-death figure. That remains my primary criticism of the study.


I think the news stories to date may be giving the false impression that the LIMS is worse than it actually is, but given the dishonest way in which its authors have marketed the study, it would be karmic if the idea got wedged in the mind of the media that the LIMS was wildly inaccurate (as it might actually be). It is clear that the LIMS “conservative” estimate is at best way on the high side, and any “non-conservative” estimates they might offer are clearly wrong. LIMS is at best a very weak study that was hailed by many as a very strong one.

For the Lancet, this new study appears to strongly contradict two controversial studies it has published on Iraq. I suppose it is too much to hope that this will prompt some sort of internal review for potential political bias.

I’ll go over the ILCS in more detail as my time permits.

UPDATE: Commenter AMac has posted another thoughtful analysis (written before the UN study came out) of the Lancet study at Winds of Change. [Update by JG.]

16 thoughts on “ILCS vs LIMS”

  1. Pingback: Tim Worstall
  2. Sorry, but the ILCS was conducted starting in March 2004, before the heavy fighting in Falluja. It neither confirms nor denies the Lancet’s findings about deaths in Falluja.

  3. It’s clear that not that many died in Fallujah anyway. There was, however, talk about Fallujah being representative of “high bombing intensity” clusters in Iraq. Unless said bombing ramped up rather significantly after the fieldwork for the UNDP survey had been completed, that clearly does not appear to be the case.

  4. Pingback: Vagabondia
  5. Pingback: Deltoid
  6. Tim Lambert,

    “Sorry, but the ILCS was conducted starting in March 2004, before the heavy fighting in Falluja. It neither confirms nor denies the Lancet’s findings about deaths in Falluja.”

    Going to have to disagree. From Figure 2 [LIMS p 5] it is clear that significant fighting occurred in Falluja in June and December 2003. If the deaths in Falluja were in fact statistically representative of the larger population, then those deaths should have shown up in the ILCS as 20,000-30,000 deaths in Al-Anbar alone. Instead it shows only 3,686 for the entire center region of the country (which includes the entire Sunni Triangle except for Baghdad).

    Remember that LIMS claims to draw its statistical power from the ability to generalize from the experiences of the clusters. LIMS claims that we can reliably generalize from the experience of one little neighborhood of 30 houses in Falluja to the rest of Iraq. The numbers returned by the Falluja cluster are so over the top that even with only 10% of them (20,000) falling within the time frame of the ILCS, it would have met or exceeded the total for the rest of the country combined (20,057).

    Trying to cram 200,000 deaths into the June-Sept 2004 span is even more silly, especially given the known geographical concentrations of the fighting in that period.

    The only chance that Falluja cluster had of being relevant was that it represented the broader experiences of the entire Iraqi population. That is now definitively shown not to be the case.

  7. Shannon, nice catch that the UN study possibly did not include crime and that the Lancet definitely excluded military deaths.

    I think your assumption that the Lancet and UN definitions of war-related deaths are comparable isn’t as safe as you think. The study also mentions the Iraq Body Count, which only includes violent deaths.

    I did some rough calculations, assuming the UN numbers only included violent deaths, which I posted over at Tim Blair’s:

    98,000 × 43,000/60,070 = 70,152 — the Lancet figure, roughly time-adjusted.

    Now if we unscientifically assume that the other Lancet numbers are also higher by over 79% of actual, the Lancet non-violent casualties over that period are 15,154. 15,154 + 24,000 = 39,154.

    Expanding that back out over the Lancet time period, the Lancet total becomes 54,698 (I should remind everyone this number ignores the increase in violence after March ’04).

    Now, another problem with the Lancet study is that, while their sampling methodology may not have been biased, many of their sample participants may have been. Also, I think many medical deaths, such as heart attacks, simply happened earlier because of the war. I think we are likely to see lower medical deaths over the next few years that offset the increases of the first two years.

    I think the violent deaths are probably exaggerated more than the medical deaths, so the real total is probably between 54,698 and 71,457 (using the Lancet time period).

    We can’t tell from the Lancet study whether the death rate increased after March ’04, which further highlights how useless it is. I seriously doubt that the violent death rate increased by 201% as Tim Lambert seems to suggest.

  8. The Lancet study shows just four of the 53 violent deaths in Falluja were before April 2004. That translates into 12,000, not the 20,000 to 30,000 deaths you claim. This is more than the ILCS found, so you can argue that this is an overestimate, but it doesn’t prove that there was anything wrong with their methodology in Falluja.

  9. Tim, we know that most of the post-March ’04 violence was isolated to Falluja. We also don’t know if that sample is even representative of Falluja. Given the methodology, it’s pretty safe to assume it isn’t (localized sample).

  10. I’m not sure how to figure in Fallujah deaths recorded by Roberts when comparing Roberts’ calculations with ILCS’ (Tim Lambert 10:45pm), as Roberts excluded Fallujah as an outlier.

    My count from Roberts Fig. 2 (pdf pg. 5) of ex-Fallujah violent deaths is as follows:

    Jan 02 thru Feb 03 (preinvasion): 1
    March 03 thru March 04 (pre-ILCS fieldwork): 13
    April 04 thru May 04 (ILCS fieldwork): 4
    Jun 04 thru Sept 10 (post-ILCS fieldwork): 4

    Back-of-the-envelope comparison: ILCS postinvasion death-by-violence should be compared to 71% of Roberts’ postinvasion death-by-violence-ex-Fallujah: (13 + 0.5×4)/21.

    I posted some thoughts on Roberts (written mostly prior to the release of ILCS, alas) at Winds of Change.

  11. Tim, Falluja was thrown out of the Lancet findings.

    Aaron, as some posters have pointedly remarked in earlier threads: Fallujah was NOT thrown out of the study. Only the “excess death” figure was excluded. Other conclusions incorporating Fallujah (notably the mortality risk) remain in the section marked ‘findings.’

  12. Tim Lambert,

    Just to be clear: Are you arguing that it is plausible that something like 180,000 people were killed by helicopter airstrikes sometime between May 24th and early September of 2004?

    If so, do you realize that is roughly 1,800 people per day? Do you realize that 80% of all the fighting in that period happened within a 100-klick radius of a point halfway between Baghdad and Falluja? Is it really plausible that that many people died without leaving the entire area in rubble and producing a vast exodus of refugees? Do you have any intuitive feel at all for the scale of carnage you are arguing for?

    A simpler explanation is that the experience of a single neighborhood in a single city is simply not statistically representative of any segment of the larger population. These 30 households, sitting next door to one another, all suffered from the same localized events that affected the entire area. This isn’t 30 points of data but rather just 1.

    This is exactly the kind of failure we expect to see when using cluster-sampling to study a phenomenon with a highly heterogeneous (uneven) distribution. There is no methodological reason not to just toss the entire cluster.

  13. Telluride, sorry, I mistyped what I was thinking. What I meant to say is that Falluja was omitted from the conclusions.

    The use of the term “findings” is awkward for me. The findings section performs analysis and interpretation of the data that I don’t think are sound. The description of the sample data is written in a way that implies conclusions about the entire population that simply can’t be made, some of which, if I recall correctly, made their way into the conclusion. It blurs the definition.

  14. What I meant to say is that Falluja was omitted from the conclusions.

    Falluja is included in the conclusions of the study. Both a mortality-risk figure and a confidence interval cum-Falluja are reported as findings. I don’t know why Tim is not making this point, since it certainly seemed rather important a month or two ago, when the “at least 100,000” characterization was being defended by Tim and dsquared, along with 30k dead from bombing.

  15. One minor point about the ILCS report is worth mentioning. The authors who maintain IBC are explicit that their numbers represent reports of deaths, not total deaths. Insofar as the authors of the ILCS report suggest otherwise, they miss something important. The authors of IBC were in fact not at all surprised by the numbers reported in the Lancet study.

Comments are closed.