Short of reproducing it, how can one judge the likely accuracy of a study?
Statistics won’t help. Statistics only tell you the odds that the results spring from sheer chance, not whether the original measurements were valid in the first place. You get the same statistics from the same data set whether the data represent colored ping-pong balls, car wrecks or the lengths of salamander penises.
About the only way to calibrate a study is to see how its measurement of a phenomenon compares with other studies’ measurements of the same phenomenon. If the study’s methodology returns results consistent with those other studies for one measurement, then we can be more confident that its other measurements are accurate, as the sketch below illustrates.
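One crude way to picture this kind of cross-check is a quick script that asks whether a study’s estimate of a benchmark quantity falls anywhere near the range spanned by independent measurements of the same quantity. This is a minimal sketch in Python; the function name, the slack margin and the example numbers are all made up for illustration, not taken from any of the studies discussed below.

```python
def looks_calibrated(estimate, reference_values, slack=0.25):
    """Return True if `estimate` falls within the range spanned by the
    independent reference measurements, widened by `slack` (a fraction
    of that range) on each side. The margin is an arbitrary choice."""
    lo, hi = min(reference_values), max(reference_values)
    margin = slack * (hi - lo)
    return (lo - margin) <= estimate <= (hi + margin)

# Made-up numbers: an estimate of 95 against reference values of 90, 105 and 110
print(looks_calibrated(95, [90, 105, 110]))  # True: inside the reference range
print(looks_calibrated(30, [90, 105, 110]))  # False: a clear outlier
```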
The Johns Hopkins-funded study of Iraqi mortality before and after the war (published with much media attention in The Lancet) has many critics and defenders. Is there any means of judging the study’s likely accuracy without reproducing it?
I think there is.
The Johns Hopkins study replicated one measurement, pre-war Iraqi infant mortality, that was extensively studied by multiple sources long before the war, and indeed long before 9/11. We can compare the Johns Hopkins study’s measurement of pre-war Iraqi infant mortality with those earlier studies, and that comparison will give us at least a rough idea of the likely accuracy of its methodology.
So how does the JH study compare? Not too well, actually. The JH study (paragraph 4, page 8) reported a pre-war infant mortality rate of 29/1000. (That’s 29 deaths of children less than 1 year old per 1000 live births.) After the war, the study says, the rate jumped to 57/1000. Much of the study’s increased death toll not attributed to violence comes from this near doubling of the infant mortality rate.
By comparison, a Unicef report published in 1999 showed that Iraqi infant mortality was 47/1000 in the 1984-1989 time frame and rose to 108/1000 for 1994-1999. Unicef’s last pre-war report, in 2002, put the infant mortality rate at 102/1000.
A paper published in The Lancet in 2000 (pdf) reported an increase in infant mortality in the southern 85% of the country, from 47/1000 to 108/1000, over the same period as the Unicef study. (In the northern autonomous zone, where Saddam did not rule, infant mortality fell from 64/1000 to 59/1000.)
So the Johns Hopkins study reports a pre-war infant mortality rate well under the 47/1000 level from the halcyon days of the 1980s, when there were no sanctions but there was an ongoing war. Worse, the study’s rate is just over a quarter of the pre-war infant mortality rates that the other studies showed.
The Johns Hopkins study deviates so far from the other studies that even its post-war infant mortality figure of 57/1000 represents a vast improvement over the other studies’ pre-war rates. With these numbers, one could argue that, by nearly halving the infant mortality rate, the war saved the lives of thousands or even tens of thousands of babies.
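To make the arithmetic explicit, here is a quick back-of-the-envelope check in Python using only the rates quoted above (all per 1000 live births); nothing beyond the quoted figures is assumed.

```python
# Rates quoted above, all per 1000 live births.
jh_prewar, jh_postwar = 29, 57        # Johns Hopkins study, pre- and post-war
unicef_2002 = 102                     # Unicef's last pre-war report (2002)
lancet_2000 = 108                     # Lancet 2000 paper, south/center, 1994-1999

print(round(jh_postwar / jh_prewar, 2))    # ~1.97: the near doubling within the JH study
print(round(jh_prewar / unicef_2002, 2))   # ~0.28: JH pre-war rate vs Unicef's pre-war rate
print(round(jh_prewar / lancet_2000, 2))   # ~0.27: just over a quarter of the Lancet 2000 figure
print(round(jh_postwar / unicef_2002, 2))  # ~0.56: even the JH post-war rate is barely half the other pre-war rates
```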
There are many possible reasons for this divergence, but political contamination of both sets of studies is the most likely answer. Prior to 9/11, Saddam sought to use infant mortality to undermine support for the sanctions regime. He carried out an orchestrated campaign both to falsify infant deaths and to deny care to areas and groups hostile to his regime. Unfortunately, there is evidence that the UN and human-rights groups played along with this manipulation. Even so, the true infant mortality rate was most likely in the 70-100/1000 range. It definitely wasn’t anywhere near 29/1000.
In the post-war Johns Hopkins study, the pre-war infant mortality rate was measured by the same method, household self-reports, that was used to measure deaths from violence. Since the purpose of the study was made very clear to those being interviewed, it would have been easy for interviewees to lie about pre-war deaths, perhaps in hopes of undermining the war effort. Since the actual number of infant deaths unrelated to violence is very small (table 2, page 4), it would take only two or three unreported infant deaths to seriously skew the results, as the sketch below shows.
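To see how little it would take, here is a short Python sketch with a hypothetical sample size; the study’s actual birth counts are in its table 2, so treat the 275 births below purely as an illustration of how sensitive the rate is to a handful of omitted deaths.

```python
# Hypothetical sample: 275 pre-war births, chosen only so that the reported
# 29/1000 rate corresponds to a whole number of deaths (about 8).
births = 275
reported_deaths = round(0.029 * births)    # 8 deaths implied by a 29/1000 rate

for unreported in range(4):                # 0 to 3 deaths left out of the interviews
    rate = 1000 * (reported_deaths + unreported) / births
    print(unreported, round(rate, 1))
# Prints roughly: 0 29.1, 1 32.7, 2 36.4, 3 40.0 -- in this hypothetical sample,
# just three unreported deaths would shift the pre-war baseline from ~29/1000
# to ~40/1000, more than a third higher.
```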
In a nutshell, since the study’s methodology doesn’t come anywhere close to reproducing the infant mortality rates that other studies measured under much better conditions, we can safely assume that it doesn’t accurately measure other deaths either.