I am embarrassed to say that when reading the infamous Lancet Study for my previous post, I was so struck by the idiocy of using cluster sampling and self-reporting in a population (the Sunni) who have a strong motive to exaggerate that I just flat ignored the actual resulting statistics. Since I knew the methodology was crap, I knew the numbers were crap, and I didn't look any further.
Commentator JohnChris and Fred Kaplan over at Slate (via Instapundit) both pointed out that the confidence interval on the study's results, even with Fallujah excluded, is so broad as to be utterly useless.
Kaplan nails it, so I will excerpt a bit:
“Readers who are accustomed to perusing statistical documents know what the set of numbers in the parentheses means. For the other 99.9 percent of you, I’ll spell it out in plain English—which, disturbingly, the study never does. It means that the authors are 95 percent confident that the war-caused deaths totaled some number between 8,000 and 194,000. (The number cited in plain language—98,000—is roughly at the halfway point in this absurdly vast range.)
This isn’t an estimate. It’s a dart board.
Imagine reading a poll reporting that George W. Bush will win somewhere between 4 percent and 96 percent of the votes in this Tuesday’s election. You would say that this is a useless poll and that something must have gone terribly wrong with the sampling. The same is true of the Lancet article: It’s a useless study; something went terribly wrong with the sampling.”
Of course, we know what went wrong with the sampling. The study's basic design was flawed for examining a phenomenon known a priori to be highly asymmetrical.
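To make that concrete, here is a minimal simulation sketch of my own (not the study's data or code): suppose violent deaths are heavily concentrated in a handful of areas, then survey only a few dozen clusters and scale up, the way a cluster sample does. The neighborhood counts, rates, and number of repetitions below are invented purely for illustration; only the cluster count (33) loosely mirrors the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of 1,000 neighborhoods. Deaths are highly
# asymmetrical: most areas saw very few, a small fraction saw very many.
# All of these numbers are made up for illustration.
neighborhoods = np.where(rng.random(1000) < 0.05,
                         rng.poisson(200, 1000),   # the few hard-hit areas
                         rng.poisson(2, 1000))     # everywhere else

true_total = neighborhoods.sum()

# Repeat the "survey" many times: each run samples 33 clusters
# (roughly the study's scale) and extrapolates to the whole population.
estimates = []
for _ in range(5000):
    sample = rng.choice(neighborhoods, size=33, replace=False)
    estimates.append(sample.mean() * len(neighborhoods))

lo, hi = np.percentile(estimates, [2.5, 97.5])
print(f"true total deaths: {true_total}")
print(f"95% of survey estimates fall between {lo:,.0f} and {hi:,.0f}")
# With a distribution this skewed, the interval spans a huge range:
# the estimate depends almost entirely on whether a hard-hit cluster
# happens to land in the sample.
```

Run it and the spread of estimates dwarfs the true number, for exactly the reason Kaplan describes: a few clusters dominate the total, so a small cluster sample is essentially a dart board.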
This raises the obvious question: How did such a seriously flawed study get published in a prestigious medical journal (the Lancet is the British equivalent of the New England Journal of Medicine)? The only possible explanation is political bias on the part of the authors, the peer reviewers, and the publisher.
Evidence for this comes from the observation of poster AMac, who noted:
“As an author of papers published in peer-reviewed journals, I was struck by the extraordinarily compressed time-line of this publication. Readers outside the biomedical fields might consider what the peer-review process involved:
1. Data were collected in September 2004, and the authors had completed compilation, statistical analysis, drafting of text, artwork, and proofreading in order to submit their work in the form of a for-publication draft manuscript (MS) to the Lancet Editor.
2. The Editor read the MS, chose peer-reviewers, had the reviewers comment on the MS, evaluated these comments, passed his/her favorable judgement on the MS to the authors, with any suggestions for necessary or advisable revisions.
3. The authors revised the MS and resubmitted it.
4. The Editor and perhaps the peer-reviewers reviewed and approved the revised text and figures. The MS files were sent to the Lancet’s copy editors for proofreading and digital typesetting. Author queries were generated and sent to the lead author, and the responses incorporated into the typeset version. Finally, the complete manuscript, ready for printing, was published on the Lancet’s website.
Four to eight weeks is an unusually short time for a high-impact journal such as the Lancet to bring such an article into print. I would doubt that Lancet, JAMA, Nature, BMJ, Science, or similar high-prestige journals have ever compressed their review and publication schedule in such a drastic manner.”
I'm not sure whether the article is in the current or an upcoming hardcopy issue of the Lancet, but even so, publishing a study completed well under 60 days ago smacks of a rush job.
What we have here is the scientific equivalent of medical malpractice. We have a group of researchers who claim to have followed standard research practices, only they didn’t. They claim to have found statistically valid results, only they didn’t. Then we have a scientific journal that claims to have followed standard practices of peer review before publication, only it is very clear they did not. I think that the funders of the study would have grounds to sue were this any other profession.
In its own way, this incident is just as important as the CBS Memogate scandal. Memogate revealed that in order to advance its political agenda, a major media source ignored basic common practice in vetting the documents at the heart of the story. In this case, we see scientists funded by one of the premier medical research institutions in the U.S. (Johns Hopkins) and one of the world's best-regarded medical journals ignoring basic standards of practice in order to produce a result beneficial to their political biases. The failure of the institutions in both cases is glaring and requires the same sort of rigorous public review to ask what went wrong.
For example, by tradition, peer reviewers remain anonymous so that they can give their opinions without fear of professional bad feelings or institutional retribution. However, in this case I think it's fair that the reviewers be asked to publicly justify their opinion. In fact, I think the failure is so egregious that we need to ask whether the Lancet actually submitted the paper to peer review at all, and if so, whether it listened to any qualms the reviewers had. (Recall how CBS ignored its own document experts?)
Scientists carry great cachet in Western political and social debate precisely because they have traditionally been viewed as outside of politics. People believe that scientists and scientific institutions provide the best possible information to the public and politicians who then make decisions based on that information. Regrettably, it appears that an increasing number of scientists have fallen prey to the belief that one is first and foremost a political ideologue and only secondarily a member of a profession like scientist, judge or teacher. They believe that their primary obligation is to use whatever authority they have obtained by virtue of their professional standing to advance their political agenda.
This philosophy gave rise to the concept of judicial activism, in which judges ruled based on their belief in the best policy, not on their consensus interpretation of the law. This has led to the intense politicization of the judiciary and a near-complete collapse in the public's respect for their rulings.
If the same thing happens with our scientific institutions we are screwed. We need to ask serious questions right now.