Lancet? Damn Near Killed It!
This Lancet controversy has got me all hot and bothered. So let’s try to go over all the relevant bits, shall we? First, let’s recap what happened. In 2004, the Lancet published an estimate of Iraq war dead that was widely criticized for weak methodology. Then, this year, many of the same authors re-jigged their methodology and went at it whole hog, coming up with an estimate of about 650,000 dead, with a 95% confidence interval between 393,000 and 943,000. Roughly speaking, this means that if the sampling were repeated a gazillion times, about 95% of the intervals so constructed would contain the true death toll; the figure of about 650,000 is the point estimate, the single value best supported by their data (which, note, need not sit exactly at the interval’s midpoint).
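To see why a 95% interval need not be symmetric around the point estimate, here is a minimal sketch in Python. The numbers below are purely illustrative, not the study’s actual figures: mortality analyses commonly estimate a rate ratio, build the interval on the log scale, and transform back, which leaves the interval asymmetric.

```python
import math

# Hypothetical inputs for illustration only -- NOT the Lancet study's numbers.
rate_ratio = 2.5   # made-up post-invasion / pre-invasion mortality rate ratio
se_log = 0.22      # made-up standard error of log(rate_ratio)

z = 1.96           # 97.5th percentile of the standard normal distribution

# Construct the 95% CI on the log scale, then exponentiate back.
log_rr = math.log(rate_ratio)
lower = math.exp(log_rr - z * se_log)
upper = math.exp(log_rr + z * se_log)

print(f"point estimate: {rate_ratio:.2f}")
print(f"95% CI: ({lower:.2f}, {upper:.2f})")
# The interval is asymmetric: (lower + upper) / 2 does not equal rate_ratio.
```

Running this gives an interval of roughly (1.62, 3.85) around a point estimate of 2.50, whose midpoint is about 2.74, illustrating that the best estimate and the interval’s midpoint are different things.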
The pro-war set immediately pooh-poohed this finding as being too high. This, in and of itself, is interesting because it raises the question, “what number isn’t too high?” When asked, President Bush immediately dismissed the study, insisting it had been “discredited”. This was news to the scientific community. The study has not been discredited. Instead, some vocal pollsters and economists and at least one statistician have come forward to criticize its methodology, while an army of epidemiologists has supported it. This is a good and fair thing. Debate is good. Methodology should always be criticized.
Then the conservative echo chamber began doing its thing. On various sites, I’ve seen some brazen assertions that the Lancet “is not a respected journal” and that it is “siding with the Jihadis”. This is not a good and fair thing. The Lancet is, in the words of Wikipedia, “one of the oldest and most respected peer-reviewed medical journals in the world.” It stands among the greatest science publications in print, up there with Nature, the BMJ and The New England Journal of Medicine. The finest health scientists in the world are published in its pages, and I don’t know a single health researcher whose chest wouldn’t puff with pride if their study got published in the Lancet.
I sometimes forget that not everyone out there is privy to the methods of the science world. Let me offer a primer….
First, I offer my biases. I am and have always been decidedly opposed to the invasion of Iraq, for reasons that I would hope are patently obvious to most thinking people. I hold a PhD in Epidemiology and Biostatistics; though I would never claim to even begin to understand the magic of statistics, for professional reasons I must claim to be an expert on the methodologies of population health science. While I have designed, implemented and analysed population health surveys, I have never been involved in anything as ambitious as the Lancet Iraq study. However, as a member of the greater global scientific community, I and my colleagues have often been called upon to contribute (paid) methodological reviews for studies submitted to the Lancet and other journals. To be clear, I was not a reviewer on this study, nor do I personally know anyone involved in the study or in its review.
When a study is submitted to a journal, the editors send it off to 2-4 reviewers in the community. Such reviewers are chosen for their availability, seniority and their expertise with respect to the topic at hand. Reviewers declare any conflicts of interest or inherent biases, then, if allowed to proceed, present the journal with a verdict: accept with no changes, accept with minimal revisions, accept with major revisions, or reject outright. This is the hallowed peer-review process that ensures that the editors’ biases do not taint the quality of the study.
Even so, sometimes studies of poor quality slip through the cracks. This is due to insufficiently astute reviewers, distracted reviewers or indeed reviewers with biases they have either not declared or not examined. Sometimes a reviewer is blinded by the exciting topic of the study, and fails to see the methodological flaws. In such cases, the scientific community will criticize the study once it has been published. So an informal post-hoc peer-review process catches these outliers.
Assuming that the Iraq study is indeed of poor quality (a big assumption), this is in no way evidence of a bias on the part of the Lancet. Every population health study has flaws, every single one. Not a one could withstand the global, aggressive scrutiny this study is undergoing. To suggest that its publication is indicative that the Lancet is “siding with Jihadis” is ignorant slander of the worst kind.
The fact remains that the science of public health necessarily abuts topics that are close to the hearts of those on the Left: anti-poverty, gender rights, human rights, etc. The Lancet and its brethren also publish studies that are traditionally Rightist, such as those about health benefits associated with private sector reforms and investments. The Lancet is not necessarily completely apolitical, as it has in the past taken stands against both homeopathy and the arms industry, each time based upon the evidence presented and parsed within its own pages. How this can be construed as “siding with Jihadis” is beyond me.
A given health science study has five components: its context, methodology, data collection, analysis and interpretation. The soft aspects are the context and interpretation, which are reflected in the eventual paper as the Introduction and Discussion. The technical components are the methodology and analysis, and these are the bits that any criticism must tackle. The “black box” component is the data collection phase. We can only assume that the researchers collected data in the way they said they would in the methodology section. If they failed to do so, then this is not science.
With that in mind, I will discuss the actual Lancet study in my next blog post….