(I initially published this article in a Sermo weekly column titled “Biostatistics weekly” under the moniker Sciencebased. The following is a revision of that article.)
As doctors, how do we know that we know something? Is it because we read it in a peer-reviewed study?
Most published research findings are false. Not everyone doing peer review understands how we know that we know something (as a resident, before being formally trained in biostatistics and research, I peer-reviewed a few papers). Not all of the people designing and running the actual studies understand it either. All of this can lead to studies with uninterpretable data from which no conclusions can be drawn, but which are treated as fact by many. The science of how to know that we know something is relatively new, and thus it hasn’t yet been reflected in the published literature (the first CONSORT statement was published in 1996).
Contrary to popular belief, clinical research is harder than basic science research. In basic science, if you have a result and you want to know whether it’s really true, you repeat the experiment as many times as it takes to convince yourself that it is. In clinical research, you cannot repeat a 10-year, multi-million-dollar study with 3000 patients as many times as you want, and thus the methods, design, and execution of the experiment have to be that much better.
In clinical research, the way we know that we know something starts with the definition of a very specific primary outcome.
To understand the importance of defining a primary outcome think about this example:
Suppose you are with a friend at the top of a giant boulder overlooking a forest, and he throws a rock, hitting a tree 50 yards away. Then he says, “That is the tree I wanted to hit.” How can you trust that he really meant to hit that tree? You can’t. But what if he had walked to the tree and painted a sign on it, then walked back to the boulder and thrown the rock, hitting the exact tree he had just painted? Well, now you know for sure.
In clinical terms, you can do a study of vitamin C versus placebo for the common cold and end up with an almost infinite number of possible outcomes: on day 2, there is less throat erythema; on day 5, there is less nasal stuffiness; the symptoms are the same but the duration is shorter; mean body temperature on day 1 was 0.3 degrees lower; etc. Any of these outcomes could constitute a positive, publishable result. But you should not believe any of them if a hypothesis (primary outcome) naming that specific outcome was not formulated before the execution of the study (the rock-and-tree example above). And what proof do you have that a specific hypothesis was formulated beforehand? Submitting your protocol to the IRB or registering it at clinicaltrials.gov is like painting a sign on the tree you intend to hit with the rock. The IRB/clinicaltrials.gov dates and stamps your protocol, and that constitutes the proof.
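The danger of fishing through many outcomes can be made concrete with a back-of-the-envelope calculation. This is only an illustrative sketch: it assumes the treatment truly does nothing, that each outcome is tested independently, and that each test uses the conventional significance threshold of alpha = 0.05 (real trial outcomes are usually correlated, so the exact numbers differ, but the direction of the effect is the same):

```python
def prob_spurious_positive(n_outcomes, alpha=0.05):
    """Probability that at least one of n independent outcomes comes up
    'statistically significant' by luck alone when the treatment has no
    real effect: 1 minus the chance that all n tests correctly stay negative."""
    return 1 - (1 - alpha) ** n_outcomes

# With one pre-specified primary outcome, the false-positive risk stays at 5%.
# With many post-hoc outcomes, a 'positive' study becomes more likely than not.
for n in (1, 5, 10, 20):
    print(f"{n:>2} outcomes -> {prob_spurious_positive(n):.0%} "
          f"chance of at least one spurious 'positive' result")
```

With 10 unplanned outcomes, the chance of at least one spurious finding is already around 40%; with 20 it passes 60%. This is why the single, pre-specified primary outcome (the painted tree) matters.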
From the CONSORT statement: “Only 45% of a cohort of 519 RCTs published in 2000 specified the primary outcome; this compares with 53% for a similar cohort of 614 RCTs published in 2006.”
Good clinical research is hard and rare. Without a clearly specified hypothesis formulated before the results were obtained, one has to be skeptical of any conclusions drawn.