Failed replications and null results are still science

Jason Mitchell’s self-published manifesto against replication studies (studies that try to reproduce a previously published significant result) and null results (those that fail to find a significant effect) has been flying around the psychology blogosphere. Neuroskeptic, Neuropolarbear, Neuroconscience, Drug Monkey, True Brain, and Richard Tomasett have already weighed in with great insight, but I still thought I’d add my two cents, if only so that I can get my thoughts down and stop ranting to my friends about it over Gchat.

First, why should we care what Jason Mitchell says? For starters, he’s Professor Jason Mitchell, Ph.D., a full professor at Harvard who is influential in my field of cognitive/social neuroscience. His lab has done great work that I really admire and that has shaped how I think about and approach my own research. For example, he found that we value social things in the same way that we value money or food: people are willing to give up money to be fair or to tell a stranger something about themselves, and when they do so, reward-related regions of the brain become active.

Given what the Mitchell Lab studies, I find it interesting (ironic?) that he wrote ~5000 words stating that 1) replication studies and null results have no value and 2) replication studies are unfair to the scientists who produced the original work being replicated.

[Image: Unidentical replications – which one is the failure? Via Flickr user joriel.]

For point 1), Professor Mitchell notes that failed replications can result from screwing up methods in some way – messing up your code, contaminating a sample, etc. And he’s right! If you follow Paper X’s reported methods and get different results, you could have made a mistake in following those methods.

BUT you could also have followed Paper X’s reported methods to the letter and still not done everything exactly as the authors of Paper X did. Maybe Paper X’s participants were run in the morning when they were more alert, and yours were run in the afternoon during that post-lunch funk.

Or maybe Paper X came from a lab with mostly male experimenters, and your lab is mostly female. We recently learned that rodents are stressed by male researchers, so maybe you failed to replicate because your testing environment was less stressful. As Neuroconscience notes, there are plenty of #methodswedontreport – things that we do but do not bother writing down in a published paper. Discovering these legitimate factors that could explain a failure to replicate is scientifically valuable and part of the scientific method.
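To make this concrete, here’s a toy simulation – my own sketch, not anything from Mitchell’s essay or from the rodent-stress study – of how a hidden moderator like experimenter-induced stress could make a perfectly real effect show up in one lab and vanish in another, even when both labs follow the same written methods. The effect size and sample size are invented for illustration.

```python
# Sketch: a hypothetical effect that only exists under stressful testing
# conditions. Two labs follow the same written protocol, but only one of
# them has the (unreported) stressful environment. All numbers are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30  # participants per group (assumed sample size)

def run_study(stressful_environment):
    """Simulate one two-group study; the effect exists only under stress."""
    effect = 0.8 if stressful_environment else 0.0  # assumed effect sizes
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(effect, 1.0, n)
    return stats.ttest_ind(treatment, control).pvalue

print("Lab A (stressful testing environment): p =",
      round(run_study(True), 4))
print("Lab B (relaxed testing environment):   p =",
      round(run_study(False), 4))
```

Lab A will usually get a significant result and Lab B usually won’t, yet neither lab made a mistake – the discrepancy is itself a scientific finding about the moderator.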


Furthermore, the same screw-ups that can cause a replication to fail can also cause a study to produce significant positive findings – findings that can then be published in peer-reviewed journals and treated as fact. The problem with Professor Mitchell’s argument is that it treats all positive findings as true and all null results as false.

That simply isn’t the case. Even without intent to deceive (fraud or cherry-picking data), sometimes you get a false positive just due to random chance, as I’ve noted before.
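If you want to see this for yourself, here’s a quick simulation (my own sketch, assuming a standard two-group t-test at the conventional alpha of .05): run many studies of an effect that truly does not exist, and about 5% of them will come out “significant” anyway.

```python
# Sketch: false positives from pure chance. Every "study" compares two
# groups drawn from the SAME distribution, so the true effect is zero,
# yet roughly 5% of studies still cross the p < .05 threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies, n = 10_000, 30  # assumed numbers of studies and participants

false_positives = 0
for _ in range(n_studies):
    a = rng.normal(0.0, 1.0, n)  # both groups come from the same distribution,
    b = rng.normal(0.0, 1.0, n)  # so any "effect" found here is spurious
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1

print(f"{false_positives / n_studies:.1%} of truly null studies were 'significant'")
# expect roughly 5%: one in twenty null effects looks real at alpha = .05
```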

As to point 2), I agree that reporting failures to replicate can be taken as criticisms against or even bullying of the original authors. As Professor Mitchell puts it, “You found an effect. I did not. One of us is the inferior scientist.”

Such an attitude is flat-out wrong. The field should never jump to the conclusion that the original authors are poor scientists (or worse, fraudulent ones). In the same vein, it should not jump to the conclusion that the replicating authors are poor scientists, either.

Instead, we should follow the evidence. Given one positive finding and one negative finding, can we empirically determine what extraneous factors could have caused these different results? If not, it could be that either finding is due to random chance, so the study should be repeated until the balance of evidence tips one way or the other.
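As a rough sketch of what “letting the balance of evidence tip” could look like in practice, here’s a simple fixed-effect meta-analysis that pools hypothetical effect estimates from an original study and its replications, rather than crowning one study the winner. The effect estimates and standard errors are invented for illustration.

```python
# Sketch: pooling studies with inverse-variance (fixed-effect) weighting.
# The (effect, standard error) pairs below are hypothetical: an original
# "positive" study, a "failed replication", and two further repetitions.
import numpy as np

studies = [(0.45, 0.20), (0.05, 0.18), (0.12, 0.15), (0.20, 0.16)]

effects = np.array([d for d, _ in studies])
weights = np.array([1.0 / se**2 for _, se in studies])  # inverse-variance weights

pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

print(f"pooled effect = {pooled:.2f} +/- {1.96 * pooled_se:.2f} (95% CI)")
```

If the pooled confidence interval excludes zero, the evidence has tipped toward a real (if perhaps smaller) effect; if it straddles zero, we need more data. Either way, no single study – positive or null – settles the question by itself.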


In my ideal world, science would not be about egos and proving yourself right or someone else wrong. It should be about hunting down the Truth. Our main weapon is the scientific method, which is inherently based upon supporting or disproving hypotheses.

In the real world, scientists are people too. We have egos that can be bruised and feelings that can be hurt. The danger is when, in trying to protect our egos from being battered, we shift our goal from Finding the Truth to Proving Ourselves Right and shift the facts to fit our story rather than shifting the story to fit our facts.
