While I was at ComSciCon, I workshopped a piece about our memories of 9/11. After lots of helpful feedback at ComSciCon and several rounds of editing, it’s been published as a guest blog post at Scientific American! Check it out here.
Jason Mitchell’s self-published manifesto against replication studies (studies that try to reproduce a previously published significant result) and null results (those that fail to find a significant effect) has been flying around the psychology blogosphere. Neuroskeptic, Neuropolarbear, Neuroconscience, Drug Monkey, True Brain, and Richard Tomasett have already weighed in with great insight, but I still thought I’d add my two cents, if only so that I can get my thoughts down and stop ranting to my friends about it over Gchat.
First, why should we care what Jason Mitchell says? For starters, he’s Professor Jason Mitchell, Ph.D., a full professor at Harvard who is influential in my field of cognitive/social neuroscience. His lab has done great work that I really admire and that has shaped how I think about and approach my own research. For example, he found that we value social things in the same way that we value money or food: people are willing to give up money to be fair or to tell a stranger something about themselves, and when they do so, reward-related regions of the brain become active.
Given what the Mitchell Lab studies, I find it interesting (ironic?) that he wrote ~5000 words stating that 1) replication studies and null results have no value and 2) replication studies are unfair to the scientists who produced the original work being replicated.
For point 1), Professor Mitchell notes that failed replications can result from screwing up methods in some way – messing up your code, contaminating a sample, etc. And he’s right! If you follow Paper X’s reported methods and get different results, you could have made a mistake in following those methods.
BUT you also could have exactly followed Paper X’s reported methods but not done everything exactly as the authors of Paper X did. Maybe Paper X’s participants were run in the morning when they were more alert, and your participants were run in the afternoon during that post-lunch funk.
Or maybe Paper X came from a lab with mostly male experimenters, and your lab’s experimenters are mostly female. We recently learned that rodents are stressed by male researchers, so maybe you didn’t replicate because your testing environment was less stressful. As Neuroconscience notes, there are plenty of #methodswedontreport – things that we do but do not bother writing down in a published paper. Discovering these legitimate factors that could explain a failure to replicate is scientifically valuable and part of the scientific method.
Furthermore, the same screw-ups that can cause a replication to fail can also cause a study to produce significant positive findings – findings that can then be published in peer-reviewed journals and treated as fact. The problem with Professor Mitchell’s argument is that it treats all positive findings as true and all null results as false.
That simply isn’t the case. Even without intent to deceive (fraud or cherry-picking data), sometimes you get a false positive just due to random chance, as I’ve noted before.
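The false-positive point is easy to see in a quick simulation (my own sketch, not from any of the papers discussed): run many two-sample t-tests on data where there is truly no effect, and about 5% of them will come out “significant” at p < .05 anyway.

```python
# Toy simulation of false positives arising purely by chance: both groups are
# drawn from the SAME distribution, yet ~5% of experiments hit p < .05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    a = rng.normal(0, 1, 30)  # group A: no real effect
    b = rng.normal(0, 1, 30)  # group B: same distribution as A
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(false_positives / n_experiments)  # roughly 0.05
```

With 10,000 simulated experiments the false-positive rate lands very close to the nominal 5%, which is exactly why a single positive finding is not automatically true.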
As to point 2), I agree that reporting failures to replicate can be taken as criticisms against or even bullying of the original authors. As Professor Mitchell puts it, “You found an effect. I did not. One of us is the inferior scientist.”
Such an attitude is flat out wrong. The field should never jump to conclusions that the original authors are poor scientists (or worse, fraudulent ones). In the same vein, it also should not jump to the conclusion that the replicating authors are poor scientists.
Instead, we should follow the evidence. Given one positive finding and one negative finding, can we empirically determine what extraneous factors could have caused these different results? If not, it could be that either finding is due to random chance, so the study should be repeated until the balance of evidence tips one way or the other.
In my ideal world, science would not be about egos and proving yourself right or someone else wrong. It should be about hunting down the Truth. Our main weapon is the scientific method, which is inherently based upon supporting or disproving hypotheses.
In the real world, scientists are people too. We have egos that can be bruised and feelings that can be hurt. The danger is when, in trying to protect our egos from being battered, we shift our goal from Finding the Truth to Proving Ourselves Right and shift the facts to fit our story rather than shifting the story to fit our facts.
A few weeks ago, I kinda, sorta, not really, almost caught a free Bruce Springsteen concert. Except Springsteen didn’t show because he was never supposed to perform in the first place. I spent five hours standing around, surrounded by drunken Duke undergrads (and carefully avoiding eye contact with those whom I’d taught or mentored) for nothing, all because of anticipatory regret.
Let me explain. Every year, Duke undergrads celebrate their Last Day of Classes, or LDOC, with lots of partying and a free outdoor concert. Past LDOC performers have included Kanye West, B.O.B., Macklemore, and Kendrick Lamar. This year, the lineup was a bunch of artists who were not as famous (at least to me; I do not pretend to be even remotely cool about current music), leading students to wonder if the lineup was a ruse. Perhaps they had a secret, better headliner in reserve?
Here’s where Springsteen comes in. As LDOC drew near, rumors started circulating that Bruce Springsteen was going to give a surprise performance to close out LDOC. While this may sound like wild conjecture, let me present the following pieces of evidence.
- Springsteen’s daughter is a senior graduating from Duke this year.
- Springsteen was going to give a concert in nearby Raleigh the very next night and was not slated for an official performance the night of LDOC.
- The slogan for LDOC was “Saving the Best for Last”, and clearly Bruce Springsteen is the best!
Before hearing these rumors, I had no intention of going to LDOC. Being surrounded by partying undergrads makes me feel tired and old and like I should be on the lookout for date rape. After I heard the Springsteen rumors, however, I couldn’t not go.
You see, if I didn’t go, and the rumors were true, I knew I would be devastated. I would forever kick myself about that time Bruce Springsteen surprised Duke with a free concert, and I didn’t go because I was a homebody who didn’t want to stay up past my bedtime.
In other words, I anticipated the regret that I would feel if I had passed on the possibility of seeing Springsteen and he had actually shown up (angry streaming tears face below). This regret would be more painful than the annoyed disappointment I’d feel if I went to LDOC and Springsteen didn’t show (upset tongue-out bunny below). After weighing those possible emotional outcomes, I chose to take a chance and go to LDOC to avoid an angry streaming tears situation.
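That weighing of possible emotional outcomes is essentially a minimax-regret calculation. Here is a toy version (the payoff numbers are entirely invented for illustration): regret is the best payoff you could have had in a given outcome minus the payoff of the choice you actually made, and you pick the action with the smallest worst-case regret.

```python
# Minimax-regret sketch of the go/stay decision (payoffs are made up).
payoff = {
    ("go",   "springsteen"):  10,  # saw the Boss for free
    ("go",   "no-show"):      -3,  # five boring hours with drunken undergrads
    ("stay", "springsteen"): -10,  # devastated, forever kicking myself
    ("stay", "no-show"):       0,  # quiet night at home
}
actions = ["go", "stay"]
outcomes = ["springsteen", "no-show"]

def max_regret(action):
    """Worst-case regret for an action across all possible outcomes."""
    return max(
        max(payoff[(a, o)] for a in actions) - payoff[(action, o)]
        for o in outcomes
    )

best = min(actions, key=max_regret)
print(best, {a: max_regret(a) for a in actions})  # "go" has the smaller worst-case regret
```

With these numbers, staying home risks a regret of 20 (missing Springsteen) while going risks only 3 (a wasted evening), so anticipatory regret pushes you out the door.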
Anticipatory regret is a powerful motivator, and it works best when you know you’ll find out what might have been had you made a different choice. The Dutch Postcode Lottery is a famous example. It randomly picks a street or neighborhood in the Netherlands, and if you live in that selected area, you win big Euros – IF you bought a lottery ticket.
Put yourself in Dutch clogs for a second. Imagine that you didn’t buy a postcode lottery ticket, and then your neighborhood got picked to win. How much regret would you feel, knowing that you missed out on those winnings? That anticipatory regret would likely compel you to buy a postcode lottery ticket, just in case.
In U.S. lotteries, if you don’t buy a ticket, you’ll probably never know how close you came to hitting the big win, and you know that you’ll never know. No anticipatory regret in that case, so less impetus to purchase U.S. lottery tickets.
We often think of decision making as being driven by the emotions we experience while we’re trying to make up our minds. Anticipatory regret is a nice reminder that it’s not just the emotions that we feel in the moment – the emotions we think we might feel alter our decisions as well.
I’m still sad that I didn’t get to see Springsteen perform. But hey, Duke Commencement is this weekend. Maybe I’ll run into him around town while he’s here to watch his daughter graduate.
Zeelenberg, M. (1999). Anticipated regret, expected feedback and behavioral decision making. Journal of Behavioral Decision Making.
These days, the ubiquity of smartphones makes it easy to call up a quick game of Angry Birds or Candy Crush and spend a few fun but ultimately unproductive minutes swiping around the screen. Duke psychology professor Steve Mitroff has discovered that the massive amounts of data generated by millions of people ostensibly wasting time can actually help save lives and protect national security.
One night, while waiting for his one-year-old daughter to fall asleep, Dr. Mitroff found a smartphone game called Airport Scanner, in which players man airport x-ray scanners and search for illegal objects amongst cluttered cartoon suitcases. The game dynamics paralleled Dr. Mitroff’s research on how we find specific visual targets amid complex visual scenes. This caused him to wonder if the cartoonish smartphone game, and the big data that it generated, could serve as an avenue for scientific research.
Fortunately for Dr. Mitroff, Ben Sharpe, the CEO of the company that made Airport Scanner, was pleased to hear that his company’s game could not only provide entertainment but also benefit scientific inquiry. Sharpe had his programmer tweak Airport Scanner’s code so that anonymized gameplay data would automatically download to a server that Dr. Mitroff’s research team could access, and a fruitful collaboration began.
Most recently, Dr. Mitroff has used Airport Scanner to study how we search for “ultra-rare” items, or items that appear in less than 1% of searches (pdf of paper published in Psychological Science). Because people’s attention spans are limited, it is difficult to ask them to perform more than a few hundred searches in a regular laboratory study of visual search. Consequently, items that appear 1% of the time provide just a handful of datapoints per person, and items that appear less frequently are virtually impossible to study in the lab.
Airport Scanner, on the other hand, allowed Dr. Mitroff to study millions of searches, including those with ultra-rare items that appeared as infrequently as 0.08% of the time. Even with such low rates of appearance, the big data generated by Airport Scanner players provided hundreds to thousands of instances for each ultra-rare search – enough datapoints to draw valid statistical inferences.
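The back-of-the-envelope arithmetic makes the big-data advantage obvious (my own illustration; the trial counts below are hypothetical, and only the 0.08% rate comes from the study):

```python
# Why ultra-rare items need big data: expected number of ultra-rare trials
# at a 0.08% appearance rate, in the lab vs. in millions of game plays.
rate = 0.0008                  # ultra-rare item: appears in 0.08% of searches

lab_trials = 300               # a generous in-lab search session (hypothetical)
lab_events = lab_trials * rate
print(lab_events)              # about 0.24 -> less than one ultra-rare trial per person

game_trials = 5_000_000        # hypothetical count of Airport Scanner searches
game_events = game_trials * rate
print(game_events)             # about 4000 -> enough datapoints for real statistics
```

Even a marathon lab session yields essentially zero ultra-rare trials per participant, while millions of gameplay searches yield thousands.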
It turns out that people are disproportionately terrible at finding ultra-rare items. While items that appeared in 1% to 3.7% of searches were found 92% of the time, ultra-rare items that appeared in less than 0.15% of searches were only found 27% of the time, even by experienced players. If you are a cancer patient with an ultra-rare cancer marker, or a Transportation Security Officer (TSO) searching for an ultra-rare terrorist threat in packed luggage, this low success rate is bad news with life-threatening implications.
Dr. Mitroff’s research points to a way to remedy this poor result for ultra-rare item searches: artificially increase the frequency of ultra-rare items and thus make them no longer ultra-rare. Airport scanners already project false illegal items onto luggage x-rays for TSOs to find. This keeps TSOs vigilant and provides performance feedback. By tweaking existing algorithms to project more ultra-rare items – more fake bombs and fewer fake forgotten pocketknives – Dr. Mitroff notes that we can change “how likely [TSOs] are to see something.” Similar techniques can be applied to cancer screenings to give radiologists more practice detecting ultra-rare cancer markers.
So, the next time you find yourself wasting time or procrastinating with a smartphone game, you can reassure yourself that you are not just wasting time – you are working to generate data that could benefit science, or even save lives.
Mitroff, S., & Biggs, A. (2013). The ultra-rare-item effect: Visual search for exceedingly rare items is highly susceptible to error. Psychological Science, 25(1), 284-289. DOI: 10.1177/0956797613504221
The paper (it’s open access, so no paywalls!): Grahn, J.A. & Rowe, J.B. (2013) Finding and feeling the musical beat: Striatal dissociations between detection and prediction of regularity. Cerebral Cortex, 23(4), 913-921.
The abstract: I’m going to start putting these at the end of my post because I am conceited and think you should read my explanation first.
At some point in our lives, we have likely all encountered That Guy or That Gal who just can’t seem to dance to a beat. That Guy is often found on the dance floor at a wedding reception, flailing his limbs with wild abandon but no regularity. That Gal may be in your Zumba class, perpetually a quarter to a half-second behind the music while the rest of the class is shaking their hips in sync. Why can’t That Guy and That Gal just get with the beat?
Being able to recognize a musical beat requires two things. First, one must be able to find the beat – the steady time intervals that underlie a piece of music or rhythmic sequence. Second, one must be able to maintain an internal sense of that beat as the music continues.
The ability to find a beat lets you start dancing in time to the music, while the ability to maintain a beat lets you continue moving your limbs in a regular fashion as the music plays on. Researchers at the University of Western Ontario and Cambridge University have found that we use one part of our brain, the cerebellum, to look for the beat, and another part of our brain, the basal ganglia, to maintain it.
In the study by neuroscientists Jessica Grahn and James Rowe, 24 healthy adults listened to a series of rhythms while their brain function was measured using magnetic resonance imaging. Some of the rhythms had underlying beats of a variety of tempos. Others had no underlying beat whatsoever.
In a clever design twist, the different rhythms were consecutively strung together so that sometimes the beat stayed the same while the rhythm changed, sometimes the beat became faster or slower, sometimes the beat disappeared entirely, and sometimes a beat emerged from a beat-less rhythm. This design allowed the experimenters to measure how brain function changed when participants searched for a beat when there was none to find or when an existing beat was continued or altered in a new rhythm.
The cerebellum, a brain structure that sits right above the spinal cord, was more active when participants listened to rhythms that lacked a beat than when they listened to rhythms with a beat. The neuroscientists interpreted this to mean that the cerebellum is looking for a beat before a regular rhythm can be detected. Scientists already know that the cerebellum helps coordinate our motor movements, and this study suggests that it does so in part by paying extra attention to irregular timing.
In contrast, another part of the brain, the basal ganglia, were more active when participants listened to beat rhythms than when they listened to non-beat rhythms. Furthermore, their activity level differed depending on the relative tempos of the rhythms: they were active when a beat changed tempo to become faster or slower, even more active when a beat maintained the same tempo but switched to a new rhythm, and most active when both the current beat and current rhythm stayed the same.
Finally, the basal ganglia did not differentiate between a non-beat rhythm and the first appearance of a rhythm with a beat, indicating that they do not register the emerging presence of a beat when there was none before. This graded level of activity suggests that the basal ganglia work to maintain our sense of the beat and favor continuity and similarity – they work hardest when they meet a reliable rhythm and slack off a bit as the rhythm changes.
Scientists already know that the basal ganglia play an important role in generating and maintaining movement, as well as in learning and reward processing. Movement disorders such as Parkinson’s disease, in which patients have difficulty initiating movement, and Huntington’s disease, in which patients have difficulty stopping and controlling movements, are both linked to damage to the basal ganglia.
Understanding exactly how different parts of the brain enable us to control and coordinate our movements helps scientists figure out what deficits to expect from people with neurological damage and how those deficits could be treated. As a result of this study, we now know that healthy basal ganglia let us maintain our sense of regular rhythm – a sense that is important for anything that involves syncing movement to a regular beat, including walking, speaking, and, of course, dancing like a champ.
The abstract: Perception of temporal patterns is critical for speech, movement, and music. In the auditory domain, perception of a regular pulse, or beat, within a sequence of temporal intervals is associated with basal ganglia activity. Two alternative accounts of this striatal activity are possible: “searching” for temporal regularity in early stimulus processing stages or “prediction” of the timing of future tones after the beat is found (relying on continuation of an internally generated beat). To resolve between these accounts, we used functional magnetic resonance imaging (fMRI) to investigate different stages of beat perception. Participants heard a series of beat and nonbeat (irregular) monotone sequences. For each sequence, the preceding sequence provided a temporal beat context for the following sequence. Beat sequences were preceded by nonbeat sequences, requiring the beat to be found anew (“beat finding” condition), or by beat sequences with the same beat rate (“beat continuation”), or a different rate (“beat adjustment”). Detection of regularity is highest during beat finding, whereas generation and prediction are highest during beat continuation. We found the greatest striatal activity for beat continuation, less for beat adjustment, and the least for beat finding. Thus, the basal ganglia’s response profile suggests a role in beat prediction, not in beat finding.
Grahn, J., & Rowe, J. (2012). Finding and feeling the musical beat: Striatal dissociations between detection and prediction of regularity. Cerebral Cortex, 23(4), 913-921. DOI: 10.1093/cercor/bhs083
Last week, I was tweeting up a storm at the ScienceOnline Together unconference, which was a 30-minute drive away. This week, I’m jetting off to Shanghai for a neuroeconomics conference, which is 24+ hours of total travel time (blech).
I’m not so excited about the epic travel boredom, but I am excited to get to meet some of my science heroes! Paul Glimcher published some of the first neuroeconomics papers ever and hasn’t stopped publishing since. He’s one of the authors on the adolescent decision-making paper that I wrote about here. Elizabeth Phelps has done some amazing work on the influence of emotions on decision-making, as well as some neat stuff looking at how fear shapes memories.
Funny that they’re at NYU, but I’m going all the way to China to hear them speak. Hopefully I will learn a lot. I definitely plan to eat a lot. And I hope I don’t breathe in pollution a lot. I will not be Tweeting a lot because Twitter is banned, as are Facebook and the NY Times. Sigh, China.
A few years ago, my then boss/PI and I submitted a paper to the journal Current Biology. It was rejected because they thought our paper was too specialized and not high impact enough. We were eventually published in another journal that had a higher impact factor than Current Biology (self-righteous science five!).
I share all this not just to humblebrag but to note that my feelings about Current Biology are colored by that rejection. You have been warned.
abstract summary (the article is super short, just over a page, so I guess it gets a summary instead of an abstract): “After the birth of a second child many parents report that their first child appears to grow suddenly and substantially larger. Why is this? One possibility is that this is simply a contrast effect that stems from comparing the older sibling to the new baby: “everything looks big compared to a newborn”. But, such reports could be the result of a far more interesting biopsychological phenomenon. More specifically, we hypothesized that human parents are subject to a kind of ‘baby illusion’ under which they routinely misperceive their youngest child as smaller than he/she really is, regardless of the child’s age. Then, when a new baby is born, this illusion ceases and the parent sees, for the first time, the erstwhile youngest at its true size. By this account the apparent growth results from the mismatch of the parent’s now accurate perception with the stored memories of earlier misperceptions. Here we report that the baby illusion is a real and commonly occurring effect that recasts our understanding of how infantile features motivate parental caregiving.”
Hold up, y’all. Let’s see that first sentence again. “After the birth of a second child[,] many parents report that their first child appears to grow suddenly and substantially larger.”
This is a thing? I have never heard of this sudden and substantial increase in firstborn child size being a thing. I also googled the quote they gave in the summary, “everything looks big compared to a newborn”, and only got hits for this paper or pieces about this paper, so I am not convinced that “everyone knows newborns make things look giant” is a thing either. Maybe it’s a thing in the UK or Canada, where the authors are from?
These phenomena that the paper seems to refer to as common knowledge may actually be based on a survey from the paper itself: “Over 70% of respondents indicated that the erstwhile-youngest child suddenly appeared bigger after the new infant’s birth.”
Survey data aren’t much to hang your hat on, especially if they ask weird questions like, “Following the birth of your youngest child, did your other child suddenly appear bigger?” BUT the experiment based on the survey results has some pretty compelling data. When empirically tested, parents do think their youngest children are physically smaller than they actually are, while not-youngest children are pretty accurately sized.
The study design was elegantly simple (and annoyingly so; why are my studies so complicated?): they had parents draw on a wall how tall they thought their kids were. The experimenters then compared children’s actual heights to those estimated by their parents. Lo and behold (see above), parents estimated the heights of kids who were an elder sibling, meaning that they were not the youngest child, pretty closely to those kids’ real heights.
Kids who were youngest siblings, on the other hand, were consistently estimated to be smaller than they actually were. Note that this group also included only children (kids with no siblings); additional analyses showed that only children and youngest children with older siblings didn’t differ from each other.
Wha?? The authors call this a “baby illusion” and speculate that it may be evolutionarily beneficial for youngest children, who need the most parental attention, to be perceived as smaller and weaker than they actually are. Interesting speculation, though I am a bit dubious. Runts and the weakest of the litter are usually ignored or left to die in the wild to save parental resources for offspring more likely to survive, so it may not always be evolutionarily adaptive to be perceived as weak. I wonder if the “baby illusion” doesn’t apply for twins, then?
I hereby propose the testing of a “puppy illusion”, a “kitten illusion”, and a “baby panda illusion”. Please leave a comment below if you will give me funding to do science with all the cute baby animals. I already have extensive experience watching Bao Bao on the National Zoo Panda Cam.
Yup, baby pandas definitely look tiny. I estimate that I can put her in my pocket and take her home with me.
Kaufman, J., Tarasuik, J. C., Dafner, L., Russell, J., Marshall, S., & Meyer, D. (2013). Parental misperception of youngest child size. Current Biology, 23(24). PMID: 24355780