Smartphone games teach scientists how to save lives

These days, the ubiquity of smartphones makes it easy to call up a quick game of Angry Birds or Candy Crush and spend a few fun but ultimately unproductive minutes swiping around the screen. Duke psychology professor Steve Mitroff has discovered that the massive amounts of data generated by millions of people ostensibly wasting time can actually help save lives and protect national security.

One night, while waiting for his one-year-old daughter to fall asleep, Dr. Mitroff found a smartphone game called Airport Scanner, in which players man airport x-ray scanners and search for illegal objects amongst cluttered cartoon suitcases. The game dynamics paralleled Dr. Mitroff’s research on how we find specific visual targets amid complex visual scenes. This caused him to wonder if the cartoonish smartphone game, and the big data that it generated, could serve as an avenue for scientific research.


Figure 1 from the paper: Screenshots of Airport Scanner game’s x-ray baggage displays. Can you spot the illegal items?

Fortunately for Dr. Mitroff, Ben Sharpe, the CEO of the company that made Airport Scanner, was pleased to hear that his company’s game could not only provide entertainment but also benefit scientific inquiry. Sharpe had his programmer tweak Airport Scanner’s code so that anonymized gameplay data would automatically download to a server that Dr. Mitroff’s research team could access, and a fruitful collaboration began.


A real x-ray baggage screening display. Can you spot the illegal items? From Wikimedia Commons user Duke, via CC-BY-2.5 license

Most recently, Dr. Mitroff has used Airport Scanner to study how we search for “ultra-rare” items, or items that appear in less than 1% of searches (pdf of paper published in Psychological Science). In regular laboratory studies of visual search, the rarest testable items appear 1% of the time. Because people’s attention spans are limited, it is difficult to ask people to perform more than a few hundred searches in the lab. Consequently, items that appear 1% of the time provide just a handful of datapoints per person, and items that appear less frequently are virtually impossible to study.

Airport Scanner, on the other hand, allowed Dr. Mitroff to study millions of searches, including those with ultra-rare items that appeared as infrequently as 0.08% of the time. Even with such low rates of appearance, the big data generated by Airport Scanner players provided hundreds to thousands of instances for each ultra-rare search – enough datapoints to draw valid statistical inferences.
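The arithmetic behind that claim is worth making concrete. Here is a back-of-the-envelope sketch (the totals below are made up for illustration, not figures from the paper):

```python
# Back-of-the-envelope: expected number of data points for a rare search
# target. All totals below are illustrative, not figures from the paper.

def expected_instances(total_searches, appearance_rate):
    """Expected number of searches in which the target appears."""
    return total_searches * appearance_rate

# Lab study: a few hundred trials at 1% rarity -> a handful of data points
lab_count = expected_instances(500, 0.01)            # ~5 appearances

# Smartphone game: millions of searches at 0.08% rarity -> plenty of data
game_count = expected_instances(2_000_000, 0.0008)   # ~1,600 appearances

print(lab_count, game_count)
```

Same rarity math, wildly different sample sizes: the game's sheer volume is what turns an unstudiable item into a studiable one.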

Figure 3 from the paper: How often targets (illegal items) were detected plotted by how frequently they appeared in the game.


It turns out that people are disproportionately terrible at finding ultra-rare items. While items that appeared in 1% to 3.7% of searches were found 92% of the time, ultra-rare items that appeared in less than 0.15% of searches were only found 27% of the time, even by experienced players. If you are a cancer patient with an ultra-rare cancer marker, or a Transportation Security Officer (TSO) searching for an ultra-rare terrorist threat in packed luggage, this low success rate is bad news with life-threatening implications.

Dr. Mitroff’s research points to a way to remedy this poor result for ultra-rare item searches: artificially increase the frequency of ultra-rare items and thus make them no longer ultra-rare. Airport scanners already project false illegal items onto luggage x-rays for TSOs to find. This keeps TSOs vigilant and provides performance feedback. By tweaking existing algorithms to project more ultra-rare items – more fake bombs and fewer fake forgotten pocketknives – Dr. Mitroff notes that we can change “how likely [TSOs] are to see something.” Similar techniques can be applied to cancer screenings to give radiologists more practice detecting ultra-rare cancer markers.

So, the next time you find yourself wasting time or procrastinating with a smartphone game, you can reassure yourself that you are not just wasting time – you are working to generate data that could benefit science, or even save lives.

Mitroff, S., & Biggs, A. (2013). The Ultra-Rare-Item Effect: Visual Search for Exceedingly Rare Items Is Highly Susceptible to Error. Psychological Science, 25(1), 284–289. DOI: 10.1177/0956797613504221

How your brain helps you get your groove on

I wrote this for a science writing competition. I did not win (here is the piece that won for the journal article I chose to explain), so I am publishing my entry here.

The paper (it’s open access, so no paywalls!): Grahn, J.A. & Rowe, J.B. (2013). Finding and feeling the musical beat: Striatal dissociations between detection and prediction of regularity. Cerebral Cortex, 23(4), 913–921.

The abstract: I’m going to start putting these at the end of my post because I am conceited and think you should read my explanation first.

At some point in our lives, we have likely all encountered That Guy or That Gal who just can’t seem to dance to a beat. That Guy is often found on the dance floor at a wedding reception, flailing his limbs with wild abandon but no regularity. That Gal may be in your Zumba class, perpetually a quarter to a half-second behind the music while the rest of the class is shaking their hips in sync. Why can’t That Guy and That Gal just get with the beat?

Hint: look towards the back of the room.

Did you find That Gal? Image from Brittany Carlson, Wikimedia Commons

Being able to recognize a musical beat requires two things. First, one must be able to find the beat – the steady time intervals that underlie a piece of music or rhythmic sequence. Second, one must be able to maintain an internal sense of that beat as the music continues.

The ability to find a beat lets you start dancing in time to the music, while the ability to maintain a beat lets you continue moving your limbs in a regular fashion as the music continues. Researchers at the University of Western Ontario and Cambridge University have found that we use one part of our brain, the cerebellum, to look for the beat, and another part of our brain, the basal ganglia, to maintain it.

In the study by Drs. Jessica Grahn and James Rowe, 24 healthy adults listened to a series of rhythms while their brain function was measured using magnetic resonance imaging. Some of the rhythms had underlying beats while others did not, and the rhythms with a beat were of a variety of tempos.

In a clever design twist, the different rhythms were consecutively strung together so that sometimes the beat stayed the same while the rhythm changed, sometimes the beat became faster or slower, sometimes the beat disappeared entirely, and sometimes a beat emerged from a beat-less rhythm. This design allowed the experimenters to measure how brain function changed when participants searched for a beat when there was none to find or when an existing beat was continued or altered in a new rhythm.

I also developed a strong lower back.

I developed excellent beat finding skills on my college drumline.

When participants listened to rhythms that lacked a beat, their cerebellum, a brain structure that sits right above the spinal cord, was more active than when participants listened to rhythms with a beat. This finding suggests that the cerebellum is looking for a beat before a regular rhythm can be detected. Scientists already know that the cerebellum helps coordinate our motor movements, and this study suggests that it does so in part by paying extra attention to irregular timing.

In contrast, another part of the brain called the basal ganglia was more active when listening to beat rhythms than when listening to non-beat rhythms. Furthermore, its activity level differed depending on the relative tempos of the rhythms. It was active when a beat changed tempo to become faster or slower, it was even more active when a beat maintained the same tempo but switched to a new rhythm, and it was most active when both the current beat and current rhythm stayed the same.

My, what lovely ROIs you have!

The basal ganglia (in hot colors) maintains our sense of the beat. From Figure 7 of the paper.

Finally, the basal ganglia did not differentiate between a non-beat rhythm and the first appearance of a rhythm with a beat, indicating that the basal ganglia do not pay attention to the emerging presence of a beat when there was none before. This graded level of activity suggests that the basal ganglia work to maintain our sense of the beat and favor continuity and similarity – they work the hardest when they meet a reliable rhythm and slack off a bit as the rhythm changes.

Note that putamen is NOT pronounced put-a-men.

Structures in the left and right basal ganglia show no significant responses to the presence of a new beat but do show significant responses to existing beats, even if they’re changing. From Figure 7 in the paper.

Scientists already know that the basal ganglia play an important role in generating and maintaining movement, as well as in learning and reward processing. Movement disorders such as Parkinson’s disease, in which patients have difficulty initiating movement, and Huntington’s disease, in which patients have difficulty stopping and controlling movements, are both linked to damage to the basal ganglia.

Understanding exactly how different parts of the brain enable us to control and coordinate our movements helps scientists figure out what deficits to expect from people with neurological damage and how those deficits could be treated. As a result of this study, we now know that healthy basal ganglia let us maintain our sense of regular rhythm – a sense that is important for anything that involves syncing movement to a regular beat, including walking, speaking, and, of course, dancing like a champ.

The abstract: Perception of temporal patterns is critical for speech, movement, and music. In the auditory domain, perception of a regular pulse, or beat, within a sequence of temporal intervals is associated with basal ganglia activity. Two alternative accounts of this striatal activity are possible: “searching” for temporal regularity in early stimulus processing stages or “prediction” of the timing of future tones after the beat is found (relying on continuation of an internally generated beat). To resolve between these accounts, we used functional magnetic resonance imaging (fMRI) to investigate different stages of beat perception. Participants heard a series of beat and nonbeat (irregular) monotone sequences. For each sequence, the preceding sequence provided a temporal beat context for the following sequence. Beat sequences were preceded by nonbeat sequences, requiring the beat to be found anew (“beat finding” condition), or by beat sequences with the same beat rate (“beat continuation”), or a different rate (“beat adjustment”). Detection of regularity is highest during beat finding, whereas generation and prediction are highest during beat continuation. We found the greatest striatal activity for beat continuation, less for beat adjustment, and the least for beat finding. Thus, the basal ganglia’s response profile suggests a role in beat prediction, not in beat finding.

Grahn, J., & Rowe, J. (2012). Finding and Feeling the Musical Beat: Striatal Dissociations between Detection and Prediction of Regularity. Cerebral Cortex, 23(4), 913–921. DOI: 10.1093/cercor/bhs083

Off to Shanghai!

Last week, I was tweeting up a storm at the ScienceOnline Together unconference, which was a 30-minute drive away. This week, I’m jetting off to Shanghai for a neuroeconomics conference, which is 24+ hours of total travel time (blech).

I’m not so excited about the epic travel boredom, but I am excited to get to meet some of my science heroes! Paul Glimcher published what is probably the first neuroeconomics paper ever and hasn’t stopped publishing since. He’s one of the authors on the adolescent decision-making paper that I wrote about here. Elizabeth Phelps has done some amazing work on the influence of emotions on decision-making, as well as some neat stuff looking at how fear shapes memories.

Funny that they’re at NYU, but I’m going all the way to China to hear them speak. Hopefully I will learn a lot. I definitely plan to eat a lot. And I hope I don’t breathe in pollution a lot. I will not be Tweeting a lot because Twitter is banned, as are Facebook and the NY Times. Sigh, China.


Children in life may be larger than they appear

A few years ago, my then boss/PI and I submitted a paper to the journal Current Biology. It was rejected because they thought our paper was too specialized and not high impact enough. We were eventually published in another journal that had a higher impact factor than Current Biology (self-righteous science five!).


I share all this not just to humblebrag but to note that my feelings about Current Biology are colored by that rejection. You have been warned.

The paper: Kaufman, J., Tarasuik, J.C., Dafner, L., Russell, J., Marshall, S., & Meyer, D. (2013). Parental misperception of youngest child size. Current Biology, 23(24), R1085-R1086.

The abstract summary (the article is super short, just over a page, so I guess it gets a summary instead of an abstract): “After the birth of a second child many parents report that their first child appears to grow suddenly and substantially larger. Why is this? One possibility is that this is simply a contrast effect that stems from comparing the older sibling to the new baby: “everything looks big compared to a newborn”. But, such reports could be the result of a far more interesting biopsychological phenomenon. More specifically, we hypothesized that human parents are subject to a kind of ‘baby illusion’ under which they routinely misperceive their youngest child as smaller than he/she really is, regardless of the child’s age. Then, when a new baby is born, this illusion ceases and the parent sees, for the first time, the erstwhile youngest at its true size. By this account the apparent growth results from the mismatch of the parent’s now accurate perception with the stored memories of earlier misperceptions. Here we report that the baby illusion is a real and commonly occurring effect that recasts our understanding of how infantile features motivate parental caregiving [1].”

Hold up, y’all. Let’s see that first sentence again. “After the birth of a second child[,] many parents report that their first child appears to grow suddenly and substantially larger.”

This is a thing? I have never heard of this sudden and substantial increase in firstborn child size being a thing. I also googled the quote they gave in the summary, “everything looks big compared to a newborn”, and only got hits for this paper or pieces about this paper, so I am not convinced that “everyone knows newborns make things look giant” is a thing either. Maybe it’s a thing in the UK or Canada, where the authors are from?

These phenomena that the paper seems to refer to as common knowledge may actually be based on a survey from the paper itself: “Over 70% of respondents indicated that the erstwhile-youngest child suddenly appeared bigger after the new infant’s birth.”

Survey data aren’t much to hang your hat on, especially if they ask weird questions like, “Following the birth of your youngest child, did your other child suddenly appear bigger?” BUT the experiment based on the survey results has some pretty compelling data. When empirically tested, parents do think their youngest children are physically smaller than they actually are, while not-youngest children are pretty accurately sized.

Figure 1 from the paper


The study design was elegantly simple (and annoyingly so; why are my studies so complicated?): parents drew on a wall how tall they thought their kids were, and the experimenters then compared the children’s actual heights to those parental estimates. Lo and behold (see above), parents estimated the heights of elder siblings (kids who were not the youngest child) pretty close to those kids’ real heights.

Kids who were the youngest sibling, on the other hand, were consistently estimated to be smaller than they actually were. Note that this group also included only children (kids with no siblings); additional analyses showed that only children and youngest children with older siblings didn’t differ from each other.

Wha?? The authors call this a “baby illusion” and speculate that it may be evolutionarily beneficial for youngest children, who need the most parental attention, to be perceived as smaller and weaker than they actually are. Interesting speculation, though I am a bit dubious. In the wild, runts and the weakest of the litter are usually ignored or left to die to save parental resources for the offspring most likely to survive, so it may not always be evolutionarily adaptive to be perceived as weak. I wonder whether the “baby illusion” applies to twins, then?

I hereby propose the testing of a “puppy illusion”, a “kitten illusion”, and a “baby panda illusion“. Please leave a comment below if you will give me funding to do science with all the cute baby animals. I already have extensive experience watching Bao Bao on the National Zoo Panda Cam.


Personal screen shot from Panda Cam

Yup, baby pandas definitely look tiny. I estimate that I can put her in my pocket and take her home with me.

Kaufman, J., Tarasuik, J.C., Dafner, L., Russell, J., Marshall, S., & Meyer, D. (2013). Parental misperception of youngest child size. Current Biology, 23(24), R1085–R1086. PMID: 24355780

No hiding behind a pseudonym

I’m currently taking a science communications course. I’m learning quite a bit while also feeling increasing guilt about not science blogging more. Today we had a guest lecture by David Kroll, a science blogger/science writing professor/pharmacologist/museum communicator/jack of all trades. He suggested that, as graduate students, we blog only under a pseudonym so that we can remain anonymous.


Photo credit Jeff MacInnes

Oops. Too late for that. I had considered writing anonymously when I first started this blog, especially since I chose to write about orgasm faces in my first real post. But then I decided that, as I was planning to be civil and thoughtful in my blogging, I shouldn’t have to hide my true identity. There would be no need to live in fear of being outed, and no shit would hit the fan if people learned who I really am. Bradley Voytek thinks that his online presence helped him land a job and increases the number of times his papers get cited, and I promised the NSF that I care about the broader impacts of promoting scientific knowledge to the general public, so why hide?

On the other hand, some academics frown upon pontificating to the masses as a waste of time that would be better spent on research/grant writing. In his talk, David mentioned that some of our dissertation committee members could fall in that boat and give us a hard time. Those committee members vote on whether we get our PhDs, so keeping them satisfied is pretty important.

I’d like to think that my committee members have better things to do than drop in on my little corner of the internet. Plus, writing under my real name forces me to keep my swears down, my criticisms civil, and (hopefully) my thoughts well-formed. Let’s hope I don’t regret this someday down the line…

Seriously, Nature?

First there was Womanspace, which basically said “Hey Nature readers, isn’t it funny how your wife is good at shopping and you, middle-aged man who is representative of all of science scholarship, are not?”

Then there was the recent publication of a letter from Lukas Koube pointing out that there are more male reviewers because women leave science to stay home and have kids. NBD; don’t get your knickers in a twist. I’m gonna go ahead and reproduce that letter that you’ve hidden behind a paywall.

“The publication of research papers should be based on quality and merit, so the gender balance of authors is not relevant in the same way as it might be for commissioned writers (see Nature 504,188; 2013). Neither is the disproportionate number of male reviewers evidence of gender bias.

“Having young children may prevent a scientist from spending as much time publishing, applying for grants and advancing their career as some of their colleagues. Because it is usually women who stay at home with their children, journals end up with more male authors on research articles. The effect is exacerbated in fast-moving fields, in which taking even a year out threatens to leave a researcher far behind.

“This means that there are likely to be more men in the pool of potential referees.”

Now there is a senior Nature science editor outing Dr. Isis, an anonymous science blogger/junior faculty member who called him on his shit and thus pissed him off. Clearer recap here.

Slow clap, Nature. Slow clap.

Chocolate consumption and rampaging killers

In our modern, wired world, scientists now have access to massive amounts of data, and it’s become popular to mine these datasets for correlations and spin the results into a science story. For example, countries with grammar rules that strongly differentiate between the present and the future have lower savings rates. Ergo, language affects how close or distant the future seems, which in turn affects how much people decide to save for the future.

It’s an elegant tale, but remember the first rule of critical thinking (pay attention, future GRE essay section takers!): correlation does not equal causation. Could it be that correlations that seem logical – especially after we have found them and have had time to come up with a plausible-sounding explanation for them – are in fact just accidental? Scientists call these “spurious correlations”; the math seems to check out (more on this later), but they’re not meaningful.

Seán Roberts and James Winters recently published a cautionary tale of such correlation hunting in the open access journal PLoS One. Using standard techniques that have often been applied to such giant datasets, they found several spurious correlations. 

My favorite (thanks to my other blog) is shown in Figure 10, below: per capita chocolate consumption correlates with per capita number of serial and rampaging killers!

Figure 10

I can think of 4 explanations for this finding:

1) Chocolate consumption causes people to become rampaging killers.

2) Rampaging killers consume massive quantities of chocolate, enough to up a whole country’s per capita chocolate consumption.

3) Some third variable causes both chocolate consumption to increase and people to become rampaging killers.

4) This statistically significant correlation was, in fact, a statistical fluke.

I think we can pretty safely rule out 1 and 2 as patently ridiculous to nigh impossible. Explanation 3 has some potential and is fun for speculation. Maybe people with the means to buy and eat chocolate, a luxury good, also have the means to go on killing rampages. Being a serial killer takes some time and investment!

I’m going to go ahead and guess that this particular correlation likely falls under Explanation 4: a statistical fluke. In statistics, we’re never able to prove that something is true. Instead, we can only assign an X percent chance that something, such as a correlation, could have happened due to random chance alone. When X is sufficiently small, we decide that the correlation didn’t happen due to random chance and instead reflects a meaningful relationship.

In the chocolate and rampaging killers example above, statistics said that there was just a 2% chance that the correlation was due to random chance. That’s a pretty tiny chance, so it seems unlikely that it just came about randomly, right?

If you’re only looking at one correlation, then yes. But the problem with these giant datasets is that they let scientists look for lots and lots of correlations between lots of different variables. When you look at 100 correlations, a 2% chance means that 2 of those correlations would look statistically significant but would in fact be totally random. If you’re looking for relationships between hundreds of variables, that could be thousands and thousands of correlations, and all of a sudden all sorts of random relationships come out as being statistically significant, like a link between chocolate and serial killing.
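You can watch this happen with a minimal simulation (my own illustrative sketch, not the paper’s analysis): generate pairs of pure-noise variables, correlate each pair, and count how many clear a roughly p < 0.05 bar by chance alone.

```python
# Multiple-comparisons trap: correlate many pairs of pure-noise variables
# and count how many look "significant" by chance. Illustrative only.
import random
import statistics

random.seed(42)

def pearson_r(xs, ys):
    """Sample Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    n = len(xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    return cov / (sx * sy)

n_countries = 50   # data points per variable
n_tests = 1000     # number of variable pairs we go fishing through
threshold = 0.28   # |r| above this is roughly "p < 0.05" for n = 50

false_positives = 0
for _ in range(n_tests):
    xs = [random.gauss(0, 1) for _ in range(n_countries)]
    ys = [random.gauss(0, 1) for _ in range(n_countries)]
    if abs(pearson_r(xs, ys)) > threshold:
        false_positives += 1

# With pure noise, roughly 5% of tests still clear the bar.
print(false_positives)
```

Every one of those hits is a chocolate-and-rampaging-killers story waiting to be told; the noise guarantees them, and fishing through enough variable pairs guarantees you’ll find them.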


Figure 9: Countries with acacia trees have more road fatalities

This is a pretty common problem in science, and it’s one that’s not limited to giant datasets. If you have 100 labs investigating the same thing, say a link between a gene A and disease B, two of those labs may find a statistically significant relationship that’s really just accidental. The two labs that find a significant result publish their data, the 98 labs that found no significant relationships don’t, and the written record only shows the positive findings that gene A is related to disease B.

How do we fix this problem? In some cases, we can’t; we just have to be aware of what statistically significant really means and interpret data with a discerning eye. In other cases, we can turn to the good old scientific method.

In our chocolate consumption and rampaging killers example, say we wanted to test if Explanation 1 (chocolate consumption causes people to become rampaging killers) were true. We could take 1000 pairs of identical twins, with each pair reared as identically as possible. Give 1 twin in each pair a chocolate bar a day, don’t let the other twin have any chocolate, and wait and see which twins turn into rampaging killers.

Impractical? Yes. Unethical? If we really thought chocolate could turn people into rampaging killers, it would be highly unethical! But scientifically sound? Yes, because we’re only manipulating a single variable and holding all else constant!

Roberts, S., & Winters, J. (2013). Linguistic Diversity and Traffic Accidents: Lessons from Statistical Studies of Cultural Traits. PLoS ONE. DOI: 10.1371/journal.pone.0070902