In my last post, I promised that my next pop science write-up would be about my own research. That piece has been written for months, but I am working on getting it published in a legit media outlet, so stay tuned.
In the meantime, I was inspired by this infographic about spotting bad science to create my own infographic about How to Spot Stellar Brain Science. The content is based on an article by Russ Poldrack, one of my neuroscience heroes.
fMRI studies can generate flashy popular press coverage, and studies have shown that just adding brain images makes scientific reports seem more credible. Thus, it’s important to know how to critically evaluate the quality of neuroimaging studies.
Dr. Poldrack’s piece is a great teaching tool for undergraduates or anyone new to cognitive neuroscience. Last semester, I had my students read Dr. Poldrack’s piece and then analyze this extremely popular NY Times Op-Ed about how fMRI data purportedly show that we literally love our iPhones.
I have done a terrible job at keeping up my science writing. My folder of cool studies to write about keeps growing, and my spare time to do such writing has continued to dwindle. Here are some of the science-related things I have done in 2015:
- Published my first first-author paper about how kids do not fear the unknown. That will be my next pop science write-up!
- Led the organization and execution of ComSciCon-Triangle 2015, a local, 2-day science communications workshop for other grad students. You can see how that went on Twitter @ComSciConTri and #ComSciCon.
- Had my first undergraduate thesis student successfully write and defend her thesis! She gets to Graduate with Distinction next week. I am so proud of her.
- Was published in Science twice (in January and April). Unfortunately, it was not my research getting into the pages of Science, but it’s still fun to say that I’ve been published in one of the top scientific journals. And you can download my face as a PowerPoint slide for teaching at the Science website, which I find hilarious.
Here’s what’s on deck for the rest of 2015:
- Get the ball rolling on ComSciCon-Triangle 2016!
- Attend the Summer Institute on Bounded Rationality in Berlin, Germany
- Attend the Summer Institute in Cognitive Neuroscience in Santa Barbara, California
- Move to the Netherlands and spend 4 months conducting research at Leiden University on an NSF GROW fellowship
Recently, I was fortunate enough to attend a dinner with Dr. Kent Kiehl, a leading researcher on psychopathy and self-described Psychopath Whisperer. Dinner was billed as story-time about psychopaths, and Dr. Kiehl did not disappoint.
As a tasty Mediterranean meal digested in my belly, I listened in rapt attention (and significant unease) as Dr. Kiehl described his latest “Perfect 40”, a convicted serial killer who scored the maximum 40 out of 40 possible points on the Hare Psychopathy Checklist.
I don’t think Law & Order: SVU could have dreamt up a more chilling evildoer. This “Perfect 40” committed rape, incestuous rape, hebephilic rape, and at least half a dozen murders, all without a shred of guilt.
In addition to telling enough stories to give me nightmares for a month, Dr. Kiehl also answered questions about the science behind psychopathy. Here’s what I learned:
1. Psychopaths lack empathy but not theory of mind: Though psychopathy and autism are both social-processing disorders that primarily affect men, they are quite distinct. People with autism lack theory of mind abilities – that is, they cannot reason about other people’s thoughts and intentions.
Psychopaths, on the other hand, generally have pretty good theory of mind. In some cases, they can actually leverage their theory of mind abilities to manipulate others into doing what they want. Instead, psychopaths are distinguished by their lack of empathy. They can’t tell and don’t care what others are feeling. This is what enables the most dangerous psychopaths to commit heinous crimes: zero guilt about what they do to their victims.
2. Psychopaths don’t anticipate punishment. Pretend your television remote is broken, so that whenever you use it to change channels, it makes a little buzzing sound before delivering a painful electric zap. After just a few zaps, you’d learn to associate that buzzing sound with the pain to come. At that point, the mere sound of the buzz would cause your body to show a stress response: your palms would get sweaty and your heart rate would rise in anticipation of the expected zap.
This unconscious learning of the association between cue and punishment is called fear conditioning, and it happens in just about all animals – but not in psychopaths! In the broken remote example, psychopaths would consciously know that the buzz would be followed by a zap, but their bodies wouldn’t show any anticipatory stress response to the buzz. This indicates that something’s gone wonky with psychopaths’ learning and punishment-processing circuits.
3. Psychopaths may not experience withdrawal after halting drug use. Lots of psychopaths abuse drugs and alcohol, which is unsurprising, since they’re impulsive pleasure seekers who ignore consequences. I was astonished, however, to hear that psychopaths don’t seem to experience withdrawal symptoms after stopping their drug use. It could mean that psychopaths don’t get addicted to drugs in the first place.
Unfortunately, this phenomenon has not been well-studied because it is logistically challenging to do so. It’s obviously unethical to test this in an experimental setting. You’d have to give psychopaths lots of dangerous drugs and then suddenly stop. Consequently, the only way to study this is to find drug-abusing psychopaths in the real world and strictly monitor them while they voluntarily quit their drug use – not exactly an easy thing to do!
Dr. Kiehl had anecdotal examples of psychopaths claiming no withdrawal symptoms. It’s possible that psychopaths are either lying about going through withdrawal, or that psychopaths’ bodies go through a physical withdrawal experience that they fail to consciously notice or remember.
I found this open question especially fascinating, as it could point to ideas for addiction treatment. Some good could come from psychopaths, at least indirectly.
4. Research with psychopaths is dangerous and scary. Administration of the real psychopath test (not the pop science version that Dr. Kiehl dismissed as pseudoscience) involves conducting an interview. In Dr. Kiehl’s work involving incarcerated psychopaths, he often found himself alone in a prison room with a psychopath.
Sometimes the psychopaths would think it was amusing to tell Dr. Kiehl that they could easily kill him before he could summon the help of a guard. At least one got too close for comfort, simply to show Dr. Kiehl that it could be done. Needless to say, it takes a special kind of personality to be able to do the research Dr. Kiehl does. He noted that some of his graduate students and research assistants don’t stick around for long because it’s just too stressful.
5. Research with psychopaths is fascinating. There’s a reason why shows like Law & Order: SVU and movies about serial killers are successful – it is utterly fascinating to see other people operate so callously outside the bounds of social convention and morality. There’s a gasp-inducing, “I can’t believe any human being could do something like that!” allure that keeps sites like Murderpedia in existence.
For Dr. Kiehl, the fascination runs even deeper. He’s not content to marvel at the mere existence of psychopaths; he wants to learn what makes them the way they are so that we can treat them or at least prevent them from becoming serial killers.
I was glad to vicariously experience a small glimpse of what it would be like to be a professional psychopath whisperer. It was at once thrilling and deeply disturbing. It was also enough to know that I would never want to ask Dr. Kiehl for a job in his lab.
Fall officially begins tonight, but the new school year is already in full swing. For many first and second year grad students (and some prospective grad students), that means it’s time to prepare National Science Foundation (NSF) Graduate Research Fellowship Program (GRFP) applications. It can be a stressful experience, but never fear: I am here to help with my unsolicited advice!
Each year, the NSF awards ~2,000 competitive fellowships to grad students in a variety of STEM disciplines. In addition to providing a prestigious résumé (or, as we prefer in academia, CV) boost, these fellowships pay our tuition and a $32,000/year stipend for three years.
I applied for an NSF GRF back in 2011, as I was preparing my grad school applications. I was lucky enough to receive an award that year – and I do mean lucky; grant reviewing is quite subjective, and my award came down to my ability to please 3 randomly assigned human reviewers.
I am convinced that I received a big boost because I applied so early in my research career. Reviewers expected less of me because I wasn’t a grad student yet – even though I had been more or less acting like one in my paid job as a research assistant – so my ability to write convincingly about scientific research was probably more impressive than it would have been had I been a first or second year grad student. So if you are just now applying for grad school, also apply for an NSF! Expectations will never be as low for you again.
I got loads of help when I wrote my NSF GRFP essays and did lots of Googling on best practices. Here are some tips I put together that I hope will be helpful to other NSF GRFP applicants who are furiously Googling now!
Rosa’s unsolicited NSF GRFP application advice
1) Just do it!
It’s not easy, and the whole process can kind of suck because GAH! so much work for a gamble at free money! But the NSF GRFP is probably one of the shortest major applications that you’ll ever prepare. The cost-benefit tradeoff is definitely in your favor.
If you don’t get an award, you still get feedback on your application, which will help you improve future grant applications. If you do get an award, you have an NSF award! And you definitely won’t get an award if you don’t apply.
If you’re like I was back in 2011, trying to navigate the grad school application process and wondering if you could throw an NSF GRFP application into the mix – really just do it! I found that writing my NSF GRFP application really made me think through my research goals, which was helpful for my grad school application process as well.
I could also note on my grad school applications that I had applied for an NSF fellowship, which made me look like an organized go-getter. After I received my NSF award, I could have reached out to professors who didn’t accept me as a student to see if having my own funding changed their minds.
2) Start early!
Your reference letter writers are supposed to comment not only on you but also on your proposed research. That means that NSF GRFP reference letters should be more tailored than usual, so you want to give your recommenders plenty of time to do that. I started drafting my essays at the end of September and sent my reference letter writers drafts of all of my essays in mid-October, just over a month before the deadline. [When I applied, applications were due mid-November. They’re now due late-October/early-November, so I suggest moving my dates up 2 weeks.]
3) Don’t be shy or humble!
The NSF GRFP application form has minimal space for you to list your accomplishments, so you have to make sure to laud yourself in your essays. This is not the time to be humble. Brag about how awesome/smart/good at research you are.
4) Focus on broader impacts and work them into all of your essays!
You’ll be rated on two criteria: Intellectual Merit and Broader Impacts. Intellectual Merit is important and comprises half your score, but you should already have plenty of practice writing to that, so it will come pretty naturally. Furthermore, it’s probably what your letter writers will focus on.
Broader Impacts are less familiar. Really addressing them requires more than one or two throwaway claims about long-term trickle-down effects of your research. My year, the NSF GRFP website had a pdf with lots of examples of what they consider to be Broader Impacts activities. Look through that list, and try to incorporate as many items as possible into your essays. [Note that the aforementioned pdf was from 2011 because I can’t easily find a more recent one. I doubt the NSF’s criteria for Broader Impacts have changed much, but take the list from 2011 with a grain of salt.]
If you do any sort of volunteer or outreach work, even if it’s not directly related to your project, find a way to work it into your essays. I think you can stretch (though not break) the limits of plausibility here. I wrote about volunteering with inner-city kids at a local science museum and noted that “the experience made me aware of my responsibility to disseminate information beyond the scientific community.”
If you’re female and/or a minority, you can work that into your essays. I chose not to go this route because I felt like I had adequately addressed broader impacts in other ways, but I have friends who did so successfully. I could have written something about how I’ve benefitted from having strong female scientists as role models and mentors, and how I want to be a role model and mentor to others.
Another easy Broader Impacts move is to mention how much you love teaching and mentoring undergraduates, and OMG you can’t wait until you get to be a TA/professor. There’s an example of that in my personal statement. [I still love working with undergraduates, by the way!]
If you do it right, you will probably sound corny. You may hate yourself a little for the corny, but it’s what they’re asking for, so get over it!
5) Get help!
I discussed a few different research ideas with my boss/PI before I settled on the one that I wrote about. She also made several important suggestions for additional measures to compare and helped me clean up/fancify the language I used to describe my proposed research so that I could sound extra scientifical.
My PI also informed me that I didn’t have to do the exact project that I proposed, which I hadn’t been aware of. NSF GRFP is about funding the budding young scientist rather than the exact scientific project, so applications just need to demonstrate an ability to write about research intelligently. Don’t worry about mapping out the exact, perfect project that you will definitely do. A good project that is well-described and justified will do.
I also reached out to friends who had won NSF awards in the past and asked to see their applications. One even offered to show me her rating sheets. It was really helpful to see examples of successful applications – and it was eye-opening to see how my friends approached their essays in substantially different ways. For example, one friend’s personal statement was about how being an immigrant made her want to study cultural differences, so quite personal. Another friend’s personal statement focused more on her research experiences – less personal, but still engaging and ultimately effective.
Finally, having a few people read your essays and offer comments and suggestions will be invaluable. I recommend getting a science-minded friend who’s not exactly in your subfield to read through your writing. Your raters will be in the field, but they may not be experts in your methods or analysis, so it’s good to get a check that you remain accessible in your writing.
6) Good luck!
Jason Mitchell’s self-published manifesto against replication studies (studies that try to reproduce a previously published significant result) and null results (those that fail to find a significant effect) has been flying around the psychology blogosphere. Neuroskeptic, Neuropolarbear, Neuroconscience, Drug Monkey, True Brain, and Richard Tomasett have already weighed in with great insight, but I still thought I’d add my two cents, if only so that I can get my thoughts down and stop ranting to my friends about it over Gchat.
First, why should we care what Jason Mitchell says? For starters, he’s Professor Jason Mitchell, Ph.D., a full professor at Harvard who is influential in my field of cognitive/social neuroscience. His lab has done great work that I really admire and that has shaped how I think about and approach my own research. For example, he found that we value social things in the same way that we value money or food: people are willing to give up money to be fair or to tell a stranger something about themselves, and when they do so, reward-related regions of the brain become active.
Given what the Mitchell Lab studies, I find it interesting (ironic?) that he wrote ~5000 words stating that 1) replication studies and null results have no value and 2) replication studies are unfair to the scientists who produced the original work being replicated.
For point 1), Professor Mitchell notes that failed replications can result from screwing up methods in some way – messing up your code, contaminating a sample, etc. And he’s right! If you follow Paper X’s reported methods and get different results, you could have made a mistake in following those methods.
BUT you also could have exactly followed Paper X’s reported methods but not done everything exactly as the authors of Paper X did. Maybe Paper X’s participants were run in the morning when they were more alert, and your participants were run in the afternoon during that post-lunch funk.
Or maybe Paper X came from a lab with mostly male experimenters, and your lab is mostly female. We recently learned that rodents are stressed by male researchers, so maybe you didn’t replicate because your testing environment was less stressful. As Neuroconscience notes, there are plenty of #methodswedontreport – things that we do but do not bother writing down in a published paper. Discovering these legitimate factors that could explain a failure to replicate is scientifically valuable and part of the scientific method.
Furthermore, the same screw-ups that can cause a replication to fail can also cause a study to produce significant positive findings – findings that can then be published in peer-reviewed journals and treated as fact. The problem with Professor Mitchell’s case is that it effectively treats all positive findings as true and all null results as false.
That simply isn’t the case. Even without intent to deceive (fraud or cherry-picking data), sometimes you get a false positive just due to random chance, as I’ve noted before.
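To make the “random chance” point concrete, here’s a small simulation sketch (mine, not from any of the studies discussed; the function name and parameters are made up for illustration). It runs thousands of pretend experiments in which both groups are drawn from the exact same population – so any “effect” is, by construction, a false positive – and counts how often a standard significance test still comes out “significant” at the conventional p < 0.05 threshold:

```python
import math
import random

def false_positive_rate(n_experiments=2000, n_per_group=100):
    """Simulate experiments where BOTH groups come from the same
    population (the null hypothesis is true by construction) and
    count how often a two-sample z-test is still 'significant'."""
    z_crit = 1.96  # two-tailed critical value for alpha = 0.05
    hits = 0
    for _ in range(n_experiments):
        a = [random.gauss(0, 1) for _ in range(n_per_group)]
        b = [random.gauss(0, 1) for _ in range(n_per_group)]
        mean_a = sum(a) / n_per_group
        mean_b = sum(b) / n_per_group
        var_a = sum((x - mean_a) ** 2 for x in a) / (n_per_group - 1)
        var_b = sum((x - mean_b) ** 2 for x in b) / (n_per_group - 1)
        se = math.sqrt(var_a / n_per_group + var_b / n_per_group)
        if abs(mean_a - mean_b) / se > z_crit:
            hits += 1  # a false positive: 'effect' found where none exists
    return hits / n_experiments

random.seed(1)
print(false_positive_rate())  # roughly 0.05: about 1 in 20 null experiments looks 'significant'
```

Roughly 5% of experiments testing a true null hypothesis will come up “significant” purely by chance – which is exactly why a single positive finding, like a single failed replication, shouldn’t be treated as the final word.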
As to point 2), I agree that reporting failures to replicate can be taken as criticisms against or even bullying of the original authors. As Professor Mitchell puts it, “You found an effect. I did not. One of us is the inferior scientist.”
Such an attitude is flat out wrong. The field should never jump to conclusions that the original authors are poor scientists (or worse, fraudulent ones). In the same vein, it also should not jump to the conclusion that the replicating authors are poor scientists.
Instead, we should follow the evidence. Given one positive finding and one negative finding, can we empirically determine what extraneous factors could have caused these different results? If not, it could be that either finding is due to random chance, so the study should be repeated until the balance of evidence tips one way or the other.
In my ideal world, science would not be about egos and proving yourself right or someone else wrong. It should be about hunting down the Truth. Our main weapon is the scientific method, which is inherently based upon supporting or disproving hypotheses.
In the real world, scientists are people too. We have egos that can be bruised and feelings that can be hurt. The danger is when, in trying to protect our egos from being battered, we shift our goal from Finding the Truth to Proving Ourselves Right and shift the facts to fit our story rather than shifting the story to fit our facts.