Scanning for suicide with fMRI: the limitations of pixels and language

Michael Joyce is a writer-producer with HealthNewsReview.org and tweets as @mlmjoyce


On left – the ‘brain activation pattern’ of a subject who had attempted suicide while contemplating the word “death”; on right – the pattern of a control subject also thinking about the word “death” (from the study)


It’s hard to believe that the above functional magnetic resonance imaging (fMRI) image could lead to headlines like this:

Brain scans may have spotted people thinking about suicide (HealthDay)

Can suicide be prevented by science? (International Business Times)

Machine learning reveals what suicidal thoughts look like in the brain (Newsweek)

Algorithm can identify suicidal people using brain scans (The Verge)

It’s also hard to believe — or maybe it’s not — that this study generated all this hype. Here’s what the researchers from Carnegie Mellon University and the University of Pittsburgh did to spark these headlines:

  • They placed 34 volunteers — half with suicidal thoughts (‘suicidal ideators’) and half without — in an fMRI machine
  • Subjects were presented with 6 words. Half the words had ‘negative’ connotations (“death, cruelty, trouble”) and half ‘positive’ (“carefree, good, praise”)
  • Brain ‘activation patterns’ were recorded for each group and ‘learned’ by a computer; the resulting algorithm allowed the computer — when presented with an unknown pattern — to correctly identify which group it belonged to (suicidal vs. control) 91 percent of the time (a toy sketch of this setup follows this list)
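
For readers curious what that ‘learning’ step looks like in practice, here is a toy sketch in Python with scikit-learn. Everything in it is a stand-in: the data are random numbers rather than real scans, and the feature count is invented. It only illustrates the train-then-classify pattern the bullets describe, using the Gaussian Naive Bayes algorithm that the addendum below identifies as the one the study used.

```python
# A toy, hypothetical sketch of the classify-by-pattern setup described
# above. The "activation patterns" here are random numbers, NOT real fMRI
# data; the study's actual features were activation values recorded per
# word and per subject.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

n_subjects = 34     # 17 'suicidal ideators' + 17 controls, as in the study
n_features = 100    # invented stand-in for activation values per subject

X = rng.normal(size=(n_subjects, n_features))   # fake activation patterns
y = np.array([1] * 17 + [0] * 17)               # 1 = ideator, 0 = control

# Train a Gaussian Naive Bayes classifier (the algorithm named in the
# addendum below) on the labeled patterns...
clf = GaussianNB().fit(X, y)

# ...then ask it to assign a previously unseen pattern to one group.
unknown = rng.normal(size=(1, n_features))
print("predicted group:", "ideator" if clf.predict(unknown)[0] == 1 else "control")
```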

What’s lost in the breathless coverage generated by this study are two very important limitations.

Pixels on a screen can’t predict suicide

First, none of the coverage I reviewed brought up the surrogate nature of fMRI: what an fMRI measures is simply blood flow, and the activity of neurons in the brain is merely correlated with blood flow. That is, blood flow increases to the parts of the brain we are actively using. What lights up does not equate with a complex behavior or mood.

Just because researchers present patients with words or concepts they themselves associate with suicide (or whatever the opposite of suicide is?) does not mean they can draw any conclusions regarding what the subjects are thinking or feeling. Specifically, the findings cannot predict suicidality, as some headlines implied.

“There is an enormous gap between pixels on a scan and the suicidal behavior we see clinically,” says Allen Frances, MD, former chairman of the psychiatry department at Duke University, who questioned whether the study should have been published at all. “Also, preliminary fMRI findings almost always fail to replicate when larger samples and more realistic control groups are used.”

Frances’s comments hint at the second key limitation of this study: the extremely small sample size. Although most of the news coverage mentioned the tiny sample, none mentioned the dropout rate of nearly 50 percent, or the fact that most of the subjects were young females. It will take a much larger study to know whether the results generalize more broadly.
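
To make that fragility concrete, here is a hypothetical back-of-the-envelope check in Python. The 31-of-34 figure is my own stand-in, chosen only because 31/34 ≈ 0.91 matches the reported accuracy; it is not taken from the paper. With only 34 subjects, the 95 percent confidence interval around that accuracy is very wide:

```python
# A hypothetical back-of-the-envelope illustration of why 34 subjects is
# a fragile basis for a "91 percent accurate" claim. We treat 31 correct
# classifications out of 34 as a stand-in (31/34 ≈ 0.91) and compute a
# 95% Wilson score confidence interval around it.
from math import sqrt

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

lo, hi = wilson_interval(31, 34)
print(f"accuracy point estimate: {31/34:.2f}")
print(f"95% CI: ({lo:.2f}, {hi:.2f})")   # roughly (0.77, 0.97)
```

Under these stand-in numbers, the interval runs from about 0.77 to about 0.97: a range consistent with anything from decent to near-perfect classification, which is another way of saying the sample is too small to pin the number down.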

Did the news release for this study contribute to sensational news coverage?

That a computer can recognize a pattern — and that these patterns can be sorted to generate an algorithm — is not surprising. What is surprising is what study co-leader Marcel Just, PhD, said in a video embedded in the Carnegie Mellon University news release:

I find three statements Dr. Just makes in this video disconcerting:

  • “It means, for the first time, we can really peer into the brains of people who have a psychiatric problem … and see their inner thoughts and measure their inner thoughts.”
  • “Here we have a completely new way of assessing thought. It’s — in some ways — revolutionary.”
  • “It’s not just a ‘difference,’ it’s an interpretable/understandable difference which provides a target for therapy. You can imagine a therapy designed to ameliorate or possibly eliminate the difference in the emotional content of these concept representations.”

I think the words he uses create problems for both journalists and readers. I also think they could foster hype.

The words matter

Word choices do matter. We’ve run into this before with communicating about fMRI studies.

Just’s comments suggest that a brain imaging technique like fMRI can actually see and measure thoughts (or emotions). To suggest this, and to claim the study results are ‘revolutionary’ or might help guide therapy, is misleading, irresponsible, and potentially self-serving. The gap between these scans and any possible therapy for mental illness is enormous.

And reporters have a responsibility here too. The International Business Times opened with: “Suicidal individuals can be identified through a new algorithm … a potential method for diagnosing mental health conditions in the near future.” What are suicidal people — who are likely quite desperate — expected to do with this information? It’s irresponsible reporting.


I empathize with fMRI researchers trying to find a vocabulary for what they see. Words like “signatures” or “patterns” make sense to me. But once you start to slide over to speaking about the brain’s ‘representation’ of words, thoughts, and emotions, I think you run the risk of misleading people. You imply causality when you know full well, as a scientist, that your results can suggest correlation at best. In other words, we can’t know if the areas that light up in suicidal people represent their depression or intention to hurt themselves.

“I’m sure this is interesting research to those in the field of imaging and cognition,” says psychiatrist and former brain imaging researcher, Susan Molchan MD, a frequent contributor on our site. “But as far as clinical usefulness it has none, and likely won’t for a very long time.”

But a study like this does serve as a useful reminder to journalists and consumers alike. It reminds us that, in addition to scrutinizing study limitations and word choices, it has become increasingly necessary to scrutinize news releases. It doesn’t matter whether they come from corporations, medical journals, or — as in this case — from respected and trusted academic institutions.

All too often the burden of proof is left on the shoulders of journalists (and readers) to ask: what master does this news release serve? Is it leaning toward self-promotion or promoting evidence-based information?


Addendum, November 7th, 2017

Because I found the statistical analysis of this study somewhat confusing, I reached out to one of our frequent contributors, Susan Wei, PhD, of the Division of Biostatistics at the University of Minnesota School of Public Health. She sent me the following, which I think provides another reason to approach this study with further skepticism:

“Although the machine learning algorithm that was utilized in the study (Gaussian Naive Bayes) is pretty standard,” says Wei, “what is misleading is this: the reported high accuracy of 0.91 in the abstract was assessed in a non-standard way. The gold standard, which uses something called cross-validation, reveals a much lower classification accuracy of 0.76. In other words, the generalizability of this algorithm for identifying suicidality should be taken with a grain of salt.”
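
To see why that distinction matters, here is a small, hypothetical Python illustration using scikit-learn. This is my own sketch on pure-noise data, not a re-analysis of the study: a model scored on the very subjects it was trained on tends to look far more accurate than the same model scored by leave-one-out cross-validation, where each subject is classified by a model that never saw them.

```python
# A hypothetical illustration of Wei's point, using pure-noise data (not
# the study's data). Scoring a model on the same subjects it was trained
# on ("resubstitution") is optimistic; leave-one-out cross-validation is
# the more honest estimate. On noise, honest accuracy sits near chance.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(42)

X = rng.normal(size=(34, 100))       # 34 subjects, 100 noise features
y = np.array([1] * 17 + [0] * 17)    # two balanced groups

# Resubstitution: train and score on the identical data.
resub = GaussianNB().fit(X, y).score(X, y)

# Leave-one-out cross-validation: 34 folds, one held-out subject each.
loo = cross_val_score(GaussianNB(), X, y, cv=LeaveOneOut()).mean()

print(f"resubstitution accuracy: {resub:.2f}")   # typically well above chance
print(f"leave-one-out accuracy:  {loo:.2f}")     # typically near 0.50 (chance)
```

On random data the cross-validated accuracy hovers near 50 percent, which is exactly the point: only the cross-validated number tells you anything about how the classifier would perform on new patients.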


Comments (2)


Donald Marks MD

November 5, 2017 at 2:58 pm

I enjoyed reading the critical review by Joyce of the article “Scanning for suicide with fMRI: the limitations of pixels and language.” I previously published on this exact issue, using functional MRI to identify suicidal thought, eight years before Dr. Just’s work. See: Marks DH, Adineh M, Gupta S. MR Imaging of Drug-Induced Suicidal Ideation. Internet J Radiology [peer-reviewed serial on the Internet], 9(1). 2008. You raised a number of good points that certainly surround the use of this technique and the interpretation of the data.


Jack Gorman, MD

November 7, 2017 at 8:19 am

I am a great admirer of your work. I do think some qualification of this story needs to be made, however. You are completely right in disputing the way this study was portrayed in the media. Your article, however, is in my view much too disparaging of the actual research involved. This study, published in a high-quality journal known for excellent peer review standards, does represent an important advance to the field in locating the possible neurobiology of suicidal ideation. The use of machine learning in this context is novel (although others are beginning to do it). You also need to be a bit more careful in explaining the issues surrounding small sample size. First, imaging studies are extremely difficult to do especially when very ill patients are involved. Second, as you know, the statistical risk of small sample sizes is false negative, not false positive. Obviously, this study needs replication and clearly you are right that press releases and media coverage are way overblown. But we are making important advances in understanding brain function with fMRI studies and these should not be so casually dismissed.
