Michael Joyce is a writer-producer with HealthNewsReview.org and tweets as @mlmjoyce
On left – ‘brain activation pattern’ on contemplating the word “death” in a subject who had attempted suicide; On right – the pattern of a control subject also thinking about the word “death” (from study)
It’s hard to believe that the above functional magnetic resonance imaging (fMRI) image could lead to headlines like these:
Brain scans may have spotted people thinking about suicide (HealthDay)
Can suicide be prevented by science? (International Business Times)
Machine learning reveals what suicidal thoughts look like in the brain (Newsweek)
Algorithm can identify suicidal people using brain scans (The Verge)
It’s also hard to believe — or maybe it’s not — that this study generated all this hype. Here’s what the researchers from Carnegie Mellon University and the University of Pittsburgh did to spark these headlines: they scanned a small group of young adults, some with suicidal thoughts or past attempts and some controls, while the subjects contemplated words like “death,” and then trained a machine-learning algorithm to sort the two groups apart by their brain activation patterns.
What’s lost in the breathless coverage generated by this study are two very important limitations.
First, none of the coverage I reviewed brought up the surrogate nature of fMRI: what an fMRI actually measures is blood flow, and the activity of neurons in the brain is merely correlated with blood flow. That is, blood flow increases to the parts of the brain we are actively using. What lights up does not equate with a complex behavior or mood.
Just because researchers present patients with words or concepts they themselves associate with suicide (or whatever the opposite of suicide is?) does not mean they can draw any conclusions regarding what the subjects are thinking or feeling. Specifically, the findings cannot predict suicidality, as some headlines implied.
“There is an enormous gap between pixels on a scan and the suicidal behavior we see clinically,” says Allen Frances MD, former chairman of the psychiatry department at Duke University, who thought publishing the study was questionable. “Also, preliminary fMRI findings almost always fail to replicate when larger samples and more realistic control groups are used.”
Frances’s comments hint at the second key limitation of this study: the extremely small sample size. Although most of the news coverage mentioned the tiny sample size, none mentioned the dropout rate of nearly 50 percent, or the fact that most of the subjects were young females. It will take a much larger study to know if the results can be generalized more broadly.
That a computer can recognize a pattern — and these patterns can be sorted to generate an algorithm — is not surprising. What is surprising is what study co-leader Marcel Just, PhD, said in the video embedded in the Carnegie Mellon University news release.
I find three statements Dr. Just makes in that video disconcerting.
I think the words he uses create problems for both journalists and readers. I also think they could foster hype.
Word choices do matter. We’ve run into this before with communicating about fMRI studies.
Just’s comments suggest that a brain imaging technique like fMRI can actually see and measure thoughts (or emotions). And to suggest this, as well as claim the study results are ‘revolutionary’ — or might help guide therapy — is misleading, irresponsible, and potentially self-serving. The gap between these scans and any possible therapy for mental illness is enormous.
And reporters have a responsibility here too. The International Business Times opened with: “Suicidal individuals can be identified through a new algorithm … a potential method for diagnosing mental health conditions in the near future.” What are suicidal people — who are likely quite desperate — expected to do with this information? It’s irresponsible reporting.
I empathize with fMRI researchers trying to find a vocabulary for what they see. Words like “signatures” or “patterns” make sense to me. But once you start to slide over into speaking about the brain’s ‘representation’ of words, thoughts, and emotions, I think you run the risk of misleading people. You imply causality when you know full well as a scientist that your results can suggest correlation at best. In other words, we can’t know if the areas that light up in suicidal people represent their depression or intention to hurt themselves.
“I’m sure this is interesting research to those in the field of imaging and cognition,” says psychiatrist and former brain imaging researcher, Susan Molchan MD, a frequent contributor on our site. “But as far as clinical usefulness it has none, and likely won’t for a very long time.”
But a study like this does serve as a useful reminder to journalists and consumers alike. It reminds us that, beyond scrutinizing study limitations and word choices, it has become increasingly necessary to scrutinize news releases. It doesn’t matter if they come from corporations, medical journals, or — as in this case — from respected and trusted academic institutions.
All too often the burden of proof is left on the shoulders of journalists (and readers) to ask: what master does this news release serve? Is it leaning toward self-promotion or promoting evidence-based information?
Because I found the statistical analysis of this study somewhat confusing, I reached out to one of our frequent contributors, Susan Wei, PhD, of the Division of Biostatistics at the University of Minnesota School of Public Health. She sent me the following, which I think provides another reason to approach this study with skepticism:
“The reported high accuracy of 0.91 in the abstract was assessed in a non-standard way. The gold standard, which uses something called cross-validation, reveals a much lower classification accuracy of 0.76. In other words, the generalizability of this algorithm for identifying suicidality should be taken with a grain of salt.”
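To see why the cross-validated number is the one to trust, here is a minimal sketch, not the study’s actual pipeline: the data are random noise standing in for fMRI features, and the classifier, sample size, and fold count are illustrative assumptions. It shows how an accuracy computed on the same data used to fit a model can look far more impressive than an accuracy computed on held-out folds.

```python
# Minimal sketch (illustrative only, not the study's pipeline):
# random noise stands in for fMRI features, so any apparent skill
# at separating the two groups is pure overfitting.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 200))   # 30 "subjects", 200 noise features
y = np.repeat([0, 1], 15)        # two groups of 15

clf = LogisticRegression(max_iter=1000)

# Optimistic estimate: fit and score on the very same data.
in_sample_acc = clf.fit(X, y).score(X, y)

# Honest estimate: accuracy on folds the model never saw during fitting.
cv_acc = cross_val_score(clf, X, y, cv=5).mean()

print(f"in-sample accuracy:       {in_sample_acc:.2f}")   # typically near 1.0
print(f"cross-validated accuracy: {cv_acc:.2f}")          # hovers near 0.50
```

With many more features than subjects, the first number looks excellent even on pure noise, while the second stays near chance. That is the general reason a properly cross-validated 0.76 deserves more weight than a 0.91 obtained another way, and why even 0.76 from a sample this small should be read cautiously.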
Comments (2)
Please note, comments are no longer published through this website. All previously made comments are still archived and available for viewing through select posts.
Donald Marks MD
November 5, 2017 at 2:58 pm
I enjoyed reading the critical review by Joyce on the article “Scanning for suicide with fMRI: the limitations of pixels and language.” I previously published on this exact issue, using functional MRI to identify suicidal thought, eight years before Dr. Just’s work. See: Marks DH, Adineh M, Gupta S. MR Imaging of Drug-Induced Suicidal Ideation. Internet J Radiology [peer-reviewed serial on the Internet], 9(1). 2008. You raised a number of good points that certainly surround the use of this technique and the interpretation of the data.
Jack Gorman, MD
November 7, 2017 at 8:19 am
I am a great admirer of your work. I do think some qualification of this story needs to be made, however. You are completely right in disputing the way this study was portrayed in the media. Your article, however, is in my view much too disparaging of the actual research involved. This study, published in a high-quality journal known for excellent peer review standards, does represent an important advance to the field in locating the possible neurobiology of suicidal ideation. The use of machine learning in this context is novel (although others are beginning to do it). You also need to be a bit more careful in explaining the issues surrounding small sample size. First, imaging studies are extremely difficult to do especially when very ill patients are involved. Second, as you know, the statistical risk of small sample sizes is false negative, not false positive. Obviously, this study needs replication and clearly you are right that press releases and media coverage are way overblown. But we are making important advances in understanding brain function with fMRI studies and these should not be so casually dismissed.