The following is a guest blog post by Susan Molchan, MD. Dr. Molchan is a psychiatrist in practice in Bethesda, Maryland. She also trained in nuclear medicine and did PET research at the National Institute of Mental Health, and worked as the program director for Biomarkers, Diagnosis, and Alzheimer’s Disease at the National Institute on Aging.
In a recent post on the New York Times Well blog, Dr. Richard Friedman describes an article from researchers at Emory University reporting a “potential biomarker in the brain that would help psychiatrists direct depressed patients towards treatment to which they would more likely respond.” (A version of the article also appeared in the print edition of the New York Times on January 13.) He reported “striking brain differences”: activity (the brightness) in a part of the brain called the insula differed between patients who did and did not respond to drug treatment, and between those who did and did not respond to cognitive behavioral therapy, in which the patient learns to modify thoughts and behaviors to allay symptoms.
The Emory researchers scanned their patients with positron emission tomography (PET) and randomized them to either an antidepressant or cognitive behavioral therapy for 12 weeks, with the idea that they would look back at the scans to see if there was a difference between those who did and didn’t respond to the two treatments. They started out with 82 patients, with 65 completing the protocol, pretty good for a depression study, where about one-third of patients tend to drop out. But then comes the red flag: data from only 38 patients were used. The researchers had stated a priori that those included in the analysis would have “clear outcomes and usable PET scans.” They defined the “clear outcomes” clearly, with depression scale criteria. Less clear was how the researchers would define PET scans that were not “usable.” Could they toss scans they didn’t like, and on what basis? The potential for bias is clear.
On the PET scans, six regions in the brain lit up to show a significant interaction between treatment and treatment outcome. Two regions, the right insula and the left cuneus, lit up the most.
Dr. Friedman said the results fit with other brain imaging studies, and as a reference cited a study from the Emory group from several years ago. The problem is, in that study, while brain regions also responded differentially to antidepressant and cognitive behavioral treatment, they responded in the opposite direction from the 2013 study. While the 2013 study showed decreased activity in the insula in those who responded to CBT and increased activity in those who didn’t (and vice versa for the drug), the 2007 study showed the opposite. This seems more a “neurocontradiction,” if the ultimate aim is to predict treatment response based on consistent brain patterns.
Dr. Friedman cites another article for context, to show that patients with depression may respond differently based on clinical, historical, or anatomical characteristics; it indicated that patients with childhood trauma, which has been correlated with smaller hippocampi, do better with cognitive behavioral therapy than with drugs. The first author of this 2003 paper is Dr. Charles Nemeroff, and again this should have raised a red flag; for many in psychiatry and beyond, the name conjures a black stain. As reported in both newspapers and medical journals, Dr. Nemeroff was well known for misrepresenting information, with numerous infractions for not disclosing hundreds of thousands of dollars drug companies paid him, including while NIH funded his research to study their drugs. He finally had to resign as chair of psychiatry at Emory, and to resign a journal editorship after publishing an article he authored lauding a product without disclosing that the product’s manufacturer paid him. He was also prohibited from applying for NIH grants for a number of years.
The astute retired psychiatrist who writes the blog 1 Boring Old Man (who was also at Emory) pointed out that an erratum had been published for the Nemeroff paper, invalidating the conclusion. The 1 Boring Old Man blogger commented about both the NYT Well blog and about journal policy:
“Should Dr. Friedman have known about all of that? or about the Conflicts of Interest in the original study? Probably. Even if he doesn’t keep up with the blogs here at the edge of the galaxy, quoting Dr. Nemeroff, particularly from a paper back in 2000 or 2003, is always risky business. And that’s a widely known bit of information in the psychiatric community and elsewhere. But that’s not my central point. A paper like that should have been retracted from the literature, or at the least, retrospectively annotated on the Journal’s web-site by the Journal itself.”
The inconsistencies across brain imaging studies, which compare groups of patients, highlight how challenging it will be to translate imaging findings to individual patients. In an important and rare randomized, controlled trial using brain scans to direct treatment, Dr. Mayberg and colleagues targeted a region in the cingulate gyrus near the front of the brain that has shown fairly consistent changes in depression (although not in the study discussed by Dr. Friedman). Prior data had indicated that patients with increased activity in this area of the cingulate gyrus responded well to deep brain stimulation. This involves neurosurgery, actually implanting wire electrodes in the brain; it has helped some patients with Parkinson’s disease.
Unfortunately, the study, called BROADEN (BROdmann Area 25 Deep brain Neuromodulation), was stopped in December 2013, when a futility analysis showed that it would be unable to demonstrate a difference between sham surgery and deep brain stimulation.
Many high hurdles remain before we can make the leap from research studies of correlations and regressions to predictions of what will help in clinical medicine. Real world patients are never as “clean” as research subjects; they bring their co-morbid diseases. They take lots of drugs. They drink. They smoke. They don’t tell the truth about what they drink and smoke. This all adds “noise” to the patterns on the scans, which complicates trying to discern their meaning.
As a psychiatrist and nuclear medicine physician with long-standing concerns about conflicts of interest in medicine, I found that these flaws in Dr. Friedman’s New York Times pieces leaped out at me. Whoever edits the blog and the print edition would, understandably, have a more difficult time discerning these gaps in the information provided. But before the Times publishes blog posts about potential biomarkers that could predict treatment response, perhaps some balance could be added on the limitations of the evidence in the field.
Biomarkers mean tests, and invariably, especially when they involve imaging, expensive tests. Many tests in medicine today are overused, as we’ve learned from campaigns such as Choosing Wisely, spearheaded by the American Board of Internal Medicine.
The colors and flash of the technology make brain scans seductive. They can also be very profitable to those who make careers out of them and to those who market them. All the more reason to look beyond the colorful splotches of light, and to follow the money, for anyone advocating their clinical use in psychiatry any time soon.
Dr. Allen Frances, who chaired the task force that produced psychiatry’s fourth revision of its Diagnostic and Statistical Manual, was succinct in his Tweet-criticism of the NYT blog post:
Read as classic example of psychiatric hype- Bio-markers predict who responds to meds vs CBT. http://t.co/wE1dp5CJ92
— Allen Frances (@AllenFrancesMD) January 18, 2015