Last week a guideline recommendation from the US Preventive Services Task Force (USPSTF), published in the Journal of the American Medical Association, addressed depression screening in the adult population, with a special emphasis on women who are pregnant or have recently given birth. Maybe it was a lonely day in January, but these simple recommendations generated many headlines and numerous column inches on the need for better mental health services in the US.
Here is the main assessment from the USPSTF:
The USPSTF concludes with at least moderate certainty that there is a moderate net benefit to screening for depression in adults, including older adults, who receive care in clinical practices that have adequate systems in place to ensure accurate diagnosis, effective treatment, and appropriate follow-up after screening.
The USPSTF also concludes with at least moderate certainty that there is a moderate net benefit to screening for depression in pregnant and postpartum women who receive care in clinical practices that have CBT or other evidence-based counselling available after screening.
“Moderate” certainty of “moderate” benefit, with a slew of conditions attached. And yet stories in the New York Times, USA Today, The Washington Post, Los Angeles Times, CNN and Reuters all seemed to accept the premise that a sweeping increase in depression screening is justified based on the USPSTF analysis. Some of the coverage focused entirely on the implications for pregnant women and glossed over the fact that the recommendations now apply to all adults — everyone. And while these stories generally did an acceptable job of summarizing the recommendations and describing the burden of mental illness in the US, most of the coverage was weirdly missing in action on almost everything else that counts in a serious medical screening story: explanations of the potential benefits and harms, the sensitivity and specificity of the tests, the costs of treatment (and, in this case, the myriad costs of implementing a screening program), and the likelihood that financial conflicts of interest have tainted the research around screening tools, biasing the recommendations built on them.
Consider for example how the stories addressed these issues:
Brett Thombs, PhD, an expert on depression screening at McGill University in Montreal, noted a range of potential harms which his team said are “rarely made explicit.” These include “the treatment of depression in patients who are incorrectly identified as having the disorder, the treatment of mild symptoms that would often resolve without intervention and, perhaps most importantly, the diversion of scarce resources from other endeavours, such as ensuring better care for patients already identified as having depression.”
This perspective was shared by Allen Frances, MD, Professor Emeritus of Psychiatry and Behavioral Sciences at Duke University, who has written widely about the medicalization of mental health. He thinks the USPSTF guidelines could be harmful. He told me over the phone that “the experts are sometimes completely naive about the public health implications of their recommendations.” His biggest concern is about the “tremendous false positive problem” with depression screening and that “you’ll capture people who are merely sad or having a bad day” and diagnose them with depression. He’s worried that “primary care doctors prescribe 80% of the antidepressants in the US, sometimes only after a 7 minute visit,” noting that the US already has extremely high rates of antidepressant use and these screening guidelines will only increase their use.
Indeed, the USPSTF recognized some potential downsides to their recommendations when they noted: “Screening should be implemented with adequate systems in place to ensure accurate diagnosis, effective treatment, and appropriate follow-up.” That seemed hopeful to Susan Molchan, MD, one of our editorial contributors, who is a psychiatrist and nuclear medicine physician.
Dr. Molchan was more supportive of the USPSTF recommendations and happy to see they included an emphasis on postpartum care. Having been through several pregnancies herself, she told me she was well aware of the hormonal fluctuations, sleep deprivation, mood swings and so on that often follow childbirth, so she’d welcome a tool that could help screen for and properly diagnose depression in women who’d given birth. As she told me: “The screening is just the starting point, obviously these women would need to be diagnosed in context over several visits, and include mental health providers in an assessment.” Like Brett Thombs and Dr. Allen Frances, she’s wary that screening will be used as a way to fast-track women to antidepressants, but she was overall glad to see that the data behind the USPSTF recommendations focused on cognitive behavioral therapy.
Having examined depression screening in a book I wrote on the subject, I concluded that despite how much we wish screening and early intervention would help people at risk of a mental health crisis, screening for depression is not supported by good science, is unlikely to reduce the burden of mental illness in the overall population, and always carries a likelihood of causing harm.
As for those news organizations that haven’t taken the time to dive into the issue, I would suggest there is meat here for some great journalism: look at the evidence, the potential for harm, and the conflicts of interest swirling around the issues raised by these USPSTF recommendations, because many vital, broad and nuanced stories can emerge. This is an incredibly rich and complicated set of issues that could generate stories for months, not just a single lonely day in January.
Alan Cassels is the author of Seeking Sickness: Medical Screening and the Misguided Hunt for Disease (Greystone, 2012) and a contributor to HealthNewsReview.org.
Comments (4)
Bradley Flansbaum
February 3, 2016 at 7:30 am
Alan
Can you elaborate on the COI and PHQ-9?
Are there better screening tools around?
Was this tool developed with intellectual honesty?
Have better instruments been buried as a result of corporate interference?
If the developer(s) get royalties, does it invalidate the tool?
Thanks
Brad
(Nice post btw)
Alan Cassels
February 3, 2016 at 4:08 pm
Brad, thanks for the questions, which I’ve tried to answer here:
Can you elaborate on the COI and PHQ-9? Elaborate? Hmmm. When it’s been tested it has tended, as we expect, to overdiagnose depression even among people who are unlikely to be depressed (see this study: http://www.jabfm.org/content/27/5/611).
It’s not unusual to see conflicts of interest in mental health screening tools and I have seen these with screening tools for ADHD, dementia, anxiety and so on. Usually the funders of the tool aren’t this obvious. Follow this to the PHQ-9 screening tool and Pfizer: http://www.phqscreeners.com/select-screener
Are there better screening tools around? This is what Brett Thombs wrote to me when I asked him that question: “There are many depression screening tools out there, and there is not any solid evidence that any of them perform very differently than any of the others. The PHQ-9 is easy to use, which makes it better than others that may also perform at around the same level.”
Was this tool developed with intellectual honesty? Tough question to answer: I don’t know. I am sure the intentions of the creators are genuine, yet there is a certain naiveté in thinking you could use a ten-question questionnaire to arrive at a diagnosis and a particular course of action.
As for the incentives for Pfizer, Brett Thombs wrote this: “They seem to have developed this so that there is an easy-to-use tool that they can massively disseminate to clinicians as part of a push to screen. It lets them tell people that screening for depression is feasible – without addressing any of the big picture costs and harms. It also appears that, offline, they encourage prescribing based on this. There is fine print in their directions about doing a clinical interview, but they actually publish an ‘Interpretation of Total Score’ that says if you have score 10-14 you have moderate depression – when, in fact, relatively few people who score 10-14, for instance, actually meet criteria for a diagnosis. They play on misunderstandings of testing accuracy parameters to push this. Even with a tool that is 80-90% accurate in terms of sensitivity and specificity, the rate of people who score above the cut-off who are actual cases is often under 30% in depression. Yet, they get doctors to think that if people score positive on the PHQ-9 they almost certainly have depression – and, thus, need a prescription.”
So I would agree with him that the bias isn’t in the tool so much as in how it gets used, in the way information gets managed, and ultimately in how patients get treated.
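To put rough numbers on the point Thombs makes about testing accuracy, here is a back-of-the-envelope sketch. The 90% sensitivity and specificity echo his “80-90% accurate” figure; the 10% prevalence is an assumed, illustrative figure for a screened primary care population, not a number from the USPSTF review.

```python
# Back-of-the-envelope positive predictive value for a depression screen.
# Sensitivity/specificity of 0.90 echo the "80-90% accurate" figure quoted above;
# the 10% prevalence is an assumed, illustrative value, not from the USPSTF review.
sensitivity = 0.90   # P(screen positive | truly depressed)
specificity = 0.90   # P(screen negative | not depressed)
prevalence = 0.10    # assumed share of screened patients who truly have depression

true_positives = sensitivity * prevalence               # 0.09 of those screened
false_positives = (1 - specificity) * (1 - prevalence)  # 0.09 of those screened
ppv = true_positives / (true_positives + false_positives)

print(f"Positive predictive value: {ppv:.0%}")  # -> 50%
```

Even under those fairly generous assumptions, roughly half of positive screens would be false positives; with a lower prevalence or a looser cut-off, the figure slides toward the under-30% Thombs describes.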
Have better instruments been buried as a result of corporate interference? I don’t know, but it’s a good question. I’m going to look into this.
If the developer(s) get royalties, does it invalidate the tool? Not necessarily, just as a conflict of interest in a research study doesn’t automatically invalidate that study. But do we really feel comfortable using a depression screening tool that is copyrighted by the world’s biggest pharmaceutical company?
Bradley Flansbaum
February 3, 2016 at 7:16 pm
Alan
Yeoman’s work in your responses. Thank you. I am sure you read the subtext of my questions, though, i.e., just because an instrument has an association with a corporate entity, we should not dismiss it out of hand. Putting depression screening aside, I could list a large number of screening instruments and prediction rules used in other disciplines, adopted countrywide, that need better discrimination and calibration ability. If we shine a light on the PHQ-9, we must do it for all of them. The story is not so much about the PHQ-9, but how the USPSTF utilizes it in its assessment.
Thanks
Brad
aek
February 3, 2016 at 4:44 pm
Harms:
psychiatric label in medical record
stigma
wrong treatment
career derailment
earnings derailment
ostracism