
Textured look at one health system’s relentless suicide prevention efforts


4 Star


What Happens If You Try To Prevent Every Single Suicide?

Our Review Summary

The story looks at an attempt by the Henry Ford Health System in Detroit, Mich., to eliminate suicide in its patients by taking a proactive approach to identifying and treating depression. The program, which was launched in 2001, is associated with a significant reduction in the suicide rate among its patients, even as suicide rates have gone up nationally.

The story is a richly told exploration of the program and its impact. And the storytelling substantively addresses the majority of our criteria. The major hole in this piece is that it doesn’t explore alternate explanations for the drop in suicides in this health system that might not have had anything to do with depression care. Inclusion of an independent perspective might have helped identify some of those reasons.


Why This Matters

Suicide is a tragedy. In addition to taking the life of an individual, it can have profound and lasting impacts on friends and family who have lost a loved one to suicide. According to the CDC, the suicide rate in the U.S. was 12.6 per 100,000 people in 2013 — that’s more than 41,000 per year. And suicide is also a significant financial burden, with an estimated economic cost of $44.6 billion per year in the U.S. An institutional treatment strategy that could save lives and reduce the economic impact of suicide is worth covering.


Does the story adequately discuss the costs of the intervention?

Satisfactory

At issue in the story is the health system’s so-called “perfect depression care,” which incorporates depression screening into primary care. Patients deemed to have, or be at risk of, depression are then given appropriate treatment. It’s difficult to pin a specific cost to this. The matter is further complicated by the fact that treating depression can also save money by driving down costs related to associated health problems. However, the story notes that Centerstone — a separate health system that has adopted the Henry Ford Health System’s model for a smaller cohort of patients — did do a cost/benefit analysis and found that it resulted in savings of more than $400,000 per year. While we’ll award a Satisfactory, we think there was a muddling here of Henry Ford’s global screening strategy with the intense care provision in the Centerstone model. It is not appropriate to apply the cost analysis of the latter to the former. In addition, it can be argued that screening every patient at every visit is wasteful. The story reads/sounds somewhat like a paean to doing as much as you can and adopting a kitchen-sink approach rather than a carefully honed and efficient one.

Does the story adequately quantify the benefits of the treatment/test/product/procedure?

Satisfactory

The story notes that the Henry Ford Health System currently has a suicide rate of 20 per 100,000 among patients with mental health and/or substance abuse problems, which is 80 percent lower than it had been when the “perfect depression care” program was launched in 2001. It also notes that the overall suicide rate for system patients is five per 100,000 — significantly lower than the national average of 12.6 per 100,000. It would have been good if the story had simply given the starting suicide rate among patients with mental health and/or substance abuse problems in 2001 (according to Henry Ford, it was apparently 89 per 100,000), rather than asking readers to do the math.

Does the story adequately explain/quantify the harms of the intervention?

Not Applicable

This story looks at an overarching strategy for identifying and treating depression: screening patients to identify those at risk and then pursuing “appropriate care.” While individual drugs or other treatment options can have potentially adverse side effects, the story does not attempt to evaluate specific courses of treatment. For that reason, exploring the potential harms associated with individual treatment options doesn’t seem relevant in this case.

Does the story seem to grasp the quality of the evidence?

Not Satisfactory

The story tells readers (or listeners) that the Henry Ford Health System has 200,000 patients, and that Centerstone has implemented a similar strategy for “nearly 200 patients who’d already made a suicide attempt.” However, it’s not clear how or if the benefits of “perfect depression care” have fluctuated over time, whether there are particular groups that have benefited more (or less) from the approach, or how effective the approach has been in other systems where it’s been adopted.

The story does cite an epidemiologist who evaluated the outcomes at Henry Ford, and it would have been great if it could have dug into the details just a bit more. For example, we found a 2013 American Journal of Managed Care report about the program. That report shows that the denominator for these suicide rates (the total number of people counted as potentially committing suicide) represents all those with contact with the Henry Ford behavioral health system, a population that has very likely changed over time as services expanded. The mix of diagnoses represented also matters. The big issue is that a significant reduction in this health system might reflect regression toward the mean (where extreme rates tend to revert to more typical rates), as well as the removal (by death) of the highest-risk patients in a rather small cohort — it is not surprising that the rate declines when completed suicide removes those at highest risk from the cohort.

Does the story commit disease-mongering?

Satisfactory

No disease mongering here.

Does the story use independent sources and identify conflicts of interest?

Not Satisfactory

The story doesn’t include input from experts outside of the Henry Ford Health System. The story would have been stronger if third-party experts in mental health or epidemiology had been able to provide some critical context on how effective the “perfect depression care” approach has been.

Does the story compare the new approach with existing alternatives?

Satisfactory

The alternative to “perfect depression care” would seem to be the absence of a formalized requirement for mental health screening to be incorporated into primary care. The story compares perfect depression care to standard care, which would appear to be the primary alternative. The story also implies that the ‘contract not to commit suicide’ is an alternative and mentions that safety plans are included in the Henry Ford approach.

Does the story establish the availability of the treatment/test/product/procedure?

Satisfactory

The story notes that “perfect depression care” has been either adapted or adopted by other health systems, and names two of them. It also says that health insurers and other health systems have expressed an interest in the approach. In short, the approach is not yet in widespread use, but may eventually be adopted in additional areas. Given that the story can’t be expected to incorporate a list of every health system currently using the approach, this earns a Satisfactory.

Does the story establish the true novelty of the approach?

Satisfactory

The story is very clear that Henry Ford Health System took a novel approach to addressing mental health in its patients with the particular goal of eliminating suicide.

Does the story appear to rely solely or largely on a news release?

Satisfactory

The story does not appear to be based on a news release. (The most recent news release we could find on the Henry Ford Health System program dates back to 2010.)

Total Score: 7 of 9 Satisfactory

Comments (4)


Joanne Silberner

November 4, 2015 at 5:16 pm

Thank you for these very helpful comments. I did have two outside experts commenting on the zero-suicide approach, but I left them out, thinking that the observation that other health groups were adopting the Henry Ford system would suffice. But I take your point. They should have been in the story. On the AJMC report, I’d seen it as well as a couple of others in JAMA and JAMA Psychiatry and had struggled to put them into the story, but I didn’t struggle enough. I agree with your observation, and I could at least have included a link. A note to other reporters doing suicide stories — data tend to be problematic, as some suicides aren’t counted as suicides for family or insurance reasons. The American Foundation for Suicide Prevention collects good numbers and has some nice illustrative charts. And someone out there should do a story on all the studies linking suicides with the availability of firearms in the home; the data are very compelling.


    Matt Shipman

    November 4, 2015 at 7:37 pm

    Thanks for your feedback, Joanne. I thought this was a strong story, and any critiques here are not meant to make people feel bad, but to highlight things that could have made the story even better. You definitely took things in that spirit, which is great — thank you! And, yes, I would *love* to see an in-depth article on the research examining suicide and firearms in the home.


    Stephen Soumerai, Harvard Medical School

    November 9, 2015 at 9:09 am

    Hello Joanne and Gary: I missed the original story. Sorry. However, this excellent summary says something even more important about why the study cannot provide solid evidence of effectiveness. I read that there was not even one baseline point. But it was provided later. If this is true, the research design (the most important predictor of threats to validity, and not identified here) is very weak (pre-post or post only without control; please correct me!) and cannot provide the trajectory of suicide rates based on pre-intervention data and, hence, what would have happened even in the absence of the intervention. Thus, it could be totally untrustworthy. There is no adequate control or “counterfactual.” The underlying rate of suicide could have been going up or down BEFORE the intervention. We don’t know how the intervention changed the possibly already declining suicide rates. Moreover, a cross-sectional comparison between one commercial health plan and the entire US population is inappropriate for research. They are completely non-comparable. Please send me the study, but you need to know that, if there were inadequate or no baseline observations, this study will be excluded as evidence in international systematic reviews of the entire body of research because of the inadequate research design.
    Reporters need to pay more attention to research design based on the standard texts or the acceptable designs of the Cochrane Collaboration (organization of care section). You can also look at my recent simpler guide in CDC’s Preventing Chronic Disease. It is meant for you and deals with these topics. It was coauthored by a journal editor, expert in research design, and a science reporter.
    Best, and keep up this important work (but include research design as the first consideration). Steve


      Kevin Lomangino

      November 9, 2015 at 12:59 pm


      Reviewer Michael Bierer, MD asked me to post this response on his behalf:

      “Yes. The problems in design go deeper. The weakness of the pre-post design holds even if the counting of events and source population are valid. We actually are not confident that the denominator of people at risk for suicide was comparable year-to-year over the study duration. Glad to read such an engaged and accurate comment.”

      Kind regards,

      Kevin Lomangino
      Managing Editor