This is a story looking at the possibilities for developing lab tests for mental illness diagnoses, primarily schizophrenia and bipolar disorder.
The story covers a wide array of information, and that’s not a simple task given the topic. Synthesizing years of research can be tough, and we don’t envy the challenge.
However, the story needed a more thorough examination of the quality of the evidence behind the research studies it mentions, quantification of the benefits it describes, some mention of the risks involved in widespread screening for mental disorders, and a sense of the costs that would be involved.
Given the prevalence of mental illness, any tool that helps improve diagnosis is a welcome development. For readers, it’s important to discuss the unknowns and limitations of new screening tests, along with factors that affect accuracy, such as false-positive and false-negative rates.
There is no mention of costs in this piece. With the variety of tests that are being discussed, it would have been possible to at least mention a range of possible costs per test.
In a piece with multiple studies mentioned, there was no quantification of any of the benefits found in any of those studies. Readers won’t find out if these tests found five people with various disorders or 500.
We were also concerned about overstatement of the benefits of this study: “The test, says Bahn, can accurately predict whether someone will ‘develop schizophrenia over the next two years.’” What does the story mean by “accurate” here? We encourage journalists to always discuss and define the test’s sensitivity (how well the test finds people who have the disease) and specificity (how well the test rules out people who don’t have the disease). In this story, we’re not given any data points to see how this conclusion was made. And there’s a general sense given to the reader that biomarker-based assessments would clearly be superior to diagnostic interviews, something that’s not supported by any evidence presented in the story.
No potential harms are mentioned, and yet screening carries the potential for harm: false-positive and false-negative results may lead to the wrong intervention, or to none at all.
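The arithmetic behind this concern is worth making concrete. Even a test with seemingly good sensitivity and specificity produces mostly false positives when the condition it screens for is uncommon. The sketch below uses illustrative numbers of our own choosing (90% sensitivity, 90% specificity, 1% prevalence), not figures from any study the story cites:

```python
def screening_outcomes(n, prevalence, sensitivity, specificity):
    """Return (true_pos, false_neg, false_pos, true_neg) counts
    for a screening test applied to a population of size n."""
    sick = n * prevalence
    healthy = n - sick
    true_pos = sick * sensitivity      # sick people correctly flagged
    false_neg = sick - true_pos        # sick people the test misses
    true_neg = healthy * specificity   # healthy people correctly cleared
    false_pos = healthy - true_neg     # healthy people wrongly flagged
    return true_pos, false_neg, false_pos, true_neg

# Hypothetical: screen 10,000 people, 1% of whom will develop the disorder
tp, fn, fp, tn = screening_outcomes(10_000, 0.01, 0.90, 0.90)
ppv = tp / (tp + fp)  # chance that a positive result is a true positive
print(f"true positives: {tp:.0f}, false positives: {fp:.0f}, PPV: {ppv:.1%}")
# → true positives: 90, false positives: 990, PPV: 8.3%
```

Under these assumed numbers, false positives outnumber true positives eleven to one, which is exactly why a story about screening tests should report sensitivity, specificity, and the population being screened.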
It is unclear in most of the examples of research cited how many people were being studied, the circumstances of the study, whether these were controlled trials, etc. In a few cases, a journal name is mentioned, which is helpful. But much more could have been done to give readers a sense of how solid this research is.
There is no disease mongering in this piece, and we give it high marks for how it approaches the topic of mental health. It is trying to bring readers into the very real problem of accurately diagnosing these disorders. In that way, it is a very welcome piece of reporting.
However, we did want to note that saying a delayed diagnosis may come “too late” is somewhat hyperbolic: it implies there is a limited window in which prevention or effective treatment is possible, after which neither can help. This is not always the case.
The story interviews a patient, a professor who is an expert in mental disorders, and a researcher looking at one avenue for biomarker identification. It’s also clear that the reporter read through a lot of studies to develop this piece.
However, the story doesn’t point out that one of its sources, Dr. Sabine Bahn, appears to have a financial stake in the success of this blood test.
The story does not perform any head-to-head comparisons between traditional diagnostic methods and biological testing, but we feel the framing of the story gets the point across that there are traditional ways of diagnosing these disorders and a promising wave of new biological tests.
The story goes into some detail about how experimental these diagnostic tests are and even some of the difficulties entailed in bringing them to market.
It’s made clear that lab-based biomarker diagnostic tests for mental illnesses would be novel.
The story does not rely on any news releases, as far as we can tell.