This story addressed many of the issues that we like to see covered in discussion of experimental diagnostic tests. It explained that this test is not ready for prime time and probably won’t be any time soon. It emphasized that the results of a small study cannot tell us how the test will perform in the real world. It explained that there are already effective diagnostic tests for colon cancer that aren’t being optimally used. It included perspective from an expert who explained potential benefits and drawbacks of the new test. Overall, a very good report.
In any story about diagnostic testing or screening, the discussion has to go beyond how many people with the disease were correctly identified. We also need to hear about how many people without the disease were incorrectly identified (i.e. false positives), and what harms those individuals may face as a result of the incorrect test. Of the two stories we reviewed about the same experimental colon cancer test, only HealthDay made readers aware of this important distinction.
Costs were not mentioned, but since this test is in an early stage of development, it would be difficult to provide an accurate cost figure. That figure would have to include both the cost of the test and any unnecessary follow-up tests and procedures due to false-positive results. We’ll rule it not applicable.
Both this story and the competing WebMD piece repeated the accuracy statistic provided by the study (76%), but neither story provided much detail about what this figure means. Good diagnostic tests must not only correctly identify people who have the disease, but also correctly rule out people who don't. Ideally the story should report the true positive rate (sensitivity) and the false positive rate (1 − specificity), though it's not clear that the percentages listed represent those rates. We would have preferred to see the absolute numbers of people in each group (the cancer patients and controls) who were accurately diagnosed, and a comparison of how that stacks up against currently available tests.
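To show why a single "accuracy" figure is not enough, here is a minimal sketch using entirely hypothetical confusion-matrix counts (not the actual study data) to compute the rates a reader would actually need:

```python
# Hypothetical counts for a diagnostic test evaluated on 100 people
# (illustrative only; these numbers are NOT from the study discussed above).
true_positives = 38   # cancer patients correctly flagged by the test
false_negatives = 12  # cancer patients the test missed
true_negatives = 41   # healthy controls correctly cleared
false_positives = 9   # healthy controls incorrectly flagged

sensitivity = true_positives / (true_positives + false_negatives)  # true positive rate
specificity = true_negatives / (true_negatives + false_positives)  # true negative rate
false_positive_rate = 1 - specificity

print(f"Sensitivity: {sensitivity:.0%}")                  # 76%
print(f"Specificity: {specificity:.0%}")                  # 82%
print(f"False positive rate: {false_positive_rate:.0%}")  # 18%
```

Two tests can share the same headline accuracy yet differ sharply in how many healthy people they send for needless follow-up procedures, which is why both rates matter.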
This story did note that the primary benefit of a breath test would be increased convenience and less of an "ick" factor compared with existing tests. In addition, it pointed out that a 75% accuracy rate is a failure for 25% of patients — a nuance missing from WebMD's coverage. It also explained that we don't know whether this kind of test can detect precancerous polyps, which should ideally be identified and removed before they have a chance to develop into cancer.
A mixed bag, but overall, it meets our standard for a satisfactory.
The story mentioned that this test would inevitably produce false-positive results that would lead to needless invasive follow-up tests.
The story sprinkled useful caveats throughout the coverage.
There was no disease-mongering of colon cancer.
An expert from the American Cancer Society provides much-needed context and perspective on this research.
The story discusses existing options for colon cancer screening, and what some experts think are the appropriate ages and intervals at which these tests should be done.
It’s clear from the story that this test won’t be available any time soon.
The story acknowledges that the idea of using a breath test to catch cancer is not new, and mentions other relevant research.
It doesn’t appear that the story relied inappropriately on this press release.