Strengths: a fairly good evaluation of the limitations of the evidence.
Weaknesses: Nothing on harms. It didn’t compare the new approach with another competing new technology (as the New York Times did), nor did it compare the new approach with existing colon cancer screening methods in a meaningful, data-driven way. So it failed to give readers a sense of the true scope of the potential benefits.
This topic is really about when we should pay attention to evidence about a new diagnostic test and how we should evaluate diagnostic tests in comparison with one another – a topic that is often inadequately discussed in both medicine and journalism.
A hesitant satisfactory score. The story stated, “the cost of the test has not yet been established. It is expected to cost more than a fecal occult blood test, but far less than a colonoscopy. A fecal occult blood test can cost as little as $23 while a colonoscopy can total $700.” It is expected by whom to cost in that range? What’s the source? (Reuters reported a $300-400 cost estimate from the company.)
The story does quantify the potential benefits, although the numbers are incomplete. It’s not clear, for one, how this compares to other colon cancer screening tests.
The story didn’t specify the sensitivity of the test, but stated: “The sensitivity of the test is much better than what has been seen in other stool screening tests,” the ACS’ Brooks added.
Overall, we didn’t think the story gave readers a way to judge the scope of the potential benefits.
The story kept piling up the accolades – “one more advantage” and then “another benefit is…” but in the end the story was all benefits and no harms. The New York Times story, by comparison, addressed false positives.
The story makes an effort to address the limitations of reporting on the topic of a talk that hasn’t even been given yet (!) when it states: “Experts point out that studies presented at scientific meetings do not have to pass the rigorous peer review of studies published in reputable journals.” We applaud that effort, but it could easily be improved: the story should have stated plainly that this is a limitation of drawing conclusions from such data. Wouldn’t that be far more explicit and clear to readers?
It did include an important independent perspective from Dr. Durado Brooks of the American Cancer Society, who called the findings interesting but said, “They will be more interesting if we ever get this kind of data in a screening population….Showing that in a small group of samples is very different from demonstrating that in a population where only a small number of individuals are going to have polyps of that size.”
Ideally, the story would have explained that initial accuracy studies in non-representative samples of cases and non-cases (like this one) usually overestimate performance. That is why the researchers are planning another study.
No disease mongering here.
At least the story included the important input of the American Cancer Society’s director of colorectal cancer.
The story did not compare the new approach with other similar approaches now in development, and did not give adequate data-driven comparisons with other screening methods that are now used.
The story explains that “Cologuard is not yet available for sale. Clinical trials comparing the test with colonoscopy are slated to start next year.” But then it allows the lead researcher to get away with saying he “hopes that the test will be approved and available within two years.” Sure he does. That doesn’t make it a prediction you can bank on.
No comparison with – not even a mention of – other competing research, as the New York Times provided.
It does not appear that the story relied on a news release, but we are curious why both this story and the Philadelphia Inquirer used the same “Holy Grail” analogy.