Computers help docs spot breast cancer on X-rays


4 Star

Our Review Summary

This well-reported story on a study of computer-aided detection (CAD) of suspicious findings on mammograms follows most health journalism best practices. But two related flaws and one contextual issue prevent it from providing a higher level of reader service.

As the ratings below show, the reporter provides necessary information about evidence and outcomes and does an excellent job with sourcing. The story communicates very clearly the conclusion that one radiologist using CAD appears to be as effective at identifying cancers as two radiologists reading the same images.

The central flaw is that the story fails to compare the benefit of these practices to the most common method of diagnosis used in the U.S.: a single reading by one radiologist. The published study itself notes that double readings have been shown to be 4 to 14 percent more effective at detecting cancers than single readings, with a meta-analysis showing a 10 percent advantage.

Combined with the report’s failure to directly compare costs of single-reader, double-reader and single-reader-with-CAD methods, the reader isn’t able to understand the real trade-offs facing the U.S. medical system: the cost of missed cancers vs. the cost of additional technology.

And finally, when dealing with mammography it’s important for reporters to understand the underlying challenge is not simply to identify cancers, but to identify the cancers that matter. It has not been demonstrated that diagnosing more cases of breast cancer reduces mortality or incidence of invasive cancers. This fact is always worth mentioning in stories about the benefits of cancer detection.


Does the story adequately discuss the costs of the intervention?

Not Satisfactory

The story should have stated how much CAD costs per mammogram, and how that compares to the costs of a radiologist or technician providing a second reading of the images. The story quotes a researcher saying cost-effectiveness needs to be studied. This should have triggered additional reporting comparing costs. 

The story says Medicare pays $15 per CAD reading, but does not indicate whether this covers the actual cost.


Does the story adequately quantify the benefits of the treatment/test/product/procedure?

Satisfactory

The story does a good job of describing the methodology and the outcomes in terms readers can understand.

Does the story adequately explain/quantify the harms of the intervention?

Not Satisfactory

The study results showed that the recall rate (the percentage of women brought back for a second mammogram) was significantly higher with CAD than with double readings (3.9 percent vs. 3.5 percent). Yet CAD discovered no more cancers.

These additional mammograms, without more accurate diagnosis, can produce increased anxiety and potentially more biopsies. The story should have mentioned this.


Does the story seem to grasp the quality of the evidence?

Satisfactory

This news report is based on a peer-reviewed study published in a top-tier medical journal. The underlying study itself is a randomized, blinded trial involving about 30,000 women.

Does the story commit disease-mongering?

Satisfactory

The story does nothing to exaggerate either the prevalence or severity of breast cancer, or the benefits of CAD.

Does the story use independent sources and identify conflicts of interest?

Satisfactory

The reporter quotes the article itself, the lead author, a breast cancer clinician at a major teaching hospital, a medical society’s breast cancer expert, and a clinician who uses CAD in his practice. This is excellent sourcing.

The story reports the study’s funding, as well as two of the researchers’ links to CAD makers. 

Does the story compare the new approach with existing alternatives?

Not Satisfactory

The story clearly compares the options of having two human readers of mammograms against one reader using CAD.

Yet the reporter fails to compare these methods with a single human reading, which remains the common practice in the U.S. The study reports that double readings have been shown to be 4 to 14 percent more effective at detecting cancers than single readings, with a recent meta-analysis showing a 10 percent advantage to double reading.

This context would have been valuable in demonstrating that current U.S. diagnostic practice misses many cancers.

Does the story establish the availability of the treatment/test/product/procedure?

Satisfactory

The story states that computer-aided detection (CAD) of breast cancer is used for about a third of mammograms in the U.S.

This implies availability at more specialized facilities and teaching hospitals. A more direct statement of this would have been useful. But the story meets minimum standards under this criterion. 

Does the story establish the true novelty of the approach?

Satisfactory

The story says CAD is used for about one-third of mammograms in the U.S. and is likely to be used more as digital imaging becomes more widespread.

Does the story appear to rely solely or largely on a news release?

Satisfactory

The report does not appear to draw on the press release.

Total Score: 7 of 10 Satisfactory

