Breast density and mammograms: LA Times story leans hard on news release


3 Star

How often should you get a mammogram? It depends on whether you have dense breast tissue, experts say

Our Review Summary

This news story reports on a study, described in the Annals of Internal Medicine, that was designed to clarify the role of breast tissue density in calculating the risk of breast cancer mortality, and thus help physicians and women over 50 make better-informed choices about how frequently to have mammograms.

The story overall does a fair job of summarizing the main points about the study’s conclusions, and about the ongoing confusion, contradictions, and controversy surrounding best practices for “breast screening intervals.” It could have been clearer at the outset that the key conclusions related to triennial screening apply solely to women over 50 at average or low risk of breast cancer. It could also have done a better job of bringing in outside commentary and interpretation from those not directly involved in the study, noting the weaknesses of “modeling” systems in general, and offering more detail about the potential physical and psychological harms of unnecessary biopsies and treatments.

Most worrisome, though, was that the news story used direct passages from the news release, without attribution. This, paired with the lack of outside commentary, is not what we consider rigorous journalism.


Why This Matters

Over the past 30 years, as mammography technology and cancer treatments became more refined, health policymakers, physician groups, breast cancer advocacy organizations, and politicians have been at the eye of a constant storm of changing and often controversial claims and recommendations about the need for and frequency of mammography screening.

The situation has been particularly confusing and anxiety-producing for older women without known risk factors for breast cancer. A big part of the problem in efforts to find clarity, for clinicians and women alike, is the expense, difficulty, and ethical issues involved in designing studies to do so, along with the fear factor that drives the choice to have more and more screening, even when the evidence points to the benefits of less. Consequently, rigorous, peer-reviewed research that adds scientific heft to risk calculations is instantly newsworthy and can, over time, significantly change practices and behaviors.


Does the story adequately discuss the costs of the intervention?

Not Satisfactory

The article did not discuss costs. Screening mammograms impose a significant cost on the U.S. health care system and insurers, particularly Medicare. Some critics of those suggesting less frequent screening for some groups of women argue that such recommendations aim to cut costs at the expense of women’s health and lives. Evidence-based guidelines are winning over some hearts and minds, but hardly all.

In any case, it behooves news organizations to at least acknowledge the financial cost issue and put it in perspective. This article did not, and although it noted the improvement in data collection wrought by digital mammograms in recent years, it also did not note the increased cost of digital (and now 3D) mammography over conventional screening methods.

Does the story adequately quantify the benefits of the treatment/test/product/procedure?

Not Satisfactory

The article did a pretty good job of presenting the complicated data modeling and reasons for the recommendations regarding breast density risk calculations and screening intervals. But it needed some actual numbers to quantify the benefits. In this case, that means the number of breast cancer deaths averted.

As the study abstract stated, “breast cancer deaths averted were similar for triennial versus biennial screening for both age groups (50 to 74 years, median of 3.4 to 5.1 vs. 4.1 to 6.5 deaths averted; 65 to 74 years, median of 1.5 to 2.1 vs. 1.8 to 2.6 deaths averted).”

Does the story adequately explain/quantify the harms of the intervention?

Satisfactory


The article mentions overdiagnosis and a couple of its consequences.

To better communicate harms, it could have used some outside expert commentary about the numerous risk factors and comfort levels women and their physicians have when considering the frequency of mammography; and about the fact that there are no definitive answers about the ideal screening protocol for any individual woman.

The article also should have included specific information from the journal article about the economic costs of biopsies and other enhanced screening. While it’s hard to quantify the emotional and psychological costs to women who undergo biopsies that are found to be negative, it is possible to quantify the economic costs to a person and to the larger healthcare system.

Does the story seem to grasp the quality of the evidence?

Not Satisfactory

The story went into some detail about the epidemiological “modeling” and the outcomes simulations the researchers used. But it didn’t really place that in the context of wider research. For example, the editorial accompanying the study noted that “modeling and registry data are not randomized trials, but each of these study types can provide critical information that extends our knowledge base.” The story also should have explained that the model needs to be validated before we know whether the recommendations are accurate. As the editorial also points out: “It will be important to track outcomes in women who undergo alternative screening frequencies to validate this approach.”

The article also should have noted that determinations of breast density (BI-RADS ratings) are not uniform. Recent papers have found a lack of agreement on breast density ratings among radiologists who reviewed the same digital mammograms. Now that this new model includes BI-RADS ratings, it would be most helpful for patients if there were more uniformity among those assigning them.

Does the story commit disease-mongering?

Satisfactory


It did not disease monger.

Does the story use independent sources and identify conflicts of interest?

Not Satisfactory

The article quotes only one of the lead investigators of the study, and uses the same quote found in the news release (see last criterion). Coverage elsewhere often included comments from cancer clinicians and investigators not affiliated with the study, most of whom emphasized the complexity of the risk calculations and the human factors that drive medical decision making. The journal also published an editorial by a Johns Hopkins University scientist, which offered useful perspective on the quality of the research. It would have helped to cite it. (Full disclosure: reviewer Joann Rodgers consults for Hopkins.)

An article about a new breast cancer modeling tool that may impact future screening guidelines should definitely include quotes from breast cancer researchers and clinicians who are not affiliated with the study.

Does the story compare the new approach with existing alternatives?

Satisfactory


The article explains the current guidelines, and the variations in recommendations that are based on previous calculations of risk. Implicit in the story is the fact that these are “guidelines,” not prescriptive rules, and that there will always be interpretations of and exceptions to them.

Does the story establish the availability of the treatment/test/product/procedure?

Not Applicable

Digital mammography is available across the United States, so we’ll rate this N/A.

However, the news story would have been stronger had it more clearly explained that these findings were based on an assumption of all-digital mammography. Not all women have access to digital mammography and may have to travel farther for digital screening. Travel is often a barrier to screening in the first place, especially for those who rely on public transportation.

Does the story establish the true novelty of the approach?

Satisfactory


The article does a very good job of pointing out what’s new here: the affirmation of the value of density calculation in determining risk and mammography frequency.


As noted earlier, the journal carried an editorial by a Johns Hopkins University scientist, offering an opinion about who needs annual screening, and some of the weaknesses and strengths of the new study. Too bad this wasn’t cited.

Does the story appear to rely solely or largely on a news release?

Not Satisfactory

UCSF did issue a news release, and the LA Times story quotes nearly word-for-word one of the paragraphs from the release, without attribution:

LA Times: “Some lesions detected in screening will never grow to become clinically significant and will not impact a woman’s life,” Kerlikowske said.

News release: “Some lesions detected in screening will never grow to become clinically significant and will not impact a woman’s life,” Kerlikowske explained.


Total Score: 4 of 9 Satisfactory

