Too little time spent on the weaknesses of a statistical modeling study. Independent expert analysis would have helped.
The thoughtful, conversational tone of this piece helped readers start to sort through the evidence for and against PSA screening, but, unfortunately, it left readers with the impression that this new study was bringing a lot of new evidence to the table. It should have brought in some independent voices to help readers understand the significant weaknesses of this retrospective study – weaknesses that were better described in the WebMD story. It also fell short by describing the potential harms of treatment as “small.” We don’t think most men would view those risks as small if given the numbers.
There has been such a parade of studies recently about PSA screening that stories need to take special care not to confuse readers. Randomized, controlled trials that actually follow patients and measure the specific mortality benefits of screenings and treatments should be discussed differently than retrospective studies — like the one in question here — that rely on snapshots in time and statistical modeling to draw big conclusions about saving lives.
We reviewed three stories that covered this new retrospective study on PSA screening. None of them discussed the costs involved in the PSA tests or the subsequent treatments.
The story does quantify the purported benefits of PSA screening as described in this recent study. And it at least made a stab at questioning the basis of these projected benefits and the statistical modeling study that was just published.
WebMD’s story quantified the potential harms in a more helpful way.
We also take issue with the story’s characterization of an “albeit small risk of sexual dysfunction and urinary leakage from eventual treatment.”
The discussion of harms was incomplete.
The story takes far too long to explain to readers how this study was done. It says in the lead, “But a new study this week reflects the continued view of many physicians — that screening does help to catch tumors earlier.” Toward the end of the story, it finally raises the issue that the study design might prevent it from accurately assessing the true mortality impact of PSA screening. It does this in a glancing way, though: “But is going back in time 30 years really the same thing that we’d get if we abolished PSA testing today? Who knows?” A WebMD story on the same study, by contrast, directly critiqued the study design, noting “how easily false results can creep into look-back studies.”
The story does not engage in disease mongering.
It is clear that the writer brings a lot of reporting depth to the story and has a clear understanding of the topic. An independent voice or two might have helped the story show readers where the preponderance of the evidence now lies instead of making it look as if this is merely a debate “between public health experts who have largely turned away from PSA screening, and many practicing clinicians who feel the test has helped their patients immensely.” That is a simplistic dichotomy, and the story never really answers the question it poses: “Why Can’t Doctors Agree?”
The alternative to PSA screening is the choice to decline screening, which is implicit in the discussion.
The story makes it clear that PSA screening is widely available.
The story says “the new study reflects a broader divide in the medical community.” Does it? It’s one study – using a statistical model that is not thoroughly analyzed in the story. In HealthDay’s story, by contrast, Dr. Otis Brawley of the American Cancer Society said, “None of these studies can be considered decisive other than in proving that there are some harms associated with treatment.” In the WebMD story, Dr. Barry Kramer of NIH is paraphrased as saying that false results can easily creep into such look-back studies.
The story does not rely on a press release.