Posted by Gary Schwitzer in Risk communication
Professor Gerd Gigerenzer of the Max Planck Institute in Berlin published an article in the BMJ last week, “Five year survival rates can mislead.” Excerpt:
While running for president of the United States the former New York mayor Rudy Giuliani announced in a 2007 campaign advertisement, “I had prostate cancer, 5, 6 years ago. My chance of surviving prostate cancer—and thank God, I was cured of it—in the United States? Eighty-two percent. My chance of surviving prostate cancer in England? Only 44 percent under socialized medicine.”
To Giuliani this meant that he was lucky to be living in New York and not in York, because his chances of surviving prostate cancer seemed to be twice as high in New York. Yet despite this impressive difference in the five year survival rate, the mortality rate was about the same in the US and the UK.
Why is an increase in survival from 44% to 82% not evidence that screening saves lives? For two reasons. The first is lead time bias. Earlier detection implies that the time of diagnosis is earlier; this alone leads to higher survival at five years even when patients do not live any longer. The second is overdiagnosis. Screening detects abnormalities that meet the pathological definition of cancer but that will never progress to cause symptoms or death (non-progressive or slow growing cancers). The higher the number of overdiagnosed patients, the higher the survival rate. In the US a larger proportion of men are screened by prostate specific antigen testing than in the UK, contributing to the US’s higher survival rate.
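To see why lead time bias works this way, consider a minimal sketch with hypothetical numbers (not data from the Giuliani example): a cohort in which every patient dies of the cancer at the same age, so screening changes nothing about mortality, yet five-year survival jumps from 0% to 100% simply because the diagnostic clock starts earlier.

```python
# Lead-time bias, illustrated with made-up numbers:
# every patient in this hypothetical cohort dies of the cancer at age 70,
# regardless of when it is diagnosed.
ages_at_death = [70] * 100

def five_year_survival(age_at_diagnosis, deaths):
    """Fraction of patients still alive 5 years after diagnosis."""
    survivors = sum(1 for age in deaths if age - age_at_diagnosis >= 5)
    return survivors / len(deaths)

# Without screening, symptoms prompt diagnosis at age 67:
# patients live 3 years after diagnosis, so five-year survival is 0%.
no_screening = five_year_survival(67, ages_at_death)

# With screening, the same cancers are found at age 64:
# patients live 6 years after diagnosis, so five-year survival is 100%.
with_screening = five_year_survival(64, ages_at_death)

print(no_screening)    # 0.0
print(with_screening)  # 1.0
# Mortality is identical in both scenarios: everyone still dies at 70.
# Only the starting point of the survival clock has moved.
```

The same arithmetic applies, in milder form, to real screening programmes: any earlier diagnosis inflates survival measured from diagnosis, which is why mortality rates, not five-year survival, are the honest yardstick.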
Giuliani is not alone in misleading by the use of five-year survival rates. In an earlier article in the BMJ, Woloshin & Schwartz of Dartmouth wrote, “How a charity oversells mammography,” referring to The Susan G. Komen Foundation. In that BMJ article, they dismantled a Komen ad that claimed: “The five-year survival for breast cancer when caught early is 98%. When it’s not? 23%.”
Women need much more than marketing slogans about screening: they need—and deserve—the facts. The Komen advertisement campaign failed to provide the facts. Worse, it undermined decision making by misusing statistics to generate false hope about the benefit of mammography screening. That kind of behaviour is not very charitable.
The Gigerenzer paper ends with a call to medical educators and medical journals to do a better job, with three suggestions:
Firstly, risk communication needs to become a central skill in medical education. For decades medical schools have failed to teach students statistical thinking (biostatistics does not seem to help much). The basic structure for such a teaching programme already exists.
Secondly, organisations responsible for continuing medical education and recertification programmes should ensure that doctors are trained in understanding evidence and in risk communication.
Finally, journal editors and reviewers should no longer allow misleading statistics such as five year survival to be reported as evidence for screening. Editors should enforce transparent reporting of evidence, for the benefit of their readers and of healthcare in general.
This project deals primarily with journalists and consumers, and the warning to both should be clear: scrutinize survival rate statistics, because they can mislead.