Reacting to some of these headlines is this guest post by Richard Hoffman, MD, MPH, one of our story reviewers on HealthNewsReview.org.
Earlier this week, investigators reported that men treated for localized prostate cancer who were also taking aspirin were less likely to die from prostate cancer than men who were not taking aspirin. The story garnered considerable media attention. The news reports were intriguing, but I found two underlying themes worth considering: the concepts of patient-centered outcomes and comparative effectiveness research.
While researchers and clinicians often focus on surrogate endpoints (markers of disease progression), patients are more concerned about outcomes such as disability, hospitalization, and death. Dying from prostate cancer is certainly an important outcome, as is dying from any cause. Because aspirin is used to protect against death from cardiovascular disease, inquiring readers might want to know about the non-cancer deaths.
A New York Times story quoted the lead author as saying “researchers went to great lengths to make sure that aspirin users were not experiencing fewer deaths from prostate cancer simply because they were more elderly and therefore more likely to die of other diseases before prostate cancer had progressed enough to kill them.” This is important because competing causes of death could have biased the conclusions about aspirin and prostate cancer death.
However, when I read the study, I could find only that 75% of the deaths were not caused by prostate cancer, with no indication of whether overall mortality differed by aspirin status. An important omission in the quest for patient-centered outcomes.
Another theme highlighted by the study is comparative effectiveness research. Research often seeks to determine the best ways to treat diseases. The most valid evidence comes from randomized controlled trials, which provide a best-of-all-worlds answer: highly selected patients followed closely by expert health care teams, though with relatively small numbers of patients and limited follow-up. However, using randomized trials to answer research questions is not always feasible, and their results are not necessarily generalizable to clinical practice. Consequently, investigators increasingly rely on observational data, which can come from prospectively followed cohorts, registries, claims data, or electronic health records.
While observational data are easier to collect, and better reflect the messy real world of clinical care, their interpretation is subject to biases. The most important is selection bias: people who receive one treatment versus another may differ markedly in characteristics that are highly correlated with the outcomes of interest. Determining whether good outcomes are due to taking a drug or to baseline differences in patient health is challenging. Accepted statistical techniques, though imperfect, exist to adjust for selection bias in observational studies. However, it did not appear that the investigators used them in their analyses.
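For readers curious what such an adjustment looks like in practice, below is a minimal sketch of one widely accepted technique, propensity-score weighting, run on simulated data. The cohort, variable names, and effect sizes are all hypothetical, invented only to show the mechanics; nothing here comes from the aspirin study itself.

```python
# Hypothetical illustration of propensity-score weighting, one accepted
# technique for adjusting for selection bias in observational studies.
# All data are simulated; nothing is drawn from the aspirin study.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 20_000

# Simulate confounding by indication: healthier patients (lower
# comorbidity burden) are more likely to take aspirin AND less likely
# to die, even though aspirin has no true effect in this simulation.
comorbidity = rng.normal(size=n)
aspirin = rng.binomial(1, 1 / (1 + np.exp(0.5 + 1.0 * comorbidity)))
death = rng.binomial(1, 1 / (1 + np.exp(2.0 - 1.2 * comorbidity)))

df = pd.DataFrame({"aspirin": aspirin, "comorbidity": comorbidity,
                   "death": death})
X = sm.add_constant(df[["aspirin"]])

# Naive analysis: aspirin appears protective purely through selection.
naive = sm.Logit(df["death"], X).fit(disp=0)

# Step 1: model each patient's propensity to take aspirin from
# baseline covariates.
Z = sm.add_constant(df[["comorbidity"]])
ps = sm.Logit(df["aspirin"], Z).fit(disp=0).predict(Z)

# Step 2: weight patients by the inverse probability of the treatment
# they actually received, balancing the groups on comorbidity.
weights = np.where(df["aspirin"] == 1, 1 / ps, 1 / (1 - ps))
adjusted = sm.GLM(df["death"], X, family=sm.families.Binomial(),
                  var_weights=weights).fit()

print("Naive aspirin odds ratio:    %.2f" % np.exp(naive.params["aspirin"]))
print("Adjusted aspirin odds ratio: %.2f" % np.exp(adjusted.params["aspirin"]))
# Expected: naive OR well below 1 (spurious benefit); adjusted OR near 1.
```

In the simulation, the naive comparison manufactures an aspirin "benefit" out of nothing but healthier patients choosing aspirin; weighting by the propensity scores balances the groups, and the apparent effect disappears. Whether a real benefit would survive such an adjustment in the aspirin study is exactly the question its analyses leave open.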
While observational studies can often be hypothesis-generating (an aspirin benefit in cancer patients is certainly plausible), there are numerous pitfalls in performing and reporting them. Caveat lector.