Last week’s Journal of the American Medical Association included an article, “Prespecified Falsification End Points: Can They Validate True Observational Associations?” that got guest blogger Harold DeMonaco, MS, thinking in a way that might get you thinking. Here is his guest post:
That JAMA article by Prasad and Jena offers a novel solution to the vexing problem of false-positive associations generated in observational studies. Their solution is to include an implausible hypothesis, called a falsification end point, in the analysis.
The current New Drug Application (NDA) review process is woefully inadequate for identifying relatively rare side effects of prescription drugs. The reason is fairly simple: too few subjects (patients) are enrolled in the pivotal trials. If an adverse event has an incidence of 1 per 1,000 patient-years, there is simply no way it would be picked up on review. Unless, of course, the side effect is dramatic enough (e.g., the progressive multifocal leukoencephalopathy seen in patients treated with Tysabri) to jump up and grab the observer by the throat.
The increasing availability of large electronic medical databases gives researchers a great opportunity to generate hypotheses and to identify potential rare side effects. But size does not always provide greater clarity. For example, several observational studies suggested an association between acid-reducing proton pump inhibitors and pneumonia. As the JAMA authors note, there is a biologic underpinning: de-acidification of the stomach allows what a colleague once described as colon-ization, with bowel bacteria taking up residence in the neutralized stomach. Aspiration due to reflux disease would seem a plausible explanation for an increased incidence of pneumonia. Subsequent studies, however, failed to demonstrate an increased risk.
Prasad and Jena offer an out-of-the-box solution: in addition to testing the suspected association, test one or more associations that cannot possibly be true. If the implausible association also turns up "significant," the analysis itself is likely biased, so falsification end points help flag false-positive results before they are taken at face value.
They also suggest that coincident analysis of a drug's known side effects can further validate study results.
The coincident search for either a plausible association (e.g., rash with an antibiotic, bleeding with an antiplatelet drug) or a truly implausible one (e.g., monocular blindness with penicillin) provides additional insight into the validity of the sought-after association. Or, in layman's terms: if you want to look for an association between a drug and a side effect in a population, also look for a side effect that is totally unlikely, or one that has been demonstrated previously. It will be interesting to see whether this approach gains any traction in the medical literature.
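The logic of a falsification end point can be illustrated with a toy simulation (a minimal sketch with made-up numbers, not an analysis of any real database or of the authors' actual method). Here a hypothetical hidden "frailty" factor makes patients both more likely to receive a PPI and more likely to have outcomes recorded, so a naive comparison inflates the risk ratio for pneumonia and for an implausible end point alike; the inflated implausible result is the red flag:

```python
import random

random.seed(0)
N = 100_000

counts = [0, 0]        # patients among [no PPI, PPI]
pneumonia = [0, 0]     # suspected side effect
blindness = [0, 0]     # implausible falsification end point

for _ in range(N):
    # Hypothetical confounder: frail patients are more often prescribed
    # a PPI AND more often develop (or have recorded) any outcome.
    frail = random.random() < 0.30
    on_ppi = random.random() < (0.60 if frail else 0.20)
    # The drug itself has NO true effect on either outcome.
    got_pneumonia = random.random() < (0.100 if frail else 0.020)
    got_blindness = random.random() < (0.010 if frail else 0.002)

    i = 1 if on_ppi else 0
    counts[i] += 1
    pneumonia[i] += got_pneumonia
    blindness[i] += got_blindness

# Naive (unadjusted) risk ratios, exposed vs. unexposed
rr_pneu = (pneumonia[1] / counts[1]) / (pneumonia[0] / counts[0])
rr_blind = (blindness[1] / counts[1]) / (blindness[0] / counts[0])

# Both ratios come out well above 1.0 even though the drug does nothing;
# the "impossible" blindness signal tells us the analysis is confounded.
print(f"pneumonia RR: {rr_pneu:.2f}, blindness RR: {rr_blind:.2f}")
```

In a real study the same statistical model and adjustments would be applied to both end points; the point is simply that whatever bias afflicts the primary analysis should show up in the falsification analysis too.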