The following post is by Mary Chris Jaklevic, a freelance health reporter who’s written for Modern Healthcare magazine and several daily newspapers. She tweets as @mcjaklevic.
The U.S. Preventive Services Task Force offered new guidance this week on the contentious question of when to prescribe cholesterol-lowering statins for millions of people without a history of heart disease. For all their import, the recommendations weren’t all that definitive.
The task force found that statin use is beneficial for some people aged 40 to 75 who are at increased risk for cardiovascular disease, but did not find enough evidence to recommend for or against statin use for people older than 75.
Statins have been in use for years, backed by ample research, and most experts agree that they are beneficial for patients who’ve already been diagnosed with heart disease (their use in these patients is called “secondary prevention”). So why is there still furious debate over their clinical worth in people who don’t have a history of heart disease (“primary prevention”)? Some point a finger at data secrecy.
Lack of data-sharing isn’t part of the mainstream statin narrative
Sponsors of industry-funded studies haven’t shared individual patient-level trial data with other researchers. Some say this level of data — which is more detailed than the aggregate results reported in published studies — is needed to confirm the validity of the findings and make sure they support the conclusions that are being drawn. Lack of such data could have biased the results of the meta-analysis that the task force relied on for its recommendations, critics say. This point didn’t make it into news coverage.
Stories by The Washington Post and Reuters did describe the discord in the cardiology world. (For an analysis of the Post’s coverage, see this blog post by HealthNewsReview.org Managing Editor Kevin Lomangino.)
Both news outlets zeroed in on a critical editorial by two JAMA Internal Medicine editors that accompanied the guidelines’ publication. Rita F. Redberg, M.D., and Mitchell Katz, M.D., questioned the strength of the evidence showing the benefits of statins and asserted that not enough is known about potential harms.
But both news outlets skipped over a key issue at the heart of their argument: lack of data-sharing. Redberg and Katz wrote that the data on statins for primary prevention is “weak.” They explained:
The USPSTF and authors of the evidence report did not have access to the primary data (clinical study reports and anonymized patient-level data) from the statin clinical trials. Rather, they had to rely on peer-reviewed published reports as the basis for these recommendations. …
Additionally, the actual trial data are largely held by the Cholesterol Treatment Trialists’ Collaboration on behalf of the industry sponsor and have not been made available to other researchers, despite multiple requests over many years.
They wrote that the potential for bias is exacerbated by the fact that all but one of the trials included in the task force evidence report were industry-sponsored, and industry-sponsored studies have been shown to report greater benefit and fewer adverse effects than non-commercially sponsored trials of the same drugs.
USPSTF: “confident” in recommendations, although “sharing would have been useful”
In an email, task force member Doug Owens said: “While additional data-sharing would have been useful, since the Task Force always seeks out the most detailed evidence available for each topic, in this case, we found that we were able to be confident that there was sufficient evidence available to generate a clear recommendation on statin use in certain populations.”
Individual patient-level data from statin trials are held by a UK-based research collaboration that has been authorized by trial sponsors to conduct meta-analyses of combined studies. However, that group states on its website that it isn’t authorized to share this data with other researchers such as the USPSTF — which noted in its evidence review that it didn’t have access to individual patient data. The collaboration says outside researchers are welcome to propose analyses and may be invited to collaborate with the group, but that “requests for data should be made directly to the data custodians of each trial.”
Other arguments against data-sharing include concerns about misinterpretation or misuse of proprietary data, as the New England Journal of Medicine spelled out in this editorial. Proponents argue that patient data can be de-identified and opening data to multiple interpretations is the point of science.
A missed opportunity to educate readers about avoidable uncertainty
By not reporting the refusal of industry sponsors to more broadly share data, media outlets lost an opportunity to educate the public about the importance of transparency, which directly affects the ability of patients and doctors to make fully informed decisions.
A few mainstream media outlets have covered the data-sharing controversy, notably Stat, the Wall Street Journal and the New York Times. But there’s an argument to be made that data-sharing should be thrust into more news stories about particular medical interventions.
Data secrecy is “an issue that affects all of medicine,” says BMJ editor-in-chief Fiona Godlee, M.D., whose journal convened a panel that called on release of primary statin data last year.
Godlee says that in the few cases where full trial data has become available for reanalysis, industry biases have been exposed. For example, last year BMJ published a reanalysis of data from a 2001 study of Paxil for adolescent depression, which found the drug was not statistically better than a placebo — reversing the original study’s conclusion that it was beneficial. The data were made available through legal action against the drugmaker, GSK.
A 2010 German study concluded reporting bias is a “widespread phenomenon,” citing examples involving 50 pharmaceutical, surgical, diagnostic and preventive interventions.
Giving less weight to data that isn’t openly shared
Tim Errington, metascience manager at the Center for Open Science, said in an email that journalists should be asking questions about transparency when they cover research, including whether investigators are sharing underlying data, metadata and research methods and adhering to open science best practices such as those espoused by the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network, an international initiative to improve the reliability of published research.
“If the underlying aspects of an exciting discovery cannot be shared openly or in a manner to allow other researchers to reanalyze and scrutinize, then the weight of that discovery is lower than one that can be shared,” Errington said.
Some journals, including The BMJ, as well as government and foundation funders of research, have been at the vanguard of a growing transparency movement, requiring data sharing from the investigators they sponsor or publish.
Re-analysis of data can expose clinically meaningful discrepancies
There’s evidence showing that re-examining study results can bring about alternative interpretations and strengthen scientific discourse. A 2014 study showed that of the small number of reanalyses of randomized clinical trials performed, only a few were conducted by entirely independent authors. But 35 percent “led to changes in findings that implied conclusions different from those of the original article about the types and number of patients who should be treated.”
The differences between the original studies and the re-analyses sometimes occurred because the researchers conducting the re-analyses used different statistical or analytical methods, outcomes measures, or methods of handling missing data, according to the study authors. In other cases, re-analyses identified errors such as the inclusion of patients who should have been excluded from the original results.
In some cases, re-analyses have uncovered bias on the part of original investigators. Cardiovascular risks associated with the anti-inflammatory drug Vioxx, which was removed from the market due to safety concerns in 2004, were obscured by an array of methods: using short treatment periods, recruiting patients at low risk of cardiovascular disease, not counting cardiovascular events for part of the trial, not reporting the absolute number of cardiovascular events, and suggesting that a comparison drug had a protective effect. That’s according to a report by researchers who examined trial data as part of litigation against the drug’s maker, Merck.
Data-sharing: “A very long way from the norm”
Indeed, it can be impossible to tell how well a trial collected information on adverse events without looking at specific protocols, said epidemiologist Peter Doshi, an associate editor at The BMJ, who was part of a Cochrane Collaboration review of clinical trial reports on Tamiflu. He said one common problem is that researchers put more effort into proving a drug’s efficacy than into documenting potential harms.
Experts say there’s a long way to go before we see meaningful improvement on this front.
“Data-sharing is the exception rather than the norm, a very long way from the norm,” says Godlee. “In fact examples of data sharing, or more accurately examples where the anonymized individual patient data and clinical study reports from the trials have been made available and scrutinized by independent researchers, are vanishingly small.”