
New Blood Thinner Beats Plavix When Paired With Low-Dose Aspirin

Rating

3 Star



Our Review Summary

We feel several facets of the study should have been given more attention. Most notably, the drugmaker funded it. Given the ongoing FDA review, we think that’s pretty important. But read our full review for our questions about how the story covered (or didn’t cover) the evidence, benefits and cost issues at play.

 

Why This Matters

The path to approval of an investigational drug usually has many twists and turns. Brilinta is no exception. The pivotal trial for this platelet inhibitor was completed, and based on the positive findings, the advisory board that reviewed the data voted to recommend approval. But there was an interesting sidebar: at the United States sites for the trial, Brilinta did not work as well as a potential competitor, Plavix, in preventing events in people with acute coronary syndrome. AstraZeneca wanted to know why. This latest report is a follow-up examination of the original trial, and the authors concluded that Brilinta and low-dose aspirin worked a bit better than Plavix and aspirin. Patients treated with higher doses of aspirin did not do as well.

The story misses this very important background entirely and presents an incomplete and misleading summary. Our unsatisfactory ratings in this review identify the threads missing from the article that we feel are essential to weaving a fair portrait of the research.

Criteria

Does the story adequately discuss the costs of the intervention?

Not Satisfactory

No discussion of costs. Ticagrelor isn’t available in the US. But its cost – where available elsewhere – could have been cited. Cost will certainly be an issue if it’s approved because its competitor, clopidogrel, will soon be available in the US as a generic drug.

Does the story adequately quantify the benefits of the treatment/test/product/procedure?

Not Satisfactory

We have a few suggestions here. Most importantly, we think the take-home message of the study was oversimplified. The headline claims that Brilinta “Beats Plavix When Paired With Low-Dose Aspirin.” In fact, this study was a geographic analysis to explain why Brilinta wasn’t better than Plavix in the US, unlike in other countries, and the lead conclusion was: “The regional interaction could arise from chance alone.” Next, the story says that the aspirin dose was “a possible explanation.” In fact, since last year the dose of aspirin has been suggested as a possible explanation for the regional differences in Brilinta’s effects, and that hypothesis has generated controversy. This new study is important for raising our antennae and spurring new research, but it wasn’t designed to prove the effect of aspirin.

Of course, the article’s take-home message is the same one that many in the field (including the FDA) may derive from this study. The aspirin finding is most relevant to current practice, both for the use of ticagrelor globally now and perhaps soon in the US. It’s also valuable for thinking about aspirin in this setting in general. However, we think the context should not be boiled off from the message.

It’s important to keep in mind the limitations of this reported benefit, as we discuss under the evidence quality criterion below. The details, such as the weakening of a study’s power by repeatedly slicing and dicing the same data, are statistical and beyond the scope of this article. But we think more of the big picture would have been appropriate. For journalists, the big picture is a question of tone, of conveying the caution urged by the researchers. For doctors, the big picture is that only future randomized trials can investigate whether the pattern observed in this study with aspirin dose is a true effect, a red herring resulting from chance, or a surrogate marker of another factor that’s more important to target.

Additionally, the benefit is quantified as a relative risk reduction. If you’re a regular here, you know how we feel about relative risks vs. absolute ones. In absolute terms, the difference between the two drugs was about 2 percentage points for heart attack, stroke, and death.
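
To make the contrast concrete, here is a minimal sketch using hypothetical event rates (chosen only to produce roughly the 2-percentage-point gap described above; they are not the actual PLATO figures):

```python
# Hypothetical event rates for illustration only -- not the PLATO results.
control_rate = 0.12    # 12% of patients on the comparator drug have an event
treatment_rate = 0.10  # 10% of patients on the new drug have an event

absolute_risk_reduction = control_rate - treatment_rate           # 0.02 = 2 percentage points
relative_risk_reduction = absolute_risk_reduction / control_rate  # ~0.167 = "17% lower risk"
number_needed_to_treat = 1 / absolute_risk_reduction              # ~50 patients per event prevented

print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")
print(f"Relative risk reduction: {relative_risk_reduction:.1%}")
print(f"Number needed to treat:  {number_needed_to_treat:.0f}")
```

In this hypothetical, the same comparison can be described as “17% lower risk” or as “a 2-percentage-point difference”; the first sounds far more impressive, which is why we prefer stories to report the absolute numbers alongside the relative ones.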

Does the story adequately explain/quantify the harms of the intervention?

Satisfactory

In the setting of heart attacks, the major harms of the drugs are related to their benefits, namely how well they prevent future cardiovascular tragedies. However, all the drugs in this article, including aspirin, impart a risk of bleeding, which is part of the comparison in PLATO and this follow-up analysis. It was good that the article mentioned, albeit briefly in the quote from Dr. Berger, the increased risk of bleeding with higher doses of aspirin.

Does the story seem to grasp the quality of the evidence?

Not Satisfactory

On the plus side, the story acknowledged that conference research is preliminary until a paper has been published in a peer-reviewed journal. (The paper is now available from the journal Circulation.) It also gave the follow-up period and the outcome measured behind the benefit figures.

On the minus side, we would have liked more about the underlying PLATO study, such as its design and the number of subjects. We also think the article could have tied a lot of threads together by explaining what this particular analysis by Mahaffey et al. actually was. Why was it done? How is it related to the delay in FDA deliberations? Was it all about aspirin? Such information could have ushered in the background that, in the PLATO trial, ticagrelor actually wasn’t better than clopidogrel at study sites in the US. In fact, that’s thought to be why the FDA has delayed its decision, and this geographic analysis was intended to help settle the issue.

The first step would have been to describe that background and what type of study this was. The next step would have been to evaluate its quality. The investigators themselves offered some helpful critiques in their paper: the study was post hoc and data-driven, no adjustments were made for multiple comparisons, this type of exploratory analysis is potentially hazardous, and because of all these limitations the indictment of aspirin dose as the culprit behind reduced ticagrelor efficacy should be viewed with caution.
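
To show why unadjusted multiple comparisons are “potentially hazardous,” here is a minimal sketch; the test counts are hypothetical, and the calculation assumes the tests are independent, which real subgroup analyses are not:

```python
# Illustration only: how the chance of at least one false-positive finding grows
# when the same trial data are sliced into many unadjusted subgroup comparisons.
# The test counts are hypothetical and the tests are assumed to be independent.
alpha = 0.05  # conventional threshold for calling a single result "significant"

for num_tests in (1, 5, 10, 20):
    chance_of_spurious_hit = 1 - (1 - alpha) ** num_tests
    print(f"{num_tests:>2} unadjusted tests -> "
          f"{chance_of_spurious_hit:.0%} chance of at least one chance 'finding'")
```

This is the arithmetic behind the investigators’ own caution: a signal that emerges from this kind of exploratory slicing, such as the aspirin-dose pattern, could arise from chance alone.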

Please see also our comments under Benefits for further discussion on these lines.

Does the story commit disease-mongering?

Satisfactory

The population is pretty well specified in the article as those with acute coronary syndromes, and it explains that the FDA is considering an indication for these patients who are undergoing angioplasty or stenting.

Does the story use independent sources and identify conflicts of interest?

Not Satisfactory

Our biggest bone to pick with the story is its failure to mention that the study was funded by the maker of the drug. Keep in mind that Brilinta is going up against Plavix, one of the best-selling drugs on the planet, and at US sites of PLATO, Plavix “won” (kind of). With FDA approval influenced by the results of this PLATO follow-up, AstraZeneca’s funding was a pretty important conflict of interest to disclose.

One independent source was cited, Dr. Berger. Further sources may have revealed more of the history and controversy in the field. In the discussion at the AHA conference where this new study was announced, at least one panelist said he thought the results were “spurious,” emphasizing the study’s conclusion that the result could have been due to chance.

Does the story compare the new approach with existing alternatives?

Satisfactory

The main PLATO study was a comparison of an experimental drug (ticagrelor) to an approved one (clopidogrel).

Does the story establish the availability of the treatment/test/product/procedure?

Not Satisfactory

We would have preferred a different tone, one that didn’t suggest the drug is “awaiting” what then appears to be almost inevitable FDA “approval.” The FDA panel that produced the 7-1 vote is not the panel that decides whether to approve drugs; it’s an advisory panel. The FDA usually follows its recommendations, but it has not decided whether to approve the drug. That makes the caveat that the “approval process is still ongoing” sound somewhat perfunctory, as if it’s only a matter of time. It would be more balanced to say the drug is still under review.

Does the story establish the true novelty of the approach?

Satisfactory

This was the latest analysis of the PLATO study, comparing different combinations of antiplatelet drugs, including one experimental drug that’s up for review. With more space, it would have been nice if the article further differentiated ticagrelor and clopidogrel for patients.

Does the story appear to rely solely or largely on a news release?

Satisfactory

The article has several differences from the news release, and on that basis we pass it on this criterion. The two do have a similar tone, and neither addresses the investigators’ recommendation for cautious interpretation of this exploratory study, which they conclude may be reporting the effects of chance. The release, however, does mention that AstraZeneca funded the study.

Total Score: 5 of 10 Satisfactory
