
Test of Eye Drug Is Said to Show Success in Elderly

Rating

4 Stars


Our Review Summary

This NYT blog piece broke the story of a trial of one company’s $50 drug versus its $2,000 drug for the same problem – age-related macular degeneration.

The piece provides a lot of good detail about the study, but it should’ve slapped a bigger warning label on this cocktail of leaked summaries of secret results, speculation, and an inadequate dash of independent evaluation.

So, while the story earned a 4-star score in an honest and fair application of our criteria, the concerns raised below are significant.

Give special attention to the “Evidence” and “Harms” criteria below, and the comments of other journalists on how the NYT handled this story.

 

Why This Matters

It’s an interesting preview of the debates that may become more frequent in the era of comparative effectiveness research, with devils in the details. The story itself, however, is a preview of a preview: it’s information from anonymous investigators who couldn’t even provide the non-peer-reviewed data. (Twenty-four hours after the blog post went live, the vetted study was published by the NEJM.)

Ultimately, too much of the article rested on hearsay, without enough acknowledgement of the limitations of such sources. We think a few more caveats around these phenomenally early results (they’re rumors, really, and sometimes conflicting ones at that) would’ve done readers good. The same goes for the discussion of the Genentech study, which was derived from a conference abstract.

A business blog may focus more on the paper-and-coin implications than on the science, but the two are intimately related. And were it purely a market intelligence mill, we could excuse more. But we think the blog’s reach, and its general robustness in evaluating evidence, warrant the additional caveats. Even a “Stay tuned for the details” would’ve helped set the right tone.

Now that the embargo has been lifted and the peer-reviewed study published, the conclusion is similar. But we get much more specificity, which is especially important on the question of the drugs’ safety, about which the blog had little to say (at least regarding the NEI study’s safety results), a notable omission. The perils of being first…

Criteria

Does the story adequately discuss the costs of the intervention?

Satisfactory

Costs are the heart of this story.

Does the story adequately quantify the benefits of the treatment/test/product/procedure?

Not Applicable

It did as much as it could given the lack of data, detailing the noninferiority criteria and describing how the results are expected to fall. We think the blog is clear that peer-reviewed benefits data will be available upon publication. While in many cases we’d ding this approach, since this study was of the noninferiority variety, the ‘quantity’ of primary interest is a binary yes-or-no answer, and it is explored. The story also hints at another endpoint of interest, retinal thickness. Although some of this seems speculative or based on hearsay, there’s much more here than in the discussion of harms.

As a side note, we would’ve liked some exploration of the clinical significance of either drug’s benefits. We understand the business interest in how the drugs stack up against each other financially, but how well they fight this disease at all is of interest to patients. And a blog piece like this reaches all readers, not just the business community.
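To make that binary yes-or-no framing more concrete, here is a minimal sketch of how a noninferiority call typically works; the margin, direction convention, and confidence interval below are hypothetical illustrations, not figures from the NEI trial.

    # Minimal sketch of a noninferiority decision; all numbers are hypothetical,
    # not results from the NEI trial.

    def is_noninferior(ci_upper, margin):
        """Difference is defined as comparator minus cheaper drug, in letters of
        visual acuity gained, so larger values mean the cheaper drug did worse.
        Noninferiority is declared if the confidence interval's upper bound
        stays below the pre-specified margin."""
        return ci_upper < margin

    margin = 5.0      # hypothetical noninferiority margin, in letters
    ci_upper = 2.8    # hypothetical upper bound of the 95% CI for the difference

    print("Noninferior" if is_noninferior(ci_upper, margin) else "Noninferiority not shown")

The headline answer really is yes or no: either the interval rules out a clinically meaningful disadvantage for the cheaper drug, or it doesn’t.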

Does the story adequately explain/quantify the harms of the intervention?

Not Satisfactory

Potential harms in the NEI trial couldn’t be quantified, as results weren’t available. That might have been excusable if the story had acknowledged that more information, potentially extremely important, would be available upon publication. We give it credit for noting that the trial’s size could only detect major differences in safety, but it doesn’t discuss whether any such differences were observed or might be; are we to assume from the silence that the two informants blessed the safety? Given the 57% increase in hemorrhagic stroke risk reported in the Genentech trial, one would think a major difference is a live concern, and the outcome was worth mentioning, even qualitatively, for balance.

The relative risks from the Roche Genentech study are presented, along with a number of grains of salt. We wish the Times had given a bit more information, as journalist Jim Edwards wrote in his Placebo Effect blog:

  • the abstract of the Roche study only reports “relative” risks of death or cranial hemorrhage, which might look a lot less scary when expressed in absolute terms. For instance — and these numbers are merely illustrative — if Lucentis patients face a 0.1% chance of death, the risk for Avastin patients is just 0.11% (11% greater than the risk in the Lucentis group). Don’t count on the mainstream media to clarify that anytime soon.
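To make Edwards’ arithmetic concrete, here is a quick back-of-the-envelope sketch using his illustrative 0.1% baseline, plus the story’s 57% relative increase applied to that same made-up baseline; none of these numbers are actual trial results.

    # Back-of-the-envelope conversion of a relative risk increase into absolute
    # terms. The 0.1% baseline is Edwards' illustrative figure, not trial data.

    def absolute_view(baseline_risk, relative_increase):
        """Return the comparator group's risk and the absolute risk difference,
        given a baseline risk and a relative increase (0.57 means 57%)."""
        comparator_risk = baseline_risk * (1 + relative_increase)
        return comparator_risk, comparator_risk - baseline_risk

    # Edwards' example: 0.1% baseline risk, 11% relative increase
    risk, diff = absolute_view(0.001, 0.11)
    print(f"+11% relative on a 0.1% baseline -> {risk:.3%} ({diff:.3%} absolute increase)")

    # The 57% relative increase cited in the story, on the same hypothetical baseline
    risk, diff = absolute_view(0.001, 0.57)
    print(f"+57% relative on a 0.1% baseline -> {risk:.3%} ({diff:.3%} absolute increase)")

Framed this way, even a 57% relative increase works out to a fraction of a percentage point in absolute terms, which is exactly the distinction Edwards says readers were owed.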

Does the story seem to grasp the quality of the evidence?

Not Satisfactory

Much of the story was right on target here. It gives us a detailed landscape in which to locate the NEI study, including the history of prior research on this topic, the study’s funder, randomized design, number of subjects, schedule for the presentation and publication of results, treatment regimen, as-needed portion, follow-up period, and definition of noninferiority. We’re also told about the ongoing second year of follow-up and what it may yield. We’re given fewer details on the Genentech study, but the ones we get are excellent, including the number of subjects, study design, and potential confounders.

But two strikes determined our rating. First, at the time the blog post went up, the NEI study had not been published in a peer-reviewed journal or even presented at a conference. We get no data whatsoever, only the opinions of two people who had seen the data. We know they’re investigators but don’t know anything else about them, and we discuss the implications of that below. While the story points out that experts haven’t been able to scrutinize the Genentech study data yet, we don’t get the corresponding caveat for the NEI study information, which was even more preliminary than the Genentech summary at the time: the world didn’t even have a published abstract.

Sure, NEJM peer reviewers were able to scrutinize the NEI data, and presumably the two leakers are briefing us on that peer-reviewed version. But it’s not clear. We think the article should’ve put these limitations front and center.

Second, there’s the Genentech study. As noted, it came with a good caveat about the lack of peer review. But a key known aspect of that trial wasn’t weighed when comparing it to the NEI study and interpreting the safety signals: they were different types of trials. The Genentech study was a retrospective record review; the NEI study was a prospective randomized trial. The distinction matters, e.g., for potential confounding by socioeconomic factors, and an explanation would’ve added balance.

Other journalists have commented on the NYT jumping the gun on this story.

MedPageToday posted a blog piece, “Times Gets Away with CATT Burglary,” that stated:

The Times reported top-line results on its website Wednesday based on leaked information from two anonymous trial investigators. The disclosed results were presented in nebulous terms, but revealed enough that the NEJM scrambled to publish the paper online Thursday morning.

The Times article noted, “Revealing trial results before they are published or presented at a conference is considered a violation of scientific protocol.” Breaking an embargo is also a violation of journalistic protocol, but everyone seems to have gotten off the hook.

…this was not a respectable move on the Times’ part in my eyes.

The article was essentially speculative, with a tone that suggested the reporter hadn’t seen anything to verify the claims by “two people familiar with the data” and with disagreement between the two sources on one point.

In the end, I don’t think the reporter did his readers a service by reporting such important data without knowing what the data actually were. The matter of a few days might make a big difference to investors but likely doesn’t to the general public, patients, and clinicians, who are arguably the more important audience as the ones using the information for treatment decisions.

And the Embargo Watch blog reported on the broader embargo back story.

Does the story commit disease-mongering?

Satisfactory

The story avoids disease mongering and provides good context about the disease.

Does the story use independent sources and identify conflicts of interest?

Not Satisfactory

At best, the blog interviewed two NEI study investigators and one seemingly independent source to comment on the Genentech study. That doesn’t clear the bar for this criterion.

It’s good to get the conflicts of interest for the study designers, but the types, number, and attribution of sources are inadequate, especially given the pre-release nature of the study results. We understand that the two sources had to remain anonymous, but that’s because they were sworn to secrecy. They broke that oath to get the word out a day before the NEJM peer-reviewed publication. One has to wonder: was that truly a service to readers, to discuss the results without presenting the results, to summarize the message without peer review, to give us opinions without stating conflicts of interest? Especially since the sources disagreed on some aspects? For 24 hours?

No sources were identified by name. The closest we get is one “retina specialist.” We have no way of evaluating the lenses through which we’re given much of the information.

Does the story compare the new approach with existing alternatives?

Satisfactory

The article is about a comparison of an off-label approach to the approved one.

Does the story establish the availability of the treatment/test/product/procedure?

Satisfactory

The story provides information on the FDA approval status, off-label usage, and utilization of both drugs.

Does the story establish the true novelty of the approach?

Satisfactory

The blog tells us there has never been a definitive trial to compare these drugs for this condition, and that the two drugs have similar mechanisms of action but different FDA approval.

Does the story appear to rely solely or largely on a news release?

Satisfactory

The story doesn’t appear to rely on a news release.

Total Score: 6 of 9 Satisfactory
