This story does a good job of informing readers that although a combination implantable defibrillation-resynchronization heart device reduced heart failure risk compared to a defibrillation-only device, independent experts downplayed the importance of the results and predicted this trial would not dramatically alter standard treatment.
However, the story could have done a better job by reporting the absolute differences in heart failure risk, rather than just the relative risk reduction. The distinction is particularly important in situations like this one, where the overwhelming majority of patients fared well regardless of which device they received.
Also, even though the story pointed out that the trial was paid for by the manufacturer of the devices being studied, it should have noted that the editorial accompanying the research article in the New England Journal of Medicine raised several concerns about the trial design and the way the results were reported, including the lack of ‘blinding’ of physicians making the heart failure diagnoses (though the diagnoses were adjudicated by blinded committees) and factors in patient selection that may make it difficult to compare the results with other similar trials.
The story notes that the combination defibrillation-resynchronization device sells for $30,000 to $40,000. However, it should have also told readers how that price compares to the cost of a standard implantable defibrillator.
Readers would also want to know the total cost, including implantation and associated care, although the opacity of health care pricing in the U.S. makes such estimates difficult to pin down.
We also wish this story (and many stories) included the Number Needed to Treat, or NNT: how many people need to be treated in order for one person to benefit. Based on the study results, the NNT for heart failure is 13, so the cost of preventing a single case of heart failure is $390,000 to $520,000 for the device alone, over a follow-up period of about 2.4 years. While the story rates a satisfactory based on the criteria, this fairly simple calculation would have given the average reader a better understanding of the financial implications.
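As a rough illustration of that arithmetic, here is a minimal sketch (the variable names are ours; it simply multiplies the NNT of 13 cited above by the quoted device price):

```python
# Back-of-the-envelope cost of preventing one case of heart failure,
# using the figures cited in the review: NNT of 13 and a device price
# of $30,000 to $40,000 (device only, over roughly 2.4 years of follow-up).
nnt = 13                       # number needed to treat, as cited above
device_price_low = 30_000      # low end of the quoted device price (USD)
device_price_high = 40_000     # high end of the quoted device price (USD)

cost_low = nnt * device_price_low    # 390,000
cost_high = nnt * device_price_high  # 520,000
print(f"Device cost per case of heart failure prevented: ${cost_low:,} to ${cost_high:,}")
```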
Although this story puts the trial results in context with several comments from experts saying the trial is unlikely to dramatically change current treatment, it fails this criterion by citing only relative risk reductions without mentioning the absolute numbers.
The story cited a 41 percent reduction in the risk of developing heart failure, but it did not point out that the overwhelming majority of patients remained free of heart failure during the study period regardless of which device they received. The story should have noted that the risk of developing heart failure was 22.8 percent in the defibrillator-only group vs. 13.9 percent in the combination device group, which means the absolute risk of heart failure was about 9 percentage points lower in the combination group.
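To make the absolute-versus-relative distinction concrete, here is a minimal sketch of the arithmetic using the crude event rates quoted above (the story's 41 percent figure presumably reflects the trial's time-to-event analysis, so the simple ratio below comes out slightly lower):

```python
# Absolute vs. relative risk reduction, using the event rates quoted above.
risk_defib_only = 0.228   # heart failure risk in the defibrillator-only group
risk_combination = 0.139  # heart failure risk in the combination-device group

absolute_reduction = risk_defib_only - risk_combination     # ~0.089, about 9 percentage points
relative_reduction = absolute_reduction / risk_defib_only   # ~0.39, about 39 percent

print(f"Absolute risk reduction: {absolute_reduction:.1%}")
print(f"Relative risk reduction: {relative_reduction:.1%}")
```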
The story didn’t mention any of the complications or other harms linked to implantable defibrillation and resynchronization devices, such as unnecessary defibrillation shocks, broken shock wires that require surgical replacement, and blood clots.
The story included several caveats about the trial results, along with several comments from experts who put the results in perspective.
However, while the overall tone of the story captures the quality of the evidence, it should have mentioned some of the concerns raised in the editorial accompanying the study in the New England Journal of Medicine: that the design and reporting of this company-funded trial left important questions unanswered, that it relied on the diagnostic interpretations of physicians who knew which device the patients received (in other words, they were not “blinded” to protect against bias), and that the journal article reporting the trial results failed to include important details about patients that might influence the interpretation of those results.
The story erred by saying patients were followed more than four years when actually the journal article says the average follow-up was less than two and a half years.
The story does not provide important information about the study population. The patients enrolled did indeed have mild symptoms, based on their NYHA class I or II status. However, they also had a prolonged QRS duration and a low ejection fraction, both of which place them at significant risk for an event. The suggestion that these patients had "mild" disease is incorrect; they had mild symptoms but significant disease. There is a "mission creep" angle to this: framing the trial this way makes it appear that the device is important even for those with "mild" disease.
The strong point of this story is that it includes several comments from independent experts who are cautious about the implications of the trial results. It clearly points out that the trial was paid for by the manufacturer of the combination device that was studied.
However, the story could have expanded on the concerns voiced in the editorial about the design of this company-funded trial and the way the results were reported.
The story fails to point out that many patients with mild heart disease (like the ones in this trial) are managed with drug therapy.
The story says the combination device is commonly used in patients with serious heart failure and that this trial looked at patients with less severe disease.
The story noted that this trial was looking at expanding the use of devices that are already available. However, as noted in the editorial accompanying the published study, CRT-ICD therapy has been studied previously in smaller studies showing somewhat mixed results.
The story does not appear to be based on a news release.