One of the criticisms of health news coverage is that it is often too boosterish, breathlessly reporting new findings without any context, particularly when it comes to side effects. It’s hard to disagree. But two new reports suggest that some stories may be taking a cue from highly touted studies that fail to note complications.
The new reports, by independent researcher John Wilson, MD, of Brentwood, Tennessee, look at how authors of 20 studies of two heart devices present risks – and find them lacking.
Both devices are implanted, and do what pacemakers do, but are also more sophisticated. Implantable cardioverter-defibrillators (ICDs) include pacemaker functions, and can also shock the heart if it goes into a dangerous rhythm. Cardiac resynchronization therapy (CRT) devices coordinate how different parts of the heart contract, and are generally used in people with heart failure.
There’s no question that both types of devices can benefit patients, but they’re also expensive, costing thousands of dollars each, and putting them in can lead to complications. Wilson told me in a phone interview that he was prompted to do a more rigorous analysis of the studies while refreshing his memory about the complication rates of both devices:
I went to the discussion sections, and found nothing about them. I was aware of complications with defibrillators and ICDs, but there was no warning, no mention. That made me wonder what was going on. I went back to get all the papers, then did the same with CRT. I found remarkably similar things.
Such complications can be serious, including bleeding, infection, and collapsed lung, and implantation is not always successful, meaning another procedure is required. But, says Wilson:
The articles were generally written to convince readers of efficacy. Generally, the complications were not enumerated if they were mentioned at all.
There was some good news. HealthNewsReview.org readers will be familiar with the arguments about whether reporting only relative risk benefit – rather than absolute risks – can skew interpretations. Other studies have shown that authors sometimes present only relative risk ratios. But, according to Wilson, not in the studies he examined:
Are they actually presenting the efficacy statistics correctly? Yes. Generally they did not do something you often see, using ratios. They were pretty careful about making certain people understood absolute instead of relative risks.
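The absolute-versus-relative distinction is easy to show with arithmetic. A minimal sketch, using invented trial counts for illustration only (not numbers from any study Wilson examined):

```python
# Illustrative only: hypothetical counts, not drawn from any of the trials discussed.
def risk_stats(events_treat, n_treat, events_ctrl, n_ctrl):
    """Compute absolute and relative risk figures from raw event counts."""
    risk_t = events_treat / n_treat      # event rate in the treated group
    risk_c = events_ctrl / n_ctrl        # event rate in the control group
    arr = risk_c - risk_t                # absolute risk reduction
    rrr = arr / risk_c                   # relative risk reduction
    nnt = 1 / arr                        # number needed to treat for one fewer event
    return {"absolute_reduction": arr, "relative_reduction": rrr, "nnt": nnt}

# Suppose 70 of 1,000 control patients die vs. 50 of 1,000 device patients.
stats = risk_stats(events_treat=50, n_treat=1000, events_ctrl=70, n_ctrl=1000)
print(stats)
```

Here the same data yield a roughly 29 percent relative reduction but only a 2-percentage-point absolute reduction, which is why reporting one without the other can skew a reader's impression.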
But while the studies did well in terms of absolute risks, they generally fared poorly when it came to discussing side effects. Some did describe side effects comprehensively, Wilson notes. But the authors hardly ever discussed how to decide whether a particular patient’s risk was worth the benefit.
Wilson published his findings on 10 ICD trials online in the American Journal of Cardiology in April, and today the Archives of Internal Medicine reports his results from looking at 10 CRT trials. The studies join a growing body of research on whether published papers spend enough time on safety issues.
So does it matter if authors leave out side effects? Wilson gives a hypothetical:
If you tell a patient that in one trial, five percent of patients benefit over five years, and a physician says, ‘I’d like to put this in you, you’re likely to benefit,’ many might do it. Whereas if a physician says 95 out of 100 aren’t going to benefit, but 30 percent are going to have complications, very few are going to do it. We should try to let patients make decisions based on the best information.
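Wilson's hypothetical is really a framing exercise over the same numbers. A small sketch, using his stated figures per 100 patients:

```python
# Restating Wilson's hypothetical; the per-100 framing is his, the code is illustrative.
patients = 100
benefit = 5         # "five percent of patients benefit over five years"
complications = 30  # "30 percent are going to have complications"

no_benefit = patients - benefit
print(f"Benefit framing:      {benefit} in {patients} patients benefit")
print(f"Full-picture framing: {no_benefit} of {patients} won't benefit, "
      f"and {complications} of {patients} will have a complication")
```

Both statements describe the identical trial result; only the second gives a patient what they need to weigh benefit against harm.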
Studies have shown that many of the people who have such devices implanted actually don’t meet the criteria set by various groups. Wilson is quick to say he can’t link that phenomenon to the rhetoric he found in the papers, but, he says “it suggests there’s a somewhat cavalier attitude and approach to this.”
Wilson – who notes he helped recruit patients for one of the CRT trials but was not involved in writing it up – isn't out to paint the authors of these trials as unethical, or say they're doing anything out of the ordinary:
I do not think that the people writing up these studies are trying to delude people. People tend to not talk about side effects in many studies, in many areas. There’s a tradition of writing these sorts of things up the way these were written up. I don’t think the cardiologists who were involved in these trials were trying to pull the wool over anybody’s eyes. This has just become a style of presentation in medicine, and people need to abandon that style of presentation, and be a bit more rigorous.
So why do authors understate complications? From the discussion of Wilson’s American Journal of Cardiology paper:
The reason authors use rhetorical techniques to emphasize treatment benefits is uncertain. Publications that describe impressive treatment benefits are likely to be published in high-impact medical journals and gain their authors considerable academic and public attention. This fact may encourage authors to highlight treatment benefits and minimize treatment risks. In addition, ICD primary prevention trials are all sponsored by large medical device companies that benefit financially when device benefits are emphasized. These companies also provide substantial grant support to clinical investigators. This support could potentially influence the objectivity of authors.
I tried to talk to some of the authors of the papers that scored worst on Wilson’s criteria, without much luck. One deferred to his “academician” co-authors, and another said simply:
This paper is not worthy of any form of scientific comment.
The editors and peer reviewers of the American Journal of Cardiology and the Archives of Internal Medicine – both highly ranked, prestigious journals – obviously disagreed.
Wilson himself has had varying responses to his analysis from the researchers who peer-reviewed the papers – some of whom were likely the very authors whose work he is critiquing, based on some of their comments:
One group says ‘this is an interesting observation, it highlights the importance of being careful about how you write things.’ Another suggests that the length of articles doesn’t allow them to put this information in. There are restrictions on the number of pages, for example. Others have suggested that a person who is an experienced cardiologist should be able to read between the lines and understand that they’re written in a way that highlights the benefits.
And he’s sympathetic to those length constraints. He doesn’t think authors need three or four pages for side effects. He’d be satisfied if papers simply listed them in a comprehensive table:
There’s very little of that in these papers. Very few tables are allocated to describing the complications. People who say they’re not long enough, that’s perhaps not an accurate way to look at it.
And what about the idea that physicians who read these studies can be counted on to read between the lines?
That’s not necessarily accurate. There are a lot of generalists reading it. A large population of physicians will read these types of things, particularly in major [nonspecialty] journals [such as the New England Journal of Medicine], get excited, and start referring patients based on that.
Wilson’s next step is to look at how the rhetoric in these papers has affected news coverage. His hunch is that it has led to boosterish stories. He also wants to examine how the studies are being presented at FDA approval hearings.
It’s been pretty interesting to read through transcripts of meetings to see whether risks and complications are discussed at great length. In general, they’re not.
One of his favorite comments was from an interventional electrophysiologist – a doctor who treats the electrical conduction problems of the heart – sitting in on one of the FDA panels. The physician said that if he had the complication rates described in these studies, he’d be run out of town. And yet it has been easier, the doctor contended, to get CRT devices approved than new devices in his field. Says Wilson:
My ultimate hope is to help patients. When you write these, the first group to receive the information is physicians. More important is what ultimately gets to the patients. I think it’s very important that people and physicians realize that not everybody has the same view of benefits and risks. Some people are willing to have high risks, and little payoff, while others are not. It’s most important for physicians to be able to articulate the benefits and the risks, but it’s also important how you express it.