Reason number 1,001 to slow down when reporting on breaking health news

Kevin Lomangino is the managing editor of HealthNewsReview.org. He tweets as @Klomangino.

Credit: Larry Husten

We often plead for journalists and public relations people to slow down when reporting on breaking new research that makes big claims about health benefits.

Whether that news is coming from a company announcement or a scientific meeting, chances are good that the new findings haven’t been thoroughly evaluated by experts and are likely to overstate the significance of the research.

Journalist Larry Husten today provides a fresh example of why slow and steady wins the race that really matters to readers.

Husten is in Italy reporting on the European Society of Cardiology meeting, and his post concerns a debate regarding the NIH’s controversial SPRINT trial on blood pressure targets.

You may recall that it was nearly a year ago that NIH issued a breathless news release calling the SPRINT study a “landmark” that “provides potentially lifesaving information” to health care professionals. The study apparently found that more intensive blood pressure management, below current targets, “greatly reduces” the rate of heart disease and death in adults with high blood pressure.

We immediately called on the NIH to slow down the hype train. Its news release never quantified the results, addressed potential harms, or provided other key specifics that are essential to interpreting the data. And as Husten’s new piece makes clear, the findings are much more complicated than some fawning news coverage at the time suggested.

Husten reports on a session where many cardiologists in attendance gave a “thumbs down” to the trial, saying it shouldn’t be used by guideline committees to develop new practice recommendations. The reason? Attendees were apparently swayed by arguments from panelist Sverre Kjeldsen, who revealed that the study adopted what Husten calls “an unprecedented and novel method to measure blood pressure, making it impossible to compare with previous trials.”

In all the major recent hypertension trials, blood pressure had been measured three separate times with an automatic monitor in the presence of a healthcare professional. In SPRINT, however, the healthcare professionals were trained to leave the room before the measurements started. This method has not been validated, and Kjeldsen presented multiple lines of evidence suggesting that it would lead to much lower blood pressure readings, as much as 10-20 mm Hg lower than usual. He said that the systolic BP target of 120 mm Hg in SPRINT, when examined in this light, is actually closer to the prevailing standard of 140 mm Hg in other trials. He was also strongly critical of the SPRINT investigators for not reporting this significant fact in the original New England Journal of Medicine paper or in its supplement. He said it was unethical not to report this, since many clinicians have tried to apply the findings to their patients but were not aware of this important change.

Here we are, almost a year after the promotional NIH news release was issued and some 10 months after the peer-reviewed study was published in the New England Journal of Medicine, and key details about the study are still dribbling out that impact how cardiologists view the findings.

This shouldn’t be surprising to anyone. Pendulum swings like this are almost inevitable in scientific research. The optimism of an initial announcement gets tempered as other experts point out flaws and limitations in the research. This process takes time, but it should be anticipated with cautions and caveats.

We’ve gone from “lifesaving landmark” to “thumbs down” in less than a year. Who knows where the next swing of the pendulum will take us?

Comments (7)


Eskimo

September 2, 2016 at 12:01 pm

Wow. The results of major clinical trials can hinge on whether the person taking your blood pressure stays in the room?

    Kevin Lomangino

    September 3, 2016 at 9:09 am

    Eskimo, I’ll publish your comment because it raises an issue I think is important to address. But please, in the future, follow our comments policy and give us a complete name. Otherwise we won’t be able to publish your comment.

    As to your question, I believe this is the phenomenon that’s being referred to:

    http://www.bloodpressureuk.org/BloodPressureandyou/Medicaltests/Whitecoateffect

    Kevin Lomangino
    Managing Editor

Swapnil Hiremath

September 5, 2016 at 5:14 pm

Hi Kevin,
I have great respect for this website, as well as Larry Husten’s writing. On this issue, however, Larry is reporting on a talk given at the ESC. There was a lot of discussion on Twitter (see https://twitter.com/skathire/status/770087290869776384 and the thread below it) disputing the claim that the details of blood pressure measurement were not known or reported. The speaker used the word ‘unethical’ – which Larry reported. Ben Goldacre called it ‘tomfoolery’ in his inimitable manner. In your case, it is ‘key details….dribbling out’. This is simply inaccurate. If you read the methods of SPRINT (the NEJM paper: http://www.nejm.org/doi/full/10.1056/NEJMoa1511939#t=article): “Dose adjustment was based on a mean of three blood-pressure measurements at an office visit while the patient was seated and after 5 minutes of quiet rest; the measurements were made with the use of an automated measurement system (Model 907, Omron Healthcare).” They used the words ‘quiet rest’. I think that is pretty unambiguous. In Canada and some parts of the UK and US, these monitors (either the OMRON 907 or the BPTru) are regularly used in doctors’ offices. The MD or nurse is not present in the room – this minimizes the white coat effect. This is not a new technique – see papers by Dr Myers; the Canadian CHEP guidelines recommend using such methods, and we covered it in our #NephMadness education series (https://ajkdblog.org/2016/03/10/nephmadness-2016-hypertension-region/#SPRINT). This was apparent when the trial was discussed at the nephrology journal club (http://www.nephjc.com/sprint/) – our PubMed Commons comment (http://1.usa.gov/25XZG70) clearly states: ‘BP measurement ….without patient-provider interaction’.
The ongoing criticism centers on the new characterization of ‘quiet’ as ‘unattended’: this word was not used in the paper. This was apparent to us (see our letter to NEJM expanding on this topic: http://www.nejm.org/doi/full/10.1056/NEJMoa1511939#t=letters).
One can argue about whether the findings of SPRINT should be applied in the real world, where one does not have the support of a typical clinical trial; one can be unhappy about the press release from NIH and how it was handled; one can be unhappy about key details dribbling out for other trials. But there is no reason for such criticism of an extremely well-done, meticulously reported clinical trial conducted at over 80 centres with NIH funding.
Swapnil Hiremath, MD, MPH, FASN,
Ottawa
(CoI: not a part of SPRINT study; I am part of the HT Canada task force – these remarks are in a personal capacity)

    Kevin Lomangino

    September 6, 2016 at 7:28 am

    Swapnil,

    Thanks for your comment. My post isn’t about whether the SPRINT investigators did or didn’t report the blood pressure methods appropriately or whether they did anything unethical. You can take up that issue with Larry. My post is a warning to journalists and the public not to jump the gun on new studies that haven’t been fully debated and digested by the medical community. Regardless of whether the details were communicated correctly in the paper, the implications of these key details were not fully appreciated by the larger medical community or those who reported on the study. So in that sense, these key details are very much “dribbling out” to the wider public (thanks to the ESC session and Larry’s post), and my post is not inaccurate to characterize it as such. My criticism is not aimed at the study or the published paper. Science is complicated, and it takes time for results from a big study like this to be evaluated. That’s why investigators should not put out breathless news releases that lack key details, and why journalists should hedge their statements about what new research means for practice.

    Kevin Lomangino
    Managing Editor

bev M.D.

September 6, 2016 at 5:24 am

Well, did they also leave the room during the ‘before’ BP measurements? If so, then the change in BP following the applied management techniques should be comparable. While I am all for careful evaluation of studies, I also know that physicians will nitpick the data to justify their own preconceptions about its validity. In other words, I am asking whether this new criticism is itself justified.

John Galbraith Simmons

September 6, 2016 at 7:08 am

Indeed not surprising. I did not have the impression that these results caused the major mainstream press outlets to overreact, possibly because there was no clear culprit. Would be interested to learn more.

Rob Oliver MD

September 6, 2016 at 9:35 am

Not a cardiologist, but the idea that someone raised the objection based on whether someone was in the room while the manual BP was checked is baffling as a platform to attack a study, particularly with the strong language that panelist used. That seemed an awful lot like someone looking for any fig leaf to refute something they did not believe in, rather than high-quality evidence to dismiss the findings.