Informed patients need one thing not provided in SPRINT trial news: what were the absolute benefit/harm numbers?

On November 9th, the New England Journal of Medicine published SPRINT (Systolic Blood Pressure Intervention Trial), comparing “intensive” blood pressure lowering (targeting <120 mm Hg systolic) against standard therapy (<140 mm Hg), and came to a pretty definitive conclusion: “SPRINT now provides evidence of benefits for an even lower systolic blood-pressure target than that currently recommended in most patients with hypertension.”

The implications could be huge, even as the authors acknowledge that the lower rates of fatal and nonfatal cardiovascular events in the intensive group needed to be weighed against some worrying adverse events, particularly increased cases of kidney failure.

There was lots of coverage of this study. I looked at three stories that I thought would be representative of the coverage in general — one from Time, a widely read national news magazine; one from Reuters, a leading wire service; and one from MedPage Today, a highly respected news source for health professionals that is also read by many savvy members of the general public. All three stories contained both hits and misses as any drug story might. But none of them delivered on the key benefit/harm equation information that consumers worried about high blood pressure need to know.

In my opinion, Time gets the “You Should Know Better” award for reporting the benefits of the intervention in relative terms (43% and 38% for deaths and heart failure, respectively) and the harms in absolute terms (1% or 2% for fainting, dizziness and falls). Any readers seeing those two sets of numbers will run, not walk, to the nearest doctor to get on the intensive blood pressure lowering train. Time, you missed a crucial opportunity to slow down that stampede.

The Reuters story reported the benefits in unhelpful relative terms yet did go the extra mile to quote those who had serious concerns about whether antihypertensive guidelines needed changing. This story captured some problems with the SPRINT study, and included quotes from Dr. Steven Nissen, who wasn’t recommending sprinting to any hypertension finish line. He said that he’d want to know “which patients are likely to suffer kidney failure” before changing his practice.  

The MedPage Today story had one added feature the others didn’t: it got into important details about the ‘money factor’ surrounding the study, through a table at the end of the article listing financial conflicts of interest between the study’s investigators and pharmaceutical companies. The list was incomplete, however, failing to note that the lead investigator, Jackson Wright, reported a 2013 consulting relationship with Takeda — but at least it was a start. Noting that it would be difficult for many physicians to lower their patients’ blood pressure very dramatically unless combination products were prescribed, the story reminded us that the newest angiotensin receptor blocker, azilsartan (Edarbi), as well as its combination with chlorthalidone (Edarbyclor), were donated by Takeda and Arbor Pharmaceuticals for the SPRINT study. Even though these products accounted for only 5% of the medications used in SPRINT, and most antihypertensives are cheap and generically available, it is no great stretch to see that this study’s conclusions will soon be distributed to our physicians and likely used to support Takeda’s marketing campaign for Edarbi and Edarbyclor.

One factor not stressed in any of these media reports is that SPRINT is ONE study, and there have been many studies comparing aggressive versus standard hypertension management. The principle that needs reinforcing here is: NEVER jump onboard the moving train of a single study, especially one that was stopped early, which is a known signal for bias to creep in. Our own reviewer, Dr. Michael Pignone, echoed this sentiment when he wrote: “A larger, more general issue is whether to look at this trial in isolation, or whether to consider it in the context of other trials that have asked similar questions. Most such trials have been done in higher risk populations (particularly patients with diabetes).”

I agree with Dr. Pignone wholeheartedly, especially since “How low should we go?” is an issue that has been well covered by the folks at the Cochrane Hypertension Review Group. Their 2009 review said aiming for targets lower than 140/90 was not beneficial. Examining data from 7 trials in over 22,000 people, those researchers found that “using more drugs in the lower target groups did achieve modestly lower blood pressures. However, this strategy did not prolong survival or reduce stroke, heart attack, heart failure or kidney failure.” Do health care journalists learn early in their careers that well-designed systematic reviews trump single studies? They should. They should also remember that no research happens in isolation, and so the journalist’s job is to provide the important context that previous research supplies.

It’s certainly clear to me that a breathlessness about the need to target lower blood pressures was reflected in media reports about SPRINT. They were essentially hammering home the narrative that lower is better. While it is good to see media reports that cite experts who question this conclusion, we can ask these journalists to report more completely by seeking out the absolute numbers. They, of course, might rightly respond: “Why didn’t the study authors themselves provide them for us to report?”

I decided to construct my own table using the individual secondary outcomes and serious adverse effects that were found to be ‘statistically significant’ (i.e., a P value below 0.05). Here’s how many people need to be treated in the “intensive” blood pressure group, compared to “standard” therapy, to help or harm one person:

125: Number needed to treat to prevent one case of heart failure

167: Number needed to treat to prevent one death by cardiovascular causes

83: Number needed to treat to prevent death by any cause

100: Number needed to harm to cause one case of hypotension

167: Number needed to harm to cause one case of syncope

125: Number needed to harm to cause one case of electrolyte abnormality

56: Number needed to harm to cause one case of acute kidney injury or renal failure

42: Number needed to harm to cause one serious adverse event
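The arithmetic behind these figures is simple: the NNT (or NNH) is just the inverse of the absolute difference in event rates between the two groups. Here is a minimal Python sketch; the rates shown are my approximation of SPRINT's all-cause mortality figures (roughly 4.5% with standard therapy versus 3.3% with intensive therapy) and are for illustration only.

```python
def nnt(control_rate, treated_rate):
    """Number needed to treat (or harm): the inverse of the
    absolute risk difference between the two groups."""
    arr = abs(control_rate - treated_rate)
    if arr == 0:
        raise ValueError("No difference between groups; NNT is undefined")
    return round(1 / arr)

# Illustrative rates: ~4.5% deaths (any cause) with standard therapy
# vs ~3.3% with intensive therapy
print(nnt(0.045, 0.033))  # -> 83: treat 83 people to prevent one death
```

The same one-line division produces every entry in the list above, which is why journalists could compute these numbers themselves from the trial's published event rates.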


Click on the table at left for a more detailed breakdown of where these numbers came from. In a nutshell, in this population of relatively older people with high blood pressure, the most likely outcomes with more intensive blood pressure lowering are acute kidney injury/acute renal failure (1 in 56) or a serious adverse event (1 in 42), which, while not necessarily deadly, is defined by the FDA as an event that is “life-threatening” or that requires hospitalization or prolongation of an existing hospitalization. While the more intensive group might have a slightly reduced risk of death or heart failure, those reductions come at the cost of other adverse effects that can also be disabling or even fatal.

The vagueness of the reporting of benefits and harms is understandable, given that journalists would need to work to pry those numbers from the published study. But it’s still unacceptable. If I can take 20 minutes to construct a table like this, then why can’t other journalists?

In the absence of doing this, can they at least ask about, and report on, the KEY information people need to make an informed decision about whether three drugs a day (versus two) are worth swallowing: in ABSOLUTE TERMS, how do the benefits of intensive blood pressure lowering compare to the harms?


Comments (8)


Victor Montori

November 25, 2015 at 1:34 pm

I think you need to seriously re-think the value of NNTs to communicate risk to the public, be it professional or lay. NNTs are constantly changing the denominator (1 in 42, 1 in 52, 1 in 100) making it difficult to compare these estimates (which is easier if the base is kept constant). Also, to help understand them it is nice to see those who will NOT experience the bad outcomes as well. Finally, little has been said about the impact on these estimates of having stopped the trial earlier than planned. Should patients and clinicians cheer this decision? Or should new pressure be applied to prevent this from happening again…?

    Alan Cassels

    November 25, 2015 at 6:11 pm

    Thank you for your comments, Dr. Montori. I understand the problems of truncated trials, perhaps best described by Gordon Guyatt and colleagues in the BMJ. I probably should have mentioned in the blog that, like other truncated trials, the halting and rapid publication of SPRINT may lead to overestimates of treatment effects, wider media dissemination, and quicker uptake into clinical practice guidelines than if the trial had been left to run its planned course. So we should not cheer the decision to stop the trial, and sprint to erroneous conclusions about what it all means. As for NNTs, your suggestion planted a seed of doubt in me about the value of the NNT, and I discovered a Cochrane systematic review which concluded that, compared with the NNT, the ARR was better understood and perceived to be larger, though there was little or no difference in “persuasiveness.” Does this mean the NNT is more or less impactful than absolute numbers? I don’t know. Beyond that, there is the perennial problem of innumeracy among our citizenry, and most of us who are numerate have difficulty communicating meaning (except in very imprecise ways) regardless of which statistical display of treatment effects we deploy. Even when the odds are absurdly long (of winning the lottery, for example), people still flock to buy tickets in a massive display of not appreciating probabilities. Using the inverse of the NNT/NNH (i.e., giving the numbers for who will NOT experience the good or the harmful outcomes) brings up the important question of framing, on which there is also a Cochrane review. Its conclusion is, quite frankly, a little dissatisfying: “Contrary to commonly held beliefs, the available low to moderate quality evidence suggests that both attribute and goal framing may have little if any consistent effect on health consumers’ behaviour.”

Andrew DePristo

November 26, 2015 at 6:50 am

I agree with Dr. Montori and have created a simple table of the outcomes for 1000 treated patients (much as disease incidence is reported as a number per 100,000 population).

For each 1000 patients treated:
8 heart failures prevented
6 deaths by cardiovascular causes prevented
18 acute kidney injuries or renal failures caused
10 hypotensions caused
6 syncopes caused
8 electrolyte abnormalities caused
Overall, 12 deaths by any cause were prevented for every 24 serious adverse events caused.
The results would definitely not start a stampede to aggressive treatment of high blood pressure.

    Alan Cassels

    November 26, 2015 at 12:20 pm

    Andrew. Brilliant. Thank you.

Christina Zarcadoolas

November 30, 2015 at 7:36 am

The fundamental issue not discussed in any of the coverage is the low health literacy and numeracy of more than half the public. Goldbeck et al. define numeracy as “the degree to which individuals have the capacity to access, process, interpret, communicate, and act on numerical, quantitative, graphical, biostatistical, and probabilistic health information needed to make effective health decisions.” We know from over 30 years of clear evidence that approximately half of the adults in the US have low health literacy and difficulty understanding health information. A substantial part of the problem is difficulty interpreting basic numbers and calculations; proportions and percentages, let alone statistical significance, remain completely inaccessible to millions. There is a pressing need to develop better ways to present numerical information to the public if we ever hope to free the public from media proclamations and pseudo-science.

    Alan Cassels

    November 30, 2015 at 12:19 pm

    Christina, thank you for those thoughts. I agree that many individuals have trouble understanding quantitative information. We assume that as long as we represent the effects in numbers, those numbers will shape the kinds of decisions people make; this might be a wrong assumption. Research such as this paper from Australia indicates that “perceived risk, motivation, and attitudes appeared to be more important than absolute risk thresholds” in a study of managing cardiovascular disease risk. The implication of this kind of research is that physicians might spend less time trying to ‘get the numbers right’ and more time determining what the patient’s perceived risk is, and how motivated they are to do anything (take drugs, adopt lifestyle changes, etc.) to change that risk.

      Christina Zarcadoolas

      November 30, 2015 at 7:12 pm

      Your insightful comment and references to our Aussie friends also remind me of what the behavioral economist Dan Ariely has been demonstrating (in a very entertaining way) since “Predictably Irrational”:
      scientific, numerical, financial (fill in the blanks) evidence competes with our perceptions and habits of irrationality all the time.
      What I’ve seen most in my studies is that complicated, inaccessible numbers lead to either avoidance or magical thinking. Not sure which is worse.

Stephen Cox, MD

December 2, 2015 at 9:10 am

It seems that if all medical articles reported all findings in ABSOLUTE rather than RELATIVE percentages, the intentional and unintentional misunderstandings in both professional and lay interpretation would be greatly minimized. Why is this not done?