Note to our followers: Our nearly 13-year run of daily publication of new content on HealthNewsReview.org came to a close at the end of 2018. Publisher Gary Schwitzer and other contributors may post new articles periodically. But all of the 6,000+ articles we have published contain lessons to help you improve your critical thinking about health care interventions. And those will still be alive on the site for a couple of years.

Guest post: Absolute risk not as straightforward as you might think

This is a guest post by Frederik Joelving, a staff writer at Reuters Health.

Absolute risk is one of the biggest buzzwords in health reporting today, and for good reason.

It’s frightening to hear that hormone replacement therapy doubles your risk of suffering a blood clot in your lungs, for instance (the relative risk). But knowing that in fact it causes fewer than one such event per 1,000 women per year puts the risk in perspective (the absolute risk).
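The contrast between those two framings is simple arithmetic. A minimal sketch, using made-up rates chosen only to match the article's "doubles your risk" and "fewer than one event per 1,000 women per year" framing:

```python
# Hypothetical illustration of relative vs. absolute risk.
# The rates below are assumptions for demonstration, not figures
# from any study.

baseline_rate = 0.0008   # assumed events per woman per year without therapy
treated_rate = 0.0016    # assumed rate with therapy

relative_risk = treated_rate / baseline_rate       # the scary "doubling"
absolute_increase = treated_rate - baseline_rate   # extra events per woman-year

print(f"Relative risk: {relative_risk:.1f}x")
print(f"Absolute increase: {absolute_increase * 1000:.1f} extra events per 1,000 women per year")
```

The same doubling sounds very different once it is expressed as a fraction of an event per 1,000 women per year.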

As a Cochrane review pointed out this week, relative risks are very persuasive, but they don’t always serve your best interests when making health decisions.

So why shouldn’t we as reporters just stick to absolute risks?

The problem is that unless you’re dealing with a large randomized controlled trial, absolute risks can be misleading, too. They carry hidden baggage, such as age, overall health, ethnicity and so on — all things that on their own could influence your risk of getting sick.

Here’s an example, which inspired me to write this post: When HealthNewsReview.org reviewed a recent story I wrote on a study of bone drugs and colon cancer, they pointed out its lack of absolute numbers:

The story should have said that 138 women in the non-cancer group took bisphosphonates and 97 in the cancer group took them, meaning that 41 women appear to have benefited from taking the drugs.

Instead of giving those figures, I chose to report the relative risk reduction (59 percent) and the lifetime risk in the general population (5 percent).

Why? Because even in a case-control study like the one I was writing about, there are bound to be important differences between the two patient groups — such as their general health, medication use and diet. Those differences make the absolute numbers hard to interpret at best, and misleading at worst.

In this and similar cases, I think you’re better off knowing the baseline risk in the general population. That gives you a sense of whether you should worry about colon cancer in the first place.

Once you know that, the relative risk starts to make sense. That’s because researchers usually do us the favor of trying to weed out the influence of extraneous factors in a so-called multivariate analysis.

What you end up with is the best estimate of the actual effect of bisphosphonates on colon cancer, assuming the link is causal (which of course we can’t assume in this case, because the study is observational).
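Putting the two figures the story did report together shows why they are useful in combination. This is a hypothetical back-of-the-envelope calculation, resting on the strong assumption that the adjusted 59 percent relative reduction applies directly to the 5 percent lifetime baseline risk:

```python
# Back-of-the-envelope combination of the two reported numbers:
# a ~5% lifetime baseline risk and a 59% relative risk reduction.
# Assumes (hypothetically) the adjusted estimate applies to lifetime risk.

baseline_lifetime_risk = 0.05   # ~5% lifetime colon cancer risk, general population
relative_reduction = 0.59       # reported relative risk reduction

risk_with_drug = baseline_lifetime_risk * (1 - relative_reduction)
absolute_reduction = baseline_lifetime_risk - risk_with_drug

print(f"Risk with drug: {risk_with_drug:.1%}")
print(f"Absolute reduction: {absolute_reduction:.1%}")
```

Seen this way, an impressive-sounding 59 percent reduction amounts to roughly three percentage points off an already modest baseline.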

Another example: About a month ago, researchers reported that 17 percent of people with mild hearing loss had dementia, compared to 4 percent of people with normal hearing.

Those are absolute numbers, but they won’t tell you much about your own risk of dementia, even if you happen to be hard of hearing. The main reason is that hearing loss tends to go hand-in-hand with getting old, as does dementia risk, and the numbers don’t take that into account. That’s why we didn’t include them.

The relative increase in risk — a doubling in this case — doesn’t tell you much about your own risk either, not least because it’s impossible to account for all potentially important variables.

But it tells you something real about the strength of the connection between the two phenomena, dementia and hearing loss. And more importantly, it doesn’t give you a false sense of knowing your own risk.
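The age-confounding point above can be made concrete with numbers. All figures in this sketch are invented: within each age group the risk ratio is exactly 2, but because hearing loss is concentrated among the old, the crude comparison looks far more alarming:

```python
# Hypothetical numbers showing how age can confound a crude comparison.
# Within each age stratum the risk ratio is exactly 2, yet the
# unstratified ("crude") ratio comes out much larger, because hearing
# loss and dementia both cluster in the older group.

dementia_risk = {   # assumed dementia risk by age group and hearing status
    ("old", "loss"): 0.20, ("old", "normal"): 0.10,
    ("young", "loss"): 0.01, ("young", "normal"): 0.005,
}
share_old = {"loss": 0.9, "normal": 0.2}   # assumed age mix of each hearing group

def crude_risk(hearing):
    p_old = share_old[hearing]
    return (p_old * dementia_risk[("old", hearing)]
            + (1 - p_old) * dementia_risk[("young", hearing)])

crude_ratio = crude_risk("loss") / crude_risk("normal")
stratified_ratio = dementia_risk[("old", "loss")] / dementia_risk[("old", "normal")]

print(f"Crude risk ratio: {crude_ratio:.1f}")             # inflated by the age mix
print(f"Age-stratified risk ratio: {stratified_ratio:.1f}")  # the underlying doubling
```

Adjusting for age (here, simply comparing within an age stratum) recovers the real doubling that the crude percentages obscure.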

Absolute risk is an important measure, but we shouldn’t use it indiscriminately.

###

Publisher’s note: I welcome this kind of contribution from journalists whom we review. I hope for a broader, more open dialogue and exchange of ideas in the future. For years now we have offered a brief primer on absolute versus relative risks on our website.

Comments

Please note, comments are no longer published through this website. All previously made comments are still archived and available for viewing through select posts.

Larry Husten

March 18, 2011 at 10:43 am

Great post! I (absolutely) agree. It’s important to remember that there’s no one magic number or statistic that can replace critical thinking. Every statistic has its advantages and disadvantages, and each story requires a thoughtful evaluation of the best way to convey the important points of the study.

Kate Murphy

March 21, 2011 at 9:46 am

People need information that helps them make good health care decisions.
Statistics are part of that information, but only if they understand what those numbers mean. Most don’t.
59 is a BIG number. When I see a 59 percent reduction in the risk of getting colorectal cancer, I am impressed. Unless, of course, the risk was very small anyway and I’m only slicing that very small risk in half.
And that very small risk becomes even smaller if I realize that it only applies to older women.
As health journalists we need to explain what those statistics mean, whether we use absolute risk, relative risk, rates, or any other numbers.
Finding simple ways to turn statistics into something that is both meaningful and understandable isn’t easy. But it is critical.

William M. London

March 21, 2011 at 10:57 am

In a case-control study, no measure of risk (absolute or relative) can be used for the study data since you’ve selected subjects on the basis of the health outcome (typically disease status).
The measure of association used in a case-control study is the odds ratio (or relative odds). It compares the odds of having been exposed to something (or having a personal characteristic) in a group with a disease to the odds of having been exposed to something (or having a personal characteristic) in a control group without the disease.
(The odds of having been exposed is the probability of having been exposed divided by the probability of having not been exposed.)
So there is a problem with the healthnewsreview.org criterion for reporters to present a measure of absolute risk from data in a news report when the study being covered doesn’t present any measure of risk.
A common error in reporting is equating relative odds with relative risk. This error is hard to avoid since few readers understand how relative odds and relative risk differ. (Epidemiology students often struggle with grasping this distinction, as well.) Fortunately, relative odds is a good estimate of relative risk when the magnitude of the association is not very large. Thus, even when the reporting on a case-control study isn’t quite right, it often isn’t terribly misleading.
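[Editor's illustration of the comment above, using a hypothetical full-cohort 2×2 table, where both measures can be computed; in an actual case-control sample only the odds ratio is estimable:]

```python
# Hypothetical 2x2 cohort table illustrating odds ratio vs. risk ratio.
# With a rare outcome the odds ratio closely approximates the risk
# ratio, though it is slightly inflated in magnitude.

#                                   diseased  healthy
exposed_cases, exposed_noncases = 30, 970
unexposed_cases, unexposed_noncases = 10, 990

risk_exposed = exposed_cases / (exposed_cases + exposed_noncases)
risk_unexposed = unexposed_cases / (unexposed_cases + unexposed_noncases)
risk_ratio = risk_exposed / risk_unexposed

odds_exposed = exposed_cases / exposed_noncases       # P(disease) / P(no disease)
odds_unexposed = unexposed_cases / unexposed_noncases
odds_ratio = odds_exposed / odds_unexposed

print(f"Risk ratio: {risk_ratio:.2f}")   # 3.00
print(f"Odds ratio: {odds_ratio:.2f}")   # 3.06 -- close, but slightly larger
```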

Gary Schwitzer

March 21, 2011 at 11:27 am

William,
Thanks for the thoughtful note.
I don’t agree that there’s a problem with that one aspect of one of our ten criteria.
We don’t criticize stories for using relative risk in such cases – or we try not to. If we have misapplied the criterion in any case, it was our error for which we apologize.
Sorry for brief reply. I’m on vacation but wanted to be sure to give some response promptly.
Gary Schwitzer
Publisher

Frederik Joelving

March 21, 2011 at 4:04 pm

I appreciate all the comments.
Admittedly, I use the terms “relative risk” and “absolute risk” rather loosely. The hearing loss-dementia study used a measure called the hazard ratio, for instance, not the relative risk. Although the two measures aren’t identical, the gist is the same, and that’s what matters here, I think.
As for the bisphosphonate study, the researchers do indeed calculate the relative risk reduction based on the different numbers of women who took the drugs in each group (as HNR’s reviewers pointed out). The point I was trying to make is not about absolute risk in the strict sense, but more about absolute numbers, which have the same problems.

William M. London

November 4, 2011 at 2:14 pm

Thanks for your reply, Gary.
To clarify, my point is that technically there is no (and cannot be any) measure of “risk” in a case-control study. Odds are not the same as risk. However, reporters often refer to results of case-control studies as relative risk even though the results are expressed as relative odds (odds ratio). This is understandable since the concept of odds is more difficult to understand than the concept of risk and the concept of relative odds is more difficult to understand than the concept of relative risk. (Odds mean the ratio of the probability of event occurring to the probability of the event not occurring. An odds ratio provides the ratio of odds for one group relative to the odds in another group.)
So my point is that your criterion that refers to risk isn’t technically applicable to coverage of case-control studies. I’m not suggesting this is more than a small problem; odds ratios provide a good estimate (though inflated in magnitude) of relative risks in most research papers. But to be technically correct, your criterion should refer to either relative risk or relative odds.
By the way, I’m a big fan of healthnewsreview.org! Thanks for your important efforts!