Odds ratios


From the U.S. Agency for Healthcare Research and Quality website page on odds ratios:

Definition: The chance of an event occurring in one group compared to the chance of it occurring in another group. The odds ratio (OR) is a measure of effect size and is commonly used to compare results in clinical trials.

Example: For example, a research study compared two groups of women who developed diabetes during their pregnancies. One group was treated with metformin, and the other group was treated with insulin. The researchers recorded how many of the mothers delivered their babies earlier than expected (less than 37 weeks after becoming pregnant). When they calculated the odds of an early delivery, the odds ratio (OR) for metformin was 1.06. This means that the women taking metformin had a small increase (1.06 times) in the odds of having an early delivery compared to the women taking insulin.

Jerome Hoffman


I asked Jerome R. Hoffman, MA, MD, Professor of Medicine Emeritus, UCLA School of Medicine, to write a brief primer on odds ratios.  I told him that a journalist had recently asked me:

“What the heck should I do with odds ratios in studies?

From what I’ve read, odds ratios are NOT the same as relative risk, but journalists and readers usually assume they are: They think something is twice as likely if the odds ratio is 2.0.  But that’s not what that means.

I write about studies on a daily basis and avoid citing odds ratios. Instead I try to get researchers to convert them to more understandable numbers when possible. If researchers can’t or won’t do that — some of them seem as confused about the concept of odds ratios as anyone else — I use vague language like “X is significantly more likely than Y.”

But that’s not really helpful to readers. And if researchers and journalists can’t understand odds ratios, I don’t know how I can teach readers to figure them out.

What do you think? Should odds ratios be reported as being the same as relative risk?  Should they be reported at all?”

So here’s what Dr. Hoffman wrote.


When we think about the relative effect of two competing approaches (tests, drugs, interventions, etc.), we are intuitively thinking about what is mathematically known as the risk ratio.  If one drug cures 80% of people, and the other drug cures 90%, the relative risk (RR) of a bad outcome is cut in half.  The RR is therefore 0.5, and the relative risk reduction (RRR) = 50%.  This is of course in most cases better presented as the absolute risk reduction (ARR), which would be 10% in this example.  (The NNT is the inverse of the ARR, so in this case it would be 1/0.10, or 10.)
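To make that arithmetic concrete, here is a minimal Python sketch using the same hypothetical cure rates as the example above (the variable names are illustrative, not from the original):

```python
# Two hypothetical drugs: one cures 80% of patients, the other 90%,
# so the bad-outcome risks are 20% and 10% respectively.

risk_old = 0.20  # bad-outcome risk with the first drug
risk_new = 0.10  # bad-outcome risk with the second drug

rr = risk_new / risk_old   # relative risk of a bad outcome
rrr = 1 - rr               # relative risk reduction
arr = risk_old - risk_new  # absolute risk reduction
nnt = 1 / arr              # number needed to treat

print(f"RR = {rr:.2f}, RRR = {rrr:.0%}, ARR = {arr:.0%}, NNT = {nnt:.0f}")
# RR = 0.50, RRR = 50%, ARR = 10%, NNT = 10
```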

For certain types of studies, where the size of the groups getting each of the interventions is not natural, but instead fixed by the study design (for example, in a case-control study, when one has arbitrarily chosen to make the size of the control group the same as that of the cases), it would be statistically inappropriate to present results in terms of RR; in such cases, it is fine to use the odds ratio (OR) as a surrogate for RR – as long as one doesn’t then pretend that they mean the same thing (or even worse, present the results in terms that suggest they represent a change in risk).  Of course some people use OR even when RR is statistically appropriate — which is a bit like cheating; they do this because OR always looks more impressive than RR.

For certain types of results (when the outcome is rare in both groups) OR fairly closely approximates RR (it looks only a little more impressive); the more common the outcome, however, the more these two measures diverge (and OR starts to look a lot more impressive).

I can show this to you with very simple math.

RR is calculated as the ratio between the groups being compared with regard to the % who have the outcome of interest.  Thus if there is a bad outcome in 10% vs 5% in 2 groups of 100 patients each, the RR is 5/100 divided by 10/100, or 5/10, or one-half.

OR is different because when 5% have a bad outcome, the odds of such = 5/95 — that is 5 bad outcomes vs 95 good ones.  (When odds are 2 to 1 at the track, it means 2 chances of losing and 1 of winning — or 2 of 3 chance [risk] of losing!)  So if your wonder intervention increases survival from 1% to 2%, the RR is 2.0 (2/100 ÷ 1/100 = 2), and the OR is very similar, at 2/98 ÷ 1/99 = 198/98 = 99/49 = 2.02.  (That’s a tad more than 2 … but hardly enough to care.)
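Here is that rare-outcome arithmetic as a short Python sketch, using the 1% and 2% survival figures from the hypothetical example above:

```python
# When the outcome is rare, the OR barely differs from the RR.
# odds(p) = p / (1 - p); the OR divides two odds, the RR divides two risks.

def odds(p):
    return p / (1 - p)

p_control, p_treated = 0.01, 0.02  # survival without and with the intervention

rr = p_treated / p_control               # 2.0
or_ = odds(p_treated) / odds(p_control)  # (2/98) / (1/99)

print(f"RR = {rr:.2f}, OR = {or_:.2f}")  # RR = 2.00, OR = 2.02
```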

If you increase survival from 10 to 20%, RR is still 2.0, but now OR is 20/80 ÷ 10/90 = 18/8 = 2.25.  Going from 20 to 40% is 4/6 ÷ 2/8 = 2.67 … and from 40 to 80% is 8/2 ÷ 2/3 = 6.0.  That is, the RR of a good outcome, when it increases from 40% to 80%, is still 2 … but the OR is 6!
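A small loop over the same examples shows the divergence in one place; the RR is 2.0 in every row, but the OR grows as the outcome becomes more common:

```python
# Reproducing the progression above: identical RR, increasingly inflated OR.

def odds(p):
    return p / (1 - p)

for p1, p2 in [(0.01, 0.02), (0.10, 0.20), (0.20, 0.40), (0.40, 0.80)]:
    rr = p2 / p1
    or_ = odds(p2) / odds(p1)
    print(f"{p1:.0%} -> {p2:.0%}: RR = {rr:.1f}, OR = {or_:.2f}")

# 1% -> 2%:   RR = 2.0, OR = 2.02
# 10% -> 20%: RR = 2.0, OR = 2.25
# 20% -> 40%: RR = 2.0, OR = 2.67
# 40% -> 80%: RR = 2.0, OR = 6.00
```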

This mathematical phenomenon occurs because for both RR and OR the numerator is simply the number of people with the outcome in question; but while for RR the denominator is always the same — the total N in the group — for OR the denominator is only the number of people without the outcome, so it keeps shrinking (with greater and greater impact on the final calculation) as the outcome becomes more common.

When an author says “6 times the chance” for the latter, he’s either lying or ignorant.  (I’ve encountered both.)

The same is true for RRs and ORs below 1, where bad outcomes are decreasing: for an RR of 0.5, the OR can be very similar (0.49) or very different (about 0.17), using the inverses of the same examples from above.
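The same sketch, run on the inverted examples, shows the divergence below 1:

```python
# Inverting the examples above: an RR of 0.5 can correspond to an OR
# anywhere from about 0.49 (rare outcome) to about 0.17 (common outcome).

def odds(p):
    return p / (1 - p)

for p1, p2 in [(0.02, 0.01), (0.80, 0.40)]:
    rr = p2 / p1
    or_ = odds(p2) / odds(p1)
    print(f"{p1:.0%} -> {p2:.0%}: RR = {rr:.2f}, OR = {or_:.2f}")

# 2% -> 1%:   RR = 0.50, OR = 0.49
# 80% -> 40%: RR = 0.50, OR = 0.17
```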

The meaning of OR is not remotely intuitive, so expressing it in terms that suggest what we understand by a relative chance of A vs B is inappropriate — and depending on the specifics, can be extremely misleading.


Comments (3)


Roland B. Stark

April 27, 2015 at 12:41 pm

A good explanation, but I recommend you flag as incorrect the initial statement that an odds ratio is “The chance of an event occurring in one group compared to the chance of it occurring in another group.”

Dan Mayer

August 6, 2015 at 9:16 pm

I totally agree with Jerry’s discussion. The key elements that your readers need to know are:
1. The odds ratio is always further from 1.0 than the relative risk, sometimes a lot further.
2. Odds ratios are only useful in true case-control studies, which are done because the true incidence of the disease is very low.
3. Neither OR nor RR is useful to determine how good a therapy or how bad a risk is. The absolute risk, or the NNTB (number needed to treat to benefit) or NNTH (number needed to treat to harm), are the important numbers and can be used by physicians or journalists to truly inform patients.

Stephen R. Saklad, Pharm.D., BCPP

October 12, 2017 at 10:22 am

You can trivially convert odds ratios (OR) to the number needed to treat (NNT) using any of several online calculators, such as https://ebm-tools.knowledgetranslation.net/calculator/converter/.
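For readers curious what such a calculator does under the hood, here is a rough sketch of the standard conversion. Note that it requires an assumed baseline (control) event rate as an input, which is why an OR alone cannot give you an NNT; the 10% figure below is purely illustrative.

```python
# Rough sketch of an OR-to-NNT conversion: convert the control risk to odds,
# apply the OR, convert back to risk, and invert the absolute risk difference.

def nnt_from_or(odds_ratio, control_event_rate):
    cer = control_event_rate
    eer = (odds_ratio * cer) / (1 - cer + odds_ratio * cer)  # experimental rate
    arr = abs(cer - eer)  # absolute risk difference
    return 1 / arr

print(f"{nnt_from_or(2.25, 0.10):.0f}")  # 10, matching the 10% -> 20% example
```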