When a journalist asked me a question about statistical significance recently, it opened my eyes to how little attention I’ve given the topic on this site. And as I started looking around, I found that there are some gems to guide understanding, but they’re not widely recognized.
Let’s look at a news example first.
9 years ago – I can’t believe it’s been that long – the Washington Post health section turned to Dartmouth’s Lisa Schwartz, Steve Woloshin and Gil Welch for a Healthy Skepticism column, “Fat or Fiction? Is There a Link Between Dietary Fat and Cancer Risk? Why Two Big Studies Reached Different Conclusions.”
It reflected on an “apparent flip-flop” in recent news about low-fat diet and breast cancer. One month, a front page Post headline read, “Low-Fat Diet’s Benefit Rejected: Study Finds No Drop in Risk for Disease.” But less than a year before, a headline sent a different message: “Study of Breast Cancer Patients Finds Benefit in Low-Fat Diet.”
The article addressed many concepts I can’t do justice to in this short blog post. And no need to, since you can read it yourself at the link above. But I draw your attention to this excerpt:
“Based on the size of the study groups and the number of cancers in each, the p value communicates how often you would expect to see an effect this big simply as a result of chance. By convention, scientists say p values below 5 percent are “statistically significant” — meaning not likely attributable to chance. And p values of 5 percent and higher are considered statistical noise (that is, likely due to chance).
The p values for the effect of low-fat diet on breast cancer in the two studies were quite similar. For women with breast cancer, the p value was 3 percent. For women without breast cancer, the p value was 7 percent.
So even though, by convention, one finding is called “statistically significant” and the other “not-significant,” we would say that the statistics of the two studies are not that different: Both are close to the conventional cutoff point of 5 percent. Since the p values are actually quite close, we would argue that the role of chance was about the same. That is, if you believe one is real, you should probably believe the other is real.”
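The excerpt's point can be made concrete with a little arithmetic. The sketch below uses a standard pooled two-proportion z-test with made-up counts (these are NOT the actual study data; the trial sizes and event counts are hypothetical, chosen only so the two p-values land near 3 and 7 percent, mirroring the article's numbers):

```python
import math

def two_proportion_p(x1, n1, x2, n2):
    """Two-sided p-value for a pooled two-proportion z-test."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided tail probability under the standard normal
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical counts: cancers per 1,000 women in the low-fat
# vs. control arm of two imaginary trials with near-identical effects.
p_a = two_proportion_p(50, 1000, 73, 1000)   # comes out near 0.03
p_b = two_proportion_p(50, 1000, 69, 1000)   # comes out near 0.07
print(f"trial A: p = {p_a:.3f} -> 'statistically significant'")
print(f"trial B: p = {p_b:.3f} -> 'not significant'")
```

Four fewer cancers in the control arm is all it takes to push nearly the same effect from one side of the 5 percent cutoff to the other, which is exactly why the Dartmouth authors argue the two studies' statistics "are not that different."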
Read the article’s “Research Basics: Accounting for Chance” sidebar for an explanation of how two p-values on opposite sides of the cutoff can nonetheless be very close.
In part 3 of this series, learn from top biostatistician Dr. Donald Berry of MD Anderson Cancer Center, who wrote:
“Much of the world acts as though statistical significance implies truth, which is not even approximately correct.”
You’ll want to see what else he has to say.