Anyone who follows our work with any regularity knows that we write frequently about the limitations of observational research — and that media messages about such work often inappropriately imply that association equals causation. Today’s guest post is by Reijo Laatikainen, who emphasizes that there are problems with studies even at the pinnacle of the evidence pyramid, and that randomized controlled trials must also be analyzed and interpreted very carefully. Reijo is a dietitian with appointments at the Docrates Cancer Center & Aava Medical Center in Helsinki, Finland. He blogs (mostly in Finnish) at pronutrionist.net and tweets as @pronutritionist.
Observational studies, particularly prospective cohort studies, are the backbone of nutrition science. Many recommendations in dietary guidelines are based on these types of studies. Lately, prospective cohort studies and consequently dietary guidelines have been subjected to criticism in the media. The critics sometimes suggest that randomized controlled trials (RCTs) are the only type of studies that are worth considering seriously, as they provide the “gold standard” of evidence.
Randomized studies do indeed sit at the top of the evidence pyramid, and data from high-quality RCTs offers the definitive word on many questions in health and medicine. But high-quality RCTs are rare, especially in the field of diet and nutrition; often we’re presented with data from lesser-quality RCTs that are subject to critical weaknesses and limitations — just like those observational studies the media are increasingly concerned about. In this post I explain why RCTs can be flawed — sometimes fatally so — and why they may never provide the answers we crave for important questions in the field of nutrition.
First some important background: Researchers who conduct studies are typically looking to find that their intervention (whether it’s a particular type of diet or a new medicine) had some effect on the study participants. Studies with positive results are more likely to get published in authoritative journals. And publication in authoritative journals leads to funding, prestige, and career advancement for researchers. Accordingly, researchers face pressure to design their studies in a way that increases the likelihood of observing a positive result.
A 2013 study of the so-called “healthy Nordic diet” may illustrate how this pressure can affect study design and subsequent media coverage of randomized trials in nutrition. In the study, researchers assigned one group of participants (the “Nordic diet” group) to consume a variety of healthy foods such as rye and other whole grains, salmon, canola oil, and berries for 18 to 24 weeks. In addition, participants on the Nordic diet were instructed to limit sugary dairy and cereal products and sweetened soda, to include 350 grams of fruit and vegetables (on top of 150 grams of berries), and optionally to add nuts and seeds to their diet. By contrast, the control group in this study was instructed to consume an especially unhealthy diet — one that was high in refined carbohydrates (e.g. white bread), butter, and meat — and they were offered no advice at all about healthy eating. Participants in the control group ended up eating little in the way of fiber, unsaturated fat, antioxidants, or other healthful nutrients. In fact, it’s very likely that the control group’s diet was actually worse than the average diet in Nordic countries, making the “healthy Nordic diet” look good by comparison. If you scrutinize the data tables in the study, it’s clear that the anti-inflammatory benefits observed with the Nordic diet were attributable mainly to a deterioration in the control group rather than to any improvement in the Nordic diet group.
Despite the control group’s clear nutritional downgrade from what is typical in Nordic countries, their dietary regimen was referred to as the “regular diet” in many news stories about the study — a misleading description. Moreover, the Nordic diet was depicted as being as good as, or even better than, the traditional Mediterranean diet in many such stories. Such claims are difficult to justify based on a study whose control group ate so poorly. And yet they may gain wider acceptance than warranted because of the perceived strength of the randomized trial design.
Adherence is another issue that can severely limit the conclusions of randomized trials, especially those dealing with weight loss. In a 2013 study, for example, participants were randomized to follow either a low-fat diet or a high-protein diet for two years to assess weight loss effects. Despite extensive counseling, participants in the high-protein group achieved only a 3-gram difference in daily protein intake compared with the low-fat group at the end of the 2-year study period. The original target had been a difference of approximately 64 grams. Is it any surprise that there was no difference in weight loss or metabolic outcomes at the end of the study, when participants achieved only 4.6% of the targeted change?
Certainly no one would expect any big results from a drug study in which participants took just 4.6% of the prescribed medication. But at least one story reported that this diet study showed the two approaches were “equally effective.” Is that really what the study tells us? With such limited adherence to the diet, it seems foolish to draw any firm conclusion as to the effectiveness of either approach — even if the evidence did come from a randomized controlled trial.
Yet another concern with nutrition trials is the difficulty of achieving a blinded setup — i.e. a situation where neither the researchers nor the participants know which group individual subjects have been assigned to. This is a particular problem in areas where subjective symptoms like pain and well-being are the main outcomes, because these symptoms are especially susceptible to influence by the placebo effect. For example, the low-FODMAP (Fermentable Oligosaccharides, Disaccharides, Monosaccharides, and Polyols) diet has become popular in the treatment of irritable bowel syndrome (IBS), and the concept is rather well supported by both randomized and observational studies. But in the era of the Internet, it is almost impossible to prevent participants from knowing whether they are on a low-FODMAP diet or a control diet. IBS patients are interested in dietary issues, and it’s easy for them to find out which foods belong to high- and low-FODMAP dietary plans. As participants in these types of studies cannot be blinded properly, it’s reasonable to think that the treatment benefits might be at least partially due to the placebo effect. But this inevitable placebo effect is seldom addressed in news stories about the low-FODMAP diet, many of which report uncritically that the diet can reduce symptoms in about 75% of IBS sufferers.
These are just a sampling of the many limitations that can affect the interpretation of randomized trials in nutrition. Generally speaking, randomized trials are expensive to conduct and therefore enroll smaller numbers of participants for shorter periods of time than observational studies. Randomized studies assign participants to interventions that they may not be able to maintain for long periods, whereas observational studies give us a portrait (limited though it may be when based on questionnaires) of what participants are actually eating over the course of many years. There are also ethical issues involved with randomized studies that don’t hamper researchers using an observational design. For example, there will probably never be a study assigning participants to a diet high vs. low in red meat for many years to see whether the low-meat diet prevents colon cancer. To protect the patients in such a study, the researchers would be obligated to perform regular colonoscopies, which would identify and eliminate most polyps before they had a chance to become cancerous — making the study an exercise in futility. But through observational research, we may get a better sense as to whether people who actually eat such diets do, in fact, have differing rates of colon cancer diagnoses as they get older.
In short, there is no magical study design that overcomes all limitations and obstacles or which replaces the need for critical thinking when analyzing nutrition research. Journalists would do well to remember that when reporting on any type of nutrition study — from the lowliest case series to the largest randomized trial.
The author gratefully acknowledges substantial editorial input from Kevin Lomangino on this post.