This story stood out in comparison with a competing ABC News story by doing a nice job of explaining the basic outlines of what the researchers did in this study, quantifying the findings in an accessible way, and at least nodding to the fact that someone would need expert supervision to perform this test accurately. It could have benefited from an explanation of costs, some independent expertise, a discussion of the risks of unnecessary screening, and an acknowledgment of the study’s limitations.
This study proposes a new way to quantify health risks using information currently available from treadmill stress tests. The proposed “FIT treadmill score” was derived using previously collected data on 58,000 patients who underwent stress tests for a variety of reasons. The authors then assessed whether the patients died over an average of 10 years of follow-up. The key findings are that a person’s fitness — their maximum exertion level and the ability to achieve maximum predicted heart rate — along with age and sex were the best predictors of future death. However, several important caveats aren’t clear from the two stories we reviewed about this study. First, the study had a number of weaknesses and limitations that we discuss below in the review; additional research will be needed to see if these results hold up with a more rigorous study design. It’s also not clear whether this test will actually lead individuals to make changes that will reduce their risk of premature death. It’s even conceivable that the test could cause harm — another issue that we tackle below in the review. Like many other seemingly “simple” tests, the FIT treadmill score gets a lot more complicated when you start asking tough questions — something we encourage all journalists to do when covering stories like this.
The story alluded to the fact that patients would have to pay to have a test done correctly. Even better would have been an actual cost for the test, or range of possible costs.
This story did a better job than a competing ABC News story in explaining what the test found. It broke down the different ranges of scores and the corresponding risk of premature death. A good next step would have been to explain how many people fell into each category. It’s unclear from most of the coverage of this study just how many people — of those who died — had a particular score with this new test.
There were many stories about this study, and we could not find one that mentioned harms. This is unfortunate because screening tests do produce false positives and people do make choices based on those results. As a clinician in a New York Times piece about the relative weakness of treadmill tests as a predictor of health explained, “In my own practice I’ve seen people who thought they shouldn’t be exercising anymore because someone put them on a treadmill and got an abnormal test result when in fact there was nothing wrong with them.”
Providing this information could also be harmful in other ways. It might reassure someone that they’re going to live a long time, leading them to forgo the things they’ve previously done to stay healthy. Or for those at increased risk of dying, it might make them more fatalistic and less likely to take steps to improve their health.
This story noted that the findings were published in a journal and provided a good description of how the study was conducted. A competing story in ABC News, for example, said that the study “looked at standard treadmill stress test results for more than 58,000 subjects, ages 18 to 96.” That could lead people to believe that 58,000 people were enrolled in a study and followed over time. The LA Times got it right, explaining that this study was not conducted in real time. Instead, the story explains that “The study plumbed the records of 58,020 patients in the Detroit area who were referred for a stress test because they reported chest pain, shortness of breath, palpitations or dizziness, among other things. Researchers checked the medical records for a median of 10 years after each stress test to see if a death — of any cause — was recorded. A total of 6,456 of these patients, who ranged in age from 18 to 96, died.” We would have liked to see a similarly detailed discussion of the study’s limitations. Moreover, we wish one of these stories had pointed out that there’s no evidence that knowing this score will change anything about a person’s health habits or their risk of dying.
This story did not engage in disease mongering.
The story quotes only one expert, and that expert is an author of the study. An expert independent of the study would be in a better position to evaluate the claims objectively.
The story at least mentions the existence of other alternatives for risk assessment, such as the Framingham Risk Score. But it doesn’t really explain how this test differs from those other risk measures. We’ll give the story the benefit of the doubt here.
We have mixed feelings about this one. On the one hand, the story says, “Do-it-yourself stress testing is probably not very reliable, since a physician or sports physiologist needs to be around to decide when to call a halt to the test (and therefore what maximum a test-taker has achieved).” But then it encourages people to try to develop their own score at the gym or at home and gives them the formula that the researchers used. It’s great to give readers all the information, but we would have preferred that the story emphasize that the validity of the results stems from how these tests were conducted in a clinical setting.
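For readers wondering what the do-it-yourself calculation described above would actually involve, here is a minimal sketch. It assumes the commonly reported form of the FIT treadmill score (percent of maximum predicted heart rate achieved, plus 12 times the METs reached, minus 4 times age, plus 43 for women); the function name and parameters are illustrative, and anyone using the score should verify the formula against the published study.

```python
def fit_treadmill_score(percent_max_hr, mets, age, is_female):
    """Illustrative sketch of the FIT treadmill score calculation.

    Assumed formula (verify against the published paper):
        score = %MPHR + 12*METs - 4*age + (43 if female else 0)
    Higher scores correspond to lower estimated risk of death.
    """
    return percent_max_hr + 12 * mets - 4 * age + (43 if is_female else 0)

# Hypothetical example: a 45-year-old man who reached 100% of his
# predicted maximum heart rate at a peak workload of 10 METs:
print(fit_treadmill_score(100, 10, 45, False))  # 100 + 120 - 180 = 40
```

Note that the inputs themselves (maximum predicted heart rate achieved, METs at peak exertion) are exactly the quantities that, as the story concedes, require a supervised clinical stress test to measure reliably, which is why the arithmetic being simple doesn’t make the test do-it-yourself.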
The story nods in the direction of novelty when it quotes a researcher who says, “The notion that being in good physical shape portends lower death risk is by no means new, but we wanted to quantify that risk precisely by age, gender and fitness level, and do so with an elegantly simple equation that requires no additional fancy testing beyond the standard stress test.” We wish the story had more thoroughly examined why exactly this is new, but we’ll again give the benefit of the doubt here.
The story uses a quote from the news release about the study, but doesn’t alert readers to that fact. This gives the impression that the researcher was interviewed, when that doesn’t seem to be the case.