How Four Stories Fared in Covering Alzheimer's Blood Test Study

Last week many news organizations reported on research into a blood test that may help identify Alzheimer’s disease. We reviewed four stories on that research:

San Francisco Chronicle

ABC’s Good Morning America 

ABC Evening News

CBS Evening News 

All four stories were rated as unsatisfactory for discussing the quality of the evidence.

Three out of four were unsatisfactory in discussing potential benefits and potential harms, in their use of sources, and in their discussion of existing options beyond this new test.

Here, specifically, is some of what our reviewers said about how the stories could have been better.

How well did the stories discuss the quality of the evidence?

San Francisco Chronicle: Mixed views on this criterion. The story did a good job of providing actual numbers (e.g., the test was positive for Alzheimer’s disease in 38 out of 42 patients…). However, it offers little discussion of the study’s specific limitations. For example, this case-control type of patient selection almost always overestimates test accuracy, so the test is likely to perform worse in real-world populations.
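To see why a result from such a small group deserves caution, it helps to put an uncertainty range around it. The sketch below is our own illustration, not something from the study or the story: it takes the reported 38-of-42 figure and computes a standard Wilson score confidence interval to show how wide the plausible range really is.

```python
import math

# Reported result: the test was positive in 38 of 42 patients known to
# have Alzheimer's disease.
positives, n = 38, 42
sensitivity = positives / n  # ~0.905 -- the "90% accurate" figure

# Wilson score 95% interval: a standard way to show how uncertain a
# proportion estimated from a small sample is (z = 1.96 for 95%).
z = 1.96
denom = 1 + z**2 / n
center = (sensitivity + z**2 / (2 * n)) / denom
half = (z / denom) * math.sqrt(
    sensitivity * (1 - sensitivity) / n + z**2 / (4 * n**2))
low, high = center - half, center + half
print(f"sensitivity ~ {sensitivity:.0%}, 95% CI roughly {low:.0%} to {high:.0%}")
```

With only 42 patients, the interval runs from roughly 78% to 96%: the true sensitivity could plausibly be well below the headline 90%.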

Good Morning America: Mixed review. The segment does a good job, particularly in a broadcast setting, of describing how the research was conducted. However, it didn’t discuss a key limitation: the small size of the study group. Nor did it note another specific limitation of this study: this case-control type of patient selection almost always overestimates test accuracy.

ABC Evening News: The story says that larger studies will be needed to confirm this one, pointing out an important limitation of the study. But it doesn’t discuss other important limitations of the new research. For example, to judge the accuracy of a new diagnostic screen, researchers must compare it against a “gold standard.” The gold standard in Alzheimer’s requires examination of brain tissue at autopsy. How can the researchers be sure the people deemed to have Alzheimer’s actually have the disease? If they can’t, then the predictive value of the test is uncertain. The segment also doesn’t mention that the results come from a case-control study, a design that is inherently susceptible to bias, notably in the selection of the comparison group, or controls.

CBS: The story failed to describe the population that was studied or how representative it may have been. In addition, it stated this was a study of 259 blood samples; in fact, the test was developed from those samples and then tested on a select sample of about 50.

How well did the stories address the potential harms from the test?

Reviewers commented that one story or another:

• failed to address the potential emotional and financial toll of a diagnosis on the patient and the patient’s family, as well as the insurance ramifications.

• failed to address the possibility of, or rate of, false-positive test results.

How well did the stories address the potential benefits from the test?

Reviewers commented that one story or another:

• failed to make it clear what one would do with advance knowledge that one was going to develop Alzheimer’s disease.

• failed to explain that what sounds like an extraordinarily powerful result – a 90% accurate test – came from a very small sample size.

• stated that the test identified Alzheimer’s in 90% of cases (a measure of the test’s “sensitivity”). But it did not state the test’s “specificity,” which reflects the rate of false-positive results.

• didn’t discuss whether the researchers compared the new test against the diagnostic gold standard: analysis of brain tissue at autopsy.

• implied that knowing about the disease early would allow treatment to benefit the patient, when the lack of proven treatments means this is not the case.
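The sensitivity/specificity point can be made concrete with a back-of-the-envelope calculation. The 90% sensitivity figure comes from the coverage; the specificity and prevalence values below are our own hypothetical choices, used only to illustrate how false positives can swamp true positives when a disease is relatively uncommon in the screened population.

```python
# Back-of-the-envelope: screen a hypothetical population of 10,000.
# Sensitivity (90%) is the figure reported in the coverage; specificity
# (90%) and prevalence (5%) are ASSUMED here purely for illustration.
population = 10_000
prevalence = 0.05    # assumed: 5% truly have the disease
sensitivity = 0.90   # reported figure
specificity = 0.90   # assumed -- the stories never reported it

sick = population * prevalence                 # 500 people truly have it
healthy = population - sick                    # 9,500 do not
true_positives = sick * sensitivity            # 450 correctly flagged
false_positives = healthy * (1 - specificity)  # 950 wrongly flagged

# Positive predictive value: of all positive results, how many are real?
ppv = true_positives / (true_positives + false_positives)
print(f"Positive predictive value: {ppv:.0%}")
```

Under these assumptions, about two-thirds of positive results would be false positives, which is why reporting sensitivity without specificity paints an incomplete picture.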

How well did the stories seek independent sources?

Reviewers commented that one story or another:

• failed to disclose some obvious conflicts of interest. Four of the authors have commercial ties to the company that developed the test.

• included person-on-the-street interviews that didn’t add anything. It’s clear none of these people was well informed enough to have a meaningful opinion.

• quoted someone from the company developing the test, and quoted only one “expert,” who commented not on the test itself but only on the subtlety of the disease.

Three of four stories did not address costs, and that’s understandable given the early stage of the research. But ABC quoted an investigator who said that the test is “cheap” and “inexpensive.” The story did not cite a dollar amount, nor did it compare the cost of the new test to that of current testing. Even if the cost of an individual test can be predicted, there will doubtless be high additional “downstream” costs as many people at low risk of Alzheimer’s request low-yield blood tests. So the “cheap” and “inexpensive” comment may not give a complete picture of true costs.

We hope you find these cross-media comparisons helpful. For us, they reaffirm that our evaluation criteria can serve as important reminders for journalists about the kind of information readers and viewers need in such stories.
