Americans and their news media are in love with tests that tell us about medical problems we already have, as well as ones we might one day develop. These tests are often assumed to be beneficial. This broadcast, about a blood test that may predict the development of Alzheimer’s disease, immediately challenges this assumption. Given that there is no cure for the disease, it asks, “Would you really want to know you have it?” But by reporting only the “accuracy” of the test, the story overlooks a potentially devastating harm of testing. What happens if the test says you’ll develop Alzheimer’s, and it’s wrong? The researchers report that 6 out of 50 people who did not develop Alzheimer’s tested positive for the disease. In addition, they say the test wrongly returned a positive result in 1 out of 11 cases that showed no evidence of Alzheimer’s at autopsy. If these numbers hold up in future studies, as many as 12 out of every 100 people tested could be wrongly told they will develop Alzheimer’s. In addition to the potential emotional and financial toll of an Alzheimer’s diagnosis, these people might also be vulnerable to experimental treatments for a disease they don’t have. The broadcast does not explore this troubling scenario.

The story falls short in other areas as well. It does not mention that any new diagnostic test should be compared against the gold standard (in this case, the examination of brain tissue at autopsy) and that the predictive value of the current blood test is based on a less reliable comparison. The broadcast neglects to note that four of the researchers have financial ties to the company that is developing the test. It does not mention that the research was published as a letter, not an original research paper. It overlooks other tests currently in use or being studied. And its discussion of costs is incomplete.
Fortunately, the story also makes it clear that we have plenty of time to sort out these issues: The new blood test is “very preliminary” and may require another 5 years of research before it will be ready for wide use.
The broadcast cites a doctor from the Mayo Clinic who calls the test “cheap” and “inexpensive.” The story does not cite a dollar amount, nor does it compare the cost of the new test with that of current testing. Even if the cost of an individual test can be predicted, there would likely be substantial additional “downstream” costs if large numbers of people at low risk of Alzheimer’s request this low-yield blood test.
The story states that the test identified Alzheimer’s in 90% of cases (a measure of the test’s “sensitivity”). But it doesn’t state the test’s “specificity”, which determines the rate of false-positive results. This is vital information when weighing the usefulness and potential harms of any diagnostic test. And though the story notes that larger studies will be needed, it fails to say how many patients the researchers studied. Finally, the story doesn’t discuss whether the researchers compared the new test against the diagnostic gold standard: analysis of brain tissue at autopsy.
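To make the sensitivity/specificity distinction concrete, here is a minimal sketch in Python. The specificity figures use the counts reported elsewhere in this review (6 of 50 people without the disease tested positive); the counts passed to `sensitivity` in the usage example are purely hypothetical, chosen to match the reported 90% figure.

```python
def sensitivity(true_pos, false_neg):
    """Fraction of people WITH the disease whom the test correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Fraction of people WITHOUT the disease whom the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# From the study as reported: 6 of 50 people who never developed
# Alzheimer's tested positive anyway, so 44 were correctly cleared.
spec = specificity(true_neg=44, false_pos=6)
print(f"specificity: {spec:.0%}")                  # 88%
print(f"false-positive rate: {1 - spec:.0%}")      # 12%

# Hypothetical counts consistent with the reported 90% sensitivity:
print(f"sensitivity: {sensitivity(true_pos=9, false_neg=1):.0%}")  # 90%
```

A story that reports only the 90% sensitivity tells you nothing about that 12% false-positive rate, which is exactly the harm this review highlights.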
Very early on, this segment raises a key question about the harms of testing for Alzheimer’s: Given that there is no cure for the disease, “Would you really want to know you have it?” But the story overlooks a potentially devastating harm of testing: what if the test is wrong? Such harm is almost always a possibility. The researchers report that 6 out of 50 people who did not have Alzheimer’s tested positive for the disease. If these numbers hold up in future studies, as many as 12 out of every 100 people tested could be wrongly told they will develop Alzheimer’s. In addition to the potential emotional and financial toll of an Alzheimer’s diagnosis, these people might also be vulnerable to experimental treatments for a disease they don’t have. The broadcast does not explore this troubling scenario.
The story explains that the test was 90% accurate in “identifying which patients with mild memory loss would go on to develop [Alzheimer’s] within 2 to 6 years.” It also says that larger studies will be needed to confirm this one, pointing out an important limitation of the study. But the story doesn’t discuss other important limitations of the new research. For example, to judge the accuracy of a new diagnostic test, researchers must compare it against a “gold standard”. The gold standard in Alzheimer’s requires the examination of brain tissue at autopsy. How can the researchers be sure the people deemed to have Alzheimer’s actually have the disease? If they can’t, then the predictive value of the test is uncertain. The segment also doesn’t mention that the results come from a case-control study, a design that is inherently susceptible to bias, including bias in how the comparison group, or controls, is selected.
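The review’s point about uncertain predictive value can be illustrated with Bayes’ rule: even a test with 90% sensitivity and 88% specificity (the figures implied by the reported results) produces mostly false positives when few of the people tested actually have the disease. The prevalence values below are assumptions chosen for illustration, not data from the story or the study.

```python
def positive_predictive_value(sens, spec, prevalence):
    """Of all positive results, what fraction are true positives? (Bayes' rule)"""
    true_pos = sens * prevalence
    false_pos = (1 - spec) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Reported test characteristics: 90% sensitivity, 88% specificity.
# Prevalence levels are hypothetical, for illustration only.
for prev in (0.50, 0.10):
    ppv = positive_predictive_value(0.90, 0.88, prev)
    print(f"prevalence {prev:.0%}: a positive result is right {ppv:.0%} of the time")
```

In a study population where half the subjects develop the disease, a positive result is right about 88% of the time; screen a low-risk population where only 10% will develop it, and a positive result is right less than half the time. This is why “90% accurate” alone can badly mislead.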
The story does not exaggerate the effects or risks of Alzheimer’s, and very early raises a key question in the Alzheimer’s screening debate: “Would you really want to know you have it?”
The broadcast neglects to mention where the research was published (as a letter in the journal Nature Medicine). It also fails to note that four of the authors have financial ties to the company that is developing the test. Although the segment identifies the institutions of the two sources interviewed, it’s impossible to know whether they might have any potential conflicts of interest as well.
The story does not discuss other diagnostic tests for Alzheimer’s that are currently in use or being studied. These include spinal taps and imaging devices.
The story explains that a new blood test for Alzheimer’s disease is “very preliminary” and may require another 5 years of research before it will be commercially available.
The story makes it clear that this is a novel approach to Alzheimer’s diagnosis, though it doesn’t discuss how this differs from other approaches.
The story does not appear to rely solely or largely on a press release.