Overall, this story didn’t provide some of the key information readers need to make sense of this study on a new blood test for Down syndrome. We wanted to see cost information, hear some good independent commentary on the study and, above all, read some context about why this test is worth any ink at all. Testing 824 screenings, 11% of which were technical failures, is not yet a proof of concept for what would need to be millions of tests nationally and thousands per clinical lab if used in clinical practice. The story needed to do more to help readers understand that.
Screening for genetic anomalies during pregnancy can be stressful for expectant mothers. They dread that they will find out something is wrong with their baby, and they also dread that the test itself will cause them to miscarry. This story helps women understand the long odds that either of those scenarios will happen, and it goes part of the way toward helping them understand the importance of these new findings.
But the story does conflate the screening and diagnostic process beginning with the first paragraph when it links the new test with “prenatal testing” done by amniocentesis and chorionic villus sampling. This new test is not a diagnostic test, as those tests are. Rather, as the authors of the BMJ paper confirm, it is a “screen for fetal trisomy 21 among high risk pregnancies.”
This is a “hope for the future” level test that may be inching its way to market. It’s not available now and won’t be in the immediate future.
The story doesn’t discuss costs. By comparison, an expert in the WebMD story that we also reviewed was paraphrased as saying the test is “likely prohibitively expensive and time consuming.” Even in a short blog piece, we think this is an important issue.
The story should have presented the actual number of pregnancies in which Down syndrome was identified, as the WebMD story did. It did manage, though, to provide this one figure: “Using the new technology, researchers found they could rule out Down syndrome in 98% of cases, sparing the women from further testing.”
Like the WebMD story that we also reviewed, this story notes, “The blood test did not produce false negative results (showing the fetus did not have the disorder when, in fact, the condition was present).” It also provides that helpful parenthetical note, which was absent from the WebMD story. We wish the story also spoke about false positives. False positives are certainly part of the stress and harms from any screening test that need to be considered. In this case, it appears the false positives were low but not negligible.
The story does not adequately evaluate the quality of the evidence. Unlike the WebMD story that we also reviewed, which provided many outside experts couching the findings in appropriately cautious terms, this story does not provide any caveats. The study itself noted some weaknesses in its design that could have been included. For example, it says that the study was limited to women at high risk for Down syndrome, meaning the findings might not be broadly applicable, and that the tests were conducted on both freshly collected blood and archived blood samples, making it potentially hard to replicate the study protocol exactly in a clinical setting.
On this count, this story does better than the WebMD story that we also reviewed: it makes clear that Down syndrome is a rare condition. “Down syndrome occurs in about 1 in 800 births.” We appreciate, too, that the story notes that the invasive tests for Down syndrome “carry about a 1% risk of miscarriage.” That’s quite low, and it should have been mentioned in the WebMD story. It’s the third sentence in the study itself.
The story uses no independent sources.
When we review a blog piece, we consider the links it provides, which allow readers to dig deeper if they wish while getting a broad overview in the blog if that’s all they want. This was a blog piece, and we would have liked to see at least some links to outside sources.
The story does discuss alternatives to the blood test, but without all of the strong context provided in the WebMD story, and in the study itself, there are no meaningful comparisons made. There are multiple other tools being used at the screening level, and the study authors are not suggesting this is ready to replace diagnostic tests. Yet this blog piece ends with the suggestion that the new test “could be used after the combined test (blood test & ultrasound) or even as a first-tier test.”
Unlike the WebMD story that we also reviewed, this story makes it clear that this is a test in development and not available to patients yet. Technically, this type of screening is not “new technology,” as the piece implies. Multiplex analysis is not new, and its use for fetal and maternal cell DNA analysis is not new. This study is news because it is the largest and likely best study to date of this existing technology.
As with the WebMD story that we also reviewed, the true novelty of the test is not established. The story indicates that only 5% of all women would even need such a test, a figure straight out of the press release (and the study), but the story also says that the test might “greatly reduce the number of cases requiring an invasive test.” This leads readers to believe that this test is a broadly applicable breakthrough when it looks to be, at most, a benefit to a small group of women. Because the method is not novel, the story should have explained that a study on a bigger population is not a breakthrough but instead a very important first step. The WebMD story did a better job in this regard. This is a screening test very early in its evolution toward becoming a practical tool.
Not applicable because we can’t be sure of the extent to which the story was influenced solely or largely by a news release. The story presented very few facts beyond the information presented in the BMJ press release. No experts were quoted.