The enthusiasm this story effuses about a new brain scanning technology to diagnose autism may well be warranted. But it fails to give more than the most cursory of cautions about the early stage of this research and the tough road that lies ahead. The story makes it sound like we are a slam dunk away from seeing this test at the local radiology clinic. In reality, much larger studies with a much greater range of patients will be needed to tell whether the scan is effective enough to be widely used.
Autism is difficult to diagnose due to its many different causes and symptoms and because it often occurs with other similar neurodevelopmental disorders. A fast and accurate diagnostic brain scan would take much of the uncertainty out of the diagnostic process and would hopefully speed treatment and other support services to patients.
Many of the deficiencies discussed above in the "Evidence" criterion could also be applied to this criterion. Importantly, the story didn't do enough to call attention to the small and highly selected sample of patients, which likely biases the results towards greater accuracy than what would be observed in a larger and more diverse sample. There was also inadequate acknowledgment of the fact that this was a study of adults, whereas the real need is for a diagnostic tool for use in children. Yes, it was acknowledged that the children's studies hadn't been done, but the outcome of these not-yet-done studies was almost treated as a fait accompli. Lastly, the story omitted a subtle but important issue: whenever we diagnose a condition with new technology, the group of individuals identified as having the condition, in this case autism, will change. Some previously undiagnosed children will be labeled with autism, and some previously labeled with autism will be told that this is not the diagnosis. In addition, the severity of the disease may change (a more sensitive test will likely identify milder cases). Anytime the spectrum of disease changes, we have to ask ourselves: will the treatments produce the same benefits and harms? Since benefits are often smaller for individuals with milder disease while the adverse effects stay the same, the risk-benefit ratio often worsens with the more sensitive test.
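The arithmetic behind that last point can be made concrete. The sketch below uses entirely made-up benefit and harm values (none come from the study or the story) to show why pulling milder cases into the diagnosed group tends to worsen the risk-benefit ratio:

```python
# Illustrative only: the benefit and harm figures are invented to
# demonstrate the reasoning, not taken from the study under review.

harm = 1.0            # treatment side effects, same for everyone (arbitrary units)
benefit_severe = 10.0 # treatment benefit for a severe case (arbitrary units)
benefit_mild = 2.0    # smaller benefit for a milder case (arbitrary units)

# Risk-benefit ratio = harm divided by benefit; higher is worse.
ratio_severe = harm / benefit_severe
ratio_mild = harm / benefit_mild

print(f"Severe cases: {ratio_severe:.2f}")  # 0.10
print(f"Mild cases:   {ratio_mild:.2f}")    # 0.50 -- five times worse
```

Because the harm term is fixed while the benefit term shrinks, a more sensitive test that adds milder cases to the treated pool necessarily shifts the average ratio in the unfavorable direction.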
As discussed above, the story doesn’t attempt to quantify the harm of false-positive or false-negative diagnoses. It also doesn’t explore the potential harm of labeling someone with borderline symptoms as having autism. In addition, it has now become clear that MRI with contrast is associated with kidney damage. Physicians now check kidney function in all patients prior to an MRI and must weigh the need for the images vs the very small (but tangible) risk of kidney damage.
Readers would have been better served by more detail and less unbridled optimism in this piece. To wit: we learn that the scan’s accuracy was "so high" that the results were strongly significant despite the study’s small sample size. Instead of gushing over the test’s accuracy, however, we think the story should have spent more time discussing the limitations of such a small study and the additional research that needs to be conducted to confirm these results. We still don’t know whether the method can differentiate between autism and other neurodevelopmental conditions. And it is unclear whether the results are applicable beyond the small group of high-functioning adult males who were studied.
We also think the story failed to provide all of the necessary information about the study’s outcomes. To be effective, diagnostic tests must do more than simply identify individuals with a disease (which is the result this story focused on); they must also rule out those individuals who do not have a disease and do not require treatment. Failure to accomplish the latter will result in an excessively high false-positive rate and could result in much unnecessary treatment of healthy people. We recognize that this is a complex concept to convey in a consumer news story, but the story didn’t even mention the possibility that people without autism could be given an erroneous positive diagnosis.
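The false-positive problem the paragraph above describes is easy to quantify. The sketch below uses hypothetical sensitivity, specificity, and prevalence values (none reported in the story) to show how even a seemingly accurate test can produce mostly false positives when the condition is uncommon in the screened population:

```python
# Illustrative only: sensitivity, specificity, and prevalence here are
# assumed values, not figures from the study under review.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Fraction of positive test results that are true positives."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Suppose the scan were 90% sensitive and 80% specific, and autism
# affected about 1% of the people being screened.
ppv = positive_predictive_value(0.90, 0.80, 0.01)
print(f"PPV: {ppv:.1%}")  # about 4% -- the vast majority of positives are false
```

This is why a test's ability to rule out unaffected people matters so much: applied broadly, a scan with these characteristics would mislabel roughly 20 healthy people for every true case it found.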
Finally, we should call attention to the story’s speculation about applying the new technology to children. The story quotes one of the study investigators saying there is "no reason" why the test wouldn’t work in children, even though it has never been tested in this population. In fact, there are any number of good reasons why this test might be less effective in the developing brain than in a mature one. Someone involved in this story – the reporter? the editor? somebody, please! – should have known this and injected some skepticism into the discussion.
The story didn’t exaggerate the prevalence or impact of autism.
One of the sources for this story was not involved in the research and has no obvious conflicts of interest.
The story notes that autism is currently diagnosed via interviews and patient observation.
Although it is clear that this technology is not yet available to the public, the story predicts the new scan "could be ready for general use in a couple of years." Really? Who says? It seems unlikely that any knowledgeable researcher would make such a wildly optimistic prediction on the basis of a 40-person study. But the story treats this forecast as an established fact requiring no attribution, so we can’t tell where it came from. In any case, we think such pie-in-the-sky guesstimates are usually wrong and should be avoided in health journalism.
Researchers have long studied the brains of patients with autism using MRI scans. However, the ability to make a clinical diagnosis of autism using MRI would be noteworthy progress.
Since this story included two expert interviews, we can be sure it didn't rely on a news release.