There’s a lot of optimism in this story for a study that hasn’t even been funded yet. Read the full point-by-point review below.
While a reader could piece together the information in this article and form a picture of preliminary research, it might not be clear that there is, as yet, no evidence that this technique can be used to diagnose brain injury.
It’s a classic storyline: Potentially dangerous problem. Hope (or hype) for a new test to identify the problem early so that appropriate treatment can be started quickly. And yet, there are no data that the test can do what it says or that earlier identification leads to better outcomes.
It’s easy to talk about hopes for a world with less suffering and a new technology that could save lives. It’s much harder to talk about science’s slow and usually unsuccessful efforts to realize laboratory dreams. That balance is missing in this story.
Not applicable in this case, because it's too early to discuss costs. No specific test is mentioned, nor are we told which biomarkers would be included in the pivotal study of that test.
Once a test has been created, costs will be relevant (and that's a point the story could have made). If the test is exorbitantly expensive to perform, its utility will be limited. We could send a helicopter to rush everyone who has a skiing accident to a neuro-intensive care unit, but that would be prohibitively expensive.
It's worth noting that in practice few if any tests lower the cost of care. In routine use, they tend to be applied indiscriminately, including in situations where their use is unlikely to contribute to outcomes that lower downstream costs.
Although it’s too early to quantify the technique’s benefits, this story does that. We’re told that the study of 66 patients found that those with the most severe brain injury had 16 times the level of UCH-L1 as those without brain injury. What is the true clinical significance of that result?
The benefit of these biomarkers wouldn't be just that they can detect brain injury, but that they can do so better and more usefully than the methods we have now. Otherwise, how do they help us? What is the clinical meaning of the 16x difference for this one biomarker? What about the other biomarkers in the panel alluded to? Could the researchers use this result to distinguish mild injuries from severe ones, mild from normal, or hard-to-catch cases from obvious ones? (Dr. Hall suggests there are open questions about mild cases.)
We're given the analogy to troponin tests for heart attacks, which the story calls "more precise than having a patient describe symptoms." The implication is that this 16x difference is a vastly superior signal to the standard cues we use to diagnose severe brain injury. But the researchers didn't compare the new technique to current diagnostic ones. In short, the story does a poor job of describing how such a test would lead to improved care. The troponin analogy is instructive, though, because early identification of a heart attack leads to treatments that have been shown to be lifesaving. A similar evidence base would be needed for biomarker identification of brain injuries.
That's a big problem with reporting on laboratory research as if it were clinical research: we don't have hard outcomes to look at, only surrogate ones. The true benefits would lie in the test's sensitivity, specificity, and speed, and in its effects on actual people's lives compared with the standard approach.
It's too early to quantify harms, as it's not clear there is a physical test yet, let alone a final roster of biomarkers.
You might say the potential harms of a blood test are low, but what about the odds that the test will screw up? Not all tests are created equal, and some test results can be equivocal, inconsistent, or wrong. That's why the test's reliability needs to be studied in the DOD trial. We don't know how many people with severe brain damage will slip past this test (its sensitivity) or how many healthy people it will wrongly flag (its specificity). Poor sensitivity or specificity causes harm when warning signs are missed and when people get treatments or follow-up tests, sometimes invasive ones, that they don't need.
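To make those terms concrete, here is a minimal sketch, using entirely hypothetical numbers (nothing here comes from the Banyan research or any real trial), of how sensitivity and specificity would be computed from a validation study:

```python
# Hypothetical validation results for a brain-injury blood test.
# None of these numbers come from the Banyan research; they exist
# only to illustrate what sensitivity and specificity measure.
true_positives = 90   # injured patients the test correctly flags
false_negatives = 10  # injured patients the test misses
true_negatives = 80   # uninjured patients the test correctly clears
false_positives = 20  # uninjured patients the test wrongly flags

# Sensitivity: of all truly injured patients, what share does the test catch?
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: of all uninjured patients, what share does the test clear?
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.0%}")  # 90%: 1 in 10 injuries missed
print(f"Specificity: {specificity:.0%}")  # 80%: 1 in 5 healthy people flagged
```

Even respectable-looking numbers can mislead in practice: if severe, hard-to-catch brain injury is rare among the people tested, a test with 80% specificity would produce many false alarms for every true case it finds.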
On a fact-by-fact basis, the article makes several good points about the evidence. The major, "first-of-its-kind" study of 1,000 patients hasn't been funded yet. If it is funded, it would start next year and take 18 months from then to bear results. Most of the evidence is in rats. The study of 66 humans involved (in part) people already known to have brain injuries. FDA approval of the test is conditional on the completion (and funding) of the Defense Department study, which implies that the study is a late-stage, pivotal investigation of the technique in a clinical setting.
The trouble is, the article doesn't evaluate any of these points. Laboratory studies in rats and humans don't prove ideas; they generate them. We don't know whether some future blood test will reliably identify serious cases that are missed by conventional techniques, change treatment plans, save lives, reduce long-term rehabilitation, or prevent what happened to Natasha Richardson. No one can say this technique will be better than what doctors already do, which is why the FDA won't touch it yet. Not only do we not know the answer, we've yet to ask the question: none of the evidence mentioned has tested a diagnostic technique prospectively or compared it to another approach.
That context won't be clear to a lay reader, who won't see how remarkably premature the claims are.
The presumably independent source, Dr. Hall, explains that "the Banyan work" does show that these biomarkers can reliably track brain injury and are copious enough to be reliably detected. That statement, which seems to be about a panel of multiple biomarkers, comes before we learn about UCH-L1 and its particular evidence base. So we’re not sure whether Dr. Hall’s contextual explanation applies to the specific, quantitative evidence we’re given later, such as the 16-fold increase in UCH-L1. In other words, we’re not told what that number means.
The article is also vague about evidence from "more than 300 human brain-injury patients" regarding "certain biomarkers." On its face, in a sea of rat research, that could be the highest-quality aspect of this evidence, if we knew anything about it. We presume from the detail about the study of 66 people that the 300 all come from laboratory research. But which biomarkers? All UCH-L1? Several other biomarkers of brain injury have been studied in the past. Which biomarkers would the DOD study look at, and is it all about UCH-L1? Maybe no one knows, because the study hasn't even been funded, which is the point. The tone doesn't match the facts: pale inks were used to draw a very bold picture.
The narrative of the story is that many cases of brain injury go undiagnosed. If the laboratory research only involved people who were already diagnosed, what does that tell us about the technique’s ability to identify people who often aren’t currently diagnosed? In other words, does any of the evidence, including the UCH-L1 signal that’s quantified for us, address the problem posed by the story at the outset, the one that is blamed for Natasha Richardson’s death?
Of course, the story’s ebullient tone makes these caveats seem moot.
We think this article does engage in disease-mongering. The statistical sidebar cites some grim outcomes of head trauma. These numbers cloud the point of the story, which is about undiagnosed or slow-to-diagnose cases. How many of the 3 million people who need long-term care and the 52,000 who died had undiagnosed severe brain injuries? How many died on contact? How often does the current diagnostic approach fail? In other words, who exactly might this new technique benefit? While perhaps no one knows those answers, we think the overly general sidebar inflates the expectations balloon too much.
Another point that was easy to miss: the tests in humans measured UCH-L1 in spinal fluid, which requires a spinal tap. While that's something one might consider in the first scenario, it's unlikely to be practical in the second. One would expect the test characteristics of a CSF level to differ from those of a blood level.
One source works for the DOD, but it's not clear whether he's involved with funding the biomarker research, though the story, as written, implies that he is. Another source is the president of a company investigating the biomarkers. Dr. Hall is presumably an independent source not involved with the Banyan research or any competing research, but that's an assumption, and it should have been stated explicitly. Dr. Hall evaluates the Banyan work and describes an open question with this research. That cautionary point is, however, immediately countered by an anonymous attribution: "Other scientists say that Banyan's work may help doctors distinguish between different types of brain injury."
We give the story credit for using a seemingly independent source and identifying conflicts of interest, at least in broad strokes. There was room for improvement, though.
The story mentions the standard diagnostic techniques for brain injury and their limitations. It compares the new approach to the old one in a procedural way: blood tests and the DOD's dream of a portable battlefield device versus checking vital signs, pupils, and so on.
There is, as yet, no evidence that can compare the two approaches on the basis of their effectiveness in diagnosing brain injury, which is what ultimately matters most. The article therefore isn't missing such a comparison, but we would have liked to see that point made more clearly.
The article is clear that these tests aren't available in your local hospital and that the FDA hasn't approved a blood test. But the overall tone is so unchecked that it seems a foregone conclusion that the test will be available "soon."
If researchers are close, why does the article lead with the evidence from rats? Again, the article is clear that the technique is not available, but we think it needed context about how preliminary this research is. Many readers would be shocked to learn how many techniques with promising preliminary results in the lab fail when tested in clinical trials.
The story is also unclear about the pivotal DOD trial. Will it specifically study UCH-L1, the biomarker investigated by Banyan Labs, or is it about a panel of biomarkers? Are they all Banyan biomarkers? Is UCH-L1 even in there? We don’t know how much of the research discussed in this article will even be investigated in the future study and, thus, potentially available.
Though the story could have commented on other biomarkers besides the Banyan one(s), it's clear that this is a novel diagnostic test.
We don’t see evidence that the article relies on a news release.