Health News Review

Our Review Summary

There’s a lot of optimism in this story for a study that hasn’t even been funded yet. What we have now is a hypothesis, a pile of forward-looking statements, and a study that isn’t assured of funding, won’t start for a year, and will take 18 months after that to tell us what it found. How does that timeframe jibe with the first paragraph’s statement that biomarkers "may soon make it possible to pinpoint brain injuries with a simple blood test"? Read the full point-by-point review below.


Why This Matters

A reader could piece together from this article that the research is preliminary, but it might not be clear that there is as yet no evidence that this technique can be used to diagnose brain injury.

It’s a classic storyline: Potentially dangerous problem. Hope (or hype) for a new test to identify the problem early so that appropriate treatment can be started quickly. And yet, there are no data that the test can do what it says or that earlier identification leads to better outcomes.

It’s easy to talk about hopes for a world with less suffering and a new technology that could save lives. It’s much harder to talk about science’s slow and usually unsuccessful efforts to realize laboratory dreams. That balance is missing in this story.


Criteria

Not Applicable

Does the story adequately discuss the costs of the intervention?

Not Applicable

Not applicable in this case because it’s too early to discuss costs. There is no specific test mentioned, nor are we told which biomarkers would be included in the pivotal study of such a test.

Once a test has been created, costs will be relevant (and that’s a point the story could have made). If the test is exorbitantly expensive to perform, its utility would be limited. We could send a helicopter to rush everyone who has a skiing accident to a neuro-intensive care unit, but that would be prohibitively expensive.

It’s worth noting that in practice few if any tests lower the cost of care. In routine use, tests tend to be applied more indiscriminately, including in situations where they are unlikely to contribute to outcomes that lower downstream costs.

Not Satisfactory

Does the story adequately quantify the benefits of the treatment/test/product/procedure?

Not Satisfactory

Although it’s too early to quantify the technique’s benefits, this story tries anyway. We’re told that the study of 66 patients found that those with the most severe brain injuries had UCH-L1 levels 16 times those of patients without brain injury. What is the true clinical significance of that result?

The benefit of using biomarkers wouldn’t be just that they can detect brain injury but that they can do so better and more usefully than the methods we have now. Otherwise, how do they benefit us? What’s the clinical meaning of the 16x difference for this one biomarker? What about the other biomarkers in the panel the story alludes to? Could the researchers use this result to distinguish mild injuries from severe ones, or mild from normal, or hard-to-catch cases from obvious ones? (Dr. Hall suggests there are open questions about mild cases.)

We’re given an analogy to the troponin test for heart attacks, which is "more precise than having a patient describe symptoms." The implication is that this 16x difference is a hugely superior signal to the standard cues we use to diagnose severe brain injury. But the researchers didn’t compare the new technique to current diagnostic ones. In short, the story does a poor job of describing how such a test would lead to improved care. The troponin analogy is instructive, because early identification of a heart attack leads to treatments that have been shown to be lifesaving. A similar evidence base would be needed for biomarker identification of brain injuries.

That’s a big problem with reporting on laboratory research as if it were clinical research: we don’t have hard outcomes to look at, only surrogate ones. The true benefits would be the test’s sensitivity, specificity, and speed, and its effects on actual people’s lives compared with the standard approach.
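
To see why a group-level difference isn’t the same thing as diagnostic accuracy, here is a minimal sketch in Python. Every number in it is an assumption for illustration: the 16x gap mirrors the story’s figure, but the story tells us nothing about how spread out individual UCH-L1 levels are, and the spread is what decides whether the two groups overlap.

    import math
    import random

    random.seed(0)

    # Hypothetical log-normal biomarker levels (arbitrary units).
    # The 16x gap between group means mirrors the story's figure;
    # the spread (sigma) is a pure assumption.
    def sample(mean_level, n=10_000, sigma=1.5):
        return [random.lognormvariate(math.log(mean_level), sigma)
                for _ in range(n)]

    uninjured = sample(1.0)   # assumed mean level of 1
    injured   = sample(16.0)  # 16 times higher, per the story

    cutoff = 4.0              # an arbitrary diagnostic threshold
    sensitivity = sum(x > cutoff for x in injured) / len(injured)
    specificity = sum(x <= cutoff for x in uninjured) / len(uninjured)

    print(f"Sensitivity at cutoff {cutoff}: {sensitivity:.0%}")
    print(f"Specificity at cutoff {cutoff}: {specificity:.0%}")

With this assumed spread, sensitivity and specificity both come out around 82% despite the 16-fold difference in means; with a tighter spread the test would look excellent, and with a wider one, useless. The fold change alone can’t tell us which of those worlds we’re in.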

Not Applicable

Does the story adequately explain/quantify the harms of the intervention?

Not Applicable

It’s too early to quantify harms, as it’s not clear there’s a physical test yet, let alone a final roster of biomarkers.

You might say the potential harms of a blood test are low, but what about the odds that the test will get it wrong? Not all tests are created equal, and some test results can be equivocal, inconsistent, or wrong. That’s why the test’s reliability needs to be studied in the DOD trial. We don’t know how many people with severe brain damage will slip past this test (a question of its sensitivity) or how many uninjured people it will wrongly flag (its specificity). Poor sensitivity or specificity causes harm when warning signs are missed and when people get treatments or follow-up tests, sometimes invasive ones, that they don’t need.
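
To make those trade-offs concrete, here is a small back-of-the-envelope calculation in Python. The sensitivity, specificity, and prevalence figures are invented for illustration; nothing in the story tells us what they would actually be.

    # All figures below are assumptions, not data from the study.
    SENSITIVITY = 0.90   # fraction of true injuries the test flags
    SPECIFICITY = 0.95   # fraction of uninjured people the test clears
    PREVALENCE = 0.02    # fraction of those tested who truly have an injury
    N_TESTED = 10_000    # hypothetical number of people screened

    true_positives = N_TESTED * PREVALENCE * SENSITIVITY
    missed_injuries = N_TESTED * PREVALENCE * (1 - SENSITIVITY)
    false_alarms = N_TESTED * (1 - PREVALENCE) * (1 - SPECIFICITY)
    ppv = true_positives / (true_positives + false_alarms)

    print(f"Missed injuries: {missed_injuries:.0f}")        # 20
    print(f"False alarms: {false_alarms:.0f}")              # 490
    print(f"Chance a positive is real: {ppv:.0%}")          # 27%

Even with numbers that sound strong, a test applied to mostly uninjured people misses 20 of the 200 real injuries and raises 490 false alarms, so only about one positive result in four is genuine. That’s the kind of arithmetic a reader needs before concluding that a blood test is harmless.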

Not Satisfactory

Does the story seem to grasp the quality of the evidence?

Not Satisfactory

On a fact-by-fact basis, the article makes several good points about the evidence. The major, "first-of-its-kind" study of 1,000 patients hasn’t been funded yet. If it is funded, it would start next year and take 18 months from then to bear results. Most of the evidence is in rats. The study of 66 humans involved (in part) people already known to have brain injuries. FDA approval of the test is conditional on the funding and completion of the Defense Department study, which implies that the study is a late-stage, pivotal investigation of the technique in a clinical setting.

The trouble is, the article doesn’t evaluate any of these points. Laboratory studies in rats and humans don’t prove ideas; they generate them. We don’t know whether some future blood test will reliably identify serious cases that conventional techniques miss, change treatment plans, save lives, reduce long-term rehabilitation, or prevent what happened to Natasha Richardson. No one can say this technique will be better than what doctors already do, which is why the FDA won’t touch it yet. Not only do we not know the answer; we’ve yet to ask the question. None of the evidence mentioned has tested a diagnostic technique prospectively or compared it with another approach.

That context won’t be clear to a lay reader, who won’t see how remarkably premature the claims are.

The presumably independent source, Dr. Hall, explains that "the Banyan work" does show that these biomarkers can reliably track brain injury and are copious enough to be reliably detected. That statement, which seems to be about a panel of multiple biomarkers, comes before we learn about UCH-L1 and its particular evidence base. So we’re not sure whether Dr. Hall’s contextual explanation applies to the specific, quantitative evidence we’re given later, such as the 16-fold increase in UCH-L1. In other words, we’re not told what that number means.

The article is also vague about evidence from "more than 300 human brain-injury patients" regarding "certain biomarkers." On its face, in a sea of rat research, that could be the highest-quality part of this evidence, if we knew anything about it. We presume from the detail about the study of 66 people that all 300 were involved in laboratory research. But which biomarkers? All UCH-L1? Several other biomarkers of brain injury have been studied in the past. Which biomarkers would the DOD study look at? Is it all about UCH-L1? Maybe no one knows, because the study hasn’t even been funded, which is the point. The tone doesn’t match the facts: pale inks were used to draw a very bold picture.

The narrative of the story is that many cases of brain injury go undiagnosed. But if the laboratory research involved only people who were already diagnosed, what does that tell us about the technique’s ability to identify the people who currently aren’t? In other words, does any of the evidence, including the UCH-L1 signal that’s quantified for us, address the problem the story poses at the outset, the one blamed for Natasha Richardson’s death?

Of course, the story’s ebullient tone makes these caveats seem moot.

Not Satisfactory

Does the story commit disease-mongering?

Not Satisfactory

We think this article does engage in disease-mongering. The statistical sidebar cites some grim outcomes of head trauma. Those numbers cloud the point of the story, which is about undiagnosed or slow-to-diagnose cases. How many of the 3 million people who need long-term care and the 52,000 who died had undiagnosed severe brain injuries? How many died on impact? How often does the current diagnostic approach fail? In other words, who exactly might this new technique benefit? While perhaps no one knows those answers, we think the overly general sidebar inflates the expectations balloon too much.

Though not stated clearly in the article, here are the two clinical scenarios where such a test may be helpful:
  1. In someone who has suffered a serious brain injury — say, a concussion with prolonged loss of consciousness — you don’t need a test to say that the patient has had a brain injury. In theory, the test may help identify subclasses of individuals for specific treatments or monitoring. No data are provided that support such a use for this proposed test.
  2. In someone who has suffered what appears to be minor brain trauma — with brief loss of consciousness and now looking fine, or with no loss of consciousness at all — the test could theoretically identify the rare individuals who have had a more serious injury. Such individuals could be more carefully monitored or treated. Again, no data are provided that support such a use for this proposed test.

Another point that was easy to miss: the tests in humans measured UCH-L1 in spinal fluid. That requires a spinal tap. While that’s something one might consider in the first scenario, it’s unlikely to be practical in the second. One would also expect the test characteristics of a CSF level to differ from those of a blood level.

The only specific example provided in this story is that of Ms. Richardson, who suffered a torn blood vessel inside the skull after falling. Unfortunately, a ski helmet could have averted her tragic outcome, but it is unlikely that any such test could have helped.

Satisfactory

Does the story use independent sources and identify conflicts of interest?

Satisfactory

One source works for the DOD, but it’s not clear whether he’s involved with the funding of biomarker research; the story, as written, implies that he is. Another source is the president of a company investigating the biomarkers. Dr. Hall is presumably an independent source uninvolved with the Banyan research or any competing research, but that’s an assumption, and it should have been stated explicitly. Dr. Hall evaluates the Banyan work and describes an open question with this research. His cautionary point is, however, immediately countered by an anonymous attribution: "Other scientists say that Banyan’s work may help doctors distinguish between different types of brain injury."

We give the story credit for using a seemingly independent source and identifying conflicts of interest, at least in broad strokes. There was room for improvement, though.

Satisfactory

Does the story compare the new approach with existing alternatives?

Satisfactory

The story mentions some standard diagnostic techniques for brain injury and their limitations. It compares the new approach with the old ones in a procedural way, i.e., blood tests and the DOD’s dream of a portable battlefield device versus checking vital signs, pupils, and so on.

There is, as yet, no evidence comparing the two approaches on their effectiveness in diagnosing brain injury, which is what ultimately matters most. The article therefore isn’t missing such a comparison, but we would have liked to see that point made more clearly.

Not Satisfactory

Does the story establish the availability of the treatment/test/product/procedure?

Not Satisfactory

The article is clear that these tests aren’t available in your local hospital and that the FDA hasn’t approved a blood test. But the overall tone is so unchecked that it makes availability "soon" seem like a foregone conclusion:

  • "researchers are close" 
  • "[these biomarkers] could end up determining the future treatment" 
  • "I think this will revolutionize brain-injury care"
  • "the Army’s goal is to one day have a portable blood-test device"
  • "biomarkers could do the same for brain injury [as troponin did for heart attacks]"

If researchers are close, why does the article lead with the evidence from rats? Again, the article is clear that the technique is not available, but we think it needed context about how preliminary this research is. Many readers would be shocked to learn how many techniques with promising preliminary results in the lab fail when tested in clinical trials.

The story is also unclear about the pivotal DOD trial. Will it specifically study UCH-L1, the biomarker investigated by Banyan, or a panel of biomarkers? Are they all Banyan biomarkers? Is UCH-L1 even among them? We don’t know how much of the research discussed in this article will even be investigated in the future study and, thus, become potentially available.

In summary, this test has theoretical value, but proven, practical value in clinical settings is years off. To the extent this story reads as hype, it may push inappropriate use of the test before any real value is demonstrated.

Satisfactory

Does the story establish the true novelty of the approach?

Satisfactory

Though the story could have commented on other biomarkers besides the Banyan one(s), it makes clear that this is a novel diagnostic test.

Not Applicable

Does the story appear to rely solely or largely on a news release?

Not Applicable

We don’t see evidence that the article relies on a news release.

Total Score: 3 of 7 Satisfactory

