This story about a blood test for nasopharyngeal carcinoma focuses on the business angles of the research as much as on the potential health contributions of a new screening test. It clearly reports that there is much work to be done before such a test could be considered for clinical use. It passes along a calculation by the researchers that detecting one case would cost almost $30,000 based on the results of this study. However, the story also says that the test is more accurate than existing methods and “boosted patients’ chances of survival,” even though this study was not designed to test either of those claims. The story should have been clearer about how rare this type of cancer is. The participants in this study developed cancer at a rate many, many times higher than that of the general US population, which would greatly affect both the test’s effectiveness and its cost. The comment that the cancer “is prevalent” in southern China and Southeast Asia is misleading: it’s more prevalent there, but rates are still on the order of 10 to 20 per 100,000.
It would have been nice to see more context from an independent source.
Stories about medical studies should never proclaim findings that the studies were not designed to produce. This story includes several cautious descriptions of the study as a proof-of-concept and something that provides a “glimpse of evidence” that a blood test might be able to detect at least one rare type of cancer. However, the story states this test is more accurate than existing methods, despite the fact that the study did not directly compare the new test to standard clinical diagnosis. It also claims the test boosted survival, even though again there was no direct comparison to standard care. The relative accuracy and survival claims stick out from the rest of the reporting in a way that suggests they might have been added to juice up the impact of the story.
The story reports not only the cost of the blood test ($60), but the cost to detect one case (including follow-up), which the researchers calculated to be $28,600. This is excellent context. The story would have been better if it had noted that the people included in this study were at unusually high risk for developing this cancer. The 34 cases found after screening 20,000 people is almost 200 times the typical annual rate in the US and other countries (less than 1 per 100,000 people per year).
The ultimate measure of cost is cost-effectiveness (cost per life-year saved), which requires survival data. Such analyses can also be adjusted for quality of life.
While most of this story is careful to note that this test has yet to show clinical effectiveness, that caution is trampled by the claim that this blood test “boosted patients’ chances of survival,” something that this type of study cannot determine. The claim that this test is more accurate than existing methods is also premature, since the study did not directly compare the blood test to existing methods.
Finding more early-stage cancers is a necessary but hardly sufficient step in demonstrating benefit for a new screening test. Clinical trials are needed to address false positives, overdiagnosis, overtreatment, and survival.
The story reports that developers of this blood test still need to show low rates of false-positive and false-negative results. A low false-negative rate means fewer missed cases of serious disease; the trade-off, though, is a higher false-positive rate. All positive tests will need to be evaluated with a gold-standard diagnostic test, and the safety, acceptability, and cost of that gold-standard test will profoundly affect the uptake of the new screening test.
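Why the false-positive rate matters so much comes down to Bayes’ rule: at low prevalence, even a very specific test yields mostly false positives. The sensitivity and specificity figures below are assumptions chosen for illustration, not results from this study; only the two prevalence figures (34 per 20,000 in the study cohort, under 1 per 100,000 in the US) come from the story.

```python
# Hypothetical illustration of positive predictive value (PPV) at the two
# prevalence levels discussed in the review. Sensitivity and specificity
# are assumed values, NOT measured properties of this blood test.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Fraction of positive screens that are true cases (Bayes' rule)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

sens, spec = 0.97, 0.98   # assumed for illustration only

# Study-like high-risk cohort: 34 cases per 20,000 screened (~170/100,000)
ppv_study = positive_predictive_value(sens, spec, 34 / 20_000)

# General US population: under 1 case per 100,000 per year
ppv_us = positive_predictive_value(sens, spec, 1 / 100_000)

print(f"PPV in high-risk cohort: {ppv_study:.1%}")
print(f"PPV at US incidence:     {ppv_us:.2%}")
```

Under these assumptions, well over 90% of positive screens are false alarms even in the high-risk cohort, and virtually all are false alarms at US incidence. Every one of those people would need the gold-standard workup, which is why its safety and cost dominate the practicality of the screening test.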
While the story reports that this study was a “proof-of-concept” experiment, it then says the test boosted survival, something that can only be known after directly comparing the new test to existing methods. Making treatment “dramatically more effective” can be proven only with randomized trials. All screening tests that can detect cancers at an earlier stage will be associated with an apparent increase in survival time, but as explained in more detail below, this does not necessarily mean that patients with screen-detected cancers actually live longer than they would have otherwise. Efficacy can be established only with an RCT.
The story could have done a better job pointing out the many limitations of the work. For example, there was no follow-up testing for people who had negative blood test results. Even though the researchers surveyed participants a year after the testing, they didn’t produce any data on how many of the people with negative blood tests actually had cancer. What’s more, the researchers compared the survival rates of their patients to a different group of cancer patients from another study, which raises questions about how comparable the groups really are. There was also no discussion of the possibility that at least some of the tumors discovered through this screening might never have developed into dangerous cancers. And the story does not address lead-time bias: longer measured survival might be at least partly due to finding the tumor earlier in its natural course, rather than to any effect of treatment. The researchers themselves noted that the effect of lead-time bias can only really be assessed through a randomized, controlled trial.
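The lead-time effect can be shown with toy arithmetic. The ages below are hypothetical, not data from the study; the point is that earlier detection lengthens measured survival even when the date of death does not move.

```python
# Toy illustration of lead-time bias (hypothetical ages, not study data).
# Suppose a tumor would be diagnosed clinically at age 60 and the patient
# dies at age 63 regardless of treatment.

clinical_dx, death = 60, 63
survival_clinical = death - clinical_dx        # 3 years of measured survival

# Screening finds the same tumor two years earlier, but the date of death
# is unchanged -- measured "survival" grows with no real benefit.
screen_dx = clinical_dx - 2
survival_screened = death - screen_dx          # 5 years of measured survival

assert survival_screened - survival_clinical == 2   # pure lead time
print(f"Clinical diagnosis: {survival_clinical} years of survival")
print(f"Screen detection:   {survival_screened} years of survival")
```

Only a randomized trial, which compares death rates rather than survival measured from diagnosis, can separate this artifact from a genuine benefit of earlier treatment.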
We will rate the story satisfactory on this criterion because it is careful to note that this test is not ready for clinical use and that the study results apply to only one type of cancer. However, the story would have been better if it had pointed out just how rare this cancer is, less than one case per 100,000 people in the US per year. That is about 3,000 cases in the US per year — out of about 1.7 million cancers diagnosed annually.
And the comment that the cancer “is prevalent” in southern China and Southeast Asia may be misleading. While it does occur more often in that region, rates are still on the order of 10 to 20 per 100,000, which is hardly widespread.
The story squeaks by on this criterion because it includes a quote from an editorial in the journal that was written by an independent source. It would have been more useful to readers to include more comment and context from someone without financial ties to this study. The story is upfront about the financial interests of the researchers themselves and the company that owns the rights to their work, noting the keen interest in bringing this test to market.
Although the story reports that “nasopharyngeal carcinoma typically goes undetected until later on, when patients report symptoms such as recurring nosebleeds,” it does not tell readers anything about how often these cancers are found at an early stage. Readers are not given a sense of how patients typically fare with existing methods of diagnosis.
The story clearly reports that the test is still experimental and that developers have yet to show that it could be practical and effective in clinical practice.
The story highlights the desire for a new blood test that could be clinically useful.
The story doesn’t appear to rely on a news release. It includes quotes from a telephone interview with an executive at the company developing this test.