The story also glosses over some big, sticky issues that come up in early detection, like the "subjectively healthy" controls, the uncertain continuum between early cancerous signals ("cancer risk") and actual clinically significant cancers, and the early detection and treatment of "pre-" prostate and breast cancers. If we’re going to promote this hypothetical tool’s use in early detection, we really need to study its ability to identify, and the consequences of identifying, the different biological entities, of varying clinical significance, on the odd family tree we collectively call cancer.

This is the story of an idea, a hypothesis that is ready to be studied, not a sea change in how we detect cancer. If we’re lucky, it will lead to the development of a new technique that proves better than the ones we currently employ, but at best we’re years away from having it in a hospital.
See the Daily Mail’s coverage of the study for many more of the nuances, caveats, and appropriate tone that we asked for in this review, some of which were provided by Dr. Kuten himself, e.g.: "We need to carry out a lot more work on many more people in different stages of the cancer. We also need to find out the extent to which it can diagnose cancer. This will probably take at least two years. It will be several years at least before it is available for use."

But for precisely those reasons, the story gets a strike for passing along the unchallenged speculation that the hypothetical tool could be cheap. Unsound journalistic practice.
There’s almost no evaluation of the evidence. As we discussed in the "Availability" criterion above, this was a preliminary, retrospective, statistical analysis of data in a laboratory, not a clinical test of an actual diagnostic technique. Readers should’ve been told the difference.
Citing 177 people sounds impressive, but the number is meaningless without knowing what was done. (A hyperbolic example: consider the quality of evidence from a web poll with 177 respondents compared to a randomized, controlled, double-blind, multi-center clinical study in 177 people.) We’re told vaguely that more work is needed, but the sentence construction ("While more work is needed, the early success…") makes clinical success seem like a foregone conclusion. People would be shocked to learn how many techniques with "early success" ultimately fail in the real world where real lives are at stake. The researcher states "if we can confirm these initial results in large-scale studies," which is good of him, but the tone and unbounded optimism of the article leave readers with no sense that this preliminary evidence might not be confirmed.
In fact, the evidence in this lab study appears to come from an existing laboratory technique, gas chromatography/mass spectrometry, whose data were used to inform the development of the novel nanoarray, along with a comparison of the two techniques. That underscores how early the evidence is: the study was about the development and laboratory analysis of the technique, not clinical outcomes.
There’s another limitation in the evidence that’s worth pointing out. According to the abstract: "The healthy population was healthy according to subjective patient’s data." The study itself states that the cancer population had been diagnosed by conventional tests. That calls into question the relevance of the study to the article’s presentation of the approach as a way to detect cancer early, before it has symptoms and before it would be diagnosed otherwise, because we don’t know whether the control group also had early but asymptomatic or undiagnosed cancers.
Should a reporter be expected to critically evaluate all these questions when writing a six-paragraph summary of a study? We think that if the study is as preliminary as this one, and the technique as far from touching clinical practice as this one, then yes, we expect an article to do more than repeat the optimistic hopes of the investigators themselves without any independent evaluation. Otherwise, isn’t it just a press release from the investigators?
The story is vague and speculative about the advantages of the theoretical clinical test, such as the presumed uses and benefits in early detection. By being vague, it casts a wide net over an undefined population that the technology could presumably help. The story mentions the "four different common tumor types" of lung, breast, bowel and prostate. But not all cancers are equal. Not all pre-cancers are equal. There is controversy even with today’s screening methods about the benefits/harms of screening for some of these cancers in some populations. None of that nuance is even touched on. Should readers be worried about EVERYTHING? That’s disease-mongering.
Even without detailing how the hypothetical tool might compare to existing alternatives, or fill a need, the story at least could’ve told us that further testing is needed not just to repeat the research on a larger scale but also to COMPARE the approach with what we’re currently doing.
Reuters posted just one reader comment before closing the comment section. It read as follows, typos and all: "This is a terrific Leap in Teachnology and should be made available Worldwide at hospitals, clinics and Good Dr. Offices." Readers should not have been led to believe that a simple, proven breath test is at hand, ready to deploy to doctors. It’s years away if we’re lucky.
On the plus side, the story tells us that the study builds on earlier research from last year, and we give it the benefit of the doubt. On the minus side, it is vague about the novelty of breath testing for cancer, an approach that this group did not invent and that was not new even in its earlier study.