Nowhere do we hear how far along the research is. The main section doesn’t even allude to evidence, other than to state that more research is to be done, which is true of almost everything in medicine except limes for scurvy. As far as a reader can tell, the “new portable scanner” may have yet to leave a laboratory setting. All we know is that it’s been submitted to, and is a finalist for, an engineering & technology award. Being an award finalist is not the same as clinical research.
The article is quick to point out mammography’s low detection rates (sensitivity, perhaps?) in younger women, identifying the problem to be solved, but it never tells us the detection rates for the new tool. The data may very well be unknown. Which means the detection rates could be worse! But from the structure of this article, which focuses on technology rather than evidence, a reader could only conclude the new kid on the block is superior.
If we’re lucky, the new kid will make the world a better place. But filing a patent for a device is not the same level of proof as clinical research. People would be shocked to learn how many promising ideas from labs and napkins never survive clinical trials, how many prove to be worse for mankind than what we already do. Science is hard and complex. Last year’s uproar over the new mammography guidelines highlighted that there are important subtleties about screening for cancer, and that clear, balanced communication is crucial.
It may sound like we ask a lot of journalists. But really, asking one more question would’ve turned this story around: How far along is this idea? The answer leads us to look beyond the novelty, beyond the theoretical benefits, beyond the entangled claims of the institution and inventor who stand to benefit, and see, simply, whether this idea is better or worse or needs 4 more years in the oven before we can even ask.
Mammography isn’t perfect, and an improved screening technique for breast cancer could reduce pain, suffering, and death for many women. But that need doesn’t give journalists a license to hype.
We wouldn’t expect a discussion of cost implications for a technique still in its formative years. But since the article states that “costs are reduced,” we have to address it.
First of all, costs compared to what? Mammography? The other investigational RF or microwave techniques the article just mentioned?
Let’s assume the comparator is mammography. It may be that, if and once there’s an available product, costs for each screening visit can be reduced. But the total costs depend on how effective it is. Without knowing how good it is at picking up breast cancer (sensitivity) and excluding signals that aren’t breast cancer (specificity), we don’t know its role. It might be too unreliable to replace mammography. Poor sensitivity leads to missing cancers, which cost life and money. Poor specificity leads to more unnecessary biopsies and treatments; if indeed people would screen themselves at home, poor specificity also means more unnecessary doctor visits. Also, will clinic and home uses be covered by insurance?
As the independent source says, we don’t know how the new device will be used. Until trials determine whether it will replace mammography, supplement it, or sink out of memory, we can’t know the total costs. To say vaguely that “costs are reduced” may not, therefore, be correct.
The benefits aren’t quantified. Again, if it’s because the tool hasn’t been tested, we need to know that.
No harms are discussed. If it’s too early to tell the rates of false positives, we need to know that. The article draws a picture of a tool with no potential downsides.
The article does not address the existence, let alone the quality, of evidence. If there is no clinical evidence yet, we need to know that. Omitting this discussion paints a picture of a tool with the same level of proof behind it as tools like mammography, which have been put through numerous rigorous trials and lived to tell the tale. The way the article puts it, there may have been research in patients, or there may just be a prototype on a workbench and non-clinical data.
It’s not enough to say that a tool is novel, or faster, or provides real-time images, or can fit in a lunch box, or even (as the press release cheers) is a finalist for an engineering award. The tool may be promising, but only a sober evaluation of how well it works in human beings, checking its results, will tell us whether it will save more lives than we currently can or join the thousands of ideas that sounded good but didn’t pan out.
The article alludes to prior research with RF and microwave technologies in detecting breast cancer. So what do we already know about this approach? Were the prior techniques effective but hampered by lack of speed and portability?
While not a critique of the article, we’ll point out that, for this technique to be compared to mammography, it will likely need evidence that it improves survival. Part of the challenge in breast cancer screening is distinguishing between cancer and “pre-cancer” (or “cancer risk”). The word cancer actually refers to a large, strange zoo of different abnormalities. Some grow more slowly than others, and some never cause problems within a patient’s lifetime. That means it can be hard to know the danger of an early finding from screening. Thus, in addition to knowing the benefits and harms, like the rate of false positives, we’ll need to see whether this tool, used on a screening schedule, makes groups of people live longer. It may sound counterintuitive, but the answer isn’t always a clear yes. This uncertainty was one of the embers in last year’s firestorm over the new mammography guidelines.
The article cites statistics about mammography and breast cancer in younger women from the press release. They are within the range of key data in this field.
(As an aside, the article says the stats are from the research team. We can’t say whether the writer of the press release actually compiled them.)
An independent source was consulted. Talking to more of them, or giving more space to Ms. Rogers’s keen caveats about the uncertainty and context, instead of burying them like an appendix, might have reversed some of our unsatisfactory ratings.
The article is honest, at least, that it sourced a news release. Ideally it would have acknowledged that all the factoids and forward-looking upside in the article came from Dr. Wu and the university, those who presumably hold the patent on this tool and stand to profit financially from good press or success in the IET awards competition.
While it is briefly noted that more research and evidence are needed, we think the tone of the article, headlined “New portable scanner for breast cancer,” misrepresents how soon the product could be available. Any such tool would have to run the gauntlet of clinical trials and evidence evaluation. As far as we can tell, this tool may not have even entered the arena yet. Clarity about the research status is necessary to create a realistic context, to give readers a sense of the timetable for, and likelihood of, moving from the laboratory to the real world of hospitals or home screening.
A paragraph explains that the novelty of this technique lies in its speed and portability.
It seems that, save for the email from Carolyn Rogers, the article is based entirely on the press release from the University of Manchester. The content and tone are very similar, and every rating we’ve given the story applies to the press release as well.