This news release reports on a study of a computerized technology that combines imaging with digital analysis to help improve the screening of melanoma skin lesions.
The news release explains the specificity and sensitivity of the test. It also clearly discloses who funded the research. One downside is that it’s not until the final paragraph that we learn this is still experimental research that must be confirmed in larger studies. Earlier statements make it sound farther along, e.g., “researchers at The Rockefeller University have developed an automated technology that combines imaging with digital analysis and machine learning to help physicians detect melanoma at its early stages.”
A larger concern is how the release downplayed the test's high false-positive rate of 64%, saying this "approaches" the false-positive rate of manual detection by expert dermatologists, which studies put at about 32%.
Melanoma can be very difficult to diagnose, so if doctors could use a tool that helped better screen suspicious-looking moles, they might be able to improve the accuracy of diagnosis and save lives. This new approach has shown promise so far, but further research will be needed to validate it as an enhanced diagnostic tool, and that should be made clear upfront to the journalists and consumers who read the news release.
This technology seems to still be in the experimental stage.
The news release reports 98% sensitivity and 36% specificity for melanoma detection, a level it describes as approaching that of expert lesion diagnosis.
But the release could have noted that this still isn't very accurate: a 36% specificity means that 64% of those without melanoma will test positive (i.e., a false positive). That translates into a lot of unnecessary anxiety and follow-up testing to rule out suspicious growths.
The release says the test’s ability to diagnose normal moles was 36 percent, “approaching the levels achieved by expert dermatologists performing visual examinations of suspect moles under the microscope.”
But as noted above, that statement downplays the fact that the test will produce a very high number of false positives. A 36% specificity means that nearly two-thirds of non-malignant moles will test positive and require additional follow-up. A test that correctly identifies nearly all melanomas but also flags nearly two-thirds of normal growths as melanomas isn't all that useful.
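The arithmetic behind this criticism is simple: the false-positive rate is just the complement of specificity. A minimal sketch, using the figures from the release (98% sensitivity, 36% specificity); the cohort of 1,000 benign moles is a hypothetical illustration, not data from the study:

```python
def false_positive_rate(specificity: float) -> float:
    """The false-positive rate is the complement of specificity."""
    return 1.0 - specificity

sensitivity = 0.98  # fraction of true melanomas the test flags (from the release)
specificity = 0.36  # fraction of benign moles the test correctly clears (from the release)

fpr = false_positive_rate(specificity)
benign_moles = 1000                    # hypothetical screening cohort
flagged = round(benign_moles * fpr)    # benign moles incorrectly flagged

print(f"False-positive rate: {fpr:.0%}")                      # prints 64%
print(f"Benign moles flagged out of {benign_moles}: {flagged}")  # prints 640
```

In other words, for every 1,000 people without melanoma screened under these assumptions, roughly 640 would be told their mole looks suspicious, which is what makes the "approaching" comparison to a 68% specificity (32% false-positive rate) hard to defend.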
The investigators note that this research needs to be confirmed with larger studies, but not until the end of the release. More should have been included on this point, and higher up, so as not to mislead readers into thinking this product is coming to market soon.
And less needed to be said about potential applications, which are still speculative given the state of the research. For example, this quote is premature: "I think this technology could help detect the disease earlier, which could save lives, and avoid unnecessary biopsies too."
No obvious disease mongering is evident here.
We are told that “this work was supported in part by the National Institutes of Health and in part by the Paul and Irma Milstein Family Foundation and the American Skin Association.”
The release says “the ability of the test to correctly diagnose normal moles was 36 percent, approaching the levels achieved by expert dermatologists performing visual examinations of suspect moles under the microscope.”
That differs from what’s in the study, which reports “the visual examination of pigmented lesions by expert dermatologists using dermoscopy and following criteria such as the Menzies (or CASH[3]) method has yielded diagnostic accuracy as high as 98% sensitivity and 68% specificity in some studies.”
Sixty-eight percent specificity is much more accurate than 36%, so it's unclear how the test can be described as "approaching the levels" of accuracy of visual exams by expert dermatologists.
Some of the language used suggests the technology is mature, “Now, researchers at The Rockefeller University have developed an automated technology that combines imaging with digital analysis and machine learning to help physicians detect melanoma at its early stages.”
It is not until the final paragraph that the need for further research is noted.
The release establishes that this is a potential new way to augment melanoma detection.
The release does not use sensational language.
However, we did want to highlight this problematic quote: "The success of the Q-score in predicting melanoma is a marked improvement over competing technologies," says Daniel Gareau, first author of the report and instructor in clinical investigation in the Krueger laboratory.
We think that it is a bit premature to be this definitive. A recent review of competing technologies suggests that confocal scanning laser microscopy has a sensitivity of 88-98% and a specificity of 83-99%.