This is a story about the apparent underuse of two FDA-approved blood tests, ROMA and OVA1, that help clinicians decide whether surgery to investigate an abnormal ovarian growth should be performed by a gynecologic oncologist ("gyn-onc") trained in removing ovarian malignancies, rather than by an obstetrician-gynecologist or a general surgeon.
The gist of the story is that the tests have not been widely adopted since their approval, "largely because they haven't been recommended by the physician groups that write the care guidelines."
The story hit many important notes: it told readers the cost of these tests, and it included comments from independent sources. What is not clear is the extent to which clinical evidence supports increased use of the two tests. We're told they accurately picked up about 90 percent of cancer cases. But we're not told how that compares with standard methods of detection.
As the story notes, surviving ovarian cancer is an uphill battle; the disease is often deadly and caught too late. Any procedure that increases the probability of survival is worth careful consideration.
Costs are mentioned reasonably early in the story, and with specificity. This was good to see.
Benefits of these tests are touted, but not quantified. An officer of the National Ovarian Cancer Research Fund Alliance, an advocacy group, says the outcomes of women who have surgery from a gynecologic oncologist "are much better." But we don't know what that means. Is it longer survival?
Later the story explains that combining the two blood tests with assessment by a clinician “better determined the patients who should be referred to a [gynecological oncologist], identifying well over 90 percent of the women who actually had cancer.” (Note that this statistic reflects only referral to a gyn-onc, not patient survival.) But how does that compare to the standard assessment tools?
A 2016 study, described as the publication of "the first clinical utility data" showing that use of OVA1 led to referral of ovarian cancer patients to gynecologic oncologists, reports similar outcomes but is not mentioned in the story.
We’re told that experts fear that these tests could lead to unnecessary surgery or delayed diagnosis, and that’s sufficient for a Satisfactory rating.
The story could have delved a bit more into specifics: What is the evidence for these concerns? What are the false positive and false negative rates? These details are useful because the point of the story is that clinicians are not using these tests to try to improve outcomes for ovarian cancer patients. Are the possible drawbacks, or uncertainties driven by a lack of clinical data, contributing to these decisions?
There isn’t sufficient information on the evidence behind these tests. The FDA has approved ROMA and OVA1 for use, but the studies that led to that decision are not discussed.
Nor is there sufficient discussion of why the physician groups that write guidelines haven’t endorsed the tests. Their concerns are raised, but the evidence is not presented or analyzed.
A late diagnosis of ovarian cancer is far too common, and the outcome is grim.
At the same time, the anecdotes highlight the sad aspects of the disease without necessarily providing a relevant case in which the tests described would have helped any more than a basic blood test would have.
The companies responsible for developing and marketing these two tests are front and center in this story, with scientists affiliated with the companies clearly identified. At least two sources in the story—one of them a scientist affiliated with the National Comprehensive Cancer Network—appear to be independent of the companies.
The story makes it clear that no effective early detection screening test exists. Existing strategies—the CA125 blood test and vaginal ultrasound—are mentioned and then discounted, in part via a personal narrative of a young woman who received a late diagnosis of cancer. While data-driven specifics aren’t included on why standard methods aren’t sufficient, this is enough to rate Satisfactory.
A direct comparison of survival stemming from existing diagnostic tests relative to survival stemming from ROMA and OVA1 would have strengthened the story. If those numbers aren’t available, that should have been pointed out.
The story makes it clear that the two blood tests have been available since 2009.
The story mentions that the tests combine existing tests with a "proprietary" formula (essentially a risk prediction tool) to translate the results into a likelihood of cancer.
But the article should have discussed whether the tests' components, other than CA-125, have been used before.
This appears to be an enterprise story, and does not rely on a news release.