This news release about a trial that randomly assigned people with bladder cancer to either robot-assisted or open surgery buried the real news under a weak lead that garbles the findings. The real news is that, contrary to the assertions of many proponents of robot-assisted surgery, it is possible to randomize patients and get a valuable comparison, instead of relying on weaker study designs that can’t answer the fundamental question of how old and new techniques compare. The release is rated satisfactory on many criteria because key information was included in the body of the release, but the headline proclaiming that robot surgery is “as effective” as open surgery misses the larger point. The headline and lead also put a pro-robot spin on results that could just as accurately be summarized as robot surgery is “not clearly worse.” Also not explained: robot-assisted surgery takes longer and is more expensive.
The final sentence of the release, a quote from a researcher, should have been the lead: “It is important to conduct these trials before widespread adoption of technology, as has been the case with robotic prostatectomy (removal of the prostate).” We are bombarded with releases and news stories trumpeting results from studies of robot-assisted surgery that don’t actually compare the new technology to existing practice. These studies often lack any control group or use historical data that may not be truly comparable. The promoters of expensive new devices then claim that they are relying on the best available evidence. This trial shows better evidence can (and should) be gathered. And in this case, good quality evidence indicates little difference in the techniques, despite what all the big hospital billboards about robot surgery proclaim.
There is no discussion of cost in this release. The journal article says several institutions refused to provide their prices. Nevertheless, the high cost of robotic surgery is an important issue. The researchers wrote that they were unable to say whether those high costs might be offset by certain potential benefits (such as shorter hospital stays), but the release should have at least acknowledged the cost issue.
The release does report the percentage of patients in the robot-assisted and open surgery groups who were still alive with no signs of disease progression two years after their procedures: 72.3 percent of patients in the robotic surgery group compared with 71.6 percent in the open surgery group… and it notes the difference was not statistically significant. It also reports that the robot-assisted surgery patients checked out of the hospital after six days on average, compared with seven days for those in the open surgery group. The release reports only half as much blood loss in the robot surgery group, but does not report the amounts, which averaged 300 mL in the robot-assisted group vs. 700 mL in the open surgery group… a difference of less than one pint.
The release reports that patients who had robot-assisted surgery spent more time in the operating room than those having open procedures (seven hours vs. six hours). It also reports overall rates of adverse effects (67 percent in the robot group vs. 69 percent in the open group), along with listing the most common problems: urinary tract infections and intestinal obstructions. It would have helped if the release had pointed out that these complication rates were not statistically different.
In a way, the release is satisfactory and yet maddeningly unsatisfactory. It tells readers that the study was a randomized controlled trial with 350 patients, but it buries the most important point: that the ability of these researchers to do this sort of rigorous trial comparing robot-assisted surgery with open techniques highlights how few such trials are done. Deep in the release, the researchers are quoted as saying the findings “underscore the need for further high-quality trials to assess surgical innovation before this surgical technique is widely adopted in clinical practice” and that “It is important to conduct these trials before widespread adoption of technology, as has been the case with robotic prostatectomy (removal of the prostate).” The quality of the evidence should have been the lead.
The release does not exaggerate the prevalence of bladder cancer. It does not push treatment on those who are unlikely to benefit.
The release could be interpreted as promoting a newer, more expensive form of therapy that doesn’t provide any advantages over the standard therapy.
Although the release reports that the study was supported by the National Cancer Institute, it fails to note the financial disclosures several researchers made about money they have received from companies relevant to this study.
The comparison of robot-assisted to open surgery was the main point of the release.
The release makes clear that both techniques are available… including at the institution that issued the release. The release could have informed readers that, according to the published paper, all of the operators in this study had performed at least 10 robot-assisted cystectomies before being allowed to participate in the study. If the robot-assisted procedure is widely adopted, many surgeons may not have completed that many robotic cystectomies.
While the release does tell readers that what is really new here is the use of randomization to compare robot-assisted surgery with open surgery for bladder cancer, it buries that lead so deep beneath a poorly worded description of the results of this non-inferiority trial that the overall thrust of the release is unsatisfactory.
The release doesn’t employ unjustified, sensational language.