We don’t think the story conveyed the conclusion of the researchers, nor that of the editorial writer.
The researchers concluded:
“While robotic assisted and laparoscopic surgery are associated with fewer deaths, complications, transfusions and shorter length of hospital stay compared to open surgery, robotic assisted laparoscopic surgery is more costly than laparoscopic and open surgery. Additional studies are needed to better delineate the comparative and cost effectiveness of robotic assisted laparoscopic surgery relative to laparoscopic surgery and open surgery.”
That conclusion sends a somewhat different message than that delivered in the story – and the difference is important.
The editorial writer stated that the study “failed to capture 3 important elements,” and then clearly spelled those out. The story didn’t convey the depth or details of those concerns.
What are the outcomes that readers/patients should really care about? That’s the question the editorial raised – or that many other independent observers would raise about this study. While the story was functional, it could have helped readers become smarter health care consumers with a bit more context.
Much of the focus of the piece was on costs.
The story did an adequate job quantifying the operative mortality, blood transfusion and length-of-stay rates among the three groups studied. (Unfortunately, it did so inconsistently – sometimes giving data for all three types of surgery, sometimes providing results for only two surgical approaches.)
We’ll give it a satisfactory score here.
But are these the outcomes that really matter? That’s something we address in the “Evidence” criterion below.
As with the benefits criterion above, the details provided were adequate.
The story nodded in the direction of the accompanying editorial, but we wish it had captured the important points as written by the editorialist:
“While this study provides some important information that will help patients, providers and policy makers make these value judgments, it fails to capture 3 important elements. First, as the authors note, the cost estimates do not include the capital investment of purchasing a robotic system or the indirect economic benefits of patients’ early return to work and increased productivity. Second, the analysis fails to capture the human cost of the learning curve. In other words, as providers learn new surgical techniques, outcomes are often worse for patients early in the learning curve as hypothesized by Hu and others. Finally, and perhaps most importantly, the study fails to capture patient reported outcomes such as postoperative pain, return to baseline functional status and health related quality of life, all of which are highly germane to the procedures under study here. It is difficult, if not impossible, to assess the cost-effectiveness of minimally invasive technologies without including these critical outcomes. Future comparative effectiveness studies of these techniques must include patient reported outcomes as the primary end point if they are to inform the debate regarding the value of our interventions.”
No disease mongering at play here.
The story included some input from the editorial writer – although, as stated above in the “Evidence” criterion, perhaps not to greatest effect.
The focus of the story was on a study comparing robot-assisted surgery with two other surgical techniques.
The availability of all three surgical approaches studied – while not explicitly described – could be inferred from the study details reported.
The story didn’t provide any context on the growing body of literature about comparative effectiveness questions in this field as robotic surgery proliferates.
It does not appear that the story relied on a news release.