The story promises information about a “new eye test” and “a new way of identifying people at risk of glaucoma years before vision loss happens.” But the test is never adequately described. And the supposed benefits are not explained in a meaningful way.
Stories that include claims about new technologies need to be more rigorous than this one was. Consumers don't need a textbook or a journal article, but our 10 criteria can be addressed readily. If they were, consumers would have far more of the information they need to make sense of new approaches and research studies.
This story missed on what are arguably our two most important criteria: explaining costs and benefits. These issues matter in all health care news stories.
It also missed a key point addressed in our “Does the story seem to grasp the quality of the evidence?” criterion. See below.
The story didn’t include anything about costs.
What will it cost to use the “computer-based imaging tool” described in the story? We’re not told. There is no information on the cost of expanding the use of retinal photographs to a wider population.
This is a clear example of how incomplete relative risk/benefit figures can be on their own. The story only says that "Those with the narrowest vessels at the beginning of the study were four times more likely to have developed glaucoma a decade later."
But it does not explain the underlying absolute risk: how many people actually developed glaucoma over that decade.
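Why the absolute numbers matter can be shown with a small sketch. The figures below are illustrative only, not taken from the study: a relative risk of "four times more likely" sounds alarming, but its real-world meaning depends entirely on the baseline risk.

```python
def absolute_risk(baseline_risk: float, relative_risk: float) -> float:
    """Convert a baseline absolute risk and a relative risk
    into the absolute risk for the higher-risk group."""
    return baseline_risk * relative_risk

# Illustrative baseline only (NOT from the study): if the ten-year
# risk of glaucoma in the comparison group were 2%, a fourfold
# relative risk means an absolute risk of 8% -- a 6 percentage
# point difference, which reads very differently from "4x".
print(absolute_risk(0.02, 4.0))
```

The same "four times more likely" applied to a 0.5% baseline would mean only a 2% absolute risk, which is why reporting the relative risk alone leaves readers unable to judge the size of the benefit.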
Not applicable. There wasn’t any discussion of potential harms and we can’t think of any – besides the obvious one of unnecessary glaucoma screening in some cases.
At least the story included the independent perspective from an eye surgeon, who said:
“It remains to be seen if this approach will help us identify people at risk for glaucoma sooner,” he says. “We have a number of tools now to help us do that, but we’ve got to get people in our offices to use them.”
But there certainly wasn’t the kind of rigorous evaluation of the evidence that we would have liked to see in such a story.
The test is actually just computer analysis of retinal photographs, which many people have experience with from diabetic eye screening. Placing the study in that context would have helped readers trying to understand it. The story also doesn't mention that 33% of participants were lost to follow-up. Since only about 3% of participants with complete follow-up developed glaucoma, the number of glaucoma cases among the "missing" 33% is likely to significantly influence the results and potentially the conclusions to be drawn.
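The sensitivity to lost-to-follow-up participants can be sketched with the two numbers the review does cite (33% lost, ~3% incidence among completers); the assumed incidences for the missing third and the cohort size are hypothetical, chosen only to show the range of possible outcomes.

```python
def overall_incidence(n_total: int, loss_rate: float,
                      incidence_completers: float,
                      incidence_missing: float) -> float:
    """Blend the observed incidence among completers with an
    assumed incidence among participants lost to follow-up."""
    n_missing = n_total * loss_rate
    n_complete = n_total - n_missing
    cases = (n_complete * incidence_completers
             + n_missing * incidence_missing)
    return cases / n_total

# Source figures: 33% lost to follow-up, ~3% incidence among
# completers. The incidence among the missing third is unknown,
# so vary it across plausible values (hypothetical scenarios):
for assumed in (0.0, 0.03, 0.10):
    print(f"missing incidence {assumed:.0%} -> "
          f"overall {overall_incidence(10_000, 0.33, 0.03, assumed):.2%}")
```

With so few observed cases, the unknown outcomes of the missing third can move the overall incidence from about 2% to over 5%, which is why the omission weakens confidence in the reported fourfold risk.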
No disease mongering of glaucoma in the story.
The closing comment from an independent eye surgeon provided some necessary – albeit late and brief – perspective.
Again, we’ll give the story the benefit of the doubt for at least nodding in the direction of other alternatives, as when the eye surgeon noted, “We have a number of tools now to help us do that.”
There was no discussion of what the “computer-based imaging tool” really was. Is it equipment already readily available and in use? If not, what is it, and what would it take to implement?
No details were provided – a weak spot in the story.
The story should have made the connection between retinal photography that is currently done for diabetic eye screening and this new, alternative use of the same images.
Related to the unsatisfactory “Availability” score above, since no details were given about the “computer-based imaging tool,” we don’t know anything about how novel it is (or isn’t).
It does not appear that the story relied solely or largely on a news release.