What is a person with multiple sclerosis to make of this story?
Is it the “promising new therapy” trumpeted in the headline?
Or is it the no-better, no-worse, not-a-breakthrough, nothing-novel-about-it drug of the Mayo physician’s perspective?
How do journalists get at the answer? By evaluating the evidence and providing data – something not done sufficiently in this story.
People with MS understandably hang on every word of promising new approaches. We feel for them as they try to analyze what this new study really means. News stories need to do a better job of evaluating evidence or they might be better off leaving such topics alone entirely.
It may have been understandable that costs wouldn’t be discussed at this early stage of research. However, when the Mayo expert weighed in at the end about a comparable drug, this provided the perfect and easy opportunity for the story to include ballpark costs of drugs in this category. And a ballpark cost of the comparable drug is somewhere around $20,000.
Same criticisms as in the “evidence” criterion:
What does an 89 percent drop in lesion count mean? What is the relationship between lesion count and symptoms or eventual outcome? How does a patient relate lesion count to something meaningful in his/her life?
Relapse rates may have been a more helpful indicator, but we’re only told they were “much lower.” How much lower? What does that mean to patients?
And if the Mayo expert says that a competing drug has been in early trials for years – why not provide the evidence from those trials?
The headline says “promising” but the independent expert says “I see no major advantage of this drug versus that older drug. It’s not better or worse. It’s the same” and that it’s not a breakthrough.
What does “appears to be safe” and “no serious adverse effects directly attributable to the drug” mean?
Here’s what the study abstract stated: “We noted serious adverse events in two of 54 (4%; 95% CI 3·0—4·4) patients in the placebo group, one of 55 (2%; 1·3—2·3) in the 600 mg ocrelizumab group, three of 55 (5%; 4·6—6·3) in the 2000 mg group, and two of 54 (4%; 3·0—4·4) in the interferon beta-1a group.”
Why didn’t the story report this?
What side effects were seen? Tell readers/patients and let them decide if they are serious or not. And how did researchers know that whatever was seen was not directly attributable to the drug?
And what’s the safety record been of the drug the Mayo expert described that has a longer track record?
Insufficient information on harms.
What does an 89 percent drop in lesion count mean?
Relapse rates may have been a more helpful indicator, but we’re only told they were “much lower.” How much lower? What does that mean to patients?
And if the Mayo expert says that a competing drug has been in early trials for years – why not provide the evidence from those trials?
All in all, the story didn’t provide a meaningful context for readers.
The Mayo expert’s input was the one saving grace of the story.
The story also disclosed that two drug companies funded the study.
We only had the Mayo expert’s broad comments comparing ocrelizumab and rituximab – but no comparative data on benefits and harms were provided.
It’s clear from the story that the approach “is only in the early stages of exploration.”
This is perhaps the most glaring weakness in the story.
The headline calls it a “new therapy.” (It’s not a therapy until it’s proven to be one.)
The body of the story uses “new” or “novel” three times.
But then the Mayo expert says “there’s nothing novel about this at all.”
So which is it?
The addition of the Mayo expert’s perspective shows that the story did not rely solely on a news release.