The following is a guest post by Earle Holland, who, for almost 35 years, was the senior science and medical communications officer at Ohio State University. He’s now a member of our editorial team.
Sometimes, researchers can do all the right things in reporting their results and still the outcome falls short of the public's need for information. That's the case with a recent report published in the Journal of General Internal Medicine.
A team from Massachusetts General Hospital and the Harvard Medical School looked for a link between the ratings included on an institution's Facebook page and the likelihood of a high or low readmission rate for patients, one indicator of the quality of care at a medical center. They wanted to know if a high Facebook rating might suggest a low readmission rate.
And it did.
They looked at 791 hospitals whose readmission rates for patients fell outside the national average, either higher or lower. Facebook pages allow viewers to rank an institution on a five-star basis and those hospitals with low readmission rates seemed to have garnered a higher number of stars in their ranking. The opposite was also true, based on the data the researchers reviewed.
It seemed a clear correlation.
The problem arises when the public perceives that linkage as causal, in other words, that the higher Facebook rating grew directly from the lower readmission rate. And that clearly wasn't the case. In fact, the researchers were clear in saying just that:
“The study design is cross-sectional and correlative, which limits assigning causality in the findings.”
Still, people want simple answers, especially when it comes to complicated health care. And that's why social media has become such a key part of most institutions' communications programs. Social media (Facebook, Twitter, Instagram, etc.) has given a quick and easy voice to the masses. Moreover, every voice has, for the most part, the same value. An anecdote from a disgruntled patient can carry the same weight as the advice of a knowledgeable expert. Social media has leveled the playing field for all.
And as liberating as that might seem, where health care is concerned, all commentary isn’t equal.
A cursory, and admittedly unscientific, review of a half-dozen med centers' Facebook pages shows the shortcomings of social media evaluations. True, Facebook will tally commenters' rankings and post the composite score for viewers (Massachusetts General got a 4.6), but people have to choose 1 through 5 stars to be allowed to comment. Viewers who said they wanted to offer a zero couldn't comment at all.
There are risks with assigning too much value to social media activity. It does offer institutions the chance to open a dialogue with consumers. But in the case of that cursory review I mentioned, commenters whose messages were negative toward the institution invariably got boilerplate responses, hardly the kind of feedback capable of changing attitudes.
Lastly, human nature plays a large role in social media. People who may have had a good or bad experience at an institution often don't take the time and effort to share that on a Facebook page, so their experiences aren't factored in. Commenters are always self-selected, adding a certain bias to any conclusions drawn from social media data.
In this case, the Mass General researchers included the necessary caveats in their study. They cited its limitations and were wary of extrapolating their findings too far. The same can't be said about the news release that the hospital issued about the study, which compounded the other problems.
To begin with, there's the headline on the release: "Hospital ratings on social media appear to reflect quality of care." I think that headline goes too far, suggesting a causal link that actually hasn't been established.
The release also quotes the lead researcher saying, "While we can't say conclusively that social media ratings are fully representative of the actual quality of care, this research adds support to the idea that social media has quantitative value in assessing the areas of patient satisfaction . . . and other quality outcomes."
But research that "adds support" to an idea may not be information that will aid someone's informed decision making.
News releases like this should reflect more of the caveats and limitations of the published journal article.
Note from Publisher Gary Schwitzer: We’ve recently introduced our new feature of systematic, criteria-driven reviews of health care news releases. Look for our database of such reviews to grow soon.