Health News Review

This is a guest column by Ivan Oransky, MD, who is executive editor of Reuters Health and publisher of the Embargo Watch blog.

In medical journalism, meeting coverage is ubiquitous. Just yesterday, for example, the embargo lifted on abstracts from the American Society of Clinical Oncology meeting. You will doubtless have seen lots of stories based on them by now, even though the meeting itself won’t start until early next month in Chicago. But you’ll also have seen stories from this week’s American College of Obstetricians and Gynecologists meeting and the American Heart Association’s Quality of Care and Outcomes Research in Cardiovascular Disease and Stroke meeting.

The “news peg” of a meeting – despite the fact that science does not work on an annual timetable – is irresistible to reporters. Medical societies, medical schools, and drug companies all whip deadline-driven journalists into a frenzy, burying them in press releases, often embargoed, that they feel pressure to cover. Results announced at conferences can change practice or wreak havoc on a stock price.

But how good are those conference presentations, really? Will most of them stand the test of time – and, perhaps more importantly, peer review – so they can make it into prestigious journals?

That’s what a group of urologists from the University of Florida and Indiana University wanted to find out. So they looked at 126 randomized controlled clinical trials – those are the “gold standard” of medical evidence – presented at two American Urological Association meetings in 2002 and 2003.

The quality of that evidence wasn’t pretty. None of the abstracts said how trial subjects were randomly assigned to different treatments or placebos, and none said how the study ensured that neither the subjects nor their doctors knew which they got. (I was particularly struck by the part of the study’s abstract in which the authors reported those two data points. It’s typical for studies to present percentages next to raw numbers, to put data in context. You’d think it would be clear to readers that “0%” meant zero studies, but they felt the need to spell both out. Maybe journal style, maybe trying to make a point.) Only about a quarter of the studies said how long researchers followed the subjects in a trial.

Those are important things to know about a trial. Their absence makes it nearly impossible to judge whether a study is well-designed, free of bias, and strong enough to change clinical practice.

So why is this important to journalists?

The Journal of Urology study’s results mirror those of a 2002 JAMA study by Lisa Schwartz, Steven Woloshin, and Linda Baczek of Dartmouth and the VA Outcomes Group. That team found that a full quarter of studies presented at conferences hadn’t been published in a peer-reviewed journal within three years – despite the fact that meeting abstracts “often receive substantial attention in the news media.” (The urology researchers found that a similar percentage was eventually published: about 63%.)

Those success rates probably wouldn’t surprise conference organizers, who know their own abstract acceptance rates vary a great deal. The Dartmouth/VA team found that one conference accepted just a quarter of submitted abstracts, suggesting stiff competition, hopefully based on quality, while another accepted 100%. But organizers don’t trumpet the fact that many of those abstracts will never be published in journals.

The JAMA study – titled “Media Coverage of Scientific Meetings: Too Much, Too Soon?” – concluded:

We believe that our findings both stem from and highlight 2 competing purposes of scientific meetings. On one hand, the meetings serve a scientific purpose by enabling communication among researchers. In this context, it is not only appropriate but desirable that scientists share work in progress to get feedback and ideas for moving forward, perhaps the purest form of peer review. On the other hand, the meetings serve a public relations purpose, generating support for the meetings’ sponsors and for the agencies funding research, and drawing attention to individual investigators and their institutions.

I don’t think those two purposes are inherently at odds. In fact, I would argue that for the good of journalism and the public it serves, and for meetings and the doctors they serve, it’s critical for conference organizers to demand more information in abstracts. That would build trust, and better public relations.

If the abstracts won’t provide what doctors and reporters need to decide whether they’re worth their attention, that should be a red flag. Maybe they shouldn’t be covered at all. Or, if they are, it’s critical to put the findings – their quality, and their likelihood of making it to the majors – in context. And maybe a few years later, it’s worth a “whatever happened to?” roundup – warts and all.

Otherwise, we’ll all end up saying things like this:

“I hear on the news ‘Major breakthrough in cancer!’ And I think, Gee, I haven’t heard anything major recently. Then I listen to the broadcast and realize that I’ve never heard of this breakthrough. And then I never hear of it again.”

That quote, used by the Dartmouth/VA team to introduce their study, was from someone who was “pretty well plugged in to what’s going on in research”: Richard Klausner, former director of the National Cancer Institute.

Just imagine what more casual readers will think.

Comments

Ann MacDonald posted on May 21, 2010 at 9:13 am

Excellent guest commentary – I especially appreciate this because I turned on NBC last night to hear about a “breakthrough” in ovarian cancer detection and found myself listening to interesting but preliminary research. (It concerned using changes in CA-125 levels to identify women who might have ovarian cancer. Results were interesting but not a breakthrough. I also wondered why they did not allude to the problem with PSA tracking, which has turned out not to live up to its early promise and has instead resulted in too many men undergoing surgery for prostate cancer for no reason.)
Anyway, thanks, Gary, for sharing the post. I enjoy your site and have learned a lot by reading it and your occasional guest commentaries.

Joseph Arpaia, MD posted on May 21, 2010 at 12:17 pm

Excellent post. I often have patients come in asking about a new “breakthrough” treatment that I haven’t heard of. When I check into it I realize that their source is the media, usually TV, and my source is the peer-reviewed literature, generally that which has been replicated. It makes for some interesting conversations.

Greg Pawelski posted on May 22, 2010 at 8:00 am

I don’t believe passing peer-review is the equivalent of the Good Housekeeping seal of approval. Recent disclosures of fraudulent or flawed studies in professional medical journals have called into question the merits of their peer-review system.
Peer review lacks consistent standards. A peer reviewer often spends about four hours reviewing research that may have taken months or years to complete, but the amount of time spent on a review and the expertise of the reviewer can differ greatly.
Perhaps posting conference presentations is like harnessing the power of the internet. Many papers can be viewed on websites, not just the ones selectively handled by peer-reviewed journals. Papers are sent to the journals and peer-reviewed. If they are accepted, great. If not, up they go on the internet.
At least the information gets out there, which is more than would have happened had the authors waited for a journal to do the right thing and publish what may be very good and important papers.
Journals control when the public learns about findings by setting dates when the research can be published (if they allow them published at all). They impose severe restrictions on what authors can say publicly, even before they submit a manuscript, and they have penalized authors for infractions by refusing to publish their papers.
Journal editors are the “gatekeepers” of information – only the information that they allow gets through. Peer review seems to be nothing but a form of vetting (whether driven by anger, jealousy, or whatever). Reviewers are in fact often competitors of the authors of the papers they scrutinize, raising potential conflicts of interest.
Such problems are far more embarrassing for journals because of their claims about the superiority of their system of editing. Journal editors do not routinely examine authors’ scientific notebooks; they rely on peer reviewers’ criticisms.
Then there is the problem with respected cancer journals publishing articles that identify safer and more effective treatment regimens, yet few oncologists are incorporating these synergistic methods into their clinical practice. Because of this, cancer patients often suffer through chemotherapy sessions that do not integrate all possibilities.
These are the major flaws in the system of peer-reviewed science. All the more reason why journalists should avoid relying on the latest studies for medical news coverage.

Robert Stern posted on May 24, 2010 at 9:34 am

Ivan,
Excellent post.
What we do at MedPage Today.com, where we cover over 60 medical meetings a year on site, is:
1. We don’t just write from abstracts. We cover presentations, and those presentations – oral and posters – are very different from the abstracts. It is, however, really impossible to do an AUA-like review of the presentations, because that would require having access to all the complete posters and complete slide sets.
2. Action points are presented in an inset-box clearly stating the merit of the research and if it is actionable or developing.
3. Each article is reviewed by a physician (under the direction of the University of Pennsylvania School of Medicine) to offer another level of review and objectivity if needed.
Bob Stern