This is a guest column by Ivan Oransky, MD, who is executive editor of Reuters Health and publisher of the Embargo Watch blog.
In medical journalism, meeting coverage is ubiquitous. Just yesterday, for example, the embargo lifted on abstracts from the American Society of Clinical Oncology meeting. You will doubtless have seen lots of stories based on them by now, even though the meeting itself won’t start until early next month in Chicago. But you’ll also have seen stories from this week’s American College of Obstetricians and Gynecologists meeting and the American Heart Association’s Quality of Care and Outcomes Research in Cardiovascular Disease and Stroke meeting.
The “news peg” of a meeting – despite the fact that science does not work on an annual timetable – is irresistible to reporters. Medical societies, medical schools, and drug companies all whip deadline-driven journalists into a frenzy, burying them in press releases, often embargoed, that they will feel pressure to cover. Results announced at conferences can change practice or wreak havoc on a stock price.
But how good are those conference presentations, really? Will most of them stand the test of time – and, perhaps more importantly, peer review – so they can make it into prestigious journals?
That’s what a group of urologists from the University of Florida and Indiana University wanted to find out. So they looked at 126 randomized controlled clinical trials – those are the “gold standard” of medical evidence – presented at two American Urological Association meetings in 2002 and 2003.
The quality of that evidence wasn’t pretty. None of the abstracts said how trial subjects were randomly assigned to different treatments or placebos, and none said how the study ensured that neither the subjects nor their doctors knew which treatment they received. (I was particularly struck by the part of the study’s abstract in which the authors reported those two data points. It’s typical for studies to present percentages alongside raw numbers, to put data in context. You’d think it would be clear to readers that “0%” meant zero studies, but the authors felt the need to spell out both. Maybe that’s journal style; maybe they were trying to make a point.) Only about a quarter of the studies said how long researchers followed the subjects in a trial.
Those are important things to know about a trial. Their absence makes it nearly impossible to judge whether a study is well-designed, free of bias, and strong enough to change clinical practice.
So why is this important to journalists?
The Journal of Urology study’s results mirror those of a 2002 JAMA study by Lisa Schwartz, Steven Woloshin, and Linda Baczek of Dartmouth and the VA Outcomes Group. That team found that a full quarter of studies presented at conferences hadn’t been published in a peer-reviewed journal within three years – despite the fact that meeting abstracts “often receive substantial attention in the news media.” (The urology researchers found a roughly similar publication rate: about 63% of the abstracts were eventually published.)
Those success rates probably wouldn’t surprise conference organizers, who know their own abstract acceptance rates vary a great deal. The Dartmouth/VA team found that one conference accepted just a quarter of submitted abstracts, suggesting stiff competition – ideally based on quality – while another accepted 100%. But organizers don’t trumpet the fact that many of those abstracts will never be published in journals.
The JAMA study – titled “Media Coverage of Scientific Meetings: Too Much, Too Soon?” – concluded:
We believe that our findings both stem from and highlight 2 competing purposes of scientific meetings. On one hand, the meetings serve a scientific purpose by enabling communication among researchers. In this context, it is not only appropriate but desirable that scientists share work in progress to get feedback and ideas for moving forward, perhaps the purest form of peer review. On the other hand, the meetings serve a public relations purpose, generating support for the meetings’ sponsors and for the agencies funding research, and drawing attention to individual investigators and their institutions.
I don’t think those two purposes are inherently at odds. In fact, I would argue that for the good of journalism and the public it serves, and for meetings and the doctors they serve, it’s critical for conference organizers to demand more information in abstracts. That would build trust, and make for better public relations.
If the abstracts don’t provide what doctors and reporters need to decide whether the findings are worth their attention, that should be a red flag. Maybe they shouldn’t be covered at all. Or, if they are, it’s critical to put the findings – their quality, and their likelihood of making it to the majors – in context. And maybe a few years later, it’s worth a “whatever happened to?” roundup – warts and all.
Otherwise, we’ll all end up saying things like this:
“I hear on the news ‘Major breakthrough in cancer!’ And I think, Gee, I haven’t heard anything major recently. Then I listen to the broadcast and realize that I’ve never heard of this breakthrough. And then I never hear of it again.”
That quote, used by the Dartmouth/VA team to introduce their study, was from someone who was “pretty well plugged in to what’s going on in research”: Richard Klausner, former director of the National Cancer Institute.
Just imagine what more casual readers will think.