Health News Review

John Ioannidis, MD, of Stanford, whom the CommonHealth/WBUR bloggers referred to as the “renowned mythbuster of medicine,” asks in a JAMA viewpoint piece, “Are Medical Conferences Useful? And for Whom?” (unfortunately, subscription required for full-text access).

The CommonHealth blog explains:

After many years of questioning assumptions and seeking harder data on everything from surgery customs to drug studies, Dr. Ioannidis is now taking on a major cultural institution of medicine: The conference. (Some might call it “the boondoggle, junket, fuel-wasting, resume-padding, often-not-peer-reviewed conference.”) This latest target is particularly striking given that the Atlantic piece says that “His work has been widely accepted by the medical community; it has been published in the field’s top journals, where it is heavily cited; and he is a big draw at conferences.”

Excerpts of the Ioannidis JAMA piece:

An estimate of more than 100 000 medical meetings per year may not be unrealistic, when local meetings are also counted. The cumulative cost of these events worldwide is not possible to fathom.

Do medical conferences serve any purpose? In theory, these meetings aim to disseminate and advance research, train, educate, and set evidence-based policy. Although these are worthy goals, there is virtually no evidence supporting the utility of most conferences. Conversely, some accumulating evidence suggests that medical congresses may serve a specific system of questionable values that may be harmful to medicine and health care.

The availability of a plethora of conferences promotes a mode of scientific citizenship in which a bulk production of abstracts, with no or superficial peer review, leads to mediocre curriculum vita building. Even though most research conferences have adopted peer-review processes, the ability to judge an abstract of 150 to 400 words is limited and the process is more of sentimental value.

Moreover, many abstracts reported at the medical meetings are never published as full-text articles even though abstract presentations can nevertheless communicate to wide audiences premature and sometimes inaccurate results. It has long been documented that several findings change when research reports undergo more extensive peer review and are published as completed articles.* Late-breaker sessions in particular have become extremely attractive prominent venues within medical conferences because seemingly they represent the most notable latest research news. However, it is unclear why these data cannot be released immediately when they are ready and it is unclear why attending a meeting far from home is necessary to hear them. A virtual online late-breaker portal could be established for the timely dissemination of important findings.

Power and influence appear plentiful in many of these meetings. Not surprisingly, the drug, device, biotechnology, and health care–related industries make full use of such opportunities to engage thousands of practicing physicians. Lush exhibitions and infiltration of the scientific program through satellite meetings or even core sessions are common avenues of engagement. Although many meetings require all speakers to disclose all potential conflicts, the majority of speakers often have numerous conflicts, as is also demonstrated in empirical evaluations of similar groups of experts named on authorship lists of influential professional society guidelines.

Ioannidis doesn’t dismiss the notion of conferences entirely. In fact, he envisions what “repurposed” conferences might look like:

“Repurposed conferences could be designed to be entirely committed to academic detailing (ed. note: drug company “educational” outreach to physicians). All their exhibitions and satellite symposia would deal with how to prescribe specific interventions appropriately and how to favor interventions that are inexpensive, well tested, and safe. Such repurposed conferences could also focus on how to use fewer tests and fewer interventions or even no tests and no interventions, when they are not clearly needed.”

* Journalists who cover talks at scientific/medical conferences should be aware of this but often either aren’t, or their news organizations demonstrate a disregard for anything but the latest and apparently greatest – flawed though it may eventually turn out to be.

A Google search suggests that no news organization other than the Boston-based blog cited above chose to write about Ioannidis’ piece.

Yet in our HealthNewsReview.org daily reviews of news stories, we see stories every week that rush to report whatever is presented at such conferences. Examples:

Each of these was reported just in the past 2 weeks. We see it all the time.

Wake up and read the Ioannidis work.

Comments

Matthew Herper posted on April 3, 2012 at 2:23 pm

There’s a strain to this, though, that I really don’t like. Conferences are also where I’ve gotten many of the more negative stories I’ve written this year. They’re where I cultivate sources, hear stories. I get the feeling that’s what a lot of the doctors use them for, too: to meet up. We’re social monkeys. And I always worry that this kind of “caution” that we should only be trusting the most worthy studies translates into examining less stuff, which is not the way forward.

Sturgeon’s law, “ninety percent of everything is crap,” was coined in 1958. Just because it happens to be true does not mean that it’s new.

    Gary Schwitzer posted on April 4, 2012 at 8:06 am

    The other side of the coin comes from Helen Branswell, medical reporter for the Canadian Press, who wrote on my Facebook page:

    “I hate covering medical conferences. Presentations contain too little data and you don’t have anything to show or send to outside experts who could find the holes in the arguments. Nor do you have enough time to find them or call them. So standards that apply to covering a study in a journal slip when it comes to covering a presentation at a conference. I have medical reporter friends who swear conferences are a better source of stories than journals, but I worry about reporting on half-baked stories.”

Greg Pawelski posted on April 3, 2012 at 3:00 pm

In regard to many abstracts reported at medical meetings never being published as full-text articles: I recently found out that a clinical researcher, after presenting his study at a medical meeting, had his paper turned down by a journal. The reviewer (a competitor) had only good comments but rejected the paper because it didn’t reference all of the reviewer’s previous work in the field. This is ridiculous!

One of the researchers listed in a paper I did told me that the study he finally published in the journal Oncology was rejected by all the other American and European cancer journals (Journal of Clinical Oncology, Cancer, Annals of Oncology, European Journal of Cancer, International Journal of Cancer) where it had been submitted. The journals were reluctant to publish such a scientific report simply because the drugs studied were, at the time, being intensively advertised in those journals.

It is likely that many unpublished studies contain vitally important information that could influence future research and practice policy. Unpublished information may have special importance in oncology, due to the toxicity and/or expense of many therapies. In other words, the knowledge base is incomplete. On the “other” side of the coin, who does that help?

Tara Haelle posted on April 4, 2012 at 12:54 pm

I let this go when I saw the review on the coffee bean extract/weight loss story two weeks ago, but since you’re bringing it up again, I feel it’s important to point out that the study WAS peer-reviewed and had been published in a peer-reviewed journal. The two articles you reviewed did not mention this – perhaps they didn’t know – but I covered that story, and I didn’t cover it from the abstract. I spoke to Dr. Vinson, and I looked up and read the actual study in the journal Diabetes, Metabolic Syndrome and Obesity: Targets and Therapy. The link is here: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3267522/?tool=pubmed

Something the others didn’t report as well is that Dr. Vinson did not conduct the study – he had no interaction with the study participants. The study was conducted in India, and Dr. Vinson came across it and crunched the numbers. More specific data is available in the paper itself.

I did not provide specifics in my story about the weight loss changes during each of the six-week periods because we aim for an eighth grade reading level and I was struggling to find a way to convey the results in those terms. (That could easily – and fairly – be charged as a failure in my own work.) I did try to convey that this study was small and that people shouldn’t be rushing out and buying green coffee extract. But I do think it was fair to cover the story since it had been published in a peer-reviewed journal back in January (even if nearly all other outlets neglected to find that out).