John Ioannidis, MD, of Stanford, whom the CommonHealth/WBUR bloggers referred to as the “renowned mythbuster of medicine,” asks in a JAMA viewpoint piece, “Are Medical Conferences Useful? And for Whom?” (unfortunately, subscription required for full text access).
The CommonHealth blog explains:
After many years of questioning assumptions and seeking harder data on everything from surgery customs to drug studies, Dr. Ioannidis is now taking on a major cultural institution of medicine: The conference. (Some might call it “the boondoggle, junket, fuel-wasting, resume-padding, often-not-peer-reviewed conference.”) This latest target is particularly striking given that the Atlantic piece says that “His work has been widely accepted by the medical community; it has been published in the field’s top journals, where it is heavily cited; and he is a big draw at conferences.”
Excerpts of the Ioannidis JAMA piece:
An estimate of more than 100 000 medical meetings per year may not be unrealistic, when local meetings are also counted. The cumulative cost of these events worldwide is not possible to fathom.
Do medical conferences serve any purpose? In theory, these meetings aim to disseminate and advance research, train, educate, and set evidence-based policy. Although these are worthy goals, there is virtually no evidence supporting the utility of most conferences. Conversely, some accumulating evidence suggests that medical congresses may serve a specific system of questionable values that may be harmful to medicine and health care.
…
The availability of a plethora of conferences promotes a mode of scientific citizenship in which a bulk production of abstracts, with no or superficial peer review, leads to mediocre curriculum vita building. Even though most research conferences have adopted peer-review processes, the ability to judge an abstract of 150 to 400 words is limited and the process is more of sentimental value.
…
Moreover, many abstracts reported at the medical meetings are never published as full-text articles even though abstract presentations can nevertheless communicate to wide audiences premature and sometimes inaccurate results. It has long been documented that several findings change when research reports undergo more extensive peer review and are published as completed articles.* Late-breaker sessions in particular have become extremely attractive prominent venues within medical conferences because seemingly they represent the most notable latest research news. However, it is unclear why these data cannot be released immediately when they are ready and it is unclear why attending a meeting far from home is necessary to hear them. A virtual online late-breaker portal could be established for the timely dissemination of important findings.
…
Power and influence appear plentiful in many of these meetings. Not surprisingly, the drug, device, biotechnology, and health care–related industries make full use of such opportunities to engage thousands of practicing physicians. Lush exhibitions and infiltration of the scientific program through satellite meetings or even core sessions are common avenues of engagement. Although many meetings require all speakers to disclose all potential conflicts, the majority of speakers often have numerous conflicts, as is also demonstrated in empirical evaluations of similar groups of experts named on authorship lists of influential professional society guidelines.
Ioannidis doesn’t discard the entire notion of conferences. In fact, he projects what “repurposed” conferences might be like:
“Repurposed conferences could be designed to be entirely committed to academic detailing (ed. note: drug company “educational” outreach to physicians). All their exhibitions and satellite symposia would deal with how to prescribe specific interventions appropriately and how to favor interventions that are inexpensive, well tested, and safe. Such repurposed conferences could also focus on how to use fewer tests and fewer interventions or even no tests and no interventions, when they are not clearly needed.”
A Google search suggests that no news organization other than the Boston-based blog cited above chose to write about Ioannidis’ piece.
Yet, in our HealthNewsReview.org daily reviews of news stories, we see stories every week that are in a rush to publish whatever is presented at such conferences. Examples:
- Breast cancer vaccine research presented at the American Association for Cancer Research annual meeting.
- Mouse research on prostate cancer scans – presented at the same meeting – reported by the same news organization.
- Story based on a presentation from a “Late Breaking Research Session” (one of Ioannidis’ other themes in the article above) at the annual meeting of the American Academy of Dermatology. The presentation was one of 15 conducted within a 2-hour period (an average of 8 minutes per presentation). So how and why was this one selected for news coverage? We are especially bewildered since a poster at the same meeting (Poster 5300) provided information on the use of a competing microwave device for hyperhidrosis. That poster presented 6-month data for 27 of 31 enrolled subjects. Why then report on a 2-month study in 14 subjects, presented presumably in 8 minutes?
- Weak story on a weak, tiny, short-term manufacturer-funded study suggesting weight loss from a supplement containing unroasted coffee bean extract. The primary source material was a 150-word abstract that had not been peer-reviewed and had not yet even been presented at the national meeting of the American Chemical Society.
Each of these was reported just in the past 2 weeks. We see it all the time.
Wake up and read the Ioannidis work.
Comments (10)
Please note, comments are no longer published through this website. All previously made comments are still archived and available for viewing through select posts.
Matthew Herper
April 3, 2012 at 2:23 pm
There’s a strain to this, though, that I really don’t like. Conferences are also where I’ve gotten many of the more negative stories I’ve written this year. They’re where I cultivate sources, hear stories. I get the feeling that’s what a lot of the doctors use them for, too: to meet up. We’re social monkeys. And I always worry that this kind of “caution” that we should only be trusting the most worthy studies translates into examining less stuff, which is not the way forward.
Sturgeon’s law, “ninety percent of everything is crap,” was coined in 1958. Just because it happens to be true does not mean that it’s new.
Gary Schwitzer
April 4, 2012 at 8:06 am
Other side of the coin comes from Helen Branswell, medical reporter for the Canadian Press, who wrote on my Facebook page:
Greg Pawelski
April 3, 2012 at 3:00 pm
Regarding many abstracts reported at medical meetings never being published as full-text articles: I recently found out that a clinical researcher, after presenting his study at a medical meeting, had his paper turned down by a journal because the reviewer (a competitor), despite having only good comments, rejected the paper because it didn’t reference all of the reviewer’s previous work in the field. This is ridiculous!
One of the researchers listed in a paper I did, told me that the study he finally published in the journal Oncology, was rejected by all other American & European cancer journals (Journal of Clinical Oncology, Cancer, Annals of Oncology, European Journal of Cancer, International Journal of Cancer) where it had been submitted. The journals were reluctant to publish such a scientific report, simply because the drugs studied were at the time very intensively advertised in these journals.
It is likely that many unpublished studies contain vitally important information that could influence future research and practice policy. Unpublished information may have special importance in oncology, due to the toxicity and/or expense of many therapies. In other words, the knowledge base is incomplete. On the “other” side of the coin, who does that help?
Tara Haelle
April 4, 2012 at 12:54 pm
I let this go when I saw the review on the coffee bean extract/weight loss story two weeks ago, but since you’re bringing it up again, I feel it’s important to point out that the study WAS peer-reviewed and had been published in a peer-reviewed journal. The two articles you reviewed did not mention this – perhaps they didn’t know – but I covered that story, and I didn’t cover it from the abstract. I spoke to Dr. Vinson, and I looked up and read the actual study in the journal Diabetes, Metabolic Syndrome and Obesity: Targets and Therapy. The link is here: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3267522/?tool=pubmed

Something the others didn’t report as well is that Dr. Vinson did not conduct the study – he had no interaction with the study participants. The study was conducted in India, and Dr. Vinson came across it and crunched the numbers. More specific data is available in the paper itself.

I did not provide specifics in my story about the weight loss changes during each of the six week periods because we aim for an eighth grade reading level and I was struggling to find a way to convey the results in those terms. (That could easily – and fairly – be charged as a failure in my own work.) I did try to convey that this study was small and that people shouldn’t be rushing out and buying green coffee extract. But I do think it was fair to cover the story since it had been published in a peer-reviewed journal back in January (even if nearly all other outlets neglected to find that out).
Tara Haelle
April 4, 2012 at 1:01 pm
At risk of opening myself up to other potential reporting failures (I’m relatively green in health reporting myself), the link to the story I wrote is here: http://www.dailyrx.com/news-article/unroasted-green-coffee-beans-appear-associated-greater-weight-loss-18278.html
Our Comments Policy
But before leaving a comment, please review these notes about our policy.
You are responsible for any comments you leave on this site.
This site is primarily a forum for discussion about the quality (or lack thereof) in journalism or other media messages (advertising, marketing, public relations, medical journals, etc.) It is not intended to be a forum for definitive discussions about medicine or science.
We will delete comments that include personal attacks, unfounded allegations, unverified claims, product pitches, profanity or any from anyone who does not list a full name and a functioning email address. We will also end any thread of repetitive comments. We don’t give medical advice so we won’t respond to questions asking for it.
We don’t have sufficient staffing to contact each commenter who left such a message. If you have a question about why your comment was edited or removed, you can email us at feedback@healthnewsreview.org.
There has been a recent burst of attention to troubles with many comments left on science and science news/communication websites. Read “Online science comments: trolls, trash and treasure.”
The authors of the Retraction Watch comments policy urge commenters:
We’re also concerned about anonymous comments. We ask that all commenters leave their full name and provide an actual email address in case we feel we need to contact them. We may delete any comment left by someone who does not leave their name and a legitimate email address.
And, as noted, product pitches of any sort – pushing treatments, tests, products, procedures, physicians, medical centers, books, websites – are likely to be deleted. We don’t accept advertising on this site and are not going to give it away free.
The ability to leave comments expires after a certain period of time. So you may find that you’re unable to leave a comment on an article that is more than a few months old.