
NYT Dartmouth Atlas criticism falls flat with many bloggers


Some very smart bloggers raised many questions of their own about yesterday’s New York Times critique of the Dartmouth Atlas methodology.

• “This journalism, like the Dartmouth research on which it draws, isn’t perfect” – Jonathan Cohn on The New Republic blog.

• “a confused, woffly attack on Dartmouth from Reed Abelson & Gardiner Harris. This is a dreadful article. Period.” – Matthew Holt on The Health Care Blog.

• “How can news reporters avoid making mistakes when reporting on technical issues?” – Andrew Gelman on his Statistical Modeling, Causal Inference, and Social Science blog.

• “Are Reed Abelson and Gardiner Harris as Big Tools as Their Attempted Trashing of Dartmouth Suggests? Yes. Time to Shut the New York Times for Good” – Brad DeLong on his Grasping Reality with Both Hands blog.

• “Others quoted in today’s story indicate that the Times’ piece distorted what they said” – Maggie Mahar on her Health Beat blog.

(Addendum, 80 minutes after the original post:) I meant to include a sixth blogger. Paul Raeburn on the Knight Science Journalism Tracker wrote:

“Reporters Reed Abelson and Gardiner Harris evidently think they have written a story that says the data used to identify potential health savings in Medicare is faulty. They haven’t. Their bias is evident from the start.”


Comments


Alan Burgener

June 4, 2010 at 10:58 am

This is a very interesting debate, and unfortunately, is all too representative of what passes for thoughtful policy deliberations in health care. Any research carries caveats relating to methodology, unconsidered variables, etc. That doesn’t make the available findings useless; it merely limits the degree of confidence you can have in any conclusions extrapolated from the research.

Rather than recognizing the limitations of the research and applying appropriate caution to the resulting health policy applications, the tendency in health care has been to completely ignore any research findings that don’t comport with the pre-existing views of a given individual or group, and then to justify doing so by citing the weaknesses in methodology or the incomplete nature of the studies.

The truth is, at least as I understand it, that using the Dartmouth research to support an argument that cheaper care is always better care is, while not completely indefensible, a pretty weak position. However, it’s also true that there is virtually no credible research (notwithstanding a few very small studies) that suggests anything other than comparable quality and comparable clinical outcomes across communities and regions (not individual hospitals or physicians) that have vastly different costs of care for a variety of conditions.

Therefore, the critical hypothesis that the U.S. health care system could achieve current outcomes for much less money strikes me as a very valid starting point for the health policy debate. We would all be better off if the ongoing conversation revolved around this hypothesis, rather than allowing ourselves to be distracted by the efforts at misdirection coming from those who fear that testing the above hypothesis, and better understanding the complex, multi-factorial interface between health care costs, quality, and outcomes, will leave them looking bad.

Elaine Schattner, M.D.

June 4, 2010 at 12:23 pm

I thought the Times piece was good, overall, pointing to some possible limitations in the Dartmouth study. Equivocation about complex and conflicting data is sometimes appropriate, far safer than simply swallowing the position as framed by some experts with whom we happen to agree.

c3

June 4, 2010 at 1:48 pm

Reminiscent of the attacks on the Agency for Health Care Policy and Research (AHCPR) in the 1990s.

Gregory D. Pawelski

June 6, 2010 at 5:36 pm

I’m sorry, Alan, but the real truth is that the Dartmouth research never said “cheaper care is always better care.” What they have said is that “More care is not necessarily better care. Sometimes outcomes are worse.” And they have said, or at least implied, that less expensive care can be just as effective, although that is not always the case. But when less expensive, less aggressive care is as effective, patients are exposed to fewer risks. The folks at Dartmouth are well aware that the cheaper care the poor often receive is frequently not as effective.
For instance, with regard to cancer medicine, emerging evidence shows that many highly expensive targeted cancer drugs like Avastin may be just as effective, and produce fewer side effects, if taken over shorter periods and in lower doses. Dr. Ian Haines reported in the Journal of Clinical Oncology that the dose being used is 15 milligrams per kilogram of body weight, despite research showing it may work at 3 milligrams per kilogram.
Pharmaceutical companies are attracted to studies looking at the maximum tolerated dose of any treatment. As an increasing number of drug studies are developed through collaborations between academic medical centers and drug companies, it is important to understand the influence that industry involvement may have on the nature and direction of cancer research.