An interesting post, pointing to a paper published last week in the Annals of Emergency Medicine. An excerpt of the post:
Over 14 years, 84 editors at the journal rated close to 15,000 reviews by about 1,500 reviewers. Highlights of their findings:
…92% of peer reviewers deteriorated during the 14 years of the study in the quality and usefulness of their reviews (as judged by editors at the time of decision), at rates unrelated to the length of their service (but moderately correlated with their mean quality score, with better-than-average reviewers declining at about half the rate of those below average). Only 8% improved, and those by a very small amount.
How bad did they get? The reviewers were rated on a scale of 1 to 5 in which a change of 0.5 (10%) had been earlier shown to be “clinically” important to an editor.
The average reviewer in our study would have taken 12.5 years to reach this threshold; only 3% of reviewers whose quality decreased would have reached it in less than 5 years, and even the worst would take 3.2 years. Another 35% of all reviewers would reach the threshold in 5 to 10 years, 28% in 10 to 15 years, 12% in 15 to 20 years, and 22% in 20 years or more.
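The figures above can be sanity-checked with simple arithmetic: if a reviewer's score declines at a constant rate, the time to cross the 0.5-point threshold is just the threshold divided by the annual decline. A minimal sketch (the function name is hypothetical; the paper's actual statistical model is not reproduced here):

```python
def years_to_threshold(annual_decline: float, threshold: float = 0.5) -> float:
    """Years for a reviewer's rating to fall by `threshold` points,
    assuming a constant decline of `annual_decline` points per year."""
    if annual_decline <= 0:
        raise ValueError("annual_decline must be positive")
    return threshold / annual_decline

# The average reviewer reached the threshold in 12.5 years, which
# implies an average decline of 0.5 / 12.5 = 0.04 points per year.
print(years_to_threshold(0.04))  # → 12.5
```

On a 1-to-5 scale, 0.04 points per year is a gradual drift, which is consistent with the authors' characterization of the decline as slow but steady.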
So the decline was slow. Still, the results, note the authors, were surprising:
Such a negative overall trend is contrary to most editors’ and reviewers’ intuitive expectations and beliefs about reviewer skills and the benefits of experience.
Journalists should recognize the flaws in the peer review system, perhaps reconsider a steady news diet drawn only or mostly from journal articles, and be prepared to discuss limitations more often.