Scivillage.com Casual Discussion Science Forum
Peer review: How is that working out for ya?

Peer review: How is that working out for ya? - C C - Jun 25, 2020

https://ajbenjaminjrbeta.blogspot.com/2020/06/peer-review-how-is-that-working-out-for.html

EXCERPT (Arlin James Benjamin, Jr.): A paragraph from a former editor in chief of a major journal in my specialty area: Less than a month later, this got me into trouble. Apparently I had upset some Very Important People by “desk-rejecting” their papers, which means I turned them down on the basis of serious methodological flaws before sending out the work to other reviewers. [...] The rest of the article is worth reading. Peer review is not all it's cracked up to be.... (MORE)

Peer-Reviewed Scientific Journals Don't Really Do Their Job
https://www.wired.com/story/peer-reviewed-scientific-journals-dont-really-do-their-job/

EXCERPT: . . . In many ways, [science] journals don't even pretend to ensure the validity of scientific findings. If that were their primary goal, journal policies would require authors to share their data and analysis code with peer reviewers, and would ask reviewers to double-check results. In practice, reviewers can only judge the science based on what’s reported in the writeup, and they usually can’t see the details of the process that led to the findings. (This is kind of like asking a mechanic to evaluate a car without looking under the hood.) And for really important discoveries, you might expect journals to recruit an independent team of scientists to try to replicate a study from scratch. This basically never happens.

Journals do ask reviewers to weigh in on a study’s quality, but also on its novelty and drama. Most peer-reviewed journals aren't simply trying to filter out inaccurate findings; they're also trying to select the stuff that will boost their "impact factor"—a public ranking based on how many times a journal's articles get cited in the few years after they've been published. Accuracy matters, but many other aspects of a study also play important roles: whether the authors are eminent scientists, for example, or whether they're from prestigious universities, or whether the discovery is likely to get media attention. (Journal peer review also makes no attempt to ferret out deliberate fraud.)

Scientists know all of this, in principle. I knew all of it myself. But I didn't know the full extent until I became editor in chief of a peer-reviewed journal, Social Psychological and Personality Science, in 2015. I should never have gotten the job: I was young, barely tenured, and a bit rebellious. But the gatekeepers took a chance on me, and, as obstreperous as I was, I knew this job was a big responsibility and I had to fulfill my duties according to professional norms and ethics. I took this to mean that I should evaluate the scientific merits of each manuscript submitted to the journal, and decide whether to publish it based only on considerations of quality. In fact, I chose to hide the authors' names from myself as much as possible (sometimes called “triple-blind” review), so that I wouldn't be swayed or intimidated by how famous they were.

Less than a month later, this got me into trouble. Apparently I had upset some Very Important People by “desk-rejecting” their papers, which means I turned them down on the basis of serious methodological flaws before sending out the work to other reviewers. (This practice historically accounted for about 30 percent of the rejections at this journal.) My bosses—the committee that hires the editor in chief and sets journal policy—sent me a warning via email. After expressing concern about “toes being stepped on,” especially the toes of "visible ... scholars whose disdain will have a greater impact on the journal's reputation," they forwarded a message from someone whom they called "a senior, highly respected, award-winning social psychologist." That psychologist had written them to say that my decision to reject a certain manuscript was "distasteful." I asked for a discussion of the scientific merits of that editorial decision and others, but got nowhere.

In the end, no one backed down. I kept doing what I was doing, and they stood by their concerns about how I was damaging the journal’s reputation. It’s not hard to imagine how things might have gone differently, though. Without the persistent support of the associate editors and the colleagues I called on for advice during this episode, I very likely would have caved and just agreed to keep the famous people happy.

This is the seedy underbelly of peer-reviewed journals. Award-winning scientists are so used to getting their way that they can email the editor's boss and complain that they find rejection "distasteful." Then the editor is pressured to be nicer to the award-winning scientists.

I heard later that the person who had hired me as editor in chief described the decision as "an experiment gone terribly, terribly wrong." Fair enough: That's basically what I think about the whole system of peer-reviewed science journals. It was once a good idea—even a necessary one—but it isn’t working anymore.

It's not that peer review can't work; indeed, as the old saying goes, it's the worst form of quality control, except for all the other ones that have been tried. But there are new ways of doing peer review that we haven't yet tried, and that's where preprints come into play. Many of the problems with peer-reviewed journals are problems with the journals, rather than problems with peer review, per se... (MORE - details)