Psychology is in a replication crisis. The PSA is trying to fix it.

C C

EXCERPTS: . . . For the past 10 years, psychology has been struggling through what’s called the “replication crisis.” In summary: About a decade ago, many scientists realized that their standard research methods were delivering false, unreliable results.

When famous, textbook psychological studies were retested with more rigorous methods, many failed. Other results simply looked less impressive upon reinspection. It’s possible around 50 percent of the published psychological literature fails upon retesting, but no one knows precisely the extent of the instability in the foundations of psychological science. The realization provoked a painful period of introspection and revision.

Chartier’s idea for the Psychological Science Accelerator [PSA] was inspired by global, massive projects in physics [...] The accelerator is a global network of psychologists who work together on answering some of the field’s toughest questions, with methodological rigor.

There’s an old model for conducting psychological research: done in small labs, run by one big-name professor, probing the brains of American college undergrads. The incentives built into this model have favored publishing as many papers with positive results as possible (those showing statistically significant effects, while studies that turned up bupkis went unpublished) over rigorous inquiry. This old model has produced a mountain of scientific literature, but a lot of it has failed upon closer inspection.

Under this structure, researchers had arguably too much freedom: freedom to report positive findings but keep negative findings in a file drawer; to stop conducting an experiment as soon as desired results were obtained; to make 100 predictions but only report the ones that panned out. That freedom led researchers — often unwittingly, and without malicious intent (a lot of the practices were to make best use of scant resources) — to flimsy results.

[...] Chris Chartier dreamed of a distributed lab network, with researchers in outposts all around the world, who could work together, democratically, on choosing topics to study and recruiting a truly global, diverse participant pool for experiments. They’d preregister their study designs, meaning they promise to stick to a particular recipe in running and analyzing an experiment, which staves off the cherry-picking and p-hacking (a variety of practices to get data to yield a false positive) that were rampant before the replication crisis became apparent.

They’d keep everything transparent and accessible, and foster a culture of accountability to produce rigorous, meaningful work. The payoff would be to deeply study human psychology on a global scale, and to see in which ways human psychology varies around the world, and which ways it does not.

[...] The slate of research projects is ambitious and promising, but it faces many challenges. The accelerator is potentially a model for the future, but it still has to operate in the existing status quo of academia, including very limited funding and a lack of incentives from institutions for researchers — especially more junior faculty — to sign on to these large projects.

Under the current status quo, researchers get ahead and make progress in their careers by being the primary author on a big-idea study, not by being one of hundreds of authors playing a bit role in a huge project.

Its members are also largely volunteers, and mostly from North America and Europe. “We wanted it to be much more diverse, and we’re still struggling with that,” says Dana Basnight-Brown...

[...] Despite the challenges, the work continues. The members of the Psychological Science Accelerator still believe in the value of psychological research, even though — and perhaps because — the recent history of the replication crisis is upsetting to them... (MORE - details)
Syne
You can't really expect to thwart the hubris of people seeking personal aggrandizement by promising them no chance for significant recognition. That's not practical for humans. What you need is to incentivize refuting studies and impose greater penalties for flawed work. But academia really isn't geared toward merit, especially after tenure. Maybe a three-strikes rule that tenure is not immune to.
