
In psychology & other social sciences, many studies [still] fail reproducibility test

#1
C C
https://www.npr.org/sections/health-shot...ibility-te

EXCERPT: The world of social science got a rude awakening a few years ago, when researchers concluded that many studies in this area appeared to be deeply flawed. Two-thirds could not be replicated in other labs. Some of those same researchers now report that those problems still frequently crop up, even in the most prestigious scientific journals. But their study, published Monday in *Nature Human Behaviour*, also finds that social scientists can sniff out the dubious results with remarkable skill.

[...] The results were better than the average of a previous review of the psychology literature, but still far from perfect. Of the 21 studies, the experimenters were able to reproduce 13. And the effects they saw were on average only about half as strong as had been trumpeted in the original studies.

The remaining eight were not reproduced. "A substantial portion of the literature is reproducible," [Brian] Nosek concludes. "We are getting evidence that someone can independently replicate [these findings]. And there is a surprising number [of studies] that fail to replicate."

[...] As part of the reproducibility study, about 200 social scientists were surveyed and asked to predict which results would stand up to the re-test and which would not. "They're taking bets with each other, against us," says Anna Dreber, an economics professor [...] and coauthor of the new study. It turns out, "these researchers were very good at predicting which studies would replicate," she says. "I think that's great news for science."

[...] But if social scientists were really good at identifying flawed studies, why did the editors and peer reviewers at *Science* and *Nature* let these eight questionable studies through their review process? "The likelihood that a finding will replicate or not is one part of what a reviewer would consider," says Nosek. "But other things might influence the decision to publish. It may be that this finding isn't likely to be true, but if it is true, it is super important, so we do want to publish it because we want to get it into the conversation."

MORE: https://www.npr.org/sections/health-shot...ibility-te

