Scivillage.com Casual Discussion Science Forum

In psychology & other social sciences, many studies [still] fail reproducibility test
https://www.npr.org/sections/health-shot...ibility-te

EXCERPT: The world of social science got a rude awakening a few years ago, when researchers concluded that many studies in this area appeared to be deeply flawed. Two-thirds could not be replicated in other labs. Some of those same researchers now report that those problems still frequently crop up, even in the most prestigious scientific journals. But their study, published Monday in *Nature Human Behaviour*, also finds that social scientists can actually sniff out the dubious results with remarkable skill.

[...] The results were better than the average of a previous review of the psychology literature, but still far from perfect. Of the 21 studies, the experimenters were able to reproduce 13. And the effects they saw were on average only about half as strong as had been trumpeted in the original studies.

The remaining eight were not reproduced. "A substantial portion of the literature is reproducible," [Brian] Nosek concludes. "We are getting evidence that someone can independently replicate [these findings]. And there is a surprising number [of studies] that fail to replicate."

[...] As part of the reproducibility study, about 200 social scientists were surveyed and asked to predict which results would stand up to the re-test and which would not. "They're taking bets with each other, against us," says Anna Dreber, an economics professor [...] and coauthor of the new study. It turns out, "these researchers were very good at predicting which studies would replicate," she says. "I think that's great news for science."

[...] But if social scientists were really good at identifying flawed studies, why did the editors and peer reviewers at *Science* and *Nature* let these eight questionable studies through their review process? "The likelihood that a finding will replicate or not is one part of what a reviewer would consider," says Nosek. "But other things might influence the decision to publish. It may be that this finding isn't likely to be true, but if it is true, it is super important, so we do want to publish it because we want to get it into the conversation."

MORE: https://www.npr.org/sections/health-shot...ibility-te