EXCERPT: If you follow social-science news, maybe you saw the headlines recently: “Conservative political beliefs not linked to psychotic traits, as study claimed,” noted Retraction Watch, for example. As the site explained, four political science and psychology papers published since 2010 have now been corrected for wrongly implying a positive correlation between psychoticism (a trait that isn’t what it sounds like — we’ll get to that) and conservatism. What the researchers should have reported, it turned out, was an inverse correlation: The higher someone rates for psychoticism, the more likely they are to be liberal. There was an error in the way the researchers coded and interpreted their data.
Given the common shortcomings of media coverage of social science, and given that we’re turning a corner into peak presidential-election season, it’s no surprise that conservatives had a field day with this news, ignoring the fact that psychoticism, in this case, doesn’t mean psychotic in the everyday sense of the word. “Epic Correction of the Decade,” trumpeted Power Line, where Steven Hayward snidely suggested that “maybe the authors were hoping for a job with Dan Rather or Katie Couric if tenure didn’t come through?” “Science says liberals, not conservatives, are psychotic,” went a New York Post headline. “Well, pundits and Senator John McCain have called Trump supporters crazies,” said a Fox & Friends host, “but science says liberals, not conservatives, are psychotics.” On a live version of Slate’s Political Gabfest at the Aspen Ideas Festival, Mitch Daniels, the president of Purdue University and the former governor of Indiana, referenced the corrections as evidence that liberals are more authoritarian than conservatives.
At first glance, this seems like a fairly straightforward story: Researchers get something wrong, discover their error, and fix it. And the media, as is so frequently the case, respond with overblown, too-cute-by-half coverage. If you dig deeper and talk to the people who were actually involved, though, what emerges is a frustrating, occasionally bizarre story of scientific dysfunction. And while this is by no means one of the biggest errors to have made its way into the social-science literature, it’s a useful cautionary tale about what happens when the norms social scientists are frantically trying to establish — replicability, openness, and data-sharing — are ignored....
MORE: http://nymag.com/scienceofus/2016/07/why...icism.html