
The trouble with scientists: Tackling human biases in science

#1
C C Offline
http://nautil.us/issue/24/error/the-trou...scientists

EXCERPT: Sometimes it seems surprising that science functions at all. In 2005, medical science was shaken by a paper with the provocative title “Why most published research findings are false.” Written by John Ioannidis, a professor of medicine at Stanford University, it didn’t actually show that any particular result was wrong. Instead, it showed that the statistics of reported positive findings were not consistent with how often one should expect to find them. As Ioannidis concluded more recently, “many published research findings are false or exaggerated, and an estimated 85 percent of research resources are wasted.”

It’s likely that some researchers are consciously cherry-picking data to get their work published. And some of the problems surely lie with journal publication policies. But the problems of false findings often begin with researchers unwittingly fooling themselves: they fall prey to cognitive biases, common modes of thinking that lure us toward wrong but convenient or attractive conclusions. “Seeing the reproducibility rates in psychology and other empirical science, we can safely say that something is not working out the way it should,” says Susann Fiedler, a behavioral economist at the Max Planck Institute for Research on Collective Goods in Bonn, Germany. “Cognitive biases might be one reason for that.”

Psychologist Brian Nosek of the University of Virginia says that the most common and problematic bias in science is “motivated reasoning”: We interpret observations to fit a particular idea. Psychologists have shown that “most of our reasoning is in fact rationalization,” he says. In other words, we have already made the decision about what to do or to think, and our “explanation” of our reasoning is really a justification for doing what we wanted to do—or to believe—anyway. Science is of course meant to be more objective and skeptical than everyday thought—but how much is it, really?

Whereas the falsification model of the scientific method championed by philosopher Karl Popper posits that the scientist looks for ways to test and falsify her theories—to ask “How am I wrong?”—Nosek says that scientists usually ask instead “How am I right?” (or equally, to ask “How are you wrong?”). When facts come up that suggest we might, in fact, not be right after all, we are inclined to dismiss them as irrelevant, if not indeed mistaken. The now infamous “cold fusion” episode in the late 1980s, instigated by the electrochemists Martin Fleischmann and Stanley Pons, was full of such ad hoc brush-offs.

[...] Statistics may seem to offer respite from bias through strength in numbers, but they are just as fraught. Chris Hartgerink of Tilburg University in the Netherlands works on the influence of “human factors” in the collection of statistics. He points out that researchers often attribute false certainty to contingent statistics. “Researchers, like people generally, are bad at thinking about probabilities,” he says. While some results are sure to be false negatives—that is, results that appear incorrectly to rule something out—Hartgerink says he has never read a paper that concludes as much about its findings. His recent research shows that as many as two in three psychology papers reporting non-significant results may be overlooking false negatives.
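Hartgerink’s point about overlooked false negatives follows directly from statistical power: when samples are small and the true effect is modest, most studies of a real effect will still come up non-significant. The quick simulation below is a minimal sketch of that idea, not anything from the article; the effect size, sample size, and critical value are illustrative assumptions.

```python
import random
import statistics

random.seed(42)

def two_sample_t(a, b):
    """Two-sample t statistic for equal-sized groups (pooled variance)."""
    n = len(a)
    va, vb = statistics.variance(a), statistics.variance(b)
    se = ((va + vb) / n) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

TRUE_EFFECT = 0.3   # assumed small-but-real standardized effect
N = 20              # assumed per-group sample size (an underpowered study)
TRIALS = 2000
T_CRIT = 2.024      # two-sided 5% critical value for df = 38

false_negatives = 0
for _ in range(TRIALS):
    control = [random.gauss(0.0, 1.0) for _ in range(N)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    if abs(two_sample_t(treated, control)) < T_CRIT:
        false_negatives += 1  # the effect is real, yet the test misses it

print(f"Non-significant results despite a real effect: "
      f"{false_negatives / TRIALS:.0%}")
```

Under these assumptions the large majority of the simulated studies are non-significant even though every one of them samples a genuine effect — exactly the kind of result a paper might report as “ruling out” the effect.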

Given that science has uncovered a dizzying variety of cognitive biases, the relative neglect of their consequences within science itself is peculiar. “I was aware of biases in humans at large,” says Hartgerink, “but when I first ‘learned’ that they also apply to scientists, I was somewhat amazed, even though it is so obvious...”
#2
Yazata Offline
C C Wrote:EXCERPT: Sometimes it seems surprising that science functions at all. In 2005, medical science was shaken by a paper with the provocative title “Why most published research findings are false.”

Maybe not false, but likely misleading.

Quote:It’s likely that some researchers are consciously cherry-picking data to get their work published.

Another problem is that the phrase 'statistically significant' is interpreted to mean 'valuable' and 'important', which isn't always the case.
We see it in drug testing. A new drug might indeed have a statistically significant effect in slowing the progress of a disease, based on sample sizes and whatnot. But the significant effect might be so small as to be worthless in practical terms. Perhaps untreated individuals live 52 additional weeks on average, and treated individuals live 53 additional weeks. If the drug costs $10,000 per weekly dose, that's basically a failure from a patient's point of view. But that additional week is likely to be spun as a significant success, with mathematics to prove it.
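With a large enough trial, even that one-week gain sails past the conventional significance threshold. The sketch below illustrates the point using the hypothetical numbers from the example above (52 vs. 53 weeks), plus assumed values for the spread and trial size — none of these come from a real study.

```python
import random
import statistics

random.seed(0)

N = 5000   # assumed per-arm size of a large hypothetical trial
untreated = [random.gauss(52.0, 10.0) for _ in range(N)]  # weeks survived
treated   = [random.gauss(53.0, 10.0) for _ in range(N)]

diff = statistics.mean(treated) - statistics.mean(untreated)
se = ((statistics.variance(treated) + statistics.variance(untreated)) / N) ** 0.5
t = diff / se

print(f"mean gain: {diff:.2f} weeks, t = {t:.1f}")  # t well beyond 1.96
print(f"standardized effect: {diff / 10.0:.2f}")    # tiny in practical terms
```

The t statistic lands far past the 5% cutoff, so the result is “significant” — yet the standardized effect is roughly 0.1, a gain most patients would never notice. Significance measures detectability, not importance.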

Quote:But the problems of false findings often begin with researchers unwittingly fooling themselves: they fall prey to cognitive biases, common modes of thinking that lure us toward wrong but convenient or attractive conclusions.

I think that's likely the case in a large percentage of papers published in the so-called "social sciences". Professors come to problems with preexisting conclusions and ideologies in place, and are basically seeking illustrations for what they already believe. I'm not convinced that it's always 'unwitting' either.

In science more generally, individual scientists will often become closely associated with particular ideas. If those ideas subsequently succeed, the scientist has the opportunity to be remembered as a visionary pioneer in an important breakthrough. But if the idea doesn't subsequently succeed, the scientist will become a mere historical footnote who wasted the best years of his/her career exploring a dead end.

So, sure, there are motivations for wanting particular ideas to succeed. Careers depend on it.

Quote:Psychologist Brian Nosek of the University of Virginia says that the most common and problematic bias in science is “motivated reasoning”: We interpret observations to fit a particular idea. Psychologists have shown that “most of our reasoning is in fact rationalization,” he says. In other words, we have already made the decision about what to do or to think, and our “explanation” of our reasoning is really a justification for doing what we wanted to do—or to believe—anyway.

Exactly.

Quote:Science is of course meant to be more objective and skeptical than everyday thought—but how much is it, really?

It gets even worse when access to publishing in journals and academic hiring/tenure decisions depend on expressing the "correct" views on particular "scientific" issues.

Quote:Given that science has uncovered a dizzying variety of cognitive biases, the relative neglect of their consequences within science itself is peculiar. “I was aware of biases in humans at large,” says Hartgerink, “but when I first ‘learned’ that they also apply to scientists, I was somewhat amazed, even though it is so obvious.....

That's why, as a scientific layperson, I often find myself approaching some of the assertions made in the name of 'science' with considerable skepticism.