Scivillage.com Casual Discussion Science Forum
The “Big Five” Misinterpretations of Statistical Significance


The “Big Five” Misinterpretations of Statistical Significance - C C - Oct 9, 2015

http://www.statspecialist.com/blog/the-big-five-misinterpretations-of-statistical-significance/

EXCERPT: There is ample evidence that many of us do not know the correct interpretation of outcomes of statistical tests, or p values. For example, at the end of a standard statistics course, most students know how to calculate statistical tests, but they do not typically understand what the results mean (Haller & Krauss, 2002). About 80% of psychology professors endorse at least one incorrect interpretation of statistical tests (Oakes, 1986). It is easy to find similar misinterpretations in books and articles (Cohen, 1994), so it seems that psychology students acquire their false beliefs both from their teachers and from what they read. However, the situation is no better in other behavioral science disciplines (e.g., Hubbard & Armstrong, 2006).

Most misunderstandings about statistical tests involve overinterpretation, or the tendency to see too much meaning in statistical significance. Specifically, we tend to believe that statistical tests tell us what we want to know, but this is wishful thinking. Elsewhere I described statistical tests as a kind of collective Rorschach inkblot test for the behavioral sciences in that what we see in them has more to do with fantasy than with what is really there (Kline, 2004). Such wishful thinking is so pervasive that one could argue that much of our practice of hypothesis testing based on statistical tests is myth....
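
A minimal simulation makes the basic point concrete: a p value is the probability of obtaining data at least as extreme as those observed, computed under the assumption that the null hypothesis is true. It is not the probability that the null hypothesis is true, and it is not the probability that the result will replicate. The Python sketch below is not from the article; the sample sizes, the number of simulated studies, and the .05 cutoff are arbitrary choices for illustration. It runs many two-sample t tests in which the null is true by construction, and about 5% of them come out "significant" regardless.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate many two-sample t tests in which the null hypothesis is TRUE:
# both groups are drawn from the same normal distribution.
p_values = []
for _ in range(10_000):
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_ind(a, b)
    p_values.append(p)

p_values = np.array(p_values)

# Under a true null, p is (approximately) uniform on [0, 1], so roughly 5%
# of tests clear the .05 threshold by chance alone. The p value tells us how
# surprising the data are if the null holds, not how likely the null is.
print(f"Proportion p < .05 under a true null: {np.mean(p_values < 0.05):.3f}")
print(f"Proportion p < .01 under a true null: {np.mean(p_values < 0.01):.3f}")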

= = = = = = = =

Also: Statistical fallacy impairs post-publication mood

EXCERPT: [...] This mistake in analysis, which is far from unique to this paper, is discussed in a classic 2011 paper by Nieuwenhuis and colleagues: Erroneous analyses of interactions in neuroscience: a problem of significance. At the time of writing, the sentiment on PubPeer is that the paper should be retracted, in effect striking it from the scientific record. With commentary like this, you can see why PubPeer has previously been the target of legal action by aggrieved researchers who feel the site unfairly maligns their work....
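
The mistake in question is concluding that two effects differ because one is statistically significant and the other is not, instead of testing the difference between the effects (the interaction) directly. A hypothetical Python sketch of the contrast follows; it is not taken from either paper, and the group sizes, the common true effect of 0.4, and the random seed are arbitrary choices for illustration.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical change scores (post minus pre) for the same treatment measured
# in two independent groups. The true effect is identical (mean = 0.4) in both
# groups; only sampling noise differs between them.
group_a = rng.normal(loc=0.4, scale=1.0, size=25)
group_b = rng.normal(loc=0.4, scale=1.0, size=25)

# Erroneous approach: test each group's effect against zero separately and
# compare the verdicts.
_, p_a = stats.ttest_1samp(group_a, 0.0)
_, p_b = stats.ttest_1samp(group_b, 0.0)
print(f"Group A vs 0: p = {p_a:.3f}")   # may land below .05
print(f"Group B vs 0: p = {p_b:.3f}")   # may land above .05

# Correct approach: test the difference between the two effects directly.
_, p_diff = stats.ttest_ind(group_a, group_b)
print(f"A vs B (the comparison that matters): p = {p_diff:.3f}")

Because the true effect is the same in both groups, any pattern of "significant in A, not significant in B" is pure sampling noise; only the direct A-versus-B comparison speaks to whether the effects differ, and that is the test Nieuwenhuis and colleagues found was frequently skipped.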