Article  Measuring censorship in science is challenging. Stopping it is harder still

C C
https://www.realclearscience.com/article...98121.html

INTRO: In a new paper for the Proceedings of the National Academy of Sciences, we, alongside colleagues from a diverse range of fields, investigate the prevalence and extent of censorship and self-censorship in science.

Measuring censorship in science is difficult. It’s fundamentally about capturing studies that were never published, statements that were never made, possibilities that went unexplored and debates that never ended up happening. However, social scientists have come up with some ways to quantify the extent of censorship in science and research.

For instance, statistical tests can evaluate “publication bias” – whether papers with findings tilting a specific way were systematically excluded from publication. Sometimes editors or reviewers reject findings that don’t point in the preferred direction with the preferred magnitude. Other times, scholars “file drawer” their own papers that don’t deliver statistically significant results pointing in the “correct” direction, because they assume (often rightly) that the study would be unable to find a home in a respectable journal, or because publishing the findings would come at a high reputational cost. Either way, the scientific literature ends up distorted, because evidence that cuts in the “wrong” direction is systematically suppressed.
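One common statistical check of this kind is a funnel-asymmetry test such as Egger's regression: if small, imprecise studies only get published when their results clear a significance threshold, the relationship between effect size and precision becomes distorted in a detectable way. The sketch below simulates that file-drawer process on made-up data (all numbers and the selection rule are illustrative assumptions, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical meta-analysis: 200 studies estimating a true effect of zero.
n_studies = 200
se = rng.uniform(0.05, 0.5, n_studies)   # standard errors vary with study size
effect = rng.normal(0.0, se)             # unbiased estimates centered on 0

# Simulated "file drawer": only studies whose z-score clears +1 get published.
published = effect / se >= 1.0
eff_pub, se_pub = effect[published], se[published]

def egger_intercept(effects, ses):
    """Egger-style test: regress z = effect/se on precision = 1/se.
    An intercept far from zero suggests funnel-plot asymmetry,
    consistent with publication bias."""
    z = effects / ses
    precision = 1.0 / ses
    X = np.column_stack([np.ones_like(precision), precision])
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return beta[0]  # the intercept term

print("intercept, full sample:    ", round(egger_intercept(effect, se), 2))
print("intercept, published only: ", round(egger_intercept(eff_pub, se_pub), 2))
```

On the full (uncensored) sample the intercept sits near zero; after the selection step it shifts well above zero, which is the signature such tests look for in real literatures.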

Audit studies can provide further insight. Scholars submit identical papers but change things that should not matter (like the author’s name or institutional affiliation), or reverse the direction of the findings (leaving all else the same), to test for systematic variation in whether the papers are accepted or rejected and in what kinds of comments reviewers offer, depending on who the author is or what they find. Other studies collect data on all papers submitted to particular journals in specific fields to test for patterns in whose work gets accepted or rejected and why. This can uncover whether editors or reviewers are applying standards inconsistently in ways that shut out certain perspectives.

Additionally, databases from organizations like the Foundation for Individual Rights and Expression or PEN America track attempts to silence or punish scholars, alongside state policies or institutional rules that undermine academic freedom. These data can be analyzed to understand the prevalence of censorious behaviors, who partakes in them, who is targeted, how these behaviors vary across contexts, and what the trendlines look like over time.

Supplementing these behavioral measures, many polls and surveys ask academic stakeholders how they understand academic freedom, their experiences with being censored or observing censorship, the extent to which they self-censor (and about what), or their appetite for censoring others. These self-reports can provide additional context to the trends observed by other means – including and especially with respect to the question of why people engage in censorious behaviors.

One thing that muddies the waters, however, is that many scholars understand and declare themselves as victims of censorship when they have not, in fact, been censored... (MORE - details)

