Melanoma: A Pseudoepidemic of Skin Cancer Prompts New Screening Recommendations
https://sciencebasedmedicine.org/melanom...endations/
INTRO (Harriet Hall): There appeared to be an epidemic of melanoma skin cancer, but it seems to be a pseudoepidemic caused by overdiagnosis. Screening everyone with skin exams does more harm than good and can no longer be recommended... (MORE - details)
Science needs a radical overhaul
https://iai.tv/articles/why-science-need..._auid=2020
EXCERPTS: Scientists have a problem. We are discovery junkies. [...] This drive can be channeled in positive ways, but can do serious damage to science and to society when it goes unchecked. And what’s worse, the journals we publish our science in are the enablers that pretend they are protecting us.
The problem runs deep. Science selects for people who are naturally curious, and then it heaps on rewards and incentives that amplify their drive to find new things. Unfortunately, not all findings are equally gratifying, or equally valued – learning that your potential new cure [...] doesn’t work is just as informative as finding out that it does work, but it just doesn’t feel nearly as good. What’s more, the negative answers aren’t rewarded nearly as much as the positive ones. Journals don’t want to publish the negative results. Awards are rarely given out for rigorously testing a good idea that turned out to be wrong. A track record of negative results is not going to get you a grant.
A system that only publishes positive results and sweeps negative results under the rug would be bad enough, but it gets worse. There’s good reason to think that in some fields, many of the positive results aren’t real discoveries – they are quite likely to be false positives. In my field of psychology, the evidence of this problem is piling up, even if many of the leaders in the field still deny it. [...] We know false discoveries are a big problem in psychology because we’ve looked. Most fields haven’t even begun to look.
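[Aside, not from the article: a back-of-the-envelope calculation shows how a field can end up with mostly false positives, in the spirit of Ioannidis's 2005 "Why Most Published Research Findings Are False" argument. The prior, power, and alpha values in this Python sketch are illustrative assumptions, not figures from the excerpt.]

# Illustrative sketch: what fraction of statistically significant
# results reflect real effects, given how often tested hypotheses
# are true, typical statistical power, and the false-positive rate.

def positive_predictive_value(prior: float, power: float, alpha: float) -> float:
    """Fraction of significant results that are true discoveries."""
    true_positives = prior * power          # true hypotheses that test positive
    false_positives = (1 - prior) * alpha   # false hypotheses that test positive
    return true_positives / (true_positives + false_positives)

# Hypothetical values: 10% of tested hypotheses are true, studies have
# 35% power, and the significance threshold is alpha = 0.05.
ppv = positive_predictive_value(prior=0.10, power=0.35, alpha=0.05)
print(f"PPV = {ppv:.2f}")  # ~0.44: under these assumptions, most
                           # published "discoveries" are false positives

[Publication bias makes this worse: if only the significant results get published, the literature shows the 44% true and 56% false positives, and almost none of the informative negative results.]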
[...] Unfortunately, in many sciences, no one in the system is particularly motivated to find out if a discovery is real or not. ... So it’s not exactly that journals and editors are being duped – in fact, they’re a big part of the problem.
Indeed, after 10 years as a journal editor, seeing how things work behind the scenes, I’m convinced that journals and the people who run them (editors, publishers, societies) are a bigger culprit for the spread of bad science than are individual researchers. Journals compete to be the most prestigious, but the race for prestige is not determined by who provides the best quality control. [...] journals are rewarding scientists for being flashy, for producing big, bold findings, and they are looking the other way when it comes to questions about whether those findings are reliable and whether the methods were rigorous. This reality is in stark contrast to the common myth about peer review – that journal-based peer review is a quality filter, and that the most prestigious journals have the most stringent filter. But the myth persists.
This misplaced faith in prestigious journals’ peer review system is doing serious damage to science. Scientists continue to chase the reward of getting published in prestigious journals [...] Scientists who – consciously or not – use methods that get extravagant and often false results get rewarded; they get jobs and grants and train future generations of scientists. Those who produce the more humble but accurate results get weeded out.
Why do we all – scientists, the public, funders – continue to buy into this myth of quality control? [...] One reason is that we are too easily enamored of discoveries. ... If we uncovered and retracted most false discoveries ... it takes an almost superhuman amount of self-control to wait for the much-rarer hit of true discovery rather than the immediate gratification of illusory discovery.
[...] Even if we wanted to do the right thing and evaluate scientific papers based on their quality ... it’s not clear how we’d do that. ... even if we all agree that impact is not a good measure of quality, that leaves unsolved the difficult question of how to measure quality.
Luckily, we don’t need to wait for a definitive answer to “what is quality?” to start improving the quality of published work. For example, requiring authors to transparently report their methods, including materials and procedures, and, to the extent possible, their data and code, would already be an important step towards enabling reviewers and readers to evaluate the quality of the research [...] Similarly, conducting peer review before the research is carried out (i.e., on the proposed research question, design, methods, and planned analyses, as is done in Registered Reports) helps ensure that research is judged based on its rigor, rather than based on whether the findings are exciting.
[...] journals need to make their peer review process transparent – by publishing the reviews and editors’ decision letters along with published papers – so that readers can evaluate what factors are emphasized during peer review.
Finally, journals should also take seriously concerns raised about papers they have published. [...] Too often, researchers who identify serious problems with published papers are ignored, or worse, by the very institutions charged with correcting the record...
These measures aren’t enough – quality is more than just transparency, replicability, reproducibility, etc. In fact, these are extremely low bars, and we need to think bigger about what it means to evaluate the quality of a research paper [...] But the steps I’ve described here are all very simple steps journals can take to signal a commitment to quality – they are the bare minimum we should expect from journals that ask for our trust... (MORE - details)