Oct 24, 2024 09:25 PM
(This post was last modified: Oct 24, 2024 09:51 PM by C C.)
(Oct 24, 2024 06:23 PM)Secular Sanity Wrote: https://arstechnica.com/science/2024/10/...re-to-lie/
[...] In both cases, supervised learning resulted in a higher number of correct answers, but also in a higher number of incorrect answers and reduced avoidance. The more difficult the question and the more advanced the model you use, the more likely you are to get well-packaged, plausible nonsense as your answer. [...] “What you can do today is use AI in areas where you are an expert yourself or at least can verify the answer with a Google search afterwards..."
Heh. As humans grow lazier and lazier because of this, the traditional "expert sources" themselves will increasingly have been written partially or wholly by AI. But the excuse offered by the last item below is true: human-produced research and scholarship today is itself rife with errors, fraud, invalid science, replication problems, biases, predatory publishing (fake journals), etc.
I.e., the standards were falling before LLMs arrived, so it's easy to justify looking the other way and letting the new transition march on.
Some scientists can't stop using AI to write research papers
https://www.theregister.com/2024/05/03/a..._articles/
The promise and perils of using AI for research and writing
https://www.apa.org/topics/artificial-in...ch-writing
- - - - - - -
The advent of AI peer review
https://perspectivesblog.sagepub.com/blo...grity-team
Pressure on peer reviewers to provide timely and thorough assessments means that the temptation to replace human reviewers with AI technology is growing. There have been industry-wide discussions on whether AI-generated reviews can become the future of peer review, either augmenting or replacing humans in the process.
- - - - - -
The problem is poor quality control, not AI
https://theconversation.com/ai-assisted-...-ok-229416
The most serious problem with AI is the risk of introducing unnoticed errors, leading to sloppy scholarship. Instead of banning AI, we should try to ensure that mistaken, implausible or biased claims cannot make it onto the academic record. After all, humans can also produce writing with serious errors, and mechanisms such as peer review often fail to prevent its publication.
