Will programmers ever be replaced by AI?

#21
C C Offline
(Oct 24, 2024 06:23 PM)Secular Sanity Wrote: https://arstechnica.com/science/2024/10/...re-to-lie/

[...] In both cases, supervised learning resulted in the higher number of correct answers, but also in a higher number of incorrect answers and reduced avoidance. The more difficult the question and the more advanced model you use, the more likely you are to get well-packaged, plausible nonsense as your answer. [...] “What you can do today is use AI in areas where you are an expert yourself or at least can verify the answer with a Google search afterwards..."

Heh. As humans get lazier and lazier due to this, even the traditional "expert sources" themselves will have largely been written partially or wholly by AI. But the excuse offered by the last item below is true: human-produced research and scholarship today is itself rife with errors, fraud, invalid science, replication problems, biases, predatory publishing (fake journals), etc.

I.e., the standards were falling before LLMs, thus it's easy to justify looking the other way and allowing the new transition to march on.

Some scientists can't stop using AI to write research papers
https://www.theregister.com/2024/05/03/a..._articles/

The promise and perils of using AI for research and writing
https://www.apa.org/topics/artificial-in...ch-writing
- - - - - - -

The advent of AI peer review
https://perspectivesblog.sagepub.com/blo...grity-team

Pressure on peer reviewers to provide timely and thorough assessments means that the temptation to replace human reviewers with AI technology is growing. There have been industry-wide discussions on whether AI-generated reviews can become the future of peer review, either augmenting or replacing humans in the process.
- - - - - -

The problem is poor quality control, not AI
https://theconversation.com/ai-assisted-...-ok-229416

The most serious problem with AI is the risk of introducing unnoticed errors, leading to sloppy scholarship. Instead of banning AI, we should try to ensure that mistaken, implausible or biased claims cannot make it onto the academic record. After all, humans can also produce writing with serious errors, and mechanisms such as peer review often fail to prevent its publication.
Reply
#22
Secular Sanity Offline
You’d know better than me, CC, but remember back in the day (before Google) when you’d search something and you’d get thousands of results of raw data, and you’d just have to narrow it down with “[…]”? Now, you get maybe ten pages, which are mostly repeats of the same.

You consider yourself a sleuth, so what are your thoughts on the changes? Do you think we might have been biased and just assumed that we were able to separate the wheat from the chaff, or has something changed?
Reply
#23
C C Offline
(Oct 25, 2024 02:56 AM)Secular Sanity Wrote: You’d know better than me, CC, but remember back in the day (before Google) when you’d search something and you’d get thousands of results of raw data, and you’d just have to narrow it down with “[…]”? Now, you get maybe ten pages, which are mostly repeats of the same.

You consider yourself a sleuth, so what are your thoughts on the changes? Do you think we might have been biased and just assumed that we were able to separate the wheat from the chaff, or has something changed?

My guess is that content mills, spamdexing, etc. are still heavily winning the cat-and-mouse game.

Generative AI is supposedly going to make this worse, despite Google's claim that "With new breakthroughs in generative AI, we’re again reimagining what a search engine can do."

The problem is that the "enemy" has resourcefully figured out what generative AI can do for them, too.
Reply