The World Health Organization promotes quackery yet again
https://sciencebasedmedicine.org/the-wor...yet-again/
INTRO: The World Health Organization concluded its First WHO Traditional Medicine Global Summit on Friday. The conference was co-hosted by the Indian government and held in Gandhinagar, India, where the WHO had set up a Global Centre for Traditional Medicine with the help of $250 million from the Indian government.
Unfortunately, as we have long lamented, the WHO has long had a penchant for promoting “traditional medicine,” particularly Traditional Chinese Medicine but also Ayurveda and others, as “evidence-based” and worthy of being “integrated” with science-based medicine, and this conference is just one more example of how far down that road the WHO has gone. To get an idea of how this meeting is being described and promoted by its stakeholders, I refer you to a statement by the Indian government released on August 17, the first day of the summit... (MORE - details)
RELATED: Decolonization of knowledge
What are LLMs bad at? Reference lists
https://edifix.com/blog/using-edifix-res...ity-issues
EXCERPT: ChatGPT does not have a true understanding of the questions it is asked or the tasks it is set. Among the “nonsensical answers” that ChatGPT can give, one type especially pertinent to research publishing is its inability to generate relevant and accurate citations.
- - - - - - - - - - - -
Use of AI Is Seeping Into Academic Journals—and It’s Proving Difficult to Detect
https://www.wired.com/story/use-of-ai-is...to-detect/
EXCERPTS: The rapid rise of generative AI has stoked anxieties across disciplines. High school teachers and college professors are worried about the potential for cheating. News organizations have been caught with shoddy articles penned by AI. And now, peer-reviewed academic journals are grappling with submissions in which the authors may have used generative AI to write outlines, drafts, or even entire papers, but failed to make the AI use clear.
Journals are taking a patchwork approach to the problem. [...] Experts say there’s a balance to strike in the academic world when using generative AI—it could make the writing process more efficient and help researchers more clearly convey their findings. But the tech—when used in many kinds of writing—has also dropped fake references into its responses, made things up, and reiterated sexist and racist content from the internet, all of which would be problematic if included in published scientific writing.
If researchers use these generated responses in their work without strict vetting or disclosure, they raise major credibility issues. [...] generative AI is not all bad—it could help researchers whose native language is not English write better papers...
[...] For now, it's impossible to know how extensively AI is being used in academic publishing, because there’s no foolproof way to check for AI use, as there is for plagiarism... (MORE - missing details)