
https://www.nytimes.com/2025/07/02/healt...=url-share
INTRO: Scientists know it is happening, even if they don’t do it themselves. Some of their peers are using chatbots, like ChatGPT, to write all or part of their papers.
In a paper published Wednesday in the journal Science Advances, Dmitry Kobak of the University of Tübingen and his colleagues report that they found a way to track how often researchers are using artificial intelligence chatbots to write the abstracts of their papers. The A.I. tools, they say, tend to use certain words — like “delves,” “crucial,” “potential,” “significant” and “important” — far more often than human authors do. [...In 2024, there were a total of 454 words used excessively by chatbots, the researchers report...]
The group analyzed word use in more than 15 million biomedical abstracts published between 2010 and 2024, enabling them to spot the rising frequency of certain words.
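The article does not give the study's exact methodology, but the core idea it describes, comparing how often marker words appear in abstracts before and after chatbots became widespread, can be sketched as follows. The corpora, the marker list, and the `excess_usage` function below are all hypothetical illustrations, not the authors' actual code (the paper reports 454 excess words in 2024; only a few examples are quoted here).

```python
from collections import Counter

# Hypothetical marker list, taken from the examples quoted in the article.
MARKERS = {"delves", "crucial", "potential", "significant", "important"}

def word_frequencies(abstracts):
    """Relative frequency of each word across a corpus of abstracts."""
    counts = Counter()
    total = 0
    for text in abstracts:
        words = text.lower().split()
        counts.update(words)
        total += len(words)
    return {w: c / total for w, c in counts.items()}

def excess_usage(pre_corpus, post_corpus, markers=MARKERS):
    """Ratio of post-period to pre-period frequency for each marker word.

    A ratio well above 1.0 suggests the word became disproportionately
    common after chatbots entered widespread use. Words absent from the
    baseline corpus are skipped to avoid division by zero.
    """
    pre_freq = word_frequencies(pre_corpus)
    post_freq = word_frequencies(post_corpus)
    ratios = {}
    for w in markers:
        pre = pre_freq.get(w, 0.0)
        post = post_freq.get(w, 0.0)
        if pre > 0:
            ratios[w] = post / pre
    return ratios

# Toy corpora standing in for pre- and post-chatbot abstracts.
pre = ["these results are crucial for treatment",
       "we measure tumor growth in mice"]
post = ["this study delves into crucial mechanisms",
        "our crucial analysis delves deeper"]
print(excess_usage(pre, post))
```

In practice the study's scale (15 million abstracts) would require corpus tooling rather than in-memory lists, and a robust version would control for topic drift over time, but the frequency-ratio comparison is the recoverable gist.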
The findings tap into a debate in the sciences over when it is and is not appropriate to use A.I. helpers for writing papers.
[...] In the academic sciences, some researchers have grown wary of even a whiff of A.I. assistance in their publications.
Computer scientists are aware that A.I. favors certain words, although it’s not clear why, said Subbarao Kambhampati, a professor of computer science at Arizona State University and the past president of the Association for the Advancement of Artificial Intelligence. Some scientists, he said, have been deliberately refraining from using words like “delve” for fear of being suspected of using A.I. as a writing tool.
Other scientists seem blasé about the risk of being caught using chatbots... (MORE - details)