22% of computer science papers may contain AI-generated text, study finds
Since the launch of ChatGPT in 2022, the use of generative artificial intelligence in academic writing has become increasingly noticeable. A large-scale study published in the journal Nature Human Behaviour found signs of AI involvement in up to 22.5% of computer science papers, a Kazinform News Agency correspondent reports, citing Science.

The authors analyzed more than one million papers and preprints published between 2020 and 2024, focusing on abstracts and introductions, the sections most often edited with the help of language models. To detect signs of AI use, the researchers applied statistical methods that track the frequency of certain words disproportionately common in AI-generated text, such as "pivotal," "showcase," and "intricate."
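To give a rough sense of how such word-frequency tracking works, the minimal sketch below compares how often marker words appear in a recent set of abstracts versus a pre-ChatGPT baseline. This is only an illustration of the general idea, not the study's actual statistical method; the marker list reuses the three words quoted above, and the sample abstracts are invented for demonstration.

```python
# Minimal sketch of frequency-based detection: compare how often known
# "AI-flavored" marker words appear in abstracts before and after a cutoff.
# The marker set and the toy abstracts are illustrative assumptions,
# not data or code from the study itself.
import re

MARKER_WORDS = {"pivotal", "showcase", "intricate"}  # examples cited in the study

def marker_rate(abstracts):
    """Fraction of abstracts containing at least one marker word."""
    hits = 0
    for text in abstracts:
        tokens = set(re.findall(r"[a-z]+", text.lower()))
        if tokens & MARKER_WORDS:
            hits += 1
    return hits / len(abstracts)

# Toy corpora (hypothetical): pre-ChatGPT baseline vs. recent abstracts.
baseline = [
    "We study graph algorithms for routing in sparse networks.",
    "A new lower bound on sorting networks is proved.",
]
recent = [
    "Our pivotal framework showcases intricate attention patterns.",
    "We present a standard convergence analysis for gradient descent.",
]

# A rise in marker-word frequency relative to the baseline serves as a
# rough corpus-level proxy for AI involvement.
print(f"Baseline rate: {marker_rate(baseline):.2f}")
print(f"Recent rate:   {marker_rate(recent):.2f}")
print(f"Excess usage:  {marker_rate(recent) - marker_rate(baseline):.2f}")
```

Note that this kind of signal only works at the corpus level: any individual author may legitimately use these words, so the approach estimates aggregate trends rather than flagging specific papers.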
According to James Zou, a co-author of the study and a computational biologist at Stanford University, a sharp increase in AI-generated content was seen just months after ChatGPT became publicly available. The trend was especially strong in fields closely tied to artificial intelligence, including computer science, electrical engineering, and related areas.
By comparison, signs of language model use were found in only 7.7% of math abstracts, with even lower rates in biomedical research and physics. Still, the trend is gradually spreading across all scientific fields.
Early on, the academic community tried to limit the use of generative AI. Many journals introduced policies requiring authors to disclose whether such tools had been used.
In practice, though, enforcing these rules has proven difficult. Some papers included obvious traces of language models, such as the phrases "regenerate response" or "my knowledge cutoff." Researchers, including University of Toulouse computer scientist Guillaume Cabanac, began compiling databases of questionable publications.
Today, detecting AI involvement is becoming increasingly difficult. Authors have learned to avoid giveaway phrases, and current detection tools often deliver inconsistent results, especially when evaluating work by non-native English speakers.
Risks and challenges
Although the study focused mainly on abstracts and introductions, co-author Dmitry Kobak, a data scientist at the University of Tübingen, warns that researchers may increasingly turn to AI to write the sections that review previous studies. This could make those parts of papers more uniform and eventually create a vicious cycle in which new language models are trained on content generated by earlier ones.
The publication of AI-generated papers that include errors or fabricated information raises concerns about the reliability of the peer review process and may undermine trust in scientific publishing overall.
Earlier, Kazinform News Agency reported on the influence of artificial intelligence on the labor market and future jobs.