Chatbots Are Writing Letters to Scientific Journals, Editors Report


The Problem: AI-Penned Letters Flood Scientific Publications

Scientific journals worldwide are facing an unexpected deluge: letters to the editor written by artificial intelligence systems rather than by human researchers. According to a new analysis, chatbots are drafting correspondence for prestigious medical and scientific publications, raising concerns about the integrity of scholarly communication.

The phenomenon gained attention after researchers discovered that these AI systems can convincingly mimic the writing style of experts in specific fields. When hunting for references in specialized areas with limited literature, the language models sometimes cite the targeted researchers’ own work to bolster their arguments.

A Telltale Sign Emerges

The issue came to light through the experience of Dr. Carlos Chaccour, a tropical disease specialist at the University of Navarra in Spain. After publishing a paper on malaria control in The New England Journal of Medicine, he received a strongly worded letter questioning his research.

What made this unusual was the letter’s specific references to studies Dr. Chaccour himself had authored. Suspicious of the coincidence, he investigated and concluded that the letter must have been generated by a large language model.

A Pattern Unfolds

The incident was not isolated. Dr. Chaccour and his team analyzed more than 730,000 letters published in scientific journals since 2005 and found a dramatic increase in suspicious correspondence coinciding with the widespread availability of advanced AI systems.

The evidence shows authors suddenly producing an extraordinary volume of letters after 2023. One researcher published 234 letters in a single year across multiple journals. Another author went from zero published letters in 2023 to 84 letters by 2025.

Journal Editors Sound the Alarm

The problem extends beyond Dr. Chaccour’s experience. Dr. Eric Rubin, editor-in-chief of The New England Journal of Medicine, acknowledged the concerning incentive for authors to use AI to boost their publication records.

“Letters to the editor published in scientific journals are listed in databases that also list journal articles, and they count as much as an article,” Dr. Rubin explained. “For doing a very small amount of work, someone can get an article in The New England Journal of Medicine on their CV. The incentive to cheat is high.”

The Scale of the Issue

The study revealed alarming statistics:

  • 6% of letters in 2023 came from prolific authors (those with three or more published letters in a year)
  • That share rose to 12% in 2024
  • The rate is now approaching 22%

Dr. Amy Gelfand, editor-in-chief of the journal Headache, has noticed that suspicious letters often arrive shortly after a paper is published, far faster than the weeks human authors typically take to respond.

A Growing Concern

The proliferation of AI-generated letters represents a significant threat to scientific discourse. These automated communications often masquerade as legitimate contributions but lack the expertise and nuanced understanding that human researchers bring.

“Their [AI] output may appear plausible, but it lacks the depth, context, and critical thinking that characterize genuine scholarly exchange,” Dr. Chaccour observed.

As Dr. Chaccour put it, the AI-generated letters “are invading journals like Omicron,” a reference to the Covid variant that rapidly displaced other strains.

The scientific community now faces a critical challenge: maintaining the integrity of scholarly communication while navigating the era of powerful language models.