AI Tool Flood Threatens Academic Publishing with Low-Quality Submissions, Study Finds

New study in Organization Science reveals AI tools are flooding academic journals with low-quality submissions, overwhelming peer review and threatening scientific integrity.

Sflintl · 2026-05-02 08:55:00 · Science & Space

Breaking: AI-Generated Papers Surge in Academic Journals

The widespread use of artificial intelligence by researchers is triggering an alarming wave of poorly written and scientifically shallow submissions, according to a new study published in the journal Organization Science.

Source: phys.org

Researchers found that while AI tools can assist with summarizing data and polishing language, they are also being used to generate entire manuscripts with minimal human oversight. This has led to a sharp increase in submissions that lack original insight or rigorous methodology.

“We are witnessing a deluge of papers that are superficially correct but intellectually empty,” said Dr. Elena Torres, lead author of the study and professor of scientific integrity at the University of Barcelona. “Editors report spending more time desk-rejecting AI-fluffed manuscripts than ever before.”

Background: The Double-Edged Sword of AI in Research

Language models like OpenAI’s GPT series have become popular among scientists for tasks such as literature reviews, hypothesis generation, and even drafting peer reviews. However, the ease of generating text has also lowered the barrier for submitting incomplete or derivative work.

The study analyzed over 1,200 manuscripts submitted to 15 journals across fields like biology, physics, and economics. It found that papers flagged as potentially AI-generated were 3.5 times more likely to be rejected for poor writing quality or lack of substantive contribution.

“The same technology that can democratize science is now overwhelming quality-control systems,” noted co-author Dr. Mark Chen, an AI ethics researcher at MIT. “Journals are struggling to keep up with the volume, and many are resorting to new screening tools—which themselves have limitations.”

What This Means for Science and Peer Review

The findings have immediate implications for the integrity of academic publishing. If left unchecked, AI-generated work could corrupt the peer-review process, waste reviewers' time, and inflate publication counts for researchers who prioritize quantity over quality.

Editors and publishers are now debating mandatory disclosure of AI use, as well as automated detection software. However, the study warns that current detectors are not foolproof and can produce false positives, potentially penalizing legitimate authors who use AI as a tool.

“We risk entering an arms race between generative AI and detection tools,” said Dr. Torres. “The real solution is cultural: we need to reward rigorous, reproducible science—not just publish-or-perish metrics.”

Some universities have already updated their guidelines to require authors to explicitly state how AI was used. But enforcement remains inconsistent, and many journals lack the resources to verify each manuscript.

The study’s authors call for a global task force on AI in publishing, modeled on the Committee on Publication Ethics (COPE). They also urge funders to penalize institutions that produce high volumes of low-quality, AI-assisted output.

For now, scientists and editors are navigating a new reality: AI can accelerate discovery, but it also threatens to drown the academic conversation in a tide of noise.
