AI 'Slop' Is Distorting Science: Researchers Push for Mandatory Disclosure

Science is drowning in AI-generated garbage—and researchers are demanding a cleanup.
They call it 'slop': the flood of low-quality, AI-produced text, images, and data that's seeping into academic journals and research databases. It's not just a typo here or a weird image there. This synthetic sludge is warping the foundational knowledge we rely on, creating a hall of mirrors where studies reference other AI-generated studies in an endless, meaningless loop.
The Fix: Mandatory Disclosure
The proposed solution is blunt-force transparency. A growing coalition of scientists is pushing for journals and conferences to mandate that any AI-assisted or AI-generated content be clearly flagged. No more hiding behind the curtain. If a language model wrote your literature review, drafted your methodology, or created your figures, you have to say so. The goal is to let readers and peer reviewers apply the appropriate skepticism.
Trust Is the First Casualty
The core issue isn't the tools themselves. It's the deception, intentional or not. When you can't tell what's human-curated and what's machine-hallucinated, the entire edifice of peer-reviewed science starts to crack. How do you replicate a study based on AI-invented protocols? How do you trust a meta-analysis polluted with synthetic papers? The push for disclosure is a last-ditch effort to rebuild the walls between rigorous research and automated content farms.
Forget crypto scams—the next big grift might be selling AI-generated research papers to desperate academics, a cynical new revenue stream in the publish-or-perish economy.
The clock is ticking. Every undisclosed AI slop paper that gets published doesn't just waste time; it actively corrodes our shared understanding of the world. The call for labels is a fight for science's soul, a desperate bid to keep the signal clear through the synthetic noise.
Conferences crack down as low-quality papers overwhelm reviewers
Researchers warned early that unchecked use of automated writing tools could damage the field. Inioluwa Deborah Raji, an AI researcher at the University of California, Berkeley, said the situation turned chaotic fast.
“There is a little bit of irony to the fact that there’s so much enthusiasm for AI shaping other fields when, in reality, our field has gone through this chaotic experience because of the widespread use of AI,” she said.
Hard data shows how widespread the problem became. A Stanford University study published in August found that up to 22 percent of computer science papers showed signs of large language model use.
Pangram, a text-analysis start-up, reviewed submissions and peer reviews at the International Conference on Learning Representations (ICLR) in 2025. It estimated that 21 percent of reviews were fully generated by AI, while more than half used it for tasks like editing. Pangram also found that 9 percent of submitted papers had more than half their content produced this way.
The issue reached a tipping point in November. Reviewers at ICLR flagged a paper suspected of being generated by AI that nonetheless ranked in the top 17 percent by reviewer score. In January, the detection firm GPTZero reported more than 100 AI-generated errors across 50 papers presented at NeurIPS, widely seen as the top venue for advanced research in the field.
As concerns grew, ICLR updated its usage rules before the conference. Papers that fail to disclose extensive use of language models now face rejection. Reviewers who submit low-quality evaluations created with automation risk penalties, including having their own papers declined.
Hany Farid, a computer science professor at the University of California, Berkeley, said, “If you’re publishing really low-quality papers that are just wrong, why should society trust us as scientists?”
Paper volumes surge while detection struggles to keep up
Per the report, NeurIPS received 21,575 papers in 2025, up from 17,491 in 2024 and 9,467 in 2020. One author submitted more than 100 papers in a single year, far beyond what is typical for one researcher.
Thomas G. Dietterich, emeritus professor at Oregon State University and chair of the computer science section of arXiv, said uploads to the open repository also rose sharply.
Still, researchers say the cause is not simple. Some argue the increase comes from more people entering the field. Others say heavy use of AI tools plays a major role. Detection remains difficult because there is no shared standard for identifying automated text. Dietterich said common warning signs include made-up references and incorrect figures. Authors caught submitting such fabricated material can be temporarily banned from arXiv.
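One of those warning signs, made-up references, can at least be spot-checked mechanically. The snippet below is a hypothetical sketch rather than any tool arXiv or the conferences actually use: it assumes network access, the Python requests library, and that the doi.org proxy answers with a redirect for registered DOIs and a 404 for unknown ones. A failed lookup is only a flag for human follow-up, since many legitimate references lack DOIs or are simply mistyped.

```python
# Hypothetical spot-check for fabricated references: does each cited DOI
# actually resolve at the doi.org proxy? A miss is a flag, not proof.
import requests

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    """Return True if doi.org redirects for this DOI (i.e., it is registered)."""
    resp = requests.head(f"https://doi.org/{doi}",
                         allow_redirects=False, timeout=timeout)
    return resp.status_code in (301, 302, 303, 307, 308)

if __name__ == "__main__":
    dois = [
        "10.1038/s41586-020-2649-2",   # a real DOI (the NumPy paper in Nature)
        "10.99999/entirely.made.up",   # illustrative; expected not to resolve
    ]
    for doi in dois:
        status = "resolves" if doi_resolves(doi) else "NOT FOUND - check by hand"
        print(f"{doi}: {status}")
```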
Commercial pressure also sits in the background. High-profile demos, soaring salaries, and aggressive competition have pushed parts of the field to focus on quantity. Raji said moments of hype attract outsiders looking for fast results.
At the same time, researchers say some uses are legitimate. Dietterich noted that writing quality in papers from China has improved, likely because language tools help authors rewrite their English more clearly.
The issue now stretches beyond publishing. Companies like Google, Anthropic, and OpenAI promote their models as research partners that can speed up discovery in areas like life sciences. These systems are trained on academic text.
Farid warned that if training data includes too much synthetic material, model performance can degrade. Past studies show large language models can collapse into producing nonsense when trained on uncurated, machine-generated data.
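The dynamic is easier to see in a toy setting. The sketch below is purely illustrative and is not the methodology of the studies cited above: it repeatedly fits a one-dimensional Gaussian to samples drawn from the previous generation's fit, a simplified stand-in for retraining on uncurated synthetic output. The library, sample sizes, and generation count are arbitrary choices; with small per-generation samples, the estimated spread of the distribution typically drifts downward and detail is lost.

```python
# Toy illustration of model collapse: a "model" (here, a fitted Gaussian)
# retrained each generation only on samples from the previous generation's
# model gradually loses the spread of the original data.
# Illustrative sketch only; the parameters below are arbitrary choices.
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "real" data drawn from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=10_000)

for generation in range(31):
    mu, sigma = data.mean(), data.std()        # "train" the model on current data
    if generation % 5 == 0:
        print(f"generation {generation:2d}: mean={mu:+.3f}  std={sigma:.3f}")
    # Each later generation sees only a small, uncurated synthetic sample
    # drawn from the current model, standing in for AI-written training text.
    data = rng.normal(loc=mu, scale=sigma, size=20)
```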
Farid said companies scraping research have strong incentives to know which papers are human-written. Kevin Weil, head of science at OpenAI, said tools still require human checks. “It can be a massive accelerator,” he said. “But you have to check it. It doesn’t absolve you from rigour.”