ArXiv, a leading open-access repository, has stopped accepting review and position articles in its Computer Science category unless they have already undergone peer review at a journal or conference. The policy change, announced on October 31, follows an overwhelming influx of AI-generated survey papers, which moderators say has driven a decline in the quality and substance of these submissions.
With the rise of generative AI, and especially large language models like ChatGPT, it has become far easier for researchers to produce superficial surveys. ArXiv has received hundreds of such submissions per month in recent years, in stark contrast to the previously small number of high-quality reviews, typically written by established academics. This surge has forced a substantial change in how ArXiv handles these submissions, as it can no longer reliably assess the differences in quality.
According to ArXiv moderator Thomas G. Dietterich, the platform was forced into this drastic decision by the sheer volume of LLM-assisted survey papers. The moderator team is too small to perform a qualitative analysis of every submission, which places significant strain on moderators, who traditionally provide quality assurance without replicating the peer-review process.
A recent study published in Nature Human Behaviour supports the concern that a significant percentage of computer science abstracts are being tweaked with AI. This raises questions about the integrity of research and the ability to distinguish authentic, high-quality scientific contributions from machine-generated ones. The concern appears well founded: a study in Science Advances documents an exponential increase in the use of AI in research papers since the introduction of ChatGPT.
However, reactions from the research community have been mixed. Some, such as AI security researcher Stephen Casper, are concerned about the policy's negative impact on emerging researchers and those working in ethics and governance. They fear that the new requirements could hinder depth and the diversity of voices in research, especially for younger researchers and those who lack the resources of large institutions.
Others advocate a more nuanced approach, such as adding an unmoderated section to ArXiv. This suggests a possible way to preserve both quality and accessibility without excluding those conducting early-stage research from the platform.
ArXiv's policy shift reflects a broader trend within academic publishing; conferences such as CVPR 2025 have already taken similar measures, rejecting papers from reviewers with a record of untrustworthy behavior. It is part of a wider effort across the research community to maintain the integrity of scholarly work amid the explosive growth of AI technologies.
For investors and analysts seeking to understand the impact of these shifts, it is crucial to consider what this means for the quality and reliability of research within the crypto market. Scientific publications are a foundation for trust in new technologies and applications. Monitoring this development closely makes clear that there is still considerable room for improvement in how AI is evaluated and deployed in fundamental research.
What are the consequences of this policy change for researchers who are committed to publishing review and position papers?
The policy change may be particularly detrimental to emerging researchers and academics without access to the resources of larger institutions, limiting their ability to contribute valuable review and position work to their fields.
What are the broader implications of the growing influence of generative AI on scientific research?
Generative AI raises questions about the integrity of published work and the ability to distinguish authentic, in-depth research contributions from superficial, AI-generated content.
What alternatives are being discussed to ArXiv's current policy?
There have been proposals to create unmoderated sections within ArXiv, which could preserve access for newcomers to the field while improving quality control elsewhere on the platform.