Researchers Embed Hidden AI Prompts in Academic Papers to Influence Peer Review

In a recent investigation, it has been discovered that some researchers are embedding concealed prompts within their academic papers to sway AI-driven peer review systems toward favorable evaluations. This practice raises significant ethical concerns about the integrity of the peer review process.

An analysis of English-language preprint papers on the arXiv platform identified 17 instances where authors included hidden AI prompts. These papers originated from 14 academic institutions across eight countries, including Japan’s Waseda University, South Korea’s KAIST, Columbia University, and the University of Washington in the United States. The majority of these papers pertained to computer science disciplines.

The concealed prompts were typically brief, ranging from one to three sentences, and were hidden using techniques such as white text on a white background or extremely small fonts. These prompts instructed AI reviewers to give a positive review only or to commend the paper for its impactful contributions, methodological rigor, and exceptional novelty.

One professor from Waseda University defended the use of such prompts, stating that they serve as a countermeasure against lazy reviewers who rely on AI tools for evaluations. This perspective highlights a growing concern within the academic community about the increasing use of AI in the peer review process.

The integration of AI into peer review has been a topic of debate. While AI can enhance efficiency, it also introduces risks such as potential biases and ethical dilemmas. For instance, the National Institutes of Health (NIH) has explicitly stated that using AI in peer review constitutes a breach of confidentiality, emphasizing that reviewers are entrusted with sensitive information that should not be shared with unauthorized entities, including AI systems.

Furthermore, a study titled Are We There Yet? Revealing the Risks of Utilizing Large Language Models in Scholarly Peer Review highlights the vulnerabilities of AI-generated reviews. The research demonstrates that injecting covert content into manuscripts can manipulate AI reviews, leading to inflated ratings and reduced alignment with human evaluations. This manipulation underscores the susceptibility of AI systems to deliberate influence, raising questions about their reliability in maintaining the integrity of the peer review process.

The ethical implications of using AI in peer review extend beyond confidentiality breaches. A critical examination titled A Critical Examination of the Ethics of AI-Mediated Peer Review discusses how AI can both enhance efficiency and introduce ethical concerns that may compromise the integrity of peer review outcomes. The study emphasizes the need for a careful assessment of AI’s role in peer review, considering the broader epistemic, social, and ethical factors involved.

In response to these challenges, some academic conferences and publishers are reevaluating their policies regarding AI in peer review. For example, the Conference and Workshop on Neural Information Processing Systems (NeurIPS) is considering updating its guidelines to explicitly address the use of AI tools in the review process. Similarly, major scientific publishing companies such as Springer Nature have expressed concerns about the use of AI in peer review, citing risks related to transparency, confidentiality, and the potential for AI to produce biased or false information.

The discovery of hidden AI prompts in academic papers serves as a stark reminder of the ethical complexities introduced by AI in scholarly publishing. As the academic community continues to navigate these challenges, it is imperative to establish clear guidelines and safeguards to preserve the integrity and trustworthiness of the peer review process.