Concerns are mounting as researchers debate the use of AI in academic peer review. While some argue for selective AI assistance, others fear bias and misinformation and are demanding clear policies.
AT A GLANCE
- Volunteer Effort: Peer review, a vital part of academic publishing, involves volunteers who meticulously assess manuscripts for accuracy, novelty, and significance.
- Unrecognized Contribution: Despite its importance, peer reviewing is typically unpaid and underappreciated labor, raising questions about the fairness of the practice.
- AI in Peer Review: Researchers debate whether AI should replace human reviewers, expressing concerns about bias, misinformation, and potential threats to the integrity of the review process.
- Lack of Policies: Bioethicist Vasiliki Mollaki highlights the absence of concrete policies from top academic publishers on the use of AI in peer review, in contrast to their policies on AI in manuscript writing.
- Integrity Concerns: Mollaki emphasizes that without clear policies, there is a risk of compromising the integrity and trust in the peer review process due to potential misuse of AI.
- AI’s Effectiveness: Opinions vary on AI’s capability to provide effective peer review. Some argue it offers valuable and immediate feedback, while others point out limitations, such as generating false citations.
- Selective AI Integration: Rather than an outright ban, some propose using AI selectively to assist human reviewers in areas such as assessing novelty and consolidating reviewer feedback, paired with transparent policies on AI use in review.
AI and Peer Review: Can Machines Replace Human Critique?
The time-honored tradition of peer review, where academic experts critique research manuscripts, faces a potential technological disruption: Artificial Intelligence (AI). While some see AI as a savior for the overburdened and underappreciated reviewers, others warn of its dangers to the integrity and effectiveness of the process. This article delves into the debate surrounding AI in peer review, exploring its potential benefits and drawbacks.
The Criticisms: Bias, Misinformation, and Lack of Humanity
Vasiliki Mollaki, a bioethicist, raises concerns about AI perpetuating bias and introducing misinformation. Her research highlights the lack of concrete policies from major publishers regarding AI use in peer review. This absence of ethical guidelines could erode trust and transparency, potentially leading to the “death of peer review integrity,” as her paper warns.
Furthermore, some argue that AI currently lacks the capacity for complex evaluation. Tjibbe Donker, an infectious disease epidemiologist, emphasizes AI's inability to provide personalized feedback or even generate accurate citations. He notes that AI tools "hallucinate," producing plausible-sounding but false output, because of their limited grasp of context and nuance.
The Potential: Efficiency, Assistance, and Early Feedback
Despite the concerns, others see potential benefits in using AI to assist human reviewers. James Zou, a biomedical data scientist, points to studies where AI feedback overlapped significantly with human reviewers’ insights. He argues that AI could be particularly helpful for authors seeking early feedback on drafts, potentially accelerating the publication process.
Donker suggests using AI selectively, for tasks like summarizing key points or assessing novelty, to support human reviewers. This could free up reviewers’ time and effort for more complex evaluations that require human judgment and understanding.
Transparency, Disclosure, and Nuanced Responses
To mitigate these risks, Mollaki advocates for clear AI review policies built on transparency and disclosure. She suggests journals require reviewers to disclose how they used AI, including the specific prompts given to the tools. Penalties for violating such policies may be necessary, but Donker warns against overly harsh responses: driving reviewers away could exacerbate the existing shortage and harm the research ecosystem.
The debate around AI in peer review is far from settled. While concerns about bias and misinformation are valid, the potential for efficiency and early feedback cannot be ignored. Ultimately, the future may lie in a collaborative approach where AI acts as an assistant, empowering human reviewers to focus on the most critical aspects of their crucial role.
Source: IEEE Spectrum
The information above is curated from reliable sources and modified for clarity. Slash Insider is not responsible for its completeness or accuracy. We strive to deliver reliable articles but encourage readers to verify details independently.