Bioethicist Warns of Potential Risks in AI-Driven Peer Review

Concerns arise as researchers debate the use of AI in academic peer review. While some argue for selective AI assistance, others fear potential bias and misinformation risks, demanding clear policies.

AI in Peer Review: Balancing Innovation with Integrity Sparks Academic Debate (Image: Research Information)

AI and Peer Review: Can Machines Replace Human Critique?

The time-honored tradition of peer review, where academic experts critique research manuscripts, faces a potential technological disruption: Artificial Intelligence (AI). While some see AI as a savior for the overburdened and underappreciated reviewers, others warn of its dangers to the integrity and effectiveness of the process. This article delves into the debate surrounding AI in peer review, exploring its potential benefits and drawbacks.

The Criticisms: Bias, Misinformation, and Lack of Humanity

Vasiliki Mollaki, a bioethicist, raises concerns about AI perpetuating bias and introducing misinformation. Her research highlights the lack of concrete policies from major publishers regarding AI use in peer review. This absence of ethical guidelines could erode trust and transparency, potentially leading to the “death of peer review integrity,” as her paper warns.

Furthermore, some argue that AI currently lacks the capacity for complex evaluation. Tjibbe Donker, an infectious disease epidemiologist, emphasizes AI's inability to provide personalized feedback or even produce accurate citations. He notes that AI tools "hallucinate," generating plausible-sounding but fabricated details, because they do not genuinely understand context and nuance.

The Potential: Efficiency, Assistance, and Early Feedback

Despite the concerns, others see potential benefits in using AI to assist human reviewers. James Zou, a biomedical data scientist, points to studies where AI feedback overlapped significantly with human reviewers’ insights. He argues that AI could be particularly helpful for authors seeking early feedback on drafts, potentially accelerating the publication process.

Donker suggests using AI selectively, for tasks like summarizing key points or assessing novelty, to support human reviewers. This could free up reviewers’ time and effort for more complex evaluations that require human judgment and understanding.

Transparency, Disclosure, and Nuanced Responses

To mitigate potential risks, Mollaki advocates for clear AI review policies centered on transparency and disclosure. She suggests journals require reviewers to disclose how they used AI, including the specific prompts given to the tools. Penalties for violating such policies may be necessary, but Donker warns against overly harsh enforcement: driving reviewers away could exacerbate the existing reviewer shortage and harm the research ecosystem.

The debate around AI in peer review is far from settled. While concerns about bias and misinformation are valid, the potential for efficiency and early feedback cannot be ignored. Ultimately, the future may lie in a collaborative approach where AI acts as an assistant, empowering human reviewers to focus on the most critical aspects of their crucial role.


Source: IEEE Spectrum

The information above is curated from reliable sources and modified for clarity. Slash Insider is not responsible for its completeness or accuracy. We strive to deliver reliable articles but encourage readers to verify details independently.