How to spot fake reviewers: a beginner's guide

For those new to academic publishing, peer review manipulation is hard to spot. This is mainly because most peer review reports are still confidential documents that stay buried deep in academic journals' editorial offices. As long as this remains the default, recognising good and bad peer reviews is difficult enough... now we have to worry about fake peer review as well?! 😒

Well, yes. Yes, we do. The peer review process in academic publishing is unlike any other. The style and content of a manuscript are criticised, as are the thought processes that birthed it. Everything from word choice to the significance of the research focus is under scrutiny. In traditional peer review formats, reviewers recommend that manuscripts be accepted or rejected. This much power may make some less scrupulous people giddy... or eager to rig the system.

The refereeing work that is supposed to be one of the guarantors of research integrity is itself highly susceptible to manipulation.

  • Reviewers are invited to assess a manuscript via email, but many don't supply institutional or ORCID-verified email addresses. This means that fake accounts run by the authors themselves, or by their accomplices, may be mistakenly contacted (see the sketch after this list).
  • Some reviewers are recommended by authors, but are they truly qualified to assess the work? Elaborate peer review rings, in which a group of authors agree to review each other's manuscripts favourably, have already been uncovered.
  • The anonymous nature of the process may lend itself to some researchers inappropriately promoting their own work by manipulating the citations they suggest.
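
To make the email point concrete, here is a minimal sketch of the kind of screening an editorial system could run before sending an invitation. Everything in it is an illustrative assumption: the free-mail domain list, the reviewer records, and the function name are invented, and a real workflow would also verify identities against ORCID records rather than rely on domains alone.

```python
# A minimal sketch (not any journal's actual workflow): flag reviewer
# email addresses that cannot be tied to an institutional domain.
FREEMAIL = {"gmail.com", "yahoo.com", "outlook.com", "163.com", "qq.com"}

def flag_suspicious_email(email: str) -> bool:
    """Return True if the address deserves a manual identity check."""
    domain = email.rsplit("@", 1)[-1].lower()
    # A free-mail domain isn't proof of fraud, but it can't be
    # verified against an institutional affiliation.
    return domain in FREEMAIL

reviewers = [
    ("A. Example", "a.example@university.edu"),       # hypothetical
    ("B. Example", "real.professor.2024@gmail.com"),  # hypothetical
]
for name, email in reviewers:
    if flag_suspicious_email(email):
        print(f"Check identity of {name} <{email}> before inviting")
```

A hit here is only a prompt for a human to verify the reviewer's identity, never a verdict on its own.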
COPE even produced a flowchart that summarises the recognised features or patterns of questionable reviewer activity. Basically, if it's too good to be true, it's probably fake ❗
 
 
Why am I talking about this now? There have been discussions recently about the use of AI in peer review, and I even participated in some of them. While I'm not fundamentally against the use of some AI tools, trusting ChatGPT to review the paper and write the report goes beyond anything I'm willing to accept as a time-saving device.
 
One of the ways I believe fake reviewers can be spotted is textual analysis 👆 (there's a rough sketch below). I have spoken before about anonymisation in peer review, and how it can actually work for open research. However, if we all sound like an LLM (Large Language Model), then we cannot expect to recognise individual style and voice in any writing.
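
To show what I mean by textual analysis, here is a toy sketch, nothing more, that compares two review reports by the cosine similarity of their word-frequency vectors: suspiciously high overlap between supposedly independent reviewers is one recognised red flag. The example reports are invented, and real stylometric work uses far richer features (function-word frequencies, sentence-length distributions, and so on).

```python
# A rough, hypothetical sketch of textual analysis for review reports.
import math
import re
from collections import Counter

def word_vector(text: str) -> Counter:
    """Bag-of-words counts, lowercased; a deliberately crude feature set."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine of the angle between two word-frequency vectors (0.0-1.0)."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values()))
    norm *= math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented example reports, just to show the mechanics.
report_1 = "This landmark study is methodologically sound and timely."
report_2 = "This landmark study is methodologically sound and important."
score = cosine_similarity(word_vector(report_1), word_vector(report_2))
print(f"similarity: {score:.2f}")  # near 1.0 between 'independent' reviews is a red flag
```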
 
This issue becomes ever more pertinent when reviewers behave poorly. Trust in science and the scholarly record is low, but steps have been taken to address systemic issues and those arising from technological innovations, such as the rise of paper mills. 

If you think you can stop fearing the influence of peer reviewers as soon as the manuscript is published, think again... we are here to continuously monitor and audit. And yes, peer reviewers are monitored. By peers. Like me. 😐
