EDIT: This Nature news piece is based on a study reported only in a glossy Frontiers report with a clear agenda ("only half of researchers use AI for peer review? Clearly we need to help the rest"). I'm not saying it's wrong, but... I have limited trust. https://www.frontiersin.org/documents/unlocking-ai-potential.pdf
Nature: "More than half of researchers now use AI for peer review"
https://www.nature.com/articles/d41586-025-04066-5
"GPT-5 could mimic the structure of a peer-review report and use polished language, but that it failed to produce constructive feedback and made factual errors."
Yep, that matches the recent review on the basis of which my manuscript was rejected: vague criticisms that sound damning but aren't actionable.
Bigger picture: Nobody has time to do peer review, so many reviews are shoddy. Now shoddy reviews can have AI help, but they're still shoddy reviews.
AI's making it worse, but the fundamental issue here is workload.
#academicchatter