#OpenRxiv just added an #AI #PeerReview feature for #preprints on #bioRxiv and #medRxiv. At the moment, they're using the #qedscience tool.
https://www.nature.com/articles/d41586-025-03909-5
The bioRxiv announcement makes clear that AI review is optional for authors, and that in the future they may be able to choose among other AI tools.
https://connect.biorxiv.org/news/2025/11/04/qed_review_tool
PS: My experiments lead me to think that AI isn't good enough to do peer review yet -- even if (1) it's getting better, (2) it can already help human reviewers, and (3) many human reviewers are worse.

Journals that allow it too large a role are abdicating their responsibility and might be deceiving authors and readers. Referees who give it too large a role are abdicating their responsibility and might be deceiving journals, authors, and readers.

If you lean in the same direction, let me suggest that these objections don't carry over to preprint servers making AI review an #FWIW option for authors. This kind of AI review doesn't pretend to be more than it is. When it happens, it's a voluntary decision by authors.

Of course authors could have gotten AI feedback on their own, with the AI tools of their choice, and without the preprint-server mediation. But giving them another option for the same kind of feedback is harmless and convenient. Moreover, it creates a training ground to monitor the quality and improvement of the AI tools.