Discussion
petersuber
@petersuber@fediscience.org  ·  activity timestamp 3 weeks ago

#OpenRxiv just added an #AI #PeerReview feature for #preprints on #bioRxiv and #medRxiv. At the moment, they're using the #qedscience tool.
https://www.nature.com/articles/d41586-025-03909-5

The bioRxiv announcement makes clear that AI review is optional for authors and that authors might be able to choose from other AI tools in the future.
https://connect.biorxiv.org/news/2025/11/04/qed_review_tool

PS: My experiments lead me to think that AI isn't good enough to do peer review yet, even if (1) it's getting better, (2) it can already help human reviewers, and (3) many human reviewers are worse. Journals that allow it too large a role are abdicating their responsibility and might be deceiving authors and readers. Referees who give it too large a role are abdicating their responsibility and might be deceiving journals, authors, and readers.

If you lean in the same direction, let me suggest that these objections don't carry over to preprint servers making AI review an #FWIW option for authors. This kind of AI review doesn't pretend to be more than it is. When it happens, it's a voluntary decision by authors. Of course authors could have gotten AI feedback on their own, with the AI tools of their choice, and without the preprint-server mediation. But giving them another option for the same kind of feedback is harmless and convenient. Moreover, it creates a training ground to monitor the quality and improvement of the AI tools.

Stan Schymanski
@schymans@mastodon.social replied  ·  activity timestamp 2 weeks ago

@petersuber My objections to using #AI for article reviews do carry over to preprint servers. The sole purpose of publishing a paper is for readers to learn something from it. By readers I mean humanoid readers. I don't think a machine should be telling me whether humans can learn something new from a manuscript. It could be used in a SECOND step to check whether the referencing is consistent, etc., but I don't think it can even tell me whether an author cites the right papers for the right reasons. #science

Kate Nyhan
@nyhan@fediscience.org replied  ·  activity timestamp 2 weeks ago

@petersuber Re AI peer review: as a scholcomm maven, did you read the reference to "automated fraud detection" in the NIH publication cost cap RFI as a reference to AI?

Key paragraph:
In addition to compensating peer reviewers, other kinds of publishing best practices that NIH should consider as factors in determining the potential allowability of a higher per publication cost, such as use of automated fraud detection capabilities;
Zillion
@zillion@freeradical.zone replied  ·  activity timestamp 3 weeks ago

@petersuber AI review is _not_ peer review, good or bad. Peers are those in the field of the article being reviewed. "Review by a random person" would be bad; AI review can only be worse. What was Rxiv thinking!

Christian Pietsch
@chpietsch@fedifreu.de replied  ·  activity timestamp 3 weeks ago

@petersuber “AI” will never be good enough to do peer review because all it will ever be able to do is compute some kind of similarity between some input and its language model. “AI” does not understand anything about research. Using it to make judgements about innovation would be silly. Using it for automatic decision-making would be insane.

Dr. Robert M Flight
@rmflight@mastodon.social replied  ·  activity timestamp 3 weeks ago

@petersuber I haven't tried this particular tool, but did try one someone had made available somewhere else.

The problem is that it can't really look outside the manuscript at the literature. All I got back was a summary of what I did, and then it suggested experiments that WERE ALREADY IN THE MANUSCRIPT.

https://www.reddit.com/r/bioinformatics/comments/1m8lxug/comment/n52qa0i/

So no, I really don't want this. At least I can argue about comments with an actual person.


bonfire.cafe

A space for Bonfire maintainers and contributors to communicate
