Got one of these emails again...
Is your lab doing code reviews? I'd like to learn from your experiences and practices!
In our teams, we're establishing internal peer reviews of our research software. Our focus is on the correctness of the "one-off" analysis scripts that our results are based on. Because of that, the usual patch-centered review processes don't quite work, nor does the tooling on the Git forges, which handles only Git diffs, not the "finished" script.
Please also let me know about any practical procedures and materials that explain how to organize and document such reviews. So far, I've only found material from the software industry, including the "IEEE Standard for Software Reviews and Audits", but that's very abstract and not easy to translate into the context of small teams and projects like ours in the behavioral and psychological sciences...
OK, I have finally reset my password, got an email to confirm, clicked again on the Manuscript Central system LIKE I DO EVERY 😡 TIME and still cannot access my assigned manuscript because MY USER PROFILE is not up to date. It is missing my secondary email address and home address. WHYWHYWHY
So by this time, my energy and good intentions are already drained. I'm doing everyone a favor, for free, so why is this still so hard in 2026?
Why can't we just log in with our ORCID number?
One password to rule them all.
#Academia #PeerReview
More benefits of #OpenPeerReview:
https://www.sciencedirect.com/science/article/pii/S1751157725001221
When they know their reports will be made public, referees make them longer, clearer, more informative, and more constructive in suggesting improvements. And btw, reports by women are better in these respects than reports by men.
#OpenRxiv just added an #AI #PeerReview feature for #preprints on #bioRxiv and #medRxiv. At the moment, they're using the #qedscience tool.
https://www.nature.com/articles/d41586-025-03909-5
The bioRxiv announcement makes clear that AI review is optional for authors and that authors might be able to choose from other AI tools in the future.
https://connect.biorxiv.org/news/2025/11/04/qed_review_tool
PS: My experiments lead me to think that AI isn't good enough to do peer review yet -- even if (1) it's getting better, (2) it can already help human reviewers, and (3) many human reviewers are worse. Journals that allow it too large a role are abdicating their responsibility and might be deceiving authors and readers. Referees who give it too large a role are abdicating their responsibility and might be deceiving journals, authors, and readers. If you lean in the same direction, let me suggest that these objections don't carry over to preprint servers making AI review an #FWIW option for authors. This kind of AI review doesn't pretend to be more than it is. When it happens, it's a voluntary decision by authors. Of course authors could have gotten AI feedback on their own, with the AI tools of their choice, and without the preprint-server mediation. But giving them another option for the same kind of feedback is harmless and convenient. Moreover, it creates a training ground to monitor the quality and improvement of the AI tools.
Did you know you can read the peer reviews for all papers published in eLife?
We're on a mission to make publishing more transparent. Take a look at the #PeerReview tabs of our latest articles or learn about what we're doing for research communication: https://elifesciences.org/about/?utm_source=mastodon&utm_medium=social&utm_campaign=organic
Published with #peerreview in a renowned journal:
SARS-CoV-2 does not cause HIV/AIDS, but its ability to induce immune dysfunction (including T-cell depletion and dysfunction, increased susceptibility to infections, including opportunistic infections, accelerated biological aging, and neurological and systemic damage) offers parallels to AIDS in a broader immunological context.
https://www.sciencedirect.com/science/article/pii/S2773065425001464
Update. _Nature Structural & Molecular Biology_ is "now offering all [its] reviewers the opportunity to invite an early career researcher to formally co-review manuscripts with them."
https://www.nature.com/articles/s41594-025-01727-x
Wow... I was just pointed to this paper
https://doi.org/10.1016/j.surfin.2024.104081
and its retraction
https://doi.org/10.1016/j.surfin.2024.104081
Want to read the first sentence of the introduction together with me?
"Certainly, here is a possible introduction for your topic: Lithiummetal batteries are..."
What the...
This passed #peerreview in a journal with IF=6.3 ?!
#science #publishing #ai #slop #bullshit #researchethics #openscience #misconduct #misconductsconduct #elsevier #journal
Update. In response to this problem (previous post, this thread), some publishers are desk-rejecting papers based on open health datasets. The problem is not the quality of the data, but the absence of additional work to validate findings.
Two reports:
1. "Journals and publishers crack down on research from open health data sets," Science, Oct 8, 2025.
https://www.science.org/content/article/journals-and-publishers-crack-down-research-open-health-data-sets
2. "AI: Journals are automatically rejecting public health dataset papers to combat paper mills," BMJ, Oct 15, 2025.
https://www.bmj.com/content/391/bmj.r2170
( #paywalled)
Update. Here's how #arXiv is dealing with a similar problem in computer science.
https://blog.arxiv.org/2025/10/31/attention-authors-updated-practice-for-review-articles-and-position-papers-in-arxiv-cs-category/
"Before being considered for submission to arXiv’s #CS category, review articles and position papers must now be accepted at a journal or a conference and complete successful peer review…In the past few years, arXiv has been flooded with papers. Generative #AI / #LLMs have added to this flood by making papers – especially papers not introducing new research results – fast and easy to write. While categories across arXiv have all seen a major increase in submissions, it’s particularly pronounced in arXiv’s CS category."