Is your lab doing code reviews? I'd like to learn from your experiences and practices!
In our teams, we're establishing internal peer review of our research software. Our focus is on the correctness of the "one-off" analysis scripts that our results are based on. Because of that, the usual patch-centered review processes don't quite fit, nor does the tooling on the Git forges, since both revolve around Git diffs rather than the "finished" script.
Please also let me know about any practical procedures and materials that explain how to organize and document such reviews. So far, I've only found material from the software industry, including the "IEEE Standard for Software Reviews and Audits", but that's very abstract and not easy to translate into the context of small teams and projects like ours, in the behavioral and psychological sciences...