Truly wild reading.
"If such a prompt injection is included in a submission and it consequently results in a positive LLM-generated review, we consider this a form of collusion (which, as per past precedent, is a Code of Ethics violation) that both the paper authors and the reviewer would be held accountable for, because it involves the author explicitly requesting and receiving a positive review."
https://blog.iclr.cc/2025/08/26/policies-on-large-language-model-usage-at-iclr-2026/