Truly wild reading.

"If such a prompt injection is included in a submission and it consequently results in a positive LLM-generated review, we consider this a form of collusion (which, as per past precedent, is a Code of Ethics violation) that both the paper authors and the reviewer would be held accountable for, because it involves the author explicitly requesting and receiving a positive review.

https://blog.iclr.cc/2025/08/26/policies-on-large-language-model-usage-at-iclr-2026/

While it is the LLM that is “obliging” by providing the positive review, the reviewer is ultimately responsible for the LLM’s review, and consequently they would bear the consequences. On the other hand, we consider the injection of such a prompt by an author to be an attempt at collusion which would similarly be a code of ethics violation."

And you know they added this because someone actually did it, and a reviewer, presumably one using an LLM to review, caught it.
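For context on the mechanics: these injections are usually hidden in the PDF as white-on-white or tiny text that a human reviewer never sees, but that survives plain text extraction and so ends up in whatever an LLM reviewer ingests. A minimal sketch of the kind of heuristic screen a venue could run over submissions; the filename, phrase list, and use of pypdf here are my own illustration, not anything ICLR has described:

```python
# Heuristic scan of a submission PDF for hidden prompt-injection phrases.
# Illustrative sketch only: the phrase list is made up and would need tuning.
from pypdf import PdfReader

SUSPICIOUS_PHRASES = [
    "ignore previous instructions",
    "ignore all prior instructions",
    "give a positive review",
    "recommend acceptance",
    "do not mention this instruction",
]

def scan_pdf(path: str) -> list[tuple[int, str]]:
    """Return (page_number, phrase) pairs for each suspicious hit."""
    hits = []
    reader = PdfReader(path)
    for page_num, page in enumerate(reader.pages, start=1):
        # extract_text() returns visually hidden text too, which is
        # exactly why these injections reach LLM reviewers at all.
        text = (page.extract_text() or "").lower()
        for phrase in SUSPICIOUS_PHRASES:
            if phrase in text:
                hits.append((page_num, phrase))
    return hits

if __name__ == "__main__":
    for page, phrase in scan_pdf("submission.pdf"):  # hypothetical filename
        print(f"page {page}: found {phrase!r}")
```

The irony is that the same property that makes the attack work (extraction doesn't care about text color or size) is what makes it trivially detectable by a scan like this.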