"We have recently clarified our penalties for this. If a submission contains incontrovertible evidence that the authors did not check the results of LLM generation, this means we can't trust anything in the paper. 3/
The penalty is a 1-year ban from arXiv followed by the requirement that subsequent arXiv submissions must first be accepted at a reputable peer-reviewed venue. 4/
Examples of incontrovertible evidence: hallucinated references, meta-comments from the LLM ("here is a 200 word summary; would you like me to make any changes?"; "the data in this table is illustrative, fill it in with the real numbers from your experiments") end/"
https://xcancel.com/tdietterich/status/2055000962713133220
Good stuff. If you can't be trusted to proof-read your own papers to check that they don't contain ridiculous out-of-place material from an LLM (or if you trust genAI to give you correct references when, even today, it's known to make up details), then how do we know you could stand up in a seminar, present the work, and take appropriate credit for it? Speaking from a mathematics POV here, where even for jointly-authored papers, all authors get equal credit and are assumed to know and understand at least a substantial portion of the mathematics, if not 100% of it.