OK, this has been bothering me as replies come in to this post, so I just wanna say something:
A few people are commenting, 'ah, but talking about the ways we can identify LLM content means people will tweak the LLMs so they learn to better mimic humans'.
Yes. This is possible, even likely. But I think we should discuss how to identify LLMs anyway. Taken to its conclusion, that logic would mean we should never share any knowledge in case it falls into the hands of bad actors.