Anecdotally, so, so many of the blog posts and Substack whatnots getting passed around approvingly by mostly-anti-LLM accounts here on the mostly-anti-LLM platform have pretty obvious LLM fingerprints all over them.
This is obvious, I think (?), to people who use or study LLMs and/or intentionally read their output, but clearly not to people who strictly avoid them. I don't really know how to think about this.
Like, I'm not going to call out LLM-shaped posts myself; that's not something I can easily demonstrate objectively, and also, egh, extremely not my ministry. But people who (reasonably) hate the tech are finding its output persuasive without knowing what they're reading. It seems not great.
(I haven't written about my own approach, which involves routine evaluations with elaborate analogs to lab goggles and rubber gloves, and I probably won't; it's quite dull. I just need to understand what these things do well enough to perceive them with something approaching clarity.)
I wish none of this were happening to us.