I appreciate the academic and rigorous effort to measure the bullshit that LLMs generate: https://machine-bullshit.github.io/
I just wish we would do away with the rhetoric that we're merely aiming to make these models better.
How about no? How about admitting that the premise is poisoned from the start?
How about saying that holding everyone's labor, creativity, and expression hostage for the sake of deskilling people and making them precarious is a horrible idea?