@leaverou I think what bothers me is the "interpreted" style of generation, where the LLM conjures a bunch of code by wizardry from a prompt, and that prompt can be very high level: "build an app that does X, Y, Z".
I don't like that an LLM's training data is an internet that's already bloated, somewhat broken, insecure, and inaccessible, because that's what it spits back out too.
If it eventually becomes another level of programming language, where we can be precise with it and get safe output, I'm fine with that.