So how _did_ they solve the stochastic problem of LLMs?
@juergen_hubert The bubble might not burst, though, because it's not a bubble. Coding with coding agents based on LLMs is just so much faster that there will be no going back.
It may be "faster", but do the people who work this way truly have the same working knowledge of the code as people who write it manually?
@juergen_hubert Do they need it, if all they do is ask the LLM interface to adjust, fix or rewrite it?
If they are okay with a code base that is increasingly incomprehensible to its developers, sure. Though I only recommend that if the software in question won't be in use after a few years.
@juergen_hubert It's not incomprehensible to the developers, there's just another layer between the human and the machine code. A coding LLM is basically just the logical extension of the concept of a compiler. Most people have also never read machine code, and why would they, when compilers exist?
The difference is that compilers follow deterministic processes that can be understood by humans if the need arises (and the need _does_ arise when you deal with ancient legacy code).
LLM systems, by their very nature, are _stochastic_ processes. They might produce the right answer and correct code, but you cannot rule out that they produce the wrong answer and buggy code, and there is nothing you can do to prevent that. And when a human then has to figure out what went wrong, not understanding how the code was generated puts them off to a bad start from the outset.
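The distinction can be sketched in a few lines of Python (a toy illustration, not any particular model's implementation): greedy decoding, which always takes the highest-scoring token, is deterministic like a compiler pass, while temperature sampling, as coding LLMs typically use, draws from a probability distribution and can yield a different token on each run. The logit values below are made up for the example.

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize into probabilities.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def greedy_decode(logits):
    # Deterministic: always picks the highest-scoring token.
    return max(range(len(logits)), key=lambda i: logits[i])

def sample_decode(logits, temperature, rng):
    # Stochastic: draws a token from the temperature-scaled distribution.
    probs = softmax(logits, temperature)
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

# Hypothetical next-token scores over a tiny three-token vocabulary.
logits = [2.0, 1.8, 0.3]

# Greedy decoding returns the same token on every call.
assert all(greedy_decode(logits) == 0 for _ in range(100))

# Sampled decoding spreads over several tokens across repeated calls.
rng = random.Random()
draws = {sample_decode(logits, temperature=1.0, rng=rng) for _ in range(1000)}
```

With these logits the sampled distribution is roughly 50/41/9 percent, so two runs of the same prompt can easily diverge, which is exactly why the generated code cannot be reproduced the way a compiler's output can.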