"I speculated that transformer performance would converge on not-quite-good-enough. Needs more work. See me after. Not so much 'super-intelligence' as 'super-mediocrity'."

#gpt5

https://codemanship.wordpress.com/2025/01/11/the-llm-in-the-room/

The time is upon us, folks. If anyone doubted that LLMs have hit a performance wall, it's undeniable today. This is as good as they're gonna get, and it ain't good enough.

@jasongorman What worries me is that so many companies have now doubled and tripled down on their decisions to “leverage” AI that they won’t back down. Sunk cost fallacy is strong.

I share your concern that, because of the ongoing dumbing down of our industry, those "not quite good enough" LLMs *are* good enough to replace juniors and even average devs - which, as you say, will remove the 'raw material' to train up from the system.