"I speculated that transformer performance would converge on not-quite-good-enough. Needs more work. See me after. Not so much 'super-intelligence' as 'super-mediocrity'."
https://codemanship.wordpress.com/2025/01/11/the-llm-in-the-room/
The time is upon us, folks. If anyone doubted that LLMs have hit a performance wall, it's undeniable today. This is as good as they're gonna get, and it ain't good enough.