One of the side-effects of the AI bubble is that we stopped talking about alternative solutions to computing problems. Everything needs to be solved with "AI" - meaning an LLM - and nobody stops and asks if there is an alternative. This drives me mad because LLMs are not only unreliable, but also shockingly inefficient. And I'm positive that for most tasks there is an alternative algorithmic solution that is more reliable and less expensive.

@gabrielesvelto In areas that aren't purely technological, many solutions are also already well known and haven't been implemented because monneeh. Politicians would rather throw an app at people with e.g. mental health issues than fund more actual therapy opportunities, or hope an AI fixes the climate crisis instead of changing the industries causing it.

Now with AI and LLMs around, the people with the power for meaningful change can just point at them and say "Look, we did something! Now shut up!" As you say, the solutions we already have aren't being used, and nobody is looking for new ones.

@gabrielesvelto THIS! Whenever anyone proposes solving a problem with AI I try to present it to them like this:

You can spend a few days now hashing out a procedure that eliminates this problem. It's a one-time cost.

OR you can try to use AI. You'll have an ongoing cost forever, one that could grow as your needs do; you'll have to send your data to an untrusted third party; it's not guaranteed to work reliably; and it exposes you to cybersecurity threats.

@gabrielesvelto

But... But... But.... If I don't use "AI" how will I ever demolish the environment while churning out mountains of garbage based on stolen content and forgetting how to actually do anything?!?!?!

"I used AI to....", is nothing more than, "Listen I'm not an asshole but....", for the 21st Century.

#FuckAI

@gabrielesvelto ⬆️ ⬆️ ⬆️ ⬆️ ⬆️

Absolutely agree. In preparing an upcoming interview for a group grant, we realised we did not mention AI once in 50 pages... and now we are afraid we'll get some strongly critical questions about why we neglected this almighty, all-solving magic, as if we had overlooked something fundamental (we did not, for our science, but we're already at the stage where this might look "retro").

@gabrielesvelto Well, this is not only related to AI, but a general trend. It's called the "Crapularity". Any task in IT takes more and more resources (both CPU and personnel) to complete and gets less reliable. So eventually we're going to have huge development teams developing trivial applications that no longer work. We currently see that, for example, on the WWW.
@gabrielesvelto Until someone comes up with an energy-efficient LLM, this form of AI is completely bottlenecked. It's utterly useless for new, nuanced, and novel tasks, i.e. most situations. LLMs will never ever have enough data to replace workers, no matter how much the new aristocracy wishes it. The most they can do is make existing workers a little more productive.
@gabrielesvelto I'm reminded of someone making a calculator that uses an LLM as a joke. There were processors in the 1970s that ran in the kilohertz (not megahertz!) range and could do basic calculations more accurately than an LLM, yet here we are, with people who non-jokingly ask an LLM to do calculations...

There are some things an LLM can be good at, but everyone is focused on using them for the wrong things instead. Admittedly there isn't a lot they're good at, but it's kind of sad that between the con artists and the people caught up in the hype, all the focus has been on using them as wrongly as possible instead of on those few good things.

I'm hoping that when the bubble pops, people don't knee-jerk 100% away from the mechanisms, but let them become a niche thing instead.

@gabrielesvelto To me the most disheartening fact is that for the stuff LLMs are good at, a (big) if-then-else would probably suffice while requiring a fraction of the energy (though, fair enough, it would feel more robotic).
For the stuff where an if-then-else is not enough, the more novel the approach you need, the less effective LLMs become, as they can only do probability calculations over existing knowledge.
In those cases a rubber ducky would probably produce the same result, but LLMs are simply better at confirming your biases, so they're more comforting and less confronting than critical thinking.
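
To make the "big if-then-else" concrete, here is a minimal sketch of a rule-based responder for a hypothetical support bot; every keyword and canned reply below is invented for illustration:

```python
# A minimal sketch of the "big if-then-else" idea: a rule-based
# responder for a hypothetical support bot. All keywords and replies
# are invented for illustration.

RULES = [
    (("refund", "money back"), "To request a refund, visit /account/orders."),
    (("password", "login"), "You can reset your password at /account/reset."),
    (("shipping", "delivery"), "Orders ship within 2 business days."),
]

def respond(message: str) -> str:
    text = message.lower()
    for keywords, reply in RULES:
        if any(keyword in text for keyword in keywords):
            return reply
    return "Sorry, I didn't understand that. A human will follow up."

print(respond("How do I get my money back?"))
# -> "To request a refund, visit /account/orders."
```

Robotic, yes, but it costs next to nothing per query and it fails in ways you can actually inspect.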
@gabrielesvelto Sure. With pretty much every org on the planet running an AI transformation programme (often a solution looking for problems), it would be interesting to see data on the usage of general-purpose vs. domain-specific generative AI models. My hypothesis is that most enterprise AI implementations make inefficient use of GP models, augmented with techniques such as RAG, because the barrier to training a model is perceived as too high, or is not even considered. And, as per your original post, a certain percentage could have been solved more effectively with non-AI solutions.
@mattjhayes yes, or even "different kinds" of AI solutions. I'll give you two examples we've had success with within Mozilla. The first is translation done using a very small local model (a few MiB in size). Not perfect, and definitely lower quality than large models, but it can run on any hardware and be trained in a few hours on a regular desktop machine.
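
If you want to play with the general idea, here's a sketch using a small, publicly available OPUS-MT model via Hugging Face transformers. To be clear, this is not Mozilla's actual translation stack (our models are smaller still); it only shows that translation can run entirely on local hardware:

```python
# Sketch of the local-translation idea using a small, publicly
# available OPUS-MT model through Hugging Face transformers.
# NOT Mozilla's actual pipeline; it just demonstrates that
# translation can run entirely on local hardware, no cloud calls.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-it")
result = translator("The bird flew over the fence.")
print(result[0]["translation_text"])
```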

@mattjhayes the other one is automatic crash detection. We've got a bot using deep learning techniques that automatically sifts through our crash reporting data and files a bug when it finds something suspicious. Not perfect, it has both false positives and false negatives, but it greatly reduced the need to manually triage large amounts of crash reports. Again, it's a very simple solution that doesn't need an LLM, in spite of using similar underlying technology.
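
Our bot is more sophisticated than this, but the core idea can be sketched in a few lines: flag crash signatures whose daily volume spikes far above their historical baseline. Everything below (the signatures, the counts, the threshold) is invented for illustration:

```python
# Toy sketch of automated crash triage: flag crash signatures whose
# daily report count spikes well above their historical baseline.
# Signatures, counts and threshold are all invented for illustration.
from statistics import mean, stdev

history = {  # signature -> daily report counts over the past week
    "OOM | large": [120, 115, 130, 125, 118, 122, 119],
    "nsCSSFrameConstructor::ProcessChildren": [3, 2, 4, 3, 2, 3, 40],
}

def suspicious(counts: list[int], threshold: float = 3.0) -> bool:
    baseline, today = counts[:-1], counts[-1]
    mu, sigma = mean(baseline), stdev(baseline) or 1.0
    return (today - mu) / sigma > threshold  # simple z-score test

for signature, counts in history.items():
    if suspicious(counts):
        print(f"File a bug: '{signature}' spiked to {counts[-1]} reports")
```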

@gabrielesvelto

The vast investment in AI was intended to derail innovative climate solutions.

AI has sucked billions of investment dollars away from real problems & real solutions.

The source of all that funding is a red flag.

Petrostate despots are using it as cover for an international state surveillance platform.

Putin, OPEC & the Koch Network intend to keep oil consumers captive, or else.
https://www.cnbc.com/2025/08/27/saudi-arabia-wants-to-be-worlds-third-largest-ai-provider-humain.html

https://www.bloomberg.com/news/articles/2025-08-25/saudi-s-humain-launches-arabic-chatbot-with-islamic-values

https://www.bloomberg.com/news/articles/2025-08-25/saudi-s-humain-to-open-data-centers-with-us-chips-in-early-2026

https://www.bloomberg.com/news/articles/2024-11-06/saudis-plan-100-billion-ai-powerhouse-to-rival-uae-s-tech-hub

@gabrielesvelto Not only algorithmic solutions. The same goes for the other "use cases" of LLMs: people use them because search engines did not give them good answers or good (re)sources for their questions (either not any more, or never did), because they did not find good tutorials and boilerplate for their programming questions, and because they were drowned in low-quality, ad-infested click-bait instead of getting well-presented articles with expert knowledge and links to dig further.

It would be, and would have been, so much more valuable to develop further in those directions, to make good original content more omnipresent and better discoverable, … instead of letting "AI" do the discovery and synopsis in a very suboptimal way.

And the reason why I'm absolutely certain of this is the volume of investment sunk into artificial intelligence. We're talking about 500+ billion $ in 2024 alone. That kind of money finds an alternative solution to *every* problem. You don't even need to think about the technical aspects: with enough funding we can make pigs fly.

FYI this bit of news is very relevant to my post from yesterday:

https://finance.yahoo.com/news/coreweave-stock-sinks-as-insiders-sell-shares-at-very-rapid-pace-163455020.html

CoreWeave is one of the companies at the center of the AI bubble, and insiders are dumping their shares as fast as they possibly can. Nothing tells you "this industry is toast" more than insiders dumping stock on the market.

@hipsterelectron sorry to hear that. Yeah, fuzzy search is interesting, and the computing power of modern machines makes it very appealing. ripgrep can tear through data at tens of GiB per second on my box. Could that be used for more structured search across different file types? I'm sure it's possible, and it would surely beat sending stuff back and forth to a gigantic data center on the other side of the world.
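
As a rough sketch of what that could look like (the pattern and file type below are just examples), you can already drive ripgrep as a subprocess and get structured, per-file-type results out of its --json output mode:

```python
# Sketch: reuse ripgrep's raw speed for structured, per-filetype
# search by driving it as a subprocess in --json mode. The pattern
# and file type below are just examples.
import json
import subprocess

def search(pattern: str, filetype: str, path: str = "."):
    proc = subprocess.run(
        ["rg", "--json", "-t", filetype, pattern, path],
        capture_output=True, text=True,
    )
    for line in proc.stdout.splitlines():
        event = json.loads(line)
        if event["type"] == "match":  # skip begin/end/summary events
            data = event["data"]
            yield (data["path"]["text"], data["line_number"],
                   data["lines"]["text"].rstrip())

for path, lineno, text in search(r"fn\s+main", "rust"):
    print(f"{path}:{lineno}: {text}")
```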
@gabrielesvelto my emacsconf presentation last year https://emacsconf.org/2024/talks/regex/ mentions my research into a database structure that can execute NFAs directly against the index without touching the source. ripgrep is actually remarkably efficient against large monorepos like we had at Twitter because of its SIMD prefilter optimizations, but finite automata are insufficient on a theoretical level to enable using SIMD after the automaton has been invoked (so "a.*b" can't use a SIMD operation to match the "b"). Text search is also completely unable to parallelize across threads; my model addresses this and more.
@gabrielesvelto The question is ill-posed because an LLM doesn't generate code. It copies code the model was overfitted to and obfuscates the source to avoid getting sued.

Can an algorithm generate code? In some cases yes, but when that can be done reliably, in a way you can prove is reliable, it's a sign that you're writing code at the wrong level of abstraction. This is the problem high-level languages have been addressing for nearly half a century.

@gabrielesvelto

there is a dark/humorous corollary to the problem you've identified:

everything getting labelled #AI

even things not AI

when the decision makers are in the throes of mania, whatever it is just gets called AI to get approval or funding, and it gets the green light

call it malicious compliance with a mass hysteria that has no grounding in technical acumen

"we need a new door"

"don't have the funding"

"the door is AI: it senses when someone approaches, and slides open"

"approved!"

@gabrielesvelto computers are great at generating code in deterministic ways. After all, that's basically what a compiler is! We're leapfrogging over DSLs and metaprogramming for a shitty jackpot.
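
For instance, here's a minimal sketch of deterministic code generation from a tiny, made-up record spec; same spec in, same code out, every single time:

```python
# Minimal sketch of deterministic code generation: a tiny, made-up
# record spec is turned into a Python dataclass. Same spec in, same
# code out, every time. No LLM involved.
SPEC = {"name": "Point", "fields": {"x": "float", "y": "float"}}

def generate(spec: dict) -> str:
    lines = [
        "from dataclasses import dataclass",
        "",
        "@dataclass",
        f"class {spec['name']}:",
    ]
    lines += [f"    {name}: {type_}" for name, type_ in spec["fields"].items()]
    return "\n".join(lines)

code = generate(SPEC)
print(code)
exec(code)                   # the generated class is immediately usable
print(Point(x=1.0, y=2.0))   # -> Point(x=1.0, y=2.0)
```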

It also drives me nuts because I've always advocated for the end-user and for adding the cloud (and thus making the app unusable if the company folds) only as a last resort. Now it's so much harder to do this. The AI bubble is user-hostile.