Miguel Afonso Caetano
@remixtures@tldr.nettime.org · 2 weeks ago

"My gut instinct is that this is an industry-wide problem. Perplexity spent 164% of its revenue in 2024 between AWS, Anthropic and OpenAI. And one abstraction higher (as I'll get into), OpenAI spent 50% of its revenue on inference compute costs alone, and 75% of its revenue on training compute too (and ended up spending $9 billion to lose $5 billion). Yes, those numbers add up to more than 100%, that's my god damn point.

Large Language Models are too expensive, to the point that anybody funding an "AI startup" is effectively sending that money to Anthropic or OpenAI, who then immediately send that money to Amazon, Google or Microsoft, who are yet to show that they make any profit on selling it.

Please don't waste your breath saying "costs will come down." They haven't been, and they're not going to.

Despite categorically wrong boosters claiming otherwise, the cost of inference — everything that happens from when you put a prompt in to generate an output from a model — is increasing, in part thanks to the token-heavy generations necessary for "reasoning" models to generate their outputs, and with reasoning being the only way to get "better" outputs, they're here to stay (and continue burning shit tons of tokens).

This has a very, very real consequence."

https://www.wheresyoured.at/why-everybody-is-losing-money-on-ai/
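
A quick back-of-the-envelope check of the quoted figures (a minimal sketch in Python; the roughly $4 billion revenue number is only implied by the $9 billion spend and $5 billion loss, it is not stated in the post):

    # Figures as quoted; the implied revenue is spend minus loss.
    openai_spend_billion = 9.0
    openai_loss_billion = 5.0
    implied_revenue = openai_spend_billion - openai_loss_billion  # ~$4B

    inference_share = 0.50  # inference compute as a share of revenue (as quoted)
    training_share = 0.75   # training compute as a share of revenue (as quoted)
    compute_share = inference_share + training_share  # 1.25, i.e. 125% of revenue

    print(f"Implied revenue: ~${implied_revenue:.0f}B")
    print(f"Compute alone: {compute_share:.0%} of revenue "
          f"(~${compute_share * implied_revenue:.1f}B), before any other expense")

On those numbers, compute alone comes to roughly $5 billion against roughly $4 billion of revenue, which is the point being made when the post says 50% and 75% "add up to more than 100%."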

#AI #GenerativeAI #BusinessModels #LLMs #Chatbots #AIHype #AIBubble

