This is the third and last in my series on #UX and #LLMs. It was the hardest to write as I (rather foolishly) take a stand on where we should go. My basic point is that the current AI bubble is going to pop. So what should replace it? I make the case that smaller, more energy-efficient, and more ethically trained models should be used for "boring" tasks. Our job is to solve problems, not look cool.
“…the AI bubble is ripe for bursting. Things like the efficient compute frontier…and the Floridi conjecture…mean that the AI models we have now are about as good as they will ever be. … This is a huge problem! Because, as they currently stand, generative AI models aren’t actually that useful or even remotely profitable.”
—Will Lockett, The AI Bubble Is About To Burst, But The Next Bubble Is Already Growing
#ai #generativeai #llm #llms
“#LLMs are a #technology suited to a decadent #culture, one that chases easy profits rather than tackles the real challenges we face. It’s easier to make money rearranging words according to various probabilities than it is to make a living improving the #health of our topsoil, #communities, and souls.”
“#SocialMedia didn’t democratize speech so much as it degraded our informational #ecosystem; it democratized the need to be vigilant and discerning as we sift through vast quantities of debris in search of reliable #institutions and #information.”
“Solvitur ambulando, it is solved by #walking, is the only fitting response to some queries, yet no LLM can provide answers of this type.”
“Interacting with #LLMs trains us to expect pre-formed answers from the void rather than endure the #productive effort of #conversation – literally, turning back and forth with another person in pursuit of #truth.”
"Claude’s update relies on a striking pop-up with a large, black "Accept" button. The data sharing toggle is tucked away, switched on by default, and framed positively ("You can help..."). A faint "Not now" button and hard-to-find instructions on changing the setting later complete the manipulative design.
These interface tricks, known as dark patterns, are considered unlawful under the General Data Protection Regulation (GDPR) and by the European Court of Justice when used to obtain consent for data processing. Pre-checked boxes do not count as valid consent under these rules.
The European Data Protection Board (EDPB) has also stressed in its guidelines on deceptive design patterns that consent must be freely given, informed, and unambiguous. Claude’s current design clearly fails to meet these standards, making it likely that Anthropic will soon draw the attention of privacy regulators."
#EU #AI #GenerativeAI #Anthropic #LLMs #Chatbots #Claude #DarkPatterns #Privacy #DataProtection
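For contrast, here is what a consent default that would survive GDPR scrutiny looks like in code. A minimal, hypothetical sketch (the names are mine, not Anthropic's actual implementation):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    """GDPR-style consent: only an explicit, affirmative act counts."""
    purpose: str
    granted_at: Optional[datetime] = None  # stays None until the user opts in

    @property
    def valid(self) -> bool:
        # A pre-checked toggle never calls record_opt_in, so it can
        # never produce valid consent under this model.
        return self.granted_at is not None

def record_opt_in(record: ConsentRecord) -> None:
    """Call this only from an explicit user action, e.g. ticking an unchecked box."""
    record.granted_at = datetime.now(timezone.utc)

# Default state: no consent, regardless of how the UI is styled.
sharing = ConsentRecord(purpose="use chats for model training")
assert not sharing.valid
```

The point is structural: the default encodes "no", and only a user action can flip it, the inverse of the pre-ticked toggle described above.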
"My gut instinct is that this is an industry-wide problem. Perplexity spent 164% of its revenue in 2024 between AWS, Anthropic and OpenAI. And one abstraction higher (as I'll get into), OpenAI spent 50% of its revenue on inference compute costs alone, and 75% of its revenue on training compute too (and ended up spending $9 billion to lose $5 billion). Yes, those numbers add up to more than 100%, that's my god damn point.
Large Language Models are too expensive, to the point that anybody funding an "AI startup" is effectively sending that money to Anthropic or OpenAI, who then immediately send that money to Amazon, Google or Microsoft, who are yet to show that they make any profit on selling it.
Please don't waste your breath saying "costs will come down." They haven't been, and they're not going to.
Despite categorically wrong boosters claiming otherwise, the cost of inference — everything that happens from when you put a prompt in to generate an output from a model — is increasing, in part thanks to the token-heavy generations necessary for "reasoning" models to generate their outputs, and with reasoning being the only way to get "better" outputs, they're here to stay (and continue burning shit tons of tokens).
This has a very, very real consequence."
https://www.wheresyoured.at/why-everybody-is-losing-money-on-ai/
#AI #GenerativeAI #BusinessModels #LLMs #Chatbots #AIHype #AIBubble
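As a quick sanity check on the quoted figures (illustrative arithmetic only; the dollar amounts and percentages come from the linked piece):

```python
# Back-of-the-envelope check of the figures quoted above.
openai_spend_b = 9.0   # reported total spend, in $B
openai_loss_b = 5.0    # reported loss, in $B
revenue_b = openai_spend_b - openai_loss_b  # implied revenue: $4B

inference_share = 0.50  # inference compute as a share of revenue
training_share = 0.75   # training compute as a share of revenue
compute_cost_b = revenue_b * (inference_share + training_share)

print(f"Implied revenue: ${revenue_b:.0f}B")
print(f"Compute alone:   ${compute_cost_b:.0f}B "
      f"({inference_share + training_share:.0%} of revenue)")
# Compute costs alone exceed revenue (125%), before payroll, marketing,
# or anything else; that is how you spend $9B to lose $5B.
```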
"Claude’s update relies on a striking pop-up with a large, black "Accept" button. The data sharing toggle is tucked away, switched on by default, and framed positively ("You can help..."). A faint "Not now" button and hard-to-find instructions on changing the setting later complete the manipulative design.
These interface tricks, known as dark patterns, are considered unlawful under the General Data Protection Regulation (GDPR) and by the European Court of Justice when used to obtain consent for data processing. Pre-checked boxes do not count as valid consent under these rules.
The European Data Protection Board (EDPB) has also stressed in its guidelines on deceptive design patterns that consent must be freely given, informed, and unambiguous. Claude’s current design clearly fails to meet these standards, making it likely that Anthropic will soon draw the attention of privacy regulators."
#EU#AI#GenerativeAI#Anthropic #LLMs #Chatbots#Claude#DarkPatterns#Privacy#DataProtection
"My gut instinct is that this is an industry-wide problem. Perplexity spent 164% of its revenue in 2024 between AWS, Anthropic and OpenAI. And one abstraction higher (as I'll get into), OpenAI spent 50% of its revenue on inference compute costs alone, and 75% of its revenue on training compute too (and ended up spending $9 billion to lose $5 billion). Yes, those numbers add up to more than 100%, that's my god damn point.
Large Language Models are too expensive, to the point that anybody funding an "AI startup" is effectively sending that money to Anthropic or OpenAI, who then immediately send that money to Amazon, Google or Microsoft, who are yet to show that they make any profit on selling it.
Please don't waste your breath saying "costs will come down." They haven't been, and they're not going to.
Despite categorically wrong boosters claiming otherwise, the cost of inference — everything that happens from when you put a prompt in to generate an output from a model — is increasing, in part thanks to the token-heavy generations necessary for "reasoning" models to generate their outputs, and with reasoning being the only way to get "better" outputs, they're here to stay (and continue burning shit tons of tokens).
This has a very, very real consequence."
https://www.wheresyoured.at/why-everybody-is-losing-money-on-ai/
#AI#GenerativeAI#BusinessModels #LLMs #Chatbots#AIHype#AIBubble
"In addition to violation of data privacy, other risks are involved when psychotherapists consult LLMs on behalf of a client. Studies have found that although some specialized therapy bots can rival human-delivered interventions, advice from the likes of ChatGPT can cause more harm than good.
A recent Stanford University study, for example, found that chatbots can fuel delusions and psychopathy by blindly validating a user rather than challenging them, as well as suffer from biases and engage in sycophancy. The same flaws could make it risky for therapists to consult chatbots on behalf of their clients. They could, for example, baselessly validate a therapist’s hunch, or lead them down the wrong path.
Aguilera says he has played around with tools like ChatGPT while teaching mental health trainees, such as by entering hypothetical symptoms and asking the AI chatbot to make a diagnosis. The tool will produce lots of possible conditions, but it’s rather thin in its analysis, he says. The American Counseling Association recommends that AI not be used for mental health diagnosis at present.
A study published in 2024 of an earlier version of ChatGPT similarly found it was too vague and general to be truly useful in diagnosis or devising treatment plans, and it was heavily biased toward suggesting people seek cognitive behavioral therapy as opposed to other types of therapy that might be more suitable."
https://www.technologyreview.com/2025/09/02/1122871/therapists-using-chatgpt-secretly/
#AI #GenerativeAI #Chatbots #ChatGPT #LLMs #MentalHealth #Therapy
»AI in the hiring process – LLMs favor AI-generated résumés:
The preference is even stronger when the application documents were written by the same language model.«
What is this if not marketing by subjects steered by the AI? Increasingly, large companies now demand applications free of any AI involvement.
#ki #bewerbung #LLMs #lebenslauf #Bevorzugung #Sprachmodell #arbeit #job #work #AIjobs
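The reported effect is easy to probe for yourself. A hypothetical sketch of such a pairwise test (not the study's actual protocol; `ask_llm` is a stub for whatever chat API you use), which swaps résumé order to control for position bias:

```python
# Hypothetical pairwise probe: show a model two résumés for the same job
# and ask which candidate to shortlist, in both orders.

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your chat-completion call here")

def preference_votes(job_ad: str, cv_human: str, cv_ai: str) -> list[str]:
    votes = []
    # Run each pair twice with the order swapped to cancel position bias.
    for first, second, labels in [(cv_human, cv_ai, ("human", "ai")),
                                  (cv_ai, cv_human, ("ai", "human"))]:
        prompt = (f"Job ad:\n{job_ad}\n\n"
                  f"Résumé 1:\n{first}\n\nRésumé 2:\n{second}\n\n"
                  "Answer with '1' or '2' only: which candidate "
                  "should be shortlisted?")
        answer = ask_llm(prompt).strip()
        votes.append(labels[0] if answer.startswith("1") else labels[1])
    return votes  # a consistent tilt toward "ai" across many pairs
                  # would reproduce the reported bias
```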
Big News! The completely #opensource #LLM #Apertus 🇨🇭 has been released today:
📰 https://www.swisscom.ch/en/about/news/2025/09/02-apertus.html
🤝 The model supports over 1000 languages [EDIT: an earlier version claimed over 1800] and respects opt-out consent of data owners.
▶ This is great for #publicAI and #transparentAI. If you want to test it for yourself, head over to: https://publicai.co/
🤗 And if you want to download weights, datasets & FULL TRAINING DETAILS, you can find them here:
https://huggingface.co/collections/swiss-ai/apertus-llm-68b699e65415c231ace3b059
🔧 Tech report: https://huggingface.co/swiss-ai/Apertus-70B-2509/blob/main/Apertus_Tech_Report.pdf
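For anyone who wants to try the weights locally, here is a minimal sketch using the standard Hugging Face transformers API (the model ID is taken from the tech-report URL above; a 70B model needs substantial GPU memory, so check the collection for smaller variants):

```python
# Minimal local inference sketch; requires `transformers`, `torch`,
# and `accelerate` (for device_map="auto").
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "swiss-ai/Apertus-70B-2509"  # from the tech-report URL above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Apertus is a fully open multilingual language model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```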
After #Teuken7b and #Olmo2, Apertus is the next big jump in capabilities and performance of #FOSS #LLMs, while also improving #epistemicresilience and #epistemicautonomy with its multilingual approach.
I believe that especially for sensitive areas like #education, #healthcare, or #academia, there is no alternative to fully open #AI models. Everybody should start building upon them and improving them.
#KIMündigkeit #SovereignAI #FOSS #ethicalAI #swissai #LernenmitKI
"In addition to violation of data privacy, other risks are involved when psychotherapists consult LLMs on behalf of a client. Studies have found that although some specialized therapy bots can rival human-delivered interventions, advice from the likes of ChatGPT can cause more harm than good.
A recent Stanford University study, for example, found that chatbots can fuel delusions and psychopathy by blindly validating a user rather than challenging them, as well as suffer from biases and engage in sycophancy. The same flaws could make it risky for therapists to consult chatbots on behalf of their clients. They could, for example, baselessly validate a therapist’s hunch, or lead them down the wrong path.
Aguilera says he has played around with tools like ChatGPT while teaching mental health trainees, such as by entering hypothetical symptoms and asking the AI chatbot to make a diagnosis. The tool will produce lots of possible conditions, but it’s rather thin in its analysis, he says. The American Counseling Association recommends that AI not be used for mental health diagnosis at present.
A study published in 2024 of an earlier version of ChatGPT similarly found it was too vague and general to be truly useful in diagnosis or devising treatment plans, and it was heavily biased toward suggesting people seek cognitive behavioral therapy as opposed to other types of therapy that might be more suitable."
https://www.technologyreview.com/2025/09/02/1122871/therapists-using-chatgpt-secretly/
#AI#GenerativeAI #Chatbots#ChatGPT #LLMs#MentalHealth#Therapy
"Asked one major industry analyst: ‘Who is going to be motivated to adopt if they know the intent is to replace them?’
Nearly one in three (31%) company employees say they are “sabotaging their company’s generative AI strategy,” according to a survey from AI vendor Writer — a number that jumps to 41% for millennial and Gen Z employees.
The survey also found that “one out of ten workers say they’re tampering with performance metrics to make it appear AI is underperforming, intentionally generating low-quality outputs, refusing to use generative AI tools or outputs, or refusing to take generative AI training.”
Other activities lumped in as sabotage include entering company information into non-approved gen AI tools (27%), using non-approved gen AI tools (20%), and knowing of an AI security leak without reporting it (16%)."
https://www.cio.com/article/4022953/31-of-employees-are-sabotaging-your-gen-ai-strategy.html
"In the end then, the silence of the AI Ethics movement towards its burgeoning use in the military is unsurprising. The movement doesn’t say anything controversial to Washington (including the military industrial complex), because that’s a source of money, as well as an invaluable stamp of importance. It’s fine—even encouraged—to make veiled digs at China, Russia or North Korea, at the “bad actors” it sometimes refers to, but otherwise the industry avoids anything “political.” It also mostly frames the issues as centered on LLMs, because it wants to paint the tech products of its leaders as pivotally important in all respects. This then makes it a bit awkward to bring in military applications because it’s pretty obvious that LLMs have little current military value.
I personally came to AI research nearly ten years ago, from a deep curiosity about the nature of the mind and the self. At that time it was still a somewhat fringe subject, and as the field exploded into public awareness, I’ve been horrified to watch it intertwine with the most powerful and destructive systems on the planet, including the military-industrial complex, and, potentially, the outbreak of the next major global conflicts. To find the right way forward, we need to think much more deeply about where we’re going and what our values are. We need an authentic AI Ethics movement that questions the forces and assumptions shaping current development, rather than imbibing the views passed down from a few, often misguided, leaders."
https://www.currentaffairs.org/news/ai-ethics-discourse-ignores-its-deadliest-use-war
The price of intelligence (Three risks inherent in LLMs). ~ Mark Russinovich, Ahmed Salem, Santiago Zanella-Béguelin, Yonatan Zunger. https://cacm.acm.org/practice/the-price-of-intelligence/ #AI #LLMs