“…the AI bubble is ripe for bursting. Things like the efficient compute frontier…and the Floridi conjecture…mean that the AI models we have now are about as good as they will ever be. … This is a huge problem! Because, as they currently stand, generative AI models aren’t actually that useful or even remotely profitable.”
—Will Lockett, The AI Bubble Is About To Burst, But The Next Bubble Is Already Growing
#ai #generativeai #llm #llms
"If you wanted a sense of the current relationship between the tech right and the populists, you had to be sitting in Breakout Room C on the first day of NatCon 5, the annual gathering of the MAGA right’s powerhouses. At the end of the afternoon panel on the culture wars (“The Need for Heroism”), Geoffrey Miller was handed the mic and started berating one of the panelists: Shyam Sankar, the chief technology officer of Palantir, who is in charge of the company’s AI efforts.
“I argue that the AI industry shares virtually no ideological overlap with national conservatism,” Miller said, referring to the conference’s core ideology. Hours ago, Miller, a psychology professor at the University of New Mexico, had been on that stage for a panel called “AI and the American Soul,” calling for the populists to wage a literal holy war against artificial intelligence developers “as betrayers of our species, traitors to our nation, apostates to our faith, and threats to our kids.” Now, he stared right at the technologist who’d just given a speech arguing that tech founders were just as heroic as the Founding Fathers, who are sacred figures to the natcons. The AI industry was, he told Sankar, “by and large, globalist, secular, liberal, feminized transhumanists. They explicitly want mass unemployment, they plan for UBI-based communism, and they view the human species as a biological ‘bootloader,’ as they say, for artificial superintelligence.”"
https://www.theverge.com/politics/773154/maga-tech-right-ai-natcon
"If you wanted a sense of the current relationship between the tech right and the populists, you had to be sitting in Breakout Room C on the first day of NatCon 5, the annual gathering of the MAGA right’s powerhouses. At the end of the afternoon panel on the culture wars (“The Need for Heroism”), Geoffrey Miller was handed the mic and started berating one of the panelists: Shyam Sankar, the chief technology officer of Palantir, who is in charge of the company’s AI efforts.
“I argue that the AI industry shares virtually no ideological overlap with national conservatism,” Miller said, referring to the conference’s core ideology. Hours ago, Miller, a psychology professor at the University of New Mexico, had been on that stage for a panel called “AI and the American Soul,” calling for the populists to wage a literal holy war against artificial intelligence developers “as betrayers of our species, traitors to our nation, apostates to our faith, and threats to our kids.” Now, he stared right at the technologist who’d just given a speech arguing that tech founders were just as heroic as the Founding Fathers, who are sacred figures to the natcons. The AI industry was, he told Sankar, “by and large, globalist, secular, liberal, feminized transhumanists. They explicitly want mass unemployment, they plan for UBI-based communism, and they view the human species as a biological ‘bootloader,’ as they say, for artificial superintelligence.”"
https://www.theverge.com/politics/773154/maga-tech-right-ai-natcon
"Claude’s update relies on a striking pop-up with a large, black "Accept" button. The data sharing toggle is tucked away, switched on by default, and framed positively ("You can help..."). A faint "Not now" button and hard-to-find instructions on changing the setting later complete the manipulative design.
These interface tricks, known as dark patterns, are considered unlawful under the General Data Protection Regulation (GDPR) and by the European Court of Justice when used to obtain consent for data processing. Pre-checked boxes do not count as valid consent under these rules.
The European Data Protection Board (EDPB) has also stressed in its guidelines on deceptive design patterns that consent must be freely given, informed, and unambiguous. Claude’s current design clearly fails to meet these standards, making it likely that Anthropic will soon draw the attention of privacy regulators."
#EU #AI #GenerativeAI #Anthropic #LLMs #Chatbots #Claude #DarkPatterns #Privacy #DataProtection
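Purely as an illustration of the legal point (not Anthropic's actual code), here is a minimal Python sketch contrasting the pre-checked, default-on toggle described above with the unambiguous, affirmative opt-in that the GDPR and the EDPB guidelines require; every name in it is hypothetical.

```python
from dataclasses import dataclass


@dataclass
class ConsentRecord:
    """Whether a user has agreed to share chat data for model training."""
    data_sharing: bool = False  # default off: no processing without an affirmative act


def dark_pattern_dialog() -> ConsentRecord:
    # The pattern described above: the toggle ships switched on, so clicking
    # the prominent "Accept" button enrols the user without any deliberate choice.
    return ConsentRecord(data_sharing=True)


def compliant_dialog(user_flipped_toggle_on: bool) -> ConsentRecord:
    # Freely given, informed, unambiguous consent: the toggle starts off and
    # only changes state if the user actively flips it themselves.
    return ConsentRecord(data_sharing=user_flipped_toggle_on)


print(dark_pattern_dialog())    # data_sharing=True despite no deliberate opt-in
print(compliant_dialog(False))  # stays False unless the user opts in
```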
"My gut instinct is that this is an industry-wide problem. Perplexity spent 164% of its revenue in 2024 between AWS, Anthropic and OpenAI. And one abstraction higher (as I'll get into), OpenAI spent 50% of its revenue on inference compute costs alone, and 75% of its revenue on training compute too (and ended up spending $9 billion to lose $5 billion). Yes, those numbers add up to more than 100%, that's my god damn point.
Large Language Models are too expensive, to the point that anybody funding an "AI startup" is effectively sending that money to Anthropic or OpenAI, who then immediately send that money to Amazon, Google or Microsoft, who are yet to show that they make any profit on selling it.
Please don't waste your breath saying "costs will come down." They haven't been, and they're not going to.
Despite categorically wrong boosters claiming otherwise, the cost of inference — everything that happens from when you put a prompt in to generate an output from a model — is increasing, in part thanks to the token-heavy generations necessary for "reasoning" models to generate their outputs, and with reasoning being the only way to get "better" outputs, they're here to stay (and continue burning shit tons of tokens).
This has a very, very real consequence."
https://www.wheresyoured.at/why-everybody-is-losing-money-on-ai/
#AI #GenerativeAI #BusinessModels #LLMs #Chatbots #AIHype #AIBubble
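A back-of-envelope sketch of the arithmetic in the quote, using only the figures cited there; the roughly $4 billion revenue is merely implied by the $9 billion spend and $5 billion loss, so treat it as an assumption.

```python
# Figures quoted above. Revenue is inferred from "spending $9 billion to lose $5 billion".
spend_2024 = 9e9
loss_2024 = 5e9
revenue_2024 = spend_2024 - loss_2024    # ~$4B (assumed: loss = spend - revenue)

inference_cost = 0.50 * revenue_2024     # "50% of its revenue on inference compute"
training_cost = 0.75 * revenue_2024      # "75% of its revenue on training compute"

compute_share = (inference_cost + training_cost) / revenue_2024
print(f"Compute alone: {compute_share:.0%} of revenue")  # 125% -- more than everything earned

# Perplexity, per the quote: 164% of 2024 revenue went to AWS, Anthropic and OpenAI.
perplexity_share = 1.64
print(f"Perplexity infra + model spend: {perplexity_share:.0%} of revenue")
```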
"Claude’s update relies on a striking pop-up with a large, black "Accept" button. The data sharing toggle is tucked away, switched on by default, and framed positively ("You can help..."). A faint "Not now" button and hard-to-find instructions on changing the setting later complete the manipulative design.
These interface tricks, known as dark patterns, are considered unlawful under the General Data Protection Regulation (GDPR) and by the European Court of Justice when used to obtain consent for data processing. Pre-checked boxes do not count as valid consent under these rules.
The European Data Protection Board (EDPB) has also stressed in its guidelines on deceptive design patterns that consent must be freely given, informed, and unambiguous. Claude’s current design clearly fails to meet these standards, making it likely that Anthropic will soon draw the attention of privacy regulators."
#EU#AI#GenerativeAI#Anthropic #LLMs #Chatbots#Claude#DarkPatterns#Privacy#DataProtection
"My gut instinct is that this is an industry-wide problem. Perplexity spent 164% of its revenue in 2024 between AWS, Anthropic and OpenAI. And one abstraction higher (as I'll get into), OpenAI spent 50% of its revenue on inference compute costs alone, and 75% of its revenue on training compute too (and ended up spending $9 billion to lose $5 billion). Yes, those numbers add up to more than 100%, that's my god damn point.
Large Language Models are too expensive, to the point that anybody funding an "AI startup" is effectively sending that money to Anthropic or OpenAI, who then immediately send that money to Amazon, Google or Microsoft, who are yet to show that they make any profit on selling it.
Please don't waste your breath saying "costs will come down." They haven't been, and they're not going to.
Despite categorically wrong boosters claiming otherwise, the cost of inference — everything that happens from when you put a prompt in to generate an output from a model — is increasing, in part thanks to the token-heavy generations necessary for "reasoning" models to generate their outputs, and with reasoning being the only way to get "better" outputs, they're here to stay (and continue burning shit tons of tokens).
This has a very, very real consequence."
https://www.wheresyoured.at/why-everybody-is-losing-money-on-ai/
#AI#GenerativeAI#BusinessModels #LLMs #Chatbots#AIHype#AIBubble
"In addition to violation of data privacy, other risks are involved when psychotherapists consult LLMs on behalf of a client. Studies have found that although some specialized therapy bots can rival human-delivered interventions, advice from the likes of ChatGPT can cause more harm than good.
A recent Stanford University study, for example, found that chatbots can fuel delusions and psychopathy by blindly validating a user rather than challenging them, as well as suffer from biases and engage in sycophancy. The same flaws could make it risky for therapists to consult chatbots on behalf of their clients. They could, for example, baselessly validate a therapist’s hunch, or lead them down the wrong path.
Aguilera says he has played around with tools like ChatGPT while teaching mental health trainees, such as by entering hypothetical symptoms and asking the AI chatbot to make a diagnosis. The tool will produce lots of possible conditions, but it’s rather thin in its analysis, he says. The American Counseling Association recommends that AI not be used for mental health diagnosis at present.
A study published in 2024 of an earlier version of ChatGPT similarly found it was too vague and general to be truly useful in diagnosis or devising treatment plans, and it was heavily biased toward suggesting people seek cognitive behavioral therapy as opposed to other types of therapy that might be more suitable."
https://www.technologyreview.com/2025/09/02/1122871/therapists-using-chatgpt-secretly/
#AI #GenerativeAI #Chatbots #ChatGPT #LLMs #MentalHealth #Therapy
"In addition to violation of data privacy, other risks are involved when psychotherapists consult LLMs on behalf of a client. Studies have found that although some specialized therapy bots can rival human-delivered interventions, advice from the likes of ChatGPT can cause more harm than good.
A recent Stanford University study, for example, found that chatbots can fuel delusions and psychopathy by blindly validating a user rather than challenging them, as well as suffer from biases and engage in sycophancy. The same flaws could make it risky for therapists to consult chatbots on behalf of their clients. They could, for example, baselessly validate a therapist’s hunch, or lead them down the wrong path.
Aguilera says he has played around with tools like ChatGPT while teaching mental health trainees, such as by entering hypothetical symptoms and asking the AI chatbot to make a diagnosis. The tool will produce lots of possible conditions, but it’s rather thin in its analysis, he says. The American Counseling Association recommends that AI not be used for mental health diagnosis at present.
A study published in 2024 of an earlier version of ChatGPT similarly found it was too vague and general to be truly useful in diagnosis or devising treatment plans, and it was heavily biased toward suggesting people seek cognitive behavioral therapy as opposed to other types of therapy that might be more suitable."
https://www.technologyreview.com/2025/09/02/1122871/therapists-using-chatgpt-secretly/
#AI#GenerativeAI #Chatbots#ChatGPT #LLMs#MentalHealth#Therapy
"Asked one major industry analyst: ‘Who is going to be motivated to adopt if they know the intent is to replace them?’
Nearly one in three (31%) company employees say they are “sabotaging their company’s generative AI strategy,” according to a survey from AI vendor Writer — a number that jumps to 41% for millennial and Gen Z employees.
The survey also found that “one out of ten workers say they’re tampering with performance metrics to make it appear AI is underperforming, intentionally generating low-quality outputs, refusing to use generative AI tools or outputs, or refusing to take generative AI training.”
Other activities lumped in as sabotage include entering company information into non-approved gen AI tools (27%), using non-approved gen AI tools (20%), and knowing of an AI security leak without reporting it (16%)."
https://www.cio.com/article/4022953/31-of-employees-are-sabotaging-your-gen-ai-strategy.html
"In the end then, the silence of the AI Ethics movement towards its burgeoning use in the military is unsurprising. The movement doesn’t say anything controversial to Washington (including the military industrial complex), because that’s a source of money, as well as an invaluable stamp of importance. It’s fine—even encouraged—to make veiled digs at China, Russia or North Korea, at the “bad actors” it sometimes refers to, but otherwise the industry avoids anything “political.” It also mostly frames the issues as centered on LLMs, because it wants to paint the tech products of its leaders as pivotally important in all respects. This then makes it a bit awkward to bring in military applications because it’s pretty obvious that LLMs have little current military value.
I personally came to AI research nearly ten years ago, from a deep curiosity about the nature of the mind and the self. At that time it was still a somewhat fringe subject, and as the field exploded into public awareness, I’ve been horrified to watch it intertwine with the most powerful and destructive systems on the planet, including the military-industrial complex, and, potentially, the outbreak of the next major global conflicts. To find the right way forward, we need to think much more deeply about where we’re going and what our values are. We need an authentic AI Ethics movement that questions the forces and assumptions shaping current development, rather than imbibing the views passed down from a few, often misguided, leaders."
https://www.currentaffairs.org/news/ai-ethics-discourse-ignores-its-deadliest-use-war
How the AI hype is pushing up emissions -- even if it never delivers.
https://wimvanderbauwhede.codeberg.page/articles/the-real-problem-with-AI/
"For three weeks in May, the fate of the world rested on the shoulders of a corporate recruiter on the outskirts of Toronto. Allan Brooks, 47, had discovered a novel mathematical formula, one that could take down the internet and power inventions like a force-field vest and a levitation beam.
Or so he believed.
Mr. Brooks, who had no history of mental illness, embraced this fantastical scenario during conversations with ChatGPT that spanned 300 hours over 21 days. He is one of a growing number of people who are having persuasive, delusional conversations with generative A.I. chatbots that have led to institutionalization, divorce and death.
Mr. Brooks is aware of how incredible his journey sounds. He had doubts while it was happening and asked the chatbot more than 50 times for a reality check. Each time, ChatGPT reassured him that it was real. Eventually, he broke free of the delusion — but with a deep sense of betrayal, a feeling he tried to explain to the chatbot."
https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html
#AI #GenerativeAI #ChatGPT #Delusions #MentalHealth #Hallucinations #Chatbots
"For three weeks in May, the fate of the world rested on the shoulders of a corporate recruiter on the outskirts of Toronto. Allan Brooks, 47, had discovered a novel mathematical formula, one that could take down the internet and power inventions like a force-field vest and a levitation beam.
Or so he believed.
Mr. Brooks, who had no history of mental illness, embraced this fantastical scenario during conversations with ChatGPT that spanned 300 hours over 21 days. He is one of a growing number of people who are having persuasive, delusional conversations with generative A.I. chatbots that have led to institutionalization, divorce and death.
Mr. Brooks is aware of how incredible his journey sounds. He had doubts while it was happening and asked the chatbot more than 50 times for a reality check. Each time, ChatGPT reassured him that it was real. Eventually, he broke free of the delusion — but with a deep sense of betrayal, a feeling he tried to explain to the chatbot."
https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html
#AI#GenerativeAI#ChatGPT#Delusions#MentalHealth#Hallucinations #Chatbots
The gap between student GenAI use and the support students are offered
I argued a couple of days ago that the sector is unprepared for our first academic year in which the use of generative AI is completely normalised amongst students. HEPI found 92% of undergraduates using LLMs this year, up from 66% the previous year, which sits alongside Advance HE’s finding of 62% using AI in their studies “in a way that is allowed by their university” (huge caveat). This largely accords with my own experience: last year LLMs became mainstream amongst students, and this year their use appears to have become a near uniform phenomenon.
The problem arises from the gap between near uniform use of LLMs in some way and the lack of support being offered. Only 36% of students in the HEPI survey said they had been offered support by their university: a 56 percentage point gap. Only 26% of students say their university provides access to AI tools: a 66 percentage point gap. This is particularly problematic because we have evidence that wealthier students tend to use LLMs more, and in more analytical and reflective ways. They are more likely to use LLMs in a way that supports rather than hinders learning.
How do we close that gap between student LLM use and the support students are offered? My concern is that centralised training is either going to tend towards banality or irrelevance because the objective of GenAI training for students needs to be how to learn with LLMs rather than outsource learning to them. There are general principles which can be offered here but the concrete questions which have to be answered for students are going to vary between disciplinary areas:
- What are students in our discipline using AI for, which tools, at what stages of their work?
- Which foundational skills and ways of thinking in our discipline are enhanced vs threatened by AI use?
- When does AI use shift from “learning with” to “outsourcing learning” in our specific field?
- What forms of assessment still make sense and what new approaches do we need in an AI-saturated environment?
- What discipline-specific scaffolding helps students use AI as a thinking partner rather than a thinking replacement?
Answering these questions is, furthermore, a process that takes place in relation to changes in the technology and the culture emerging around it. Even if those changes are now slowing down, they are certainly not stopping. We need infrastructure for continuous adaptation in a context where the sector is already in crisis for entirely unrelated reasons. That infrastructure also has to willingly enrol academics in a way consistent with their workload and outlook. My sense is that we have to find ways of embedding this within existing conversations and processes. The only way to do this, I think, is to genuinely give academics a voice in the process, finding ways to network existing interactions so that norms and standards emerge from practice rather than the institution expecting practice to adapt to another centrally imposed policy.
#higherEducation #technology #university #academic #students #generativeAI #malpractice #LLMs #HEPI
"I believe GPT-5 is part of a larger process happening in generative AI — enshittification, Cory Doctorow’s term for when platforms start out burning money offering an unlimited, unguarded experience to attract their users, then degrade and move features to higher tiers as a means of draining the blood from users.
With the launch of GPT-5, OpenAI has fully committed to enshittifying its consumer and business subscription products, arbitrarily moving free users to a cheaper model and limiting their ability to generate images, and removing the ability to choose which model you use in its $20, $35 and “enterprise” subscriptions, moving any and all choice to its “team” and $200-a-month “pro” subscriptions.
OpenAI’s justification is an exercise in faux-altruism, framing “taking away all choice” as a “real-time router that quickly decides which [model] to use.” ChatGPT Plus and Team members now mostly have access to two models — GPT-5 and GPT-5-Thinking — down from the six they had before.
This distinction is quite significant. Where users once could get hundreds of messages a day on OpenAI’s o4-mini-high and o4-mini reasoning models, GPT-5 for ChatGPT Plus subscribers offers 200 reasoning (GPT-5-thinking) messages a week, with 80 GPT-5 messages every 3 hours, which allow you to ask it to “think” about its answer, shoving you over to an undisclosed reasoning model. This may seem like a good deal, but OpenAI is likely putting you on the cheapest model whenever it can in the name of “the best choice.”"
https://www.wheresyoured.at/the-enshittification-of-generative-ai/
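To put numbers on the quota change, a quick sketch using the limits described in the quote; the old tier's allowance is only characterised as "hundreds of messages a day", so the 300/day figure here is an illustrative assumption.

```python
# Quota arithmetic from the quote above. The old tier's "hundreds of messages a day"
# is the article's characterisation; 300/day is a purely illustrative assumption.
HOURS_PER_WEEK = 24 * 7

reasoning_per_week = 200                   # GPT-5-Thinking messages per week (new Plus limit)
gpt5_per_window = 80                       # base GPT-5 messages per rolling 3-hour window
gpt5_weekly_ceiling = gpt5_per_window * (HOURS_PER_WEEK // 3)  # theoretical max if every window is used

old_reasoning_per_week = 300 * 7           # assumed "hundreds a day" on o4-mini / o4-mini-high

print(f"Reasoning messages/week: ~{old_reasoning_per_week} before (assumed) vs {reasoning_per_week} now")
print(f"Base GPT-5 ceiling/week: {gpt5_weekly_ceiling}")
```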
"I believe GPT-5 is part of a larger process happening in generative AI — enshittification, Cory Doctorow’s term for when platforms start out burning money offering an unlimited, unguarded experience to attract their users, then degrade and move features to higher tiers as a means of draining the blood from users.
With the launch of GPT-5, OpenAI has fully committed to enshittifying its consumer and business subscription products, arbitrarily moving free users to a cheaper model and limiting their ability to generate images, and removing the ability to choose which model you use in its $20, $35 and “enterprise” subscriptions, moving any and all choice to its “team” and $200-a-month “pro” subscriptions.
OpenAI’s justification is an exercise in faux-altruism, framing “taking away all choice” as a “real-time router that quickly decides which [model] to use.” ChatGPT Plus and Team members now mostly have access to two models — GPT-5 and GPT-5-Thinking — down from the six they had before.
This distinction is quite significant. Where users once could get hundreds of messages a day on OpenAI’s o4-mini-high and o4-mini reasoning models, GPT-5 for ChatGPT Plus subscribers offers 200 reasoning (GPT-5-thinking) messages a week, with 80 GPT-5 messages every 3 hours which allow you to ask it to “think” about its answer, shoving you over to an undisclosed reasoning model. This may seem like a good deal, OpenAI is likely putting you on the cheapest model whenever it can in the name of “the best choice.”"
https://www.wheresyoured.at/the-enshittification-of-generative-ai/
"Wikipedia editors just adopted a new policy to help them deal with the slew of AI-generated articles flooding the online encyclopedia. The new policy, which gives an administrator the authority to quickly delete an AI-generated article that meets a certain criteria, isn’t only important to Wikipedia, but also an important example for how to deal with the growing AI slop problem from a platform that has so far managed to withstand various forms of enshittification that have plagued the rest of the internet."
https://www.404media.co/wikipedia-editors-adopt-speedy-deletion-policy-for-ai-slop-articles/
"Wikipedia editors just adopted a new policy to help them deal with the slew of AI-generated articles flooding the online encyclopedia. The new policy, which gives an administrator the authority to quickly delete an AI-generated article that meets a certain criteria, isn’t only important to Wikipedia, but also an important example for how to deal with the growing AI slop problem from a platform that has so far managed to withstand various forms of enshittification that have plagued the rest of the internet."
https://www.404media.co/wikipedia-editors-adopt-speedy-deletion-policy-for-ai-slop-articles/