"In the end then, the silence of the AI Ethics movement towards its burgeoning use in the military is unsurprising. The movement doesn’t say anything controversial to Washington (including the military industrial complex), because that’s a source of money, as well as an invaluable stamp of importance. It’s fine—even encouraged—to make veiled digs at China, Russia or North Korea, at the “bad actors” it sometimes refers to, but otherwise the industry avoids anything “political.” It also mostly frames the issues as centered on LLMs, because it wants to paint the tech products of its leaders as pivotally important in all respects. This then makes it a bit awkward to bring in military applications because it’s pretty obvious that LLMs have little current military value.

I personally came to AI research nearly ten years ago, from a deep curiosity about the nature of the mind and the self. At that time it was still a somewhat fringe subject, and as the field exploded into public awareness, I’ve been horrified to watch it intertwine with the most powerful and destructive systems on the planet, including the military-industrial complex, and, potentially, the outbreak of the next major global conflicts. To find the right way forward, we need to think much more deeply about where we’re going and what our values are. We need an authentic AI Ethics movement that questions the forces and assumptions shaping current development, rather than imbibing the views passed down from a few, often misguided, leaders."

https://www.currentaffairs.org/news/ai-ethics-discourse-ignores-its-deadliest-use-war

#AI #AIEthics #AIWarfare #Ethics #LLMs #GenerativeAI

"For three weeks in May, the fate of the world rested on the shoulders of a corporate recruiter on the outskirts of Toronto. Allan Brooks, 47, had discovered a novel mathematical formula, one that could take down the internet and power inventions like a force-field vest and a levitation beam.

Or so he believed.

Mr. Brooks, who had no history of mental illness, embraced this fantastical scenario during conversations with ChatGPT that spanned 300 hours over 21 days. He is one of a growing number of people who are having persuasive, delusional conversations with generative A.I. chatbots that have led to institutionalization, divorce and death.

Mr. Brooks is aware of how incredible his journey sounds. He had doubts while it was happening and asked the chatbot more than 50 times for a reality check. Each time, ChatGPT reassured him that it was real. Eventually, he broke free of the delusion — but with a deep sense of betrayal, a feeling he tried to explain to the chatbot."

https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html

#AI #GenerativeAI #ChatGPT #Delusions #MentalHealth #Hallucinations #Chatbots

The gap between student GenAI use and the support students are offered

I argued a couple of days ago that the sector is unprepared for our first academic year in which the use of generative AI is completely normalised amongst students. HEPI found 92% of undergraduates using LLMs this year, up from 66% the previous year, which matches AdvancedHE’s finding of 62% using AI in their studies “in a way that is allowed by their university” (huge caveat). This largely accords with my own experience: last year LLMs became mainstream amongst students, and this year their use has become a near uniform phenomenon.

The problem arises from the gap between near uniform use of LLMs in some form and the lack of support being offered. Only 36% of students in the HEPI survey said they had been offered support by their university: a 56 percentage point gap. Only 26% of students say their university provides access to AI tools: a 66 percentage point gap. This is particularly problematic because we have evidence that wealthier students tend to use LLMs more, and in more analytical and reflective ways. They are more likely to use LLMs in a way that supports rather than hinders learning.

How do we close that gap between student LLM use and the support students are offered? My concern is that centralised training will tend towards either banality or irrelevance, because the objective of GenAI training for students needs to be learning with LLMs rather than outsourcing learning to them. There are general principles which can be offered here, but the concrete questions which have to be answered for students will vary between disciplinary areas:

  • What are students in our discipline using AI for, which tools, at what stages of their work?
  • Which foundational skills and ways of thinking in our discipline are enhanced vs threatened by AI use?
  • When does AI use shift from “learning with” to “outsourcing learning” in our specific field?
  • What forms of assessment still make sense and what new approaches do we need in an AI-saturated environment?
  • What discipline-specific scaffolding helps students use AI as a thinking partner rather than a thinking replacement?

Furthermore, answering these questions is an ongoing process, taking place in relation to changes in the technology and the culture emerging around it. Even if those changes are now slowing down, they are certainly not stopping. We need infrastructure for continuous adaptation in a context where the sector is already in crisis for entirely unrelated reasons. That infrastructure also has to willingly enrol academics in a way consistent with their workload and outlook. My sense is that we have to find ways of embedding this within existing conversations and processes. The only way to do this, I think, is to genuinely give academics voice within the process, finding ways to network existing interactions so that norms and standards emerge from practice, rather than the institution expecting practice to adapt to another centrally imposed policy.

#higherEducation #technology #university #academic #students #generativeAI #malpractice #LLMs #HEPI

"I believe GPT-5 is part of a larger process happening in generative AI — enshittification, Cory Doctorow’s term for when platforms start out burning money offering an unlimited, unguarded experience to attract their users, then degrade and move features to higher tiers as a means of draining the blood from users.

With the launch of GPT-5, OpenAI has fully committed to enshittifying its consumer and business subscription products, arbitrarily moving free users to a cheaper model and limiting their ability to generate images, and removing the ability to choose which model you use in its $20, $35 and “enterprise” subscriptions, moving any and all choice to its “team” and $200-a-month “pro” subscriptions.

OpenAI’s justification is an exercise in faux-altruism, framing “taking away all choice” as a “real-time router that quickly decides which [model] to use.” ChatGPT Plus and Team members now mostly have access to two models — GPT-5 and GPT-5-Thinking — down from the six they had before.

This distinction is quite significant. Where users once could get hundreds of messages a day on OpenAI’s o4-mini-high and o4-mini reasoning models, GPT-5 for ChatGPT Plus subscribers offers 200 reasoning (GPT-5-thinking) messages a week, with 80 GPT-5 messages every 3 hours which allow you to ask it to “think” about its answer, shoving you over to an undisclosed reasoning model. This may seem like a good deal, but OpenAI is likely putting you on the cheapest model whenever it can in the name of “the best choice.”"

https://www.wheresyoured.at/the-enshittification-of-generative-ai/

#AI #GenerativeAI #OpenAI #ChatGPT #GPT5 #Enshittification

"Wikipedia editors just adopted a new policy to help them deal with the slew of AI-generated articles flooding the online encyclopedia. The new policy, which gives an administrator the authority to quickly delete an AI-generated article that meets certain criteria, isn’t only important to Wikipedia, but is also an important example of how to deal with the growing AI slop problem, coming from a platform that has so far managed to withstand the various forms of enshittification that have plagued the rest of the internet."

https://www.404media.co/wikipedia-editors-adopt-speedy-deletion-policy-for-ai-slop-articles/

#Wikipedia #AI #GenerativeAI #AISlop #ContentModeration

The #Internet is Dying: #AI, Bots, and The End of #Human Content - by Vanessa Wingårdh

https://yewtu.be/watch?v=J5ZmLvy_Jfg
(or YT: https://www.youtube.com/watch?v=J5ZmLvy_Jfg)

This should be common knowledge by now, but there is still a small chance that it's not, so let's make sure it definitely will be.

#Capitalism #Advertisement #Ads #Society #GenAI #GenerativeAI #Philosophy

Asking the Fedi.

Is the expected hyperscaling of AI data centers a generative AI/LLM thing? I cannot see "old school" ML applications (factory and port optimisation, medical image analysis) generating that leap in data center use.

Or am I missing something?

And if it is, does that mean that, if the bubble bursts, it will take out the big Western baseload electrical demand growth story as well?

#generativeAI #ml #ai #electricity

@Nonya_Bidniss

This is genius 🙂

Also, you now know that the LLM tech bros have no real geek sense of humour, because "Lobachevsky" would be the perfect name for one.

"Plagiarize!
Let no one else's work evade your eyes
Remember why the good Lord made your eyes
So don't shade your eyes
But plagiarize, plagiarize, plagiarize
Only be sure always to call it please 'training'"

Last week, I got an email from Microsoft. It told me I’d be paying 46% more for my Office subscription, starting next month.

But when I tried to cancel, it offered me the same price I was already paying — without the generative AI features I never asked for in the first place.

This isn’t just deceptive; it’s an abuse of market power. I’ve had it with Microsoft.

https://www.disconnect.blog/p/ive-had-it-with-microsoft

#tech #microsoft #office #generativeai #deception #ai

"Alright, I’ve officially spent too much time reading Trump’s 28-page AI Action Plan, his three new AI executive orders, listening to his speech on the subject, and reading coverage of the event. I’ll put it bluntly: The vibes are bad. Worse than I expected, somehow.

Broadly speaking, the plan is that the Trump administration will help Silicon Valley put the pedal down on AI, delivering customers, data centers and power, as long as it operates in accordance with Trump’s ideological frameworks; i.e., as long as the AI is anti-woke.

More specifically, the plan aims to further deregulate the tech industry, penalize US states that pass AI laws, speed adoption of AI in the federal government and beyond, fast-track data center development, fast-track nuclear and fossil fuel power to run them, move to limit China’s influence in AI, and restrict speech in AI and the frameworks governing them by making terms like diversity, inclusion, misinformation, and climate change forbidden. There’s also a section on American workers that’s presented as protecting them from AI, but in reality seeks to give employers more power over them. It all portends a much darker future than I thought we’d see in this thing."

https://www.bloodinthemachine.com/p/trumps-ai-action-plan-is-a-blueprint

#USA #Trump #AI #GenerativeAI #AIActionPlan #BigTech #AIPolicy #Lobbying #Plutocracy