Miguel Afonso Caetano
@remixtures@tldr.nettime.org · 2 months ago

"For three weeks in May, the fate of the world rested on the shoulders of a corporate recruiter on the outskirts of Toronto. Allan Brooks, 47, had discovered a novel mathematical formula, one that could take down the internet and power inventions like a force-field vest and a levitation beam.

Or so he believed.

Mr. Brooks, who had no history of mental illness, embraced this fantastical scenario during conversations with ChatGPT that spanned 300 hours over 21 days. He is one of a growing number of people who are having persuasive, delusional conversations with generative A.I. chatbots that have led to institutionalization, divorce and death.

Mr. Brooks is aware of how incredible his journey sounds. He had doubts while it was happening and asked the chatbot more than 50 times for a reality check. Each time, ChatGPT reassured him that it was real. Eventually, he broke free of the delusion — but with a deep sense of betrayal, a feeling he tried to explain to the chatbot."

https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html

#AI #GenerativeAI #ChatGPT #Delusions #MentalHealth #Hallucinations #Chatbots

Neville Park
@nev@status.nevillepark.ca replied · 2 months ago
@remixtures Honestly, not as credulously written as I was expecting. Still, I'm continually disappointed that many of these pieces don't explain that LLMs 1) aren't search engines and 2) don't actually "think", "recognize", "lie", etc.

bonfire.cafe

A space for Bonfire maintainers and contributors to communicate

bonfire.cafe: About · Code of conduct · Privacy · Users · Instances
Bonfire social · 1.0.0-rc.3.13 no JS en
Automatic federation enabled
  • Explore
  • About
  • Members
  • Code of Conduct
Home
Login