Discussion
Strypey
@strypey@mastodon.nzoss.nz  ·  activity timestamp 3 weeks ago

Anyway, I'm looking forward to seeing all the hardened prohibitionists who attack drug law reform efforts - only because they're concerned about mental health, of course - these knee-jerk conservatives, publicans, newspaper editors and alcohol industry lobbyists, all lobbying to ban generative models;

Shock! Horror! AI causes psychosis!?!

They're not going to ignore the same risk when it comes from a different source, are they? :P

(3/3)

Strypey
@strypey@mastodon.nzoss.nz replied  ·  activity timestamp 3 weeks ago

"Another case involved a man with a history of a psychotic disorder falling in love with an AI chatbot and then seeking revenge because he believed the AI entity was killed by OpenAI. This led to an encounter with the police in which he was shot and killed."

#MarlynnWei M.D., J.D., 2025

https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis

Holy spitballs!

Prohibitionists:

Shock! Horror! ChatGPT use can cause DEATH!?!

#AI #MOLE #AIPsychosis

Kee Hinckley boosted
Lazarou Monkey Terror 🚀💙🌈
@Lazarou@mastodon.social  ·  activity timestamp 2 months ago

"Everything New is Old"

#AI #Eliza #ChatGPT #OpenAI #AIPsychosis

The illusion itself is not the core concern. Those discussing ChatGPT often
invoke its distant ancestor, the Eliza “psychotherapist” chatbot developed in
1967 that produced a similar illusion. By modern standards Eliza was
primitive: it generated responses via simple heuristics, often rephrasing
input as a question or making generic comments. Memorably, Eliza’s creator,
the computer scientist Joseph Weizenbaum, was surprised - and worried - by
how many users seemed to feel Eliza, in some sense, understood them. But
what modern chatbots produce is more insidious than the “Eliza effect”.
Eliza only reflected, but ChatGPT magnifies.
Lazarou Monkey Terror 🚀💙🌈
@Lazarou@mastodon.social  ·  activity timestamp 2 months ago

"He does not understand how humans are wired"

It's like Sam Altman is some blue-eyed psycho or something, isn't it?

"AI Psychosis" is Humans failing the Turing Test...

https://www.theguardian.com/commentisfree/2025/oct/28/ai-psychosis-chatgpt-openai-sam-altman

#SamAltman #AI #AIPsychosis #ChatGPT #OpenAI

the Guardian

AI psychosis is a growing danger. ChatGPT is moving in the wrong direction | Amandeep Jutla

OpenAI’s CEO has announced loosening the platform’s safety restrictions. He seems not to understand how humans are wired
Cory Doctorow boosted
JWcph, Radicalized By Decency
@jwcph@helvede.net  ·  activity timestamp 3 months ago

"- while a member of a gang stalking forum might have a delusion that is just different enough from yours that they seem foolish, or they accuse you of being paranoid, the chatbot's conception of gang stalking delusion is being informed, tuned and shaped by you. It's an improv partner, "yes-and"ing you into a life of paranoid terror." - @pluralistic

https://mastodon.social/@courtcan/115246325654985208

#AI #tech #AIpsychosis #LLM


bonfire.cafe

A space for Bonfire maintainers and contributors to communicate
