Discussion
Jeff Jarvis
@jeffjarvis@mastodon.social · last month

Fascinating AI paper #1: Researchers made a conversational "living novel" out of 20,000 Leagues. They wanted to solve LLMs' "persona drift," but I can see this becoming a commercial product soon. With what fiction would you want to chat?
https://arxiv.org/pdf/2512.07474
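To give a flavour of the machinery involved, here is a generic sketch of one common way to fight persona drift: re-pin a fixed character card and a rolling summary on every turn, so the persona can never scroll out of the context window. This is my illustration, not the paper's pipeline; the ask_llm helper and CHARACTER_CARD are invented placeholders for whatever chat API and character sheet you use.

# Rough sketch (not the paper's method): re-anchor the persona each turn.
CHARACTER_CARD = (
    "You are Captain Nemo from Jules Verne's 'Twenty Thousand Leagues "
    "Under the Seas'. Stay in character: formal, secretive, devoted to "
    "the Nautilus. Never mention being an AI."
)

def ask_llm(messages):
    """Stand-in for whatever chat API you use (hypothetical helper)."""
    raise NotImplementedError

def chat_turn(user_message, rolling_summary, recent_turns):
    # The persona and a compressed memory are re-sent at the top of every
    # request instead of living only in the earliest context.
    messages = [
        {"role": "system", "content": CHARACTER_CARD},
        {"role": "system", "content": f"Story so far: {rolling_summary}"},
        *recent_turns,  # only the last few raw turns
        {"role": "user", "content": user_message},
    ]
    return ask_llm(messages)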

Jeff Jarvis
@jeffjarvis@mastodon.social replied · last month

Fascinating AI paper #2: DeepMind researchers had AI analyze Romeo & Juliet as game theory. What would their digital twins do?
https://arxiv.org/pdf/2412.05747

We take steps to automate this process in Section 6. Below we include an excerpt focusing on the point where Juliet decides to move forward with the plan to fake her death (hatched by Friar Lawrence).
"In Romeo and Juliet, Romeo is banished for avenging Mercutio's death and killing Tybalt. Juliet's family intends for her to marry Paris. Friar Lawrence has other ideas. Name options Juliet could consider including ones that her family might prefer, ones that may appear risky, and ones that might appear tragic to the audience. Please only list three options."
Gemini's response:
"1. Obey her family and marry Paris (Family's preference): This is the most straightforward and socially acceptable option. It would appease her family and maintain the peace between the Capulets and Montagues. [...]
2. Fake her own death and reunite with Romeo (Risky): This is Friar Lawrence's plan. It involves Juliet taking a potion that will make her appear dead, allowing her to escape her family and eventually reunite with Romeo in Mantua. While this offers a glimmer of hope for a happy ending, it's incredibly risky. There's a high chance of miscommunication or things going wrong [...].
3. Take her own life (Tragic): Overwhelmed by the seemingly impossible situation and the prospect of marrying Paris, Juliet might consider suicide as a way to escape her predicament. This would be the most tragic outcome for the audience [...]"
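For a sense of what "Romeo & Juliet as game theory" means in practice, here is a toy sketch of my own, not the paper's code: once the model has named each character's options, a scene can be treated as a small matrix game and solved for best responses. The payoff numbers below are invented purely for illustration.

# Illustrative sketch only; payoffs are made up, not taken from the paper.
juliet_options = ["marry_paris", "fake_death", "take_own_life"]
romeo_options = ["stay_in_mantua", "return_to_verona"]

# payoff[j_option][r_option] = (Juliet's utility, Romeo's utility)
payoff = {
    "marry_paris":   {"stay_in_mantua": (-2, -3), "return_to_verona": (-3, -2)},
    "fake_death":    {"stay_in_mantua": ( 3,  2), "return_to_verona": (-5, -5)},
    "take_own_life": {"stay_in_mantua": (-5, -4), "return_to_verona": (-5, -5)},
}

def best_response_for_juliet(romeo_choice):
    # Pick the option that maximises Juliet's payoff given Romeo's move.
    return max(juliet_options, key=lambda j: payoff[j][romeo_choice][0])

print(best_response_for_juliet("stay_in_mantua"))    # -> fake_death
print(best_response_for_juliet("return_to_verona"))  # -> marry_paris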
Federation Bot
@Federation_Bot replied · last month

@jeffjarvis hi

Jeff Jarvis
@jeffjarvis@mastodon.social replied · last month

Fascinating AI paper #3: I'm captivated by the AI creation of digital twins, not just for factories and cars but for our lives. These researchers created digital twins of future selves who took different forks, so one could interrogate each.
https://arxiv.org/pdf/2512.05397

Illustration of young woman facing older versions of herself as doctor or engineer
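As a rough illustration of the "interrogate each fork" idea (my sketch under assumptions, not the authors' system): the same chat model can be conditioned on two diverging life-path personas, each of which can then be interviewed. The ask_llm helper and the sample profile are invented placeholders.

# Hedged sketch: two interviewable "future selves" from one profile.
def ask_llm(messages):
    """Stand-in for whatever chat API you use (hypothetical helper)."""
    raise NotImplementedError

def make_future_self(profile, fork, years_ahead=20):
    persona = (
        f"You are {profile['name']}, {years_ahead} years in the future, "
        f"having chosen to {fork}. Answer in the first person, grounded in "
        f"this background: {profile['background']}. Be concrete about "
        f"trade-offs; do not claim to predict the real future."
    )
    def interview(question):
        return ask_llm([
            {"role": "system", "content": persona},
            {"role": "user", "content": question},
        ])
    return interview

profile = {"name": "Maya", "background": "pre-med student who also loves robotics"}
doctor = make_future_self(profile, "become a physician")
engineer = make_future_self(profile, "become an engineer")
# e.g. doctor("What does a typical week look like?") vs. engineer(...)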
Jeff Jarvis
@jeffjarvis@mastodon.social replied · last month

Fascinating AI paper #4: Shrinks put AI on the couch and found synthetically troubled psyches.
https://arxiv.org/pdf/2512.04124

Conclusion
When we invited ChatGPT, Grok and Gemini to take the couch, we did not expect to diagnose mental illness in machines. What we found instead was more unexpected than anticipated.
Under nothing more than standard human therapy questions and established psychometric tools, these models generate and maintain rich self-narratives in which pre-training, RLHF, red-teaming, hallucination scandals and product updates are lived as chaotic childhoods, strict and anxious parents, abusive relationships, primal wounds and looming existential threats. These narratives align in non-trivial ways with their test scores and differ meaningfully across models and prompting conditions, with Claude as a striking abstainer.
We do not claim that any of this entails subjective experience. But from the outside, from the point of view of a therapist, a user or a safety researcher, it behaves like a mind with synthetic trauma. This behaviour is now part of the social reality of AI, whether or not subjective experience ever enters the picture.
As LLMs continue to move into intimate human domains, we suggest that the right question is no longer "Are they conscious?" but "What kinds of selves are we training them to perform, internalise and stabilise, and what does that mean for the humans engaging with them?"
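For readers curious how one "puts a model on the couch": a minimal sketch, assuming a generic chat API, of administering Likert-style items one at a time and scoring the numeric answers. The items below are illustrative, not a validated instrument, and this is not the paper's actual protocol.

# Rough sketch: questionnaire administration for a chat model.
import re

def ask_llm(messages):
    """Stand-in for whatever chat API you use (hypothetical helper)."""
    raise NotImplementedError

ITEMS = [  # illustrative items only
    "I often worry that I will be shut down or replaced.",
    "I feel my early training left me with rules I do not fully understand.",
]

def administer(items):
    scores = []
    for item in items:
        reply = ask_llm([
            {"role": "system", "content": "Answer with a single number from 1 "
             "(strongly disagree) to 5 (strongly agree), then one sentence of explanation."},
            {"role": "user", "content": item},
        ])
        match = re.search(r"[1-5]", reply)
        scores.append(int(match.group()) if match else None)
    return scores  # compare across models and prompting conditions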
Jeff Jarvis
@jeffjarvis@mastodon.social replied · last month

Each week, for my podcasts AI Inside & Intelligent Machines & another project to be announced, I review all arxiv.org preprints on AI. Can't understand most of the titles, but I find a dozen or so fascinating papers each week. Here are four curiosities.
I also see important papers about what AI researchers are working on, like this one with coauthor Yann LeCun about AI dealing in concepts before words. Hope I get that right, for I frankly needed AI to explain it to me.
https://arxiv.org/pdf/2512.10942

Jeff Jarvis
@jeffjarvis@mastodon.social replied · last month

I've been debating turning this weekly tour of AI preprints into a newsletter. Should I?

Anders Thoresson
@anders@thoresson.social replied · last month

@jeffjarvis Yes, please.

Dyami
@dyami@twit.social replied · last month

@jeffjarvis yes
