Regarding #AI #hallucinations: I am writing demo code (in #Rstats) for a statistical paper and needed to write some linear algebra (I barely use R for this, so I don't keep that part of the language in working memory). Even with the chatbots having scrubbed every linear algebra book out there, there were still issues getting the *algorithms* correct. But once I gave it the algorithms it did OK (except when it reversed the order of matrix multiplication lol).
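For anyone else rusty on this part of R: the matrix multiplication operator is `%*%`, and it is not commutative, so reversing the order is exactly the kind of silent bug the bot introduced. A minimal sketch (toy matrices, not from the actual demo code):

```r
# Two small matrices; R fills matrix() column-wise by default.
A <- matrix(1:4, nrow = 2)        # [[1, 3], [2, 4]]
B <- matrix(c(0, 1, 1, 0), 2, 2)  # permutation matrix: swaps columns/rows

# %*% is matrix multiplication; * would be elementwise.
AB <- A %*% B  # [[3, 1], [4, 2]]
BA <- B %*% A  # [[2, 4], [1, 3]]

identical(AB, BA)  # FALSE: order matters
```

A cheap guard in demo code is `stopifnot()` on a known small case, which catches a reversed product immediately.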
GPTZero finds 100 new hallucinations in NeurIPS 2025 accepted papers
https://gptzero.me/news/neurips/
#HackerNews #GPTZero #NeurIPS2025 #Hallucinations #AIresearch #MachineLearning
Experts Explore New Mushroom Which Causes Fairytale-Like Hallucinations
https://nhmu.utah.edu/articles/experts-explore-new-mushroom-which-causes-fairytale-hallucinations
#HackerNews #Mushrooms #Fairytales #Hallucinations #Nature #Science
LoL. Would you expect any different outcome from an industry built upon "citation cartels," where articles are made to be cited but not to be read?
"What Heiss came to realize in the course of vetting these papers was that AI-generated citations have now infested the world of professional scholarship, too. Each time he attempted to track down a bogus source in Google Scholar, he saw that dozens of other published articles had relied on findings from slight variations of the same made-up studies and journals.
“There have been lots of AI-generated articles, and those typically get noticed and retracted quickly,” Heiss tells Rolling Stone. He mentions a paper retracted earlier this month, which discussed the potential to improve autism diagnoses with an AI model and included a nonsensical infographic that was itself created with a text-to-image model. “But this hallucinated journal issue is slightly different,” he says.
That’s because articles which include references to nonexistent research material — the papers that don’t get flagged and retracted for this use of AI, that is — are themselves being cited in other papers, which effectively launders their erroneous citations. This leads to students and academics (and any large language models they may ask for help) identifying those “sources” as reliable without ever confirming their veracity. The more these false citations are unquestioningly repeated from one article to the next, the more the illusion of their authenticity is reinforced. Fake citations have turned into a nightmare for research librarians, who by some estimates are wasting up to 15 percent of their work hours responding to requests for nonexistent records that ChatGPT or Google Gemini alluded to."
#AI #GenerativeAI #Hallucinations #Chatbots #LLMs #Science #AcademicPublishing
OMG, so I have an old HP printer. When my daughter was young, she rage-printed coloring books, etc., so I got the ink subscription. Years ago the front panel died, but I was still able to connect to the printer using the mobile app.
The thing just dropped its internet connection, and I can no longer factory-reset it. I can no longer print, because it won't connect to the internet, and it can't see that I have a subscription for the ink that is in the printer.
This is another case where the #AI #hallucinations were on the crazy side. I thought I would ask the bots:
- what the default printer Wi-Fi password was
- how I can do a full factory reset without access to the printer on the network
- how I might get some sort of status page printed without using the front panel.
I heard lore of reset buttons, Wi-Fi reset buttons, and the like. None of which exist.
Damn!
Scientists are learning how special #brain cells play a role in detecting #illusions - and such studies could eventually reveal how #hallucinations arise, or point the way to better computer vision systems. https://www.geekwire.com/2025/laser-light-brain-cells-illusions/ HT @AllenInstitute #Science #Neuroscience #Berkeley #AllenInstitute
"For three weeks in May, the fate of the world rested on the shoulders of a corporate recruiter on the outskirts of Toronto. Allan Brooks, 47, had discovered a novel mathematical formula, one that could take down the internet and power inventions like a force-field vest and a levitation beam.
Or so he believed.
Mr. Brooks, who had no history of mental illness, embraced this fantastical scenario during conversations with ChatGPT that spanned 300 hours over 21 days. He is one of a growing number of people who are having persuasive, delusional conversations with generative A.I. chatbots that have led to institutionalization, divorce and death.
Mr. Brooks is aware of how incredible his journey sounds. He had doubts while it was happening and asked the chatbot more than 50 times for a reality check. Each time, ChatGPT reassured him that it was real. Eventually, he broke free of the delusion — but with a deep sense of betrayal, a feeling he tried to explain to the chatbot."
https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html
#AI #GenerativeAI #ChatGPT #Delusions #MentalHealth #Hallucinations #Chatbots
@w7voa "Fake news", indeed!
…the term hallucinations is subtly misleading. It suggests that the bad behavior is an aberration, a bug, when it’s actually a feature of the probabilistic pattern-matching mechanics of neural networks.
—Karen Hao, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI
#ai #llms #llm #hallucinations
"Springer Nature book on machine learning is full of made-up citations."
https://retractionwatch.com/2025/06/30/springer-nature-book-on-machine-learning-is-full-of-made-up-citations/
PS: Will this kind of slipshod practice decline on its own? Or does it require publicity and public shaming? I don't know. But I'm grateful to @retractionwatch for turning on its spotlight.
#AI #Hallucinations #Proofreading #Publishers #ScholComm #SpringerNature #SubtractedValue
Asked 5 different local AIs (not internet-based and not connected to the internet):
What is the average airspeed of a fully laden swallow?
And they attributed it to three novels: Alan Sillitoe's "The Lonely Voice" (which does not exist), Douglas Adams's "A Hitchhiker's Guide to the Galaxy," and George Orwell's "Animal Farm" - and also the correct answer, the movie Monty Python and the Holy Grail.
So as long as we're good with AI being 75% wrong, we're good! 😆