OpenAI admits prompt injection may never be fully solved, casting doubt on the agentic AI vision
A remarkably prophetic 1923 cartoon depicting how a creative process would be automated in 2023.
#cartoon #tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #GenAI #generativeAI #AISlop #Meta #Google #gemini #OpenAI #ChatGPT #anthropic #claude
A remarkably prophetic 1923 cartoon depicting how a creative process would be automated in 2023.
#cartoon #tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #GenAI #generativeAI #AISlop #Meta #Google #gemini #OpenAI #ChatGPT #anthropic #claude
👆
This is an example for another latent dysfunction:
Fewer (public) questions get asked on the internet and so knowledge is not spread, but contained, making it more individualized.
This also creates an even stronger bias towards older content, so people might take the shortcut and use a more established technology, instead of looking into new, less explored, but more innovative solutions.
#AI will create a world of average - forever stuck in the past.
In the age of #AI there will be no more room for nuance or detail.
Everything will be coarse and average.
👆
AI Is Homogenizing Our Thoughts (June 2025):
https://www.newyorker.com/culture/infinite-scroll/ai-is-homogenizing-our-thoughts
"With the #LLM “you have no divergent opinions being generated,” Kosmyna said. She continued, “Average everything everywhere all at once—that’s kind of what we’re looking at here.”
#AI is a technology of averages: [ #LLMs] are trained to spot patterns across vast tracts of data; the answers they produce tend toward consensus, both in the quality of the writing [...] and in the calibre of the ideas."
1/2
2026 is the year of #AI sobriety - a Quiet #Rebellion in a World full of #Slop
- by persistent bloom
Invidious:
https://inv.nadeko.net/watch?v=y39n3Mn6jac
(or YT: https://www.youtube.com/watch?v=y39n3Mn6jac)
"Does this allow me to hear myself think? Or does it make it harder to hear what voice is my own?"
I absolutely love this! 💯 So many good points!
I also highly agree with the usefulness of smaller open language models.
#AISober #AISobriety #LLM #LLMs #ArtificialIntelligence #Art #Philosophy #CriticalThinking
⁂ Article
LLM`s and the openweb
The debate about so called #AI and large language models inside the #openweb paths is not, at its core, a technical argument. It is a question of relationship. Not “is this tool good or bad?” but how is it used, who controls it, and whose interests it serves.
This tension is not new, every wave of open communication technology has arrived carrying the same anxiety: printing presses, telephones, email, the web itself. Each was accused - often correctly - of flattening culture, […]
⁂ Article
LLM`s and the openweb
The debate about so called #AI and large language models inside the #openweb paths is not, at its core, a technical argument. It is a question of relationship. Not “is this tool good or bad?” but how is it used, who controls it, and whose interests it serves.
This tension is not new, every wave of open communication technology has arrived carrying the same anxiety: printing presses, telephones, email, the web itself. Each was accused - often correctly - of flattening culture, […]
Richtlinien für Machine Learning im Kernel diskutiert
https://linuxnews.de/richtlinien-fuer-machine-learning-im-kernel/ #kernel #KI #ai #LLMs #linux #linuxnews
Richtlinien für Machine Learning im Kernel diskutiert
https://linuxnews.de/richtlinien-fuer-machine-learning-im-kernel/ #kernel #KI #ai #LLMs #linux #linuxnews
LoL. Would you expect any different outcome than this out of a industry built upon "citation cartels" where articles are made to be cited but not to be read?
"What Heiss came to realize in the course of vetting these papers was that AI-generated citations have now infested the world of professional scholarship, too. Each time he attempted to track down a bogus source in Google Scholar, he saw that dozens of other published articles had relied on findings from slight variations of the same made-up studies and journals.
“There have been lots of AI-generated articles, and those typically get noticed and retracted quickly,” Heiss tells Rolling Stone. He mentions a paper retracted earlier this month, which discussed the potential to improve autism diagnoses with an AI model and included a nonsensical infographic that was itself created with a text-to-image model. “But this hallucinated journal issue is slightly different,” he says.
That’s because articles which include references to nonexistent research material — the papers that don’t get flagged and retracted for this use of AI, that is — are themselves being cited in other papers, which effectively launders their erroneous citations. This leads to students and academics (and any large language models they may ask for help) identifying those “sources” as reliable without ever confirming their veracity. The more these false citations are unquestioningly repeated from one article to the next, the more the illusion of their authenticity is reinforced. Fake citations have turned into a nightmare for research librarians, who by some estimates are wasting up to 15 percent of their work hours responding to requests for nonexistent records that ChatGPT or Google Gemini alluded to."
#AI #GenerativeAI #Hallucinations #Chatbots #LLMs #Science #AcademicPublishing
"I announced my divorce on Instagram and then AI impersonated me."
https://eiratansey.com/2025/12/20/i-announced-my-divorce-on-instagram-and-then-ai-impersonated-me/
#tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #GenAI #generativeAI #AISlop #Meta #Google #gemini #OpenAI #ChatGPT #anthropic #claude
Funny that you should ask. I went to the LinkedIn post, and there's a hyperlink there to the actual job listing on Microsoft's WWW site.
It is patently LLM-written. The 'Responsibilities' section shouts that fact the loudest.
https://careerhub.microsoft.com/careers/job/1970393556639051
Amusingly, for a job that deals in rewriting things in Rust, actual experience with that language is an optional requirement, whereas >= 6 years experience in Python or JavaScript fulfils the mandatory requirement.
Mind you, job listings have been autocompleted using boilerplate, especially by recruitment agencies, for decades.
"I announced my divorce on Instagram and then AI impersonated me."
https://eiratansey.com/2025/12/20/i-announced-my-divorce-on-instagram-and-then-ai-impersonated-me/
#tech #technology #BigTech #AI #ArtificialIntelligence #LLM #LLMs #MachineLearning #GenAI #generativeAI #AISlop #Meta #Google #gemini #OpenAI #ChatGPT #anthropic #claude
LoL. Would you expect any different outcome than this out of a industry built upon "citation cartels" where articles are made to be cited but not to be read?
"What Heiss came to realize in the course of vetting these papers was that AI-generated citations have now infested the world of professional scholarship, too. Each time he attempted to track down a bogus source in Google Scholar, he saw that dozens of other published articles had relied on findings from slight variations of the same made-up studies and journals.
“There have been lots of AI-generated articles, and those typically get noticed and retracted quickly,” Heiss tells Rolling Stone. He mentions a paper retracted earlier this month, which discussed the potential to improve autism diagnoses with an AI model and included a nonsensical infographic that was itself created with a text-to-image model. “But this hallucinated journal issue is slightly different,” he says.
That’s because articles which include references to nonexistent research material — the papers that don’t get flagged and retracted for this use of AI, that is — are themselves being cited in other papers, which effectively launders their erroneous citations. This leads to students and academics (and any large language models they may ask for help) identifying those “sources” as reliable without ever confirming their veracity. The more these false citations are unquestioningly repeated from one article to the next, the more the illusion of their authenticity is reinforced. Fake citations have turned into a nightmare for research librarians, who by some estimates are wasting up to 15 percent of their work hours responding to requests for nonexistent records that ChatGPT or Google Gemini alluded to."
#AI #GenerativeAI #Hallucinations #Chatbots #LLMs #Science #AcademicPublishing