Seems this vast planet-heating deception apparatus, built atop the rampant theft of human expression, got the better of Dawkins.
The fact Turing first called his test the 'Imitation Game' is aging well.
Dawkins is bioslop
@JulianOliver Turing thought someday a machine might pass his test. He did not anticipate the sheer number of humans that would fail it.
@JulianOliver This related article by @mattsheffield is very good too:
https://flux.community/matthew-sheffield/2026/05/richard-dawkins-and-the-claude-delusion/
"Dawkins extending more humanity to a language model than he does toward Muslims or trans people is hardly a surprise based on his personal and political views. But even if he had not moved rightward in his senescence, when you consider Dawkins’s scientific views about what minds are and how they function, seeing him flirting with a chatbot is completely expected."
@toxi @JulianOliver @mattsheffield
Has anyone mentioned yet that there's a pretty good chance that Dawkins is #neurodiverse, and that we neurodiverse people are more susceptible to AI?
@crowgirl Asking, because it's a really important part of the conversation that seems to be missing here.
It's like piling on someone for being homeless, taking drugs, whatever...
We need a 12 step program to get off AI that *works*, and part of that is education, not shaming.
@gusseting @toxi @JulianOliver @mattsheffield
Spread the news that we are organizing an AA-style group for chatbot addicts!
@crowgirl @gusseting @toxi @mattsheffield An excellent ground-level intervention and contribution. Tactical media meets mutual aid meets Luddite resistance. Will be sharing with students.
@gusseting @toxi @mattsheffield @crowgirl I have not heard of this would-be relation. Might you have some research to share? Anecdotally, I am largely and happily surrounded by ND minds and among them is a generally strong critique of AI and wariness of LLM chatbots. For the depressive and grieving however, I have read of high risk of emotional addiction to such software.
@JulianOliver @toxi @mattsheffield @crowgirl
I've personally been kicked out of an autistic group for being against AI, and it was infiltrating *everything*.
Talks on food preparation and ageing, or you'd join a Zoom call to participate and find it running AI transcription... 🤯
I don't have the research to hand - I read it months ago and only remember the gist - but hopefully this gives you some idea of the issue:
AI is basically fraud, yes? There you go:
https://www.friendsagainstscams.org.uk/news-and-updates/neurodivergent-you-are-more-likely-to-be-a-fraud-victim
We will need to work on bolstering cognitive and ontological immunity to this devourer of minds and meaning, across our communities.
Technical literacy, right down to 'what is an LLM', will be intrinsic to raise defences against mass computational hypnosis.
So it follows that any Luddite resistance will need to be sufficiently technically informed, to help others understand that these are predatory, mind-hunting software products.
'AI' as an ethically hosted, data sovereign tool for cancer screening, analysing radio astronomy datasets, doing tax summaries, OK.
AI as 'software is a person' or 'god texts me on my phone', no.
@JulianOliver I have been thinking that the term "AI" itself is not salvageable and is part of the problem. It has been part of a deliberate marketing and even packaging strategy (multimodal models etc.) to imply that the "eloquence" of an LLM is the same emergent super-mind that can classify, analyse, etc. any form of data. The trick is to imply that discrete and totally unrelated algorithms are all manifestations of one and the same 'thing' that keeps learning and maturing.
@JulianOliver completely agree. Software mustn't imitate humans. Personally, I think a human must always be made aware they are interacting with software. Anything else is predatory.
The whole "giggling", "coughing", "emotion", etc. makes my skin crawl. For some reason, even software outputting emojis falls into the 'uncanny valley' for me.
@petrikas Well said, strongly agree. Chatbots not declaring themselves software should fall under the same regulatory oversight and penalties as counterfeit. It is deception for profit, plain and simple.
We nerds may chuckle at people thinking software is conscious, but if one in three chatbot users (as cited in the article) believe or have believed they're talking to a sentient being, we have a problem emerging at a societal scale. Soon it will be half, perhaps even most. Shortly after, we'll be dragging and dropping human rights tenets onto the software products of tech giants. That will make them very hard to regulate, to challenge.
@JulianOliver Trees are conscious, sentient, super intelligent beings, and very beautiful too - especially compared with Datacenters. Yet many don't believe it.
Another outcome of this exploitation of our innate and ancient animism(s) is mass societal division. I personally see this already beginning, and yet I disagree with this philosopher's assumption that BigAI couldn't care less about the sentience 'debate' (if you could call it that). Rather, they have much to gain from a public rallying around such a fairytale, from legislation and regulatory protections, to a haplessly entranced - and dependent - consumer market.