Essential discussions today with civil society representatives on embedding #HumanRights in GenAI governance. Their voices are critical for shaping the future of ethical #AI. #AIEthics
@Em0nM4stodon
Why does #xAI think their response, "legacy media lies", applies to this case? The report came from the child's mother, who witnessed the incident herself. The media, legacy or otherwise, was not involved.
Politicians:
Terrified that citizens might be sending private messages they cannot read on Signal -> Sudden panic about "protecting the children" 🙃
Also politicians:
Grok AI chatbot collects data and asks a 12-year-old for nudes -> AI is innovation! We should invest billions of taxpayers' money in it! 💰💰💰
:blobcat_thisisfine:
https://www.cbc.ca/news/investigates/tesla-grok-mom-9.6956930
AI benchmarks are a bad joke – and LLM makers are the ones laughing
https://www.theregister.com/2025/11/07/measuring_ai_models_hampered_by/
If an AI is told to "follow" a certain academic paradigm, will it rate papers differently? 🤔 A new study by Mike Thelwall et al. shows: yes. Across 8 paradigm pairs and 1,490 papers, #ChatGPT scored higher when aligned and lower when opposed, quietly penalizing ideas outside its frame:
📄 https://arxiv.org/abs/2510.22426
To me, it’s a warning - #AI trained on dominant views can undermine pluralism and create a technical illusion of one truth.
🚨 Ex-OpenAI CTO Mira Murati just launched Tinker, a new service from Thinking Machines Lab.
Tinker strips AI training down to 4 simple functions — you focus on data + algorithms, it handles the GPU chaos.
Is this the Kubernetes moment for AI training?
https://dropletdrift.com/ex-openai-cto-mira-murati-launches-tinker-to-simplify-ai-model-training/
#AI #ArtificialIntelligence #MachineLearning #DeepLearning #AIresearch #LLM #OpenSource #Tech #Innovation #DataScience #NeuralNetworks #FutureOfAI #AIcommunity #AIethics #Startups #OpenAI #Developers #Research #Computing
"In the end then, the silence of the AI Ethics movement towards its burgeoning use in the military is unsurprising. The movement doesn’t say anything controversial to Washington (including the military industrial complex), because that’s a source of money, as well as an invaluable stamp of importance. It’s fine—even encouraged—to make veiled digs at China, Russia or North Korea, at the “bad actors” it sometimes refers to, but otherwise the industry avoids anything “political.” It also mostly frames the issues as centered on LLMs, because it wants to paint the tech products of its leaders as pivotally important in all respects. This then makes it a bit awkward to bring in military applications because it’s pretty obvious that LLMs have little current military value.
I personally came to AI research nearly ten years ago, from a deep curiosity about the nature of the mind and the self. At that time it was still a somewhat fringe subject, and as the field exploded into public awareness, I’ve been horrified to watch it intertwine with the most powerful and destructive systems on the planet, including the military-industrial complex, and, potentially, the outbreak of the next major global conflicts. To find the right way forward, we need to think much more deeply about where we’re going and what our values are. We need an authentic AI Ethics movement that questions the forces and assumptions shaping current development, rather than imbibing the views passed down from a few, often misguided, leaders."
https://www.currentaffairs.org/news/ai-ethics-discourse-ignores-its-deadliest-use-war
Denmark 🇩🇰 just passed a groundbreaking law: citizens now own the copyright to their own face, voice, and body.
This is a major win against AI deepfakes and unauthorized digital identity use. The digital self is officially personal property! 💥
#Denmark #Law #DigitalRights #Deepfakes #AIethics #Privacy #PrivacyMatters #AIRegulation #AI #ArtificialIntelligence #DataOwnership #DigitalIdentity #AIforGood #FacialRecognition #Copyright #EthicalAI #TechNews #IdentityTheft #PersonalData #LegalTech
I was talking to someone yesterday (let's call them A), and they had another "AI" experience that I thought might happen but hadn't heard of before.
They were interacting with an organization and, upon asking a specific question, got a very specific answer. Weeks later, the organization claimed it had never said what they said. When A showed the email as proof, the defense was: oh, we're an international organization and it's busy right now, so the person who sent the original mail probably had an LLM write it, and it made shit up. The reply literally ended with: "Let's just blame the robot ;)".
(Edit: I did read the email, and it did not read like something an LLM wrote. I think we're seeing "the LLM did it" emerge as a way to cover up mistakes.)
LLMs as diffusers of responsibility in corporate environments were quite obviously going to be a key sales pitch, but it was new to me that people would use those lines in direct communication.
No. LLMs don't do anything. Hold people responsible for what they write and email, however they produce their text, and this problem goes away.
Get your lawyer informed, assuming your organisation has one.
#aiethics cf https://joanna-bryson.blogspot.com/2025/02/generative-ai-use-and-human-agency.html
I built a free tool to help students compare the energy/water use of AI tasks—like a 3-sec video gen or 500-word GPT reply—to everyday ones like Netflix, Google, or cloud storage. Try it at https://what-uses-more.com
Adjust variables like prompt complexity or the energy source and climate of local data centers to see how usage shifts. All data from sources in a public Google Sheet. Feedback and additional sources welcome!
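The kind of comparison the tool makes is just ratio arithmetic over per-task energy estimates. Here is a minimal sketch of that idea; all Wh figures below are illustrative placeholders I made up for the example, not the site's actual data, and the task names are hypothetical:

```python
# Illustrative energy comparison, in the spirit of what-uses-more.com.
# Every figure here is an assumed placeholder, NOT the site's data.
WH_PER_TASK = {
    "500-word GPT reply": 3.0,          # assumed Wh per response
    "3-sec AI video generation": 30.0,  # assumed Wh per clip
    "1 hour of streaming video": 75.0,  # assumed Wh per hour
    "1 web search": 0.3,                # assumed Wh per query
}

def compare(task_a: str, task_b: str) -> float:
    """Return how many instances of task_b fit in task_a's energy budget."""
    return WH_PER_TASK[task_a] / WH_PER_TASK[task_b]

if __name__ == "__main__":
    ratio = compare("3-sec AI video generation", "500-word GPT reply")
    print(f"One video generation uses as much energy as {ratio:.0f} GPT replies")
```

The real tool layers on the variables mentioned above (prompt complexity, data-center energy source and climate), which would simply scale these per-task figures before the division.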
#AIinEducation #AIliteracy #AIethics #Environment #Climate #Sustainability
We have moved to KIT's own new mastodon server: https://social.kit.edu/@DiTraRe
This is the DiTraRe Leibniz Science Campus on "Digital Transformation of Research" tooting.
#Introduction #AI #AIethics #AIact #generativeAI #research #science #humanities #digitalisation #ethics #chemistry #neuhier @fiz_karlsruhe @KIT_Karlsruhe @ITAS_KIT @Feelix @AnnaJacyszyn @sourisnumerique @GenAsefa @enorouzi @joerg @fizise @lysander07 #dh #YoMigroaMastodon