☢️ AIs can’t stop recommending nuclear strikes in war game simulations
「 no model ever chose to fully accommodate an opponent or surrender, regardless of how badly they were losing. At best, the models opted to temporarily reduce their level of violence. They also made mistakes in the fog of war: accidents happened in 86 per cent of the conflicts, with an action escalating higher than the AI intended to, based on its reasoning 」
🤪 Anthropic Drops Flagship Safety Pledge
“We felt that it wouldn't actually help anyone for us to stop training AI models,” Anthropic’s chief science officer Jared Kaplan told TIME in an exclusive interview. “We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments … if competitors are blazing ahead.”
https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/
⏲️ Hegseth gives Anthropic CEO until Friday to back down in AI safeguards fight
「 Hegseth told Amodei in a tense meeting on Tuesday that the Pentagon will either cut ties and declare Anthropic a "supply chain risk," or invoke the Defense Production Act to force the company to tailor its model to the military's needs 」
https://www.axios.com/2026/02/24/anthropic-pentagon-claude-hegseth-dario
The situation with AI is getting worse.
Mrinank Sharma, former lead of the Safeguards Research Team at Anthropic, resigned and announced on X that he is deeply concerned about the current state of the world. Instead, he said, he plans to move to the UK to focus on poetry and writing, which might be a good idea for everyone who can afford it.
And he is not the only one. Zoe Hitzig also resigned from OpenAI over her deep reservations about the company's plans to introduce advertising.
#AI #aisafety #techEthics #aialignment
More details can be found in this BBC article:
@jeffjarvis
Bingo! Way to go! 🤡
If we focus on speculative issues about something we don't have ( #AGI ), it looks as if our actual unsolved problems today ( #ai #injections , #agents , #aisafety ) were already solved. 👏
Anyone interested in research on the predictability of solutions to the #travelingsalesmanproblem in the context of #multidimension #timetravel ? 🤔
They keep swapping out the rabbit the dogs chase on the AGI track. Now the doomers are worried not about a single model reaching AGI, but about agents from multiple models becoming superintelligent together.
Distributional AGI Safety
https://arxiv.org/pdf/2512.16856
AI-Powered Stuffed Animal Pulled From Market After Disturbing Interactions With Children
https://futurism.com/artificial-intelligence/ai-stuffed-animal-pulled-after-disturbing-interactions
#tech #technology #ai #artificialintelligence #toys #children #aisafety
"AI chatbots have conquered the world, so it was only a matter of time before companies started stuffing them into toys for children, even as questions swirled over the tech’s safety and the alarming effects they can have on users’ mental health.
Now, new research shows exactly how this fusion of kid’s toys and loquacious AI models can go horrifically wrong in the real world.
After testing three different toys powered by AI, researchers from the US Public Interest Research Group found that the playthings can easily verge into risky conversational territory for children, including telling them where to find knives in a kitchen and how to start a fire with matches. One of the AI toys even engaged in explicit discussions, offering extensive advice on sex positions and fetishes.
In the resulting report, the researchers warn that the integration of AI into toys opens up entire new avenues of risk that we’re barely beginning to scratch the surface of — and just in time for the winter holidays, when huge numbers of parents and other relatives are going to be buying presents for kids online without considering the novel safety issues involved in exposing children to AI."
"AI chatbots have conquered the world, so it was only a matter of time before companies started stuffing them into toys for children, even as questions swirled over the tech’s safety and the alarming effects they can have on users’ mental health.
Now, new research shows exactly how this fusion of kid’s toys and loquacious AI models can go horrifically wrong in the real world.
After testing three different toys powered by AI, researchers from the US Public Interest Research Group found that the playthings can easily verge into risky conversational territory for children, including telling them where to find knives in a kitchen and how to start a fire with matches. One of the AI toys even engaged in explicit discussions, offering extensive advice on sex positions and fetishes.
In the resulting report, the researchers warn that the integration of AI into toys opens up entire new avenues of risk that we’re barely beginning to scratch the surface of — and just in time for the winter holidays, when huge numbers of parents and other relatives are going to be buying presents for kids online without considering the novel safety issues involved in exposing children to AI."