One example widely shared on social media, and which Willison replicated, asked #Grok to comment on the conflict in the #MiddleEast. The prompt made no mention of #Musk, but the #chatbot looked for his guidance anyway.

As a so-called reasoning model, much like those made by rivals #OpenAI and #Anthropic, #Grok4 shows its “thinking” as it goes through the steps of processing a question and coming up with an answer.

#tech #MediaLiteracy #bias

#McDonald's: the staggering security flaws in its AI hiring system

Two researchers got into the backend of the #chatbot the chain uses in the United States to screen job applicants with remarkable ease, finding the personal data of millions of people

https://www.wired.it/article/mcdonalds-falle-sicurezza-sistema-ai-assunzioni-dati-personali-candidati/

@aitech

@thisismissem

So

Jumped, or was pushed?

mhmm...

"New York CNN —

#LindaYaccarino is stepping down as CEO of #X after two years leading #ElonMusk’s social media company.

#Yaccarino’s departure comes one day after the company’s #Grok #chatbot began pushing antisemitic tropes in responses to users. It’s not clear that the events were connected."

I mean seriously… what did they expect from an AI owned by a sieg-heiling Nazi?

“The year is 2025, and an AI model belonging to the richest man in the world has turned into a neo-Nazi.”

For the media’s next trick, they will be surprised when the new political party owned by a Nazi billionaire calls itself a “workers party” while hating communists and socialists alike.

Atlantic: https://www.theatlantic.com/technology/archive/2025/07/grok-anti-semitic-tweets/683463/

Archive link: https://archive.is/2025.07.09-012609/https://www.theatlantic.com/technology/archive/2025/07/grok-anti-semitic-tweets/683463/
#grok #musk #AI #chatbot #twitter #X

This is obviously bad from #whatsapp, but the way the journalist describes what the chatbot does, as if it had intentions, is pretty bad too.

"It was the beginning of a bizarre exchange of the kind more and more people are having with AI systems, in which chatbots try to negotiate their way out of trouble, deflect attention from their mistakes and contradict themselves, all in an attempt to continue to appear useful."

No, the chatbot isn't "trying to negotiate", and it is not "attempting to appear useful". It's a program that follows programming rules to output something that looks like English language. It doesn't have desires or intentions, and it cannot lie because it doesn't know what truth is.

‘It’s terrifying’: WhatsApp AI helper mistakenly shares user’s number
https://www.theguardian.com/technology/2025/jun/18/whatsapp-ai-helper-mistakenly-shares-users-number?CMP=Share_AndroidApp_Other

#genAI #ChatBot #TheGuardian