Just got asked to sign an open letter to OpenAI asking for transparency on their announced restructuring. You’ll hear about it soon enough, no doubt, given some “big names” are attached to it.

While I agree with the premise of the letter, there’s no way I can sign it after seeing the level of cluelessness and the harmful assumptions regurgitated in it. It’s depressing to see that those supposedly pushing back against Big Tech’s AI grift have themselves accepted the core myths of this bullshit.

It starts:

“We write to you as the legal beneficiaries of your charitable mission.”

What charitable mission? Are you idiots? You’re talking to a ~$4B organisation.

“Your current structure includes important safeguards designed to ensure your technology serves humanity rather than merely generating profit…”

Oh, really, that’s news to me. I guess I must be missing how their current bullshit serves humanity.

“However, you have proposed a significant corporate restructuring that appears to weaken or eliminate many of these protections, and the public deserves to know the details.”

Ah, so they’re removing the smoke and mirrors, is that it?

Then a bunch of questions, including:

“Does OpenAI plan to commercialize AGI once developed?”

You do understand that there is NO path that leads from today’s mass bullshit factories that are LLMs to AGI, right? None. Zero. Nada. You’re playing right into their hands by taking this as given.

“We believe your response will help restore trust and establish whether OpenAI remains committed to its founding principles, or whether it is prioritizing private interests over its public mission.”

What trust? Why exactly did you trust these assholes to begin with? Was it the asshat billionaire founder? How bloody naïve can you be?

“The stakes could not be higher. The decisions you make about governance, profit distribution, and accountability will shape not only OpenAI's future but also the future of society at large.”

Please, sirs, be kind.

No, fuck you. Why are we pleading? Burn this shit to the ground and dance on its smoldering remains.

“We look forward to your response and to working together to ensure AGI truly benefits everyone.”

🤦‍♂️

Yeah, no, I won’t be signing this. If this is what “resistance” looks like, we’re well and truly fucked.

AI, AGI: Just hype, and what ideology lies behind it? A conversation with Prof. Rainer Mühlhoff

https://www.l-iz.de/Topposts/2025/07/ki-agi-nur-ein-hype-und-welche-ideologie-steckt-dahinter-gespraech-mit-prof-rainer-muehlhoff-teil-1-630114

> With his book „Künstliche Intelligenz und der neue Faschismus“ (“Artificial Intelligence and the New Fascism”), he has worked out a detailed analysis of the societal upheavals to be expected/feared should the apologists of AI’s promise of salvation take power. (Part 1)

#AI #KI #agi #Faschismus

1/2

AI and the categorisation of people by usefulness: a conversation with Professor Rainer Mühlhoff – Part 2

https://www.l-iz.de/bildung/forschung/2025/07/ki-und-die-kategorisierung-von-menschen-nach-nutzlichkeit-gesprach-professor-rainer-muhlhoff-teil-2-630118

> "Wichtig ist zunächst, KI ist nicht einfach nur eine Technologie, also nicht nur ein technischer Apparat. Sondern KI ist auch ein Hype und eine Ideologie, eine Bereitschaft an die Intelligenzfähigkeit von Maschinen zu glauben. Diese Bereitschaft wird gesellschaftlich gerade sehr gut genährt. KI ist ein Hype in der Form von Investmentkapital, politischem Willen und relativ blindem Gehorsam gegenüber der Industrie, die sogar so etwas wie den Grundrechtsschutz unter das Rad wirft."

#KI #AI #AGI #Faschismus

2/2

@lawfare 🧵Scaling Laws — Lawfare podcast

“Mollick discusses the transformative potential of AI in various fields, particularly education and medicine, as well as the need for empirical research to understand AI's impact, the importance of adapting teaching methods, and the challenges of cognitive de-skilling.”

Includes an extended discussion of law school teaching. The Apple Podcasts version has a generated transcript.

#lawfare #ScalingLaws #podcast #AGI #AI #LLM #pedagogy

https://podcasts.apple.com/us/podcast/scaling-laws/id1607949880?i=1000715411530

Mike Olson boosted

Gary Marcus is onto something here. Maybe true AGI isn’t so impossible to reach after all; probably not in the near future, but likely within 20 years. A toy sketch of the neurosymbolic premise follows the quote below.

"For all the efforts that OpenAI and other leaders of deep learning, such as Geoffrey Hinton and Yann LeCun, have put into running neurosymbolic AI, and me personally, down over the last decade, the cutting edge is finally, if quietly and without public acknowledgement, tilting towards neurosymbolic AI.

This essay explains what neurosymbolic AI is, why you should believe it, how deep learning advocates long fought against it, and how in 2025, OpenAI and xAI have accidentally vindicated it.

And it is about why, in 2025, neurosymbolic AI has emerged as the team to beat.

It is also an essay about sociology.

The essential premise of neurosymbolic AI is this: the two most common approaches to AI, neural networks and classical symbolic AI, have complementary strengths and weaknesses. Neural networks are good at learning but weak at generalization; symbolic systems are good at generalization, but not at learning."

https://garymarcus.substack.com/p/how-o3-and-grok-4-accidentally-vindicated

#AI #NeuralNetworks #DeepLearning #SymbolicAI #NeuroSymbolicAI #AGI
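
To make the quoted premise concrete, here is a minimal sketch, entirely my own toy illustration and not code from Marcus’s essay: a “neural” component (a nearest-centroid classifier standing in for a trained network, with hypothetical digit prototypes) learns a noisy perception-to-symbol mapping, and a symbolic component (exact arithmetic) then generalizes perfectly to combinations the perceptual side never saw together.

```python
import random

# Neurosymbolic toy: learned perception -> symbols -> exact symbolic rule.
# My own illustration of the premise quoted above, not from the essay.
random.seed(0)

# Hypothetical 4-d prototypes standing in for learned digit embeddings.
PROTOTYPES = {d: [float(b) for b in f"{d:04b}"] for d in range(10)}

def noisy_example(digit):
    """Simulate a perceptual input: the digit's prototype plus Gaussian noise."""
    return [x + random.gauss(0, 0.1) for x in PROTOTYPES[digit]]

def neural_classify(vec):
    """Nearest-centroid stand-in for a trained neural classifier."""
    return min(PROTOTYPES,
               key=lambda d: sum((a - b) ** 2 for a, b in zip(vec, PROTOTYPES[d])))

def symbolic_add(sym_a, sym_b):
    """Symbolic rule: exact addition over the extracted symbols."""
    return sym_a + sym_b

# Perception -> symbols -> rule: the sum is exact even for unseen pairs.
print(symbolic_add(neural_classify(noisy_example(7)),
                   neural_classify(noisy_example(5))))  # -> 12
```

The division of labour is the point: the statistical part only has to get the symbols right, and the symbolic part generalizes by construction.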

…the empires of AI won’t give up their power easily. The rest of us will need to wrest back control of this technology’s future. …we can all resist the narratives that OpenAI and the AI industry have told us to hide the mounting social and environmental costs of this technology behind an elusive vision of progress.
—Karen Hao, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI
#ai #agi #openai #samaltman #altman

Automated planning and scheduling

https://en.wikipedia.org/wiki/Automated_planning_and_scheduling

Satplan

https://en.wikipedia.org/wiki/Satplan

"Satplan (better known as Planning as Satisfiability) is a method for automated planning. It converts the planning problem instance into an instance of the Boolean satisfiability problem (SAT), which is then solved using a method for establishing satisfiability such as the DPLL algorithm or WalkSAT"

Fascinating! 🤓
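
To see the idea concretely, here’s a minimal sketch, my own toy example rather than anything from the article: a one-step light-switch planning problem encoded as CNF clauses and checked by brute force. A real Satplan system would unroll the problem to a longer horizon and hand the clauses to an actual solver like DPLL or WalkSAT.

```python
from itertools import product

# Planning as satisfiability, toy instance: one light, one time step.
# The light starts off and must be on at t1; the only action is "toggle".
# DIMACS-style variables: 1 = on@t0, 2 = toggle@t0 (action), 3 = on@t1.
ON0, TOGGLE0, ON1 = 1, 2, 3

clauses = [
    [-ON0],                  # initial state: light off at t0
    [ON1],                   # goal: light on at t1
    [-TOGGLE0, -ON0, -ON1],  # effect: toggle0 -> (on1 <-> not on0) ...
    [-TOGGLE0, ON0, ON1],    # ... encoded as two clauses
    [TOGGLE0, -ON0, ON1],    # frame axiom: no action -> the light ...
    [TOGGLE0, ON0, -ON1],    # ... keeps its value (on1 <-> on0)
]

def solve(clauses, n_vars):
    """Brute-force SAT check; a real system would call DPLL or WalkSAT."""
    for bits in product([False, True], repeat=n_vars):
        assignment = dict(enumerate(bits, start=1))
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return assignment
    return None

model = solve(clauses, n_vars=3)
if model:
    print("plan:", ["toggle"] if model[TOGGLE0] else [])  # -> plan: ['toggle']
```

Any satisfying assignment is read back as a plan: the action variables set to true are the actions to execute.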

2/3

#SAT #AI #ArtificialIntelligence #NoLLM

AIXI

https://en.wikipedia.org/wiki/AIXI

"AIXI /ˈaɪksi/ is a theoretical mathematical formalism for artificial general intelligence. It combines Solomonoff induction with sequential decision theory. AIXI was first proposed by Marcus Hutter in 2000[...]."

"[...]AIXI is incomputable."

3/3

#AI #ArtificialIntelligence #Gödel #AGI #ArtificialGeneralIntelligence