My "fireside chat" with Ben Goertzel at the #BGIsummit in Istanbul
I reclaim the original DISTRIBUTED architecture of the internet:
"CISCO, why not open #Multicast for #SingularityNet #agi?"
And I added off the record: "If CISCO doesn't, Huawei will."
https://www.youtube.com/live/gb5ESFppdD8?si=iHr6wGPYHmwdX4cB&t=11704
10+ years ago I began detoxing myself from ads, especially TV ads. Then, a few years later, when I tried YouTube Premium, I really felt the positive effect.
Now it is mainly when watching live sport on linear TV that I see ads, and wow, how bad they have become. It is scary to now know and really feel the contrast.
I am quite sure that this successful detoxification of my mind is very much helping me avoid falling for the AI hype. I am simply not numbed down enough for #FOMO to work its trickery.
But we also have the race to find the #HolyGrail, which for AI is AGI.
The premise is simple: the game is over then; whoever gets #AGI first will rule the world.
That has roped world politics into the mess, especially between the USA and China.
Thus, we have a perfect storm fueled by tech companies and bad leaders in cahoots, who care about nothing except not letting the other side get there first.
Our only hope is that this #superbubble bursts as soon as possible.
Anyway, here's a very wordy but relevant meme.
What lies behind the race to reach AGI? A bundle of ideologies grouped under the acronym TESCREAL (transhumanism, Extropianism, singularitarianism, cosmism, Rationalism, Effective Altruism, and longtermism). Ideas from a cult that is being handed control of the world.
Original paper by the authors @timnitGebru and Émile P. Torres: https://doi.org/10.5210/fm.v29i4.13636
Spanish translation available on our website: https://arteesetica.org/el-paquete-tescreal/
The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence
On the cult of #AGI in 4 steps: 1) we give it all our online knowledge, 2) we give it all our energy, 3) knowing the world is in flames, we ask it for a solution and 4) the answer is...
That is what @davidrevoy painted in this great comic https://framapiaf.org/@davidrevoy/115180874986726269
Check out his other fun works of art :-)
By Stephen Greenblatt:
We Are Watching a Scientific Superpower Destroy Itself https://www.nytimes.com/2025/09/08/opinion/universities-science-trump-china.html?smid=tw-share
This return to the Dark Ages, with all the thoughtfulness of Attila the Hun, seems to be driven by the arrogance of those who own #AI assets.
The #SiliconValley overlords believe they are a year or two away from disintermediating academics and scientists through the creation of an #AGI. I have my doubts on that front.
#TechnoFeudalism #USPolitics #TechnoFascism #Broligarchy #Technogarchy
Just got asked to sign an open letter to OpenAI asking for transparency on their announced restructuring. You’ll hear about it soon enough, no doubt, given some “big names” are attached to it.
While I agree with the premise of the letter, there’s no way I can sign it after seeing the level of cluelessness and perpetuation of harmful assumptions regurgitated in it. It’s depressing to see those supposedly pushing back against Big Tech’s AI grift having themselves accepted the core myths of this bullshit.
It starts:
“We write to you as the legal beneficiaries of your charitable mission.”
What charitable mission? Are you idiots? You’re talking to a ~$4B organisation.
“Your current structure includes important safeguards designed to ensure your technology serves humanity rather than merely generating profit…”
Oh, really, that’s news to me. I guess I must be missing how their current bullshit serves humanity.
“However, you have proposed a significant corporate restructuring that appears to weaken or eliminate many of these protections, and the public deserves to know the details.”
Ah, so they’re removing the smoke and mirrors, is that it?
Then a bunch of questions, including:
“Does OpenAI plan to commercialize AGI once developed?”
You do understand that there is NO path that leads from today’s mass bullshit factories that are LLMs to AGI, right? None. Zero. Nada. You’re playing right into their hands by taking this as given.
“We believe your response will help restore trust and establish whether OpenAI remains committed to its founding principles, or whether it is prioritizing private interests over its public mission.”
What trust? Why exactly did you trust these assholes to begin with? Was it the asshat billionaire founder? How bloody naïve can you be?
“The stakes could not be higher. The decisions you make about governance, profit distribution, and accountability will shape not only OpenAI's future but also the future of society at large.”
Please, sirs, be kind.
No, fuck you. Why are we pleading? Burn this shit to the ground and dance on its smoldering remains.
“We look forward to your response and to working together to ensure AGI truly benefits everyone.”
🤦‍♂️
Yeah, no, I won’t be signing this. If this is what “resistance” looks like, we’re well and truly fucked.
#AI #AGI #LLMs #OpenAI #openLetter #wtf #getAFuckingClue #doBetter
AI, AGI: just a hype, and what ideology lies behind it? A conversation with Prof. Rainer Mühlhoff
> With his book "Künstliche Intelligenz und der neue Faschismus" ("Artificial Intelligence and the New Fascism"), he has worked out a detailed analysis of the societal upheavals to be expected, and feared, should the apologists of AI's promise of salvation take power. (Part 1)
1/2
AI and the categorization of people by usefulness: a conversation with Professor Rainer Mühlhoff – Part 2
> "What matters first of all is that AI is not simply a technology, not just a technical apparatus. AI is also a hype and an ideology, a readiness to believe in machines' capacity for intelligence. This readiness is currently being nourished very effectively across society. AI is a hype in the form of investment capital, political will, and relatively blind obedience toward an industry that throws even something like the protection of fundamental rights under the bus."
2/2
My 4-month-old kid is not DDoSing Wikipedia right now, nor will they ever do so before learning to speak, read, or write. Their entire "training corpus" will not top even 100 million "tokens" before they can speak and understand language, and do so with real intentionality.
Just to emphasize that point: 100 words-per-minute times 60 minutes-per-hour times 12 hours-per-day times 365 days-per-year times 4 years is a mere 105,120,000 words. That's a ludicrously high estimate of words-per-minute and hours-per-day, and 4 years old (the age of my other kid) is well after basic speech capabilities are developed in many children, etc. More likely the available "training data" is at least 1 or 2 orders of magnitude less than this.
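For concreteness, here is that upper-bound arithmetic as a minimal sketch; the rates are the deliberately generous assumptions from above, not measurements:

```python
# Generous upper bound on the words a child hears before age 4.
WORDS_PER_MINUTE = 100   # deliberately high estimate
MINUTES_PER_HOUR = 60
HOURS_PER_DAY = 12       # deliberately high estimate
DAYS_PER_YEAR = 365
YEARS = 4

upper_bound = (WORDS_PER_MINUTE * MINUTES_PER_HOUR
               * HOURS_PER_DAY * DAYS_PER_YEAR * YEARS)
print(f"upper bound: {upper_bound:,}")  # 105,120,000

# Realistic exposure is likely 1-2 orders of magnitude lower.
print(f"likely range: {upper_bound // 100:,} to {upper_bound // 10:,}")
```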
The point here is that large language models, trained as they are on multiple billions of tokens, are not developing their behavioral capabilities in a way that's remotely similar to humans, even if you believe those capabilities are similar (they are by certain very biased ways of measurement; they very much aren't by others). This idea that humans must be naturally good at acquiring language is an old one (see e.g. https://en.m.wikipedia.org/wiki/Language_acquisition_device). Why should this matter though?
The AI hypelords are trying to argue (because it will benefit them personally, in most cases) that more research into LLMs alone will lead to so-called "Artificial General Intelligence." However, "general intelligence" doesn't have any widely-accepted definition (although Microsoft's contract with OpenAI seems to think the definition involves a certain level of profit: https://techcrunch.com/2024/12/26/microsoft-and-openai-have-a-financial-definition-of-agi-report/). But I think it's pretty fair to claim that a system which is bad at learning probably does not have something we'd want to call "general intelligence" since the capacity to learn is an important part of what intelligence is. It might have particular capabilities we'd call "intelligent" but by missing out on the capacity to learn, its "intelligence" would by definition be narrow.
Although there are definitely people working on LLM training efficiency, the underlying technical approach is fundamentally incompatible with learning language from merely millions of tokens. Any approach that achieves reasonable language capability without billions of tokens of training data will deviate from the LLM blueprint in one of two ways: either it will start from a pre-trained model that had more data available to it, or it will use other AI techniques to learn more efficiently.
"But wait, don't humans have a pre-trained language model in our DNA?" you might ask. We certainly have some capability that other species lack, but it's more likely a learning capability than just stored linguistic information, for a few reasons. First, any stored information would have to be completely language-agnostic, since genes don't vary by language spoken. Second, the entire human genome has a raw information content of ~700 MB (see: https://medium.com/precision-medicine/how-big-is-the-human-genome-e90caa3409b0). That's not nearly enough to encode a useful amount of pre-training data in modern model terms, and you've got to leave room to encode all of human biology... Just to emphasize this point, the "small" 8-billion-parameter Llama 3.1 model needs ~12 gigabytes of RAM to store the parameters (https://llamaimodel.com/requirements/#Llama3-1).
The point here is that if we want to be serious about a quest for "AGI," or if we're worried about whether "AGI is just around the corner," we can be pretty sure that more fundamental AI research breakthroughs stand between the state of the art and whatever our favorite idea of AGI is, and that these breakthroughs, assuming they happen, will not come from investing more money in larger language models. In fact, if the resulting systems truly achieve efficient learning, they won't even need larger and more environmentally questionable datacenters to run. Just the opposite: LLM research and the datacenter arms race are currently sucking researcher time and material resources away from the investments that might achieve AGI. To put it more succinctly:
The bullshit-machine peddlers are peddling bullshit when it comes to AGI claims.
Have I mentioned my 4-year-old is also learning to draw?
Scaling Laws: I strongly recommend this #podcast series on #AI and #LLM-related #Legal topics, including law #Education. #ScalingLaws
https://mastodon.social/@lawfare/114874664114525328
“Mollick discusses the transformative potential of AI in various fields, particularly education and medicine, as well as the need for empirical research to understand AI's impact, the importance of adapting teaching methods, and the challenges of cognitive de-skilling.”
Includes extended discussion on law school teaching. The Apple podcast has a generated transcript.
#lawfare #ScalingLaws #podcast #AGI #AI #LLM #pedagogy
https://podcasts.apple.com/us/podcast/scaling-laws/id1607949880?i=1000715411530
Gary Marcus is onto something here. Maybe true AGI is not so impossible to reach after all; probably not in the near future, but likely within 20 years.
"For all the efforts that OpenAI and other leaders of deep learning, such as Geoffrey Hinton and Yann LeCun, have put into running neurosymbolic AI, and me personally, down over the last decade, the cutting edge is finally, if quietly and without public acknowledgement, tilting towards neurosymbolic AI.
This essay explains what neurosymbolic AI is, why you should believe it, how deep learning advocates long fought against it, and how in 2025, OpenAI and xAI have accidentally vindicated it.
And it is about why, in 2025, neurosymbolic AI has emerged as the team to beat.
It is also an essay about sociology.
The essential premise of neurosymbolic AI is this: the two most common approaches to AI, neural networks and classical symbolic AI, have complementary strengths and weaknesses. Neural networks are good at learning but weak at generalization; symbolic systems are good at generalization, but not at learning."
https://garymarcus.substack.com/p/how-o3-and-grok-4-accidentally-vindicated
#AI #NeuralNetworks #DeepLearning #SymbolicAI #NeuroSymbolicAI #AGI
…the empires of AI won’t give up their power easily. The rest of us will need to wrest back control of this technology’s future. …we can all resist the narratives that OpenAI and the AI industry have told us to hide the mounting social and environmental costs of this technology behind an elusive vision of progress.
—Karen Hao, Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI
#ai #agi #openai #samaltman #altman
Automated planning and scheduling
https://en.wikipedia.org/wiki/Automated_planning_and_scheduling
Satplan
https://en.wikipedia.org/wiki/Satplan
"Satplan (better known as Planning as Satisfiability) is a method for automated planning. It converts the planning problem instance into an instance of the Boolean satisfiability problem (SAT), which is then solved using a method for establishing satisfiability such as the DPLL algorithm or WalkSAT"
Fascinating! 🤓
2/3
AIXI
https://en.wikipedia.org/wiki/AIXI
"AIXI /ˈaɪksi/ is a theoretical mathematical formalism for artificial general intelligence. It combines Solomonoff induction with sequential decision theory. AIXI was first proposed by Marcus Hutter in 2000[...]."
"[...]AIXI is incomputable."
3/3
#AI #ArtificialIntelligence #Gödel #AGI #ArtificialGeneralIntelligence