This also touches upon the age-old distinction between being smart and knowing facts. AI in the older definition of the word is about being smart, not (seemingly) knowing facts. And machine learning technology is not smart. But people with a naïve attitude to knowledge are really impressed when the LLM a) seems to understand what they say, b) always has an answer.
It's not AI. It's a scam.
"under appropriate confidence thresholds, AI systems would naturally express uncertainty rather than guess. So this would lead to fewer hallucinations. The problem is what it would do to user experience.
Consider the implications if ChatGPT started saying "I don't know" to even 30% of queries ... Users accustomed to receiving confident answers to virtually any question would likely abandon such systems rapidly."
😄 😄 😄
#Google releases #VaultGemma, its first #privacy-preserving #LLM
#GoogleResearch shows that #AI models can keep training data private.
This work on differential privacy has led to a new #openweight Google model called VaultGemma. The model uses differential privacy to reduce the possibility of memorization, which could change how Google builds privacy into its future AI agents. For now, though, the company's first differential privacy model is an experiment.
https://arstechnica.com/ai/2025/09/google-releases-vaultgemma-its-first-privacy-preserving-llm/
I remembered someone saying "WEBP is bad" so I asked an #LLM to explain and suggest alternatives.
It suggested AVIF which I didn't know about.
Asked if it's supported by the major browsers and by WordPress. Yes!
I have Python code that uses PIL to extract the first page of PDF files and save them as an image. I asked the machine to modify the code to extract as AVIF. It did and it works.
Good to have machines that can do all those things for me.
quelmap is an open-source local data analysis assistant that uses Lightning-4b and needs only 16 GB of RAM.
📊 Data visualization
🚀 Table joins
📈 Run statistical tests
📂 Unlimited rows, analyze 30+ tables at once
🐍 Built-in Python sandbox
🦙 Ollama or LM Studio API integration
Looks like the original repo got removed, but here's a clone, since it was released under the Apache license.
Gentoo Linux's AI policy forbids any content created with LLMs, including code, in contributions to official Gentoo projects, citing copyright, quality, and ethical concerns.
https://wiki.gentoo.org/wiki/Project:Council/AI_policy
In the age of batshit AI companies like Google, Microsoft, OpenAI, and others, a few open-source projects are making the correct call. Can the Linux Foundation also ban LLMs?
One negative effect of #LLM s on learning/teaching I have observed multiple times so far:
The LLM sirens lead students towards wrong solutions, and I have to be constantly vigilant to guide them back to safety.
Most recent instance:
Really enjoyed @tonybaloney's talk at #pyconau on how to make your #LLM models faster in production.
Key takeaways are that smaller models are faster, and you need to make your models smaller through quantisation, distillation or semantic caching.
Really tractable, immediately implementable 👏👏
More of this, pls
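Semantic caching, the last of the techniques mentioned above, can be sketched in a few lines. This toy version is my own illustration rather than anything from the talk: it substitutes a bag-of-words cosine similarity for a real embedding model so the example stays self-contained; a production system would use proper embeddings and a vector index.

```python
# Toy sketch of semantic caching: return a stored answer when a new query
# is "close enough" to one already answered, skipping the expensive LLM call.
# Bag-of-words cosine similarity stands in for a real embedding model here.
import math
from collections import Counter

def _vec(text: str) -> Counter:
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Map queries to answers, with fuzzy (similarity-based) lookup."""
    def __init__(self, threshold: float = 0.8):
        self.threshold = threshold
        self.entries: list[tuple[Counter, str]] = []

    def get(self, query: str):
        qv = _vec(query)
        for v, answer in self.entries:
            if _cosine(qv, v) >= self.threshold:
                return answer  # cache hit: no model call needed
        return None            # cache miss: caller runs the LLM, then put()

    def put(self, query: str, answer: str) -> None:
        self.entries.append((_vec(query), answer))

cache = SemanticCache()
cache.put("what is the capital of france", "Paris")
print(cache.get("what is the capital of france ?"))  # near-duplicate: prints "Paris"
```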
At #NDDcamp, I learn that a "promptologue" (promptologist) is an expert in writing queries ("prompts") for LLMs.
(Not yet in the Wiktionnaire.)
What's the difference between a proctologist and a promptologist?
None, they both work in crap :)