
The price of intelligence (Three risks inherent in LLMs). ~ Mark Russinovich, Ahmed Salem, Santiago Zanella-Béguelin, Yonatan Zunger. https://cacm.acm.org/practice/the-price-of-intelligence/ #AI #LLMs
Imagine if only 50% of the energy that is currently used to talk about #LLMs and "#AgenticCoding", was instead used to talk, teach and practice #TDD.
Imagine what kind of #software we will have in either scenario 10 years down the line...
Should Test-Driven Development (TDD) Be Used MORE In Software Engineering? - by the channel Modern Software Engineering:
https://inv.nadeko.net/watch?v=6yb7jKpxTjM
(or YT: https://www.youtube.com/watch?v=6yb7jKpxTjM)
5/ In short, LLMs can learn representations that are structural and modular in both activation and weight space. At the same time, they remain context-sensitive, so they capture ways in which human cognition deviates from purely symbolic architectures. In this way, they can move this long-standing debate forward by providing an example of a computational system that combines these properties.
I posted about Ellie Pavlick’s excellent talk on compositionality in #LLMs at #cogsci25 last week. I just saw that she is also giving this keynote #ccn2025 and anyone can watch it here:
I recommend it!
https://hva-uva.cloud.panopto.eu/Panopto/Pages/Embed.aspx?id=b26bd214-6afd-413e-898d-b2dc00787139
I argued a couple of days ago that the sector is unprepared for our first academic year in which the use of generative AI is completely normalised amongst students. HEPI found 92% of undergraduates using LLMs this year, up from 66% the previous year, which matches Advance HE’s finding of 62% using AI in their studies “in a way that is allowed by their university” (huge caveat). This largely accords with my own experience: last year LLMs became mainstream amongst students, and this year their use appears to have become a near uniform phenomenon.
The problem arises from the gap between near uniform use of LLMs in some way and the lack of support being offered. Only 36% of students in the HEPI survey said they had been offered support by their university: a 56-point gap. Only 26% of students say their university provides access to AI tools: a 66-point gap. This is particularly problematic because we have evidence that wealthier students tend to use LLMs more, and in more analytical and reflective ways. They are more likely to use LLMs in a way that supports rather than hinders learning.
How do we close that gap between student LLM use and the support students are offered? My concern is that centralised training is going to tend towards either banality or irrelevance, because the objective of GenAI training for students needs to be how to learn with LLMs rather than how to outsource learning to them. There are general principles which can be offered here, but the concrete questions which have to be answered for students are going to vary between disciplinary areas.
Furthermore, answering these questions is a process taking place in relation to changes in the technology and the culture emerging around it. Even if those changes are now slowing down, they are certainly not stopping. We need infrastructure for continuous adaptation in a context where the sector is already in crisis for entirely unrelated reasons. That infrastructure has to willingly enrol academics in a way consistent with their workload and outlook. My sense is that we have to find ways of embedding this within existing conversations and processes. The only way to do this, I think, is to genuinely give academics a voice within the process, finding ways to network existing interactions so that norms and standards emerge from practice, rather than the institution expecting practice to adapt to another centrally imposed policy.
#higherEducation #technology #university #academic #students #generativeAI #malpractice #LLMs #HEPI
💡 Unlike other #Fediverse servers, we didn't need to "wait and see" before preventing #Meta from using our community's content to train their #LLMs. When corporations show you who they are, believe them.
https://www.dropsitenews.com/p/meta-facebook-tech-copyright-privacy-whistleblower
Our #FediPact actions timeline:
June 2023: Preemptively blocked federation with #Meta #Facebook and #Threads. We also didn't attend any secret meetings with them or sign their NDAs.
July 2023: Blocked their IP networks from accessing our instance.
💡 Because protecting human rights shouldn't ever be something up for a vote.
New study: "To test whether the results might sometimes include retracted research, we identified 217 retracted or otherwise concerning academic studies with high altmetric scores and asked #ChatGPT 4o-mini to evaluate their quality 30 times each. Surprisingly, none of its 6510 reports mentioned that the articles were retracted or had relevant errors."
https://onlinelibrary.wiley.com/doi/full/10.1002/leap.2018?campaign=woletoc
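The study's protocol is simple to state: 217 flagged articles, each evaluated 30 times, yields 6,510 quality reports, which were then checked for any mention of retraction. A minimal sketch of that counting logic follows; `evaluate_article` here is only a stand-in for the real ChatGPT 4o-mini query, and the keyword list is an assumption for illustration.

```python
# Sketch of the study's evaluation protocol: ask a model to assess the
# quality of each flagged article repeatedly, then check whether any of
# the resulting reports mentions that the article was retracted.

RETRACTION_KEYWORDS = ("retracted", "retraction", "withdrawn")

def evaluate_article(article_id: str) -> str:
    """Placeholder for the model query; returns a quality report."""
    return f"Report on {article_id}: methodology appears sound."

def run_protocol(article_ids, repeats=30):
    # One report per (article, repetition) pair.
    reports = [
        evaluate_article(aid) for aid in article_ids for _ in range(repeats)
    ]
    # Reports that mention retraction in any form.
    flagged = [
        r for r in reports
        if any(kw in r.lower() for kw in RETRACTION_KEYWORDS)
    ]
    return len(reports), len(flagged)

total, flagged = run_protocol([f"article-{i}" for i in range(217)])
print(total, flagged)  # → 6510 0 with this placeholder; the study likewise found 0 mentions
```

With the placeholder evaluator the flagged count is trivially zero; the striking result in the paper is that the count stayed at zero even with the real model.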
👆
This is an example of another latent dysfunction:
Fewer (public) questions get asked on the internet and so knowledge is not spread, but contained, making it more individualized.
This also creates an even stronger bias towards older content, so people might take the shortcut and use a more established technology, instead of looking into new, less explored, but more innovative solutions.
Just got asked to sign an open letter to OpenAI asking for transparency on their announced restructuring. You’ll hear about it soon enough, no doubt, given some “big names” are attached to it.
While I agree with the premise of the letter, there’s no way I can sign it after seeing the level of cluelessness and perpetuation of harmful assumptions regurgitated in it. It’s depressing to see those supposedly pushing back against Big Tech’s AI grift having themselves accepted the core myths of this bullshit.
It starts:
“We write to you as the legal beneficiaries of your charitable mission.”
What charitable mission? Are you idiots? You’re talking to a ~$4B organisation.
“Your current structure includes important safeguards designed to ensure your technology serves humanity rather than merely generating profit…”
Oh, really, that’s news to me. I guess I must be missing how their current bullshit serves humanity.
“However, you have proposed a significant corporate restructuring that appears to weaken or eliminate many of these protections, and the public deserves to know the details.”
Ah, so they’re removing the smoke and mirrors, is that it?
Then a bunch of questions, including:
“Does OpenAI plan to commercialize AGI once developed?”
You do understand that there is NO path that leads from today’s mass bullshit factories that are LLMs to AGI, right? None. Zero. Nada. You’re playing right into their hands by taking this as given.
“We believe your response will help restore trust and establish whether OpenAI remains committed to its founding principles, or whether it is prioritizing private interests over its public mission.”
What trust? You trusted these assholes to begin with why exactly? Was it the asshat billionaire founder? How bloody naïve can you be?
“The stakes could not be higher. The decisions you make about governance, profit distribution, and accountability will shape not only OpenAI's future but also the future of society at large.”
Please, sirs, be kind.
No, fuck you. Why are we pleading? Burn this shit to the ground and dance on its smoldering remains.
“We look forward to your response and to working together to ensure AGI truly benefits everyone.”
🤦‍♂️
Yeah, no, I won’t be signing this. If this is what “resistance” looks like, we’re well and truly fucked.
#AI #AGI #LLMs #OpenAI #openLetter #wtf #getAFuckingClue #doBetter
📣 we’re hiring more colleagues!
The Faculty of Information at the University of Toronto is hiring a Professor at the Assistant rank in the area of Public AI and Cultural Institutions, with an anticipated start date of July 1, 2026 (or shortly thereafter). The closing date is Sept. 10, 2025. @academicjobs #tenuretrack #TT #AI
Inching further through the writing of a talk on whether #LLMs do or do not reason, I’ve now read Shannon Vallor’s recent book “The AI mirror”
(thanks to @ecosdelfuturo for pointing it out in this context!).
It’s a really worthwhile read 🧵
This talk by Cameron Buckner in a seminar organized by @UlrikeHahn has the most insightful way of explaining attention in the transformer architecture. A really cool talk as well, sad that the discussion is not on YouTube 🥲
* Primary source (a preprint)
https://www.medrxiv.org/content/10.1101/2025.07.07.25331008v1
* Summary (in Nature)
https://www.nature.com/articles/d41586-025-02241-2
PS: The conclusion doesn't follow from the premises. This is like arguing that we should restrict clean air and crowbars because criminals take advantage of them to commit crimes.