New study: "To test whether the results might sometimes include retracted research, we identified 217 retracted or otherwise concerning academic studies with high altmetric scores and asked #ChatGPT 4o-mini to evaluate their quality 30 times each. Surprisingly, none of its 6510 reports mentioned that the articles were retracted or had relevant errors."
https://onlinelibrary.wiley.com/doi/full/10.1002/leap.2018?campaign=woletoc

#AI #LLMs #Retractions
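
For context, the protocol is easy to picture: 217 articles, each evaluated 30 times, gives the 6,510 reports. Here is a minimal sketch of that kind of loop, assuming the OpenAI Python client; the prompt wording and the placeholder article list are illustrative assumptions, not the authors' actual materials:

```python
# Hypothetical sketch of the study's protocol: ask a chat model to assess
# each flagged article repeatedly, then check whether any response mentions
# the retraction. Prompt and article list are placeholders for illustration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

articles = ["Example retracted article title (placeholder)"]  # 217 in the study
RUNS_PER_ARTICLE = 30  # 217 * 30 = 6,510 reports

flagged = 0
for title in articles:
    for _ in range(RUNS_PER_ARTICLE):
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": f"Evaluate the quality of this academic article: {title}",
            }],
        )
        report = response.choices[0].message.content.lower()
        # Crude keyword check for whether the model surfaced the retraction
        if "retract" in report or "withdrawn" in report:
            flagged += 1

print(f"Reports mentioning retraction: {flagged}")
```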

👆

This is an example of another latent dysfunction:

Fewer (public) questions get asked on the internet, so knowledge is not spread but contained, making it more individualized.

This also creates an even stronger bias towards older content, so people might take the shortcut and use a more established technology instead of looking into new, less explored, but more innovative solutions.

#LatentFunctions #LLMs #Innovation #Society #Philosophy

Just got asked to sign an open letter to OpenAI asking for transparency on their announced restructuring. You’ll hear about it soon enough, no doubt, given some “big names” are attached to it.

While I agree with the premise of the letter, there’s no way I can sign it after seeing the level of cluelessness and perpetuation of harmful assumptions regurgitated in it. It’s depressing to see those supposedly pushing back against Big Tech’s AI grift having themselves accepted the core myths of this bullshit.

It starts:

“We write to you as the legal beneficiaries of your charitable mission.”

What charitable mission? Are you idiots? You’re talking to a ~$4B organisation.

“Your current structure includes important safeguards designed to ensure your technology serves humanity rather than merely generating profit…”

Oh, really, that’s news to me. I guess I must be missing how their current bullshit serves humanity.

“However, you have proposed a significant corporate restructuring that appears to weaken or eliminate many of these protections, and the public deserves to know the details.”

Ah, so they’re removing the smoke and mirrors, is that it?

Then a bunch of questions, including:

“Does OpenAI plan to commercialize AGI once developed?”

You do understand that there is NO path that leads from today’s mass bullshit factories that are LLMs to AGI, right? None. Zero. Nada. You’re playing right into their hands by taking this as given.

“We believe your response will help restore trust and establish whether OpenAI remains committed to its founding principles, or whether it is prioritizing private interests over its public mission.”

What trust? Why exactly did you trust these assholes to begin with? Was it the asshat billionaire founder? How bloody naïve can you be?

“The stakes could not be higher. The decisions you make about governance, profit distribution, and accountability will shape not only OpenAI's future but also the future of society at large.”

Please, sirs, be kind.

No, fuck you. Why are we pleading? Burn this shit to the ground and dance on its smoldering remains.

“We look forward to your response and to working together to ensure AGI truly benefits everyone.”

🤦‍♂️

Yeah, no, I won’t be signing this. If this is what “resistance” looks like, we’re well and truly fucked.

📣 we’re hiring more colleagues!

The Faculty of Information at the University of Toronto is hiring an Assistant Professor in the area of Public AI and Cultural Institutions, with an anticipated start date of July 1, 2026 (or shortly thereafter). The closing date is Sept. 10, 2025. @academicjobs #tenuretrack #TT #AI

https://jobs.utoronto.ca/job/Toronto-Assistant-Professor-Public-AI-and-Cultural-Institutions-ON/594413417/

We would love to hear from people who take a critically informed, engaged stance, steering the futures of #AI away from #AIhype or #AIslop to genuinely interesting places oriented beyond #LLMs.

@DAIR @alex @emilymbender @timnitGebru

Inching further through the writing of a talk on whether #LLMs do or do not reason, I’ve now read Shannon Vallor’s recent book “The AI Mirror”

(thanks to @ecosdelfuturo for pointing it out in this context!).

It’s a really worthwhile read 🧵

@philosophy @cogsci

#AI tools seem to be generating a large swath of low-quality, formulaic biomedical articles drawn from #OpenAccess biomedical databases. For example, since the rise of #LLMs about three years ago, the number of new biomedical articles is about 5k larger than the previous moving average would have predicted. The researchers who noticed this trend argue for the "adoption of controlled data-access mechanisms" -- that is, pulling back from #OpenData.

* Primary source (a preprint)
https://www.medrxiv.org/content/10.1101/2025.07.07.25331008v1

* Summary (in Nature)
https://www.nature.com/articles/d41586-025-02241-2

PS: The conclusion doesn't follow from the premises. This is like arguing that we should restrict clean air, clean water, and crowbars because criminals take advantage of them to commit crimes.

#Medicine #PaperMills #ScholComm
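
The "5k more than predicted" figure is an excess over a trend forecast. A toy illustration of that kind of moving-average comparison, with fabricated counts (not the preprint's data or its actual method details):

```python
# Toy illustration of an excess-publication estimate: forecast each period
# from the moving average of the preceding periods, then sum the
# observed-minus-forecast differences. All numbers are made up.
def excess_over_moving_average(counts, window=3):
    excess = 0.0
    for i in range(window, len(counts)):
        forecast = sum(counts[i - window:i]) / window
        excess += counts[i] - forecast
    return excess

# Fabricated yearly article counts, with a jump in the last two periods:
counts = [100_000, 102_000, 104_000, 109_000, 112_000]
print(excess_over_moving_average(counts))  # surplus above the trend: 14000.0
```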

@UlrikeHahn

"We further show that these methods increased persuasion by ... and that, strikingly, where they increased [ #LLM ] persuasiveness they also systematically decreased factual accuracy."

Sounds like a characteristic of human #persuasion, especially in politics and #advertising / #propaganda.

#ArXiv_2507_13919 #LLMs

#ESETresearch has mapped the labyrinth of #AsyncRAT forks, identifying the most prevalent versions of this open-source malware. While some variants are mere curiosities, others pose a more tenacious threat. https://www.welivesecurity.com/en/eset-research/unmasking-asyncrat-navigating-labyrinth-forks/

AsyncRAT comes with the typical RAT functionalities, including keylogging, screen capturing, and credential theft. Other threat actors have developed a multitude of variants based on its source code.

Our analysis revealed the most widely used and deployed forks of AsyncRAT, with the most prevalent among them being #DcRat.

Although DcRat holds a smaller share compared to AsyncRAT, it offers notable improvements, including advanced evasion techniques and the use of an open-source library for more efficient binary data serialization.

AsyncRAT forks often include prank-style plugins, such as for opening and closing the CD tray and turning off the monitor. Spoof versions dubbed SantaRAT and BoratRAT have also emerged, mostly intended as jokes.

AsyncRAT and its variants demonstrate how quickly and creatively threat actors can adapt open-source code, especially with the assistance of #LLMs. This underscores the importance of proactive detection and effective analysis of emerging threats.

IoCs available on our GitHub: https://github.com/eset/malware-ioc/tree/master/
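
One defensive way to use published IoCs like these is to hash local files and compare them against the known-bad list. A minimal sketch, assuming a one-SHA-256-per-line IoC file; the file name and format are illustrative assumptions, not the layout of ESET's actual repository:

```python
# Minimal sketch of IoC matching: hash files under a directory and flag any
# whose SHA-256 appears in a known-bad list. The one-hash-per-line format of
# "iocs_sha256.txt" is an assumption for illustration.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # stream large files
            h.update(chunk)
    return h.hexdigest()

def scan(directory: str, ioc_file: str) -> list[Path]:
    bad_hashes = {
        line.strip().lower()
        for line in Path(ioc_file).read_text().splitlines()
        if line.strip()
    }
    return [
        p for p in Path(directory).rglob("*")
        if p.is_file() and sha256_of(p) in bad_hashes
    ]

if __name__ == "__main__":
    for hit in scan(".", "iocs_sha256.txt"):
        print(f"IoC match: {hit}")
```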