⇒ Please help me find #GenAI truth-telling sites! ⇐
In the past I've come across several websites that effectively debunk #GenerativeAI hype.
However, now that I actually need them, to help me make the case at work for strong oversight of the company's GenAI use, I can't find any of them.
It seems like no matter what search terms and search engine I use, I get garbage search results (hype, indeed!).
What are your go-to websites for debunking #AI hype?
#boostRequest #tech #LLM

Because search engines (Google in particular) have absolutely failed me, I'm gonna crowdsource this:

I'm looking for long-form blog posts on the state of #AI today. I don't mind if they get a bit technical; I'm just trying to get a deeper understanding of what #llms can do, how they work, and their limitations and potential. I feel like most of what I've been exposed to is either overly optimistic takes (which I generally find off-putting) or pessimistic takes that appeal to my cynicism (unfortunately). But I'm trying to be more open-minded now.

I've seen a few talks on YouTube, one from Andrej Karpathy on his channel and another by Jodie Burchell at GOTO Conferences, which I think were pretty good. I'm just tired of being a non-believer who can't properly explain, from a technical perspective, why I don't believe, other than the fact that I've tried to use LLMs for actual complex tasks and even the almighty Claude seems to crumble under real pressure.

#askfedi #generativeAI #technology #blog

A depressing fable about how ChatGPT is corroding trust in scholarship

In preparation for next week’s keynote on generative AI and the crisis of trust, I picked up a book about trust, by a philosopher I’ve decided not to name, when I saw it in the Tate bookshop earlier today. It began with a quote from bell hooks that caught my attention:

Trust is both a personal and a political endeavour, an affirmation of our shared humanity and our collective potential for growth and transformation. By embracing trust, by fostering connections, grounded in love and compassion, we have the power to not only change our own lives but also to reshape the world around us…

I wanted to post it on my blog, so I immediately looked for a citation. I could find no result for the exact quote but Google returned this site at the top of the list, where I found nearly the same quote:

In the end, trust is both a personal and a political endeavor, an affirmation of our shared humanity and our collective potential for growth and transformation. By embracing trust, by fostering connections grounded in love and compassion, we have the power to not only change our own lives but also to reshape the world around us, one relationship at a time.

The problem is that this site hosts imagined responses by philosophers to the question ‘what is trust?’ produced by ChatGPT. These (genuinely quite interesting) LLM outputs were posted in April 2023, only to feature in a book published in 2024. I can find no other source for the quote the author includes, other than this nearly exact quote produced by ChatGPT.

The most obvious explanation here is that they decided they wanted to start the book with a quote from bell hooks. They then typed ‘bell hooks and trust’ into Google, which returns the site above as its second result. They didn’t read the introduction, which explains the exercise with the LLM, and instead copied and pasted the ChatGPT output into their book without checking the source of the citation.

The irony being that I now don’t trust the rest of the book. A philosopher writing a book about trust begins it with such lazy scholarship that I now struggle to trust them. I hope I’m wrong. But without wishing to personalise things, I’m tempted to use this as an example in next week’s keynote. It illustrates how LLMs are contributing to an environment in which lazy scholarship, cherry-picking a quote from a Google search, becomes much riskier given the circulation of synthetic content.

#AI #artificialIntelligence #ChatGPT #generativeAI #PascalGielen #scholarship #technology #trust #writing

"Let’s not forget that the industry building AI Assistants has already made billions of dollars honing the targeted advertising business model. They built their empires by drawing our attention, collecting our data, inferring our interests, and selling access to us.

AI Assistants supercharge this problem. First because they access and process incredibly intimate information, and second because the computing power they require to handle certain tasks is likely too immense for a personal device. This means that very personal data, including data about other people that exists on your phone, might leave your device to be processed on their servers. This opens the door to reuse and misuse. If you want your Assistant to work seamlessly for you across all your devices, then it’s also likely companies will solve that issue by offering cloud-enabled synchronisation, or more likely, cloud processing.

Once data has left your device, it’s incredibly hard to get companies to be clear about where it ends up and what it will be used for. The companies may use your data to train their systems, and could allow their staff and ‘trusted service providers’ to access your data for reasons like improving model performance. It’s unlikely you had all of this in mind when you asked your Assistant a simple question.

This is why it’s so important that we demand that our data be processed on our devices as much as possible, and used only for limited and specific purposes we are aware of and have consented to. Companies must provide clear and continuous information about where queries are processed (locally or in the cloud), what data has been shared for that to happen, and what will happen to that data next."

https://privacyinternational.org/news-analysis/5591/are-ai-assistants-built-us-or-exploit-us-and-other-questions-ai-industry

#AI #GenerativeAI #LLMs #Chatbots #AIAssistants #Privacy #AdTech #DataProtection #AdTargeting

"Asking scientists to identify a paradigm shift, especially in real time, can be tricky. After all, truly ground-shifting updates in knowledge may take decades to unfold. But you don’t necessarily have to invoke the P-word to acknowledge that one field in particular — natural language processing, or NLP — has changed. A lot.

The goal of natural language processing is right there on the tin: making the unruliness of human language (the “natural” part) tractable by computers (the “processing” part). A blend of engineering and science that dates back to the 1940s, NLP gave Stephen Hawking a voice, Siri a brain and social media companies another way to target us with ads. It was also ground zero for the emergence of large language models — a technology that NLP helped to invent but whose explosive growth and transformative power still managed to take many people in the field entirely by surprise.

To put it another way: In 2019, Quanta reported on a then-groundbreaking NLP system called BERT without once using the phrase “large language model.” A mere five and a half years later, LLMs are everywhere, igniting discovery, disruption and debate in whatever scientific community they touch. But the one they touched first — for better, worse and everything in between — was natural language processing. What did that impact feel like to the people experiencing it firsthand?

Quanta interviewed 19 current and former NLP researchers to tell that story. From experts to students, tenured academics to startup founders, they describe a series of moments — dawning realizations, elated encounters and at least one “existential crisis” — that changed their world. And ours."

https://www.quantamagazine.org/when-chatgpt-broke-an-entire-field-an-oral-history-20250430/

#AI #GenerativeAI #ChatGPT #NLP #OralHistory #LLMs #Chatbots