alcinnz boosted

Automated planning and scheduling

https://en.wikipedia.org/wiki/Automated_planning_and_scheduling

Satplan

https://en.wikipedia.org/wiki/Satplan

"Satplan (better known as Planning as Satisfiability) is a method for automated planning. It converts the planning problem instance into an instance of the Boolean satisfiability problem (SAT), which is then solved using a method for establishing satisfiability such as the DPLL algorithm or WalkSAT"

Fascinating! 🤓

2/3

#SAT #AI #ArtificialIntelligence #NoLLM

⁂ Article

Why most radical tech is pointless, and why #indymediaback isn’t

Almost everything built in today’s alt-radical tech scene is, bluntly, pointless. Despite good intentions, most of it ends up feeding the endless cycle of #fashernista churn: flashy new platforms, bleeding-edge protocols, and encrypted communication tools nobody uses, all built by isolated teams disconnected from real-world needs or history. This is the #geekproblem: a culture where novelty is fetishized, and social usefulness is an afterthought, if it appears at all.

Examples:

Secure […]

🚨 AI Safety Advocate Linked to Multiple Murders

「 Obsessed with the concept of "Roko's Basilisk," a ghoulish thought experiment imagining a future artificial superintelligence torturing its opponents for all eternity, LaSota squarely fell into the category of those who think AI will destroy the world — and thought it was her duty to stop it 」

https://futurism.com/ai-safety-murders-zizians

#ai #aidoom #cults #rokosbasilisk

@jbz #AI cultists 😳

A depressing fable about how ChatGPT is corroding trust in scholarship

In preparation for next week’s keynote on generative AI and the crisis of trust, I picked up a book about trust by a philosopher, whom I’ve decided not to name, when I saw it in the Tate bookshop earlier today. It began with a quote from bell hooks that caught my attention:

Trust is both a personal and a political endeavour, an affirmation of our shared humanity and our collective potential for growth and transformation. By embracing trust, by fostering connections, grounded in love and compassion, we have the power to not only change our own lives but also to reshape the world around us…

I wanted to post it on my blog, so I immediately looked for a citation. I could find no result for the exact quote, but Google returned this site at the top of the list, where I found nearly the same quote:

In the end, trust is both a personal and a political endeavor, an affirmation of our shared humanity and our collective potential for growth and transformation. By embracing trust, by fostering connections grounded in love and compassion, we have the power to not only change our own lives but also to reshape the world around us, one relationship at a time.

The problem is that this site hosts imagined responses by philosophers to the question ‘what is trust?’ produced by ChatGPT. These (genuinely quite interesting) LLM outputs were posted in April 2023, only to feature in a book published in 2024. I can find no other source for the quote the author includes, other than this nearly exact quote produced by ChatGPT.

The most obvious explanation here is that they decided they wanted to start the book with a quote from bell hooks. They then searched for ‘bell hooks and trust’, which returns the site above as its second result. They didn’t read the introduction, which explains the exercise with the LLM, and instead copied and pasted the ChatGPT output into their book without checking the source of the citation.

The irony is that I now don’t trust the rest of the book. A philosopher writing a book about trust begins it with such lazy scholarship that I now struggle to trust them. I hope I’m wrong. But without wishing to personalise things, I’m tempted to use this as an example in next week’s keynote. It illustrates how LLMs are contributing to an environment in which lazy scholarship, cherry-picking a quote from a Google search, becomes much riskier given the circulation of synthetic content.

#AI #artificialIntelligence #ChatGPT #generativeAI #PascalGielen #scholarship #technology #trust #writing

"Contrary to expectations, cross-country data and six additional studies find that people with lower AI literacy are typically more receptive to AI.

This link occurs because people with lower AI literacy are more likely to perceive AI as magical and experience feelings of awe in the face of AI's execution of tasks that seem to require uniquely human attributes.

Efforts to demystify AI may inadvertently reduce its appeal."

https://journals.sagepub.com/doi/10.1177/00222429251314491

🔓 https://osf.io/preprints/psyarxiv/t9u8g_v1

#AI #Literacy

📨 Today, EDRi and 51 civil society organisations, academics, and experts have written to the European Commission to oppose any attempts to suspend or delay the #ArtificialIntelligence #AI Act.

These attempts, especially in light of the growing trend of #deregulation of #FundamentalRights and environmental protection, could undermine accountability and hard-won rights for people, the planet, justice and democracy 🚨

Read the open letter ➡️ https://edri.org/our-work/open-letter-european-commission-must-champion-the-ai-act-amidst-simplification-pressure/