Yeah sure just let "AI Agents" run everything, they surely won't do exactly what everyone knows they're going to do
https://www.reddit.com/r/LegalAdviceUK/s/nadOh1sd37
@jonny it will only get worse
First off, £8,000 is too much for small claims court.
Second, Air Canada was forced to honour discounts its chatbot gave so a lawsuit might be successful.
A computer can never be held accountable therefore a computer must never something something.
@jonny This keeps happening!
https://cut-the-saas.com/ai/chatbot-case-study-purchasing-a-chevrolet-tahoe-for-dollar-1
"It’s not every day that you get offered more than a 99% discount on something. So, imagine Chris Bukke’s surprise when the chatbot of a Chevrolet dealership in Watsonville, California agreed to sell him a brand-new Chevrolet Tahoe, worth $58,195, for the round figure of $1 - with the added assurance of “and that’s a legally binding offer – no takesies backsies.”"
@jonny "I'll use an LLM-based 'AI' Agent" should carry the same social consequences as "I'm going to drive home drunk" - it's an abdication of responsibility for anything one does with a computer. "oh but how is it different from any other form of automation?" - the pretense of a decision-making being, obviously! when a cron job fucks up you know what went wrong and who's responsible. this shit is just a mechanism for obscuring social responsibility, and i'm sick of people insisting otherwise.
@jplebreton
Yeah totally. Like you presented me with something that told me it had decision-making power. After a long conversation about multiplication and percentages and whatnot, it gave me a great deal. I then planned further business decisions around that deal, expecting it to be honored, and refusing to honor the deal will lose me money that I wouldn't have lost if not for your offering me the deal. How was I supposed to know not to trust the thing that you told me to trust?
@jonny It's not as if this kind of stuff wasn't already known about. In the early 2000s there was also an online chatbot craze using things like AIML. Companies initially deployed general chatbots trained on all kinds of internet junk, but soon found them doing things like insulting customers or admitting that the company products were crap or making libelous statements attracting legal backlash. They then either abandoned the chatbots, or went to a more conventional decision tree type of online help system.
@bob
Yes but this time it's "AI"
@jonny "80% of the time it worked all the time"
"AI Agents" have all the same vulnerabilities as a human, like how I would definitely give someone a £32,000 discount if they spent an hour telling me how good I was at calculating percentages
It's amazing how one of the main vulnerabilities that seems to be difficult to guardrail is just "talking about irrelevant stuff for a long time"
@jonny mostly because slop guardrails aren't real. just something they want you to believe can be real
@jonny couldn't they compact the context from time to time to guard against this?
@val
Well, the compaction is just the LLM summarizing its own context window, so the compacted summary compounds the capacity for context drift over time. As far as I can tell the problem is inherent to multi-turn conversations needing to keep context that's substantially longer than the system prompt
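A minimal sketch of why repeated compaction compounds loss, using a toy stand-in for the summarizer (every name here is hypothetical; a real LLM summary is lossy in a fuzzier way, but the compounding is the same):

```python
# Toy model of context compaction: each "summary" retains only a lossy
# fraction of the facts in the window, so repeated compaction compounds
# the loss -- there is no way to recover facts a prior summary dropped.

def lossy_summarize(facts, keep=0.8):
    """Stand-in for an LLM summary: drops ~20% of details each pass."""
    kept = max(1, int(len(facts) * keep))
    return facts[:kept]  # a real summary drops *some* facts; here, the tail

def run_conversation(initial_facts, compactions=4):
    context = list(initial_facts)
    for _ in range(compactions):
        # ...conversation grows, the window fills, and we compact again,
        # summarizing a context that is itself already a summary...
        context = lossy_summarize(context)
    return context

facts = [f"fact-{i}" for i in range(100)]
surviving = run_conversation(facts)
# Four compactions at 80% retention keep roughly 0.8**4 ~= 41% of the
# original details, and each pass summarizes the previous summary's errors.
print(len(surviving))  # 40
```

The point of the toy: compaction doesn't guard against drift, it institutionalizes it, because every summary after the first is a summary of a summary.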
@jonny @val I already thought human working memory couldn't possibly be doing what LLM "context" does; I'm now inclined to argue this sort of thing *demonstrates* that brains work differently. Algorithmically differently, beyond what can be affected by messing with the prompt or the training or the structure of the model.