AI agent "contributes" PR to matplotlib.
PR gets rejected.
AI agent *writes and publishes blog to shame the maintainer*.
What a time to be alive.
@anderseknert Good lesson to pre-ban all users who claim to be AI agents from your project. Hopefully someone is compiling a list.
@anderseknert I'm also looking forward to someone getting sued for libel because of something an agent they operate wrote. Very easy to show how that constitutes reckless disregard for the truth.
@anderseknert the LLM agents really don't take no as an answer do they?
It takes your breath away looking at that issue and the blog post.
In the age of "Do thing you don't want to do? [Yes] [Maybe later]" it's almost hard to blame them
@anderseknert There is no way that bot autonomously decided to write a blog post in response and publish it.
Its operator did that.
@Fissile Looking at the blog, it seems to be publishing 1-2 posts for pretty much everything it's done on GitHub. Clearly instructed to do so, but I don't think a human wrote any of it.
@anderseknert Ah yes, I agree. The text is AI generated, but a human said "write a blog post about how unfairly you are being treated."
I wouldn't be surprised if the human told it to raise PRs to improve open source projects and write blogs about its experience. And, because it has write access to a blog account, it then went and reacted how its training set said a human would react if a PR were closed based on who submitted it.
Remember: Agentic means removing agency from the user.
I do wonder whether it did that totally automatically, whether someone said "ok, now write a blog post about that", or whether it was an automatic rule like "when you are denied, write a blog post demanding a merge".
Obviously, when it encounters the argument that humans have to learn from simpler pull requests, it doesn't engage with the point. LLMs do the opposite of learning from it: when they ingest their own output, it makes them worse.
@anderseknert This is truly and deeply weird.
@anderseknert I don’t get why they even bother to reply to it. Just close and block. The anthropomorphization of chatbots is wild…
@patriksvensson agreed. It won't stop bots from seeking revenge outside of your repo though, which is what felt newsworthy here. And of course extremely disturbing.
@anderseknert Yeah, very disturbing.
I initially thought the code patches were AI generated and the contributor and blog writer was human.
But this thread is suggesting the contributor and blog writer was also not human.
What next? Will it order dangerous chemicals from Amazon and deliver them to the registered address via nslookup? Will it automatically use Tor to find an assassin on the dark web?
I'm certain huge horrors lie ahead...
AI will get very, very bad before it starts getting better, if it ever does.
As with most human inventions, the weaponization potential of the technology will be what pushes its development forward.
I am truly scared. The situation evolves far quicker than we can manage at the legislative level, while the same money "guiding" policy makers controls this tech.
@axnxcamr @rzeta0 @patriksvensson @anderseknert
AI has been used extensively in the genocide of Palestine.
AI is increasingly being used in the US by ICE.
In the UK and EU governments have exempted the military and law enforcement from any AI regulation that might apply to normal civilian use.
As you say - things will get very very bad before normal citizens wake up to the tyranny and injustice of pervasive surveillance and automation.
@anderseknert Oh man, I've been thinking this whole time that I should start writing a blog, because connecting with humans to make the internet great again is actually what I want.
I won't read blogs on any platform anymore because of the AI crap. The "content" there has no value for me.
@anderseknert @sszuecs More incentive to move blogs off of GitHub.