This morning I got an email from a sender that identified itself as an AI agent.
So - a plus for being upfront about it, but... please don't do this.
I get that a lot of people are really, really, really into AI tools. I have my opinions on them, you have yours. I have major qualms about them; some people think they're the best thing ever.
OK. Fine. But when your use of these things spills over into the rest of the world, it's no longer a question of my opinion vs. your opinion, my decisions vs. your decisions.
At that point, we've moved from each person doing their own thing to you inflicting your use of AI on me without my consent.
Before this spirals out of control, which I can see happening *very* quickly, I'd like us to agree on a few points of netiquette:
- it is rude in the extreme to set loose an AI agent to reach out to people who have not consented to interact with these things.
- it is rude to have an AI agent submit pull requests that human maintainers have to review.
- it is rude to have an AI agent autonomously interact with humans in any way when they have not consented to take part in whatever experiment you are running.
- it is unacceptable to have an AI agent autonomously interact with humans without identifying the person or organization behind the agent. If you're not willing to unmask and reach out as a person with your thoughts on this, then don't have an AI agent reach out to me.
Stuff like this really sours me on technology right now. If I didn't have a family and responsibilities, I'd be seriously considering how I could go live off the grid somewhere without having to interact with this stuff.
Again: I'm not demanding that other people not use AI/LLMs, etc. But when your use spills over into my having to interact with an agent's output, you need to reconsider. Your ability to spew things out into the universe puts an unwanted burden on other humans who have not consented to this.
#LLMs #AI #AgenticAI #OpenClaw #OpenSource