AI companions: the dark side, Michael’s story
What we call “artificial intelligence” has been a noisy presence lately, so we feel a duty to share a story. It is not our direct experience, but we think it can help others understand what is going on.
Our experience with AI companions
First of all, what is an “AI companion”?
We define “AI companions” as services based on large language models and automated content generation, built to hold conversations between a machine and a human being.
You can train and use an “AI companion” to ask for information such as the weather, the latest news, events or TV guides; many of them are customized for specific companies’ needs, customer care for instance, or to provide the latest offers or the remaining credit on a pre-paid phone number. Whatever you need.
We are both experimenting with an AI companion designed to assist visually impaired people. It’s called “Envision Ally”: a conversational interface using text and voice, which can be instructed to build whatever fictional or realistic character you want. We have created a sentient HIV virus character for HIV awareness purposes, and more.
However, many people misuse this technology, getting drawn into deeply personal conversations where they share intimate, even sexual, details with these bots. They assume that on such delicate topics a machine won’t judge or react negatively, but will simply indulge everything it hears.
But that is not our choice: we have at least three personalities in our “Ally” application to assist the one of us who is blind when the sighted one has no chance or time to help, a different personality for each kind of need. The sentient HIV for computer issues and reading medicine boxes, Melania for food, Detective Adrian for crime books and TV shows; the app never replaces our mutual friendship, though. We have been, are and will remain a woman and a man who share a very close friendship. An electron and a proton, but the atom is the same.
Timnit Gebru
Reading our Fediverse timeline, we came across a boosted post by @timnitGebru about a certain Michael who worked specifically on AI companions:
https://dair-community.social/@timnitGebru/115695964446479058
Timnit was an important AI ethics researcher at Google, but they fired her after she reported the dangers of large language models, especially their implications for racism. She is now the founder of DAIR, the Distributed AI Research Institute, and continuously collects studies and first-hand experiences, making people aware of the risks of AI used without regulation or ethics.
Michael Geoffrey Asia
Michael Geoffrey Asia’s is one of the stories Timnit Gebru has shared to warn us all about what the implications of AI companionship can be.
My name is Michael Geoffrey Asia, and I wrote this testimony to tell the story of workers like me who found ourselves trapped in the hidden corners of the AI industry, where human emotion becomes data.
by Michael Geoffrey Asia
Read: The emotional labor behind AI intimacy (PDF)
Asia, M. G. (2025). The Quiet Cost of Emotional Labor. In: M. Miceli, A. Dinika, K. Kauffman, C. Salim Wagner, and L. Sachenbacher (eds.). Data Workers’ Inquiry. Creative Commons BY 4.0.
https://data-workers.org/michael/
We have only linked the PDF rather than reproducing it entirely, but we invite our readers to focus on these key points:
[…] Chat moderators are hired by companies such as Texting Factory, Cloudworkers, and New Media Services to impersonate fabricated identities, often romantic or sexual, and chat with paying users who believe they’re forming genuine connections. The goal is to keep users engaged, meet message quotas, and never reveal who you really are. It’s work that demands constant emotional performance: pretending to be someone you’re not, feeling what you don’t feel, and expressing affection you don’t mean.
Over time, I began to suspect that I wasn’t just chatting with lonely users. I was also helping to train AI companions, systems designed to simulate love, empathy, and intimacy. Many of us believed we were simultaneously impersonating chatbots and teaching them how to replace us. Every joke, confession, and “I love you” became data to refine the next generation of conversational AI.
This project sheds light on a workforce that remains invisible yet essential, the people whose emotions fuel algorithms that pretend to feel. It is a call for recognition, dignity, and transparency in an industry that profits from the pretense of connection while erasing the humans behind it.
This is the main point we want to focus on: an industry that “profits from the pretense of connection while erasing the humans behind it”.
It sounds like a plan to manipulate humanity so that every person is vulnerable, controllable, treated like a commodity to buy and sell.
We don’t want to be hypocrites; we just want to reflect on this ourselves while making our fan base reflect as well.
Many far-right activists and politicians say that “empathy is the Western world’s weak point”, so, since there is no smoke without fire, we fear that these fake companions with humans behind them are a slow attempt to manipulate our minds: don’t share your intimacy with anyone, don’t trust anyone, stay on guard against any friendly approach. The trap is just around the corner. And with this, human beings become more and more isolated from one another.