OpenAI and Anthropic are launching AI tools in health care, but their success may depend on transparency about reliability and accuracy, an issue that has already caused Google to fail in the sector. https://www.japantimes.co.jp/commentary/2026/01/20/world/chatgpts-ai-health-care-push/?utm_medium=Social&utm_source=mastodon #commentary #worldnews #ai #chatgpt #anthropic #gemini #claude #openai #healthcare #medicine
Am I to understand from this that SearXNG is in the process of becoming AI poisoned?
- https://github.com/searxng/searxng/issues/2163
- https://github.com/searxng/searxng/issues/2008
- https://github.com/searxng/searxng/issues/2273
#SearX #SearXNG #SearchEngines #AlternateSearchEngines #MetaSearchEngines #web #dev #tech #FOSS #OpenSource #AI #AIPoisoning #AISlop #GenAI #GenerativeAI #LLM #ChatGPT #Claude #Perplexity
A thought that popped into my head when I woke up at 4 am and couldn’t get back to sleep…
Imagine that AI/LLM tools were being marketed to workers as a way to do the same work more quickly and work fewer hours without telling their employers.
“Use ChatGPT to write your TPS reports, go home at lunchtime. Spend more time with your kids!” “Use Claude to write your code, turn 60-hour weeks into four-day weekends!” “Collect two paychecks by using AI! You can hold two jobs without the boss knowing the difference!”
Imagine if AI/LLM tools were not shareholder catnip, but a grassroots movement of tooling that workers were sharing with each other to work less. Same quality of output, but instead of being pushed top-down, being adopted to empower people to work less and “cheat” employers.
Imagine if unions were arguing for the right of workers to use LLMs as labor saving devices, instead of trying to protect members from their damage.
CEOs would be screaming bloody murder. There’d be an overnight industry in AI-detection tools and immediate bans on AI in the workplace. Instead of Microsoft Copilot 365, Satya would be out promoting Microsoft SlopGuard: add-ons that detect LLM tools running on Windows and prevent AI scrapers from harvesting your company’s valuable content for training.
The media would be running horror stories about the terrible trend of workers getting the same pay for working less, and the awful quality of LLM output. Maybe they’d still call them “hallucinations,” but it’d be in the terrified tone of 80s anti-drug PSAs.
What I’m trying to say in my sleep-deprived state is that you shouldn’t ignore the intent and ill effects of these tools. If they were good for you, shareholders would hate them.
You should understand that they’re anti-worker and anti-human. TPTB would be fighting them tooth and nail if their benefits were reversed. It doesn’t matter how good they get, or how interesting they are: the ultimate purpose of the industry behind them is to create less demand for labor and concentrate more wealth in fewer hands.
Unless you happen to be in a very very small club of ultra-wealthy tech bros, they’re not for you, they’re against you. #AI #LLMs #claude #chatgpt
OpenAI to test ads in ChatGPT as it burns through billions
#HackerNews #OpenAI #ChatGPT #Ads #technology
I started putting my thoughts together on why I don't use AI.
It will take time to write it all out, it feels like an essay. I could probably come up with a hundred reasons, but I am currently at eight just off the top of my head.
I want to share one that no one talks about.
Big tech companies are using cheap labor in the global south to filter the human trauma on the internet in order to train LLMs. People in countries like Kenya, India, and the Philippines spend 9+ hours per day viewing the most horrific content imaginable (violence, abuse, hate speech, etc.) at $1.50/hr. They are literally using the minds of people in precarious economic situations as a filter to protect the sensibilities of the wealthy. They call it opportunity, when it is actually predatory.
The next time you ask an "ai" to summarize a 450-word article for you, think about the human filter required to make that answer safe. You are not just saving time, you are benefiting from a predatory labor practice that pays pennies to protect your sensibilities.
Further reading:
https://time.com/6247678/openai-chatgpt-kenya-workers/
#NoAI #EthicalTech #LaborRights #HumanRights #FOSS #OpenAI #ChatGPT #Google #Gemini
Ahh, the #enshittification finally begins in earnest:
https://openai.com/index/our-approach-to-advertising-and-expanding-access/
Imagine all those private things people are typing to their trusted AI friend. Now imagine the machine using that information to maximize engagement and tailor messages to drive them to specific products or services or movements or political parties.
I cannot think of anything more terrifying.
If you thought the Facebook or X algos were scary tools for propaganda, be afraid. Be *very* afraid. #bigtech #ai #chatgpt
The future of #AI chatbots is here, folks. But honestly, it’s a question of when (not if) #Google does the same thing for the free tier.
👉🏾 #ChatGPT users are about to get hit with targeted ads https://techcrunch.com/2026/01/16/chatgpt-users-are-about-to-get-hit-with-targeted-ads/
Our approach to advertising and expanding access to ChatGPT
https://openai.com/index/our-approach-to-advertising-and-expanding-access/
#HackerNews #advertising #access #ChatGPT #OpenAI #innovation