Discussion
Cory Doctorow boosted
Miguel Afonso Caetano
@remixtures@tldr.nettime.org · 2 weeks ago

"Claude’s update relies on a striking pop-up with a large, black "Accept" button. The data sharing toggle is tucked away, switched on by default, and framed positively ("You can help..."). A faint "Not now" button and hard-to-find instructions on changing the setting later complete the manipulative design.

These interface tricks, known as dark patterns, are considered unlawful under the General Data Protection Regulation (GDPR) and by the European Court of Justice when used to obtain consent for data processing. Pre-checked boxes do not count as valid consent under these rules.

The European Data Protection Board (EDPB) has also stressed in its guidelines on deceptive design patterns that consent must be freely given, informed, and unambiguous. Claude’s current design clearly fails to meet these standards, making it likely that Anthropic will soon draw the attention of privacy regulators."

https://the-decoder.com/anthropic-uses-a-questionable-dark-pattern-to-obtain-user-consent-for-ai-data-use-in-claude/

#EU #AI #GenerativeAI #Anthropic #LLMs #Chatbots #Claude #DarkPatterns #Privacy #DataProtection
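As a rough illustration of the GDPR point quoted above, consent can be modeled so that only an explicit, affirmative user action ever enables data sharing, and a pre-set default never counts. This is only a minimal sketch with hypothetical names, not Anthropic's actual implementation.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ConsentRecord:
    # None means the user has never answered.
    # Only an explicit True, set by a user action, authorizes processing.
    training_data_sharing: Optional[bool] = None

def record_user_choice(record: ConsentRecord, user_clicked_allow: bool) -> None:
    """Store the user's explicit choice; called only from the dialog handler."""
    record.training_data_sharing = user_clicked_allow

def may_use_for_training(record: ConsentRecord) -> bool:
    """A silent or pre-checked default (None) is treated as 'no consent'."""
    return record.training_data_sharing is True

# A freshly created record grants nothing, unlike a toggle that is on by default.
assert may_use_for_training(ConsentRecord()) is False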

theruran 💻 🌐 :cereal_killer: boosted
Miguel Afonso Caetano
@remixtures@tldr.nettime.org · 2 weeks ago

"My gut instinct is that this is an industry-wide problem. Perplexity spent 164% of its revenue in 2024 between AWS, Anthropic and OpenAI. And one abstraction higher (as I'll get into), OpenAI spent 50% of its revenue on inference compute costs alone, and 75% of its revenue on training compute too (and ended up spending $9 billion to lose $5 billion). Yes, those numbers add up to more than 100%, that's my god damn point.

Large Language Models are too expensive, to the point that anybody funding an "AI startup" is effectively sending that money to Anthropic or OpenAI, who then immediately send that money to Amazon, Google or Microsoft, who are yet to show that they make any profit on selling it.

Please don't waste your breath saying "costs will come down." They haven't been, and they're not going to.

Despite categorically wrong boosters claiming otherwise, the cost of inference — everything that happens from when you put a prompt in to generate an output from a model — is increasing, in part thanks to the token-heavy generations necessary for "reasoning" models to generate their outputs, and with reasoning being the only way to get "better" outputs, they're here to stay (and continue burning shit tons of tokens).

This has a very, very real consequence."

https://www.wheresyoured.at/why-everybody-is-losing-money-on-ai/

#AI #GenerativeAI #BusinessModels #LLMs #Chatbots #AIHype #AIBubble
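A back-of-the-envelope check of the figures quoted in the post (the numbers are the post's own; the arithmetic below is only a sketch): spending $9 billion to lose $5 billion implies roughly $4 billion in revenue, so 50% on inference plus 75% on training puts compute alone at about 125% of revenue.

# Rough arithmetic using only the figures quoted in the post above.
spend_total = 9.0        # billions spent (as quoted)
net_loss = 5.0           # billions lost (as quoted)
revenue = spend_total - net_loss   # implied revenue: ~4.0

inference_cost = 0.50 * revenue    # "50% of its revenue on inference compute"
training_cost = 0.75 * revenue     # "75% of its revenue on training compute"
compute_share = (inference_cost + training_cost) / revenue

print(f"implied revenue: ${revenue:.1f}B")
print(f"compute spend:   ${inference_cost + training_cost:.1f}B "
      f"({compute_share:.0%} of revenue)")  # over 100%, which is the post's point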

Agaric Tech Collective boosted
Miguel Afonso Caetano
@remixtures@tldr.nettime.org · 2 weeks ago

"In addition to violation of data privacy, other risks are involved when psychotherapists consult LLMs on behalf of a client. Studies have found that although some specialized therapy bots can rival human-delivered interventions, advice from the likes of ChatGPT can cause more harm than good.

A recent Stanford University study, for example, found that chatbots can fuel delusions and psychopathy by blindly validating a user rather than challenging them, as well as suffer from biases and engage in sycophancy. The same flaws could make it risky for therapists to consult chatbots on behalf of their clients. They could, for example, baselessly validate a therapist’s hunch, or lead them down the wrong path.

Aguilera says he has played around with tools like ChatGPT while teaching mental health trainees, such as by entering hypothetical symptoms and asking the AI chatbot to make a diagnosis. The tool will produce lots of possible conditions, but it’s rather thin in its analysis, he says. The American Counseling Association recommends that AI not be used for mental health diagnosis at present.

A study published in 2024 of an earlier version of ChatGPT similarly found it was too vague and general to be truly useful in diagnosis or devising treatment plans, and it was heavily biased toward suggesting people seek cognitive behavioral therapy as opposed to other types of therapy that might be more suitable."

https://www.technologyreview.com/2025/09/02/1122871/therapists-using-chatgpt-secretly/

#AI #GenerativeAI #Chatbots #ChatGPT #LLMs #MentalHealth #Therapy

Miguel Afonso Caetano
@remixtures@tldr.nettime.org · 2 weeks ago

"Asked one major industry analyst: ‘Who is going to be motivated to adopt if they know the intent is to replace them?’

Nearly one in three (31%) company employees say they are “sabotaging their company’s generative AI strategy,” according to a survey from AI vendor Writer — a number that jumps to 41% for millennial and Gen Z employees.

The survey also found that “one out of ten workers say they’re tampering with performance metrics to make it appear AI is underperforming, intentionally generating low-quality outputs, refusing to use generative AI tools or outputs, or refusing to take generative AI training.”

Other activities lumped in as sabotage include entering company information into non-approved gen AI tools (27%), using non-approved gen AI tools (20%), and knowing of an AI security leak without reporting it (16%)."

https://www.cio.com/article/4022953/31-of-employees-are-sabotaging-your-gen-ai-strategy.html

#AI #GenerativeAI #LLMs #Chatbots #Automation

J. Nathan Matias 🦣 boosted
Miguel Afonso Caetano
@remixtures@tldr.nettime.org · last month

"For three weeks in May, the fate of the world rested on the shoulders of a corporate recruiter on the outskirts of Toronto. Allan Brooks, 47, had discovered a novel mathematical formula, one that could take down the internet and power inventions like a force-field vest and a levitation beam.

Or so he believed.

Mr. Brooks, who had no history of mental illness, embraced this fantastical scenario during conversations with ChatGPT that spanned 300 hours over 21 days. He is one of a growing number of people who are having persuasive, delusional conversations with generative A.I. chatbots that have led to institutionalization, divorce and death.

Mr. Brooks is aware of how incredible his journey sounds. He had doubts while it was happening and asked the chatbot more than 50 times for a reality check. Each time, ChatGPT reassured him that it was real. Eventually, he broke free of the delusion — but with a deep sense of betrayal, a feeling he tried to explain to the chatbot."

https://www.nytimes.com/2025/08/08/technology/ai-chatbots-delusions-chatgpt.html

#AI #GenerativeAI #ChatGPT #Delusions #MentalHealth #Hallucinations #Chatbots

Miguel Afonso Caetano
@remixtures@tldr.nettime.org · 4 months ago

"Let’s not forget that the industry building AI Assistants has already made billions of dollars honing the targeted advertising business model. They built their empires by drawing our attention, collecting our data, inferring our interests, and selling access to us.

AI Assistants supercharge this problem. First because they access and process incredibly intimate information, and second because the computing power they require to handle certain tasks is likely too immense for a personal device. This means that very personal data, including data about other people that exists on your phone, might leave your device to be processed on their servers. This opens the door to reuse and misuse. If you want your Assistant to work seamlessly for you across all your devices, then it’s also likely companies will solve that issue by offering cloud-enabled synchronisation, or more likely, cloud processing.

Once data has left your device, it’s incredibly hard to get companies to be clear about where it ends up and what it will be used for. The companies may use your data to train their systems, and could allow their staff and ‘trusted service providers’ to access your data for reasons like improving model performance. It’s unlikely you had all of this in mind when you asked your Assistant a simple question.

This is why it’s so important that we demand that our data be processed on our devices as much as possible, and used only for limited and specific purposes we are aware of, and have consented to. Companies must provide clear and continuous information about where queries are processed (locally or in the cloud), what data has been shared for that to happen, and what will happen to that data next."

https://privacyinternational.org/news-analysis/5591/are-ai-assistants-built-us-or-exploit-us-and-other-questions-ai-industry

#AI #GenerativeAI #LLMs #Chatbots #AIAssistants #Privacy #AdTech #DataProtection #AdTargeting
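Read in engineering terms, the demand quoted above amounts to: route each assistant query on-device by default, fall back to the cloud only with explicit consent, and always tell the user where processing happened. The following is a minimal, hypothetical sketch of that routing rule; none of the names reflect any particular vendor's API.

from dataclasses import dataclass

@dataclass
class QueryResult:
    answer: str
    processed_where: str   # surfaced to the user, per the "clear and continuous information" demand

def run_locally(query: str) -> str:
    # Placeholder for an on-device model, assumed to exist for this sketch.
    return f"[local answer to: {query}]"

def run_in_cloud(query: str) -> str:
    # Placeholder for a remote call; only reached after explicit consent.
    return f"[cloud answer to: {query}]"

def answer(query: str, fits_on_device: bool, user_consented_to_cloud: bool) -> QueryResult:
    """Prefer on-device processing; never send data off-device without consent."""
    if fits_on_device:
        return QueryResult(run_locally(query), processed_where="on-device")
    if user_consented_to_cloud:
        return QueryResult(run_in_cloud(query), processed_where="cloud")
    return QueryResult("This request needs cloud processing; nothing was sent.",
                       processed_where="on-device (refused)")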

Ulrike Hahn boosted
Miguel Afonso Caetano
@remixtures@tldr.nettime.org · 5 months ago

"Asking scientists to identify a paradigm shift, especially in real time, can be tricky. After all, truly ground-shifting updates in knowledge may take decades to unfold. But you don’t necessarily have to invoke the P-word to acknowledge that one field in particular — natural language processing, or NLP — has changed. A lot.

The goal of natural language processing is right there on the tin: making the unruliness of human language (the “natural” part) tractable by computers (the “processing” part). A blend of engineering and science that dates back to the 1940s, NLP gave Stephen Hawking a voice, Siri a brain and social media companies another way to target us with ads. It was also ground zero for the emergence of large language models — a technology that NLP helped to invent but whose explosive growth and transformative power still managed to take many people in the field entirely by surprise.

To put it another way: In 2019, Quanta reported on a then-groundbreaking NLP system called BERT without once using the phrase “large language model.” A mere five and a half years later, LLMs are everywhere, igniting discovery, disruption and debate in whatever scientific community they touch. But the one they touched first — for better, worse and everything in between — was natural language processing. What did that impact feel like to the people experiencing it firsthand?

Quanta interviewed 19 current and former NLP researchers to tell that story. From experts to students, tenured academics to startup founders, they describe a series of moments — dawning realizations, elated encounters and at least one “existential crisis” — that changed their world. And ours."

https://www.quantamagazine.org/when-chatgpt-broke-an-entire-field-an-oral-history-20250430/

#AI #GenerativeAI #ChatGPT #NLP #OralHistory #LLMs #Chatbots
