In big news overnight, #Anthropic have made a major change to their user data retention and training policy - giving customers until September 28th to opt out, or have their chats, code sessions and other artefacts retained, and used for training, for up to five years.

This is a major departure from their previous privacy-first stance.

But what's really behind this change? As Connie Loizos points out in this @Techcrunch article, it's all about the #data.

As I've spoken about recently, we've passed #PeakToken - the point in history at which the maximum amount of authentic, human-generated data was available. The internet is now polluted with synthetically-generated #AIslop. If you're an #AI company scraping the web for new data to train on, that's bad news, because you scoop up the AI slop too. Models trained on AI slop are likely to suffer #ModelCollapse - like a photocopy of a photocopy, quality degrades with each generation.
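The photocopy-of-a-photocopy dynamic can be sketched in a few lines. This is a toy illustration only - not Anthropic's actual pipeline - where each "model generation" is trained by resampling the previous generation's synthetic output, so rare items drop out and never return:

```python
# Toy sketch of model collapse: each generation "trains" by sampling
# (with replacement) from the previous generation's output. Anything
# not sampled is lost forever, so diversity shrinks monotonically.
import random

random.seed(0)

corpus = list(range(1000))        # generation 0: 1000 distinct "human" tokens
diversity = [len(set(corpus))]    # track how many distinct tokens survive

for generation in range(50):
    # Train on purely synthetic data from the previous generation.
    corpus = [random.choice(corpus) for _ in range(1000)]
    diversity.append(len(set(corpus)))

print(f"distinct tokens: gen 0 = {diversity[0]}, gen 50 = {diversity[-1]}")
```

Because each generation can only contain tokens the previous one had, the distinct-token count never rises, and in practice it drops steeply - the toy analogue of a model forgetting the long tail of authentic human data.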

Anthropic's play here is all about the #TokenCrisis - the voracious appetite for new, authentic, human-generated data to train on - part of a broader phenomenon I've termed the #TokenWars.

As new data becomes scarcer and more valuable, it will be more sought after and contested. We're still in the early days of the #TokenWars, and we should expect to see more moves like this to secure more data for AI training.

https://techcrunch.com/2025/08/28/anthropic-users-face-a-new-choice-opt-out-or-share-your-data-for-ai-training/

@KathyReid I dare to predict that this ultra-wasteful industry will be remembered as a sophomoric project.

We understand the nature of human reasoning less now than scholars did before GPT-style training came along.

People who believed that intelligence is in form - that semantics can be turned into syntax - were once a marginal, derided few Carnap hold-outs; now that view is the norm. It's still not possible, and it still leads to Orwellian absurdity and decay.

Relax and smile.

#aislop #philosophy
@Techcrunch