After the success of China's DeepSeek AI, ChatGPT developer OpenAI will release its own 'open' AI models that allow users to customize and run systems themselves. https://www.japantimes.co.jp/business/2025/08/06/tech/openai-release-models-deepseek/?utm_medium=Social&utm_source=mastodon #business #tech #openai #ai #chatgpt #samaltman #deepseek

Private chats findable on Google: OpenAI pulls feature

A misunderstood feature made private chats publicly discoverable, in some cases with names attached. OpenAI is now stepping in.

https://www.heise.de/news/Private-Chats-bei-Google-auffindbar-OpenAI-nimmt-Funktion-zurueck-10508123.html?wt_mc=sm.red.ho.mastodon.mastodon.md_beitraege.md_beitraege&utm_source=mastodon

#ChatGPT #IT #OpenAI #Suchmaschine #news

#Anthropic revoked #OpenAI’s access to its #Claude API due to OpenAI violating its terms of service by using Claude to train competing AI models. This move comes as OpenAI prepares to release #GPT5, a new AI model rumoured to be better at #coding. Anthropic stated it will continue to allow OpenAI API access for benchmarking and safety evaluations. https://www.wired.com/story/anthropic-revokes-openais-access-to-claude/?eicker.news #tech #media #news

🔥 IT BEGINS

Anthropic Revokes OpenAI's Access to Claude

「 According to Anthropic’s commercial terms of service, customers are barred from using the service to “build a competing product or service, including to train competing AI models” or “reverse engineer or duplicate” the services 」

web.archive.org/web/2025080123

archive.ph/ZnGMu


@giacomo

No compression algorithm can decompress your file into a poem about the complicated romantic relationship between a nematode and a rosemary bush.

Beyond "lossiness", it's the user-provided context and the prediction based on that context and their own generated text that makes these models functionally useful.

Predictive text generation algorithms are not a "weapon of oppression". They are being instrumentalized to augment power, but that is not quite the same thing.

@eloquence@social.coop
No compression algorithm can decompress your file into a poem about the complicated romantic relationship between a nematode and a rosemary bush.
Why not?

If the original file contained poems about complicated romantic relationships, and texts about rosemary bushes and about nematodes, a section of the compressed high-dimensional matrix of frequencies can be extracted to maximise its statistical correlation with the prompt vectors.

That's exactly what happens in any #LLM: given a lossy compression of a high-dimensional frequency matrix, the software traverses and decompresses unrelated fragments of the source texts according to their statistical correlation with the prompt (and the previously generated tokens).
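The mechanism described above can be illustrated with a deliberately tiny sketch: "compress" a corpus into nothing but a table of bigram frequencies, then generate text by repeatedly sampling continuations in proportion to those stored frequencies. The corpus, the `generate` function, and all names here are hypothetical toy stand-ins, not any real LLM implementation; a real model uses learned high-dimensional representations rather than a literal bigram table, but the traversal-by-statistical-correlation idea is the same.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the "source texts". The model below keeps
# only token-pair frequencies, not the texts themselves: a lossy compression.
corpus = (
    "the nematode loved the rosemary bush . "
    "the rosemary bush ignored the nematode . "
    "the poem described a complicated romantic relationship ."
).split()

# "Compress" the corpus into a frequency matrix of bigrams.
freq = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    freq[a][b] += 1

def generate(prompt_word, n=6, seed=0):
    """Traverse the frequency table, at each step sampling a continuation
    of the last emitted token in proportion to its stored frequency."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(n):
        continuations = freq.get(out[-1])
        if not continuations:
            break  # no recorded continuation: the traversal dead-ends
        words = list(continuations)
        weights = [continuations[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("nematode"))
```

Even this toy can emit sequences that never appeared in the corpus, by stitching together fragments that are merely statistically adjacent, which is the point being made about plausibility versus correctness.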
Beyond "lossiness", it's the user-provided context and the prediction based on that context and their own generated text that makes these models functionally useful.
Plausible, not useful.

You can use them if all you need is to fool people about who (or what) they are talking to, for example to spread disinformation or win the imitation game, but that's all.

Whenever you need something correct, not just plausible to an uninformed human, they stop being useful. In fact, a recent #OpenAI study reported an error rate above 90% for its LLMs on simple questions with verifiable answers.
As always, OpenAI is trying to set up a benchmark it can easily game, but the numbers are clear: even on basic tasks, #GenAI is totally unreliable.
Predictive text generation algorithms are not a "weapon of oppression".
The technology in itself (the algorithms described in papers and textbooks) is not a "weapon of oppression", but all real-world models are, no matter how you use them.

So if you build your LLM from scratch, properly collecting and selecting all of the source texts, you might get a model that is not harmful to people.
But if you hope to just use models from Hugging Face "for the greater good", you are fooling yourself.

Just got asked to sign an open letter to OpenAI asking for transparency on their announced restructuring. You’ll hear about it soon enough, no doubt, given some “big names” are attached to it.

While I agree with the premise of the letter, there’s no way I can sign it after seeing the level of cluelessness and perpetuation of harmful assumptions regurgitated in it. It’s depressing to see those supposedly pushing back against Big Tech’s AI grift having themselves accepted the core myths of this bullshit.

It starts:

“We write to you as the legal beneficiaries of your charitable mission.”

What charitable mission? Are you idiots? You’re talking to a ~$4B organisation.

“Your current structure includes important safeguards designed to ensure your technology serves humanity rather than merely generating profit…”

Oh, really, that’s news to me. I guess I must be missing how their current bullshit serves humanity.

“However, you have proposed a significant corporate restructuring that appears to weaken or eliminate many of these protections, and the public deserves to know the details.”

Ah, so they’re removing the smoke and mirrors, is that it?

Then a bunch of questions, including:

“Does OpenAI plan to commercialize AGI once developed?”

You do understand that there is NO path that leads from today’s mass bullshit factories that are LLMs to AGI, right? None. Zero. Nada. You’re playing right into their hands by taking this as given.

“We believe your response will help restore trust and establish whether OpenAI remains committed to its founding principles, or whether it is prioritizing private interests over its public mission.”

What trust? You trusted these assholes to begin with why exactly? Was it the asshat billionaire founder? How bloody naïve can you be?

“The stakes could not be higher. The decisions you make about governance, profit distribution, and accountability will shape not only OpenAI's future but also the future of society at large.”

Please, sirs, be kind.

No, fuck you. Why are we pleading? Burn this shit to the ground and dance on its smoldering remains.

“We look forward to your response and to working together to ensure AGI truly benefits everyone.”

🤦‍♂️

Yeah, no, I won’t be signing this. If this is what “resistance” looks like, we’re well and truly fucked.


🫧 OpenAI Is Quietly Trying to Get More Money as It Burns Through Cash at a Staggering Pace

「 There's just one wrinkle: the funds roll out in two tranches, with $10 billion disbursed immediately, and $30 billion only available if OpenAI restructures to a for-profit company. If it doesn't, the Wall Street Journal reported back in March, SoftBank has the option to withhold $20 billion, cutting the historic funding round off at the knees 」

https://futurism.com/openai-money-softbank-investors

#openai #ai #aihype #softbank