OpenAI demonstrates with GPT-5 the control of a physical biology lab at Ginkgo Bioworks. The model optimized parameters for cell-free protein synthesis across 36,000 autonomous experiments. Unlike Bayesian optimization, the AI used prior biological knowledge to navigate the search space more efficiently. Production costs dropped by 40 percent. #OpenAI #GPT5 #GinkgoBioworks
https://www.all-ai.de/news/beitrage2026/gpt5-labor-biologie
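The core idea of the post, that prior knowledge shrinks the search space and so needs fewer experiments, can be sketched with a toy example. Everything below is illustrative: the `protein_yield` response surface and the parameter ranges are made up, not the actual Ginkgo/OpenAI setup.

```python
# Toy sketch: searching a made-up "yield" surface over two synthesis
# parameters, once over a wide uninformed range and once over a narrower
# range that stands in for biologically plausible prior knowledge.
def protein_yield(temp, mg_conc):
    # Hypothetical response surface peaking at temp=30, mg_conc=12.
    return -((temp - 30) ** 2 + (mg_conc - 12) ** 2)

def grid_search(temp_range, mg_range, n=10):
    """Evaluate an n-by-n grid and return the best yield found."""
    best = float("-inf")
    for i in range(n):
        for j in range(n):
            t = temp_range[0] + i * (temp_range[1] - temp_range[0]) / (n - 1)
            m = mg_range[0] + j * (mg_range[1] - mg_range[0]) / (n - 1)
            best = max(best, protein_yield(t, m))
    return best

# Same experimental budget (100 evaluations each), different search spaces.
naive = grid_search((0, 100), (0, 50))      # uninformed, wide ranges
informed = grid_search((25, 35), (8, 16))   # prior-constrained ranges
```

With the identical budget, the prior-constrained search lands much closer to the optimum, which is the qualitative point the post makes about GPT-5 versus plain Bayesian optimization.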
GPT-5 is programmed to give an answer even when it has no information (screenshot of the "reasoning" log in Copilot). In other words, it is programmed to lie.
For a painting, it just invented a catalog for me and then withdrew the information ("retracted").
I think these are interesting examples for teaching: walk through these steps with students and see how quickly the chatbot admits to lying.
I ported JustHTML from Python to JavaScript with Codex CLI and GPT-5.2 in hours
https://simonwillison.net/2025/Dec/15/porting-justhtml/
#HackerNews #JustHTML #Codex #GPT5 #Programming #JavaScript #Python
Building more with GPT-5.1-Codex-Max
GPT-5.1: A smarter, more conversational ChatGPT
https://openai.com/index/gpt-5-1/
#HackerNews #GPT5.1 #ChatGPT #AI #Conversational #Technology #Innovation
@Roundtrip @mjd this is what I love about search engines: you put in a prompt and it gives you ✨ only clickable links ✨ as the response! and you don't need to do any prompt engineering. using Kagi, you can also exclude sites from all your results
My prompt experiments with Claude (and ChatGPT-5) have been more to get their reports to ‘show their work’ by including clickable links…
Here’s a thread on getting a research report to help fix broken links in an old blog post, and dive deeper to find original sourced Neil Armstrong quotes in a NASA debrief transcript I knew must exist, but couldn’t find. https://federate.social/@Roundtrip/115062497251838137
Reverse engineering Codex CLI to get GPT-5-Codex-Mini to draw me a pelican
https://simonwillison.net/2025/Nov/9/gpt-5-codex-mini/
#HackerNews #ReverseEngineering #CodexCLI #GPT5 #CodexMini #Pelican #AIArt
🗣️ OpenAI Announces That It's Making GPT-5 More Sycophantic After User Backlash
「 Those who had become accustomed to the "sycophantic" tone of GPT-4o, which sometimes lavished praise even on users' terrible ideas, were taken aback by GPT-5's "cold" brashness and short answers, highlighting just how emotionally attached many of them had become 」
The University of Rhode Island's AI lab estimates that GPT-5 averages just over 18 Wh per query. Putting all of #ChatGPT's reported 2.5B requests a day through the model could therefore draw as much as 45 GWh per day, the equivalent output of two to three nuclear power reactors, enough to power a small country.
https://www.tomshardware.com/tech-industry/artificial-intelligence/chatgpt-5-power-consumption-could-be-as-much-as-eight-times-higher-than-gpt-4-research-institute-estimates-medium-sized-gpt-5-response-can-consume-up-to-40-watt-hours-of-electricity
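The headline arithmetic checks out directly from the two figures in the post. The 1 GW-per-reactor size used below is my own round-number assumption, not from the article:

```python
# Back-of-envelope check of the reported figures.
wh_per_query = 18          # estimated average energy per GPT-5 query (Wh)
queries_per_day = 2.5e9    # reported daily ChatGPT requests

daily_wh = wh_per_query * queries_per_day   # 4.5e10 Wh
daily_gwh = daily_wh / 1e9                  # 45 GWh per day

# Assuming a large ~1 GW reactor produces about 24 GWh over 24 hours,
# 45 GWh/day corresponds to roughly two reactors running continuously.
reactors = daily_gwh / 24
print(f"{daily_gwh:.0f} GWh/day, about {reactors:.0f} reactors")
```

Whether that lands at "two" or "three" reactors depends on the assumed reactor capacity, which is why the article gives a range.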
GPT-5 reaction amongst developers has been... mixed. In fact, one of the developers OpenAI featured in its launch-day promotions, Theo Browne, has done a complete 180 and now says he was wrong about GPT-5. Questions of coding quality aside, GPT-5 raises some interesting longer-term questions for devs: if AI can build things using just web standards, will that lead to less reliance on React frameworks? https://thenewstack.io/gpt-5-a-choose-your-own-adventure-for-frontend-developers/ #gpt5 #frontend
"My gloss is that GPT-5 had become something of an albatross around OpenAI’s neck. And at this particular juncture, not long after inking big deals with Softbank et al. and riding as high on its cultural and political trajectory as it’s likely to get (and perhaps seeing declining rates of progress on model improvement in the labs), a calculated decision was made to pull the trigger on releasing the long-awaited model. People were going to be disappointed no matter what; let them be disappointed now, while the wind is still at OpenAI’s back, and it can credibly make a claim to providing hyper-advanced worker automation.
I don’t think the GPT-5 flop ultimately matters all that much to most folks, and it can certainly be papered over well enough by a skilled salesman in an enterprise pitch meeting. Again, all this is clarifying: OpenAI is again centering workplace automation, while retreating from messianic AGI talk."
https://www.bloodinthemachine.com/p/gpt-5-is-a-joke-will-it-matter
openai put a deceptive graph... on the slide about how gpt-5 deceives users less than previous generations
the jokes write themselves at this point
#gpt5