
The Whitelee wind farm near Glasgow, the largest onshore wind farm in the UK and one of the largest in Europe, has a maximum generating capacity of 539 MW and covers an area of 55 km² (about the size of Manhattan).

Plans have been announced for an AI data centre of 550 MW, and this is one of five such sites planned in central Scotland. (1/3)

#FrugalComputing #GenAI

Let's assume this data centre actually draws 300 MW all the time; it would then consume 2.628 TWh/year. At a typical Water Usage Effectiveness of 0.3 l/kWh, it would need 788 thousand m³ of water per year for cooling.

The site for the data centre, Ravenscraig, is in Motherwell (near Glasgow), a town of 33,000 people. The households of such a town consume about 2 million m³ of water per year, so that data centre alone would consume roughly 40% of that. (3/3)
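
A quick back-of-the-envelope check of these numbers, as a sketch using only the figures stated above (300 MW continuous draw, a WUE of 0.3 l/kWh, and 2 million m³/year of household water use for the town):

# Rough check of the energy and water figures above (Python)
power_mw = 300                           # assumed continuous draw
hours_per_year = 24 * 365                # 8760 h
energy_kwh = power_mw * 1_000 * hours_per_year
print(f"Energy: {energy_kwh / 1e9:.3f} TWh/year")                 # ~2.628 TWh

wue_l_per_kwh = 0.3                      # Water Usage Effectiveness
water_m3 = energy_kwh * wue_l_per_kwh / 1_000                     # litres -> m³
print(f"Cooling water: {water_m3:,.0f} m³/year")                  # ~788,400 m³

household_m3 = 2_000_000                 # stated household use, town of 33,000
print(f"Share of household use: {water_m3 / household_m3:.0%}")   # ~39%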

#FrugalComputing #GenAI

The article calls the developing company, Apatura, a "renewable energy developer", but the reality is that they specialise in the land acquisition, design, planning, and operation of large-scale Battery Energy Storage Systems for hyperscale data centres. (2/3)

#FrugalComputing #GenAI

https://www.itpro.com/infrastructure/data-centres/plans-announced-to-resurrect-former-steelworks-as-a-green-data-center

Generative AI does not boost creativity the way its marketing slogan promises. A study from the MIT Media Lab, led by Nataliya Kosmyna, reveals that when it is used, neural connectivity drops by up to 55%.

Concern in the education community about harmful consequences for long-term learning. 🧵

We have compiled a selection of articles and key quotes analysing the study led by Nataliya Kosmyna, a researcher at the MIT Media Lab, which confirms many of the things we have always said about the adverse effects of delegating cognition to generative AI systems.
#education #genAI #ChatGPT

It seems to be thoughtfully designed (with a long article explaining the considerations), follows academic practices, and is just genuinely helpful. https://allenai.org/blog/paper-finder It builds on #SemanticScholar and verifies #GenAI output against an actual database of academic publications. 2/

This is obviously bad from #whatsapp, but also, the way the journalist describes what the chatbot does, as if it had intentions, is pretty bad too.

"It was the beginning of a bizarre exchange of the kind more and more people are having with AI systems, in which chatbots try to negotiate their way out of trouble, deflect attention from their mistakes and contradict themselves, all in an attempt to continue to appear useful."

No, the chatbot isn't "trying to negotiate", and is not "attempting to appear useful". It's a program that follows programming rules to output something that looks like English. It doesn't have desires or intentions, and it cannot lie because it doesn't know what truth is.

‘It’s terrifying’: WhatsApp AI helper mistakenly shares user’s number
https://www.theguardian.com/technology/2025/jun/18/whatsapp-ai-helper-mistakenly-shares-users-number?CMP=Share_AndroidApp_Other

#genAI #ChatBot #TheGuardian

Sorry for linking to Substack, but this one is so very good:
https://garymarcus.substack.com/p/a-knockout-blow-for-llms

A few excerpts:

Apple has a new paper; it’s pretty devastating to LLMs.

Whenever people ask me why I (contrary to widespread myth) actually like AI, and think that AI (though not GenAI) may ultimately be of great benefit to humanity, I invariably point to the advances in science and technology we might make if we could combine the causal reasoning abilities of our best scientists with the sheer compute power of modern digital computers.

What the Apple paper shows, most fundamentally, regardless of how you define AGI, is that LLMs are no substitute for good well-specified conventional algorithms. (They also can’t play chess as well as conventional algorithms, can’t fold proteins like special-purpose neurosymbolic hybrids, can’t run databases as well as conventional databases, etc.)

#AI #LLM #GenAI #Apple

Those in control of a #genai service can control the results. Recently, #openai turned down how much the machine, #chatgpt, should praise the user. Now, this study on political values shows they've turned up the right-wing values:
'Our findings reveal that while newer versions of ChatGPT consistently maintain values within the libertarian-left quadrant, there is a statistically significant rightward shift in political values over time.'

https://www.nature.com/articles/s41599-025-04465-z

The thing about this is, I worked for bosses in the 1990s who'd spend an afternoon goofing around with FileMaker Pro, and then tell their entire staff they made a "database" that had to be put into "production" ASAP. My belief then was that the boss person went golfing with one of their equally-uninformed boss buddies, heard a bunch of tall tales about some software or another, and then had to mimic what their buddy did so they could brag next time.

This looks like that, except the outputs are creepier.

https://defector.com/henry-blodget-invents-hires-sexually-harasses-blogs-about-nonexistent-ai-subordinate

#AI #GenAI #GenerativeAI #LaborDisciplineAsAService #bosses