¡Abbie!
alcinnz
¡Abbie! and 1 other boosted

A valid HTML zip bomb, https://ache.one/notes/html_zip_bomb by @ache

The article shows how to create an HTML zip bomb for AI crawlers not respecting the robots.txt file.

A zip bomb is a small compressed file (say, 10 MiB) that expands to something huge (say, 10 GiB). An AI crawler that decompresses it will see all of its memory consumed, possibly crashing.

That’s an effective way to counter-attack disrespectful AI crawlers.

#html #ZipBomb #ai

⁂ Article

The #dotcons drained the VC swamp and now guzzle from the mainstream trough of corporate socialism

In the USA #techshit mess, #OpenAI is busy wrapping itself in the stars and stripes, pushing the fantasy of “democratic AI” while the democracy fig leaf is collapsing. This isn’t democracy – it’s branding. It’s normal Silicon Valley practice: laundering greed through American imperialism.

The #nastyfew, Sam Altman, Larry Ellison, Masayoshi Son, and the Saudis will do fine. They’ll gorge on taxpayer subsidies, pouring billions down the drain, politics as normal […]


"The per-prompt energy impact is equivalent to watching TV for less than nine seconds."

At long last, some hard data on generative AI power use (and water use & CO2 emissions). Google research paper also says energy use per prompt dropped 33x in last 12 months: https://cloud.google.com/blog/products/infrastructure/measuring-the-environmental-impact-of-ai-inference
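The TV comparison checks out as back-of-envelope arithmetic, assuming the ~0.24 Wh per median text prompt reported in the paper and a nominal ~100 W television (both figures are assumptions here, not quotes from this post):

```python
# Energy per prompt expressed as seconds of TV watching.
prompt_wh = 0.24   # assumed: ~0.24 Wh per median text prompt
tv_watts = 100     # assumed: ~100 W television
seconds = prompt_wh / tv_watts * 3600  # Wh -> watt-seconds -> seconds
print(round(seconds, 1))  # ≈ 8.6 seconds, i.e. "less than nine seconds"
```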

#AI

The Third Wave of Government Disruption

When printed telephone directories first started including blue pages for government offices in the 1950s and 60s, they created a new expectation: citizens should be able to reach their government by phone. The Internet revolution of the 1990s raised these expectations exponentially—if you could bank online and shop on Amazon, why couldn’t you renew your license or apply for benefits with the same ease?

Now, with 34% of U.S. adults having used ChatGPT—roughly double the share since 2023—we’re witnessing the third major wave of technology-driven transformation in how citizens expect to interact with their government. And once again, we’re watching the same pattern unfold: rapid consumer adoption creating new expectations, followed by delayed government adaptation, followed (potentially) by a long period of playing catch-up.

The difference this time? The stakes are higher, the pace is faster, and the consequences of falling behind may be more severe than ever.

Three Waves of Technological Disruption

Each wave of technological change outlined above has followed a similar trajectory, but with accelerating speed:

The telephone era unfolded over decades. Telephone adoption began in the late 1870s as an expensive luxury for the wealthy, with monthly costs of $20-40 (equivalent to $500-1,000 today). It took until the mid-20th century for phones to become commonplace in households. Governments had time to establish call centers and phone-based services without fundamentally redesigning how they operated. The pace was manageable—measured in decades, not in years or months.

The Internet era compressed this timeline to years. Internet users exploded from 45 million in 1996 to 407 million by 2000—a ninefold increase in just four years. Citizens who could accomplish complex tasks online in minutes naturally expected similar efficiency from their government. But while private companies were redesigning their entire business models around digital capabilities, governments largely treated the Internet as a new channel for existing processes.

The AI era is compressing change to months. Generative AI has been adopted at a faster pace than PCs or the internet, with breakthroughs moving from laboratory to widespread deployment in timeframes that would have seemed impossible just a few years ago.

The Structural Challenge: Democracy vs. Speed

As I’ve written about extensively before, governments aren’t slow at technology adoption by accident—they’re designed that way. The very features that are intended to make democratic government more trustworthy and accountable also make it structurally unsuited for rapid technological change.

The classic example is government procurement. The average technology buying cycle for government is 22 months compared to 6-7 months in the private sector. These delays aren’t the result of bureaucratic incompetence—they’re the deliberate result of requirements designed to help ensure fairness, transparency, and accountability. Public bid posting periods, vendor diversity requirements, the acquisition of performance bonds, and detailed financial scrutiny all represent important values imbued in public procurement processes. But they can also add months to timelines in a world where technology solutions can have shorter development cycles than government procurement processes.

The same pattern can be seen across government operations. Budget processes designed to prevent waste and enable legislative oversight create “use it or lose it” dynamics that discourage efficiency innovations. Civil service systems meant to prevent patronage and ensure merit-based hiring create lengthy processes that struggle to compete for scarce technical talent against private companies that can hire faster and pay more.

These aren’t bugs—they’re features. The transparency requirements, deliberative processes, and risk aversion that can slow down government technology adoption exist to uphold fundamental values and principles. The design of these processes is deliberate. The problem is that these principles are increasingly in tension with the pace of technological change.

The Compounding Crisis

This structural mismatch becomes more problematic with each technological wave because the pace of change keeps accelerating while government processes remain largely constant. What was a manageable gap during the telephone era became a significant lag during the Internet era and is becoming an existential challenge in the AI era.

The Internet wave provides a sobering lesson in the cost of delayed adaptation. Despite clear evidence throughout the 1990s that digital services were transforming how people expected to interact with institutions, most governments were slow to recognize that the rapid evolution of the Internet was changing people’s expectations for how they communicated and interacted with their government. Two decades later, we’re still playing catch-up, retrofitting digital services onto processes designed for paper-based workflows, and struggling to make basic websites and online services accessible.

The consequences aren’t just about inefficiency; they are about the loss of public trust. When citizens can accomplish complex tasks seamlessly with private companies but struggle with basic government services, the contrast erodes confidence in government competence and accountability.

The Stakes Are Higher

The AI wave presents an even greater challenge because it doesn’t just change how governments deliver services—it potentially changes how governments make decisions. Unlike previous technological waves that primarily affected operational efficiency, AI touches the core of democratic governance: the exercise of judgment and discretion in applying laws and policies to individual circumstances.

The stakes couldn’t be higher. As an example, when Spain implemented an algorithmic system to assess domestic violence risk, the software became “so woven into law enforcement that it is hard to know where its recommendations end and human decision-making begins.” Tragically, people assessed as low-risk still became victims of violence.

This use of AI in decision making processes highlights a troubling pattern identified by researchers like Virginia Eubanks in her analysis of Pennsylvania’s Allegheny Family Screening Tool. While AI systems are meant to “support, not supplant, human decision-making,” in practice “the algorithm seems to be training the intake workers.” Staff begin to defer to algorithmic judgments, believing the model is less fallible than human screeners.

The “human-in-the-loop” approach—where people supposedly maintain oversight of AI decisions—may not be sufficient protection against the human tendency to cede authority to software. When New York City’s AI chatbot tells businesses they can take workers’ tips and that landlords can discriminate based on source of income, both illegal, it demonstrates how AI systems can undermine the rule of law even in seemingly routine interactions.

The acceleration of AI adoption in government is happening precisely in contexts where lives hang in the balance—decisions about protection from violence, child welfare, emergency response, and access to vital resources. Unlike the more gradual telephone and Internet adoption cycles that gave governments some time—limited as it was—to learn and adapt, AI deployment can sometimes happen without proper safeguards, training, or accountability mechanisms in place.

Getting It Right This Time

The lesson from previous technological waves is clear: the cost of delayed or unorganized adaptation grows exponentially. Governments that fell behind during the Internet era spent decades and billions of dollars trying to catch up, often with mixed results. With AI moving even faster and touching more fundamental aspects of governance, the penalty for falling behind again could be severe.

But speed without safeguards is equally dangerous. The challenge isn’t choosing between moving fast and maintaining accountability—it’s developing the capacity to do both simultaneously. This means building safeguards into the adoption process from the start, not retrofitting them later. It means creating review mechanisms that can operate at the speed of technology development, not the traditional pace of government oversight.

The solution requires adapting democratic processes for technological speed without abandoning democratic values. This means creating “fast lanes” for certain types of technological adoption while maintaining rigorous oversight. It means developing rapid-response teams for AI evaluation that include technical experts, legal reviewers, and community representatives. It means investing in government workforce development so staff can properly assess and oversee AI systems rather than simply defer to them.

Most importantly, it means recognizing that the structural challenges governments face with technology adoption aren’t bugs in the system—they’re features designed to serve important functions. The transparency requirements, deliberative processes, and accountability mechanisms that slow government down exist for a reason. The question isn’t how to eliminate these constraints, but how to redesign them so they can operate effectively when technological change happens faster than traditional democratic processes were designed to accommodate.

As this historical pattern of technology adoption has advanced, governments have played catch-up before, each time with higher stakes and less time to adapt. Given the pace and implications of AI adoption in government services, we can’t afford to play catch up again.

#technology #AI #ChatGPT #artificialIntelligence #business #government #GenAI #Procurement

maxlath
maxlath boosted

This is one of the best things I read this summer:

"In the Future All Food Will Be Cooked in a Microwave, and if You Can’t Deal With That Then You Need to Get Out of the Kitchen." Thank you for writing it so brilliantly @colincornaby 🙏

https://www.colincornaby.me/2025/08/in-the-future-all-food-will-be-cooked-in-a-microwave-and-if-you-cant-deal-with-that-then-you-need-to-get-out-of-the-kitchen/

#AI #aibubble #hype


Alex Akselrod
Charlie Stross
Alex Akselrod and 1 other boosted

Books will soon be obsolete in school

https://shkspr.mobi/blog/2025/08/books-will-soon-be-obsolete-in-school/

I recently had a chance to ask a question to one of the top AI people. At a Q&A session, I raised my hand and asked simply "What is your estimation of the future educational value of AI?"

The response was swift and utterly devastating for those laggards who want to hold back progress. The AI guy said:

Books will soon be obsolete in schools. Scholars will be instructed through AI. It is possible to teach every branch of human knowledge with AI. Our school system will be completely changed inside of ten years.

We have been working for some time on educational AI. It proves conclusively the worth of AI in chemistry, physics and other branches of study, making the scientific truths, difficult to understand from text books, plain and clear to children.

That's it. We can throw away all those outdated paper books. Children will learn directly from an AI which, coincidentally, is sold by the company. We can trust their studies on such matters and be assured that they have no ulterior motive.

But, ah my friends, I have told a slight untruth. I didn't ask that question. Frederick James Smith asked the question to Thomas Edison in 1913. The question was about the new and exciting world of motion pictures.

Scan of old newsprint. "What is your estimation of the future educational value of pictures?" I asked. "Books," declared the inventor with decision, "will soon be obsolete in the public schools. Scholars will be instructed through the eye. It is possible to teach every branch of human knowledge with the motion picture. Our school system will be completely changed inside of ten years. We have been working for some time on the school pictures. We have been studying and reproducing the life of the fly, mosquito, silk weaving moth, brown moth, gypsy moth, butterflies, scale and various other insects, as well as chemical crystallization. It proves conclusively the worth of motion pictures in chemistry, physics and other branches of study, making the scientific truths, difficult to understand from text books, plain and clear to children."

You can read the full exchange from The New York Dramatic Mirror.

A hundred-plus years since the great and humble Edison made his prediction and… books are still used in schools! Those of us of a certain age remember a TV occasionally being wheeled in for one lesson or another. Today's kids watch more video content than ever - of mixed quality - but still rely on books and teachers.

Videos are good for some aspects of learning, but woefully inadequate for others.

I'm not trying to say that just because one technology failed, so will all others. But it is amazing how AI-proponents are recycling the same arguments with basically the same timescale. Will AI be part of education? Sure! Just like videos, pocket computers, the Metaverse, and performance enhancing drugs.

Will it be the only tool ever needed for education? I doubt it. Will vested interests and uncritical journalists continue to boost it? You don't need to have read many history books to work out the answer.

Further reading: In the Future All Food Will Be Cooked in a Microwave, and if You Can’t Deal With That Then You Need to Get Out of the Kitchen

#AI #education #history #schools


I really like these 2 posts from @conradirwin.bsky.social‬ and @jimniels. They manage to put in words one of my struggles with AI coding tools.

The distinguishing factor of effective engineers is their ability to build and maintain clear mental models.

Why LLMs Can’t Really Build Software

  • With LLMs, you stuff more and more information into context until it (hopefully) has enough to generate a solution.
  • With your brain, you tweak, revise, or simplify your mental model more and more until the solution presents itself.

One adds information — complexity you might even say — to solve a problem. The other eliminates it.

Just a Little More Context Bro, I Promise, and It’ll Fix Everything

I find this is especially true when maintaining big and complex codebases that I know well. When tackling a new task, I don’t start from scratch. I’ve built mental models, my view was challenged, and I’ve updated my model as a result. And again. And again. I went through hundreds of engineering loops with that codebase. I can put some of that baggage into words and feed it to the LLM, but the loop stops there. LLMs cannot iterate through a loop on their own. They cannot find an optimal implementation on their own.

At least for now.

#AI #development
