The Third Wave of Government Disruption

When printed telephone directories first started including blue pages for government offices in the 1950s and 60s, they created a new expectation: citizens should be able to reach their government by phone. The Internet revolution of the 1990s raised these expectations exponentially—if you could bank online and shop on Amazon, why couldn’t you renew your license or apply for benefits with the same ease?

Now, with 34% of U.S. adults having used ChatGPT—roughly double the share since 2023—we’re witnessing the third major wave of technology-driven transformation in how citizens expect to interact with their government. And once again, we’re watching the same pattern unfold: rapid consumer adoption creating new expectations, followed by delayed government adaptation, followed (potentially) by a long period of playing catch-up.

The difference this time? The stakes are higher, the pace is faster, and the consequences of falling behind may be more severe than ever.

Three Waves of Technological Disruption

Each wave of technological change outlined above has followed a similar trajectory, but with accelerating speed:

The telephone era unfolded over decades. Telephone adoption began in the late 1870s as an expensive luxury for the wealthy, with monthly costs of $20-40 (equivalent to $500-1,000 today). It took until the mid-20th century for phones to become commonplace in households. Governments had time to establish call centers and phone-based services without fundamentally redesigning how they operated. The pace was manageable—measured in decades, not in years or months.

The Internet era compressed this timeline to years. Internet users exploded from 45 million in 1996 to 407 million by 2000—a ninefold increase in just four years. Citizens who could accomplish complex tasks online in minutes naturally expected similar efficiency from their government. But while private companies were redesigning their entire business models around digital capabilities, governments largely treated the Internet as a new channel for existing processes.

The AI era is compressing change to months. Generative AI has been adopted at a faster pace than PCs or the internet, with breakthroughs moving from laboratory to widespread deployment in timeframes that would have seemed impossible just a few years ago.

The Structural Challenge: Democracy vs. Speed

As I’ve written about extensively before, governments aren’t slow at technology adoption by accident—they’re designed that way. The very features that are intended to make democratic government more trustworthy and accountable also make it structurally unsuited for rapid technological change.

The classic example is government procurement. The average technology buying cycle for government is 22 months compared to 6-7 months in the private sector. These delays aren’t the result of bureaucratic incompetence—they’re the deliberate result of requirements designed to help ensure fairness, transparency, and accountability. Public bid posting periods, vendor diversity requirements, the acquisition of performance bonds, and detailed financial scrutiny all represent important values imbued in public procurement processes. But they can also add months to timelines in a world where technology solutions can have shorter development cycles than government procurement processes.

The same pattern can be seen across government operations. Budget processes designed to prevent waste and enable legislative oversight create “use it or lose it” dynamics that discourage efficiency innovations. Civil service systems meant to prevent patronage and ensure merit-based hiring create lengthy processes that struggle to compete for scarce technical talent against private companies that can hire faster and pay more.

These aren’t bugs—they’re features. The transparency requirements, deliberative processes, and risk aversion that can slow down government technology adoption exist to uphold fundamental values and principles. The design of these processes is deliberate. The problem is that these principles are increasingly in tension with the pace of technological change.

The Compounding Crisis

This structural mismatch becomes more problematic with each technological wave because the pace of change keeps accelerating while government processes remain largely constant. What was a manageable gap during the telephone era became a significant lag during the Internet era and is becoming an existential challenge in the AI era.

The Internet wave provides a sobering lesson in the cost of delayed adaptation. Despite clear evidence throughout the 1990s that digital services were transforming how people expected to interact with institutions, most governments were slow to respond. Two decades later, we’re still playing catch-up, retrofitting digital services onto processes designed for paper-based workflows, and struggling to make basic websites and online services accessible.

The consequences aren’t just about inefficiency; they are about the loss of public trust. When citizens can accomplish complex tasks seamlessly with private companies but struggle with basic government services, the contrast erodes confidence in government competence and accountability.

The Stakes Are Higher

The AI wave presents an even greater challenge because it doesn’t just change how governments deliver services—it potentially changes how governments make decisions. Unlike previous technological waves that primarily affected operational efficiency, AI touches the core of democratic governance: the exercise of judgment and discretion in applying laws and policies to individual circumstances.

The stakes couldn’t be higher. As an example, when Spain implemented an algorithmic system to assess domestic violence risk, the software became “so woven into law enforcement that it is hard to know where its recommendations end and human decision-making begins.” Tragically, people assessed as low-risk still became victims of violence.

This use of AI in decision making processes highlights a troubling pattern identified by researchers like Virginia Eubanks in her analysis of Pennsylvania’s Allegheny Family Screening Tool. While AI systems are meant to “support, not supplant, human decision-making,” in practice “the algorithm seems to be training the intake workers.” Staff begin to defer to algorithmic judgments, believing the model is less fallible than human screeners.

The “human-in-the-loop” approach—where people supposedly maintain oversight of AI decisions—may not be sufficient protection against the human tendency to cede authority to software. When New York City’s AI chatbot tells businesses they can take workers’ tips and that landlords can discriminate based on source of income, both illegal, it demonstrates how AI systems can undermine the rule of law even in seemingly routine interactions.

The acceleration of AI adoption in government is happening precisely in contexts where lives hang in the balance—decisions about protection from violence, child welfare, emergency response, and access to vital resources. Unlike the more gradual telephone and Internet adoption cycles that gave governments some time—limited as it was—to learn and adapt, AI deployment can sometimes happen without proper safeguards, training, or accountability mechanisms in place.

Getting It Right This Time

The lesson from previous technological waves is clear: the cost of delayed or unorganized adaptation grows exponentially. Governments that fell behind during the Internet era spent decades and billions of dollars trying to catch up, often with mixed results. With AI moving even faster and touching more fundamental aspects of governance, the penalty for falling behind again could be severe.

But speed without safeguards is equally dangerous. The challenge isn’t choosing between moving fast and maintaining accountability—it’s developing the capacity to do both simultaneously. This means building safeguards into the adoption process from the start, not retrofitting them later. It means creating review mechanisms that can operate at the speed of technology development, not the traditional pace of government oversight.

The solution requires adapting democratic processes for technological speed without abandoning democratic values. This means creating “fast lanes” for certain types of technological adoption while maintaining rigorous oversight. It means developing rapid-response teams for AI evaluation that include technical experts, legal reviewers, and community representatives. It means investing in government workforce development so staff can properly assess and oversee AI systems rather than simply defer to them.

Most importantly, it means recognizing that the structural challenges governments face with technology adoption aren’t bugs in the system—they’re features designed to serve important functions. The transparency requirements, deliberative processes, and accountability mechanisms that slow government down exist for a reason. The question isn’t how to eliminate these constraints, but how to redesign them so they can operate effectively when technological change happens faster than traditional democratic processes were designed to accommodate.

As this historical pattern of technology adoption has advanced, governments have played catch-up before, each time with higher stakes and less time to adapt. Given the pace and implications of AI adoption in government services, we can’t afford to play catch-up again.

#technology #AI #ChatGPT #artificialIntelligence #business #government #GenAI #Procurement

A depressing fable about how ChatGPT is corroding trust in scholarship

In preparation for next week’s keynote on generative AI and the crisis of trust, I picked up a book about trust by a philosopher, who I’ve decided not to name, when I saw it in the Tate bookshop earlier today. It began with a quote from bell hooks which caught my attention:

Trust is both a personal and a political endeavour, an affirmation of our shared humanity and our collective potential for growth and transformation. By embracing trust, by fostering connections, grounded in love and compassion, we have the power to not only change our own lives but also to reshape the world around us…

I wanted to post it on my blog, so I immediately looked for a citation. I could find no result for the exact quote but Google returned this site at the top of the list, where I found nearly the same quote:

In the end, trust is both a personal and a political endeavor, an affirmation of our shared humanity and our collective potential for growth and transformation. By embracing trust, by fostering connections grounded in love and compassion, we have the power to not only change our own lives but also to reshape the world around us, one relationship at a time.

The problem is that this site hosts imagined responses by philosophers to the question ‘what is trust?’ produced by ChatGPT. These (genuinely quite interesting) LLM outputs were posted in April 2023, only to feature in a book published in 2024. I can find no other source for the quote the author includes, other than this nearly exact quote produced by ChatGPT.

The most obvious explanation here is that they decided they wanted to start the book with a quote from bell hooks. They then typed ‘bell hooks and trust’ into Google, which returns the site above as its second result. They didn’t read the introduction, which explains the exercise with the LLM, and instead copied and pasted the ChatGPT output into their book without checking the source of the citation.

The irony is that I now don’t trust the rest of the book. A philosopher writing a book about trust begins it with such lazy scholarship that I now struggle to trust them. I hope I’m wrong. But without wishing to personalise things, I’m tempted to use this as an example in next week’s keynote. It illustrates how LLMs are contributing to an environment in which lazy scholarship, cherry-picking a quote from a Google search, becomes much riskier given the circulation of synthetic content.

#AI #artificialIntelligence #ChatGPT #generativeAI #PascalGielen #scholarship #technology #trust #writing