Civic Innovations
@civic.io@civic.io  ·  last week

AI instructions as platform infrastructure

In previous posts on the Ad Hoc LLC website, I’ve talked about how platforms reduce complexity by providing reusable building blocks and how AI-assisted code generation is transforming custom development. An emerging opportunity at the intersection of these two trends warrants attention: treating AI coding agent instructions as a core component of platform infrastructure itself.

From documentation to executable knowledge

Platform teams invest significant effort in creating documentation that helps application developers understand and use platform services. This documentation typically includes architecture diagrams, API references, code samples, best practices, security guidelines, and deployment instructions.

While this documentation is essential, it still places a burden on developers to read, interpret, and correctly apply this information. Even with excellent documentation, developers must synthesize information across multiple documents, translate examples to their specific use case, and navigate the gap between understanding and implementation.

This cognitive load, while reduced compared to starting from scratch, can still create friction in the development process. Developers need to spend time deciphering documentation rather than solving their unique business problems.

AI instructions as infrastructure

What if platform teams could encode their knowledge directly into AI coding agent instructions that developers could use immediately? Instead of providing documentation that developers must interpret, platforms would provide machine-readable instructions that AI agents can execute directly.

This represents a fundamental shift in how platform knowledge is packaged and delivered. In this model, AI coding agent instructions become a first-class component of platform infrastructure, alongside the other building blocks I’ve discussed in the past. Just as platforms provide deployment pipelines and monitoring services, they would also provide curated AI instructions that embody institutional knowledge about how to build on the platform.

Platform-provided AI instructions would codify the collective knowledge of the platform team, including:

  • how to organize code within the platform’s architecture
  • required security controls and their implementation patterns
  • how to connect to platform APIs and shared services
  • coding standards and testing approaches
  • agency-specific business domain knowledge
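To make this concrete, here is a minimal sketch of what a fragment of such an instruction set might look like. Everything in it (the file name, service names, and specific rules) is an illustrative assumption rather than any real platform’s guidance; in practice, teams often use agent-readable conventions along these lines:

```markdown
<!-- platform-instructions.md (hypothetical), versioned with platform release 2.4.x -->
# Building applications on this platform

## Code organization
- Scaffold new services from the shared service template; keep HTTP handlers
  separate from business logic.

## Security controls (required)
- Route all authentication through the platform's shared auth gateway
  middleware; never implement token validation locally.
- Log any access to PII fields through the shared audit service.

## Platform integration
- Read configuration from the platform configuration service; do not
  hard-code environment values.

## Testing
- Every endpoint needs a contract test against the platform API gateway.
```

An AI agent pointed at a file like this can apply these rules on every generation, rather than relying on each developer to remember them.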

The developer experience transformed

For application developers, this would transform the experience of starting a new project. Instead of reading through multiple documentation pages, finding and adapting code examples, and setting up basic scaffolding manually, developers would provide their AI coding assistant with the platform’s instruction set, describe their specific application needs, and focus immediately on unique business logic.

The AI agent, armed with platform-specific instructions, would generate code that already incorporates correct platform integration patterns, required security controls, proper configuration for deployment, and compliance with platform governance.

This approach dramatically reduces cognitive load on product teams in several ways. Developers don’t need to mentally process and translate documentation—the AI agent handles the interpretation and application of platform knowledge. Every project starts with the correct patterns and practices baked in. New team members can become productive more quickly, as the AI agent acts as an expert guide. And teams can spend their mental energy on the aspects of their application that make it unique, rather than on boilerplate integration with the platform.

Implementation considerations

For platform teams looking to adopt this approach, several factors are important.

  • AI instructions should be versioned alongside platform releases, ensuring developers use guidance that matches their platform version.
  • Platform teams should validate that instructions generate code that actually works correctly on the platform, treating instructions as code that requires testing (a sketch of what such a test might look like follows this list).
  • Instructions will improve over time based on feedback from developers and analysis of what the AI agents produce.
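As a sketch of that testing idea: the pytest-style check below assumes a hypothetical generate_scaffold() helper that invokes an AI coding agent with the platform’s instruction set, then asserts that the output respects platform rules. All names and rules here are illustrative assumptions, not a real platform’s API.

```python
# A hypothetical test harness for AI-generated scaffolding. generate_scaffold()
# and every rule below are illustrative assumptions, not a real platform's API.
from pathlib import Path

REQUIRED_MIDDLEWARE = "auth_gateway"      # assumed shared security control
FORBIDDEN_PATTERNS = ["verify_token("]    # local token validation is disallowed


def generate_scaffold(prompt: str, instructions: Path, out_dir: Path) -> None:
    """Placeholder: invoke your AI coding agent with the platform's
    instruction set and write the generated project into out_dir."""
    raise NotImplementedError("wire up your coding agent of choice here")


def test_scaffold_uses_platform_security(tmp_path: Path) -> None:
    generate_scaffold(
        prompt="a minimal benefits-status API",
        instructions=Path("platform-instructions.md"),
        out_dir=tmp_path,
    )
    source = "\n".join(p.read_text() for p in tmp_path.rglob("*.py"))
    # Generated code must route auth through the shared middleware...
    assert REQUIRED_MIDDLEWARE in source
    # ...and must not reimplement token validation locally.
    for pattern in FORBIDDEN_PATTERNS:
        assert pattern not in source
```

The point is less these specific assertions than the workflow: changes to the instruction set go through the same review and CI gates as any other platform code.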

AI instructions can’t (and shouldn’t) replace documentation entirely. They would complement it by making knowledge executable while documentation remains important for deeper understanding. And in secure government environments, access to detailed platform instructions may need to be controlled, ensuring only authorized developers can use them.

The broader impact

This evolution of platforms has implications beyond individual developer productivity. New agencies or teams adopting a platform can become productive faster when AI instructions encode platform expertise. Institutional knowledge about platform best practices becomes more durable when encoded in AI instructions rather than living only in documentation or team members’ heads. Organizations can achieve greater consistency across applications when the same AI instructions guide initial implementation.

When AI-generated scaffolding already incorporates required security controls and compliance patterns, the path to an Authority to Operate (ATO) becomes even faster. And platform teams can support more application teams without proportionally increasing support burden, as AI instructions scale knowledge delivery.

A natural evolution

AI coding agent instructions represent a natural next step in platform evolution. Platforms have always been about encoding solutions to common problems so teams don’t reinvent the wheel. Documentation and code samples were the first generation of this encoded knowledge. AI instructions are the next generation—knowledge that isn’t just readable but directly executable.

For government agencies navigating the dual mandate to adopt commercial solutions and leverage AI capabilities, this approach offers a practical path forward. Platform foundations provide the commercial infrastructure, traditional documentation ensures understanding and oversight, and AI instructions dramatically accelerate the development of agency-specific applications on that foundation.

The result is a platform that doesn’t just provide services but actively helps developers use those services correctly and efficiently. It’s an approach that makes the right way not just the easiest way, but increasingly the automatic way—freeing government product teams to focus on what matters most: delivering value to the people who depend on government digital services.

#AI #artificialIntelligence #llm #technology

Civic Innovations
@civic.io@civic.io  ·  2 months ago

The Third Wave of Government Disruption

When printed telephone directories first started including blue pages for government offices in the 1950s and 60s, they created a new expectation: citizens should be able to reach their government by phone. The Internet revolution of the 1990s raised these expectations exponentially—if you could bank online and shop on Amazon, why couldn’t you renew your license or apply for benefits with the same ease?

Now, with 34% of U.S. adults having used ChatGPT—roughly double the share since 2023—we’re witnessing the third major wave of technology-driven transformation in how citizens expect to interact with their government. And once again, we’re watching the same pattern unfold: rapid consumer adoption creating new expectations, followed by delayed government adaptation, followed (potentially) by a long period of playing catch-up.

The difference this time? The stakes are higher, the pace is faster, and the consequences of falling behind may be more severe than ever.

Three Waves of Technological Disruption

Each wave of technological change outlined above has followed a similar trajectory, but with accelerating speed:

The telephone era unfolded over decades. Telephone adoption began in the late 1870s as an expensive luxury for the wealthy, with monthly costs of $20-40 (equivalent to $500-1,000 today). It took until the mid-20th century for phones to become commonplace in households. Governments had time to establish call centers and phone-based services without fundamentally redesigning how they operated. The pace was manageable—measured in decades, not in years or months.

The Internet era compressed this timeline to years. Internet users exploded from 45 million in 1996 to 407 million by 2000—a ninefold increase in just four years. Citizens who could accomplish complex tasks online in minutes naturally expected similar efficiency from their government. But while private companies were redesigning their entire business models around digital capabilities, governments largely treated the Internet as a new channel for existing processes.

The AI era is compressing change to months. Generative AI has been adopted at a faster pace than PCs or the Internet, with breakthroughs moving from laboratory to widespread deployment in timeframes that would have seemed impossible just a few years ago.

The Structural Challenge: Democracy vs. Speed

As I’ve written about extensively before, governments aren’t slow at technology adoption by accident—they’re designed that way. The very features that are intended to make democratic government more trustworthy and accountable also make it structurally unsuited for rapid technological change.

The classic example is government procurement. The average technology buying cycle for government is 22 months compared to 6-7 months in the private sector. These delays aren’t the result of bureaucratic incompetence—they’re the deliberate result of requirements designed to help ensure fairness, transparency, and accountability. Public bid posting periods, vendor diversity requirements, the acquisition of performance bonds, and detailed financial scrutiny all represent important values imbued in public procurement processes. But they can also add months to timelines in a world where technology solutions can have shorter development cycles than government procurement processes.

The same pattern can be seen across government operations. Budget processes designed to prevent waste and enable legislative oversight create “use it or lose it” dynamics that discourage efficiency innovations. Civil service systems meant to prevent patronage and ensure merit-based hiring create lengthy processes that struggle to compete for scarce technical talent against private companies that can hire faster and pay more.

These aren’t bugs—they’re features. The transparency requirements, deliberative processes, and risk aversion that can slow down government technology adoption exist to uphold fundamental values and principles. The design of these processes is deliberate. The problem is that these principles are increasingly in tension with the pace of technological change.

The Compounding Crisis

This structural mismatch becomes more problematic with each technological wave because the pace of change keeps accelerating while government processes remain largely constant. What was a manageable gap during the telephone era became a significant lag during the Internet era and is becoming an existential challenge in the AI era.

The Internet wave provides a sobering lesson in the cost of delayed adaptation. Despite clear evidence throughout the 1990s that digital services were transforming how people expected to interact with institutions, most governments were slow to recognize that the rapid evolution of the Internet was changing people’s expectations for how they communicated and interacted with their government. Two decades later, we’re still playing catch-up, retrofitting digital services onto processes designed for paper-based workflows, and struggling to make basic websites and online services accessible.

The consequences aren’t just about inefficiency; they are about the loss of public trust. When citizens can accomplish complex tasks seamlessly with private companies but struggle with basic government services, the contrast erodes confidence in government competence and accountability.

The Stakes Are Higher

The AI wave presents an even greater challenge because it doesn’t just change how governments deliver services—it potentially changes how governments make decisions. Unlike previous technological waves that primarily affected operational efficiency, AI touches the core of democratic governance: the exercise of judgment and discretion in applying laws and policies to individual circumstances.

The stakes couldn’t be higher. As an example, when Spain implemented an algorithmic system to assess domestic violence risk, the software became “so woven into law enforcement that it is hard to know where its recommendations end and human decision-making begins.” Tragically, people assessed as low-risk still became victims of violence.

This use of AI in decision making processes highlights a troubling pattern identified by researchers like Virginia Eubanks in her analysis of Pennsylvania’s Allegheny Family Screening Tool. While AI systems are meant to “support, not supplant, human decision-making,” in practice “the algorithm seems to be training the intake workers.” Staff begin to defer to algorithmic judgments, believing the model is less fallible than human screeners.

The “human-in-the-loop” approach—where people supposedly maintain oversight of AI decisions—may not be sufficient protection against the human tendency to cede authority to software. When New York City’s AI chatbot tells businesses they can take workers’ tips and landlords that they can discriminate based on source of income (both illegal), it demonstrates how AI systems can undermine the rule of law even in seemingly routine interactions.

The acceleration of AI adoption in government is happening precisely in contexts where lives hang in the balance—decisions about protection from violence, child welfare, emergency response, and access to vital resources. Unlike the more gradual telephone and Internet adoption cycles that gave governments some time—limited as it was—to learn and adapt, AI deployment can sometimes happen without proper safeguards, training, or accountability mechanisms in place.

Getting It Right This Time

The lesson from previous technological waves is clear: the cost of delayed or unorganized adaptation grows exponentially. Governments that fell behind during the Internet era spent decades and billions of dollars trying to catch up, often with mixed results. With AI moving even faster and touching more fundamental aspects of governance, the penalty for falling behind again could be severe.

But speed without safeguards is equally dangerous. The challenge isn’t choosing between moving fast and maintaining accountability—it’s developing the capacity to do both simultaneously. This means building safeguards into the adoption process from the start, not retrofitting them later. It means creating review mechanisms that can operate at the speed of technology development, not the traditional pace of government oversight.

The solution requires adapting democratic processes for technological speed without abandoning democratic values. This means creating “fast lanes” for certain types of technological adoption while maintaining rigorous oversight. It means developing rapid-response teams for AI evaluation that include technical experts, legal reviewers, and community representatives. It means investing in government workforce development so staff can properly assess and oversee AI systems rather than simply defer to them.

Most importantly, it means recognizing that the structural challenges governments face with technology adoption aren’t bugs in the system—they’re features designed to serve important functions. The transparency requirements, deliberative processes, and accountability mechanisms that slow government down exist for a reason. The question isn’t how to eliminate these constraints, but how to redesign them so they can operate effectively when technological change happens faster than traditional democratic processes were designed to accommodate.

Governments have played catch-up with technology before, each time with higher stakes and less time to adapt. Given the pace and implications of AI adoption in government services, we can’t afford to play catch-up again.

#technology #AI #ChatGPT #artificialIntelligence #business #government #GenAI #Procurement

Kate Bowles
Kate Bowles boosted
Mark Carrigan
@markcarrigan.net@markcarrigan.net  ·  4 months ago

A depressing fable about how ChatGPT is corroding trust in scholarship

In preparation for next week’s keynote on generative AI and the crisis of trust, I picked up a book about trust by a philosopher, whom I’ve decided not to name, when I saw it in the Tate bookshop earlier today. It began with a quote from bell hooks which caught my attention:

Trust is both a personal and a political endeavour, an affirmation of our shared humanity and our collective potential for growth and transformation. By embracing trust, by fostering connections, grounded in love and compassion, we have the power to not only change our own lives but also to reshape the world around us…

I wanted to post it on my blog, so I immediately looked for a citation. I could find no result for the exact quote, but Google returned this site at the top of the list, where I found nearly the same quote:

In the end, trust is both a personal and a political endeavor, an affirmation of our shared humanity and our collective potential for growth and transformation. By embracing trust, by fostering connections grounded in love and compassion, we have the power to not only change our own lives but also to reshape the world around us, one relationship at a time.

The problem is that this site hosts imagined responses by philosophers to the question ‘what is trust?’ produced by ChatGPT. These (genuinely quite interesting) LLM outputs were posted in April 2023, only to feature in a book published in 2024. I can find no other source for the quote the author includes, other than this nearly exact quote produced by ChatGPT.

The most obvious explanation here is that they decided they wanted to start the book with a quote from bell hooks. They then typed in ‘bell hooks and trust’, which returns the site above as its second result. They didn’t read the introduction, which explains the exercise with the LLM, and instead copied and pasted the ChatGPT output into their book without checking the source of the citation.

The irony is that I now don’t trust the rest of the book. A philosopher writing a book about trust begins it with such lazy scholarship that I now struggle to trust them. I hope I’m wrong. But without wishing to personalise things, I’m tempted to use this as an example in next week’s keynote. It illustrates how LLMs are contributing to an environment in which lazy scholarship, cherry-picking a quote from a Google search, becomes much riskier given the circulation of synthetic content.

#AI #artificialIntelligence #ChatGPT #generativeAI #PascalGielen #scholarship #technology #trust #writing
