Maybe We Shouldn’t Call Them AI “Agents”
Beware of pretty faces that you find. A pretty face can hide an evil mind.
– Johnny Rivers, Secret Agent Man
As artificial intelligence capabilities expand into government service delivery, it’s worth pausing to think carefully about the language we’re using. The terms “agentic services” and “agentic AI” have gained significant traction in the tech industry, and for good reason: they capture something important about AI systems that can act autonomously. I’m as guilty as anyone of using these terms frequently. But for those of us working in government contexts, there are some considerations worth keeping in mind.
The “Agent” Problem in Government
In government, the word “agent” carries particular connotations. FBI agents. Border patrol agents. IRS agents. These are enforcement and investigative roles. When citizens hear “government agent,” they often think of authority, compliance, and oversight — not helpful service delivery.
This isn’t an insurmountable problem, but it’s worth being aware of. The language we choose shapes how citizens perceive and respond to new service models. If we’re trying to build trust in AI-enabled services, starting with terminology that might trigger concerns about surveillance or enforcement may not be ideal.
(And yes, for a certain generation, The Matrix movies didn’t exactly help the cultural perception of “agents” either. 😅)
What the Term “Agents” Might Obscure
There’s a deeper consideration beyond just the word “agent” itself. Calling these services “agentic” can make them sound radically new — a complete departure enabled by cutting-edge AI. But that framing might obscure an important reality.
Delegation-based government services aren’t new. They’ve existed for decades, and are extremely common today.
Tax preparers handle filing returns on behalf of clients. Immigration attorneys navigate visa applications. Customs brokers manage import/export documentation for businesses. Permit expediters guide building approval processes. Benefits navigators help people apply for disability or veterans services.
These are all delegation relationships. Citizens hand over complex, high-stakes government interactions to trusted specialists who handle the administrative burden on their behalf. AI didn’t create this service delivery paradigm, but it could make it far more scalable and affordable.
Why Words Matter
Thinking about these services as “delegation-based” rather than simply “agentic” opens up useful design questions.
When you frame it as delegation, you can look to existing delegation relationships for guidance. What makes someone comfortable delegating their tax filing to a CPA? What trust factors matter when hiring an immigration attorney? These aren’t abstract questions — there are decades of real-world answers.
The language of delegation also centers the citizen experience more clearly. It’s not about what the AI can do autonomously; it’s about what citizens are willing to hand over and under what conditions. That subtle shift in framing can lead to different design choices around transparency, control, and oversight.
Moving Forward
This isn’t a call to abandon the term “agentic services” entirely. It’s widely used in industry, and there’s value in using common language when talking with technology partners and vendors.
But for internal discussions, policy development, and especially citizen-facing communications, it may be worth experimenting with terms like “delegation-based services” or similar language. That framing acknowledges continuity with existing practices, avoids potentially problematic associations with “government agents,” and keeps the focus on what citizens are actually doing: choosing to delegate burdensome tasks while maintaining appropriate oversight and accountability.
The technology may be new, but the underlying service delivery paradigm isn’t. Our language should reflect that.
Note – this post originally appeared on GovLoop.
#agent #AI #artificialIntelligence #ChatGPT #serviceDelivery