GAO on AI agents: Lead with intelligent AI adoption
Earlier this year, the Government Accountability Office shared a report on AI agents as policymakers look to balance the promise of the new capabilities with concern about potential misuse and other unintended consequences.
The report will help support a shift that's already happening around AI agents. We're no longer asking what generative AI can create, but what autonomous AI can do. These systems are no longer simply responding; now they sense, plan and act.
The GAO's report isn't an "AI 101," but rather sets a baseline for government to make informed, mission-driven choices before AI agents become haphazardly embedded in critical operations. The message is clear: Don't just adopt AI, adopt it smart.
There are steps agencies can take to facilitate smart adoption from the start of their AI journey. To prepare, they will need to increase digital literacy, recognize and understand the shift from generative to agentic AI, and identify where they are on the spectrum of autonomy.
Increasing digital literacy
AI systems can create great efficiencies, but they also require a significant shift in thinking, skill sets and workflows for federal employees.
A major part of this is understanding the data. Government wants to move away from rigid systems of record and into an approach where data can move around freely. Users who were hired to become experts on one tool for a particular mission need to increase their understanding of all aspects of the project and the tools that touch it.
Workers who use government datasets require a deeper knowledge of this information. Government datasets are particularly complex, requiring an understanding of nuances around source, frequency of updates, data quality and legal constraints.
These users need to understand whatβs available, how itβs structured and its limits in order to ask AI the right questions and receive valuable responses. Poorly framed questions can result in incomplete, inaccurate or misleading answers.
At a basic level, there are some skills all government employees need to prepare for a workplace that benefits from agentic AI.
Even non-technical staff need to understand the data landscape, enabling collaboration with IT and data teams to accelerate problem-solving and innovation. All government employees must be able to navigate dashboards and basic tools, ask sharper, more targeted questions of data and AI, recognize limitations like bias, gaps or outdated info, and make decisions informed by facts, not assumptions.
Shifting from GenAI to agentic AI
Overall, we're seeing a major shift from AI pilots to solutions, and within that, a transition from generative to agentic AI. This requires understanding both the limits of agentic AI and how to show results and deliver return on investment.
As the GAO's report recognizes, use of agentic AI in government is currently limited to software development and autonomous vehicles, but it ultimately will enable more complex tasks and support all federal agencies.
As government use of agentic AI expands, we'll see increased efficiency and greater overall support for the workforce. Agentic AI shines by performing rules-based, repeatable tasks that slow humans down. Ideally, it will amplify, not replace, humans.
When looking to identify prime candidates for agentic AI projects, agencies should prioritize routine but labor-intensive workflows that follow clear rules. These can include tasks like document processing, data entry and compliance checks.
AI is also valuable in situations where it can triage, recommend or surface insights, then pass final decision-making to a human. Knowledge management and retrieval is another ideal function for agentic AI, helping government workers quickly find accurate, relevant information.
There are a few ways agencies can prepare for this shift, beginning with an inventory of automation-ready workflows and investment in data readiness and workforce upskilling.
Understanding the spectrum of autonomy
Like the move from cloud-first to cloud-smart, agencies need to understand the spectrum of autonomy.
The question is not whether to use AI agents. It's how to match the right level of autonomy to the mission, with governance scaled to the risk. As a guideline, less agentic AI is the right fit for routine, low-risk, high-volume tasks. Overall, for these types of programs, agencies should be thinking about efficiency with guardrails.
For example, cyber defense functions like detecting anomalies, auto-triaging alerts, isolating compromised assets and generating response playbooks for security analysts do not require much agentic AI support. On the other hand, tasks like autonomously containing active intrusions, dynamically adjusting network defenses and escalating only edge cases or ambiguous threats to human operators greatly benefit from the use of agentic AI.
Similarly, in the realm of logistics and supply chain, tasks like anticipating demand surges using predictive analytics, automatically initiating replenishment and suggesting optimized delivery routes for approval require only a low level of agentic AI, while it plays a more significant role in dynamically rerouting shipments in response to disruptions or coordinating cross-agency assets.
Through reports like the recent one from GAO and other guidelines and efforts by the federal government, agencies are positioned for significant progress and increased efficiencies in the years to come. Agentic AI will undeniably become a core piece of agencies' technology toolsets, and now is the time to prepare.
Laura Stash is executive vice president of solutions architecture at iTech AG.

© Federal News Network