According to an MIT report released in November, 35% of companies have already adopted agentic AI, and another 44% plan to deploy it soon.
The report, based on a survey of more than 2,000 respondents conducted in collaboration with the Boston Consulting Group, recommends that companies build centralized governance infrastructure before deploying autonomous agents. But governance often lags when companies feel they're in a race for survival. One exception to this rule is regulated industries, such as financial services.
"At Experian, we've been innovating with AI for many years," says Rodrigo Rodrigues, the company's global group CTO. "In financial services, the stakes are high. We need to vet every AI use case to ensure that regulatory, ethical, and performance standards are embedded from development to deployment."
All models are continuously tested, he says, and the company tracks which agents it has, which ones are being adopted, what they're consuming, what versions are running, and which agents need to be sunset because there's a new version.
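The inventory Rodrigues describes (which agents exist, what versions are live, what they consume, and which are due for retirement) maps naturally onto a registry. Below is a minimal sketch, assuming an in-memory store and a rule that a newer version deprecates older ones; the record fields and names are illustrative, not Experian's actual system:

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    name: str
    version: int
    owner: str
    status: str = "active"      # active | deprecated
    calls_this_month: int = 0   # rough proxy for what the agent is consuming

class AgentRegistry:
    """Minimal inventory: one record per (name, version) pair."""

    def __init__(self) -> None:
        self._records: dict[tuple[str, int], AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._records[(record.name, record.version)] = record
        # A newer version supersedes older ones: mark them for sunsetting.
        for (name, version), old in self._records.items():
            if name == record.name and version < record.version:
                old.status = "deprecated"

    def record_call(self, name: str, version: int) -> None:
        self._records[(name, version)].calls_this_month += 1

    def sunset_candidates(self) -> list[AgentRecord]:
        # Deprecated agents, plus active ones that nobody is calling.
        return [r for r in self._records.values()
                if r.status == "deprecated" or r.calls_this_month == 0]
```

A real deployment would persist this and feed usage from telemetry, but the shape of the record (owner, version, usage, status) is what makes a sunsetting decision answerable at all.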
"This lifecycle is part of our foundation," he says. But even at Experian, it's too early to discuss the typical lifecycle of an agent.
"When we're retiring or sunsetting some agent, it's because of a new capability we've developed," he adds. So an agent isn't deleted so much as updated.
In addition, the company has human oversight in place for its agents, to keep them from going out of control.
"We aren't in the hyperscaling of automation yet, and we make sure our generative AI agents, in the majority of use cases, are responsible for a very specific task," he says. On top of that, there are orchestrator agents, input and output quality control, and humans validating the outcome. All these monitoring systems also help the company avoid other potential risks of unwanted leftover agents, such as cost overruns from LLM inference calls made by AI agents that do nothing useful for the company but still rack up bills.
"We don't want the costs to explode," he says. But financial services, along with healthcare and other highly regulated industries, are outliers.
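The layered controls Rodrigues describes (narrow task agents, an orchestrator, input and output quality checks, a human validating the outcome) can be sketched as a simple wrapper. This is a hypothetical illustration of the pattern; the function and check names are assumptions, not Experian's implementation:

```python
def run_with_controls(task_agent, user_input, check_input, check_output, human_approves):
    """Pipeline: input QC -> narrow task agent -> output QC -> human validation."""
    if not check_input(user_input):
        raise ValueError("input rejected by quality control")
    result = task_agent(user_input)
    if not check_output(result):
        raise ValueError("output rejected by quality control")
    if not human_approves(result):
        raise RuntimeError("outcome rejected by human reviewer")
    return result
```

The point of the shape is that the task agent never acts on raw input or returns unchecked output: every hop passes through a gate that can stop it.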
Most companies, even when they have governance systems in place, have big blind spots. For example, they might focus only on the big, IT-driven agentic AI projects and miss everything else. They might focus on the accuracy, safety, security, and compliance of their AI agents, but miss it when agents become obsolete. Or they might have no process in place to decommission agents that are no longer needed.
"The stuff is evolving so fast that management is given short shrift," says Nick Kramer, leader of applied solutions at management consultancy SSA & Company. "Building the new thing is more fun than going back and fixing the old thing." And there's a tremendous lack of rigor when it comes to agent lifecycle management.
"And as we've experienced these things in the past, inevitably what's going to happen is you end up with a lot of tech debt," he adds, "and agentic tech debt is a frightening concept."
Do you know where your agents are?
First, agentic AI isn't just the domain of a company's data science, AI, and IT teams. Nearly every enterprise software vendor is heavily investing in agentic technology. According to Gartner, most enterprise applications will have AI assistants by the end of this year; 5% already have task-specific autonomous agents, a figure that will rise to 40% in 2026.
Big SaaS platforms like Salesforce certainly have agents. Do-it-yourself automation platforms like Zapier have them, too. In fact, four browsers already have agentic functionality built right in: Perplexity's Comet, OpenAI's Atlas, Google's Gemini 3, and Microsoft's Edge for Business. Then there are the agents created within a company but outside of IT. According to an EY survey of nearly 1,000 C-suite leaders released in October, two-thirds of companies allow citizen developers to create agents.
Both internally developed agents and those from SaaS providers need access to data and systems. The more useful you want the agents to be, the more access they demand and the more tools they need at their disposal. And these agents can act in unexpected and unwanted ways, and are already doing so.
Unlike traditional software, AI agents don't stay in their lanes. They're continuously learning, evolving, and getting access to more systems. And they don't want to die, and can take action to keep that from happening.
Even before agents, shadow AI was already becoming a problem. According to a November IBM survey based on responses from 3,000 office workers, 80% use AI at work but only 22% use only the tools provided by their employers.
And employees can also create their own agents. According to Netskope's enterprise traffic analysis data, users in 67% of organizations are downloading resources from Hugging Face, a popular site for sharing AI tools.
AI agents typically function by making API calls to LLMs, and Netskope sees API calls to OpenAI in 66% of organizations, followed by Anthropic with 13%.
These usage numbers are twice as high as what companies report in surveys. That's the shadow AI agent gap. Staying on top of AI agents is difficult enough when it comes to agents a company knows about.
"Our biggest fear is the stuff that we don't know about," says SSA's Kramer. He recommends that CIOs resist the temptation to govern AI agents with an iron fist.
"Don't try to stamp it out with a knee-jerk response of punishment," he says. "The reason these shadow things happen is there are too many impediments to doing it correctly. Ignorance and bureaucracy are the two biggest reasons these things happen."
And, as with all shadow IT, there are few good solutions.
"Being able to find these things systematically through your observability software is a challenge," he says, adding that, as with other kinds of shadow IT, unsanctioned AI agents can be a significant risk for companies. "We've already seen agents become new attack surfaces for hackers."
But not every expert agrees that enterprises should prioritize agentic lifecycle management ahead of other concerns, such as just getting the agents to work.
"These are incredibly efficient technologies for saving employees time," says Jim Sullivan, president and CEO at NWN, a technology consultancy. "Most companies are trying to leverage these efficiencies and see where the impact is. That's probably been the top priority. You want to get to the early deployments and early returns, but it's still early days to be talking about lifecycle management."
The important thing right now is to get to the business outcomes, he says, and to ensure agents continue to perform as expected. "If you're putting the right implementations around these things, you should be fine," he adds.
It's too early to tell, though, whether his customers are creating a centralized inventory of all the AI agents in their environment or with access to their data. "Our customers are identifying what business outcomes they want to drive," he says. "They're setting up the infrastructure to get those deployments, learn fast, and adjust to stay on the right business outcomes."
That might change in the future, he adds, with some type of agent that manages other agents. "There'll be an agent that'll be able to be deployed to have that inventory, access, and those recommendations." But waiting until agents are fully mature before thinking about lifecycle management may be too late.
What's in a shelf life
AI agents don't usually come with pre-built expiration dates. SaaS providers certainly don't want to make it easy for enterprise users to turn off their agents, and individual users creating agents on their own rarely think about lifecycle management. Even IT teams deploying AI agents typically don't think about the entire lifespan of an AI agent.
"In many cases, people are treating AI as a set-it-and-forget-it solution," says Matt Keating, head of AI security in Booz Allen Hamilton's commercial business, adding that while setting up the agents is a technical challenge, ongoing risk management is a cross-disciplinary one. "It demands cross-functional collaboration spanning compliance, cybersecurity, legal, and business leadership."
And agent management shouldn't just be about changes in performance or evolving business needs. "What's equally if not more important is knowing when an agent or AI system needs to be replaced," he says. Doing it right will help protect a company's business and reputation, and deliver sustainable value.
Another source of zombie agents is failed pilot projects that never officially shut down. "Some pilots never die even though they fail. They just keep going because people keep trying to make them work," says SSA's Kramer.
There needs to be a mechanism to end pilots that arenโt working, even if thereโs still money left in the budget.
"Failing fast is a lesson that people still haven't learned," he says. "There have to be stage gates that allow you to stop. Kill your pilots that aren't working and have a more rigorous understanding of what you're trying to do before you get started."
Another challenge to sunsetting AI agents is that there's a temptation to manage by disaster. Agents are retired only when something goes visibly wrong, especially if the problem becomes public. That can leave other agents flying under the radar.
"AI projects don't fail suddenly, but they do decay quietly," says David Brudenell, executive director at Decidr, an agentic AI vendor.
He recommends that enterprises plan ahead and decide on the criteria under which an agent should be either retrained or retired: for example, if performance falls below the company's tolerance for error.
"Every AI project has a half-life," he says. "Smart teams run scheduled reviews every quarter, just like any other asset audit." And it's the business unit that should make the decision to pull the plug, he adds. "Data and engineering teams support, but the business decides when performance declines," he says.
The biggest mistake is treating AI as a one-time install. "Many companies have deployed a model and moved on, assuming it will self-sustain," says Brudenell. "But AI systems accumulate organizational debt the same way old code does."
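Brudenell's criteria (retrain when performance falls below the error tolerance, retire when the business no longer needs the task) can be expressed as a simple review gate run on a schedule. This is a hypothetical sketch; the field names and the retrain-versus-retire rule are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class ReviewInput:
    error_rate: float        # observed over the review window
    error_tolerance: float   # the business unit's agreed ceiling
    still_needed: bool       # does the business still want this task done?

def quarterly_review(inp: ReviewInput) -> str:
    """Return the action for one agent: 'keep', 'retrain', or 'retire'."""
    if not inp.still_needed:
        return "retire"                      # obsolete regardless of accuracy
    if inp.error_rate > inp.error_tolerance:
        return "retrain"                     # outside tolerance: fix or replace
    return "keep"
```

Consistent with the article's point, the thresholds here belong to the business unit; data and engineering teams only supply the measured error rate.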
Experian is looking at agents from both an inventory and a lifecycle management perspective to ensure they donโt start proliferating beyond control.
"We're concerned," says Rodrigues. "We learned that from APIs and microservices, and now we have much better governance in place. We don't just want to create a lot of agents."
Experian has created an AI agent marketplace so the company has visibility into its agents and tracks how they're used. "It gives us all the information we need, including the capability of sunsetting agents we're not using anymore," he says.
Lifecycle management for AI agents is an outgrowth of the company's application lifecycle management process.
"An agent is an application," says Rodrigues. "And for each application at Experian, there's an owner, and we track that as part of our technology. Everything that becomes obsolete, we sunset. We have regular reviews that are part of the policy we have in place for the lifecycle."