
7 challenges IT leaders will face in 2026

Today’s CIOs face increasing expectations on multiple fronts: They’re driving operational and business strategy while simultaneously leading AI initiatives and balancing related compliance and governance concerns.

Additionally, Ranjit Rajan, vice president and head of research at IDC, says CIOs will be called to justify previous investment in automation while managing related costs.

“CIOs will be tasked with creating enterprise AI value playbooks, featuring expanded ROI models to define, measure, and showcase impact across efficiency, growth, and innovation,” Rajan says.

Meanwhile, tech leaders who spent the past decade or more focused on digital transformation are now driving cultural change within their organizations. CIOs emphasize that transformation in 2026 requires a focus on people as well as technology.

Here’s how CIOs say they’re preparing to address and overcome these and other challenges in 2026.

Talent gap and training

The challenge CIOs cite most often is a persistent and widening shortage of tech talent. Because it’s impossible to meet their objectives without the right people to execute them, tech leaders are training internally as well as exploring non-traditional paths for new hires.

In CIO’s most recent State of the CIO survey, conducted in 2025, more than half of respondents said staffing and skills shortages “took time away from more strategic and innovation pursuits.” Tech leaders expect that trend to continue in 2026.

“As we look at our talent roadmap from an IT perspective, we feel like AI, cloud, and cybersecurity are the three areas that are going to be extremely pivotal to our organizational strategy,” says Josh Hamit, CIO of Altra Federal Credit Union.

The company will address the need by bringing in specialized talent where necessary and helping existing staff expand their skillsets, Hamit says. “As an example, traditional cybersecurity professionals will need upskilling to properly assess the risks of AI and understand the different attack vectors,” he adds.

Pegasystems CIO David Vidoni has had success identifying staff with a mix of technology and business skills and then pairing them with AI experts who can mentor them.

“We’ve found that business-savvy technologists with creative mindsets are best positioned to effectively apply AI to business situations with the right guidance,” Vidoni says. “After a few projects, new people can quickly become self-sufficient and make a greater impact on the organization.”

Daryl Clark, CTO of Washington Trust, says the financial services company has moved away from degree requirements and focused on demonstrated competencies. He says they’ve had luck partnering with Year Up United, a nonprofit that offers job training for young people.

“We currently have seven full-time employees in our IT department who started with us as Year Up United interns,” Clark says. “One of them is now an assistant vice president of information assurance. It’s a proven pathway for early-career talent to enter technology roles, gain mentorship, and grow into future high-impact contributors.”

Coordinated AI integration

CIOs say in 2026 AI must move from experimentation and pilot projects to a unified approach that shows measurable results. Specifically, tech leaders say a comprehensive AI plan should integrate data, workflows, and governance rather than relying on scattered initiatives that are more likely to fail.

By 2026, 40% of organizations will miss their AI goals, IDC’s Rajan predicts. Why? “Implementation complexity, fragmented tools, and poor lifecycle integration,” he says, factors that are prompting CIOs to increase investment in unified platforms and workflows.

“We simply cannot afford more AI investments that operate in the dark,” says Flexera CIO Conal Gallagher. “Success with AI today depends on discipline, transparency, and the ability to connect every dollar spent to a business result.”

Trevor Schulze, CIO of Genesys, argues AI pilot programs weren’t wasted — as long as they provide lessons that can be applied going forward to drive business value.

“Those early efforts gave CIOs critical insight into what it takes to build the right foundations for the next phase of AI maturity. The organizations that rapidly apply those lessons will be best positioned to capture real ROI,” Schulze says.

Governance for rapidly expanding AI efforts

IDC’s Rajan says that by the end of the decade organizations will see lawsuits, fines, and CIO dismissals due to disruptions from inadequate AI controls. As a result, CIOs say, governance has become an urgent concern — not an afterthought.

“The biggest challenge I’m preparing for in 2026 is scaling AI enterprise-wide without losing control,” says Barracuda CIO Siroui Mushegian. “AI requests flood in from every department. Without proper governance, organizations risk conflicting data pipelines, inconsistent architectures, and compliance gaps that undermine the entire tech stack.”

To stay on top of the requests, Mushegian created an AI council that prioritizes projects, determines business value, and ensures compliance.

“The key is building governance that encourages experimentation rather than bottlenecking it,” she says. “CIOs need frameworks that give visibility and control as they scale, especially in industries like finance and healthcare where regulatory pressures are intensifying.”

Morgan Watts, vice president of IT and business systems at cloud-based VoIP company 8×8, says AI-generated code has accelerated productivity and freed up IT teams for other important tasks such as improving user experience. But those gains come with risks.

“Leading IT organizations are adapting existing guardrails around model usage, code review, security validation, and data integrity,” Watts says. “Scaling AI without governance invites cost overruns, trust issues, and technical debt, so embedding safeguards from the beginning is essential.”

Aligning people and culture

CIOs say one of their top challenges is aligning their organization’s people and culture with the rapid pace of change. Technology, always fast-moving, is now outpacing teams’ ability to keep up. AI in particular requires staff who work responsibly and securely.

Maria Cardow, CIO of cybersecurity company LevelBlue, says organizations often mistakenly believe technology can solve anything if they just choose the right tool. This leads to a lack of attention and investment in people.

“The key is building resilient systems and resilient people,” she says. “That means investing in continuous learning, integrating security early in every project, and fostering a culture that encourages diverse thinking.”

Rishi Kaushal, CIO of digital identity and data protection services company Entrust, says he’s preparing for 2026 with a focus on cultural readiness, continuous learning, and preparing people and the tech stack for rapid AI-driven changes.

“The CIO role has moved beyond managing applications and infrastructure,” Kaushal says. “It’s now about shaping the future. As AI reshapes enterprise ecosystems, accelerating adoption without alignment risks technical debt, skills gaps, and greater cyber vulnerabilities. Ultimately, the true measure of a modern CIO isn’t how quickly we deploy new applications or AI — it’s how effectively we prepare our people and businesses for what’s next.”

Balancing cost and agility

CIOs say 2026 will bring an end to unchecked spending on AI projects, with cost discipline going hand in hand with strategy and innovation.

“We’re focusing on practical applications of AI that augment our workforce and streamline operations,” says Pegasystems’ Vidoni. “Every technology investment must be aligned with business goals and financial discipline.”

When modernizing applications, Vidoni argues that teams need to stay outcome-focused, phasing in improvements that directly support their goals.

“This means application modernization and cloud cost-optimization initiatives are required to stay competitive and relevant,” he says. “The challenge is to modernize and become more agile without letting costs spiral. By empowering an organization to develop applications faster and more efficiently, we can accelerate modernization efforts, respond more quickly to the pace of tech change, and maintain control over cloud expenditures.”

Tech leaders also face challenges in driving efficiency through AI while vendors are increasing prices to cover their own investments in the technology, says Mark Troller, CIO of Tangoe.

“Balancing these competing expectations — to deliver more AI-driven value, absorb rising costs, and protect customer data — will be a defining challenge for CIOs in the year ahead,” Troller says. “Complicating matters further, many of my peers in our customer base are embracing AI internally but are understandably drawing the line that their data cannot be used in training models or automation to enhance third-party services and applications they use.”

Cybersecurity

Marc Rubbinaccio, vice president of information security at Secureframe, expects a dramatic shift in the sophistication of security attacks, which will look nothing like today’s phishing attempts.

“In 2026, we’ll see AI-powered social engineering attacks that are indistinguishable from legitimate communications,” Rubbinaccio says. “With social engineering linked to almost every successful cyberattack, threat actors are already using AI to clone voices, copy writing styles, and generate deepfake videos of executives.”

Rubbinaccio says these attacks will require adaptive, behavior-based detection and identity verification along with simulations tailored to AI-driven threats.

In the most recent State of the CIO survey, about a third of respondents said they anticipated difficulty in finding cybersecurity talent who can address modern attacks.

“We feel it’s extremely important for our team to look at training and certifications that drill down into these areas,” says Altra’s Hamit. He suggests certifications such as ISACA’s Advanced in AI Security Management (AAISM) and the upcoming Advanced in AI Risk (AAIR).

Managing workload and rising demands on CIOs

Pegasystems’ Vidoni says it’s an exciting time, as AI prompts CIOs to solve problems in new ways. The role requires blending strategy, business savvy, and day-to-day operations. At the same time, the pace of transformation can lead to increased workload and stress.

“My approach is simple: Focus on the highest-priority initiatives that will drive better outcomes through automation, scale, and end-user experience. By automating manual, repetitive tasks, we free up our teams to focus on higher-value, more engaging work,” he says. “Ultimately, the CIO of 2026 must be a business leader first and a technologist second. The challenge is leading organizations through a cultural and operational shift — using AI not just for efficiency, but to build a more agile, intelligent, and human-centric enterprise.”

Get one year of access to one of our favorite budgeting apps for only $50

A new year is the perfect time to get your spending in order, and if you're not trying to build your own spreadsheet, budgeting apps are one of the best ways to do it. To save yourself some money in the process, you can pick up a year-long subscription to Monarch Money, one of Engadget's favorite budgeting apps, for just $50 if you use code NEWYEAR2026 at checkout and you're a new subscriber. That's a 50 percent discount on the service's normal $100 price.

Monarch Money makes for a capable and detailed budgeting companion. You can use the service via apps for iOS, Android, iPadOS or the web, and Monarch also offers a Chrome extension that can sync your Amazon and Target transactions and automatically categorize them. Like other budgeting apps, Monarch Money lets you connect multiple financial accounts and track your money based on where you spend it over time. Monarch offers two different approaches to budgeting (flexible and category budgeting) depending on what fits your life best, as well as the ability to add a budget widget to your phone so you can see how you're tracking that month.

How budgeting apps turn your raw transactions into visuals you can understand at a glance is one of the big things that differentiates one app from another, and Monarch Money offers multiple graphs and charts for things like spending, investments or categories of your choice, based on how you've labelled your expenses. The app can also track your and your partner's spending all in one place, making it easier to plan together.

The main drawbacks Engadget found in testing Monarch Money were the app's learning curve and the differences in features (and bugginess) between Monarch's web and mobile versions. Still, at 50 percent off, Monarch Money is well worth experimenting with if you're trying to save money in 2026, especially if you want to do it collaboratively with a partner.


This article originally appeared on Engadget at https://www.engadget.com/apps/get-one-year-of-access-to-one-of-our-favorite-budgeting-apps-for-only-50-204507787.html?src=rss

[Image: A chart showing a user's cash flow in the Monarch Money app. Credit: Monarch Money]

Beyond the cloud bill: The hidden operational costs of AI governance

In my work helping large enterprises deploy AI, I keep seeing the same story play out. A brilliant data science team builds a breakthrough model. The business gets excited, but then the project hits a wall: a wall built of fear and confusion that lives at the intersection of cost and risk. Leadership asks two questions that nobody seems equipped to answer at once: “How much will this cost to run safely?” and “How much risk are we taking on?”

The problem is that the people responsible for cost and the people responsible for risk operate in different worlds. The FinOps team, reporting to the CFO, is obsessed with optimizing the cloud bill. The governance, risk and compliance (GRC) team, answering to the chief risk officer, is focused on legal exposure. And the AI and MLOps teams, driven by innovation under the CTO, are caught in the middle.

This organizational structure leads to projects that are either too expensive to run or too risky to deploy. The solution is not better FinOps or stricter governance in isolation; it is the practice of managing AI cost and governance risk as a single, measurable system rather than as competing concerns owned by different departments. I call this “responsible AI FinOps.”

To understand why this system is necessary, we first have to unmask the hidden costs that governance imposes long before a model ever sees a customer.

Phase 1: The pre-deployment costs of governance

The first hidden costs appear during development, in what I call the development rework cost. In regulated industries, a model must not only be accurate; it must also be proven to be fair. It is a common scenario: a model clears every technical accuracy benchmark, only to be flagged for noncompliance during the final bias review.

That flag forces the team back to square one, leading to weeks or months of rework: resampling data, re-engineering features and retraining the model, all of which burns expensive developer time and delays time-to-market. As I detailed in a recent VentureBeat article, this rework is a primary driver of the velocity gap that stalls AI strategies.

Even when a model works perfectly, regulated industries demand a mountain of paperwork. Teams must create detailed records explaining exactly how the model makes decisions and where its data comes from. You won’t see this expense on a cloud invoice, but it is a major cost, measured in the salary hours of your most senior experts.

These aren’t just technical problems; they’re a financial drain caused by the lack of a standardized AI governance process.

Phase 2: The recurring operational costs in production

Once a model is deployed, the governance costs become a permanent part of the operational budget.

The explainability overhead

For high-risk decisions, governance mandates that every prediction be explainable. While the libraries used to achieve this (like the popular SHAP and LIME) are open source, they are not free to run. They are computationally intensive. In practice, this means running a second, heavy algorithm alongside your main model for every single transaction. This can easily double the compute resources and latency, creating a significant and recurring governance overhead on every prediction.
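To make that overhead concrete, here is a minimal sketch, assuming a scikit-learn tree ensemble and the open-source shap package; the function names and the timing harness are illustrative, not a prescribed implementation:

```python
# Illustrative sketch of the per-prediction explainability overhead,
# assuming a scikit-learn tree ensemble and the open-source shap library.
import time

import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Train a stand-in model; in production this would be the deployed model.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)  # built once, reused for every request

def score_only(row: np.ndarray) -> float:
    """Plain prediction: the compute you pay for without governance."""
    return float(model.predict_proba(row.reshape(1, -1))[0, 1])

def score_with_explanation(row: np.ndarray):
    """Prediction plus per-feature SHAP attributions, run on every transaction."""
    prob = score_only(row)
    contributions = explainer.shap_values(row.reshape(1, -1))
    return prob, contributions

# Rough timing of the extra work on a single transaction.
row = X[0]
t0 = time.perf_counter(); score_only(row); t1 = time.perf_counter()
score_with_explanation(row); t2 = time.perf_counter()
print(f"predict only: {(t1 - t0) * 1e3:.1f} ms | with SHAP: {(t2 - t1) * 1e3:.1f} ms")
```

The exact ratio depends on the model and explainer, but the structural point holds: every scored transaction now runs two workloads instead of one.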

The continuous monitoring burden

Standard MLOps involves monitoring for performance drift (e.g., is the model getting less accurate?). But AI governance adds a second, more complex layer: governance monitoring. This means constantly checking for bias drift (e.g., is the model becoming unfair to a specific group over time?) and explainability drift. This requires a separate, always-on infrastructure that ingests production data, runs statistical tests and stores results, adding a continuous and independent cost stream to the project.
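As a deliberately simplified illustration of what one such governance check might look like, the sketch below computes a demographic-parity gap over a window of logged decisions; the column names, groups, and the 0.1 threshold are assumptions for the example, not a standard:

```python
# Minimal sketch of a bias-drift check over a window of production decisions.
# Column names, groups, and the threshold are illustrative assumptions.
import pandas as pd

def demographic_parity_gap(window: pd.DataFrame,
                           group_col: str = "group",
                           pred_col: str = "approved") -> float:
    """Difference in approval rate between the best- and worst-treated group."""
    rates = window.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

def check_bias_drift(window: pd.DataFrame, threshold: float = 0.1) -> bool:
    """Return True if the window breaches the fairness threshold."""
    return demographic_parity_gap(window) > threshold

# Example: one week of logged decisions for two groups.
window = pd.DataFrame({
    "group": ["A"] * 500 + ["B"] * 500,
    "approved": [1] * 400 + [0] * 100 + [1] * 300 + [0] * 200,
})
print(demographic_parity_gap(window))  # 0.80 - 0.60 = 0.20
print(check_bias_drift(window))        # True -> alert the governance team
```

A real pipeline would run checks like this on a schedule against live data, store the results for audit, and page the governance team on a breach, which is exactly the always-on cost stream described above.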

The audit and storage bill

To be auditable, you must log everything. In finance, regulations from bodies like FINRA require member firms to adhere to SEC rules for electronic recordkeeping, which can mandate retention for at least six years in a non-erasable format. This means every prediction, input and model version creates a data artifact that incurs a storage cost, a cost that grows every single day for years.
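A minimal sketch of what such an append-only audit trail can look like follows; the record schema, file layout, and volume figures are invented for illustration, and a real deployment would write to WORM-compliant object storage rather than a local file:

```python
# Sketch of an append-only audit trail for model decisions.
# Schema and cost figures are illustrative assumptions, not a standard.
import json
import time
from pathlib import Path

AUDIT_LOG = Path("audit/predictions.jsonl")  # in practice: non-erasable storage

def log_prediction(model_version: str, features: dict, prediction: float) -> None:
    """Append one immutable audit record per scored transaction."""
    record = {
        "ts": time.time(),
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
    }
    AUDIT_LOG.parent.mkdir(parents=True, exist_ok=True)
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")

# Back-of-the-envelope growth: ~1 KB per record at 10M predictions a month,
# retained for 6 years, is roughly 10e6 * 12 * 6 KB ≈ 720 GB per model.
log_prediction("credit-risk-v3", {"income": 82000, "dti": 0.31}, 0.87)
```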

The regulated vs. non-regulated divide: Why a social media app and a bank can’t use the same AI playbook

Not all AI is created equal, and the failure to distinguish between use cases is a primary source of budget and risk misalignment. The so-called governance taxes I described above are not universally applied because the stakes are vastly different.

Consider a non-regulated use case, like a video recommendation engine on a social media app. If the model recommends a video I don’t like, the consequence is trivial; I simply scroll past it. The cost of a bad prediction is nearly zero. The MLOps team can prioritize speed and engagement metrics, with a relatively light touch on governance.

Now consider a regulated use case I frequently encounter: an AI model used for mortgage underwriting at a bank. A biased model that unfairly denies loans to a protected class doesn’t just create a bad customer experience, it can trigger federal investigations, multimillion-dollar fines under fair lending laws and a PR catastrophe. In this world, explainability, bias monitoring and auditability are not optional; they are non-negotiable costs of doing business. This fundamental difference is why a one-size-fits-all AI platform dictated solely by the MLOps, FinOps or GRC team is doomed to fail.

Responsible AI FinOps: A practical playbook for unifying cost and risk

Bridging the gap between the CFO, CRO and CTO requires a new operating model built on shared language and accountability.

  1. Create a unified language with new metrics. FinOps tracks business metrics like cost per user and technical metrics like cost per inference or cost per API call. Governance tracks risk exposure. A responsible AI FinOps approach fuses these by creating metrics like cost per compliant decision (a toy calculation of this metric follows after this list). In my own research, I’ve focused on metrics that quantify not just the cost of retraining a model, but the cost-benefit of that retraining relative to the compliance lift it provides.
  2. Build a cross-functional tiger team. Instead of siloed departments, leading organizations are creating empowered pods that include members from FinOps, GRC and MLOps. This team is jointly responsible for the entire lifecycle of a high-risk AI product; its success is measured by the overall risk-adjusted profitability of the system. This team should not only define cross-functional AI cost governance metrics, but also standards that every engineer, scientist and operations team has to follow for every AI model across the organization.
  3. Invest in a unified platform. The market is responding to this need: the explosive growth of the MLOps market, which Fortune Business Insights projects will reach nearly $20 billion by 2032, reflects the demand for a unified, enterprise-level control plane for AI. The right platform provides a single dashboard where the CTO sees model performance, the CFO sees the associated cloud spend and the CRO sees real-time compliance status.
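Below is a toy calculation of the cost-per-compliant-decision metric from point 1; the dollar amounts and decision counts are invented solely to show the arithmetic:

```python
# Toy illustration of a fused FinOps/governance metric. All numbers are invented.
def cost_per_compliant_decision(total_run_cost: float,
                                compliant_decisions: int) -> float:
    """Total model-running spend divided by decisions that passed governance checks."""
    if compliant_decisions == 0:
        return float("inf")
    return total_run_cost / compliant_decisions

# $42,000/month of compute, explainability and monitoring spend;
# 1.2M decisions served, of which 1.14M cleared bias and explainability checks.
print(f"per compliant decision: ${cost_per_compliant_decision(42_000, 1_140_000):.4f}")
print(f"per raw decision:       ${42_000 / 1_200_000:.4f}")
```

The gap between the two numbers is the price of non-compliant output: spend that FinOps counts but the business cannot actually use.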

The organizational challenge

The greatest barrier to realizing the value of AI is no longer purely technical; it is organizational. The companies that win will be those that break down the walls between their finance, risk and technology teams.

They will recognize that (a) you cannot optimize cost without understanding risk; (b) you cannot manage risk without quantifying its cost; and (c) you can achieve neither without a deep engineering understanding of how the model actually works. By embracing a fused responsible AI FinOps discipline, leaders can finally stop the alarms from ringing in separate buildings and start conducting a symphony of innovation that is both profitable and responsible.

This article is published as part of the Foundry Expert Contributor Network.
