
The New Rules of Cyber Resilience in an AI-Driven Threat Landscape

23 January 2026 at 11:03

For years, cybersecurity strategy revolved around a simple goal: keep attackers out. That mindset no longer matches reality. Today’s threat landscape assumes compromise. Adversaries do not just encrypt data and demand payment. They exfiltrate it, resell it, reuse it, and weaponize it long after the initial breach. As we look toward 2026, cyber resilience, not…

The post The New Rules of Cyber Resilience in an AI-Driven Threat Landscape appeared first on Security Boulevard.

From static workflows to intelligent automation: Architecting the self-driving enterprise

20 January 2026 at 05:15

I want you to think about the most fragile employee in your organization. They don’t take coffee breaks, they work 24/7 and they cost a fortune to recruit. But if a button on a website moves a few pixels to the right, this employee has a complete mental breakdown and stops working entirely.

I am talking, of course, about your RPA (robotic process automation) bots.

For the last few years, I have observed IT leaders, CIOs and business leaders pour millions into what we call automation. We’ve hired armies of consultants to draw architecture diagrams and map out every possible scenario. We’ve built rigid digital train tracks, convinced that if we just laid enough rail, efficiency would follow.

But we didn’t build resilience. We built fragility.

As an AI solution architect, I see the cracks in this foundation every day. The strategy for 2026 isn’t just about adopting AI; it is about attacking the fragility of traditional automation. The era of deterministic, rule-based systems is ending. We are witnessing the death of determinism and the rise of probabilistic systems — what I call the shift from static workflows to intelligent automation.

The fragility tax of old automation

There is a painful truth we need to acknowledge: Your current bot portfolio is likely a liability.

In my experience and architectural practice, I frequently encounter what I call the fragility tax. This is the hidden cost of maintaining deterministic bots in a dynamic world. The industry rule of thumb, one that I see validated in budget sheets constantly, is that for every $1 you spend on BPA licenses, you end up spending $3 on maintenance.

Why? Because traditional BPA is blind. It doesn’t understand the screen it is looking at; it only understands coordinates (x, y). It doesn’t understand the email it is reading; it only scrapes for keywords. When the user interface updates or the vendor changes an invoice format, the bot crashes.

I recall a disaster with an enterprise client who had an automated customer engagement process. It was a flagship project. It worked perfectly until the third-party system provider updated their solution. The submit button changed from green to blue. The bot, which was hardcoded to look for green pixels at specific coordinates, failed silently.

But fragility isn’t just about pixel colors. It is about the fragility of trust in external platforms.

We often assume fragility only applies to bad code, but it also applies to our dependencies. Even the vanguard of the industry isn’t immune. In September 2024, OpenAI’s official newsroom account on X (formerly Twitter) was hijacked by scammers promoting a crypto token.

Think about the irony: The company building the most sophisticated intelligence in human history was momentarily compromised not by a failure of their neural networks, but by the fragility of a third-party platform. This is the fragility tax in action. When you build your enterprise on deterministic connections to external platforms you don’t control, you inherit their vulnerabilities. If you had a standard bot programmed to retweet @OpenAINewsroom, you would have automatically amplified a scam to your entire customer base.

The old way of scripting cannot handle this volatility. We spent years trying to predict the future and hard-code it into scripts. But the world is too chaotic for scripts. We need architecture that can heal itself.

The architectural pivot: From rules to goals

To capture the value of intelligent automation (IA), you must frame it as an architectural paradigm shift, not just a software upgrade. We are moving from task automation (mimicking hands) to decision automation (mimicking brains).

When I architect these systems, I look not only for rules but also for goals.

In the old paradigm, we gave the computer a script: Click button A, then type text B, then wait 5 seconds. In the new paradigm, we use cognitive orchestrators. We give the AI a goal: Get this form submitted, and let it work out the steps.

The difference is profound. If the submit button turns blue, a goal-based system using a large language model (LLM) and vision capabilities sees the button. It understands that despite the color change, it is still the submission mechanism. It adjusts its own path to achieving the goal.

Think of it like the difference between a train and an off-road vehicle. A train is fast and efficient, but it requires expensive infrastructure (tracks) and cannot steer around a rock on the line. Intelligent automation is the off-road vehicle. It uses sensors to perceive the environment. If it sees a rock, it doesn’t derail; it decides to go around it.

This isn’t magic; it’s a specific architectural pattern. The tech stack required to support this is fundamentally different from what most CIOs currently have installed. It is no longer just a workflow engine. The new stack requires three distinct components working in concert:

  1. The workflow engine: The hands that execute actions.
  2. The reasoning layer (LLM): The brain that figures out the steps dynamically and handles the logic.
  3. The vector database: The memory that stores context, past experiences and embedded data to reduce hallucinations.

By combining these, we move from brittle scripts to resilient agents.
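As a rough sketch of how these three components might be wired together, consider the following. All class names, method signatures and step strings here are hypothetical illustrations, not any vendor's API; in production, the reasoning layer would be an LLM call and the memory a real vector database.

```python
from dataclasses import dataclass, field


@dataclass
class VectorMemory:
    """The memory layer: stores past context for retrieval."""
    entries: list = field(default_factory=list)

    def recall(self, query: str, k: int = 3) -> list:
        # A real system would embed `query` and run a nearest-neighbor
        # search; substring matching keeps this sketch self-contained.
        return [e for e in self.entries if query.lower() in e.lower()][:k]


class ReasoningLayer:
    """The brain: in production, an LLM that plans steps dynamically."""
    def plan(self, goal: str, context: list) -> list:
        # An LLM would derive these steps from the goal and the recalled
        # context; hard-coded placeholders stand in for that call here.
        return [f"locate the UI element for: {goal}",
                f"perform: {goal}",
                "verify the outcome"]


class WorkflowEngine:
    """The hands: executes each planned step against real systems."""
    def run(self, steps: list) -> list:
        return [f"done: {step}" for step in steps]


def achieve(goal: str, memory: VectorMemory) -> list:
    """Goal in, executed steps out: the agentic loop in miniature."""
    context = memory.recall(goal)
    steps = ReasoningLayer().plan(goal, context)
    return WorkflowEngine().run(steps)
```

The point of the shape, not the stub logic, is what matters: the engine never sees a hard-coded script; it only ever executes whatever plan the reasoning layer produced for the goal, informed by memory.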

Breaking the unstructured data barrier

The most significant limitation of the old way was its inability to handle unstructured data. We know that roughly 80% of enterprise data is unstructured, locked away in PDFs, email threads, Slack and MS Teams chats, and call logs. Traditional business process automation cannot touch this. It requires structured inputs: rows and columns.

This is where the multi-modal understanding of intelligent automation changes the architecture.

I urge you to adopt a new mantra: Data entry is dead. Data understanding is the new standard.

I am currently designing architectures where the system doesn’t just move a PDF from folder A to folder B. It reads the PDF. It understands the sentiment of the email attached to it. It extracts the intent from the call log referenced in the footer.

Consider a complex claims-processing scenario. In the past, a human had to manually review a handwritten accident report, cross-reference it with a policy PDF and check a photo of the damage. A deterministic bot is useless here because the inputs are never the same twice.

Intelligent automation changes the equation. It can ingest the handwritten note (using OCR), analyze the photo (using computer vision) and read the policy (using an LLM). It synthesizes these disparate, messy inputs into a structured claim object. It turns chaos into order.
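The synthesis step can be sketched as follows. The three extractor functions are stand-ins for real OCR, computer-vision and LLM calls, and their names and hard-coded outputs are illustrative assumptions only; the point is that three messy modalities collapse into one structured object.

```python
from dataclasses import dataclass


@dataclass
class Claim:
    """The structured claim object distilled from unstructured inputs."""
    policy_id: str
    incident: str
    damage: str
    needs_review: bool


# Stand-ins for real OCR, computer-vision and LLM extraction calls.
def ocr_handwritten_note(note: bytes) -> str:
    return "rear-end collision at an intersection"


def assess_damage_photo(photo: bytes) -> str:
    return "moderate rear bumper damage"


def extract_policy_id(policy_pdf: bytes) -> str:
    return "POL-0001"


def build_claim(note: bytes, photo: bytes, policy_pdf: bytes) -> Claim:
    """Synthesize three disparate, messy inputs into one claim object."""
    incident = ocr_handwritten_note(note)
    damage = assess_damage_photo(photo)
    policy_id = extract_policy_id(policy_pdf)
    # Route to a human whenever any extractor came back empty.
    needs_review = not (incident and damage and policy_id)
    return Claim(policy_id, incident, damage, needs_review)
```

The design choice worth noticing is the `needs_review` flag: even in a toy version, the structured output carries its own signal for when the synthesis is incomplete and a human should step in.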

This is the difference between digitization (making it electronic) and digitalization (making it intelligent).

Human-in-the-loop as a governance pattern

Whenever we present this self-driving enterprise concept to clients, the immediate reaction is “You want an LLM to talk to our customers?” This is a valid fear. But the answer isn’t to ban AI; it is to architect confidence-based routing.

We don’t hand over the keys blindly. We build governance directly into the code. In this pattern, the AI assesses its own confidence level before acting.

This brings us back to the importance of verification. Why do we need humans in the loop? Because trusted endpoints don’t always stay trusted.

Revisiting the security incident I mentioned earlier: If you had a fully autonomous loop that automatically acted upon every post from a verified partner account, your enterprise would be at risk. A deterministic bot says: Signal comes from a trusted source -> execute.

A probabilistic, governed agent says: Signal comes from a trusted source, but the content deviates 99% from their semantic norm (crypto scam vs. tech news). The confidence score is low. Alert human.

That is the architectural shift we need.

  • Scenario A: The AI is 99% confident it understands the invoice, the vendor matches the master record and the semantics align with past behavior. The system auto-executes.
  • Scenario B: The AI is only 70% confident because the address is slightly different, the image is blurry or the request seems out of character (like the hacked tweet example). The system routes this specific case to a human for approval.
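The routing logic behind the two scenarios can be sketched in a few lines. The 0.95 threshold, the deviation cutoff and the function name are illustrative assumptions, not a standard; in practice the confidence score would come from the model itself and the deviation from a semantic comparison against past behavior.

```python
AUTO_THRESHOLD = 0.95      # below this, a human must approve the action
DEVIATION_CUTOFF = 0.5     # how far content may stray from the sender's norm


def route(confidence: float, semantic_deviation: float = 0.0) -> str:
    """Return 'auto-execute' or 'human-review' for a pending action."""
    if confidence >= AUTO_THRESHOLD and semantic_deviation <= DEVIATION_CUTOFF:
        return "auto-execute"
    return "human-review"
```

Under this sketch, Scenario A (`route(0.99, 0.05)`) auto-executes; Scenario B (`route(0.70)`) escalates; and the hijacked-account case (`route(0.99, 0.99)`, trusted sender but wildly out-of-character content) also escalates, which is exactly the behavior a deterministic bot cannot produce.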

This turns automation into a partnership. The AI handles the mundane, high-volume work and your humans handle the edge cases. It solves the black box problem that keeps compliance officers awake at night.

Kill the zombie bots

If you want to prepare your organization for this shift, you don’t need to buy more software tomorrow. You need to start with an audit.

Look at your current automation portfolio and identify the zombie bots: scripts that are technically alive but need constant intervention to keep moving. These are the bots that fail whenever a vendor updates its software, and that cost you more in fragility tax than they save in labor.

Stop trying to patch them. These are the prime candidates for intelligent automation.

The future belongs to the probabilistic. It belongs to architectures that can reason through ambiguity, handle unstructured chaos and self-correct when the world changes. As leaders, we need to stop building trains and start building off-road vehicles.

The technology is ready. The question is, are you ready to let go of the steering wheel?

Disclaimer: This and any related publications are provided in the author’s personal capacity and do not represent the views, positions or opinions of the author’s employer or any affiliated organization.

This article is published as part of the Foundry Expert Contributor Network.


10 top priorities for CIOs in 2026

19 January 2026 at 05:01

A CIO’s wish list is typically long and costly. Fortunately, by establishing reasonable priorities, it’s possible to keep pace with emerging demands without draining your team or budget.

As 2026 arrives, CIOs need to take a step back and consider how they can use technology to help reinvent their wider business while running their IT capabilities with a profit and loss mindset, advises Koenraad Schelfaut, technology strategy and advisory global lead at business advisory firm Accenture. “The focus should shift from ‘keeping the lights on’ at the lowest cost to using technology … to drive topline growth, create new digital products, and bring new business models faster to market.”

Here’s an overview of what should be at the top of your 2026 priorities list.

1. Strengthening cybersecurity resilience and data privacy

Enterprises are increasingly integrating generative and agentic AI deep into their business workflows, spanning all critical customer interactions and transactions, says Yogesh Joshi, senior vice president of global product platforms at consumer credit reporting firm TransUnion. “As a result, CIOs and CISOs must expect bad actors will use these same AI technologies to disrupt these workflows to compromise intellectual property, including customer sensitive data and competitively differentiated information and assets.”

Cybersecurity resilience and data privacy must be top priorities in 2026, Joshi says. He believes that as enterprises accelerate their digital transformation and increasingly integrate AI, the risk landscape will expand dramatically. “Protecting sensitive data and ensuring compliance with global regulations is non-negotiable,” Joshi states.

2. Consolidating security tools

CIOs should prioritize re-baselining their foundations to capitalize on the promise of AI, says Arun Perinkolam, Deloitte’s US cyber platforms and technology, media, and telecommunications industry leader. “One of the prerequisites is consolidating fragmented security tools into unified, integrated, cyber technology platforms — also known as platformization.”

Perinkolam says a consolidation shift will move security from a patchwork of isolated solutions to an agile, extensible foundation fit for rapid innovation and scalable AI-driven operations. “As cyber threats become increasingly sophisticated, and the technology landscape evolves, integrating cybersecurity solutions into unified platforms will be crucial,” he says.

“Enterprises now face a growing array of threats, resulting in a sprawling set of tools to manage them,” Perinkolam notes. “As adversaries exploit fractured security postures, delaying platformization only amplifies these risks.”

3. Ensuring data protection

To take advantage of enhanced efficiency, speed, and innovation, organizations of all types and sizes are now racing to adopt new AI models, says Parker Pearson, chief strategy officer at data privacy and preservation firm Donoma Software.

“Unfortunately, many organizations are failing to take the basic steps necessary to protect their sensitive data before unleashing new AI technologies that could potentially be left exposed,” she warns, adding that in 2026 “data privacy should be viewed as an urgent priority.”

Implementing new AI models can raise significant concerns around how data is collected, used, and protected, Pearson notes. These issues arise across the entire AI lifecycle, from how data is used for initial training to ongoing interactions with the model. “Until now, the choices for most enterprises are between two bad options: either ignore AI and face the consequences in an increasingly competitive marketplace; or implement an LLM that could potentially expose sensitive data,” she says. Both options, she adds, can result in an enormous amount of damage.

The question for CIOs is not whether to implement AI, but how to derive optimal value from AI without placing sensitive data at risk, Pearson says. “Many CIOs confidently report that their organization’s data is either ‘fully’ or ‘end to end’ encrypted.” Yet Pearson believes that true data protection requires continuous encryption that keeps information secure during all states, including when it’s being used. “Until organizations address this fundamental gap, they will continue to be blindsided by breaches that bypass all their traditional security measures.”

Organizations that implement privacy-enhancing technology today will have a distinct advantage in implementing future AI models, Pearson says. “Their data will be structured and secured correctly, and their AI training will be more efficient right from the start, rather than continually incurring the expense, and risk of retraining their models.”

4. Focusing on team identity and experience

A top priority for CIOs in 2026 should be resetting their enterprise identity and employee experience, says Michael Wetzel, CIO at IT security software company Netwrix. “Identity is the foundation of how people show up, collaborate, and contribute,” he states. “When you get identity and experience right, everything else, including security, productivity, and adoption, follows naturally.”

Employees expect a consumer-grade experience at work, Wetzel says. “If your internal technology is clunky, they simply won’t use it.” When people work around IT, the organization loses both security and speed, he warns. “Enterprises that build a seamless, identity-rooted experience will innovate faster while organizations that don’t will fall behind.”

5. Navigating increasingly costly ERP migrations

Effectively navigating costly ERP migrations should be at the top of the CIO agenda in 2026, says Barrett Schiwitz, CIO at invoice lifecycle management software firm Basware. “SAP S/4HANA migrations, for instance, are complex and often take longer than planned, leading to rising costs.” He notes that upgrades can cost enterprises upwards of $100 million, rising to as much as $500 million depending on the ERP’s size and complexity.

The problem is that while ERPs try to do everything, they rarely perform specific tasks, such as invoice processing, really well, Schiwitz says. “Many businesses overcomplicate their ERP systems, customizing them with lots of add-ons that further increase risk.” The answer, he suggests, is adopting a “clean core” strategy that lets SAP do what it does best and then supplement it with best-in-class tools to drive additional value.

6. Doubling down on innovation — and data governance

One of the most important priorities for CIOs in 2026 is architecting a foundation that makes innovation scalable, sustainable, and secure, says Stephen Franchetti, CIO at compliance platform provider Samsara.

Franchetti says he’s currently building a loosely coupled, API-first architecture that’s designed to be modular, composable, and extensible. “This allows us to move faster, adapt to change more easily, and avoid vendor or platform lock-in.” Franchetti adds that in an era where workflows, tools, and even AI agents are increasingly dynamic, a tightly bound stack simply won’t scale.

Franchetti is also continuing to evolve his enterprise data strategy. “For us, data is a long-term strategic asset — not just for AI, but also for business insight, regulatory readiness, and customer trust,” he says. “This means doubling down on data quality, lineage, governance, and accessibility across all functions.”

7. Facilitating workforce transformation

CIOs must prioritize workforce transformation in 2026, says Scott Thompson, a partner at executive search and management consulting company Heidrick & Struggles. “Upskilling and reskilling teams will help develop the next generation of leaders,” he predicts. “The technology leader of 2026 needs to be a product-centric tech leader, ensuring that product, technology, and the business are all one and the same.”

CIOs can’t hire their way out of the talent gap, so they must build talent internally, not simply buy it on the market, Thompson says. “The most effective strategy is creating a digital talent factory with structured skills taxonomies, role-based learning paths, and hands-on project rotations.”

Thompson also believes that CIOs should redesign job roles for an AI-enabled environment and use automation to reduce the amount of specialized labor required. “Forming fusion teams will help spread scarce expertise across the organization, while strong career mobility and a modern engineering culture will improve retention,” he states. “Together, these approaches will let CIOs grow, multiply, and retain the talent they need at scale.”

8. Improving team communication

A CIO’s top priority should be developing sophisticated and nuanced approaches to communication, says James Stanger, chief technology evangelist at IT certification firm CompTIA. “The primary effect of uncertainty in tech departments is anxiety,” he observes. “Anxiety takes different forms, depending upon the individual worker.”

Stanger suggests working more closely with team members, as well as managing anxiety through more effective and relevant training.

9. Strengthening capabilities that drive agility, trust, and scale

Beyond AI, the priority for CIOs in 2026 should be strengthening the enabling capabilities that drive agility, trust, and scale, says Mike Anderson, chief digital and information officer at security firm Netskope.

Anderson feels that the product operating model will be central to this shift, expanding beyond traditional software teams to include foundational enterprise capabilities, such as identity and access management, data platforms, and integration services.

“These capabilities must support both human and non-human identities — employees, partners, customers, third parties, and AI agents — through secure, adaptive frameworks built on least-privileged access and zero trust principles,” he says, noting that CIOs who invest in these enabling capabilities now will be positioned to move faster and innovate more confidently throughout 2026 and beyond.

10. Addressing an evolving IT architecture

In 2026, today’s IT architecture will become a legacy model, unable to support the autonomous power of AI agents, predicts Emin Gerba, chief architect at Salesforce. He believes that in order to effectively scale, enterprises will have to pivot to a new agentic enterprise blueprint with four new architectural layers: a shared semantic layer to unify data meaning, an integrated AI/ML layer for centralized intelligence, an agentic layer to manage the full lifecycle of a scalable agent workforce, and an enterprise orchestration layer to securely manage complex, cross-silo agent workflows.

“This architectural shift will be the defining competitive wedge, separating companies that achieve end-to-end automation from those whose agents remain trapped in application silos,” Gerba says.


What’s in, and what’s out: Data management in 2026 has a new attitude

16 January 2026 at 07:00

The data landscape is shifting faster than most organizations can track. The pace of change is driven by two forces that are finally colliding productively: enterprise data management practices that are maturing and AI platforms that are demanding more coherence, consistency and trust in the data they consume.

As a result, 2026 is shaping up to be the year when companies stop tinkering on the edges and start transforming the core. What is emerging is a clear sense of what is in and what is out for data management, and it reflects a market that is tired of fragmented tooling, manual oversight and dashboards that fail to deliver real intelligence.

So, here’s a list of what’s “In” and what’s “Out” for data management in 2026:

IN: Native governance that automates the work but still relies on human process

Data governance is no longer a bolt-on exercise. Platforms like Unity Catalog, Snowflake Horizon and AWS Glue Catalog are building governance into the foundation itself. This shift is driven by the realization that external governance layers add friction and rarely deliver reliable end-to-end coverage. The new pattern is native automation. Data quality checks, anomaly alerts and usage monitoring run continuously in the background. They identify what is happening across the environment with speed that humans cannot match.

Yet this automation does not replace human judgment. The tools diagnose issues, but people still decide how severity is defined, which SLAs matter and how escalation paths work. The industry is settling into a balanced model. Tools handle detection. Humans handle meaning and accountability. It is a refreshing rejection of the idea that governance will someday be fully automated. Instead, organizations are taking advantage of native technology while reinforcing the value of human decision-making.
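The division of labor described here, tools detect while humans define meaning, can be sketched in a few lines of Python. Everything below (the thresholds, the escalation actions, the field names) is illustrative only, not a reference to any particular platform's API:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical rules table: humans decide what counts as severe and who
# gets paged; the detection pass itself runs continuously and automatically.
SEVERITY_RULES = {
    "null_rate": (0.05, "page data owner"),    # more than 5% nulls escalates
    "staleness_hours": (24, "open incident"),  # data older than one day escalates
}

def run_checks(rows, last_loaded_at, now=None):
    """One detection pass over a table sample; returns (check, value, action)."""
    now = now or datetime.now(timezone.utc)
    null_rate = sum(1 for r in rows if r.get("amount") is None) / max(len(rows), 1)
    staleness = (now - last_loaded_at).total_seconds() / 3600
    findings = []
    if null_rate > SEVERITY_RULES["null_rate"][0]:
        findings.append(("null_rate", null_rate, SEVERITY_RULES["null_rate"][1]))
    if staleness > SEVERITY_RULES["staleness_hours"][0]:
        findings.append(("staleness_hours", staleness, SEVERITY_RULES["staleness_hours"][1]))
    return findings
```

The point of the sketch is the split: the detection loop is cheap enough to run around the clock, but the rules table encodes decisions about severity and escalation that only people can make.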

IN: Platform consolidation and the rise of the post-warehouse lakehouse

The era of cobbling together a dozen specialized data tools is ending. Complexity has caught up with the decentralized mindset. Teams have spent years stitching together ingestion systems, pipelines, catalogs, governance layers, warehouse engines and dashboard tools. The result has been fragile stacks that are expensive to maintain and surprisingly hard to govern.

Databricks, Snowflake and Microsoft see an opportunity and are extending their platforms into unified environments. The Lakehouse has emerged as the architectural north star. It gives organizations a single platform for structured and unstructured data, analytics, machine learning and AI training. Companies no longer want to move data between silos or juggle incompatible systems. What they need is a central operating environment that reduces friction, simplifies security and accelerates AI development. Consolidation is no longer about vendor lock-in. It is about survival in a world where data volumes are exploding and AI demands more consistency than ever.

IN: End-to-end pipeline management with zero ETL as the new ideal

Handwritten ETL is entering its final chapter. Python scripts and custom SQL jobs may offer flexibility, but they break too easily and demand constant care from engineers. Managed pipeline tools are stepping into the gap. Databricks Lakeflow, Snowflake Openflow and AWS Glue represent a new generation of orchestration that covers extraction through monitoring and recovery.

While there is still work to do in handling complex source systems, the direction is unmistakable. Companies want pipelines that maintain themselves. They want fewer moving parts and fewer late-night failures caused by an overlooked script. Some organizations are even bypassing pipelines altogether. Zero ETL patterns replicate data from operational systems to analytical environments instantly, eliminating the fragility that comes with nightly batch jobs. It is an emerging standard for applications that need real-time visibility and reliable AI training data.
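As a rough illustration of the zero-ETL idea, think of replication as a stream of change events applied to the analytical replica as they occur, rather than a nightly batch rebuild. The event shape below is hypothetical:

```python
# Minimal sketch of change-data-capture style replication: each operational
# change event is applied to the replica immediately, so there is no batch
# window to fail overnight. The event format here is invented for illustration.

def apply_change(replica: dict, event: dict) -> None:
    """Apply one insert/update/delete event, keyed by primary key."""
    op, key = event["op"], event["key"]
    if op in ("insert", "update"):
        replica[key] = event["row"]
    elif op == "delete":
        replica.pop(key, None)

replica = {}
stream = [
    {"op": "insert", "key": 1, "row": {"status": "new"}},
    {"op": "update", "key": 1, "row": {"status": "shipped"}},
    {"op": "insert", "key": 2, "row": {"status": "new"}},
    {"op": "delete", "key": 2},
]
for event in stream:
    apply_change(replica, event)
# replica now mirrors the source without any scheduled batch job
```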

IN: Conversational analytics and agentic BI

Dashboards are losing their grip on the enterprise. Despite years of investment, adoption remains low and dashboard sprawl continues to grow. Most business users do not want to hunt for insights buried in static charts. They want answers. They want explanations. They want context.

Conversational analytics is stepping forward to fill the void. Generative BI systems let users describe the dashboard they want or ask an agent to explain the data directly. Instead of clicking through filters, a user might request a performance summary for the quarter or ask why a metric changed. Early attempts at text-to-SQL struggled because they tried to automate the query-writing layer itself. The next wave is different. AI agents now focus on synthesizing insights and generating visualizations on demand. They act less like query engines and more like analysts who understand both the data and the business question.

IN: Vector native storage and open table formats

AI is reshaping storage requirements. Retrieval Augmented Generation depends on vector embeddings, which means that databases must store vectors as first-class objects. Vendors are racing to embed vector support directly in their engines.
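To make "vectors as first-class objects" concrete, here is a minimal brute-force similarity search in plain Python. Production engines index vectors rather than scanning them, but the retrieval idea underpinning RAG is the same; the store contents are made up for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest(query, store, k=1):
    """Return the k document ids most similar to the query embedding."""
    ranked = sorted(store, key=lambda doc_id: cosine(query, store[doc_id]), reverse=True)
    return ranked[:k]

# Toy two-dimensional embeddings; real ones have hundreds of dimensions.
store = {"doc_a": [1.0, 0.0], "doc_b": [0.0, 1.0], "doc_c": [0.9, 0.1]}
```

A vector-native database does exactly this ranking, only with approximate indexes so it scales to billions of embeddings.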

At the same time, Apache Iceberg is becoming the new standard for open table formats. It allows every compute engine to work on the same data without duplication or transformation. Iceberg removes a decade of interoperability pain and turns object storage into a true multi-engine foundation. Organizations finally get a way to future-proof their data without rewriting everything each time the ecosystem shifts.

And here’s what’s “Out”:

OUT: Monolithic warehouses and hyper-decentralized tooling

Traditional enterprise warehouses cannot handle unstructured data at scale and cannot deliver the real-time capabilities needed for AI. Yet the opposite extreme has failed too. The highly fragmented Modern Data Stack scattered responsibilities across too many small tools. It created governance chaos and slowed down AI readiness. Even the rigid interpretation of Data Mesh has faded. The principles live on, but the strict implementation has lost momentum as companies focus more on AI integration and less on organizational theory.

OUT: Hand-coded ETL and custom connectors

Nightly batch scripts break silently, cause delays and consume engineering bandwidth. With replication tools and managed pipelines becoming mainstream, the industry is rapidly abandoning these brittle workflows. Manual plumbing is giving way to orchestration that is always on and always monitored.

OUT: Manual stewardship and passive catalogs

The idea of humans reviewing data manually is no longer realistic. Reactive cleanup costs too much and delivers too little. Passive catalogs that serve as wikis are declining. Active metadata systems that monitor data continuously are now essential.

OUT: Static dashboards and one-way reporting

Dashboards that cannot answer follow-up questions frustrate users. Companies want tools that converse. They want analytics that think with them. Static reporting is collapsing under the weight of business expectations shaped by AI assistants.

OUT: On-premises Hadoop clusters

Maintaining on-prem Hadoop is becoming indefensible. Object storage combined with serverless compute offers elasticity, simplicity and lower cost. The complex zoo of Hadoop services no longer fits the modern data landscape.

Data management in 2026 is about clarity. The market is rejecting fragmentation, manual intervention and analytics that fail to communicate. The future belongs to unified platforms, native governance, vector native storage, conversational analytics and pipelines that operate with minimal human interference. AI is not replacing data management. It is rewriting the rules in ways that reward simplicity, openness and integrated design.

This article is published as part of the Foundry Expert Contributor Network.

Why CIOs need a new approach to unstructured data management

16 January 2026 at 05:00

CIOs everywhere will be familiar with the major issues caused by collecting and retaining data at an increasingly rapid rate. Industry research shows 64% of enterprises manage at least one petabyte of data, creating substantial cost, governance and compliance pressures.

If that wasn’t enough, organizations frequently default to retaining these enormous datasets, even when they are no longer needed. To put this into context, the average useful life of most enterprise data has now shrunk to 30–90 days; however, for various reasons, businesses continue to store it indefinitely, thereby adding to the cost and complexity of their underlying infrastructure.

As much as 90% of this information comes in the form of unstructured data files spread across hybrid, multi-vendor environments with little to no centralized oversight. This can include everything from MS Office docs to the photo and video content routinely used by marketing teams. The list is extensive, stretching to invoices, service reports, log files and, in some organizations, even scans or faxes of hand-written documents, often dating back decades.

In these circumstances, CIOs often lack clear visibility into what data exists, where it resides, who owns it, how old it is or whether it holds any business value. This matters because in many cases, it has tremendous value with the potential to offer insight into a range of important business issues, such as customer behaviour or field quality challenges, among many others.

With the advent of GenAI, it is now realistic to tap the knowledge embedded in all kinds of documents and to retrieve their high-quality (i.e., relevant, useful and correct) content, even from documents of low visual or graphical quality. As a result, running AI on a combination of structured and unstructured input can reconstruct the entire enterprise memory and the so-called “tribal knowledge”.

Visibility and governance

The first point to appreciate is that the biggest challenge is not the amount of data being collected and retained, but the absence of meaningful visibility into what is being stored.

Without an enterprise-wide view (a situation common to many organizations), teams cannot determine which data is valuable, which is redundant, or which poses a risk. In particular, metadata remains underutilised, even though insights such as creation date, last access date, ownership, activity levels and other basic indicators can immediately reveal security risks, duplication, orphaned content and stale data.

Visibility begins by building a thorough understanding of the existing data landscape. This can be done by using tools that scan storage platforms across multi-vendor and multi-location environments, collect metadata at scale, and generate virtual views of datasets. This allows teams to understand the size, age, usage and ownership of their data, enabling them to identify duplicate, forgotten or orphaned files.
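As a toy example of what a metadata scan enables, the sketch below groups files by a content hash to surface duplicates. A real scanner would collect hashes and metadata across multi-vendor storage platforms rather than take file contents directly; the paths and contents here are invented:

```python
import hashlib
from collections import defaultdict

def find_duplicates(files):
    """files: mapping of path -> bytes content.

    Groups files by SHA-256 content hash and returns every group that
    contains more than one path, i.e. the duplicate sets a scan would flag.
    """
    by_hash = defaultdict(list)
    for path, content in files.items():
        by_hash[hashlib.sha256(content).hexdigest()].append(path)
    return [sorted(paths) for paths in by_hash.values() if len(paths) > 1]
```

The same pass can carry size, age and ownership metadata alongside the hash, which is how stale and orphaned content gets flagged at the same time.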

It’s a complex challenge. In most cases, some data will be on-premises, some in the cloud, some stored as files and some as objects (such as S3 or Azure). In these circumstances, the multi-vendor infrastructure approach adopted by many organizations is sound, as it facilitates data redundancy and replication while also protecting against increasingly common cloud outages, such as those seen at Amazon and Cloudflare.

With visibility tools and processes in place, the next requirement is to introduce governance frameworks that bring structure and control to unstructured data estates. Good governance enables CIOs to align information with retention rules, compliance obligations and business requirements, reducing unnecessary storage and risk.

It’s also dependent on effective data classification processes, which help determine which data should be retained, which can be relocated to lower-cost platforms and which no longer serve a purpose. Together, these processes establish clearer ownership and ensure data is handled consistently across the organization while also providing the basis for reliable decision-making by ensuring that data remains accurate. Without it, visibility alone cannot deliver operational or financial benefits, because there is no framework for acting on what the organization discovers.

Lifecycle management

Once CIOs have a clear view of what exists and a framework to control it, they need a practical method for acting on those findings across the data lifecycle. By applying metadata-based policies, teams can migrate older or rarely accessed data to lower-cost platforms, thereby reducing pressure on primary storage. Files that have not been accessed for an extended period can be relocated to more economical systems, while long-inactive data can be archived or removed entirely if appropriate.

A big part of the challenge is that the data lifecycle is now much longer than it used to be, a situation that has profoundly affected how organizations approach storage strategy and spend.

For example, datasets considered ‘active’ will typically be stored on high- or mid-performance systems. Once again, there are both on-premises and cloud options to consider, depending on the use case, but typically they include both file and object requirements.

As time passes (often years), data gradually becomes eligible for archival. It is then moved to an archive venue, where it is better protected but may become less accessible or require more checks before access. Inside the archive, it can (after even more years) be tiered to cheaper storage such as tape. At this point, data retrieval times might range from minutes to hours, or even days. In each case, archived data is typically subject to all kinds of regulations and can be used during e-discovery.

In most circumstances, it is only after this stage has been reached that data is finally eligible to be deleted.

When organizations take this approach, many discover that a significant proportion of their stored information falls into the inactive or long-inactive category. Addressing this issue immediately frees capacity, reduces infrastructure expenditure and helps prevent the further accumulation of redundant content.

Policy-driven lifecycle management also improves operational control. It ensures that data is retained according to its relevance rather than by default and reduces the risk created by carrying forgotten or outdated information. It supports data quality by limiting the spread of stale content across the estate and provides CIOs with a clearer path to meeting retention and governance obligations.

What’s more, at a strategic level, lifecycle management transforms unstructured data from an unmanaged cost into a controlled process that aligns storage with business value. It strengthens compliance by ensuring only the data required for operational or legal reasons is kept, and it improves readiness for AI and analytics initiatives by ensuring that underlying datasets are accurate and reliable.

To put all these issues into perspective, the business obsession with data shows no sign of slowing up. Indeed, the growing adoption of AI technologies is raising the stakes even further, particularly for organizations that continue to prioritize data collection and storage over management and governance. As a result, getting data management and storage strategies in order sooner rather than later is likely to rise to the top of the to-do list for CIOs across the board.


Beyond the hype: 4 critical misconceptions derailing enterprise AI adoption

14 January 2026 at 05:15

Despite unprecedented investment in artificial intelligence, with enterprises committing an estimated $35 billion annually, the stark reality is that most AI initiatives fail to deliver tangible business value, and determining their ROI remains anything but straightforward. Research reveals that approximately 80% of AI projects never reach production, almost double the failure rate of traditional IT projects. More alarmingly, studies from MIT indicate that 95% of generative AI investments produce no measurable financial returns.

The prevailing narrative attributes these failures to technological inadequacy or insufficient investment. However, this perspective fundamentally misunderstands the problem. My experience reveals another root cause that lies not in the technological aspects themselves, but in strategic and cognitive biases that systematically distort how organizations define readiness and value, manage data, and adopt and operationalize the AI lifecycle.

Here are four critical misconceptions that consistently undermine enterprise AI strategies.

1. The organizational readiness illusion

Perhaps the most pervasive misconception plaguing AI adoption is the readiness illusion, where executives equate technology acquisition with organizational capability. This bias manifests in underestimating AI’s disruptive impact on organizational structures, power dynamics and established workflows. Leaders frequently assume AI adoption is purely technological when it represents a fundamental transformation that requires comprehensive change management, governance redesign and cultural evolution.

The readiness illusion obscures human and organizational barriers that determine success. As Li, Zhu and Hua observe, firms struggle to capture value not because technology fails, but because people, processes and politics do. During my engagements across various industries, I have seen AI initiatives trigger turf wars: defensive reactions from middle managers who perceive AI as a threat to their authority or job security quietly derail initiatives even in technically advanced companies.

S&P Global’s research reveals companies with higher failure rates encounter more employee and customer resistance. Organizations with lower failure rates demonstrate holistic approaches addressing cultural readiness alongside technical capability. MIT research found that older organizations experienced declines in structured management practices after adopting AI, accounting for one-third of their productivity losses. This suggests that established companies must rethink organizational design rather than merely overlaying AI onto existing structures.

2. AI expectation myths

The second critical bias involves inflated expectations about AI’s universal applicability. Leaders frequently assume AI can address every business challenge and guarantee immediate ROI, when empirical evidence demonstrates that AI delivers measurable value only in targeted, well-defined and precise use cases. This expectation-reality gap contributes to pilot paralysis, in which companies undertake numerous AI experiments but struggle to scale any to production.

An S&P Global 2025 survey reveals that 42% of companies abandoned most AI initiatives during the year, up from just 17% in 2024, with the average organization scrapping 46% of proofs-of-concept before production. McKinsey’s research confirms that organizations reporting significant financial returns are twice as likely to have redesigned end-to-end workflows before selecting modeling techniques. Gartner indicates that more than 40% of agentic AI projects will be cancelled by 2027, largely because organizations pursue AI based on technological fascination rather than concrete business value.

3. Data readiness bias

The third misconception centers on data: specifically, the bias toward prioritizing volume over quality and toward assuming data is transparent and unbiased, well governed and contextually accurate. Executives frequently claim their enterprise data is already clean, or assume that collecting more data will ensure AI success, fundamentally misunderstanding that quality, stewardship and relevance matter exponentially more than raw quantity, and that the very definition of clean data changes when AI is introduced.

Research exposes this readiness gap: while 91% of organizations acknowledge that a reliable data foundation is essential for AI success, only 55% believe their organization actually possesses one. This disconnect reveals executives’ tendency to overestimate data readiness while underinvesting in the governance, integration and quality management that AI systems require.

Analysis by FinTellect AI indicates that in financial services, 80% of AI projects fail to reach production and of those that do, 70% fail to deliver measurable business value, predominantly from poor data quality rather than technical deficiencies. Organizations that treat data as a product — investing in master data management, governance frameworks and data stewardship — are seven times more likely to deploy generative AI at scale.

This underscores that data infrastructure represents a strategic differentiator, not merely a technical prerequisite. Our understanding and definition for data readiness should be reconsidered by covering more inclusive aspects of data accessibility, integration and cleansing in the context of AI adoption.

4. The deployment fallacy

The fourth critical misconception involves treating AI implementation as traditional software deployment — a set-and-forget approach that’s incompatible with AI’s operational requirements. I’ve noticed that many executives believe deploying AI resembles rolling out ERP or CRM systems, assuming pilot performance translates directly to production.

This fallacy ignores AI’s fundamental characteristic: AI systems are probabilistic and require continuous lifecycle management. MIT research demonstrates that manufacturing firms adopting AI frequently experience J-curve trajectories, where productivity initially declines before longer-term gains emerge, because AI deployment triggers organizational disruption that requires an adjustment period. Companies failing to anticipate this pattern abandon initiatives prematurely.

The fallacy manifests in inadequate deployment management, including planning for model monitoring, retraining, governance and adaptation. AI systems can suffer from data drift as underlying patterns evolve. Organizations treating AI as static technology systematically underinvest in the operational infrastructure necessary for sustained success.
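Data drift can be quantified with simple statistics. One common choice is the Population Stability Index (PSI), sketched here in plain Python over pre-binned distribution shares. The 0.2 alert level used in the test is a widely quoted rule of thumb, not a universal constant, and the bin values are invented:

```python
import math

def psi(expected_shares, actual_shares, eps=1e-6):
    """Population Stability Index between a baseline and a live distribution.

    Both inputs are per-bin proportions that each sum to 1. Larger values
    mean the live data has drifted further from what the model was trained on.
    """
    total = 0.0
    for e, a in zip(expected_shares, actual_shares):
        e, a = max(e, eps), max(a, eps)  # guard against log(0)
        total += (a - e) * math.log(a / e)
    return total

# Feature distribution at training time: four equal bins.
baseline = [0.25, 0.25, 0.25, 0.25]
```

A monitoring job that recomputes this index on each new batch of inputs is one concrete piece of the "continuous lifecycle management" that set-and-forget deployments omit.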

Overcoming the AI adoption misconceptions

Successful AI adoption requires understanding that deployment represents not an endpoint but the beginning of continuous lifecycle management. Despite the abundance of technological stacks available for AI deployments, a comprehensive lifecycle management strategy is essential to harness the full potential of these capabilities and effectively implement them.

I propose that the adoption journey should be structured into six interconnected phases, each playing a crucial role in transforming AI from a mere concept into a fully operational capability.

Stage 1: Envisioning and strategic alignment

Organizations must establish clear strategic objectives connecting AI initiatives to measurable business outcomes across revenue growth, operational efficiency, cost reduction and competitive differentiation.

This phase requires engaging leadership and stakeholders through both top-down and bottom-up approaches. Top-down leadership provides strategic direction, resource allocation and organizational mandate, while bottom-up engagement ensures frontline insights, practical use case identification and grassroots adoption. This bidirectional alignment proves critical: executive vision without operational input leads to disconnected initiatives, while grassroots enthusiasm without strategic backing results in fragmented pilots.

Organizations must conduct an honest assessment of organizational maturity across governance, culture and change readiness, as those that skip rigorous self-assessment inevitably encounter the readiness illusion.

Stage 2: Data foundation and governance

Organizations must ensure data availability, quality, privacy and regulatory compliance across the enterprise. This stage involves implementing modern data architecture, whether centralized or federated, supported by robust governance frameworks including lineage tracking, security protocols and ethical AI principles. Critically, organizations must adopt data democratization concepts that make quality data accessible across organizational boundaries while maintaining appropriate governance and security controls. Data democratization breaks down silos that traditionally restrict data access to specialized teams, enabling cross-functional teams to leverage AI effectively. The infrastructure must support not only centralized data engineering teams but also distributed business users who can access, understand and utilize data for AI-driven decision-making. Organizations often underestimate this stage’s time requirements, yet it fundamentally determines subsequent success.

Stage 3: Pilot use cases with quick wins

Organizations prove AI value through quick wins by starting with low-risk, high-ROI use cases that demonstrate tangible impact. Successful organizations track outcomes through clear KPIs such as cost savings, customer experience improvements, fraud reduction and operational efficiency gains. Precision in use case definition proves essential — AI cannot solve general or wide-scope problems but excels when applied to well-defined, bounded challenges. Effective prioritization considers potential ROI, technical feasibility, data availability, regulatory constraints and organizational readiness. Organizations benefit from combining quick wins that build confidence with transformational initiatives that drive strategic differentiation. This phase encompasses feature engineering, model selection and training, and rigorous testing, maintaining a clear distinction between proof-of-concept and production-ready solutions.

Stage 4: Monitor, optimize and govern

Unlike traditional IT implementations, this stage must begin during pilot deployment rather than waiting for production rollout. Organizations define model risk management policies aligned with regulatory frameworks, establishing protocols for continuous monitoring, drift detection, fairness assessment and explainability validation. Early monitoring ensures detection of model drift, performance degradation and output inconsistencies before they impact business operations. Organizations implement feedback loops to retrain and fine-tune models based on real-world performance. This stage demands robust MLOps (Machine Learning Operations) practices that industrialize AI lifecycle management through automated monitoring, versioning, retraining pipelines and deployment workflows. MLOps provides the operational rigor necessary to manage AI systems at scale, treating it as a strategic capability rather than a tactical implementation detail.
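As a small illustration of such a feedback loop, a retraining trigger can be as simple as watching for a sustained dip below a human-set accuracy SLA. The numbers and window logic here are hypothetical, not drawn from any specific MLOps product:

```python
# Sketch of a retraining trigger: retrain only after several consecutive
# monitoring windows fall below the SLA, so one noisy window does not
# kick off an expensive pipeline. SLA and patience values are illustrative.

def should_retrain(recent_accuracy, sla=0.9, patience=3):
    """Return True after `patience` consecutive windows below the SLA."""
    streak = 0
    for acc in recent_accuracy:
        streak = streak + 1 if acc < sla else 0
        if streak >= patience:
            return True
    return False
```

The automation detects the degradation; what the SLA is, and how much patience is acceptable, remain governance decisions made by people.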

Stage 5: Prepare for scale and adoption

Organizations establish foundational capabilities necessary for enterprise-wide AI scaling through comprehensive governance frameworks with clear policies for risk management, compliance and ethical AI use. Organizations must invest in talent and upskilling initiatives that develop AI literacy across leadership and technical teams, closing capability gaps. Cultural transformation proves equally critical: organizations must foster a data-driven, innovation-friendly environment supported by tailored change management practices. Critically, organizations must shift from traditional DevOps toward a Dev-GenAI-Biz-Ops lifecycle that integrates development, generative AI capabilities, business stakeholder engagement and operations in a unified workflow. This expanded paradigm acknowledges that AI solutions demand continuous collaboration between technical teams, business users who understand domain context and operations teams managing production systems. Unlike traditional software, where business involvement diminishes post-requirements, AI systems require ongoing business input to validate outputs and refine models.

Stage 6: Scale and industrialize AI

Organizations transform pilots into enterprise capabilities by embedding AI models into core workflows and customer journeys. This phase requires establishing comprehensive model management systems for versioning, bias detection, retraining automation and lifecycle governance. Organizations implement cloud-native platforms that provide scalable compute infrastructure. Deployment requires careful orchestration of technical integration, user training, security validation and phased rollout strategies that manage risk while building adoption. Organizations that treat this as mere technical implementation encounter the deployment fallacy, underestimating the organizational transformation required. Success demands integration of AI into business processes, technology ecosystems and decision-making frameworks, supported by operational teams with clear ownership and accountability.

Critically, this framework emphasizes continuous iteration across all phases rather than sequential progression. AI adoption represents an organizational capability to be developed over time, not a project with a defined endpoint.

The importance of system integrators with inclusive ecosystems

AI adoption rarely succeeds in isolation. The complexity spanning foundational models, custom applications, data provision, infrastructure and technical services requires orchestration capabilities beyond most organizations’ internal capacity. MIT research demonstrates AI pilots built with external partners are twice as likely to reach full deployment compared to internally developed tools.

Effective system integrators provide value through inclusive ecosystem orchestration, maintaining partnerships across model providers, application vendors, data marketplaces, infrastructure specialists and consulting firms. This ecosystem approach enables organizations to leverage best-of-breed solutions while maintaining architectural coherence and governance consistency. The integrator’s role extends beyond technical implementation to encompass change management, capability transfer and governance establishment.

I anticipate a paradigm shift in the next few years, with master system integrators leading the AI transformation journey, rather than technology vendors.

The path forward

The prevailing narrative that AI projects fail due to technological immaturity fundamentally misdiagnoses the problem. Evidence demonstrates that failure stems from predictable cognitive and strategic biases: overestimating organizational readiness for disruptive change, harboring unrealistic expectations about AI’s universal applicability, prioritizing data volume over quality and governance and treating AI deployment as traditional software implementation.

Organizations that achieve AI success share common characteristics: they honestly assess readiness across governance, culture and change capability before deploying technology; they pursue targeted use cases with measurable business value; they treat data as a strategic asset requiring sustained investment; and they recognize that AI requires continuous lifecycle management with dedicated operational capabilities.

The path forward requires cognitive discipline and strategic patience. As AI capabilities advance, competitive advantage lies not in algorithms but in organizational capability to deploy them effectively — a capability built through realistic readiness assessment, value-driven use case selection, strategic data infrastructure investment and commitment to continuous management and adoption of the right lifecycle management framework. The question facing enterprise leaders is not whether to adopt AI, but whether their organizations possess the maturity to navigate its inherent complexities and transform potential into performance.


Southeast Asia CIOs’ Top Predictions for 2026: A Year of Maturing AI, Data Discipline, and Redefined Work

13 January 2026 at 01:25

As 2026 begins, my recent conversations with Chief Information Officers across Southeast Asia provided me with a grounded view of how digital transformation is evolving. While their perspectives differ in nuance, they converge on several defining shifts: the maturation of artificial intelligence, the emergence of autonomous systems, a renewed focus on data governance, and a reconfiguration of work. These changes signal not only technological advancement but a rethinking of how Southeast Asian organizations intend to compete and create value in an increasingly automated economy.

For our CIOs, the year ahead represents a decisive moment as AI moves beyond pilots and hype cycles. Organizations are expected to judge AI by measurable business outcomes rather than conceptual promise. AI capabilities will become standard features embedded across applications and infrastructure, fundamental rather than differentiating. The real challenge is no longer acquiring AI technology but operationalizing it in ways that align with strategic priorities.

Among the most transformative developments is the rise of agentic AI – autonomous agents capable of performing tasks and interacting across systems. CIOs anticipate that organizations will soon manage not a single AI system but networks of agents, each with distinct logic and behaviour. This shift ushers in a new strategic focus: agentic AI orchestration. Organizations will need platforms that coordinate multiple agents, enforce governance, manage digital identity, and ensure trust across heterogeneous technology environments. As AI ecosystems grow more complex, the CIO’s role evolves from integrator to orchestrator who directs a diverse array of intelligent systems.

As AI becomes more central to operations, data governance emerges as a critical enabler. Technology leaders expect 2026 to expose the limits of weak data foundations. Data quality, lineage, access controls, and regulatory compliance determine whether AI initiatives deliver value. Organizations that have accumulated “data debt” will be unable to scale, while those that invest early will move with greater speed and confidence.

Automation in physical environments is also set to accelerate as CIOs expect robotics to expand across healthcare, emergency services, retail, and food and beverage sectors. Robotics will shift from specialised deployments to routine service delivery, supporting productivity goals, standardizing quality, and addressing persistent labour constraints.

Looking ahead, our region’s CIOs point to the early signals of quantum computing’s relevance. While still emerging, quantum technologies are expected to gain visibility through evolving products and research. In my view, for Southeast Asian organizations, the priority is not immediate adoption but proactive monitoring, particularly in cybersecurity and long-term data protection, without undertaking premature architectural shifts.


Perhaps the most provocative prediction concerns the nature of work. As specialised AI agents take on increasingly complex task chains, one CIO anticipates the rise of “cognitive supply chains” in which work is executed largely autonomously. Traditional job roles may fragment into task-based models, pushing individuals to redefine their contributions. Workplace identity could shift from static roles to dynamic capabilities, a broader evolution in how people create value in an AI-native economy.

One CIO spotlights the changing nature of software development, where natural-language-driven “vibe coding” is expected to mature, enabling non-technical teams to extend digital capabilities more intuitively. This trend will not diminish the relevance of enterprise software, as both approaches will coexist to support different organizational needs.

CIO ASEAN Editorial final take:

Collectively, these perspectives shared by Southeast Asia’s CIO community point to a region preparing for a structurally different digital future, defined by embedded AI, scaled autonomous systems, and disciplined data practices. The opportunity is substantial, but so is the responsibility placed on technology leaders.

As 2026 continues to unfold, the defining question will not simply be who uses AI, but who governs it effectively, integrates it responsibly, and shapes its trajectory to strengthen long-term enterprise resilience. Enjoy reading these top predictions for 2026 by our region’s most influential CIOs, who are also our CIO100 ASEAN & Hong Kong Award 2025 winners:

Ee Kiam Keong
Deputy Chief Executive (Policy & Development)
concurrent Chief Information Officer
InfoComm Technology Division
Gambling Regulatory Authority Singapore
 
Prediction 1
AI will continue to lead at the cutting edge; agentic AI in particular will become more popular and widely used, and AI governance, in terms of AI risks and ethics, will receive sharper focus.
 
Prediction 2
Quantum computing-related products should start to evolve and become more apparent.
 
Prediction 3
Deployment of robotic applications will widen, especially in medical and emergency response settings and in everyday activities such as retail and food and beverage.

Ng Yee Pern,
Chief Technology Officer
Far East Organization
 
Prediction 4
AI deployments will start to mature, as enterprises confront the disconnect between the inflated promises of AI vendors and the actual value delivered.
 
Prediction 5
Vibe coding will mature and grow in adoption, but enterprise software is not going away. There is plenty of room for both to co-exist.
Athikom Kanchanavibhu
Executive Vice President, Digital & Technology Transformation
& Chief Information Security Officer

Mitr Phol Group
 
Prediction 6
The Next Vendor Battleground: Agentic AI Orchestration
By 2026, AI will no longer be a differentiator; it will be a default feature, embedded as standard equipment across modern digital products. As every vendor develops its own Agentic AI, enterprises will manage not one AI, but an orchestra of autonomous agents, each optimized for its own ecosystem.
 
The new battleground will be Agentic AI Orchestration, where platforms can coordinate, govern, and securely connect agentic AIs across vendors and domains. 2026 won’t be about smarter agents, but about who can conduct the symphony best: safely, at scale, and across boundaries.
 
Prediction 7
Enterprise AI Grows Up: Data Governance Takes Center Stage
2026 will mark the transition from AI pilots to AI in production. While out-of-the-box AI will become common, true competitive advantage will come from applying AI to enterprise-specific data and context. Many organizations will face a sobering realization: AI is only as good as the data it is trusted with.
 
As AI moves into core business processes, data governance, management, and security will become non-negotiable foundations. Data quality, access control, privacy, and compliance will determine whether AI scales or stalls. In essence, 2026 will be the year enterprises learn that governing data well is the quiet superpower behind successful AI.
Jackson Ng
Chief Technology Officer and Head of Fintech
Azimut Group
 
Prediction 8
In 2026, organizations will see AI seeking power while humans search for purpose. Cognitive supply chains of specialized AI agents will execute work autonomously, forcing individuals to redefine identity at work, in service, and in society. Roles will disintegrate, giving way to a task-based, AI-native economy.

Ensuring the long-term reliability and accuracy of AI systems: Moving past AI drift

9 January 2026 at 07:23

For years, model drift was a manageable challenge. Model drift refers to the phenomenon in which a trained AI program degrades in performance over time. One way to picture this is to think about a car. Even the best car experiences wear and tear once it is out in the open world, leading to below-par performance and more “noise” as it runs. It requires routine servicing: oil changes, tyre balancing, cleaning and periodic tuning.

AI models follow the same pattern. These programs can range from a simple machine learning-based model to a more advanced neural network-based model. When the world they operate in shifts, whether through changes in consumer behavior, market trends, spending patterns or any other macro- and micro-level triggers, model drift starts to appear.

In the pre-GenAI scheme of things, models could be refreshed with new data and put back on track. Retrain, recalibrate, redeploy and the AI program was ready to perform again. GenAI has changed that equation. Drift is no longer subtle or hidden in accuracy reports; it is out in the open, where systems can misinform customers, expose companies to legal challenges and erode trust in real time.

McKinsey reports that while 91% of organizations are exploring GenAI, only a fraction feel ready to deploy it responsibly. The gap between enthusiasm and readiness is exactly where drift grows, moving the challenge from the backroom of data science to the boardroom of reputation, regulation and trust.  

Still, some are showing what readiness looks like. A global life sciences company used GenAI to resolve a nagging bottleneck: Stock Keeping Unit (SKU) matching, which once took hours, now takes seconds. The result was faster research decisions, fewer errors and proof that when deployed with purpose, GenAI can deliver real business value.

This only sharpens the point: progress is possible and it can ensure the long-term reliability and accuracy of AI systems, but not without real-time governance.

Why governance must be real-time

GenAI drift is messier than classic model drift. When a generative model drifts, it hallucinates, fabricates or misleads. That’s why governance needs to move from periodic check-ins to real-time vigilance. The NIST AI Risk Management Framework offers a strong foundation, but a checklist alone won’t be enough. Enterprises need coverage across two critical aspects:

  1. Ensure that the enterprise data is ready for AI. The data is typically fragmented across scores of systems and that non-coherence, along with lack of data quality and data governance, leads models to drift.
  2. The other is what I call “living governance”: councils with the authority to stop unsafe deployments, adjust validators and bring humans back into the loop when confidence slips, or better still, to ensure that confidence never slips.

This is where guardrails matter. They’re not just filters but validation checkpoints that shape how models behave. They range from simple rule-based filters to ML-based detectors for bias or toxicity and to advanced LLM-driven validators for fact-checking and coherence. Layered together with humans in the loop, they create a defence-in-depth strategy.
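As a concrete illustration of that layering, the chain below runs validators in order of increasing cost and stops at the first failure. The validator names and rules are hypothetical stand-ins: in production, the middle and outer layers would be an ML classifier and an LLM-based judge, not keyword checks.

```python
import re

def rule_filter(text: str) -> bool:
    """Innermost layer: cheap rule-based filter, e.g. block raw 16-digit card numbers."""
    return not re.search(r"\b\d{16}\b", text)

def toxicity_detector(text: str) -> bool:
    """Middle layer: stand-in for an ML-based bias/toxicity classifier."""
    return not any(word in text.lower() for word in {"idiot", "stupid"})

def fact_check_validator(text: str) -> bool:
    """Outer layer: stand-in for an LLM-driven fact-checking/coherence validator."""
    return "guaranteed returns" not in text.lower()

def guardrail_chain(text: str) -> tuple[bool, str]:
    """Run validators cheapest-first; on failure, report which layer tripped
    so a human reviewer can be brought into the loop."""
    layers = [("rules", rule_filter),
              ("toxicity", toxicity_detector),
              ("fact-check", fact_check_validator)]
    for name, check in layers:
        if not check(text):
            return False, name
    return True, "passed"
```

Ordering the layers by cost means the expensive LLM validator only runs on outputs that survive the cheaper checks, which keeps the defence-in-depth overhead manageable.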

Culture, people and the hidden causes of drift

In many enterprises, drift escalates fastest when ownership is fragmented. The strongest and most successful programs designate a senior leader who carries responsibility, with their credibility and resources tied directly to system performance. That clarity of ownership forces everyone around them to treat drift seriously.

Another, often overlooked, driver of drift is the state of enterprise data. In many organizations, data sits scattered across legacy systems, cloud platforms, departmental stores and third-party tools. This fragmentation creates inconsistent inputs that weaken even well-designed models. When data quality, lineage, or governance is unreliable, models don’t drift subtly; they diverge quickly because they are learning from incomplete or incoherent signals. Strengthening data readiness through unified pipelines, governed datasets and consistent metadata becomes one of the most effective ways to reduce drift before it reaches production.

AI amplifies the habits of the people who use it: a disciplined developer becomes more effective, while a careless one generates more errors. But individual gains are not enough; without coherence across the team, overall productivity stalls. Success comes when every member adapts in step, aligned in purpose and practice. That is why reskilling is not a luxury.

Culture now extends beyond individuals. In many enterprises, AI agents are beginning to interact directly with one another, both agent-to-agent and human-to-agent. That’s a new collaboration loop, one that demands new norms and maturity. If the culture isn’t ready, drift doesn’t creep in through the algorithm; it enters through the people and processes surrounding it.

Lessons from the field

If you want to see AI drift in action, just scan recent headlines. Fraudsters are already using AI cloning to generate convincing impostors, tricking people into sharing information or authorizing transactions.

But there are positive examples too. In financial services, for instance, some organizations have begun deploying layered guardrails, personal data detection, topic restriction and pattern-based filters that act like brakes before the output ever reaches the client. One bank I worked with moved from occasional audits to continuous validation. The result wasn’t perfection, but containment. Drift still appeared, as it always does, but it was caught upstream, long before it could damage customer trust or regulatory standing.

Why proactive guardrails matter

Regulators are increasingly beginning to align and the signals are encouraging. The White House Blueprint for an AI Bill of Rights stresses fairness, transparency and human oversight. NIST has published risk frameworks. Agencies like the SEC and the FDA are drafting sector-specific guidance.

Regulatory efforts are progressing, but they inevitably move more slowly than the pace of technology. In the meantime, adversaries are already exploiting the gaps with prompt injections, model poisoning and deepfake phishing. As one colleague told me bluntly, “The bad guys adapt faster than the good guys.” He was right and that asymmetry makes drift not just a technical problem, but a national one.

That’s why forward-thinking enterprises aren’t just meeting regulatory mandates, they are proactively going beyond them to safeguard against emerging risks. They’re embedding continuous evaluation, streaming validation and enterprise-grade protections like LLM firewalls now. Retrieval-augmented generation systems that seem fine in testing can fail spectacularly as base models evolve. Without real-time monitoring and layered guardrails, drift leaks through until customers or regulators notice, usually too late.

The leadership imperative

So, where does this leave leaders? With an uncomfortable truth: AI drift will happen. The test of leadership is whether you’re prepared when it does.

Preparation doesn’t look flashy. It’s not a keynote demo or a glossy slide. It’s continuous monitoring and treating guardrails not as compliance paperwork but as the backbone of reliable AI.

And it’s balanced. Innovation can’t mean moving fast and breaking things in regulated industries. Governance can’t mean paralysis. The organizations that succeed will be the ones that treat reliability as a discipline, not a one-time project.

AI drift isn’t a bug to be patched; it’s the cost of doing business with systems that learn, adapt and sometimes misfire. Enterprises that plan for that cost, with governance, culture and guardrails, won’t just avoid the headlines. They’ll earn the trust to lead.

AI drift forces us to rethink what resilience really means in the enterprise. It’s no longer about protecting against rare failure; it’s about operating in a world where failure is constant, visible and amplified. In that world, resilience is measured not by how rarely systems falter, but by how quickly leaders recognize the drift, contain it and adapt. That shift in mindset separates organizations that merely experiment with GenAI from those that will scale it with confidence.

My view is straightforward: treat drift as a given, not a surprise. Build governance that adapts in real time. Demand clarity on why your teams are using GenAI and what business outcomes justify it. Insist on accountability at the leadership level, not just within technical teams. And most importantly, invest in culture because the biggest source of drift is not always the algorithm but the people and processes around it.


Beyond the cloud bill: The hidden operational costs of AI governance

7 January 2026 at 13:20

In my work helping large enterprises deploy AI, I keep seeing the same story play out. A brilliant data science team builds a breakthrough model. The business gets excited but then the project hits a wall; a wall built of fear and confusion that lives at the intersection of cost and risk. Leadership asks two questions that nobody seems equipped to answer at once: “How much will this cost to run safely?” and “How much risk are we taking on?”

The problem is that the people responsible for cost and the people responsible for risk operate in different worlds. The FinOps team, reporting to the CFO, is obsessed with optimizing the cloud bill. The governance, risk and compliance (GRC) team, answering to the chief risk officer, is focused on legal exposure. And the AI and MLOps teams, driven by innovation under the CTO, are caught in the middle.

This organizational structure leads to projects that are either too expensive to run or too risky to deploy. The solution is not better FinOps or stricter governance in isolation; it is the practice of managing AI cost and governance risk as a single, measurable system rather than as competing concerns owned by different departments. I call this “responsible AI FinOps.”

To understand why this system is necessary, we first have to unmask the hidden costs that governance imposes long before a model ever sees a customer.

Phase 1: The pre-deployment costs of governance

The first hidden costs appear during development, in what I call the development rework cost. In regulated industries, a model must not only be accurate; it must be proven fair. It is a common scenario: a model clears every technical accuracy benchmark, only to be flagged for noncompliance during the final bias review.

As I detailed in a recent VentureBeat article, this rework is a primary driver of the velocity gap that stalls AI strategies. This forces the team back to square one, leading to weeks or months of rework, resampling data, re-engineering features and retraining the model; all of which burns expensive developer time and delays time-to-market.

Even when a model works perfectly, regulated industries demand a mountain of paperwork. Teams must create detailed records explaining exactly how the model makes decisions and where its data comes from. You won’t see this expense on a cloud invoice, but it is a major cost, measured in the salary hours of your most senior experts.

These aren’t just technical problems; they’re a financial drain caused by failures of standard AI governance processes.

Phase 2: The recurring operational costs in production

Once a model is deployed, the governance costs become a permanent part of the operational budget.

The explainability overhead

For high-risk decisions, governance mandates that every prediction be explainable. While the libraries used to achieve this (like the popular SHAP and LIME) are open source, they are not free to run. They are computationally intensive. In practice, this means running a second, heavy algorithm alongside your main model for every single transaction. This can easily double the compute resources and latency, creating a significant and recurring governance overhead on every prediction.
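To make that overhead concrete: for a plain linear model, the exact per-feature attribution is w_i × (x_i − mean_i), so the explanation is literally a second pass over every feature on every prediction. This stdlib-only sketch is an illustration of the pattern, not the SHAP or LIME APIs themselves, which run far heavier computations against far heavier models:

```python
def predict(weights, bias, x):
    """Base model: one dot product per transaction."""
    return bias + sum(w * xi for w, xi in zip(weights, x))

def explain(weights, x, feature_means):
    """Attribution pass: for a linear model with independent features, the
    exact Shapley value of feature i is w_i * (x_i - mean_i). This runs on
    *every* prediction, roughly doubling the per-transaction compute."""
    return [w * (xi - m) for w, xi, m in zip(weights, x, feature_means)]

weights, bias = [2.0, -1.0], 0.5
means = [1.0, 3.0]
x = [1.5, 2.0]
score = predict(weights, bias, x)          # 1.5
attributions = explain(weights, x, means)  # [1.0, 1.0]
```

The attributions sum to the score's deviation from the mean prediction, which is what makes them auditable, and why regulators like them even though each one costs an extra pass.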

The continuous monitoring burden

Standard MLOps involves monitoring for performance drift (e.g., is the model getting less accurate?). But AI governance adds a second, more complex layer: governance monitoring. This means constantly checking for bias drift (e.g., is the model becoming unfair to a specific group over time?) and explainability drift. This requires a separate, always-on infrastructure that ingests production data, runs statistical tests and stores results, adding a continuous and independent cost stream to the project.
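One concrete form of the bias-drift check is a rolling "four-fifths rule" comparison of approval rates across groups, alerting when the ratio between the lowest and highest group's rate falls below 0.8. This is a minimal sketch under stated assumptions: window management and group definitions are omitted, and the appropriate threshold varies by jurisdiction and use case.

```python
def approval_rate(decisions):
    """Fraction of positive (1) decisions in a window of 0/1 outcomes."""
    return sum(decisions) / len(decisions)

def bias_drift_alert(group_a, group_b, threshold=0.8):
    """Four-fifths-rule-style check: alert when the disadvantaged group's
    approval rate falls below `threshold` times the advantaged group's."""
    ra, rb = approval_rate(group_a), approval_rate(group_b)
    lo, hi = min(ra, rb), max(ra, rb)
    ratio = lo / hi if hi else 1.0
    return ratio < threshold, ratio
```

Run on every scoring window in production, a check like this turns "is the model becoming unfair over time?" into a boolean a dashboard can surface before a regulator does.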

The audit and storage bill

To be auditable, you must log everything. In finance, regulations from bodies like FINRA require member firms to adhere to SEC rules for electronic recordkeeping, which can mandate retention for at least six years in a non-erasable format. This means every prediction, input and model version creates a data artifact that incurs a storage cost, a cost that grows every single day for years.

Regulated vs. non-regulated difference: Why a social media app and a bank can’t use the same AI playbook

Not all AI is created equal and the failure to distinguish between use cases is a primary source of budget and risk misalignment. The so-called governance taxes I described above are not universally applied because the stakes are vastly different.

Consider a non-regulated use case, like a video recommendation engine on a social media app. If the model recommends a video I don’t like, the consequence is trivial; I simply scroll past it. The cost of a bad prediction is nearly zero. The MLOps team can prioritize speed and engagement metrics, with a relatively light touch on governance.

Now consider a regulated use case I frequently encounter: an AI model used for mortgage underwriting at a bank. A biased model that unfairly denies loans to a protected class doesn’t just create a bad customer experience, it can trigger federal investigations, multimillion-dollar fines under fair lending laws and a PR catastrophe. In this world, explainability, bias monitoring and auditability are not optional; they are non-negotiable costs of doing business. This fundamental difference is why a one-size-fits-all AI platform dictated solely by the MLOps, FinOps or GRC team is doomed to fail.

Responsible AI FinOps: A practical playbook for unifying cost and risk

Bridging the gap between the CFO, CRO and CTO requires a new operating model built on shared language and accountability.

  1. Create a unified language with new metrics. FinOps tracks business metrics like cost per user and technical metrics like cost per inference or cost per API call. Governance tracks risk exposure. A responsible AI FinOps approach fuses these by creating metrics like cost per compliant decision. In my own research, I’ve focused on metrics that quantify not just the cost of retraining a model, but the cost-benefit of that retraining relative to the compliance lift it provides.
  2. Build a cross-functional tiger team. Instead of siloed departments, leading organizations are creating empowered pods that include members from FinOps, GRC and MLOps. This team is jointly responsible for the entire lifecycle of a high-risk AI product; its success is measured on the overall risk-adjusted profitability of the system. This team should not only define cross-functional AI cost governance metrics, but also standards that every engineer, scientist and operations team has to follow for every AI model across the organization.
  3. Invest in a unified platform. The explosive growth of the MLOps market, which Fortune Business Insights projects will reach nearly $20 billion by 2032, is proof that the market is responding to this need for a unified, enterprise-level control plane for AI. The right platform provides a single dashboard where the CTO sees model performance, the CFO sees its associated cloud spend and the CRO sees its real-time compliance status.
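The article names cost per compliant decision without fixing a formula; one plausible fusion of the FinOps and GRC views is total spend (including the governance overhead) divided by decisions that actually passed governance, reported alongside the compliance rate. The function name and inputs below are illustrative, not an established standard.

```python
def cost_per_compliant_decision(cloud_cost: float, governance_cost: float,
                                total_decisions: int, compliant_decisions: int):
    """Fuse the FinOps view (spend) with the GRC view (compliance): every
    dollar, including governance overhead, is amortized only over decisions
    that passed all guardrails. Returns (unit cost, compliance rate)."""
    if compliant_decisions == 0:
        return float("inf"), 0.0
    rate = compliant_decisions / total_decisions
    cost = (cloud_cost + governance_cost) / compliant_decisions
    return cost, rate

# 10,000 predictions this month; 9,500 cleared every guardrail
cost, rate = cost_per_compliant_decision(800.0, 200.0, 10_000, 9_500)
```

A falling compliance rate pushes the unit cost up even when the cloud bill is flat, which is exactly the signal a cross-functional team needs to see on one dashboard.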

The organizational challenge

The greatest barrier to realizing the value of AI is no longer purely technical; it is organizational. The companies that win will be those who break down the walls between their finance, risk and technology teams.

They will recognize that A) You cannot optimize cost without understanding risk; B) You cannot manage risk without quantifying its cost; and C) You can achieve neither without a deep engineering understanding of how the model actually works. By embracing a fused responsible AI FinOps discipline, leaders can finally stop the alarms from ringing in separate buildings and start conducting a symphony of innovation that is both profitable and responsible.


AI maturity is what happens when curiosity meets control

6 January 2026 at 07:38

When AI first entered the enterprise, every week brought a new tool, a new headline, a new promise of transformation. The excitement was real, but the results were inconsistent. Now, the conversation has matured.

We’ve learned that success isn’t about chasing every use case. The teams I work with aren’t asking “What can AI do?” anymore. They’re asking, “Where does AI make the most impact?”

That mindset shift is changing how enterprises think about AI adoption and innovation. We’ve moved beyond a ChatGPT-for-everything approach toward embedded, specialized tools for everything from code editing to data modeling to workflow coordination.

Balancing discovery with control

In the push to innovate ahead of the curve, how do you balance technology discovery with responsibility and control? If you’ve built a culture that rewards innovation, your talent won’t wait for permission to start trying out the latest and greatest technology releases. That’s a good thing. The key is to harness that curiosity safely, turning experimentation into transformation.

Encourage AI curiosity by padding it with structure and disciplined investment in the AI tools that work for your organization. Because without guardrails in place, employees will still explore, just without oversight.

Organizations that fail to orchestrate and communicate clear AI governance may see a flood of shadow AI, so-called workslop, and operational chaos in place of transformation.

The pillars of safe, scalable AI adoption

AI can finally deliver on much of what vendors have promised. And yet, according to BCG, 74% of organizations have yet to show tangible value from their AI investments and only 26% are moving beyond proofs of concept. A separate AI survey from Deloitte found that 31% of board directors and executives felt their organizations weren’t ready to deploy AI — at all.

This isn’t too surprising. Enterprises faced similar challenges during the cloud adoption era. But as with any new technology, the key to capitalizing on it lies in empowered people, clear policies and consistent processes.

Here’s what that looks like in practice.

1. The people pillar: Equip employees to experiment

Treat every employee like a scientist handling experiments that could result in burn or breakthrough. At CSG, we hold regular open forums where employees from various departments come together to authentically share AI use cases, best practices and new tool suggestions.

This upward feedback from the people closest to the technology has been invaluable. It fosters cross-functional learning between teams and leadership, inspires passion and helps shape our AI adoption strategy.

For example, one of our developers proposed switching to a new, AI-driven code generation solution that (after appropriate testing) has become an integral part of our enterprise toolkit.

Once curiosity is sparked, it’s critical to create a protected space for exploration to manage shadow AI effectively.

An EY survey revealed that two-thirds of organizations allow citizen developers to build or deploy AI agents independently. Shockingly, only 60% of those organizations have formal policies to ensure their AI agents follow responsible AI principles. This could be a costly oversight. Breaches involving unauthorized AI use cost an average of $4.63 million, nearly 16% more than the global average.

However, banning these practices outright will just drive usage underground. The better approach is enablement — empowering employees with access to secure, enterprise-grade platforms where they can safely test and build.

The other piece to this puzzle is talent upskilling. Curiosity only delivers value when people have the knowledge and confidence to start testing the waters.

For example, to better train CSG talent, we launched an internal AI academy — a self-guided learning journey that allows employees across the organization to realize benefits of AI that fit their curiosity. The courses cover role-specific AI use, authorized tools and responsible experimentation. We then check utilization reports to help identify adoption gaps, success stories and further training needs.

2. The policy pillar: Governance as the guardrails

Trust, governance and risk mitigation are the foundation of enterprise AI maturity. In that previously mentioned EY survey, almost all respondents (99%) reported their organizations suffered financial losses from AI-related risks, with the average loss conservatively estimated to be over $4.4 million. However, the same survey indicated that organizations with real-time monitoring and oversight committees are 34% more likely to see improvements in revenue growth and 65% more likely to improve cost savings.

That’s why we established a governance committee. It brought together leaders across legal, compliance, strategy and the CIO and CTO offices to eliminate silos and ensure every AI initiative has clear ownership, policy alignment and oversight from day one.

The committee wasn’t formed to slow down progress. On the contrary, governance rails keep innovation on track and sustainable.

With the initial structure in place, the committee’s focus shifts to protection. Enterprises sit on massive volumes of customer data and intellectual property and launching AI without controls exposes that data to real risk.

If one of your developers uploads IP into ChatGPT or a lawyer pastes contract text into a public model, the consequences could be devastating. To navigate these concerns, we authorized secure, internal access to popular AI tools with built-in notifications that remind users of approved usage.

Vendor management is another major focus area for us. With so many vendors embedding AI into their products, it’s easy to lose track of what’s actually in use. That’s where our governance committee will step in. We are working to audit every internal tool to identify risks and avoid vendor sprawl and overlap. Doing so will allow us to maintain visibility into and control over how our data is shared — a crucial piece in safeguarding our customers’ trust.

Finally, governance also needs to extend to how you reinvest the gains. AI creates efficiencies that free up capital, and those newfound resources require a strategy. As we think about strategy across our organization and balance competing demands, it’s important that we reinvest those capital savings responsibly and sustainably, whether into new tools, new markets or further innovations that benefit our business and our customers.

3. The process pillar: Avoid the pilot graveyard

In 2025, 42% of businesses reported scrapping most of their AI initiatives (up from just 17% in 2024) according to an S&P Global survey. On average, organizations actually sunsetted nearly half (46%) of all AI proof-of-concepts before they reached production.

The truth is, even the most advanced technology will end up in the ubiquitous AI pilot graveyard without clear decision frameworks and proper procurement processes.

I’ve found success starts with knowing where AI is truly necessary. You don’t have to throw a large language model at a problem when simple automation actually delivers faster, cheaper results.

For example, many back-office workflows, like accounting processes with four or five manual steps, would probably benefit from standard automation. Save your sophisticated, agentic solutions for complex, tightly scoped functions that require contextual understanding and dynamic interaction.

As you do so, keep in mind that only 44% of consumers are comfortable letting AI take action on their behalf. Part of building trust with customers is making sure they don’t feel “stuck” with chatbots and agentic experiences that feel out of their control and not personalized to their needs.

Once you’ve identified the right use cases, a rigorous and disciplined selection process will ensure you can successfully bring them to life. We use bake-off style RFPs to evaluate vendors head-to-head, define success metrics before deployment and ensure every pilot aligns with measurable business outcomes.

During the selection process, it’s also important to plan for the future. Free tools can be a tempting way to test capabilities, but beware: if they become integral to your workflow, you may put your company at the mercy of pricing shifts or feature changes outside your control.

Finally, scaling success requires alignment and awareness. Once a platform or process proves itself, it needs to be deployed consistently across the organization. That’s how you turn one good pilot into a repeatable process.

Lead with curiosity, scale with control

When it comes to AI maturity in the enterprise, the best organizations move fast but with intention.

Curiosity fuels innovation, but structure sustains it. Without one, you stall. Without the other, you spiral. The future belongs to those who can balance both, building systems where ideas can move freely and securely.

This article is published as part of the Foundry Expert Contributor Network.

Seattle data governance startup Codified is ‘winding down’ as CEO takes on new role at Google

22 December 2025 at 16:44
Codified founder and CEO Yatharth Gupta. (LinkedIn Photo)

Codified, a Seattle startup that helps companies control who can access their data in AI-driven systems, is “winding down,” according to the founder and former CEO.

Yatharth Gupta shared the update with GeekWire, but declined to provide any further details.

Gupta recently started a new position as a director of product management at Google in Kirkland, Wash.

“New beginnings! & also I’m hiring!” he wrote in a post on LinkedIn last week.

Founded in 2023 after being incubated at Madrona Venture Labs, Codified raised a $4 million seed round in February 2024 led by Madrona Venture Group and Vine Ventures, with participation from Soma Capital. Former Snowflake CEO Bob Muglia also invested in the startup, along with SAP exec and former Microsoft VP JG Chirapurath, and Shireesh Thota, vice president of databases at Microsoft.

Codified is described as an end-to-end data governance operating system. It uses generative AI to let users create data access rules by simply writing the policies in plain English.

The idea is to help companies speed up and improve how they decide who has access to what data, for what reason, and for how long.
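To illustrate the concept, a plain-English policy such as “analysts may read customer records for fraud review for 30 days” would need to compile into a structured rule that downstream systems can enforce. The sketch below shows one way such a compiled rule might look; the class and field names are purely illustrative assumptions, not Codified’s actual schema, and the stub hard-codes the translation a generative model would perform:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative target structure a plain-English policy might compile into.
# NOT Codified's real schema -- just a sketch of the idea.
@dataclass
class AccessRule:
    subject_role: str  # who may access
    resource: str      # what data
    action: str        # read / write
    purpose: str       # for what reason
    expires: date      # for how long

def compile_policy_stub(text: str) -> AccessRule:
    # A real system would use an LLM to parse `text`;
    # this stub hard-codes the result for one example policy.
    return AccessRule(
        subject_role="analyst",
        resource="customer_records",
        action="read",
        purpose="fraud_review",
        expires=date.today() + timedelta(days=30),
    )

rule = compile_policy_stub(
    "Analysts may read customer records for fraud review for 30 days"
)
print(rule.subject_role, rule.action, rule.purpose)
```

The value of the structured form is that “who, what, why and for how long” become machine-checkable fields rather than prose buried in a policy document.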

Gupta spent more than 14 years at Microsoft, where he helped lead Azure-related data access and management projects. More recently he was a senior vice president of product management at enterprise database company SingleStore.

Other execs at Codified included Stefan Batres, former director of engineering at Tableau Software, who is now at Atlan; and Karan Thakker, former senior software engineer at ExtraHop and Alation, who is now back at ExtraHop.

Innovator Spotlight: Concentric AI

By: Gary
3 September 2025 at 14:38

Data Security’s New Frontier: How Generative AI is Rewriting the Cybersecurity Playbook

Semantic Intelligence™ utilizes context-aware AI to discover structured and unstructured data across cloud and on-prem environments. The “Content...

The post Innovator Spotlight: Concentric AI appeared first on Cyber Defense Magazine.
