
From static workflows to intelligent automation: Architecting the self-driving enterprise

20 January 2026 at 05:15

I want you to think about the most fragile employee in your organization. They don’t take coffee breaks, they work 24/7 and they cost a fortune to recruit. But if a button on a website moves a few pixels to the right, this employee has a complete mental breakdown and stops working entirely.

I am talking, of course, about your RPA (robotic process automation) bots.

For the last few years, I have observed IT leaders, CIOs and business leaders pour millions into what we call automation. We’ve hired armies of consultants to draw architecture diagrams and map out every possible scenario. We’ve built rigid digital train tracks, convinced that if we just laid enough rail, efficiency would follow.

But we didn’t build resilience. We built fragility.

As an AI solution architect, I see the cracks in this foundation every day. The strategy for 2026 isn’t just about adopting AI; it is about attacking the fragility of traditional automation. The era of deterministic, rule-based systems is ending. We are witnessing the death of determinism and the rise of probabilistic systems — what I call the shift from static workflows to intelligent automation.

The fragility tax of old automation

There is a painful truth we need to acknowledge: Your current bot portfolio is likely a liability.

In my experience and architectural practice, I frequently encounter what I call the fragility tax. This is the hidden cost of maintaining deterministic bots in a dynamic world. The industry rule of thumb — and one that I see validated in budget sheets constantly — is that for every $1 you spend on BPA (business process automation) licenses, you end up spending $3 on maintenance.

Why? Because traditional BPA is blind. It doesn’t understand the screen it is looking at; it only understands coordinates (x, y). It doesn’t understand the email it is reading; it only scrapes for keywords. When the user interface updates or the vendor changes an invoice format, the bot crashes.

I recall a disaster with an enterprise client who had an automated customer engagement process. It was a flagship project. It worked perfectly until the third-party system provider updated their solution. The submit button changed from green to blue. The bot, which was hardcoded to look for green pixels at specific coordinates, failed silently.

But fragility isn’t just about pixel colors. It is about the fragility of trust in external platforms.

We often assume fragility only applies to bad code, but it also applies to our dependencies. Even the vanguard of the industry isn’t immune. In September 2024, OpenAI’s official newsroom account on X (formerly Twitter) was hijacked by scammers promoting a crypto token.

Think about the irony: The company building the most sophisticated intelligence in human history was momentarily compromised not by a failure of its neural networks, but by the fragility of a third-party platform. This is the fragility tax in action. When you build your enterprise on deterministic connections to external platforms you don’t control, you inherit their vulnerabilities. If you had a standard bot programmed to retweet posts from @OpenAINewsroom, you would have automatically amplified a scam to your entire customer base.

The old way of scripting cannot handle this volatility. We spent years trying to predict the future and hard-code it into scripts. But the world is too chaotic for scripts. We need architecture that can heal itself.

The architectural pivot: From rules to goals

To capture the value of intelligent automation (IA), you must frame it as an architectural paradigm shift, not just a software upgrade. We are moving from task automation (mimicking hands) to decision automation (mimicking brains).

When I architect these systems, I look not only for rules but also for goals.

In the old paradigm, we gave the computer a script: Click button A, then type text B, then wait 5 seconds. In the new paradigm, we use cognitive orchestrators. We give the AI a goal, such as "get this form submitted," and let it work out the steps.

The difference is profound. If the submit button turns blue, a goal-based system using a large language model (LLM) and vision capabilities sees the button. It understands that despite the color change, it is still the submission mechanism. It adjusts its own path to achieving the goal.

Think of it like the difference between a train and an off-road vehicle. A train is fast and efficient, but it requires expensive infrastructure (tracks) and cannot steer around a rock on the line. Intelligent automation is the off-road vehicle. It uses sensors to perceive the environment. If it sees a rock, it doesn’t derail; it decides to go around it.

This isn’t magic; it’s a specific architectural pattern. The tech stack required to support this is fundamentally different from what most CIOs currently have installed. It is no longer just a workflow engine. The new stack requires three distinct components working in concert:

  1. The workflow engine: The hands that execute actions.
  2. The reasoning layer (LLM): The brain that figures out the steps dynamically and handles the logic.
  3. The vector database: The memory that stores context, past experiences and embedded data to reduce hallucinations.

By combining these, we move from brittle scripts to resilient agents.
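
To make the pattern concrete, here is a minimal, runnable sketch of the three-layer loop in Python. The reasoning layer and the vector memory are toy stand-ins (a canned planner and an in-memory list) rather than a real LLM or vector database, and names such as plan_steps, VectorMemory and execute are illustrative, not taken from any product.

    # Minimal sketch of the intelligent-automation stack:
    # a reasoning layer plans steps toward a goal, a vector-style memory
    # supplies context, and a workflow engine executes the actions.
    # The planner and memory are toy stand-ins, not a real LLM or vector DB.
    from dataclasses import dataclass, field

    @dataclass
    class VectorMemory:
        """Stand-in for a vector database: stores past context as plain strings."""
        records: list = field(default_factory=list)

        def add(self, text: str) -> None:
            self.records.append(text)

        def retrieve(self, query: str, k: int = 3) -> list:
            # Real systems rank by embedding similarity; here we just
            # return the most recent records as "context".
            return self.records[-k:]

    def plan_steps(goal: str, context: list) -> list:
        """Stand-in for the reasoning layer (LLM): turn a goal into steps."""
        return [f"locate UI element for: {goal}",
                f"act on element to achieve: {goal}",
                f"verify outcome of: {goal}"]

    def execute(step: str) -> str:
        """Stand-in for the workflow engine (the 'hands')."""
        return f"done: {step}"

    def run_agent(goal: str, memory: VectorMemory) -> None:
        context = memory.retrieve(goal)
        for step in plan_steps(goal, context):
            result = execute(step)
            memory.add(result)          # experience feeds future runs
            print(result)

    if __name__ == "__main__":
        run_agent("submit the invoice form", VectorMemory())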

Breaking the unstructured data barrier

The most significant limitation of the old way was its inability to handle unstructured data. We know that roughly 80% of enterprise data is unstructured, locked away in PDFs, email threads, Slack and MS Teams chats, and call logs. Traditional business process automation cannot touch this. It requires structured inputs: rows and columns.

This is where the multi-modal understanding of intelligent automation changes the architecture.

I urge you to adopt a new mantra: Data entry is dead. Data understanding is the new standard.

I am currently designing architectures where the system doesn’t just move a PDF from folder A to folder B. It reads the PDF. It understands the sentiment of the email attached to it. It extracts the intent from the call log referenced in the footer.

Consider a complex claims-processing scenario. In the past, a human had to manually review a handwritten accident report, cross-reference it with a policy PDF and check a photo of the damage. A deterministic bot is useless here because the inputs are never the same twice.

Intelligent automation changes the equation. It can ingest the handwritten note (using OCR), analyze the photo (using computer vision) and read the policy (using an LLM). It synthesizes these disparate, messy inputs into a structured claim object. It turns chaos into order.

This is the difference between digitization (making it electronic) and digitalization (making it intelligent).
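
As a rough illustration of that synthesis step, the sketch below assembles messy, multi-modal inputs into one structured claim object. The OCR, vision and LLM calls are stubs standing in for real services, and the Claim fields and function names are assumptions made for the example.

    # Sketch: synthesize messy, multi-modal inputs into one structured claim.
    # extract_text, describe_damage and summarize_policy are stubs standing in
    # for OCR, computer vision and an LLM; in production these would call
    # real models or services.
    from dataclasses import dataclass

    @dataclass
    class Claim:
        claimant: str
        incident_summary: str
        damage_assessment: str
        policy_coverage: str

    def extract_text(handwritten_report_path: str) -> str:
        return "Rear-ended at a stop light, minor whiplash."      # OCR stub

    def describe_damage(photo_path: str) -> str:
        return "Rear bumper dented, tail light cracked."          # vision stub

    def summarize_policy(policy_pdf_path: str) -> str:
        return "Collision damage covered, $500 deductible."       # LLM stub

    def build_claim(claimant: str, report: str, photo: str, policy: str) -> Claim:
        return Claim(
            claimant=claimant,
            incident_summary=extract_text(report),
            damage_assessment=describe_damage(photo),
            policy_coverage=summarize_policy(policy),
        )

    print(build_claim("J. Doe", "report_scan.jpg", "damage.jpg", "policy.pdf"))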

Human-in-the-loop as a governance pattern

Whenever we present this self-driving enterprise concept to clients, the immediate reaction is “You want an LLM to talk to our customers?” This is a valid fear. But the answer isn’t to ban AI; it is to architect confidence-based routing.

We don’t hand over the keys blindly. We build governance directly into the code. In this pattern, the AI assesses its own confidence level before acting.

This brings us back to the importance of verification. Why do we need humans in the loop? Because trusted endpoints don’t always stay trusted.

Revisiting the security incident I mentioned earlier: If you had a fully autonomous agent loop that automatically acted upon every post from a verified partner account, your enterprise would be at risk. A deterministic bot says: Signal comes from a trusted source -> execute.

A probabilistic, governed agent says: Signal comes from a trusted source, but the content deviates 99% from their semantic norm (crypto scam vs. tech news). The confidence score is low. Alert human.

That is the architectural shift we need.

  • Scenario A: The AI is 99% confident it understands the invoice, the vendor matches the master record and the semantics align with past behavior. The system auto-executes.
  • Scenario B: The AI is only 70% confident because the address is slightly different, the image is blurry or the request seems out of character (like the hacked tweet example). The system routes this specific case to a human for approval.

This turns automation into a partnership. The AI handles the mundane, high-volume work and your humans handle the edge cases. It solves the black box problem that keeps compliance officers awake at night.
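
Here is a minimal sketch of confidence-based routing as described above. The threshold, the confidence values and the route_action name are illustrative assumptions; in a real system the confidence would be derived from model signals and semantic checks against past behavior.

    # Sketch: confidence-based routing (human-in-the-loop governance).
    # The threshold and confidence values are illustrative; a real system
    # would compute confidence from the model's own signals and from
    # semantic comparison with past behavior.
    AUTO_EXECUTE_THRESHOLD = 0.95

    def route_action(action: str, confidence: float) -> str:
        if confidence >= AUTO_EXECUTE_THRESHOLD:
            return f"auto-executed: {action}"
        return f"routed to human review: {action} (confidence={confidence:.2f})"

    # Scenario A: invoice matches master record and past behavior.
    print(route_action("pay invoice #4811", 0.99))

    # Scenario B: out-of-character request (like the hijacked-account post).
    print(route_action("retweet partner announcement", 0.70))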

Kill the zombie bots

If you want to prepare your organization for this shift, you don’t need to buy more software tomorrow. You need to start with an audit.

Look at your current automation portfolio. Identify the zombie bots: the scripts that are technically alive but require constant intervention to keep moving. They fail whenever a vendor updates its software, and they cost you more in fragility tax than they save in labor.

Stop trying to patch them. These are the prime candidates for intelligent automation.

The future belongs to the probabilistic. It belongs to architectures that can reason through ambiguity, handle unstructured chaos and self-correct when the world changes. As leaders, we need to stop building trains and start building off-road vehicles.

The technology is ready. The question is, are you ready to let go of the steering wheel?

Disclaimer: This and any related publications are provided in the author’s personal capacity and do not represent the views, positions or opinions of the author’s employer or any affiliated organization.

This article is published as part of the Foundry Expert Contributor Network.

Data management solutions company LTZero enters the Korean market, appoints Hong Sung-hwa as country manager

20 January 2026 at 03:01

Hong Sung-hwa has led sales at global IT companies including IBM, Sun Microsystems, Riverbed and Quantum, and brings more than 25 years of experience in sales and organizational management.

LTZero provides a unified data management environment for the growing volumes of data driven by AI, high-performance computing (HPC) and cloud-native workloads. Its main products include a unified storage platform, cloud-connected tape libraries, Archive in a Box and an S3 tape library.

"Korea is a key hub for the global finance, public sector, research and education, media and entertainment, and manufacturing industries, and the importance of data infrastructure in this market has never been greater," said Hong. "LTZero will strengthen its localized solutions and local support so that Korean companies can achieve optimal data management, without compromising between data performance, protection and cost efficiency, as they pursue their digital transformation strategies."

LTZero marked the move with an official Korea launch seminar for invited partners on the 19th at Pony Chung Hall in the HDC Hyundai I'Park Tower in Gangnam, Seoul. Major domestic partners and enterprise IT leaders attended to hear LTZero's plans for the Korean market and its next-generation data storage and archive strategy, and LTZero chairman Edmund Tay and other senior executives from headquarters were also present.

Headquartered in Singapore, LTZero has built out sales and customer support operations across major Asian cities. With its entry into Korea, the company plans to focus on data-intensive industries such as finance, the public sector, research and education, media and entertainment, and manufacturing.
dl-ciokorea@foundryco.com

ClickHouse to acquire Langfuse as the AI race among data platforms accelerates

20 January 2026 at 02:33

ClickHouse, the open-source column-oriented database company, has announced that it is acquiring Langfuse, an open-source LLM engineering platform. The deal adds observability capabilities to ClickHouse's database service, which is designed for online analytical processing and AI applications.

Analysts say that, with many enterprises trying to move LLM-based applications into production, ClickHouse is using the deal to position itself as a more complete data and AI platform.

Akshat Tyagi, associate practice leader at HFS Research, said Langfuse's observability technology, which covers prompt tracing, model evaluation and cost and latency tracking, directly complements ClickHouse's core strength in high-performance analytics.

By acquiring Langfuse, Tyagi explained, ClickHouse can offer customers a native way to collect, store and analyze large-scale LLM telemetry alongside operational and business data, without relying on separate observability tools. That lets teams debug models faster, control costs and run more trustworthy AI workloads.

Sanchit Vir Gogia, chief analyst at Greyhound Research, said ClickHouse is addressing the point where generative AI actually hits its limits in the enterprise. "Most companies are not failing because they can't build AI features. They are failing because they can't explain them, trust them or afford them. Models drift and costs spike, but business users struggle to judge whether a decision was justified. ClickHouse is targeting exactly that blind spot," he said.

Gogia added: "With Langfuse in-house, ClickHouse now has the tooling to turn every LLM call into a structured, queryable record."

An industry-wide shift

Analysts said the acquisition reflects a broader push among data warehouse and database vendors to take control of the AI feedback loop.

Ishi Thakur, an analyst at Everest Group, said that as enterprises try to move beyond pilots into production, vendors are moving up the stack, from serving mainly as systems of record for analytics to becoming systems of action that support operational workflows.

Tyagi likewise observed that data warehouse and database vendors are shifting their framing from "can we build AI?" to "can we operate AI safely, predictably and at scale?"

In the same vein, Thakur compared the deal with Snowflake's acquisition of Observe. The two acquisitions, he said, show a common strategic direction across leading data platforms: pairing an integrated observability experience with scalable telemetry storage in order to capture more of the growing spend on operational analytics.

Gogia agreed, adding that the Langfuse acquisition should also expand ClickHouse's user base. "ClickHouse has mostly been used by analytics and infrastructure teams. With Langfuse, it becomes immediately relevant to AI engineering, prompt operations, product owners and even risk and compliance teams. That is a very large expansion of the audience," he said.

Other competitors, including Databricks, also offer observability capabilities.

Continued support for existing Langfuse customers

Beyond strengthening ClickHouse's strategy, the deal is also expected to benefit existing Langfuse customers.

In a notice posted on the Langfuse portal, the two companies said ClickHouse will continue to support Langfuse as an open-source project and maintain continuity for existing users, while gradually enhancing the platform through tighter integration with ClickHouse's analytics engine and managed services.

For companies already running Langfuse in production, the acquisition should provide long-term stability and scalability without forcing abrupt changes to existing workflows, the companies said.

Langfuse was founded in 2023 by Clemens Rawert, Max Deichmann and Marc Klingen and is backed by Lightspeed Venture Partners, General Catalyst and Y Combinator. It currently has a team of 13, with a second office in San Francisco dedicated to marketing and sales.
dl-ciokorea@foundryco.com

"Security, data and the organization will decide the winners": 10 CIO priorities for 2026

20 January 2026 at 01:30

A CIO's wish list is always long and expensive. With sensible prioritization, however, IT leaders can respond to fast-changing demands without exhausting their teams and budgets.

In 2026 in particular, CIOs need to redefine IT operations in profit-and-loss terms rather than as a cost center, and use technology to reinvent the business. Koenraad Schelfaut, global lead for technology strategy and advisory at Accenture, recommends moving beyond "keeping the lights on" with minimal investment and shifting the focus toward using technology to drive revenue growth, create new digital products and bring new business models to market faster.

Here are 10 priorities CIOs should put at the top of their list for 2026.

1. Strengthen cyber resilience and data privacy

As enterprises embed generative and agentic AI deep into core workflows, attackers are increasingly likely to use the same AI technologies to disrupt those workflows and go after intellectual property (IP) and sensitive data. "As a result, CIOs and CISOs should expect bad actors to leverage the same AI technologies to disrupt workflows and steal IP, including sensitive customer data and the information and assets that represent competitive advantage," said Yogesh Joshi, senior vice president of global product platforms at consumer credit company TransUnion.

Joshi expects the risk landscape to widen considerably as digital transformation accelerates and AI integration deepens, and he puts security resilience and data privacy at the top of the 2026 agenda. "Protecting sensitive data and complying with global regulations are non-negotiable," he stressed.

2. Consolidate security tools

Some argue that getting real value from AI requires rebuilding the foundation first. "One prerequisite is consolidating fragmented security tools into an integrated, interconnected cyber technology platform, which we call platformization," said Arun Perinkolam, Deloitte's US cyber platforms and technology, media and telecommunications (TMT) industry leader.

Consolidation, Perinkolam said, will turn security from a patchwork of point solutions into an agile, extensible foundation for rapid innovation and scalable AI-driven operations. "As threats grow more sophisticated, integrated platforms become more important. Left unchecked, tool sprawl produces a fragmented security posture that works in attackers' favor," he noted, adding: "Enterprises will face ever-increasing threats, and security tools will proliferate to manage them. Because attackers can exploit that fragmented posture, delaying platformization will only amplify the risk."

3. Revisit the fundamentals of data protection

Organizations are racing to adopt new AI models in pursuit of efficiency, speed and innovation, but many are skipping even the basic steps needed to protect sensitive data, others warn. "Many organizations are not taking basic measures to protect sensitive data before unleashing new AI technology," said Parker Pearson, chief strategy officer at data privacy and retention specialist Donoma Software, arguing that in 2026 "data privacy has to be treated as an urgent priority."

Pearson noted that issues around data collection, use and protection arise across the entire AI lifecycle, from initial training through to operations, and that many companies feel caught between two bad options: ignore AI and fall behind, or adopt LLMs that could expose sensitive data.

The real question is not whether to do AI, but how to optimize AI's value without putting sensitive data at risk. Despite organizations' confidence that their data is "fully" or "end-to-end" encrypted, Pearson argued, what is actually needed is continuous protection of data in every state, including data in use. Adopting privacy-enhancing technologies now means data is already structured and secured when AI models are applied later, which improves training efficiency and reduces the cost and risk of retraining.

4. Focus on identity and the employee experience

Others say 2026 is the year for CIOs to rework enterprise identity and the employee experience. "Identity is the foundation on which people join an organization, collaborate and contribute," said Michael Wetzel, CIO of IT security software vendor Netwrix. "Get identity and the employee experience right, and everything else follows naturally: security, productivity and adoption."

Wetzel observed that employees now expect a consumer-grade experience at work. When internal technology is clunky, people stop using it and work around it, and at that moment the organization loses both security and speed. Conversely, he predicts, companies that build a seamless, identity-rooted experience will pull ahead on the pace of innovation.

5. Prepare for costly ERP migrations

ERP migrations will keep the pressure on CIOs in 2026. "SAP S/4HANA migrations, for example, are complex and often run longer than planned, driving up costs," said Barrett Schiwitz, CIO of invoice lifecycle management software vendor Basware, adding that upgrades can exceed $100 million and run as high as $500 million, depending on a company's size and complexity.

Schiwitz also noted that because ERP tries to do everything, it has limits when it comes to doing specific jobs, such as invoice processing, exceptionally well, and piling on add-on customizations only increases the risk. As an alternative, he proposed a "clean core" strategy: keep the core where SAP is strong, and complement the periphery with best-of-breed tools.

6. Data governance that lets innovation scale

To make innovation sustainable in 2026, modular, scalable architecture and data strategy are key, some argue. "Designing the foundation that allows innovation to happen in a scalable, sustainable and secure way is one of my most important priorities," said Stephen Franchetti, CIO of connected operations company Samsara.

Franchetti said his team is building a loosely coupled, API-first architecture that lets it move faster, adapt to change and avoid lock-in to particular vendors and platforms. In an environment where workflows, tools and even AI agents are becoming more dynamic, a tightly coupled stack cannot scale. Data, he added, is a long-term strategic asset not only for AI but also for business insight, regulatory response and customer trust, so the company is strengthening data quality, governance and accessibility across the enterprise.

7. Accelerate workforce transformation

Workforce strategy in the AI era cannot be solved by hiring alone. "Upskilling and reskilling are central to developing the next generation of leaders," said Scott Thompson, a partner at executive search and management consulting firm Heidrick & Struggles. "The technology leader of 2026 is a product-centric technology leader who effectively has to fuse product, technology and the business into one."

Thompson proposed a "digital talent factory" model: growing talent internally through a structured combination of a skills taxonomy, role-based learning paths and rotations through real projects. He also said organizations should redesign roles for an AI-enabled environment, use automation to reduce dependence on highly specialized labor, and spread scarce skills across the organization through "fusion teams."

8. Raise the bar on team communication

As uncertainty grows in technology organizations, anxiety spreads, and it shows up differently in each person. "The first-order effect of uncertainty in a technology department is anxiety," said James Stanger, chief technology evangelist at CompTIA. "And anxiety manifests differently from person to person." Stanger recommends managing it through closer communication with team members and more effective, more relevant training.

9. Build capabilities for agility, trust and scale

Beyond AI itself, the capability to operate it is a core 2026 priority. "Beyond AI, the CIO priority for 2026 is strengthening the foundational capabilities that drive agility, trust and scalability," said Mike Anderson, chief digital and information officer at security vendor Netskope.

Anderson expects the product operating model to extend beyond traditional software teams to enterprise foundational capabilities such as IAM, data platforms and integration services. These must support both human and non-human identities, spanning employees, partners, customers, third parties and AI agents, and they require secure, adaptive frameworks built on least privilege and zero trust principles.

10. An evolving IT architecture

In 2026, today's IT architecture risks becoming a legacy model that cannot handle the autonomy of AI agents. "To scale effectively, enterprises must transition to the new agentic enterprise," said Emin Gerba, chief architect at Salesforce, laying out four layers: a shared semantic layer that unifies the meaning of data, a unified AI/ML layer for centralized intelligence, an agentic layer that manages the lifecycle of a scalable agent workforce, and an enterprise orchestration layer that securely manages complex cross-silo workflows.

Gerba said this transition will be the decisive competitive differentiator "between enterprises that achieve end-to-end automation and those whose agents remain trapped in application silos."
dl-ciokorea@foundryco.com

Column | Data management is changing: 5 things in, 5 things out for 2026

19 January 2026 at 02:28

The data landscape is changing faster than most enterprises can keep up with. Two forces are combining to accelerate the shift: enterprise data management practices that are gradually maturing, and AI platforms that demand greater consistency, coherence and trust in the data they consume.

As a result, 2026 looks set to be the year when companies move beyond tinkering at the edges and begin transforming the core structures of data management. What is and isn't needed in data management is also becoming clearer, reflecting a market that is tired of fragmented tooling, manual oversight and dashboards that fail to deliver real intelligence.

Here is what's in and what's out for data management in 2026.

In #1: Native governance grounded in human judgment

Data governance is no longer a bolt-on exercise. Platforms such as Unity Catalog, Snowflake Horizon and AWS Glue Catalog are building governance directly into the foundations of the architecture, reflecting the recognition that external governance layers add friction and struggle to manage data consistently end to end. At the heart of the new pattern is native automation: data quality checks, anomaly alerts and usage monitoring run continuously in the background, catching changes across the environment at a speed humans cannot match.

That automation does not replace human judgment, however. Tools diagnose the problems, but people still decide how severity is defined, which SLAs matter and how escalation paths are designed. The industry is moving toward a model in which tools handle detection while humans own meaning and accountability. It is a step away from the assumption that governance will someday be fully automated; instead, enterprises are taking full advantage of native technology while reaffirming the value of human decision-making.

In #2: Platform consolidation and the rise of the post-warehouse lakehouse

The era of stitching together dozens of specialized data tools is coming to an end, because the decentralization-first mindset has hit the limits of complexity. Enterprises have combined ingestion systems, pipelines, catalogs, governance layers, warehouse engines and dashboard tools, and the result is an environment that is expensive to maintain, structurally fragile and far harder to govern than expected.

Databricks, Snowflake and Microsoft see this as an opportunity and are extending their platforms into unified environments. The lakehouse has become the architectural north star: it handles structured and unstructured data on a single platform and spans analytics, machine learning and AI training. Companies no longer want to shuttle data between silos or juggle incompatible systems. What they need is a central operating environment that reduces friction, simplifies security and accelerates AI development. Platform consolidation is no longer seen as a question of vendor lock-in but as a matter of survival in a world where data is exploding and AI demands more consistency than ever.

In #3: End-to-end pipeline management with zero ETL

Hand-built ETL (extract, transform, load) is effectively entering its final chapter. ETL is the process of extracting data scattered across multiple systems, transforming it into a form suitable for analysis, and loading it into a repository such as a data warehouse or lake. Python scripts and custom SQL jobs offer flexibility, but they break at the smallest change and demand constant attention from engineers. Managed pipeline tools are quickly filling the gap: Databricks Lakeflow, Snowflake Openflow and AWS Glue represent a new generation of orchestration covering everything from extraction to monitoring and failure recovery.

The challenge of handling complex source systems remains, but the direction is clear. Companies want pipelines that maintain themselves, with fewer moving parts and fewer overnight failures caused by an overlooked script. Some organizations are choosing to bypass pipelines altogether: zero-ETL patterns replicate data from operational systems into analytical environments instantly, removing the fragility of nightly batch jobs. This is emerging as the new standard for applications that need real-time visibility and reliable AI training data.

In #4: Conversational analytics and agentic BI

Dashboards are gradually losing their place at the center of the enterprise. Despite years of investment, actual usage remains low while the number of dashboards keeps growing. Most business users don't want to dig for insights buried in static charts. What they want is not mere visualization but clear answers, explanations and context.

Conversational analytics is filling that gap. Generative BI systems let users describe the dashboard they want in plain language or ask an agent to interpret the data directly. Instead of clicking through filters one by one, a user can request a summary of quarterly performance or ask why a particular metric changed. Early natural-language-to-SQL tools showed their limits because they focused on automating query writing. The current wave is different: AI agents concentrate less on generating queries and more on synthesizing insights and producing visualizations as needed. They are evolving from simple query processors into something closer to analysts that understand both the data and the business question.

In #5: Vector-native storage and open table formats

AI is changing the requirements for storage itself. Retrieval-augmented generation (RAG), in particular, depends on vector embeddings, which means databases must be able to store and process vector data as a native data type rather than as a bolt-on extension. Vendors are racing to build vector capabilities directly into their data engines.

At the same time, Apache Iceberg is establishing itself as the new standard for open table formats. Iceberg lets a variety of compute engines work on the same data without duplication or separate transformation steps. It resolves much of the interoperability pain that has long plagued the industry and turns object storage into a true multi-engine foundation, giving enterprises a way to keep using their data reliably over the long term without rewriting everything each time the ecosystem shifts.

And here is what's out for data management in 2026.

Out #1: Legacy monolithic warehouses and overly decentralized tooling

The traditional data warehouse that packs every function into one giant system struggles to process large volumes of unstructured data and cannot deliver the real-time capabilities AI demands. But the opposite extreme hasn't proven to be the answer either: the modern data stack scattered roles and responsibilities across countless small tools, making governance more complex and actually slowing AI readiness. Data mesh is in a similar position. The principles of data ownership and distributed responsibility still hold, but strict implementations of the approach are losing momentum.

Out #2: Hand-coded ETL and custom connectors

Nightly batch scripts tend to fail without immediately surfacing the problem, cause processing delays and continuously consume engineers' time. As replication tools and managed pipelines become the standard, the industry is rapidly moving away from these brittle workflows. Manual plumbing, in which people wire up and maintain connections by hand, is giving way to orchestration that is always on and continuously monitored.

Out #3: Manual stewardship and passive catalogs

Having people review and manage data by hand is no longer a realistic option. Cleaning up after problems occur costs too much relative to what it delivers. Passive data catalogs that simply list information like a wiki are also fading. In their place, active metadata systems that continuously monitor the state of data and automatically detect changes and anomalies are becoming essential.

Out #4: Static dashboards and one-way reporting

Dashboards that can't answer follow-up questions leave users frustrated. What companies want is not a tool that merely displays results but an analytics environment that thinks with them. With expectations raised by experience with AI assistants, static reporting can no longer carry the load.

Out #5: On-premises Hadoop clusters

A Hadoop cluster is an open-source big data processing environment that ties multiple servers together into a single system for distributed storage and processing of large datasets. Running it on-premises, however, is becoming harder and harder to justify. Architectures that combine object storage with serverless compute offer clear advantages: greater scalability, simpler operations and lower cost. The sprawling ecosystem of Hadoop services, by contrast, no longer fits the modern data landscape.

Data management in 2026 centers on clarity. The market is turning away from fragmented structures, manual intervention and analytics that cannot communicate. The future belongs to unified platforms, native governance, vector-native storage, conversational analytics and pipelines that run with minimal human intervention. AI is not replacing data management; it is rewriting the rules of data management in favor of simplicity, openness and integrated design.
dl-ciokorea@foundryco.com

What’s in, and what’s out: Data management in 2026 has a new attitude

16 January 2026 at 07:00

The data landscape is shifting faster than most organizations can track. The pace of change is driven by two forces that are finally colliding productively: enterprise data management practices that are maturing and AI platforms that are demanding more coherence, consistency and trust in the data they consume.

As a result, 2026 is shaping up to be the year when companies stop tinkering on the edges and start transforming the core. What is emerging is a clear sense of what is in and what is out for data management, and it reflects a market that is tired of fragmented tooling, manual oversight and dashboards that fail to deliver real intelligence.

So, here’s a list of what’s “In” and what’s “Out” for data management in 2026:

IN: Native governance that automates the work but still relies on human process

Data governance is no longer a bolt-on exercise. Platforms like Unity Catalog, Snowflake Horizon and AWS Glue Catalog are building governance into the foundation itself. This shift is driven by the realization that external governance layers add friction and rarely deliver reliable end-to-end coverage. The new pattern is native automation. Data quality checks, anomaly alerts and usage monitoring run continuously in the background. They identify what is happening across the environment with speed that humans cannot match.

Yet this automation does not replace human judgment. The tools diagnose issues, but people still decide how severity is defined, which SLAs matter and how escalation paths work. The industry is settling into a balanced model. Tools handle detection. Humans handle meaning and accountability. It is a refreshing rejection of the idea that governance will someday be fully automated. Instead, organizations are taking advantage of native technology while reinforcing the value of human decision-making.

IN: Platform consolidation and the rise of the post-warehouse lakehouse

The era of cobbling together a dozen specialized data tools is ending. Complexity has caught up with the decentralized mindset. Teams have spent years stitching together ingestion systems, pipelines, catalogs, governance layers, warehouse engines and dashboard tools. The result has been fragile stacks that are expensive to maintain and surprisingly hard to govern.

Databricks, Snowflake and Microsoft see an opportunity and are extending their platforms into unified environments. The Lakehouse has emerged as the architectural north star. It gives organizations a single platform for structured and unstructured data, analytics, machine learning and AI training. Companies no longer want to move data between silos or juggle incompatible systems. What they need is a central operating environment that reduces friction, simplifies security and accelerates AI development. Consolidation is no longer about vendor lock-in. It is about survival in a world where data volumes are exploding and AI demands more consistency than ever.

IN: End-to-end pipeline management with zero ETL as the new ideal

Handwritten ETL is entering its final chapter. Python scripts and custom SQL jobs may offer flexibility, but they break too easily and demand constant care from engineers. Managed pipeline tools are stepping into the gap. Databricks Lakeflow, Snowflake Openflow and AWS Glue represent a new generation of orchestration that covers extraction through monitoring and recovery.

While there is still work to do in handling complex source systems, the direction is unmistakable. Companies want pipelines that maintain themselves. They want fewer moving parts and fewer late-night failures caused by an overlooked script. Some organizations are even bypassing pipes altogether. Zero ETL patterns replicate data from operational systems to analytical environments instantly, eliminating the fragility that comes with nightly batch jobs. It is an emerging standard for applications that need real-time visibility and reliable AI training data.

IN: Conversational analytics and agentic BI

Dashboards are losing their grip on the enterprise. Despite years of investment, adoption remains low and dashboard sprawl continues to grow. Most business users do not want to hunt for insights buried in static charts. They want answers. They want explanations. They want context.

Conversational analytics is stepping forward to fill the void. Generative BI systems let users describe the dashboard they want or ask an agent to explain the data directly. Instead of clicking through filters, a user might request a performance summary for the quarter or ask why a metric changed. Early attempts at Text to SQL struggled because they attempted to automate the query writing layer. The next wave is different. AI agents now focus on synthesizing insights and generating visualizations on demand. They act less like query engines and more like analysts who understand both the data and the business question.

IN: Vector native storage and open table formats

AI is reshaping storage requirements. Retrieval Augmented Generation depends on vector embeddings, which means that databases must store vectors as first-class objects. Vendors are racing to embed vector support directly in their engines.
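
As a toy illustration of the retrieval step that RAG depends on, the sketch below ranks documents by cosine similarity between embedding vectors. The three-dimensional vectors are invented for the example; real embeddings have hundreds or thousands of dimensions, which is why engines increasingly treat them as a first-class data type with dedicated indexes.

    # Toy illustration of vector retrieval: rank stored documents by
    # cosine similarity to a query embedding. The 3-dimensional vectors
    # are invented for illustration; real embeddings are far larger and
    # are served by dedicated vector indexes inside the database engine.
    from math import sqrt

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

    documents = {
        "Q3 revenue commentary": [0.9, 0.1, 0.0],
        "Holiday rota policy":   [0.1, 0.8, 0.3],
        "Churn analysis notes":  [0.7, 0.2, 0.4],
    }
    query = [0.8, 0.1, 0.2]   # embedding of "why did revenue change last quarter?"

    ranked = sorted(documents.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    for name, vec in ranked:
        print(f"{cosine(query, vec):.3f}  {name}")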

At the same time, Apache Iceberg is becoming the new standard for open table formats. It allows every compute engine to work on the same data without duplication or transformation. Iceberg removes a decade of interoperability pain and turns object storage into a true multi-engine foundation. Organizations finally get a way to future-proof their data without rewriting everything each time the ecosystem shifts.

And here’s what’s “Out”:

OUT: Monolithic warehouses and hyper-decentralized tooling

Traditional enterprise warehouses cannot handle unstructured data at scale and cannot deliver the real-time capabilities needed for AI. Yet the opposite extreme has failed too. The highly fragmented Modern Data Stack scattered responsibilities across too many small tools. It created governance chaos and slowed down AI readiness. Even the rigid interpretation of Data Mesh has faded. The principles live on, but the strict implementation has lost momentum as companies focus more on AI integration and less on organizational theory.

OUT: Hand-coded ETL and custom connectors

Nightly batch scripts break silently, cause delays and consume engineering bandwidth. With replication tools and managed pipelines becoming mainstream, the industry is rapidly abandoning these brittle workflows. Manual plumbing is giving way to orchestration that is always on and always monitored.

OUT: Manual stewardship and passive catalogs

The idea of humans reviewing data manually is no longer realistic. Reactive cleanup costs too much and delivers too little. Passive catalogs that serve as wikis are declining. Active metadata systems that monitor data continuously are now essential.

OUT: Static dashboards and one-way reporting

Dashboards that cannot answer follow-up questions frustrate users. Companies want tools that converse. They want analytics that think with them. Static reporting is collapsing under the weight of business expectations shaped by AI assistants.

OUT: On-premises Hadoop clusters

Maintaining on-prem Hadoop is becoming indefensible. Object storage combined with serverless compute offers elasticity, simplicity and lower cost. The complex zoo of Hadoop services no longer fits the modern data landscape.

Data management in 2026 is about clarity. The market is rejecting fragmentation, manual intervention and analytics that fail to communicate. The future belongs to unified platforms, native governance, vector native storage, conversational analytics and pipelines that operate with minimal human interference. AI is not replacing data management. It is rewriting the rules in ways that reward simplicity, openness and integrated design.

This article is published as part of the Foundry Expert Contributor Network.

Why CIOs need a new approach to unstructured data management

16 January 2026 at 05:00

CIOs everywhere will be familiar with the major issues caused by collecting and retaining data at an increasingly rapid rate. Industry research shows 64% of enterprises manage at least 1 petabyte of data, creating substantial cost, governance and compliance pressures.

If that wasn’t enough, organizations frequently default to retaining these enormous datasets, even when they are no longer needed. To put this into context, the average useful life of most enterprise data has now shrunk to 30–90 days; however, for various reasons, businesses continue to store it indefinitely, thereby adding to the cost and complexity of their underlying infrastructure.

As much as 90% of this information comes in the form of unstructured data files spread across hybrid, multi-vendor environments with little to no centralized oversight. This can include everything from MS Office docs to photo and video content routinely used by marketing teams, for example. The list is extensive, stretching to invoices, service reports, log files and, in some organizations, even scans or faxes of hand-written documents, often dating back decades.

In these circumstances, CIOs often lack clear visibility into what data exists, where it resides, who owns it, how old it is or whether it holds any business value. This matters because in many cases, it has tremendous value with the potential to offer insight into a range of important business issues, such as customer behaviour or field quality challenges, among many others.

With the advent of GenAI, it is now realistic to use the knowledge embedded in all kinds of documents and to retrieve their high-quality (i.e., relevant, useful and correct) content. This is possible even for documents of low visual or graphical quality. As a result, running AI on a combination of structured and unstructured input can reconstruct the entire enterprise memory and the so-called "tribal knowledge".

Visibility and governance

The first point to appreciate is that the biggest challenge is not the amount of data being collected and retained, but the absence of meaningful visibility into what is being stored.

Without an enterprise-wide view (a situation common to many organizations), teams cannot determine which data is valuable, which is redundant, or which poses a risk. In particular, metadata remains underutilised, even though insights such as creation date, last access date, ownership, activity levels and other basic indicators can immediately reveal security risks, duplication, orphaned content and stale data.

Visibility begins by building a thorough understanding of the existing data landscape. This can be done by using tools that scan storage platforms across multi-vendor and multi-location environments, collect metadata at scale, and generate virtual views of datasets. This allows teams to understand the size, age, usage and ownership of their data, enabling them to identify duplicate, forgotten or orphaned files.
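
As a rough sketch of that first visibility step, the example below walks a directory tree and collects basic metadata to flag stale files and likely duplicates. The scan root and the two-year staleness threshold are assumptions for illustration; an estate-wide scan would run against multi-vendor storage platforms rather than a single local path.

    # Minimal sketch: collect file metadata to surface stale files and
    # likely duplicates. The scan root and staleness threshold are
    # illustrative; enterprise tools do this across storage platforms.
    import hashlib, os, time

    SCAN_ROOT = "/data/shared"          # assumed starting point
    STALE_AFTER = 2 * 365 * 24 * 3600   # ~two years without access

    def file_hash(path, chunk=1 << 20):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while block := f.read(chunk):
                h.update(block)
        return h.hexdigest()

    seen, stale, duplicates = {}, [], []
    now = time.time()

    for dirpath, _, filenames in os.walk(SCAN_ROOT):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
                digest = file_hash(path)
            except OSError:
                continue                      # unreadable or vanished file
            if now - st.st_atime > STALE_AFTER:
                stale.append((path, st.st_size))
            if digest in seen:
                duplicates.append((path, seen[digest]))
            else:
                seen[digest] = path

    print(f"{len(stale)} stale files, {len(duplicates)} likely duplicates")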

It’s a complex challenge. In most cases, some data will be on-premises, some in the cloud, some stored as files and some as objects (such as S3 or Azure), all of which can be on-prem or in the cloud. In these circumstances, the multi-vendor infrastructure approach adopted by many organizations is sound, as it facilitates data redundancy and replication while also protecting against increasingly common cloud outages, such as those seen at Amazon and Cloudflare.

With visibility tools and processes in place, the next requirement is to introduce governance frameworks that bring structure and control to unstructured data estates. Good governance enables CIOs to align information with retention rules, compliance obligations and business requirements, reducing unnecessary storage and risk.

It’s also dependent on effective data classification processes, which help determine which data should be retained, which can be relocated to lower-cost platforms and which no longer serve a purpose. Together, these processes establish clearer ownership and ensure data is handled consistently across the organization while also providing the basis for reliable decision-making by ensuring that data remains accurate. Without it, visibility alone cannot deliver operational or financial benefits, because there is no framework for acting on what the organization discovers.

Lifecycle management

Once CIOs have a clear view of what exists and a framework to control it, they need a practical method for acting on those findings across the data lifecycle. By applying metadata-based policies, teams can migrate older or rarely accessed data to lower-cost platforms, thereby reducing pressure on primary storage. Files that have not been accessed for an extended period can be relocated to more economical systems, while long-inactive data can be archived or removed entirely if appropriate.
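
A metadata-driven tiering policy can be as simple as the sketch below. The age thresholds and tier names are illustrative assumptions; in practice the rules come from the governance and classification work described above and are enforced by the storage or data management platform.

    # Sketch: decide a storage tier from last-access age alone.
    # Thresholds and tier names are illustrative, not prescriptive.
    from datetime import datetime, timedelta

    def tier_for(last_accessed: datetime, now: datetime) -> str:
        age = now - last_accessed
        if age < timedelta(days=90):
            return "primary"          # active data stays on performance storage
        if age < timedelta(days=730):
            return "low-cost"         # rarely accessed: cheaper file/object tier
        if age < timedelta(days=2555):
            return "archive"          # long-inactive: archive platform or tape
        return "review-for-deletion"  # eligible for defensible deletion

    now = datetime.now()
    for name, accessed in [("q3-board-pack.pdf", now - timedelta(days=40)),
                           ("2019-site-survey.mp4", now - timedelta(days=1500)),
                           ("fax-scan-2008.tif", now - timedelta(days=6000))]:
        print(name, "->", tier_for(accessed, now))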

A big part of the challenge is that the data lifecycle is now much longer than it used to be, a situation that has profoundly affected how organizations approach storage strategy and spend.

For example, datasets considered ‘active’ will typically be stored on high- or mid-performance systems. Once again, there are both on-premises and cloud options to consider, depending on the use case, but typically they include both file and object requirements.

As time passes (often years), data gradually becomes eligible for archival. It is then moved to an archive venue, where it is better protected but may become less accessible or require more checks before access. Inside the archive, it can (after even more years) be tiered to cheaper storage such as tape. At this point, data retrieval times might range from minutes to hours, or even days. In each case, archived data is typically subject to all kinds of regulations and can be used during e-discovery.

In most circumstances, it is only after this stage has been reached that data is finally eligible to be deleted.

When organizations take this approach, many discover that a significant proportion of their stored information falls into the inactive or long-inactive category. Addressing this issue immediately frees capacity, reduces infrastructure expenditure and helps prevent the further accumulation of redundant content.

Policy-driven lifecycle management also improves operational control. It ensures that data is retained according to its relevance rather than by default and reduces the risk created by carrying forgotten or outdated information. It supports data quality by limiting the spread of stale content across the estate and provides CIOs with a clearer path to meeting retention and governance obligations.

What’s more, at a strategic level, lifecycle management transforms unstructured data from an unmanaged cost into a controlled process that aligns storage with business value. It strengthens compliance by ensuring only the data required for operational or legal reasons is kept, and it improves readiness for AI and analytics initiatives by ensuring that underlying datasets are accurate and reliable.

To put all these issues into perspective, the business obsession with data shows no sign of slowing down. Indeed, the growing adoption of AI technologies is raising the stakes even further, particularly for organizations that continue to prioritize data collection and storage over management and governance. As a result, getting data management and storage strategies in order sooner rather than later is likely to rise to the top of the to-do list for CIOs across the board.

This article is published as part of the Foundry Expert Contributor Network.

The WEF warns that outdated data architectures are holding back AI's impact on healthcare

16 January 2026 at 04:22

Although AI has the potential to transform healthcare around the world, progress is currently running into an invisible wall: outdated data systems. That is the conclusion the World Economic Forum (WEF) reaches in a report published ahead of its annual meeting in Davos, titled 'AI can transform healthcare if we transform our data architecture.'

According to the study, decades of siloed records, incompatible formats and rigid infrastructure are slowing progress. For AI to become an autonomous, learning-capable system rather than just a tool for specific tasks, the WEF argues, the healthcare sector must rethink its data architecture from the ground up.

The urgent need to escape the silo trap

Until now, structures have often relied on manual entry and delayed updates. The future, however, belongs to an intelligent, unified data pipeline that cleans information from sensors and automated sources in real time and makes it directly readable by AI. Rather than sitting in rigid relational databases, information is increasingly stored in multidimensional graph databases that make context and meaning immediately comprehensible.

The WEF sees medical research as another major problem area. Much valuable knowledge remains hidden in notes or complex images because it is hard to find with conventional search. This is where so-called vectorization comes in: multimodal data, from text to genomic sequences and clinical signals, is converted into numerical embeddings. That allows AI to recognize deep relationships, such as comparing symptoms with past cases or retrieving relevant research results with high precision.

Security and trust

Ultimately, according to the WEF, a modern healthcare system needs a data lakehouse: a central repository where data from laboratories, wearables and patient apps converges securely and is available for analysis. So that data protection does not fall by the wayside, an intelligent data fabric must ensure that only authorized users have access and that the information remains consistent.

For AI recommendations to be understandable and trustworthy for clinicians, they must be grounded in validated clinical knowledge. So-called knowledge graphs could serve as guides to ensure that AI outputs align with medical guidelines.

This AI transformation is more than a technology refresh. In the WEF's assessment, for sovereign nations, building an AI-ready data architecture means treating healthcare as a national resource. And, in the forum's view, this radical transformation is indispensable: only then will countries be able to guarantee better, personalized care and fully realize the potential of self-learning AI.

"Not all data is created equal": The data strategy challenges that will make or break AI

15 January 2026 at 00:13

Generative AI is becoming more disruptive in nearly every industry, but using top-tier AI models and tools is not enough. With every company working from similar models and tools, competitive advantage comes from the ability to train and fine-tune your own models or to give models differentiated context, and that ability depends on data.

Code bases, documentation and change logs are data for coding agents. A library of past proposals and contracts becomes training material for a writing assistant. Customer databases and support tickets are the foundation for customer service chatbots. But a lot of data is not the same thing as good data.

"It's all too easy to connect a model to whatever data is accessible," said Manju Naglapur, senior vice president and general manager of cloud, applications and infrastructure solutions at IT services firm Unisys. "I've watched the same mistakes repeat over the past three years. The old adage still holds: garbage in, garbage out."

Indeed, in a Boston Consulting Group survey released in September, 68% of 1,250 AI decision-makers cited lack of access to high-quality data as a key barrier to AI adoption. An October Cisco survey of more than 8,000 AI leaders found that only 35% of companies had integrated, in real time, the clean, centralized data that AI agents need. And IDC warns that companies that fail to prioritize high-quality, "AI-ready" data through 2027 will struggle to scale generative AI and agentic solutions and could see productivity fall by 15%.

When the semantic layer breaks down

Lumping data together creates another problem: the semantic layer becomes muddled. Data arriving from multiple sources can define and structure the same information in different ways, and the problem grows as new projects and acquisitions add more sources. Many organizations report that even their most important data, such as "customer," is hard to identify and keep consistent.

Dun & Bradstreet, the data and business intelligence company, reported in a survey last year that more than half of organizations worry about the reliability and quality of the data they use for AI. In financial services, 52% said AI projects had failed because of data quality problems, and in a December survey of more than 2,000 industry professionals, 44% named data quality their biggest concern for 2026, second only to cybersecurity.

"There isn't a place where data standards don't collide," said Eamonn O'Neill, CTO of cloud consulting firm Lemongrass. "Every mismatch is a risk, but a human will somehow work around it." AI can be made to work around problems in a similar way, O'Neill noted, but that requires knowing exactly what the problems are and putting in the time and effort to fix them. Even clean data needs semantic mapping, and if the data isn't perfect, cleanup takes even more time.

"Starting small with data and getting that one use case right is the realistic approach," O'Neill added. "Then you scale. That's what successful adoption looks like."

Ungoverned, unstructured data

O'Neill pointed to another common mistake when connecting AI to corporate information: hooking it up to unstructured data sources indiscriminately. LLMs are genuinely strong at extracting meaning from documents, text and images, but not every document deserves the AI's attention.

A document might be an outdated version, an unproofed draft or a copy that contains errors. "People run into this all the time," O'Neill said. "Connect OneDrive or file storage to a chatbot and it can't tell 'version 2' from 'version 2 final.'"

Version control is hard even for humans. "Microsoft helps with versioning, but users still keep hitting Save As," O'Neill said. "The result is unstructured data that never stops growing."

Agentic AI and ever more complicated security

When CIOs think about AI security, they usually think about model guardrails, protecting training data and securing the data used for RAG embeddings. But as chatbot-style AI evolves into agentic AI, the security problem becomes far more complicated.

Take an employee payroll database. When an employee asks about their pay, a RAG approach uses traditional code to extract only the necessary data, include it in the prompt and then query the AI. The AI sees only the information it is allowed to see, and protecting the rest of the data remains the job of the traditional software stack.

In an agentic AI system, by contrast, an AI agent can query the database autonomously, for example through an MCP server. Because the agent is expected to answer any employee's question, it needs access to data on the entire workforce, and preventing that information from flowing to the wrong place becomes a major challenge. According to the Cisco survey, only 27% of companies have dynamic, fine-grained access controls for their AI systems, and fewer than half are confident they can protect sensitive data or prevent unauthorized access.
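
To make the contrast concrete, here is a minimal sketch of the RAG-style pattern described above, in which conventional code, not the model, decides what data reaches the prompt. The payroll records, field names and prompt wording are invented for illustration; the point is that the model only ever sees the single record the requester is entitled to.

    # Sketch of RAG-style scoping: traditional code selects only the data the
    # requesting employee may see and places it in the prompt. The records and
    # field names are invented for illustration.
    PAYROLL = {
        "e001": {"name": "Kim", "monthly_pay": 4_200},
        "e002": {"name": "Lee", "monthly_pay": 5_100},
    }

    def build_payroll_prompt(requester_id: str, question: str) -> str:
        record = PAYROLL[requester_id]            # access control happens here,
        return (                                  # before the model is involved
            f"You are a payroll assistant. Answer using only this record:\n"
            f"{record}\n\nQuestion: {question}"
        )

    prompt = build_payroll_prompt("e001", "What was my pay last month?")
    print(prompt)   # in production, this prompt is what gets sent to the LLM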

O'Neill warned that piling everything into a data lake can make the problem worse. "Each data source has its own security model," he said. "But stack it all up in block storage and that fine-grained control disappears." Rather than bolting a security layer on after the fact, he suggested, a more realistic strategy can be to access the original data sources directly and bypass the data lake wherever possible.

The race for speed is the most dangerous trap

Doug Gilbert, CIO and chief digital officer at digital transformation consulting firm Sutherland Global, named going too fast as the number-one mistake CIOs make. "It's why most projects fail. The race for speed has overheated," he said.

Organizations treat data issues as a bottleneck to be skipped, he warned, but all of it comes back as a much bigger risk. "A lot of organizations running AI projects will eventually face an audit, and they may have to redo everything," Gilbert said. "Getting the data right isn't slowing down. It's laying the right infrastructure so you can innovate faster, pass audits and stay compliant."

Testing is easy to dismiss as a waste of time, but building fast and fixing later isn't always the best strategy either. "What's the cost of a mistake moving at the speed of light?" Gilbert asked. "I'll take testing first every time. More products reach the market without testing than you'd think."

AI that helps clean up the data

Data quality problems look set to worsen as use cases multiply. In an October survey of 775 global business leaders by data management software company AvePoint, 81% said they had already delayed deploying an AI assistant because of data management or data security issues, with an average delay of six months. Data volumes are ballooning too: 52% of respondents said their company manages more than 500 petabytes of data, up sharply from 41% a year earlier.

Even so, some argue that AI will, paradoxically, make the cleanup easier. "Getting a 360-degree view of the customer and organizing and reconciling multiple data sources will become easier thanks to AI," said Unisys's Naglapur. "It's ironic, but AI is going to help with all of it. A digital transformation that would have taken three years may now be possible in 12 to 18 months with AI. The tools are getting closer to reality, and they will push the pace of change even higher."
dl-ciokorea@foundryco.com

What you need to know — and do — about AI inferencing

14 January 2026 at 07:30

Inferencing is an important part of how the AI sausage is made. But, unlike the proverbial sausage — whose making is best left unobserved — it’s critical to know what inference is, why it’s important and what decisions you can and should make around it. Over the past two years, I have had the opportunity to observe and understand how generative AI (gen AI) inferencing is evolving and accelerating, both through my interactions with practitioners here at Red Hat as well as with customers.

Inferencing is actually the driving force behind what most people, in my experience, have come to understand as AI: It’s the operational phase of the machine learning process — the point at which machine learning models recognize patterns and draw conclusions from information that they haven’t seen before. This is the foundation for generative AI and for AI’s ability to think and reason like humans. As you expand the extent to which you integrate AI into new and existing applications, the decisions you make around inferencing technology will become more and more impactful.

Redefining what’s possible

During the past few years, I’ve watched the rise of AI agents redefine what’s possible in software delivery, automation and intelligent operations. But pretty much every conversation I have about deploying agents eventually circles back to the same core tension: serving and inferencing language models efficiently and securely — whether massive multi-task models capable of reasoning or smaller, fine-tuned ones trained for precision.

There are endless use cases for inferencing in enterprise applications, but I find it useful to explain the impact of inference in terms of applications in healthcare and finance. In healthcare, for example, a previously unseen piece of data — an extra heartbeat or an unexpected transaction — can be put into the context of an AI model’s training over time to identify potential risk. This combination of training and inferencing enables AI to mimic human thinking based on evidence and reasoning — the “why” of AI. Without inferencing, AI models and agents cannot apply learned data to new situations or problems presented by users.

Indeed, if you want to integrate new and existing data for decision-making, automation and/or insight generation — especially if the results must be delivered in real time at scale — you must prioritize AI inferencing capabilities as a key part of your technology stack.

OK…how do you do that?

AI inference servers enable the jump between training and inferencing. An AI inference server runs a compatible AI model; when a user prompts or queries the model for an answer, the inference server is the runtime software that uses the logic and knowledge in the model to produce that answer (as tokens). I believe it’s one of the most — if not the most — important pieces in the puzzle for enterprises building AI applications. And it’s not just me saying this: According to Market.US, the global AI inference server market size will be worth about $133.2 billion by 2034, from $24.6 billion in 2024.
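
As a rough illustration of that request-and-response loop, the sketch below sends a prompt to a model server over the OpenAI-compatible HTTP API that many inference servers expose. The base URL, model name and API key are placeholders for whatever your server or provider uses; this is a generic sketch, not the interface of any one product.

    # Sketch: query a model server over an OpenAI-compatible chat endpoint.
    # The base URL, model name and API key are placeholders; many inference
    # servers and hosted services expose a similar interface.
    import requests

    BASE_URL = "http://localhost:8000/v1"   # wherever your inference server runs
    API_KEY = "not-a-real-key"

    response = requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "your-model-name",
            "messages": [{"role": "user", "content": "Summarize why inference latency matters."}],
            "max_tokens": 128,
        },
        timeout=60,
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])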

With that said, it’s important to understand that there are some significant challenges associated with delivering inferencing capabilities. Based on the work I’ve done with customers, the biggest issues include:

  • Inferencing is resource-intensive and expensive. As with AI model training, inferencing requires a great deal of computational power, specialized hardware and robust data pipelines. All of this is especially the case when inferencing at scale, in real time, while ensuring low latency and high availability.
  • Inferencing requires sophisticated infrastructure, application and API management, and specialized skills. Deploying inference servers and applications involves hardware orchestration, networking and storage management, and the need to optimize for both performance and cost. After all, inferencing is in service of AI agents, and therefore there is a need for traditional API management and DevOps. Any mistakes you make along the way will reduce the usefulness of inferencing and could even result in inferences that are inaccurate or outdated.
  • Inferencing increases security and privacy risk. Inferencing often involves processing sensitive and proprietary data, increasingly at the network edge or on devices. Ensuring compliance with data privacy laws and protecting data at every step of the process requires a defense-in-depth strategy built into the infrastructure and across platforms.

The right inference server solution can help you address these challenges. In fact, getting it right, from the start, is imperative: As gen AI models explode in complexity and production deployments scale, inference can become a significant bottleneck, devouring hardware resources and threatening to cripple responsiveness to users and customers and inflate operational costs.

Compounding this is the linkage among configuration (platform, deployment details, tooling), models and inference. I’ve seen incorrect or mismatched configurations that have led to inefficiency, poor performance, deployment failures and scalability issues. Any one of these can result in significant negative consequences for the business, from a slow or questionable response that creates mistrust among users, to exposure of sensitive private data, to costs that quickly spiral out of control.

Inference server pathways

Once you have decided that, yes, inference servers are a must, you can go down the usual pathways to procure one: build, buy or subscribe. However, there are some issues unique to the technology that you will have to consider for each of these pathways. 

Build your own inference server

Building a bespoke inference server involves assembling hardware tailored for AI, deploying and operating advanced serving software, optimizing for scale and efficiency and securing the full environment — not to mention maintaining and updating all that over time. This approach provides maximum flexibility and control, with the ability to leverage what makes the most sense for the business.

For example, vLLM has become the de facto standard for DIY inference servers, with many organizations choosing the open source code library for its efficient use of the hardware needed to support AI workloads. This significantly improves speed, scalability and resource efficiency when deploying AI models in production environments. However, building an inference server also requires extensive knowledge and skills, as well as strict operational discipline. You will have to determine whether you have these resources in-house and, if you do, whether you can sustain all of this over time. Trust me, it is not easy!
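
For a sense of what the DIY route looks like in code, here is a short offline-inference example along the lines of vLLM's documented quickstart; the model name is just an example and exact parameters can vary by vLLM version.

    # Offline inference with vLLM, along the lines of its documented quickstart.
    # The model is an example; exact parameters may vary by vLLM version.
    from vllm import LLM, SamplingParams

    llm = LLM(model="facebook/opt-125m")              # loads the model onto available GPUs
    params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

    outputs = llm.generate(["The key challenge in serving LLMs at scale is"], params)
    for output in outputs:
        print(output.outputs[0].text)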

Subscribe to IaaS (Inference-as-a-Service)

Inference servers can also be subscribed to as a service. IaaS (no, not that IaaS, but inference-as-a-service) offerings allow organizations to deploy, scale and operate AI model inference without the need to buy and maintain dedicated hardware or set up complex local infrastructure. However, you will need to consider your organization’s tolerance for the risk associated with sending sensitive data to third-party cloud providers, as well as the latency and performance issues that can result when inferencing is performed remotely.

Other considerations include the potential for cost fluctuations and vendor lock-in. Using third-party providers may not even be an option depending on the regulatory mandates and privacy laws your organization must comply with.

Buy a pre-built inference server

A number of providers offer inference server software, and this pathway to inference bridges some of the gaps in the DIY and IaaS models. For example, you can purchase inference servers — on their own or as part of a platform — that provide DIY levels of control with managed-level ease, while also offering portability across platforms and freedom from lock-in to a single cloud provider or runtime environment.

Indeed, when acquiring a pre-built inference server, it’s important to keep cost, security, manageability and a commitment to openness and flexibility top of mind. An inference server needs to support your specific AI strategy, and not the other way around. This means that a given inference server must support any AI model, on any AI hardware accelerator (aka GPU), running on any cloud, preferably as close to your data as technically feasible.

Beyond supporting choice and flexibility, additional requirements include strong security measures (such as confidential computing practices, cryptographic model validation and strong endpoint protection), robust data governance and compliance controls, safety and guardrails (to protect against input threats via prompt injection and leaking of sensitive, private and regulated information) and flexible support for a variety of hardware and deployment environments (including edge, cloud and on-premises). These features help keep sensitive data and models protected, help meet regulatory requirements and make it possible to keep choice and control front-and-center with regards to the technology stack and vendors.

You should also prioritize inference servers that can deliver high performance and scalability while keeping costs and resources in check, with tools for efficient resource use, model version management and traffic optimization. For example, pre-built inference servers that use vLLM take advantage of the code library’s advanced memory management techniques, such as PagedAttention, continuous batching and optimized GPU kernel execution. When the number of users, prompts and models to be served increases, one has to think about scaling to meet the demands with the required response times while optimizing the available compute resources. This is where inference-aware and distributed inferencing with open-source technologies such as llm-d comes into the picture. Continuous monitoring, centralized management and automated security updates are essential for operational resilience, as are model explainability, bias detection and transparent audit trails for trusted, transparent and accountable AI.

Inference decision-making

As AI becomes increasingly embedded in business operations, adopting the right inferencing strategy — whether by building, subscribing or buying — is essential for enabling intelligent, real-time decision-making at scale.

I’ve observed that any decision around inferencing is among the most critical you can make as your organization starts or continues along the AI path. It’s no wonder, then, that these decisions are also among the most overwhelming, especially given the rate at which AI technology and its applications and place in society are evolving. The most practical way forward is to be action-oriented and to start somewhere; then experiment, learn and iterate rapidly. While it may initially be difficult to identify what’s “right,” understand that it’s more important than ever to anchor any decision in principles that will move the business forward, no matter how things change.

This article is published as part of the Foundry Expert Contributor Network.

Beyond the hype: 4 critical misconceptions derailing enterprise AI adoption

14 January 2026 at 05:15

Despite unprecedented investment in artificial intelligence, with enterprises committing an estimated $35 billion annually, the stark reality is that most AI initiatives fail to deliver tangible business value. With AI initiatives, determining ROI remains exceptionally difficult. Research reveals that approximately 80% of AI projects never reach production, almost double the failure rate of traditional IT projects. More alarmingly, studies from MIT indicate that 95% of generative AI investments produce no measurable financial returns.

The prevailing narrative attributes these failures to technological inadequacy or insufficient investment. However, this perspective fundamentally misunderstands the problem. My experience reveals another root cause that lies not in the technological aspects themselves, but in strategic and cognitive biases that systematically distort how organizations define readiness and value, manage data, and adopt and operationalize the AI lifecycle.

Here are four critical misconceptions that consistently undermine enterprise AI strategies.

1. The organizational readiness illusion

Perhaps the most pervasive misconception plaguing AI adoption is the readiness illusion, where executives equate technology acquisition with organizational capability. This bias manifests in underestimating AI’s disruptive impact on organizational structures, power dynamics and established workflows. Leaders frequently assume AI adoption is purely technological when it represents a fundamental transformation that requires comprehensive change management, governance redesign and cultural evolution.

The readiness illusion obscures human and organizational barriers that determine success. As Li, Zhu and Hua observe, firms struggle to capture value not because technology fails, but because people, processes and politics do. During my engagements across various industries, I noticed that AI initiatives trigger turf wars. These kinds of defensive reactions from middle management, perceiving AI as threatening their authority or job security, quietly derail initiatives even in technically advanced companies.

S&P Global’s research reveals companies with higher failure rates encounter more employee and customer resistance. Organizations with lower failure rates demonstrate holistic approaches addressing cultural readiness alongside technical capability. MIT research found that older organizations experienced declines in structured management practices after adopting AI, accounting for one-third of their productivity losses. This suggests that established companies must rethink organizational design rather than merely overlaying AI onto existing structures.

2. AI expectation myths

The second critical bias involves inflated expectations about AI’s universal applicability. Leaders frequently assume AI can address every business challenge and guarantee immediate ROI, when empirical evidence demonstrates that AI delivers measurable value only in targeted, well-defined and precise use cases. This expectation-reality gap contributes to pilot paralysis, in which companies undertake numerous AI experiments but struggle to scale any to production.

An S&P Global 2025 survey reveals that 42% of companies abandoned most AI initiatives during the year, up from just 17% in 2024, with the average organization scrapping 46% of proofs-of-concept before production. McKinsey’s research confirms that organizations reporting significant financial returns are twice as likely to have redesigned end-to-end workflows before selecting modeling techniques. Gartner indicates that more than 40% of agentic AI projects will be cancelled by 2027, largely because organizations pursue AI based on technological fascination rather than concrete business value.

3. Data readiness bias

The third misconception centers on data: specifically, the bias toward prioritizing volume over quality and toward claiming that data is already transparent and unbiased, well governed and contextually accurate. Executives frequently claim their enterprise data is already clean or assume that collecting more data will ensure AI success — fundamentally misunderstanding that quality, stewardship and relevance matter exponentially more than raw quantity — and misunderstanding that the definition of clean data changes when AI is introduced.

Research exposes this readiness gap: while 91% of organizations acknowledge that a reliable data foundation is essential for AI success, only 55% believe their organization actually possesses one. This disconnect reveals executives’ tendency to overestimate data readiness while underinvesting in the governance, integration and quality management that AI systems require.

Analysis by FinTellect AI indicates that in financial services, 80% of AI projects fail to reach production and of those that do, 70% fail to deliver measurable business value, predominantly from poor data quality rather than technical deficiencies. Organizations that treat data as a product — investing in master data management, governance frameworks and data stewardship — are seven times more likely to deploy generative AI at scale.

This underscores that data infrastructure represents a strategic differentiator, not merely a technical prerequisite. Our understanding and definition of data readiness should be broadened to cover data accessibility, integration and cleansing in the context of AI adoption.

4. The deployment fallacy

The fourth critical misconception involves treating AI implementation as traditional software deployment — a set-and-forget approach that’s incompatible with AI’s operational requirements. I’ve noticed that many executives believe deploying AI resembles rolling out ERP or CRM systems, assuming pilot performance translates directly to production.

This fallacy ignores AI’s fundamental characteristic: AI systems are probabilistic and require continuous lifecycle management. MIT research demonstrates that manufacturing firms adopting AI frequently experience J-curve trajectories, where productivity initially declines before longer-term gains materialize. This is because AI deployment triggers organizational disruption requiring adjustment periods. Companies failing to anticipate this pattern abandon initiatives prematurely.

The fallacy manifests in inadequate deployment management, including insufficient planning for model monitoring, retraining, governance and adaptation. AI systems can suffer from data drift as underlying patterns evolve. Organizations treating AI as static technology systematically underinvest in the operational infrastructure necessary for sustained success.

Overcoming the AI adoption misconceptions

Successful AI adoption requires understanding that deployment represents not an endpoint but the beginning of continuous lifecycle management. Despite the abundance of technological stacks available for AI deployments, a comprehensive lifecycle management strategy is essential to harness the full potential of these capabilities and effectively implement them.

I propose that the adoption journey should be structured into six interconnected phases, each playing a crucial role in transforming AI from a mere concept into a fully operational capability.

Stage 1: Envisioning and strategic alignment

Organizations must establish clear strategic objectives connecting AI initiatives to measurable business outcomes across revenue growth, operational efficiency, cost reduction and competitive differentiation.

This phase requires engaging leadership and stakeholders through both top-down and bottom-up approaches. Top-down leadership provides strategic direction, resource allocation and organizational mandate, while bottom-up engagement ensures frontline insights, practical use case identification and grassroots adoption. This bidirectional alignment proves critical: executive vision without operational input leads to disconnected initiatives, while grassroots enthusiasm without strategic backing results in fragmented pilots.

Organizations must conduct an honest assessment of organizational maturity across governance, culture and change readiness, as those that skip rigorous self-assessment inevitably encounter the readiness illusion.

Stage 2: Data foundation and governance

Organizations must ensure data availability, quality, privacy and regulatory compliance across the enterprise. This stage involves implementing modern data architecture, whether centralized or federated, supported by robust governance frameworks including lineage tracking, security protocols and ethical AI principles. Critically, organizations must adopt data democratization concepts that make quality data accessible across organizational boundaries while maintaining appropriate governance and security controls. Data democratization breaks down silos that traditionally restrict data access to specialized teams, enabling cross-functional teams to leverage AI effectively. The infrastructure must support not only centralized data engineering teams but also distributed business users who can access, understand and utilize data for AI-driven decision-making. Organizations often underestimate this stage’s time requirements, yet it fundamentally determines subsequent success.

Stage 3: Pilot use cases with quick wins

Organizations prove AI value through quick wins by starting with low-risk, high-ROI use cases that demonstrate tangible impact. Successful organizations track outcomes through clear KPIs such as cost savings, customer experience improvements, fraud reduction and operational efficiency gains. Precision in use case definition proves essential — AI cannot solve general or wide-scope problems but excels when applied to well-defined, bounded challenges. Effective prioritization considers potential ROI, technical feasibility, data availability, regulatory constraints and organizational readiness. Organizations benefit from combining quick wins that build confidence with transformational initiatives that drive strategic differentiation. This phase encompasses feature engineering, model selection and training and rigorous testing, maintaining a clear distinction between proof-of-concept and production-ready solutions.

Stage 4: Monitor, optimize and govern

Unlike traditional IT implementations, this stage must begin during pilot deployment rather than waiting for production rollout. Organizations define model risk management policies aligned with regulatory frameworks, establishing protocols for continuous monitoring, drift detection, fairness assessment and explainability validation. Early monitoring ensures detection of model drift, performance degradation and output inconsistencies before they impact business operations. Organizations implement feedback loops to retrain and fine-tune models based on real-world performance. This stage demands robust MLOps (machine learning operations) practices that industrialize AI lifecycle management through automated monitoring, versioning, retraining pipelines and deployment workflows. MLOps provides the operational rigor necessary to manage AI systems at scale and should be treated as a strategic capability rather than a tactical implementation detail.
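
As a concrete illustration of the drift detection this stage calls for, here is a minimal sketch that compares live feature values against a training baseline using a two-sample Kolmogorov-Smirnov test. The feature names, threshold and synthetic data are illustrative assumptions; production MLOps stacks typically wrap checks like this in automated monitoring and alerting pipelines.

```python
# Minimal sketch: statistical drift check comparing live feature values against a
# training baseline. The threshold and synthetic data are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def drift_report(baseline: dict[str, np.ndarray],
                 live: dict[str, np.ndarray],
                 p_threshold: float = 0.01) -> dict[str, bool]:
    """Return {feature: drifted?} using a two-sample Kolmogorov-Smirnov test."""
    report = {}
    for feature, base_values in baseline.items():
        stat, p_value = ks_2samp(base_values, live[feature])
        report[feature] = p_value < p_threshold  # low p-value -> distributions differ
    return report

# Example: synthetic baseline vs. a shifted live distribution for one feature.
rng = np.random.default_rng(0)
baseline = {"order_value": rng.normal(50, 10, 5_000)}
live = {"order_value": rng.normal(58, 10, 1_000)}   # the mean has shifted
print(drift_report(baseline, live))                  # {'order_value': True}
```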

Stage 5: Prepare for scale and adoption

Organizations establish foundational capabilities necessary for enterprise-wide AI scaling through comprehensive governance frameworks with clear policies for risk management, compliance and ethical AI use. Organizations must invest in talent and upskilling initiatives that develop AI literacy across leadership and technical teams, closing capability gaps. Cultural transformation proves equally critical: organizations must foster a data-driven, innovation-friendly environment supported by tailored change management practices. Critically, organizations must shift from traditional DevOps toward a Dev-GenAI-Biz-Ops lifecycle that integrates development, generative AI capabilities, business stakeholder engagement and operations in a unified workflow. This expanded paradigm acknowledges that AI solutions demand continuous collaboration between technical teams, business users who understand domain context and operations teams managing production systems. Unlike traditional software, where business involvement diminishes post-requirements, AI systems require ongoing business input to validate outputs and refine models.

Stage 6: Scale and industrialize AI

Organizations transform pilots into enterprise capabilities by embedding AI models into core workflows and customer journeys. This phase requires establishing comprehensive model management systems for versioning, bias detection, retraining automation and lifecycle governance. Organizations implement cloud-native platforms that provide scalable compute infrastructure. Deployment requires careful orchestration of technical integration, user training, security validation and phased rollout strategies that manage risk while building adoption. Organizations that treat this as mere technical implementation encounter the deployment fallacy, underestimating the organizational transformation required. Success demands integration of AI into business processes, technology ecosystems and decision-making frameworks, supported by operational teams with clear ownership and accountability.

Critically, this framework emphasizes continuous iteration across all phases rather than sequential progression. AI adoption represents an organizational capability to be developed over time, not a project with a defined endpoint.

The importance of system integrators with inclusive ecosystems

AI adoption rarely succeeds in isolation. The complexity spanning foundational models, custom applications, data provision, infrastructure and technical services requires orchestration capabilities beyond most organizations’ internal capacity. MIT research demonstrates AI pilots built with external partners are twice as likely to reach full deployment compared to internally developed tools.

Effective system integrators provide value through inclusive ecosystem orchestration, maintaining partnerships across model providers, application vendors, data marketplaces, infrastructure specialists and consulting firms. This ecosystem approach enables organizations to leverage best-of-breed solutions while maintaining architectural coherence and governance consistency. The integrator’s role extends beyond technical implementation to encompass change management, capability transfer and governance establishment.

I anticipate a paradigm shift in the next few years, with master system integrators leading the AI transformation journey, rather than technology vendors.

The path forward

The prevailing narrative that AI projects fail due to technological immaturity fundamentally misdiagnoses the problem. Evidence demonstrates that failure stems from predictable cognitive and strategic biases: overestimating organizational readiness for disruptive change, harboring unrealistic expectations about AI’s universal applicability, prioritizing data volume over quality and governance, and treating AI deployment as traditional software implementation.

Organizations that achieve AI success share common characteristics: they honestly assess readiness across governance, culture and change capability before deploying technology; they pursue targeted use cases with measurable business value; they treat data as a strategic asset requiring sustained investment; and they recognize that AI requires continuous lifecycle management with dedicated operational capabilities.

The path forward requires cognitive discipline and strategic patience. As AI capabilities advance, competitive advantage lies not in algorithms but in organizational capability to deploy them effectively — a capability built through realistic readiness assessment, value-driven use case selection, strategic data infrastructure investment and commitment to continuous management and adoption of the right lifecycle management framework. The question facing enterprise leaders is not whether to adopt AI, but whether their organizations possess the maturity to navigate its inherent complexities and transform potential into performance.

This article is published as part of the Foundry Expert Contributor Network.

When it comes to AI, not all data is created equal

14 January 2026 at 05:00

Gen AI is becoming a disruptive influence on nearly every industry, but using the best AI models and tools isn’t enough. Everybody’s using the same ones, but what really creates competitive advantage is being able to train and fine-tune your own models, or provide unique context to them, and that requires data.

Your company’s extensive code base, documentation, and change logs? That’s data for your coding agents. Your library of past proposals and contracts? Data for your writing assistants. Your customer databases and support tickets? Data for your customer service chatbot.

But just because all this data exists doesn’t mean it’s good.

“It’s so easy to point your models to any data that’s available,” says Manju Naglapur, SVP and GM of cloud, applications, and infrastructure solutions at Unisys. “For the past three years, we’ve seen this mistake made over and over again. The old adage garbage in, garbage out still holds true.”

According to a Boston Consulting Group survey released in September, 68% of 1,250 senior AI decision makers said the lack of access to high-quality data was a key challenge when it came to adopting AI. Other recent research confirms this. In an October Cisco survey of over 8,000 AI leaders, only 35% of companies reported having clean, centralized data with real-time integration for AI agents. And by 2027, according to IDC, companies that don’t prioritize high-quality, AI-ready data will struggle to scale gen AI and agentic solutions, resulting in a 15% productivity loss.

Losing track of the semantics

Another problem with using data that’s all lumped together is that the semantic layer gets confused. When data comes from multiple sources, the same type of information can be defined and structured in many ways. And as the number of data sources proliferates due to new projects or new acquisitions, the challenge increases. Even just keeping track of customers — the most critical data type — and resolving basic data issues is difficult for many companies.

Dun & Bradstreet reported last year that more than half of organizations surveyed have concerns about the trustworthiness and quality of the data they’re leveraging for AI. For example, in the financial services sector, 52% of companies say AI projects have failed because of poor data. And for 44%, data quality is their biggest concern for 2026, second only to cybersecurity, based on a survey of over 2,000 industry professionals released in December.

Having multiple conflicting data standards is a challenge for everybody, says Eamonn O’Neill, CTO at Lemongrass, a cloud consultancy.

“Every mismatch is a risk,” he says. “But humans figure out ways around it.”

AI can also be configured to do something similar, he adds, if you understand what the challenge is, and dedicate time and effort to address it. Even if the data is clean, a company should still go through a semantic mapping exercise. And if the data isn’t perfect, it’ll take time to tidy it up.

“Take a use case with a small amount of data and get it right,” he says. “That’s feasible. And then you expand. That’s what successful adoption looks like.”

Unmanaged and unstructured

Another mistake companies make when connecting AI to company information is to point AI at unstructured data sources, says O’Neill. And, yes, LLMs are very good at reading unstructured data and making sense of text and images. The problem is not all documents are worthy of the AI’s attention.

Documents could be out of date, for example. Or they could be early versions of documents that haven’t been edited yet, or that have mistakes in them.

“People see this all the time,” he says. “We connect your OneDrive or your file storage to a chatbot, and suddenly it can’t tell the difference between ‘version 2’ and ‘version 2 final.’”

It’s very difficult for human users to maintain proper version control, he adds. “Microsoft can handle the different versions for you, but people still do ‘save as’ and you end up with a plethora of unstructured data,” O’Neill says.

Losing track of security

When CIOs think of security as it relates to AI systems, they typically consider guardrails on the models, or protections around the training data and the data used for RAG embeddings. But as chatbot-based AI evolves into agentic AI, the security problems get more complex.

Say for example there’s a database of employee salaries. If an employee has a question about their salary and asks an AI chatbot embedded into their AI portal, the RAG embedding approach would be to collect only the relevant data from the database using traditional code, embed it into the prompt, then send the query off to the AI. The AI only sees the information it’s allowed to see and the traditional, deterministic software stack handles the problem of keeping the rest of the employee data secure.

But when the system evolves into an agentic one, the AI agents can query the databases autonomously via MCP servers. Since they need to be able to answer questions from any employee, they require access to all employee data, and keeping it from getting into the wrong hands becomes a big task.
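
Here is a minimal sketch of the RAG-style pattern described above: deterministic code scopes retrieval to the requesting employee before anything reaches the model. The function names and the stubbed model call are hypothetical placeholders for a real data-access layer and LLM client, not any particular product’s API.

```python
# Minimal sketch of the RAG-style pattern described above: deterministic code scopes
# retrieval to the requesting employee before anything reaches the model.
# `fetch_salary_record` and `call_llm` are hypothetical stand-ins.
def call_llm(prompt: str) -> str:
    # Placeholder for a real model call (e.g., an HTTP request to your LLM endpoint).
    return f"[model answer grounded in: {prompt[:80]}...]"

def fetch_salary_record(employee_id: str, requester_id: str) -> dict:
    if employee_id != requester_id:
        # Access control enforced in ordinary, deterministic code, not left to the model.
        raise PermissionError("Employees may only view their own salary record.")
    return {"employee_id": employee_id, "base_salary": 84_000, "currency": "USD"}

def answer_salary_question(requester_id: str, question: str) -> str:
    record = fetch_salary_record(requester_id, requester_id)
    prompt = (
        "Answer using only this record and nothing else.\n"
        f"Record: {record}\nQuestion: {question}"
    )
    return call_llm(prompt)  # the model only ever sees one employee's data

print(answer_salary_question("emp-001", "What is my base salary?"))
```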

According to the Cisco survey, only 27% of companies have dynamic and detailed access controls for AI systems, and fewer than half feel confident in safeguarding sensitive data or preventing unauthorized access.

And the situation gets even more complicated if all the data is collected into a data lake, says O’Neill.

“If you’ve put in data from lots of different sources, each of those individual sources might have its own security model,” he says. “When you pile it all into block storage, you lose that granularity of control.”

Trying to add the security layer in after the fact can be difficult. The solution, he says, is to go directly to the original data sources and skip the data lake entirely.

“It was about keeping history forever because storage was so cheap, and machine learning could see patterns over time and trends,” he says. “Plus, cross-disciplinary patterns could be spotted if you mix data from different sources.”

In general, data access changes dramatically when instead of humans, AI agents are involved, says Doug Gilbert, CIO and CDO at Sutherland Global, a digital transformation consultancy.

“With humans, there’s a tremendous amount of security that lives around the human,” he says. “For example, most user interfaces have been written so if it’s a number-only field, you can’t put a letter in there. But once you put in an AI, all that’s gone. It’s a raw back door into your systems.”

The speed trap

But the number-one mistake Gilbert sees CIOs making is they simply move too fast. “This is why most projects fail,” he says. “There’s such a race for speed.”

Too often, CIOs look at data issues as slowdowns, but all those things are massive risks, he adds. “A lot of people doing AI projects are going to get audited and they’ll have to stop and re-do everything,” he says.

So getting the data right isn’t a slowdown. “When you put the proper infrastructure in place, then you speed through your innovation, you pass audits, and you have compliance,” he says.

Another area that might feel like an unnecessary waste of time is testing. It’s not always a good strategy to move fast, break things, and then fix them later on after deployment.

“What’s the cost of a mistake that moves at the speed of light?” he asks. “I would always go to testing first. It’s amazing how many products we see that are pushed to market without any testing.”

Putting AI to work to fix the data

The lack of quality data might feel like a hopeless problem that’s only going to get worse as AI use cases expand.

In an October AvePoint report based on a survey of 775 global business leaders, 81% of organizations have already delayed deployment of AI assistants due to data management or data security issues, with an average delay of six months.

Meanwhile, it’s not only the number of AI projects that continues to grow but also the amount of data. Nearly 52% of respondents also said their companies were managing more than 500 petabytes of data, up from just 41% a year ago.

But Unisys’ Naglapur says it’s going to become easier to get a 360-degree view of a customer, and to clean up and reconcile other data sources, because of AI.

“This is the paradox,” he says. “AI will help with everything. If you think about a digital transformation that would take three years, you can do it now in 12 to 18 months with AI.” The tools are getting closer to reality, and they’ll accelerate the pace of change, he says.

The tech leadership realizing more than the sum of parts

14 January 2026 at 05:00

Waiting on replacement parts can be more than just an inconvenience. It can be a matter of sharp loss of income and opportunity. This is especially true for those who depend on industrial tools and equipment for agriculture and construction. So to keep things running as efficiently as possible, Parts ASAP CIO John Fraser makes sure end customer satisfaction is the highest motivation to get the tech implementation and distribution right.

“What it comes down to, in order to achieve that, is the team,” he says. “I came into this organization because of the culture, and the listen first, act later mentality. It’s something I believe in and I’m going to continue that culture.”

Bringing in talent and new products has been instrumental in creating a stable e-commerce model, so Fraser and his team can help digitally advertise to customers, establish the right partnerships to drive traffic, and provide the right amount of data.

“Once you’re a customer of ours, we have to make sure we’re a needs-based business,” he says. “We have to be the first thing that sticks in their mind because it’s not about a track on a Bobcat that just broke. It’s $1,000 a day someone’s not going to make due to a piece of equipment that’s down.”

Ultimately, this strategy supports customers with a collection of highly integrated tools to create an immersive experience. But the biggest challenge, says Fraser, is the variety of marketplace channels customers are on.

“Some people prefer our website,” he says. “But some are on Walmart or about 20 other commercial channels we sell on. Each has unique requirements, ways to purchase, and product descriptions. On a single product, we might have 20 variations to meet the character limits of eBay, for instance, or the brand limitations of Amazon. So we’ve built out our own product information management platform. It takes the right talent to use that technology and a feedback loop to refine the process.”

Of course, AI is always in the conversation since people can’t write updated descriptions for 250,000 SKUs.

“AI will fundamentally change what everybody’s job is,” he says. “I know I have to prepare for it and be forward thinking. We have to embrace it. If you don’t, you’re going to get left behind.”

Fraser also details practical AI adoption in terms of pricing, product data enhancement, and customer experience, while stressing experimentation without over-dependence. Watch the full video below for more insights, and be sure to subscribe to the monthly Center Stage newsletter by clicking here.

On consolidating disparate systems: You certainly run into challenges. People are on the same ERP system so they have some familiarity. But even within that, you have massive amounts of customization. Sometimes that’s very purpose-built for the type of process an organization is running, or that unique sales process, or whatever. But in other cases, it’s very hard. We’ve acquired companies with their own custom built ERP platform, where they spent 20 years curating it down to eliminate every button click. Those don’t go quite as well, but you start with a good culture, and being transparent with employees and customers about what’s happening, and you work through it together. The good news is it starts with putting the customer first and doing it in a consistent way. Tell people change is coming and build a rapport before you bring in massive changes. There are some quick wins and efficiencies, and so people begin to trust. Then, you’re not just dragging them along but bringing them along on the journey.

On AI: Everybody’s talking about it, but there’s a danger to that, just like there was a danger with blockchain and other kinds of immersive technologies. You have to make sure you know why you’re going after AI. You can’t just use it because it’s a buzzword. You have to bake it into your strategy and existing use cases, and then leverage it. We’re doing it in a way that allows us to augment our existing strategy rather than completely and fundamentally change it. So for example, we’re going to use AI to help influence what our product pricing should be. We have great competitive data, and a great idea of what our margins need to be and where the market is for pricing. Some companies are in the news because they’ve gone all in on AI, and AI is doing some things that are maybe not so appropriate in terms of automation. But if you can go in and have it be a contributing factor to a human still deciding on pricing, that’s where we are rather than completely handing everything over to AI.

On pooling data: We have a 360-degree view of all of our customers. We know when they’re buying online and in person. If they’re buying construction equipment and material handling equipment, we’ll see that. But when somebody’s buying a custom fork for a forklift, that’s very different than someone needing a new water pump for a John Deere tractor. And having a manufacturing platform that allows us to predict a two and a half day lead time on that custom fork is a different system to making sure that water pump is at your door the next day. Trying to do all that in one platform just hasn’t been successful in my experience in the past. So we’ve chosen to take a bit of a hybrid approach where you combine the data but still have best in breed operational platforms for different segments of the business.

On scaling IT systems: The key is we’re not afraid to have more than one operational platform. Today, in our ecosystem of 23 different companies, we’re manufacturing parts in our material handling business, and that’s a very different operational platform than, say, purchasing overseas parts, bringing them in, and finding a way to sell them to people in need, where you need to be able to distribute them fast. It’s an entirely different model. So we’re not establishing one core platform in that case, but the right amount of platforms. It’s not 23, but it’s also not one. So as we think about being able to scale, it’s also saying that if you try to be all things to all people, you’re going to be a jack of all trades and an expert in none. So we want to make sure when we have disparate segments that have some operational efficiency in the back end — same finance team, same IT teams — we’ll have more than one operational platform. Then through different technologies, including AI, ensure we have one view of the customer, even if they’re purchasing out of two or three different systems.

On tech deployment: Experiment early and then make certain not to be too dependent on it immediately. We have 250,000 SKUs, and more than two million parts that we can special order for our customers, and you can’t possibly augment that data with a world-class description with humans. So we selectively choose how to make the best product listing for something on Amazon or eBay. But we’re using AI to build enhanced product descriptions for us, and instead of having, say, 10 people curating and creating custom descriptions for these products, we’re leveraging AI and using agents in a way that allows people to build the content. Now humans are simply approving, rejecting, or editing that content, so we’re leveraging them for the knowledge they need to have, and whether this is going to be a good product listing or not. We know there are thousands of AI companies, and for us to be able to pick a winner or loser is a gamble. Our approach is to make it a bit of a commoditized service. But we’re also pulling in that data and putting it back into our core operational platform, and there it rests. So if we’re with the wrong partner, or they get acquired, or go out of business, we can switch quickly without having to rewrite our entire set of systems because we take it in, use it a bit as a commoditized service, get the data, set it at rest, and then we can exchange that AI engine. We’ve already changed it five times and we’re okay to change it another five until we find the best possible partner so we can stay bleeding edge without having all the expense of building it too deeply into our core platforms.

Southeast Asia CIOs Top Predictions on 2026: A Year of Maturing AI, Data Discipline, and Redefined Work

13 January 2026 at 01:25

As 2026 begins, my recent conversations with Chief Information Officers across Southeast Asia provided me with a grounded view of how digital transformation is evolving. While their perspectives differ in nuance, they converge on several defining shifts: the maturation of artificial intelligence, the emergence of autonomous systems, a renewed focus on data governance, and a reconfiguration of work. These changes signal not only technological advancement but a rethinking of how Southeast Asia organizations intend to compete and create value in an increasingly automated economy.

For our CIOs, the year ahead represents a decisive moment as AI moves beyond pilots and hype cycles. Organizations are expected to judge AI by measurable business outcomes rather than conceptual promise. AI capabilities will become standard features embedded across applications and infrastructure, fundamental rather than differentiating. The real challenge is no longer acquiring AI technology but operationalizing it in ways that align with strategic priorities.

Among the most transformative developments is the rise of agentic AI – autonomous agents capable of performing tasks and interacting across systems. CIOs anticipate that organizations will soon manage not a single AI system but networks of agents, each with distinct logic and behaviour. This shift ushers in a new strategic focus, agentic AI orchestration. Organizations will need platforms that coordinate multiple agents, enforce governance, manage digital identity, and ensure trust across heterogeneous technology environments. As AI ecosystems grow more complex, the CIO’s role evolves from integrator to orchestrator who directs a diverse array of intelligent systems.

As AI becomes more central to operations, data governance emerges as a critical enabler. Technology leaders expect 2026 to expose the limits of weak data foundations. Data quality, lineage, access controls, and regulatory compliance determine whether AI initiatives deliver value. Organizations that have accumulated “data debt” will be unable to scale, while those that invest early will move with greater speed and confidence.

Automation in physical environments is also set to accelerate as CIOs expect robotics to expand across healthcare, emergency services, retail, and food and beverage sectors. Robotics will shift from specialised deployments to routine service delivery, supporting productivity goals, standardizing quality, and addressing persistent labour constraints.

Looking ahead, our region’s CIOs point to the early signals of quantum computing’s relevance. While still emerging, quantum technologies are expected to gain visibility through evolving products and research. In my view, for Southeast Asia organizations, the priority is not immediate adoption but proactive monitoring, particularly in cybersecurity and long-term data protection, without undertaking premature architectural shifts.


Perhaps the most provocative prediction concerns the nature of work. As specialised AI agents take on increasingly complex task chains, one CIO anticipates the rise of “cognitive supply chains” in which work is executed largely autonomously. Traditional job roles may fragment into task-based models, pushing individuals to redefine their contributions. Workplace identity could shift from static roles to dynamic capabilities, a broader evolution in how people create value in an AI-native economy.

One CIO spotlights the changing nature of software development, where natural-language-driven “vibe coding” is expected to mature, enabling non-technical teams to extend digital capabilities more intuitively. This trend will not diminish the relevance of enterprise software, as both approaches will coexist to support different organizational needs.

CIO ASEAN Editorial final take:

Collectively, these perspectives shared by Southeast Asia’s CIO community point to Southeast Asia preparing for a structurally different digital future, defined by embedded AI, scaled autonomous systems, and disciplined data practices. The opportunity is substantial, but so is the responsibility placed on technology leaders.

As 2026 continues to unfold, the defining question will not simply be who uses AI, but who governs it effectively, integrates it responsibly, and shapes its trajectory to strengthen long-term enterprise resilience. Enjoy reading these top predictions for 2026 by our region’s most influential CIOs who are also our CIO100 ASEAN & Hong Kong Award 2025 winners:

Ee Kiam Keong
Deputy Chief Executive (Policy & Development)
concurrent Chief Information Officer
InfoComm Technology Division
Gambling Regulatory Authority Singapore
 
Prediction 1
AI will continue to lead the way; agentic AI in particular will become more popular and more widely used, and AI governance, in terms of AI risks and ethics, will receive greater focus.
 
Prediction 2
Quantum computing-related products should start to evolve and become more apparent.
 
Prediction 3
Deployment of robotic applications will widen, especially in medical and emergency response settings and in everyday activities such as retail and food and beverage.
Ng Yee Pern,
Chief Technology Officer
Far East Organization
 
Prediction 4
AI deployments will start to mature, as enterprises confront the disconnect between the inflated promises of AI vendors and the actual value delivered.
 
Prediction 5
Vibe coding will mature and grow in adoption, but enterprise software is not going away. There is plenty of room for both to co-exist.
Athikom Kanchanavibhu
Executive Vice President, Digital & Technology Transformation
& Chief Information Security Officer

Mitr Phol Group
 
Prediction 6
The Next Vendor Battleground: Agentic AI Orchestration
By 2026, AI will no longer be a differentiator; it will be a default feature, embedded as standard equipment across modern digital products. As every vendor develops its own Agentic AI, enterprises will manage not one AI, but an orchestra of autonomous agents, each optimized for its own ecosystem.
 
The new battleground will be Agentic AI Orchestration, where platforms can coordinate, govern, and securely connect agentic AIs across vendors and domains. 2026 won’t be about smarter agents, but about who can conduct the symphony best: safely, at scale, and across boundaries.
 
Prediction 7
Enterprise AI Grows Up: Data Governance Takes Center Stage
2026 will mark the transition from AI pilots to AI in production. While out-of-the-box AI will become common, true competitive advantage will come from applying AI to enterprise-specific data and context. Many organizations will face a sobering realization: AI is only as good as the data it is trusted with.
 
As AI moves into core business processes, data governance, management, and security will become non-negotiable foundations. Data quality, access control, privacy, and compliance will determine whether AI scales or stalls. In essence, 2026 will be the year enterprises learn that governing data well is the quiet superpower behind successful AI.
Jackson Ng
Chief Technology Officer and Head of Fintech
Azimut Group
 
Prediction 8
In 2026, organizations will see AI seeking power while humans search for purpose. Cognitive supply chains of specialized AI agents will execute work autonomously, forcing individuals to redefine identity at work, in service, and in society. Roles will disintegrate, giving way to a task-based, AI-native economy.

“AI-based media analytics is the answer”: OpenText presents a strategy for transforming digital asset management

12 January 2026 at 19:00

As enterprise digital assets rapidly expand beyond text to multimedia such as images, video and audio, the way companies manage internal information is being forced to change fundamentally. The volume of data generated by people, applications and things has grown beyond anything seen before, and unstructured data and multimedia assets scattered across multiple repositories make search and reuse ever more difficult. In this environment, how efficiently an organization manages and uses its digital assets is becoming a decisive factor in its competitiveness.

Against this backdrop, OpenText unveiled its next-generation, AI-driven digital asset management strategy at OpenText Summit Korea 2025 under the theme of managing enterprise digital assets through AI-based media analytics. Jang In-seok, the OpenText executive director who delivered the session, stressed the need for intelligent information management that spans multimedia data and presented ways to turn enterprise digital assets into strategic resources through AI-based analytics.

Jang explained that traditional data analytics has largely been confined to text-based structured data and after-the-fact reporting. In today’s enterprise environment, however, unstructured data such as images, video and audio is exploding, and simple storage and management are no longer enough to find and use the right information in time, he noted. “It is not the volume of data but the ability to understand and use it that determines a company’s competitiveness,” he emphasized.

As its answer to these challenges, OpenText presented OpenText Knowledge Discovery (IDOL), an AI-based knowledge discovery platform. The platform analyzes many forms of data, including text, images, audio and video, in an integrated way and connects distributed repositories so they can be managed as a single knowledge asset. With support for more than 160 content connectors and over 2,000 file formats, it can broadly collect and process digital assets scattered inside and outside the enterprise.

OpenText Knowledge Discovery uses machine learning and generative AI to deliver real-time understanding of, and insight into, data. Through natural-language, semantic and vector search, users can explore information by asking questions rather than entering simple keywords, and related-data and topic clustering helps them grasp the context connecting data points. This makes it possible to find the right information quickly and accurately even within vast multimedia holdings.
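
For readers unfamiliar with vector search, the following toy sketch illustrates the general idea: documents and a query are turned into vectors and ranked by similarity. It uses a simple bag-of-words stand-in for a learned embedding model purely to stay self-contained; it is a generic illustration, not OpenText’s API.

```python
# Generic illustration of vector search: documents and the query are embedded as
# vectors and ranked by cosine similarity. Real platforms use learned embedding
# models; the toy bag-of-words embedding keeps the example self-contained.
import math
from collections import Counter

def embed(text: str) -> Counter:
    return Counter(text.lower().split())   # toy stand-in for an embedding model

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

docs = {
    "press_video.mp4": "product launch keynote video with CEO remarks",
    "policy.docx": "travel expense policy for employees",
    "logo_pack.zip": "brand logo images in several formats",
}
query = "video of the CEO keynote"
ranked = sorted(docs, key=lambda d: cosine(embed(docs[d]), embed(query)), reverse=True)
print(ranked[0])   # press_video.mp4
```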

The media analytics capabilities in particular greatly extend the reach of enterprise digital asset management. For image, video and audio data, the platform performs AI analyses such as face and object recognition, text extraction (OCR), speech recognition and speaker identification, and event analysis, automatically generating metadata. Multimedia assets that previously had to be classified by hand can thus be managed systematically, with far greater search and reuse efficiency.

OpenText also said Knowledge Discovery helps keep enterprise digital assets secure through sensitive-information identification and security features. By automatically identifying sensitive data such as personal information and controlling access according to security policy, the platform maintains a balance between using information and protecting it, the company explained. These capabilities can be especially important for large organizations and heavily regulated industries.

Through live demos, OpenText showcased use cases including news trend analysis, image training and object recognition, and automatic metadata generation from video analysis. With these capabilities, companies can turn the vast digital assets accumulated in-house from items merely kept in storage into strategic assets that can be analyzed for insight.

“AI-based media analytics is a technology that goes beyond making enterprise digital asset management more efficient; it fundamentally changes how companies use knowledge,” Jang said. “OpenText will continue to help enterprises create more value from their data through intelligent information management.”
dl-ciokorea@foundryco.com

A partner for cloud operations: what are the strengths and limits of MCSPs?

12 January 2026 at 02:21

A managed cloud services provider (MCSP) helps an enterprise operate part or all of its cloud environment. This includes migrating systems to the cloud, monitoring and maintenance, performance improvement, running security tools and supporting cost control. MCSPs generally provide services across public, private and hybrid cloud environments.

The enterprise decides which areas of the cloud environment to entrust to the provider and which to operate directly in-house. In most cases the company and the MCSP share responsibility: the provider handles day-to-day operations and tool management, while the company retains responsibility for business decisions, data and governance.

Brent Riley, vice president for North American digital forensics and incident response at cybersecurity consulting firm CyXcel, explained that the process of selecting an MCSP is always a daunting one.

“You depend heavily on trust that the provider will perform to the level spelled out in the service-level agreement (SLA), but whether it actually does is hard to verify until an outage or a cybersecurity incident exposes the problem,” Riley said. “By that point, the damage has often already been done.” He added, “An MCSP has no physical infrastructure you can inspect and no visible work the way an on-premises environment does, which makes evaluation and selection even trickier.”

The advantages of MCSPs

Reduced operational burden: An MCSP takes over routine cloud management work, easing the burden of maintaining a large internal cloud and infrastructure organization. This is especially effective for organizations that lack deep in-house cloud or FinOps expertise.

Faster problem response: Most MCSPs provide 24/7 monitoring and support, so problems can be addressed quickly before they significantly affect users or applications.

Disaster recovery and resilience: MCSPs support the design, operation and testing of backup and disaster recovery environments. Recovery objectives are defined by the customer, but the provider’s role is to help restore systems quickly when something goes wrong.

Continuous platform management: Cloud platforms change quickly. An MCSP keeps infrastructure components up to date and manages compatibility, reducing the risk posed by outdated configurations while letting the customer retain control over the timing of major changes.

Security expertise and tooling: Cloud security requires specialized skills that are in high demand. MCSPs draw on experience with identity management, monitoring, compliance tooling and security best practices to help strengthen day-to-day security. Security responsibility is still shared between the enterprise and the provider.

Improved reliability and performance: Drawing on experience running large, complex environments, MCSPs support the design and operation of cloud infrastructure that is more stable, scalable and resilient.

Integration with existing systems: MCSPs connect cloud resources with on-premises systems, applications and identity platforms so that users and applications can use cloud services without disruption.

Predictable operations rather than cost savings: Using an MCSP can reduce internal staffing and tooling costs, but overall cloud spend does not always go down. Today, the value of an MCSP lies more in operational efficiency, expertise and responsiveness than in cheaper cloud bills.

Key considerations when choosing an MCSP

Manny Rivelo, CEO of IT management software provider ConnectWise, explained that as organizations shift toward increasingly autonomous, AI-driven services, MCSPs play an important role in making automation actually work in day-to-day operations.

Rivelo singled out operational transparency as a factor many organizations underestimate. Companies need clear visibility into how their cloud environment is designed, secured and operated, he explained. They also need to understand how agentic AI monitors systems, makes decisions and takes action, so that nothing important happens without their awareness.

“The more autonomy increases, the more operational maturity matters,” Rivelo said. “That includes disciplined data governance, strong physical and logical security, and clear incident response processes that balance automation with human oversight.” He added, “Agentic AI can detect issues, correlate signals and respond at machine speed, but setting policy, validating outcomes and exercising judgment in situations outside the expected range must still fall to people.”

Rivelo also stressed that it matters how well an MCSP fits the managed services model and the ecosystem around it. The right provider should use automation and AI to simplify operations; when automation works properly, it supports frontline staff, improves operational consistency and helps teams focus on work that actually matters instead of spending time managing yet another tool.

Jon Winsett, CEO of NPI, which helps companies get more value from software licenses and respond to vendor audits from Microsoft, Oracle, Cisco and others, pointed out that pricing flexibility is often overlooked in the MCSP selection process. The risk with MCSPs, he argued, lies less in higher upfront costs than in negotiating leverage quietly eroding over time.

Winsett added that MCSPs can be a big help for small teams or organizations still building up cloud experience. By consolidating cloud spend and packaging services such as migration support, resource optimization and cost control, they can reduce waste and make cloud operations easier. For organizations without sufficient in-house cloud or FinOps capability, those benefits can be worth a certain amount of trade-off, he said.

“As the cloud environment expands, pricing becomes increasingly opaque,” he said. “MCSPs add their own margin on top of Microsoft or Amazon Web Services (AWS) rates, up to around 8% of base usage fees, and potentially more when services are bundled.” He added, “Through this management layer, MCSPs secure profit margins of roughly 30 to 40%.”

The drawbacks of MCSPs

Ryan McElroy, vice president of technology at technology consulting firm Highline, cited loss of control as the biggest drawback of using an MCSP.

“Even with various license discounts, if the contract locks you into buying more than you need, you may not actually save money,” McElroy said. “You also have to consider that using an MCSP can expand your organization’s attack surface.” He added, “Even when a large cloud vendor like Microsoft trains MCSPs and provides guidance, if you look at the root-cause analysis reports written after major cybersecurity incidents, MCSPs show up as the attack path alarmingly often.”

Anay Nawathe, a director at research firm ISG, noted that working with an MCSP brings many benefits but also comes with clear risks.

“The MCSP should not become the central voice in architecture discussions inside your organization,” Nawathe said. “Architecture decisions must be owned internally in order to keep knowledge of core systems in-house, reduce vendor lock-in and mitigate provider-driven architectural bias relative to market best practices.”

He added that MCSPs often do not feel the same pressure over cloud cost management as the companies actually using the cloud. In the end it is the enterprise that absorbs the impact of overspending, which is why many companies bring the FinOps role back in-house to regain control over cloud costs.

Six MCSPs drawing attention in the global market

There are dozens of managed cloud services providers. To ease the research burden, here are six leading MCSPs, listed alphabetically, based on independent research and discussions with analysts. Pricing information must be obtained directly from each provider.

Accenture

Accenture delivers managed cloud services through teams and centers spread across major regions and markets worldwide. It supports the design, operation and maintenance of enterprise cloud environments, covering everything from initial cloud builds to ongoing operations including monitoring, maintenance and security. It is also notable for providing services across the major cloud platforms, including Microsoft Azure, Google Cloud and AWS. Rather than managing complex cloud systems entirely in-house, enterprises can hand day-to-day operations and technical management over to Accenture. With Accenture handling routine infrastructure operations such as system monitoring, issue response and environment updates, internal staff can focus on core business priorities.

Capgemini

Capgemini provides managed cloud services worldwide, supporting multicloud environments with a particular focus on Europe and North America. It has deep experience working with the manufacturing, retail, financial services and insurance industries. It supports application and infrastructure operations on major cloud platforms such as AWS, Microsoft Azure and Google Cloud, along with some specialized enterprise clouds. In addition to managed services covering monitoring, backup and technical support, it provides end-to-end help identifying which workloads are suited to cloud migration and then moving and operating those systems. Its services are geared more toward large enterprises with big, complex environments than toward midsize companies.

Deloitte

Deloitte provides cloud services to customers worldwide, with a heavy weighting toward North America and Europe. It is particularly strong in financial services and insurance, the public sector and healthcare. It supports multicloud environments spanning AWS, Microsoft Azure, Google Cloud, VMware Cloud and Oracle Cloud. Its focus is on planning, building and operating cloud environments aligned with business goals, with cloud transformation, including process and operational improvement, as a core area. Consulting remains its main business, but it continues to expand its managed services, primarily for large enterprises pursuing digital transformation.

HCL Technologies

HCL Technologies delivers managed cloud services through teams and centers distributed around the world. Working with major cloud providers including AWS, Microsoft Azure and Google Cloud, it designs and builds cloud environments tailored to each company’s needs and then supports stable operations. After deployment, it handles day-to-day operations such as 24/7 monitoring, performance management and incident response, applying automation and AI tools to repetitive IT tasks. It is known for supporting reliable cloud operations across industries including finance, manufacturing and healthcare.

NTT Data

NTT Data provides managed cloud services to customers worldwide, supporting a broad range of industries including manufacturing, healthcare, financial services and insurance. It takes a multicloud approach built on Microsoft Azure, Google Cloud, IBM Cloud and AWS. It supports application migration to the cloud, modernization of aging systems and transitions away from legacy technology, while drawing on capabilities from across the NTT group to offer identity and access management, networking and managed security services. The aim is to help customers build cloud-based systems that support the business more effectively.

Tata Consultancy Services

Tata Consultancy Services (TCS) works with enterprises worldwide, but its managed cloud services customers are concentrated mainly in North America and Europe. It has strong experience in financial services, life sciences and pharmaceuticals, and retail. It supports multicloud environments centered on Microsoft Azure, Google Cloud, Oracle Cloud and AWS, with some IBM Cloud as well. It runs dedicated teams for each major cloud partner and helps large enterprises develop cloud migration strategies, move existing systems and modernize applications. Its services are centered on large enterprises, with a comparatively limited focus on midsize companies.
dl-ciokorea@foundryco.com

The 37-point trust gap: It’s not the AI, it’s your organization

9 January 2026 at 09:23

I’ve been in the tech industry for over three decades, and if there’s one thing I’ve learned, it’s that the tech world loves a good mystery. And right now, we’ve got a fascinating one on our hands.

This year, two of the most respected surveys in our field asked developers a simple question: Do you trust the output from AI tools? The results couldn’t be more different!

  • The 2025 DORA report, a study with nearly 5,000 tech professionals that historically skews enterprise, found that a full 70% of respondents express some degree of confidence in the quality of AI-generated output.
  • Meanwhile, the 2025 Stack Overflow Developer Survey, with its own massive developer audience, found that only 33% of developers are “Somewhat” or “Highly” trusting of AI tools.

That’s a 37-point gap.

Think about that for a second. We’re talking about two surveys conducted during the same year, of the same profession and examining largely the same underlying AI models from providers like OpenAI, Anthropic and Google. How can two developer surveys report such fundamentally different realities?

DORA: AI is an amplifier

The mystery of the 37-point trust gap isn’t about the AI. It’s about the operational environment AI is surrounded with (more on that in the next section). As the DORA report notes in its executive summary, the main takeaway is: AI is an amplifier. Put bluntly, “the central question for technology leaders is no longer if they should adopt AI, but how to realize its value.”

DORA didn’t just measure AI adoption. They measured the organizational capabilities that determine whether AI helps or destroys your team’s velocity. And they found seven specific capabilities that separate the 70% confidence group in their survey from the 33% in the Stack Overflow results.

Let me walk you through them, because this is where we’ll get practical.

The 7 pillars of a high-trust AI environment

So, what does a good foundation look like? The DORA research team didn’t just identify the problem; they gave us a blueprint. They identified seven foundational “capabilities” that turn AI from a novelty into a force multiplier. When I read this list, I just nodded my head. It’s the stuff great engineering organizations have been working on for years.

Here are the keys to the kingdom, straight from the DORA AI Capabilities Model:

  1. A clear and communicated AI stance: Do your developers know the rules of the road? Or are they driving blind, worried they’ll get in trouble for using a tool or, worse, feeding it confidential data? When the rules are clear, friction goes down and effectiveness skyrockets.
  2. Healthy data ecosystems: AI is only as good as the data it learns from. Organizations that treat their data as a strategic asset—investing in its quality, accessibility and unification—see a massive amplification of AI’s benefits on organizational performance.
  3. AI-accessible internal data: Generic AI is useful. AI that understands your codebase, your documentation and your internal APIs is a game-changer. Connecting AI to your internal context is the difference between a helpful co-pilot and a true navigator.
  4. Strong version control practices: In an age of AI-accelerated code generation, your version control system is your most critical safety net. Teams that are masters of commits and rollbacks can experiment with confidence, knowing they can easily recover if something goes wrong. This is what enables speed without sacrificing sanity.
  5. Working in small batches: AI can generate a lot of code, fast. But bigger changes are harder to review and riskier to deploy. Disciplined teams that work in small, manageable chunks see better product performance and less friction, even if it feels like they’re pumping the brakes on individual code output.
  6. A user-centric focus: This one is a showstopper. The DORA report found that without a clear focus on the user, AI adoption can actually harm team performance. Why? Because you’re just getting faster at building the wrong thing. When teams are aligned on creating user value, AI becomes a powerful tool for achieving that shared goal.
  7. Quality internal platforms: A great platform is the paved road that lets developers drive the AI racecar. A bad one is a dirt track full of potholes. The data is unequivocal: a high-quality platform is the essential foundation for unlocking AI’s value at an organizational level.

What this means for you

This isn’t just an academic exercise. The 37-point DORA-Stack Overflow gap has real implications for how we work.

  • For developers: If you’re frustrated with AI, don’t just blame the tool. Look at the system around you. Are you being set up for success? This isn’t about your prompt engineering skills; it’s about whether you have the organizational support to use these tools effectively.
  • For engineering leaders: Your job isn’t to just buy AI licenses. It’s to build the ecosystem where those licenses create value. That DORA list of seven capabilities? That’s your new checklist. Your biggest ROI isn’t in the next AI model; it’s in fixing your internal platform, clarifying your data strategy and socializing your AI policy.
  • For CIOs: The DORA report states it plainly: successful AI adoption is a systems problem, not a tools problem. Pouring money into AI without investing in the foundational capabilities that amplify its benefits is a recipe for disappointment.

So, the next time you hear a debate about whether AI is “good” or “bad” for developers, remember the gap between these two surveys. The answer is both, and the difference has very little to do with the AI itself.

AI without a modern engineering culture and solid infrastructure is just expensive frustration. But AI with that foundation? That’s the future.

This article is published as part of the Foundry Expert Contributor Network.

Ensuring the long-term reliability and accuracy of AI systems: Moving past AI drift

9 January 2026 at 07:23

For years, model drift was a manageable challenge. Model drift refers to the phenomenon in which a given trained AI program degrades in its performance levels over time. One way to picture this is to think about a car. Even the best car experiences wear and tear once it is out in the open world, leading to below-par performance and more “noise” as it runs. It requires routine servicing like oil changes, tyre balancing, cleaning and periodic tuning.

AI models follow the same pattern. These programs can range from a simple machine learning-based model to a more advanced neural network-based model. When “out in the open world” shifts, whether through changes in consumer behavior, the latest market trends, spending patterns or any other macro- or micro-level triggers, model drift starts to appear.

In the pre-GenAI scheme of things, models could be refreshed with new data and put back on track. Retrain, recalibrate, redeploy and the AI program was ready to perform again. GenAI has changed that equation. Drift is no longer subtle or hidden in accuracy reports; it is out in the open, where systems can misinform customers, expose companies to legal challenges and erode trust in real time.

McKinsey reports that while 91% of organizations are exploring GenAI, only a fraction feel ready to deploy it responsibly. The gap between enthusiasm and readiness is exactly where drift grows, moving the challenge from the backroom of data science to the boardroom of reputation, regulation and trust.  

Still, some are showing what readiness looks like. A global life sciences company used GenAI to resolve a nagging bottleneck: Stock Keeping Unit (SKU) matching, which once took hours, now takes seconds. The result was faster research decisions, fewer errors and proof that when deployed with purpose, GenAI can deliver real business value.

This only sharpens the point: progress is possible, and AI systems can stay reliable and accurate over the long term, but not without real-time governance.

Why governance must be real-time

GenAI drift is messier than classic model drift. When a generative model drifts, it hallucinates, fabricates or misleads. That's why governance needs to move from periodic check-ins to real-time vigilance. The NIST AI Risk Management Framework offers a strong foundation, but a checklist alone won't be enough. Enterprises need coverage across two critical aspects:

  1. The first is data readiness: Ensure that enterprise data is ready for AI. That data is typically fragmented across scores of systems, and the resulting incoherence, along with gaps in data quality and data governance, leads models to drift.
  2. The other is what I call “living governance”: Councils with the authority to stop unsafe deployments, adjust validators and bring humans back into the loop when confidence slips, or better yet, before it does.

This is where guardrails matter. They’re not just filters but validation checkpoints that shape how models behave. They range from simple rule-based filters to ML-based detectors for bias or toxicity and to advanced LLM-driven validators for fact-checking and coherence. Layered together with humans in the loop, they create a defence-in-depth strategy.
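As a rough illustration of that layering, the sketch below chains the three checkpoint types in that order: a rule-based filter, a stand-in for an ML detector and a hook for an LLM-driven validator, with escalation to a human as the fallback. The validator logic, keywords and thresholds are hypothetical placeholders rather than real detectors.

```python
# A minimal defence-in-depth sketch: each guardrail can pass, block or
# escalate a model output to a human reviewer. All validators here are
# illustrative stand-ins, not production-grade detectors.
import re
from typing import Callable, List, Tuple

Verdict = Tuple[str, str]  # ("pass" | "block" | "escalate", reason)


def pii_filter(text: str) -> Verdict:
    """Rule-based checkpoint: crude pattern match for emails and card numbers."""
    if re.search(r"\b[\w.+-]+@[\w-]+\.\w+\b", text) or re.search(r"\b\d{16}\b", text):
        return "block", "possible personal data in output"
    return "pass", "no obvious PII"


def toxicity_detector(text: str) -> Verdict:
    """Stand-in for an ML-based bias/toxicity classifier."""
    score = 0.9 if "idiot" in text.lower() else 0.05  # placeholder scoring
    return ("block", "toxicity above threshold") if score > 0.8 else ("pass", "tone ok")


def llm_fact_checker(text: str) -> Verdict:
    """Hook for an LLM-driven validator; here it only escalates absolute claims."""
    if "guaranteed" in text.lower():
        return "escalate", "unverifiable absolute claim, route to human review"
    return "pass", "no red flags"


GUARDRAILS: List[Callable[[str], Verdict]] = [pii_filter, toxicity_detector, llm_fact_checker]


def validate(output: str) -> Verdict:
    """Run cheap deterministic checks first, expensive validators last."""
    for check in GUARDRAILS:
        verdict, reason = check(output)
        if verdict != "pass":
            return verdict, f"{check.__name__}: {reason}"
    return "pass", "all guardrails cleared"


print(validate("Returns are guaranteed to double your money, email me at rep@example.com"))
```

Ordering is the design point: cheap deterministic checks run first, so the more expensive LLM validator only sees outputs that have already cleared the basic filters.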

Culture, people and the hidden causes of drift

In many enterprises, drift escalates fastest when ownership is fragmented. The strongest and most successful programs designate a senior leader who carries responsibility, with their credibility and resources tied directly to system performance. That clarity of ownership forces everyone around them to treat drift seriously.

Another, often overlooked, driver of drift is the state of enterprise data. In many organizations, data sits scattered across legacy systems, cloud platforms, departmental stores and third-party tools. This fragmentation creates inconsistent inputs that weaken even well-designed models. When data quality, lineage, or governance is unreliable, models don’t drift subtly; they diverge quickly because they are learning from incomplete or incoherent signals. Strengthening data readiness through unified pipelines, governed datasets and consistent metadata becomes one of the most effective ways to reduce drift before it reaches production.

With AI assistance in the mix, a disciplined developer becomes more effective, while a careless one generates more errors. But individual gains are not enough; without coherence across the team, overall productivity stalls. Success comes when every member adapts in step, aligned in purpose and practice. That is why reskilling is not a luxury.

Culture now extends beyond individuals. In many enterprises, AI agents are beginning to interact directly with one another, both agent-to-agent and human-to-agent. That’s a new collaboration loop, one that demands new norms and maturity. If the culture isn’t ready, drift doesn’t creep in through the algorithm; it enters through the people and processes surrounding it.

Lessons from the field

If you want to see AI drift in action, just scan recent headlines. Fraudsters are already using AI cloning to generate convincing impostors, tricking people into sharing information or authorizing transactions.

But there are positive examples too. In financial services, for instance, some organizations have begun deploying layered guardrails (personal data detection, topic restriction and pattern-based filters) that act like brakes before the output ever reaches the client. One bank I worked with moved from occasional audits to continuous validation. The result wasn't perfection, but containment. Drift still appeared, as it always does, but it was caught upstream, long before it could damage customer trust or regulatory standing.

Why proactive guardrails matter

Regulators are beginning to align, and the signals are encouraging. The White House Blueprint for an AI Bill of Rights stresses fairness, transparency and human oversight. NIST has published risk frameworks. Agencies like the SEC and the FDA are drafting sector-specific guidance.

Regulatory efforts are progressing, but they inevitably move more slowly than the pace of technology. In the meantime, adversaries are already exploiting the gaps with prompt injections, model poisoning and deepfake phishing. As one colleague told me bluntly, “The bad guys adapt faster than the good guys.” He was right and that asymmetry makes drift not just a technical problem, but a national one.

That's why forward-thinking enterprises aren't just meeting regulatory mandates; they are proactively going beyond them to safeguard against emerging risks. They're embedding continuous evaluation, streaming validation and enterprise-grade protections like LLM firewalls now. Retrieval-augmented generation systems that seem fine in testing can fail spectacularly as base models evolve. Without real-time monitoring and layered guardrails, drift leaks through until customers or regulators notice, usually too late.
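To show what continuous evaluation can look like in practice, here is a minimal sketch that replays a small golden set of questions against whatever pipeline is being monitored and raises an alert when quality dips below an agreed baseline. The answer_question callable, the golden set and the baseline value are hypothetical assumptions standing in for a team's real evaluation harness.

```python
# Continuous-evaluation sketch: replay a small golden question set against
# the system under observation and alert when quality dips below a baseline.
# answer_question(), GOLDEN_SET and the baseline value are hypothetical.
from typing import Callable, Dict, List

GOLDEN_SET: List[Dict[str, str]] = [
    {"question": "What is our refund window?", "expected": "30 days"},
    {"question": "Which fee applies to late payments?", "expected": "25 euros"},
]


def exact_match(answer: str, expected: str) -> bool:
    """Simplest possible grader; real systems often use semantic or LLM-based grading."""
    return expected.lower() in answer.lower()


def evaluate(answer_question: Callable[[str], str], baseline: float = 0.95) -> None:
    hits = sum(
        exact_match(answer_question(item["question"]), item["expected"])
        for item in GOLDEN_SET
    )
    score = hits / len(GOLDEN_SET)
    if score < baseline:
        # In production this would page an owner or tighten an LLM-firewall rule.
        print(f"ALERT: eval score {score:.2f} fell below baseline {baseline:.2f}")
    else:
        print(f"OK: eval score {score:.2f}")


# Wire in whichever RAG pipeline or model endpoint is being monitored.
evaluate(lambda question: "Refunds are accepted within 30 days of purchase.")
```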

The leadership imperative

So, where does this leave leaders? With an uncomfortable truth: AI drift will happen. The test of leadership is whether you’re prepared when it does.

Preparation doesn’t look flashy. It’s not a keynote demo or a glossy slide. It’s continuous monitoring and treating guardrails not as compliance paperwork but as the backbone of reliable AI.

And it’s balanced. Innovation can’t mean moving fast and breaking things in regulated industries. Governance can’t mean paralysis. The organizations that succeed will be the ones that treat reliability as a discipline, not a one-time project.

AI drift isn’t a bug to be patched; it’s the cost of doing business with systems that learn, adapt and sometimes misfire. Enterprises that plan for that cost, with governance, culture and guardrails, won’t just avoid the headlines. They’ll earn the trust to lead.

AI drift forces us to rethink what resilience really means in the enterprise. It’s no longer about protecting against rare failure; it’s about operating in a world where failure is constant, visible and amplified. In that world, resilience is measured not by how rarely systems falter, but by how quickly leaders recognize the drift, contain it and adapt. That shift in mindset separates organizations that merely experiment with GenAI from those that will scale it with confidence.

My view is straightforward: treat drift as a given, not a surprise. Build governance that adapts in real time. Demand clarity on why your teams are using GenAI and what business outcomes justify it. Insist on accountability at the leadership level, not just within technical teams. And most importantly, invest in culture because the biggest source of drift is not always the algorithm but the people and processes around it.

This article is published as part of the Foundry Expert Contributor Network.

MCSP buyer’s guide: 6 top managed cloud services providers — and how to choose

9 January 2026 at 05:00

A managed cloud services provider (MCSP) helps organizations run some or all of their cloud environments. This can include moving systems to the cloud, monitoring and maintaining them, improving performance, managing security tools, and helping control costs. MCSPs typically work across public, private, and hybrid cloud environments.

Organizations decide which parts of their cloud environments they want the provider to handle and which parts they want to keep in-house. In most cases, the company and the MCSP share responsibility. The provider manages day-to-day operations and tooling, while the organization stays accountable for business decisions, data, and governance.

Choosing an MCSP is always an unnerving experience, says Brent Riley, VP of digital forensics and incident response for North America at cybersecurity consultancy CyXcel.

“So much trust is placed in their ability to perform to the level promised in their SLA, but it can be tough to validate whether they’re being met until there’s an outage or cybersecurity incident that reveals issues,” he says. “At that point, the damage is done. MCSPs are even more challenging to evaluate and select as there’s no physical infrastructure to inspect, and no visible work being done within an on-premise infrastructure.”

Benefits of using an MCSP

Reduced operational burden: MCSPs can take on day-to-day cloud management tasks, reducing the need for large internal cloud and infrastructure teams. This is especially helpful for organizations that don’t have deep cloud or FinOps expertise in-house.

Faster problem response: Most MCSPs provide 24/7 monitoring and support. When issues arise, their teams can respond quickly, often before problems significantly impact users or applications.

Support for disaster recovery and resilience: MCSPs help design, manage, and test backup and disaster recovery setups. While customers still define recovery goals, providers help ensure systems can be restored quickly if something goes wrong.

Ongoing platform management: Cloud platforms change frequently. MCSPs help keep infrastructure components current and compatible, reducing the risk of outdated configurations while allowing customers to control when major changes are introduced.

Security expertise and tooling: Cloud security requires specialized skills in high demand. MCSPs bring experience with identity management, monitoring, compliance tools, and security best practices. Security remains a shared responsibility, but providers help strengthen day-to-day protection.

Improved reliability and performance: With experience running large and complex environments, MCSPs can help design and operate cloud infrastructure that’s more stable, scalable, and resilient.

Integration with existing systems: MCSPs help connect cloud resources with on-prem systems, applications, and identity platforms. This makes it easier for users and applications to access cloud services without disruption.

More predictable operations, not always lower costs: While MCSPs can reduce internal staffing and tooling costs, they don’t always lower overall cloud spend. Their value today is more about operational efficiency, expertise, and speed than cheaper cloud pricing.

Key considerations when choosing an MCSP

As organizations move toward more autonomous, AI-driven services, MCSPs play an important role in turning automation into something that actually works every day, says Manny Rivelo, CEO at ConnectWise, a provider of IT management software.

Rivelo says one thing matters more than many teams realize: operational transparency. Organizations need a clear view into how their cloud environments are designed, secured, and managed, as well as how agentic AI monitors systems, makes decisions, and takes action so nothing important happens behind the scenes without their knowledge.

“Operational maturity matters more as autonomy increases,” Rivelo says. “This includes disciplined data governance, strong physical and logical security, and well-defined incident response processes that balance automation with human oversight. While agentic AI can detect issues, correlate signals, and respond at machine speed, humans remain essential to set policy, validate outcomes, and make judgment calls when conditions fall outside expected patterns.”

It’s also important that the MCSP fits well with the managed services model and the broader ecosystem around it, according to Rivelo. The right provider should use automation and AI to make things simpler. After all, when automation is done right, it backs up the people doing the work, brings more consistency to operations, and gives teams more time to focus on what actually matters, not manage another set of tools.

One factor that often gets missed when choosing an MCSP is how flexible pricing really is, says Jon Winsett, CEO at NPI, which helps enterprises get more value from their software licenses and navigate audits from vendors such as Microsoft, Oracle, and Cisco. The risk with an MCSP is usually not paying more at the start but losing negotiating power over time without noticing it.

MCSPs can be a big help for smaller teams or organizations still building cloud expertise, he adds. By combining cloud spend and packaging services, such as migration support, rightsizing, and cost controls, they can cut down on waste and make the cloud easier to run. For organizations without strong cloud or FinOps skills in-house, those benefits can be worth the tradeoffs.

“As cloud environments grow, pricing often becomes less clear,” says Winsett. “MCSPs add their own markup on top of Microsoft or AWS pricing, up to 8% for basic spend and more when services are bundled. That managed layer is how MCSPs reach profit margins of roughly 30 to 40%.”

Disadvantages of working with an MCSP

The biggest disadvantage of using an MCSP is loss of control, according to Ryan McElroy, VP of technology at tech consulting firm Hylaine.

“If you get discounts for various licenses, but you’re locked into contracts and have to overbuy, then you may not be saving money,” he says. “And an MCSP adds to your organization’s attack surface area. While Microsoft and other large cloud vendors train their MCSPs and provide guidance, if you read the root cause analysis reports produced after major cybersecurity incidents, you’ll find it’s a worryingly common vector.”

Anay Nawathe, director at research and advisory firm ISG, says that while working with MCSPs has many benefits, there are also risks.

“Your MCSP shouldn’t be the main voice of architecture in your organization,” he says. “Architectural decisions should be owned internally to maintain key systems knowledge in-house, reduce vendor lock-in, and mitigate architectural bias from a provider compared to market best practices.”

Additionally, he adds that MCSPs don’t always feel the same pressure to manage costs as the companies using the cloud. In the end, enterprises are the ones who feel the impact of overspending, which is why many bring FinOps roles back in-house to take direct control of cloud costs, he says.

6 top MCSPs

There are dozens of MCSPs, so to help streamline the research, we highlight the following providers, arranged alphabetically, based on independent research and discussions with analysts. Organizations should contact providers directly for pricing information.

Accenture

Accenture offers its managed cloud services to customers worldwide, backed by teams and centers in most major regions and markets. It helps organizations design, run, and maintain their cloud environments, and supports everything from initial cloud setup to ongoing operations, including monitoring, maintenance, and security. Accenture also works across major cloud platforms, such as Microsoft Azure, Google Cloud, and AWS. Instead of managing complex cloud systems entirely in-house, companies can use Accenture’s services to handle routine operations and technical oversight. This includes monitoring systems, addressing issues as they come up, and keeping cloud environments updated. Overall, Accenture manages the day-to-day cloud infrastructure so organizational in-house staff can focus on key business priorities.

Capgemini

Capgemini provides managed cloud services worldwide and supports multicloud environments across all major regions, with much of its work centered in Europe and North America. The company works closely with industries such as manufacturing, retail, financial services, and insurance. Capgemini helps organizations run and manage applications on major cloud platforms, including AWS, Microsoft Azure, and Google Cloud, as well as specialized enterprise clouds. Its managed services cover both infrastructure and applications, including monitoring, backups, and technical support. Capgemini also helps companies decide which workloads make sense to move to the cloud, migrate those systems, and manage them over time. The firm is best suited for large enterprises and complex environments rather than midsize organizations.

Deloitte

Deloitte provides cloud services to customers around the world, with much of its work focused on organizations in North America and Europe. It works heavily with industries in financial services and insurance, government, and healthcare. Deloitte supports multicloud environments and works with platforms including AWS, Microsoft Azure, Google Cloud, VMware Cloud, and Oracle Cloud. The firm helps companies plan, build, and operate cloud environments tailored to business goals. A key focus is cloud transformation, including identifying where cloud tech can improve processes and operations. Deloitte is best suited for large enterprises pursuing digital transformation, and while consulting remains its core business, the firm continues to expand its managed services offerings.

HCL Technologies

Managed cloud services from HCL Technologies are offered globally, and supported by teams and centers around the world. HCL helps organizations move their systems to the cloud and keep them running smoothly over time. It works with major cloud providers, such as AWS, Microsoft Azure, and Google Cloud to design and set up cloud environments that match each business’s needs. Once everything’s in place, HCL handles the daily operations, including around-the-clock monitoring, performance management, and fixing issues as they arise, and also uses automation and AI tools for routine IT tasks. Overall, HCL helps organizations maintain reliable cloud systems across industries like banking, manufacturing, and healthcare.

NTT Data

NTT Data delivers managed cloud services to customers globally. It supports a wide range of industries, including manufacturing, healthcare, financial services, and insurance. NTT Data takes a multicloud approach, with managed services customers running on Microsoft Azure, Google Cloud, IBM Cloud, and AWS. NTT Data also helps companies move applications to the cloud, modernize aging systems, and move away from legacy tech, and it draws on expertise from across the NTT Group to offer services like identity and access management, networking, and managed security, helping customers build cloud-based systems that better support their businesses.

Tata Consultancy Services

TCS works with organizations worldwide, but most of its cloud and managed services customers are in North America and Europe. The company has strong experience in industries such as financial services, life sciences and pharmaceuticals, and retail. TCS supports multicloud environments and works with leading cloud platforms like Microsoft Azure, Google Cloud, Oracle Cloud, and AWS, with some support for IBM Cloud. TCS has dedicated teams for its largest cloud partners and helps large enterprises plan cloud migrations, move existing systems, and modernize applications for the cloud. The majority of this work is focused on large enterprises, with limited emphasis on midsize organizations.


Perfumes just ‘for you’ and 600 makeup shades: AI for personalized perfumery and cosmetics, and a more resilient industry

8 January 2026 at 11:10

Some people are so loyal to certain perfumes that, for those close to them, that scent becomes forever tied to them. When they pass someone on the street wearing those same olfactory notes, they think of their person. It is true, of course, that the fragrance is not exactly theirs, although that boundary could be crossed at any moment thanks to technology. If artificial intelligence is revolutionizing other industries, it is already doing the same, as the sector confirms, in cosmetics and perfumery, opening the door to personalized products adapted to each individual.

The perfumery and cosmetics industry has a notable global economic impact, possibly because it is a sector that cuts across different demographics. In perfumes alone, worldwide spending reaches $56.75 billion, according to estimates from Grand View Research, and it will climb to $78.85 billion by the end of the decade. To that figure you would have to add spending on cosmetics and hygiene products, such as soaps and shampoos, to get the full picture of worldwide investment in the sector.

In Spain, according to the latest data from Stanpa, the national perfumery and cosmetics association, the industry accounts for 1.03% of Spanish GDP. Spain consumes €11.2 billion worth of these products a year, but it also exports a significant share of what it produces. Exports by Spanish brands were growing at a rate of 23% in 2024.

The potential of AI

From the outside, when people think of fragrances, makeup or even hygiene products, they tend to picture something almost artisanal, driven by emotion and somewhat artistic impulses. Yet this is a sector with a great deal of science, a great deal of innovation and, also, a great deal of technology. Artificial intelligence is one of its emerging pieces.

And, given that association with a kind of creative genius, is it hard to integrate AI in terms of corporate culture? “As with any significant technological transformation, adopting AI is a cultural challenge,” explains Marc Ortega Aguasca, director of Data & AI at Bella Aurora Labs. “In our case, we have worked from the start so that these tools are not perceived as an ‘IT toy,’ but as a real business enabler,” he explains. They have relied on “active listening to the needs of each area and jointly building solutions that deliver tangible value,” he notes. AI thus becomes part of the “company's creative culture, as an ally and not as a substitute.”

Bella Aurora Labs' experience is a clear example of something the industry is sensing. AI has a “strategic role” and “growing importance,” as participants concluded at a sector event focused on this technology organized by Stanpa this December. Artificial intelligence thus becomes a “lever for competitiveness, efficiency and industrial modernization.” The uses it is being put to are quite similar to those other sectors are applying: AI automates tasks and performs data analytics, improves traceability and efficiency, fine-tunes the logistics chain and supports compliance.

At the same time, it is entering areas specific to this industry, such as improving formulations, quality control, accelerating launches, and work in marketing and customer service. Bella Aurora, for example, has just deployed an internal chatbot that answers natural-language queries about company data. “This frees data owners from repetitive support tasks and, at the same time, gives users faster answers,” says Ortega Aguasca.

Another area where the industry sees potential for artificial intelligence is sustainability. “AI will also have a decisive impact on sustainability, by making it possible to simulate environmental scenarios, optimize circular supply chains and make data-driven decisions about materials, packaging and logistics,” says Adrià Martínez, managing director of the Beauty Cluster, whose members include cosmetics, perfumery and personal care companies.

As Stanpa maintains, this technology is already generating real value in the industry. “AI is already present in the IT infrastructure of many cosmetics and perfumery companies, not only in marketing or sales but across the board,” confirms Martínez.

“AI is already present in the IT infrastructure of many cosmetics and perfumery companies, not only in marketing or sales but across the board,” says Adrià Martínez, managing director of the Beauty Cluster

The era of hyper-personalization

At the same time, AI is positioning itself as one of the keys to keeping pace with market developments and consumer preferences. One of the sector trends for 2026 will be, according to Beauty Cluster projections, hyper-personalization, that pursuit of the unique and the personal. In short, you could say that the person with a fragrance that smells like them now wants the olfactory notes to be literally theirs alone.

“Hyper-personalization is no longer an aspirational concept but an operational reality,” explains Martínez. The sector is already seeing it in “AI-based skin diagnostics, intelligent recommenders in ecommerce and virtual assistants capable of adapting messages, routines and offers in real time based on the user's behavior, context and historical data.”

Products now respond to what you specifically want. Nor is this a matter of future potential; it is already being offered in sales channels. One example is Skinceuticals Custom DOSE, available at only 8 points of sale in Spain, which uses an assessment to create a personalized serum; another is the Tonework foundation from South Korea's Amorepacific, with more than 600 color options. “In Spain we already see brands and companies using AI-powered facial scanners, advanced questionnaires and predictive models to design personalized routines and adjust assortment and digital promotions,” notes Martínez. He adds that this work is not limited to the customer experience; it also reaches what happens behind the scenes. “We already know of cases among our members where it is also applied in formulation processes, production planning, stock management and logistics, making it possible to adapt batches, timelines and resources to increasingly fragmented demand,” he says.

The sector is, likewise, fully aware of the potential challenges of this bet. “We are at a preliminary but absolutely necessary stage in order to tackle hyper-personalization with confidence,” says Ortega Aguasca of what his company is doing. “To extract maximum value from the models that can drive our personalization strategy, we believe it is essential to first have a solid, well-structured data policy,” he says. Success comes from feeding AI good data, but also from doing so securely.

After all, getting it right matters even more when looking to the future, in which the industry assumes hyper-personalization will keep growing and AI will therefore play an even more central role.

“To extract maximum value from the [AI] models that can drive our personalization strategy, we believe it is essential to first have a solid, well-structured data policy,” reflects Marc Ortega Aguasca, director of Data & AI at Bella Aurora Labs

New opportunities

“Everything points to cosmetics and perfumery evolving toward increasingly made-to-order models, in both product and service,” says Martínez. Products will be made for “each person, moment and lifestyle,” which will require greater flexibility in production and “much smarter supply chains.” “AI will act as the ‘brain’ that connects data, formulation, production and logistics in real time,” he sums up.

Likewise, this intense personalization will bring experiences and offerings now considered ultra-luxury (those one-of-a-kind perfumes, for example) to much broader audiences. Technology is democratizing access. “This will transform perfumery into an intimate yet scalable experience, combining sensory exclusivity with industrial efficiency,” the expert illustrates. The potential is enormous, ranging from co-creating fragrances via interactive platforms to adjusting formulas to respect the needs of each skin type.

Made-to-order products are the juiciest headline, but they are not the only potential the sector sees in AI or in the integration of other tools, such as the boom in wearables or the robotization of warehouses. Technology is perceived “as a strategic engine of differentiation and competitiveness for the sector.” “Whether or not we are talking about artificial intelligence, technology is today an essential pillar for standing out in our sector and continuing to grow as a leading company,” says Ortega Aguasca. The various IT tools identify growth levers and improve operational efficiency.

“Technology is one of the main factors allowing the perfumery and cosmetics sector to gain resilience in an increasingly complex and uncertain environment,” adds Martínez. At the same time, it drives innovation. “Biotechnology makes it possible to develop more effective and sustainable formulas, augmented reality improves the shopping experience and reduces returns, and the use of sensors and IoT [internet of things] enables continuous monitoring of industrial processes,” he highlights.
