
“Security essentials for the year ahead”: Global leaders pick their 2026 security priorities

20 January 2026 at 00:47

Heading into 2026, the contest between CISOs and ever-evolving cyber attackers is intensifying once again, and carefully planned, robust cybersecurity projects may be the most effective way to stay a step ahead of attackers and keep the initiative.

From data governance to zero trust, here are seven key cybersecurity projects every CISO should consider adopting over the coming year.

1. Transforming identity and access management for the AI era

As AI and automation evolve, managing not only employees’ access rights but also the identities of AI agents and machine processes is becoming an essential element of cybersecurity. Anthony Berg, US cyber identity leader at Deloitte, called this shift a core security challenge.

“The rapid advance of AI, particularly agentic AI, is prompting many security leaders to rethink their identity management strategies,” Berg said. “The demand for more sophisticated identity governance spanning both human and non-human identities is driving CISOs and CIOs to restructure their security frameworks for the next wave of digital transformation.”

As generative and agentic AI enable new business models and greater autonomy, he noted, it is critical for organizations to proactively modernize their identity and access management (IAM) programs. Securing access across every digital identity, he explained, is essential for protecting sensitive data, maintaining regulatory compliance and ensuring operational efficiency.

Maturing IAM capabilities such as lifecycle management, strong authentication, and fine-grained role- and policy-based access control, Berg said, can block unauthorized access and reduce the risk posed by compromised credentials.

Extending those controls to non-human identities, he added, ensures that every entity interacting with systems or data is properly governed, and combining regular access reviews with ongoing training raises the level of information protection and helps organizations adopt advanced AI technologies more safely.

2. Strengthening email security

Mary Ann Blair, CISO of Carnegie Mellon University, said phishing remains a primary attack vector for stealing credentials and deceiving victims. She warned that threat actors are crafting increasingly sophisticated phishing attacks capable of effectively evading mail providers’ detection capabilities.

“Traditional multi-factor authentication techniques are now being defeated repeatedly, and once attackers get in, they move quickly to monetize the intrusion,” Blair said.

In this increasingly difficult email security environment, Blair advised CISOs to consider outside specialist support as they pursue security projects. Several vendors she has approached, she said, are responding to requests for proposal (RFPs) while also providing test environments for trialing the latest security capabilities.

3. Using AI to detect code vulnerabilities

Aman Priyanshu, an AI researcher at Cisco, is building agents that autonomously hunt for vulnerabilities using small language models (SLMs) capable of operating effectively even in resource-constrained environments.

Cybersecurity, Priyanshu explained, is inherently a long-context domain. The latest large language models (LLMs) can handle it, but at significant cost and latency. “An organization’s codebase typically consists of thousands of files and millions of lines of code,” he said. “When you need to find a specific vulnerability, feeding all of that code into a large model either becomes prohibitively expensive or simply exceeds the context limit.”

The project, Priyanshu said, aims to implement an approach similar to how most security analysts hunt for vulnerabilities: reasoning about likely weak points, exploring those areas, pulling in the relevant code and analyzing it, then repeating the process until a weakness is found. “Our research has already shown that this approach works,” he said. “In 2026, we want to scale it and rigorously validate how well it applies in real-world environments.”
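
Priyanshu’s description maps onto a simple retrieve-and-analyze loop. Below is a minimal Python sketch of that pattern, not Cisco’s implementation: `slm_complete` stands in for any small-language-model call, and the keyword-scoring retriever and re-planning step are deliberately naive placeholders.

```python
import pathlib

def retrieve_candidates(repo_root: str, hints: list[str], max_files: int = 5) -> list[pathlib.Path]:
    """Rank files by how many risk hints (e.g. 'eval', 'pickle') they contain."""
    scored = []
    for path in pathlib.Path(repo_root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        score = sum(text.count(h) for h in hints)
        if score:
            scored.append((score, path))
    return [p for _, p in sorted(scored, reverse=True)[:max_files]]

def hunt(repo_root: str, slm_complete, max_rounds: int = 3) -> list[str]:
    """Reason about likely weak spots, pull only those files, analyze, repeat."""
    findings = []
    hints = ["eval", "exec", "pickle.loads", "subprocess", "SELECT"]  # initial hypotheses
    for _ in range(max_rounds):
        for path in retrieve_candidates(repo_root, hints):
            # Only this file enters the model's context -- never the whole codebase.
            report = slm_complete(
                "Review this code for vulnerabilities; reply with findings or NONE:\n"
                + path.read_text(errors="ignore")[:8000]
            )
            if "NONE" not in report:
                findings.append(f"{path}: {report}")
        # In a fuller agent, the model itself would propose the next hypotheses.
        hints = hints[1:] + hints[:1]  # trivial stand-in for re-planning
    return findings
```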

Penetration testers and security researchers have already been using generative AI for vulnerability discovery, and AI-driven bug hunting is both accelerating how quickly flaws are found and broadening who can find them. That, in turn, is changing the criteria for designing effective bug bounty programs.

4. Strengthening AI governance and data protection across the enterprise

With AI risks and autonomous threats reshaping the cybersecurity landscape, Attila Török, CISO of AI-based communications and collaboration company GoTo, is focused on securely managing and monitoring every AI tool inside the organization while blocking unapproved platforms to prevent data leakage.

“By applying secure-by-design principles and aligning cybersecurity with business strategy, we’re building resilience, trust and compliance at the same time,” Török said. “Those are the key differentiators in the AI era.” But like any large security initiative, he warned, an approach confined to a single team or department cannot succeed.

“Establishing practices that ensure success now and in the future requires collaboration with every department across the company,” he said.

5. An AI-first strategy for stronger security operations

Matthew Sharp, CISO of sales performance management company Xactly, said both quantitative analysis and the shifting threat landscape show that AI trust must be a top priority. A Christensen-style analysis of his security operations, he said, found that roughly 67% of functional tasks, including evidence collection, alert validation and compliance reporting, are mechanical in nature and can be automated.

“Attackers are already using AI to attack at machine speed,” Sharp said. “You cannot defend against AI-driven attacks by responding at human speed.” Applying AI trust to security operations, he added, keeps human analysts from having to handle work that machines can do more efficiently, and lifts response capacity in the same way.

As AI becomes a practical defensive tool, CISOs are also rethinking how their security teams operate so they can take full advantage of its potential.

6. Shifting to zero trust by default

Pavlo Tkhir, CTO of software development company Euristiq, named implementing zero trust architecture across both internal and customer development as his key project for 2026. “We’ve long worked with security-conscious enterprises, but in 2026, market and regulatory expectations will rise sharply, making the shift to a zero-trust-by-default model a strategic imperative,” he said.

The project, Tkhir said, is about more than hardening his own company’s security. “From high-load enterprise systems to AI-based solutions where data integrity is critical, it will let us build more secure platforms for our customers,” he said. “By applying zero trust across infrastructure, development, CI/CD and internal tooling, we’ll establish a unified security baseline and transfer it to customer architectures as well.”

The initiative was not triggered by any specific security incident, Tkhir explained. “We’ve seen that threat models are changing faster than ever,” he said. “Attacks no longer happen only at the perimeter; they increasingly come through internal elements such as library vulnerabilities, APIs, weak authentication mechanisms and misconfigured permissions.” That shift, he said, is what prompted a fundamental rethink of the company’s approach.

7. Strengthening enterprise-wide data governance

Barry Kunst, general manager at enterprise data, AI and data fabric company Solix Technologies, named building a unified data governance and security framework across all enterprise systems as his 2026 priority. The initiative, he said, also aims to address the shadow data problems, inconsistent access controls and compliance gaps that many organizations still face.

“Standardizing how data is classified, protected and monitored across every environment reduces the biggest security blind spot: untracked sensitive data,” Kunst said. “This project will raise our security posture by increasing visibility and strengthening policy-based controls to reduce exposure across multicloud environments.”

He said the project was prompted by watching customers become overwhelmed by explosive data growth and new regulatory demands. Security and cloud engineering teams are now working with key technology partners, he said, with deployment targeted for the third quarter of 2026.

Veeam Software names Hong Sung-gu as new Korea country manager

19 January 2026 at 02:54

Veeam Software (Veeam) said the appointment comes at a strategically important moment, as the company provides resilience, security, governance and privacy solutions to keep large-scale AI environments safe, compliant and auditable.

“Hong is a proven go-to-market leader with a strong network across Korean enterprises and the public sector,” said Beni Sia, senior vice president and general manager for Asia Pacific and Japan (APJ) at Veeam Software. “As many Korean companies accelerate their cyber resilience efforts, his experience building high-performing teams and working with channel partners will help customers protect and recover their data and create more value.”

[Photo: Hong Sung-gu, Veeam Software Korea country manager. Credit: Veeam Software]

Hong brings more than 29 years of experience in the enterprise technology industry. Before joining Veeam Software, he served as Korea country manager for DataStax and for MariaDB Corporation, and at Oracle he was a senior executive leading the sales division responsible for the full Oracle product portfolio for major Korean enterprises. He has also held leadership and sales roles at Palo Alto Networks, F5 Networks and BMC Software. Drawing on experience across enterprise and channel sales, he has successfully led team building, partner ecosystems and customer programs in the Korean market.

“Veeam Software’s data resilience vision matches the needs of Korean companies that are modernizing IT infrastructure, accelerating cloud adoption and embracing AI-driven innovation while facing increasingly sophisticated cyber threats,” Hong said. “I will work closely with customers, alliances and channel partners to help Korean enterprises take their data resilience to the next level.”

Under Hong’s leadership, Veeam Software plans to strengthen its go-to-market through partner and alliance channels to expand customer reach and solution adoption. At the same time, it will accelerate adoption of SaaS solutions including the Veeam Data Platform and Veeam Data Cloud for Microsoft 365 and Azure.

Agentic browsing: A real change with a big impact

13 January 2026 at 05:15

Three weeks ago, a financial director at my company showed me the morning routine he had been repeating day after day: transferring data from our ERP to the cloud reporting platform. Every day, he spends an average of fifteen minutes copying, pasting and checking the format. That adds up to a lot of time wasted on a menial task, not to mention the risk of manual operations, which I think we are all familiar with.

When I quickly showed him an example of how a navigation agent could execute the same sequence in two minutes, his expression went from amazement to concern: “What if it makes a mistake that I don’t detect until the end of the quarter?”

AI agents promise to eliminate the friction between intention and digital execution. But in doing so, they introduce a new entity into our infrastructure: autonomous, opaque and capable of acting with our credentials. The question is not whether we will adopt this technology (IDC projects that by 2028, more than 1.3 billion agents will automate business flows that are currently performed by humans), but whether we are prepared to govern it before the market forces us to do so under pressure.

ROI lies in resilience, not efficiency

I hear the prevailing discourse that AI agents should focus solely on saving time and reducing operating costs. I believe this narrative misses the true strategic value.

Sustainable ROI does not lie in doing what we already do faster. It lies in protecting revenue by mitigating systemic risk. According to New Relic’s 2025 Observability Forecast, the average cost of a high-impact IT outage is $2 million per hour. Organizations with full-stack observability in place cut that cost in half. A continuous monitoring agent detects problems that humans would never see until it’s too late, because it operates on a temporal and dimensional scale inaccessible to human cognition.

This distinction separates incremental automation (which improves margins) from systemic resilience (which protects revenue). CIOs who deploy agents seeking the first goal will find modest, short-term ROI. Those who build for the second will find lasting competitive advantage.

The contradiction that must now be resolved

Not all use cases justify web browsing. The correct architectural choice depends on the target system. Web browsing is appropriate for systems that only offer a web interface, third-party SaaS without infrastructure control, decisions based on visual layout and manual cross-application workflows. Direct integration is superior for internal systems with documented APIs, structured backend data movement, latency-critical scenarios and infrastructure observability (logs/metrics/traces).

An observability agent validating microservices does not need a browser; it needs direct access to telemetry. An agent automating data entry in a legacy ERP that only offers a web interface does. This architectural clarity must be established before any purchasing decision or project initiative.

Terminology confusion that paralyzes decisions

The current market for “AI agents” suffers from marketing practices that systematically confuse terminology. In June 2025, Gartner projected that more than 40% of agentic AI projects will be canceled before the end of 2027. The causes: escalating costs without clear ROI, underestimated integration complexity and inadequate risk controls.

The root cause goes back further: the vast majority of what is sold as an “agent” is not. According to Gartner’s analysis at the end of 2024, of thousands of vendors claiming agentic capabilities, approximately 130 meet the technical criteria for genuine agents when evaluated against specific benchmarks for autonomy, adaptability and traceability. The rest practice “agent washing”: rebranding chatbots, RPA tools or automation flows without real autonomous planning capabilities.

Criteria to validate agentic AI in minutes

A genuine AI agent has five non-negotiable characteristics:

  1. Autonomous planning: it builds its own sequence of actions to achieve a goal. It does not follow a predefined decision tree.
  2. Tactical adaptability: it adjusts in real time to interruptions (pop-ups, captchas, interface changes) without stopping or requiring manual restart.
  3. Access to environment tools: it operates a virtual browser, terminal or command line like a human.
  4. Persistent memory: it maintains context across multiple sessions, learning from previous interactions.
  5. Auditable traceability: it provides a detailed step-by-step record of its reasoning and actions taken.

If a vendor cannot demonstrate these five capabilities working together during a demo of, say, 15 minutes with non-predefined tasks, it does not offer true agentic AI.

Why the browser solves the integration problem

Agentic browsers are attracting strategic investment from all the big tech companies, including Google with Project Mariner (publicly demoed in December 2024), Microsoft with Copilot Vision, Anthropic with Computer Use and Perplexity with Comet, because they solve the fundamental problem of business integration.

Integrating AI with enterprise systems using APIs or custom connectors is complex, costly and fragile, even with MCP. The agentic browser circumvents this with a simple principle: if a human can access a system via a web interface and log in, so can the agent. It requires no public API, special vendor permissions or custom code.

This approach offers three critical advantages for organizations with heterogeneous infrastructure:

  • Direct access to authenticated content: emails, internal documents and pages that require a logged-in session.
  • Multidimensional context without configuration: open tabs, browsing history, partially completed forms.
  • Dramatic reduction in “technical plumbing”: eliminates months of integration work to orchestrate multiple legacy systems.
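
To make the principle concrete, here is a minimal sketch of the CFO’s ERP-to-reporting transfer as a scripted browser task, using the open-source Playwright library. Every URL and selector is an invented placeholder; a real navigation agent would decide these steps itself rather than follow a fixed script.

```python
# pip install playwright && playwright install
# All URLs, selectors and credentials below are illustrative placeholders only.
from playwright.sync_api import sync_playwright

def transfer_report(erp_url: str, reporting_url: str, user: str, password: str) -> str:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()

        # Log in exactly as a human would -- no API or connector required.
        page.goto(erp_url)
        page.fill("#username", user)
        page.fill("#password", password)
        page.click("button[type=submit]")

        # Read the figure the CFO used to copy by hand.
        daily_total = page.inner_text("#daily-total")

        # Paste it into the cloud reporting platform's web form.
        page.goto(reporting_url)
        page.fill("#revenue-input", daily_total)
        page.click("#save")

        browser.close()
        return daily_total
```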

However, this architectural advantage introduces a new risk vector that must be managed with rigor comparable to that applied to employees with privileged access.

Risks that define the scope of responsible implementation

The autonomy of agents with access to authenticated content introduces operational risk that must be proactively managed. According to New Relic, the average annual exposure for high-impact disruptions can reach $76 million.

Operational risk matrix with specific controls

Methodology: Probabilities reflect operational experience from early adopters in 2024-2025. High: >30% of implementations experience the event in the first 6 months without controls. Medium: 10-30%. Low: <10%. Implementing controls significantly reduces these probabilities.

Risk | Probability | Impact | Technical control
Tactical error in execution | High (initial) | Operational | Controlled environments (Windows 365 for Agents) with human-in-the-loop for critical decisions
Accidental leak of PII | Medium | Legal (GDPR) | Unique identity per agent (Microsoft Entra Agent ID) with granular access policies and complete logging
Wrong decision due to poor data | Medium | Financial | Data observability, validation of pre-decision inputs, automatic flagging of anomalies
Unintended privilege escalation | Low | Security | Least privilege, periodic review of permissions, execution sandboxing

The regulatory imperative that separates leaders from followers

August 2, 2025, marked a critical date for organizations operating in the European Union or processing European citizens’ data. On that date, specific obligations of the EU AI Act for general-purpose model providers (GPAIs), related to copyright transparency and opt-out mechanisms, became enforceable under Article 53.

Agentic browsers that rely on scraping web sources for training or operation must have data pipelines that respect opt-outs and can demonstrate compliance. Organizations that build a legally clean data infrastructure now will have a major competitive advantage over those waiting for the first non-compliance notification. The fines are substantial: up to €15 million or 3% of global annual turnover, with fines of up to €35 million or 7% for prohibited practices.
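
What a pipeline that “respects opt-outs” can look like at its simplest: consult the publisher’s robots.txt before fetching anything, and fail closed when the signal cannot be read. This standard-library sketch is illustrative; a compliant pipeline would also honor other opt-out signals and log each decision to support the demonstration of compliance the Act requires.

```python
from urllib.robotparser import RobotFileParser
from urllib.parse import urlparse

def may_scrape(url: str, user_agent: str = "ExampleTrainingBot") -> bool:
    """Honor robots.txt before fetching a page for training or agent use."""
    root = urlparse(url)
    robots = RobotFileParser()
    robots.set_url(f"{root.scheme}://{root.netloc}/robots.txt")
    try:
        robots.read()
    except OSError:
        return False  # if the opt-out signal is unreadable, fail closed
    return robots.can_fetch(user_agent, url)

# Usage: only fetch when the publisher has not opted out.
# if may_scrape("https://example.com/article"): fetch(...)
```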

Beyond compliance: Organizations that establish agent governance standards now, before regulatory mandates, will be positioned to influence the evolution of industry standards, a significant strategic asset.

The cultural change that no technology can automate

I return to the CFO’s initial question: “What if it makes a mistake that I don’t detect?”

The correct answer is not “they won’t make mistakes,” because they will. The correct answer is: “We design systems where agent errors are detectable before they cause irreparable damage, containable when they occur and recoverable through rollback.” With agents, we double-check by design.
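
A minimal sketch of that design in code: every agent action is preceded by a snapshot (recoverable), followed by validation (detectable), and rolled back with an alert when validation fails (containable). The four callbacks are illustrative hooks, not any vendor’s API.

```python
import copy

class GuardedAgentRun:
    """Wrap an agent action so errors are detectable, containable and recoverable."""

    def __init__(self, validate, alert, snapshot, restore):
        self.validate = validate   # returns True if the result looks sane
        self.alert = alert         # notifies a human about a deviation
        self.snapshot = snapshot   # captures state before the action runs
        self.restore = restore     # one-click rollback to that state

    def execute(self, action, state):
        checkpoint = self.snapshot(copy.deepcopy(state))  # recoverable
        result = action(state)
        if not self.validate(result):                     # detectable
            self.alert(f"Agent result failed validation: {result!r}")
            self.restore(checkpoint)                      # containable
            return None
        return result
```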

This requires a cultural change that no technology purchase can automate and that will determine which organizations capture sustainable value from this transformation.

  • The evolution of the professional role: the value of professionals no longer lies primarily in the transactional execution of copying, pasting and verifying, but in the orchestration of AI-augmented systems, the supervision of patterns and exceptions, and strategic decisions that require business, political and human context that cannot be encoded in models. This transition is structurally similar to the impact of industrial automation: human value does not disappear; it shifts to higher levels of abstraction and judgment.
  • The redefinition of supervision: Human supervision moves from the “inner loop” (manually supervising every action of the agent in real time) to the “outer loop” (supervising aggregate patterns, exceptions automatically flagged by observability systems and post-execution results). This change frees up cognitive capacity for higher-value work while maintaining accountability. But it requires new skills: interpreting agent behavior dashboards, calibrating confidence thresholds and designing effective escalation points.
  • The change management challenge: Organizations that treat agent adoption as a technical project will fail. Those that treat it as organizational transformation, investing in role redefinition, development of new oversight competencies and recalibration of performance metrics will build lasting capacity.

The question for every leader is: Is your organization investing as much in cultural readiness as in technical infrastructure?

The leadership decision that will define the next decade

AI agents are not the future; they are the present for organizations that decide to act while others remain inactive. The question is not whether your organization will adopt agents. It is whether you will adopt them as a leader that sets governance standards or as a late follower that accepts standards set by competitors.

For a manager, the imperative is clear: disciplined experimentation now, with limited use cases and robust governance, builds the organizational capacity that will be indispensable when adoption is no longer optional.

Not because the technology is perfect — it isn’t, and it won’t be in the immediate future.

It is because the pace of improvement is measurable and sustained, and organizations that build operational capacity now through disciplined experimentation will be positioned to capture value as the technology matures. Those who wait for absolute certainty will face the double disadvantage of competing against organizations with years of accumulated learning advantage and adopting under competitive pressure without time to develop internal expertise.

The CFO in our opening story implemented the agent. But only after we designed together the controls that allow him to sleep soundly: automatic validation, alerts for deviations and one-click rollback. His question was not about resistance to change. It was a demand for technical professionalism.

That demand must be our standard.

This article is published as part of the Foundry Expert Contributor Network.

7 challenges IT leaders will face in 2026

12 January 2026 at 05:01

Today’s CIOs face increasing expectations on multiple fronts: They’re driving operational and business strategy while simultaneously leading AI initiatives and balancing related compliance and governance concerns.

Additionally, Ranjit Rajan, vice president and head of research at IDC, says CIOs will be called to justify previous investment in automation while managing related costs.

“CIOs will be tasked with creating enterprise AI value playbooks, featuring expanded ROI models to define, measure, and showcase impact across efficiency, growth, and innovation,” Rajan says.

Meanwhile, tech leaders who spent the past decade or more focused on digital transformation are now driving cultural change within their organizations. CIOs emphasize that transformation in 2026 requires a focus on people as well as technology.

Here’s how CIOs say they’re preparing to address and overcome these and other challenges in 2026.

Talent gap and training

The challenge CIOs cite most often is a consistent and widening shortage of tech talent. Because it’s impossible to meet their objectives without the right people to execute them, tech leaders are training internally as well as exploring non-traditional paths for new hires.

In CIO’s 2025 State of the CIO survey, more than half the respondents said staffing and skills shortages “took time away from more strategic and innovation pursuits.” Tech leaders expect that trend to continue in 2026.

“As we look at our talent roadmap from an IT perspective, we feel like AI, cloud, and cybersecurity are the three areas that are going to be extremely pivotal to our organizational strategy,” says Josh Hamit, CIO of Altra Federal Credit Union.

Hamit said the company will address the need by bringing in specialized talent, where necessary, and helping existing staff expand their skillsets. “As an example, traditional cybersecurity professionals will need upskilling to properly assess the risks of AI and understand the different attack vectors,” he says.

Pegasystems CIO David Vidoni has had success identifying staff with a mix of technology and business skills and then pairing them with AI experts who can mentor them.

“We’ve found that business-savvy technologists with creative mindsets are best positioned to effectively apply AI to business situations with the right guidance,” Vidoni says. “After a few projects, new people can quickly become self-sufficient and make a greater impact on the organization.”

Daryl Clark, CTO of Washington Trust, says the financial services company has moved away from degree requirements and focused on demonstrated competencies. He said they’ve had luck partnering with Year Up United, a nonprofit that offers job training for young people.

“We currently have seven full-time employees in our IT department who started with us as Year Up United interns,” Clark says. “One of them is now an assistant vice president of information assurance. It’s a proven pathway for early career talent to enter technology roles, gain mentorship, and grow into future high-impact contributors.”

Coordinated AI integration

CIOs say in 2026 AI must move from experimentation and pilot projects to a unified approach that shows measurable results. Specifically, tech leaders say a comprehensive AI plan should integrate data, workflows, and governance rather than relying on scattered initiatives that are more likely to fail.

By 2026, 40% of organizations will miss AI goals, IDC’s Rajan claims. Why? “Implementation complexity, fragmented tools, and poor lifecycle integration,” he says, which is prompting CIOs to increase investment in unified platforms and workflows.

“We simply cannot afford more AI investments that operate in the dark,” says Flexera CIO Conal Gallagher. “Success with AI today depends on discipline, transparency, and the ability to connect every dollar spent to a business result.”

Trevor Schulze, CIO of Genesys, argues AI pilot programs weren’t wasted — as long as they provide lessons that can be applied going forward to drive business value.

“Those early efforts gave CIOs critical insight into what it takes to build the right foundations for the next phase of AI maturity. The organizations that rapidly apply those lessons will be best positioned to capture real ROI.”

Governance for rapidly expanding AI efforts

IDC’s Rajan says that by the end of the decade organizations will see lawsuits, fines, and CIO dismissals due to disruptions from inadequate AI controls. As a result, CIOs say, governance has become an urgent concern — not an afterthought.

“The biggest challenge I’m preparing for in 2026 is scaling AI enterprise-wide without losing control,” says Barracuda CIO Siroui Mushegian. “AI requests flood in from every department. Without proper governance, organizations risk conflicting data pipelines, inconsistent architectures, and compliance gaps that undermine the entire tech stack.”

To stay on top of the requests, Mushegian created an AI council that prioritizes projects, determines business value, and ensures compliance.

“The key is building governance that encourages experimentation rather than bottlenecking it,” she says. “CIOs need frameworks that give visibility and control as they scale, especially in industries like finance and healthcare where regulatory pressures are intensifying.”

Morgan Watts, vice president of IT and business systems at cloud-based VoIP company 8×8, says AI-generated code has accelerated productivity and freed up IT teams for other important tasks such as improving user experience. But those gains come with risks.

“Leading IT organizations are adapting existing guardrails around model usage, code review, security validation, and data integrity,” Watts says. “Scaling AI without governance invites cost overruns, trust issues, and technical debt, so embedding safeguards from the beginning is essential.”

Aligning people and culture

CIOs say one of their top challenges is aligning their organization’s people and culture with the rapid pace of change. Technology, always fast-moving, is now outpacing teams’ ability to keep up. AI in particular requires staff who work responsibly and securely.

Maria Cardow, CIO of cybersecurity company LevelBlue, says organizations often mistakenly believe technology can solve anything if they just choose the right tool. This leads to a lack of attention and investment in people.

“The key is building resilient systems and resilient people,” she says. “That means investing in continuous learning, integrating security early in every project, and fostering a culture that encourages diverse thinking.”

Rishi Kaushal, CIO of digital identity and data protection services company Entrust, says he’s preparing for 2026 with a focus on cultural readiness, continuous learning, and preparing people and the tech stack for rapid AI-driven changes.

“The CIO role has moved beyond managing applications and infrastructure,” Kaushal says. “It’s now about shaping the future. As AI reshapes enterprise ecosystems, accelerating adoption without alignment risks technical debt, skills gaps, and greater cyber vulnerabilities. Ultimately, the true measure of a modern CIO isn’t how quickly we deploy new applications or AI — it’s how effectively we prepare our people and businesses for what’s next.”

Balancing cost and agility

CIOs say 2026 will see an end to unchecked spending on AI projects, with cost discipline going hand in hand with strategy and innovation.

“We’re focusing on practical applications of AI that augment our workforce and streamline operations,” says Pegasystems’ Vidoni. “Every technology investment must be aligned with business goals and financial discipline.”

When modernizing applications, Vidoni argues that teams need to stay outcome-focused, phasing in improvements that directly support their goals.

“This means application modernization and cloud cost-optimization initiatives are required to stay competitive and relevant,” he says. “The challenge is to modernize and become more agile without letting costs spiral. By empowering an organization to develop applications faster and more efficiently, we can accelerate modernization efforts, respond more quickly to the pace of tech change, and maintain control over cloud expenditures.”

Tech leaders also face challenges in driving efficiency through AI while vendors are increasing prices to cover their own investments in the technology, says Mark Troller, CIO of Tangoe.

“Balancing these competing expectations — to deliver more AI-driven value, absorb rising costs, and protect customer data — will be a defining challenge for CIOs in the year ahead,” Troller says. “Complicating matters further, many of my peers in our customer base are embracing AI internally but are understandably drawing the line that their data cannot be used in training models or automation to enhance third-party services and applications they use.”

Cybersecurity

Marc Rubbinaccio, vice president of information security at Secureframe, expects a dramatic shift in the sophistication of security attacks that looks nothing like current phishing attempts.

“In 2026, we’ll see AI-powered social engineering attacks that are indistinguishable from legitimate communications,” Rubbinaccio says. “With social engineering linked to almost every successful cyberattack, threat actors are already using AI to clone voices, copy writing styles, and generate deepfake videos of executives.”

Rubbinaccio says these attacks will require adaptive, behavior-based detection and identity verification along with simulations tailored to AI-driven threats.

In the most recent State of the CIO survey, about a third of respondents said they anticipated difficulty in finding cybersecurity talent who can address modern attacks.

“We feel it’s extremely important for our team to look at training and certifications that drill down into these areas,” says Altra’s Hamit. He suggests certifications such as ISACA Advanced in AI Security Management (AAISM) and the upcoming ISACA Advanced in AI Risk (AAIR).

Managing workload and rising demands on CIOs

Pegasystems’ Vidoni says it’s an exciting time as AI prompts CIOs to solve problems in new ways. The role requires blending strategy, business savvy, and day-to-day operations. At the same time, the pace of transformation can lead to increased workload and stress.

“My approach is simple: Focus on the highest-priority initiatives that will drive better outcomes through automation, scale, and end-user experience. By automating manual, repetitive tasks, we free up our teams to focus on higher-value, more engaging work,” he says. “Ultimately, the CIO of 2026 must be a business leader first and a technologist second. The challenge is leading organizations through a cultural and operational shift — using AI not just for efficiency, but to build a more agile, intelligent, and human-centric enterprise.”

The 37-point trust gap: It’s not the AI, it’s your organization

9 January 2026 at 09:23

I’ve been in the tech industry for over three decades, and if there’s one thing I’ve learned, it’s that the tech world loves a good mystery. And right now, we’ve got a fascinating one on our hands.

This year, two of the most respected surveys in our field asked developers a simple question: Do you trust the output from AI tools? The results couldn’t be more different!

  • The 2025 DORA report, a study with nearly 5,000 tech professionals that historically skews enterprise, found that a full 70% of respondents express some degree of confidence in the quality of AI-generated output.
  • Meanwhile, the 2025 Stack Overflow Developer Survey, with its own massive developer audience, found that only 33% of developers are “Somewhat” or “Highly” trusting of AI tools.

That’s a 37-point gap.

Think about that for a second. We’re talking about two surveys conducted during the same year, of the same profession and examining largely the same underlying AI models from providers like OpenAI, Anthropic and Google. How can two developer surveys report such fundamentally different realities?

DORA: AI is an amplifier

The mystery of the 37-point trust gap isn’t about the AI. It’s about the operational environment surrounding the AI (more on that in the next section). As the DORA report notes in its executive summary, the main takeaway is: AI is an amplifier. Put bluntly, “the central question for technology leaders is no longer if they should adopt AI, but how to realize its value.”

DORA didn’t just measure AI adoption. They measured the organizational capabilities that determine whether AI helps or destroys your team’s velocity. And they found seven specific capabilities that separate the 70% confidence group in their survey from the 33% in the Stack Overflow results.

Let me walk you through them, because this is where we’ll get practical.

The 7 pillars of a high-trust AI environment

So, what does a good foundation look like? The DORA research team didn’t just identify the problem; they gave us a blueprint. They identified seven foundational “capabilities” that turn AI from a novelty into a force multiplier. When I read this list, I just nodded my head. It’s the stuff great engineering organizations have been working on for years.

Here are the keys to the kingdom, straight from the DORA AI Capabilities Model:

  1. A clear and communicated AI stance: Do your developers know the rules of the road? Or are they driving blind, worried they’ll get in trouble for using a tool or, worse, feeding it confidential data? When the rules are clear, friction goes down and effectiveness skyrockets.
  2. Healthy data ecosystems: AI is only as good as the data it learns from. Organizations that treat their data as a strategic asset—investing in its quality, accessibility and unification—see a massive amplification of AI’s benefits on organizational performance.
  3. AI-accessible internal data: Generic AI is useful. AI that understands your codebase, your documentation and your internal APIs is a game-changer. Connecting AI to your internal context is the difference between a helpful co-pilot and a true navigator.
  4. Strong version control practices: In an age of AI-accelerated code generation, your version control system is your most critical safety net. Teams that are masters of commits and rollbacks can experiment with confidence, knowing they can easily recover if something goes wrong. This is what enables speed without sacrificing sanity.
  5. Working in small batches: AI can generate a lot of code, fast. But bigger changes are harder to review and riskier to deploy. Disciplined teams that work in small, manageable chunks see better product performance and less friction, even if it feels like they’re pumping the brakes on individual code output.
  6. A user-centric focus: This one is a showstopper. The DORA report found that without a clear focus on the user, AI adoption can actually harm team performance. Why? Because you’re just getting faster at building the wrong thing. When teams are aligned on creating user value, AI becomes a powerful tool for achieving that shared goal.
  7. Quality internal platforms: A great platform is the paved road that lets developers drive the AI racecar. A bad one is a dirt track full of potholes. The data is unequivocal: a high-quality platform is the essential foundation for unlocking AI’s value at an organizational level.

What this means for you

This isn’t just an academic exercise. The 37-point DORA-Stack Overflow gap has real implications for how we work.

  • For developers: If you’re frustrated with AI, don’t just blame the tool. Look at the system around you. Are you being set up for success? This isn’t about your prompt engineering skills; it’s about whether you have the organizational support to use these tools effectively.
  • For engineering leaders: Your job isn’t to just buy AI licenses. It’s to build the ecosystem where those licenses create value. That DORA list of seven capabilities? That’s your new checklist. Your biggest ROI isn’t in the next AI model; it’s in fixing your internal platform, clarifying your data strategy and socializing your AI policy.
  • For CIOs: The DORA report states it plainly: successful AI adoption is a systems problem, not a tools problem. Pouring money into AI without investing in the foundational capabilities that amplify its benefits is a recipe for disappointment.

So, the next time you hear a debate about whether AI is “good” or “bad” for developers, remember the gap between these two surveys. The answer is both, and the difference has very little to do with the AI itself.

AI without a modern engineering culture and solid infrastructure is just expensive frustration. But AI with that foundation? That’s the future.

This article is published as part of the Foundry Expert Contributor Network.

Ensuring the long-term reliability and accuracy of AI systems: Moving past AI drift

9 January 2026 at 07:23

For years, model drift was a manageable challenge. Model drift refers to the phenomenon in which a trained AI program’s performance degrades over time. One way to picture this is to think about a car. Even the best car experiences wear and tear once it is out in the open world, leading to below-par performance and more “noise” as it runs. It requires routine servicing like oil changes, tire balancing, cleaning and periodic tuning.

AI models follow the same pattern. These programs range from simple machine learning models to more advanced neural networks. When the “open world” shifts, whether through changes in consumer behavior, market trends, spending patterns or any other macro- and micro-level triggers, model drift starts to appear.

In the pre-GenAI scheme of things, models could be refreshed with new data and put back on track. Retrain, recalibrate, redeploy and the AI program was ready to perform again. GenAI has changed that equation. Drift is no longer subtle or hidden in accuracy reports; it is out in the open, where systems can misinform customers, expose companies to legal challenges and erode trust in real time.
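
In that pre-GenAI world, the retrain trigger was typically a drift statistic computed over live data. A common one is the Population Stability Index (PSI), sketched here with NumPy; the 0.1/0.25 thresholds in the comment are conventional rules of thumb, not universal constants.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between training-time and live values."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log(0) on empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Rule-of-thumb interpretation: < 0.1 stable, 0.1-0.25 watch, > 0.25 retrain.
rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)  # distribution at training time
live_scores = rng.normal(0.4, 1.2, 10_000)   # the "open world" has shifted
print(f"PSI = {psi(train_scores, live_scores):.3f}")
```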

McKinsey reports that while 91% of organizations are exploring GenAI, only a fraction feel ready to deploy it responsibly. The gap between enthusiasm and readiness is exactly where drift grows, moving the challenge from the backroom of data science to the boardroom of reputation, regulation and trust.  

Still, some are showing what readiness looks like. A global life sciences company used GenAI to resolve a nagging bottleneck: Stock Keeping Unit (SKU) matching, which once took hours, now takes seconds. The result was faster research decisions, fewer errors and proof that when deployed with purpose, GenAI can deliver real business value.

This only sharpens the point: progress is possible and it can ensure the long-term reliability and accuracy of AI systems, but not without real-time governance.

Why governance must be real-time

GenAI drift is messier. When a generative model drifts, it hallucinates, fabricates or misleads. That’s why governance needs to move from periodic check-ins to real-time vigilance. The NIST AI Risk Management Framework offers a strong foundation, but a checklist alone won’t be enough. Enterprises need coverage across two critical aspects:

  1. The first is ensuring that enterprise data is ready for AI. Data is typically fragmented across scores of systems, and that incoherence, along with weak data quality and governance, leads models to drift.
  2. The other is what I call “living governance”: councils with the authority to stop unsafe deployments, adjust validators and bring humans back into the loop when confidence slips, or better yet, to ensure that confidence never slips.

This is where guardrails matter. They’re not just filters but validation checkpoints that shape how models behave. They range from simple rule-based filters to ML-based detectors for bias or toxicity to advanced LLM-driven validators for fact-checking and coherence. Layered together with humans in the loop, they create a defense-in-depth strategy.
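
A minimal sketch of that layering: the cheap deterministic filter runs first, an ML detector next, an LLM validator last, and any rejection escalates to a human. The `toxicity_model`, `llm_validate` and `escalate` callables are hypothetical stand-ins for whatever detectors an enterprise actually deploys.

```python
def rule_filter(text: str) -> bool:
    """Layer 1: cheap deterministic rules (blocked terms, PII patterns)."""
    blocked = ("ssn:", "credit card", "internal only")
    return not any(term in text.lower() for term in blocked)

def layered_guardrails(draft: str, toxicity_model, llm_validate, escalate):
    """Run each layer in order of cost; any failure routes the draft to a human."""
    if not rule_filter(draft):
        return escalate(draft, reason="rule filter")
    if toxicity_model(draft) > 0.5:      # Layer 2: ML detector (stand-in)
        return escalate(draft, reason="toxicity score")
    if not llm_validate(draft):          # Layer 3: LLM fact/coherence check (stand-in)
        return escalate(draft, reason="validator rejection")
    return draft  # passed every layer; safe to release
```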

Culture, people and the hidden causes of drift

In many enterprises, drift escalates fastest when ownership is fragmented. The strongest and most successful programs designate a senior leader who carries responsibility, with their credibility and resources tied directly to system performance. That clarity of ownership forces everyone around them to treat drift seriously.

Another, often overlooked, driver of drift is the state of enterprise data. In many organizations, data sits scattered across legacy systems, cloud platforms, departmental stores and third-party tools. This fragmentation creates inconsistent inputs that weaken even well-designed models. When data quality, lineage, or governance is unreliable, models don’t drift subtly; they diverge quickly because they are learning from incomplete or incoherent signals. Strengthening data readiness through unified pipelines, governed datasets and consistent metadata becomes one of the most effective ways to reduce drift before it reaches production.

AI amplifies whoever uses it: a disciplined developer becomes more effective, while a careless one generates more errors. But individual gains are not enough; without coherence across the team, overall productivity stalls. Success comes when every member adapts in step, aligned in purpose and practice. That is why reskilling is not a luxury.

Culture now extends beyond individuals. In many enterprises, AI agents are beginning to interact directly with one another, both agent-to-agent and human-to-agent. That’s a new collaboration loop, one that demands new norms and maturity. If the culture isn’t ready, drift doesn’t creep in through the algorithm; it enters through the people and processes surrounding it.

Lessons from the field

If you want to see AI drift in action, just scan recent headlines. Fraudsters are already using AI cloning to generate convincing impostors, tricking people into sharing information or authorizing transactions.

But there are positive examples too. In financial services, for instance, some organizations have begun deploying layered guardrails, personal data detection, topic restriction and pattern-based filters that act like brakes before the output ever reaches the client. One bank I worked with moved from occasional audits to continuous validation. The result wasn’t perfection, but containment. Drift still appeared, as it always does, but it was caught upstream, long before it could damage customer trust or regulatory standing.

Why proactive guardrails matter

Regulators are beginning to align, and the signals are encouraging. The White House Blueprint for an AI Bill of Rights stresses fairness, transparency and human oversight. NIST has published risk frameworks. Agencies like the SEC and the FDA are drafting sector-specific guidance.

Regulatory efforts are progressing, but they inevitably move more slowly than the pace of technology. In the meantime, adversaries are already exploiting the gaps with prompt injections, model poisoning and deepfake phishing. As one colleague told me bluntly, “The bad guys adapt faster than the good guys.” He was right and that asymmetry makes drift not just a technical problem, but a national one.

That’s why forward-thinking enterprises aren’t just meeting regulatory mandates, they are proactively going beyond them to safeguard against emerging risks. They’re embedding continuous evaluation, streaming validation and enterprise-grade protections like LLM firewalls now. Retrieval-augmented generation systems that seem fine in testing can fail spectacularly as base models evolve. Without real-time monitoring and layered guardrails, drift leaks through until customers or regulators notice, usually too late.

The leadership imperative

So, where does this leave leaders? With an uncomfortable truth: AI drift will happen. The test of leadership is whether you’re prepared when it does.

Preparation doesn’t look flashy. It’s not a keynote demo or a glossy slide. It’s continuous monitoring and treating guardrails not as compliance paperwork but as the backbone of reliable AI.

And it’s balanced. Innovation can’t mean moving fast and breaking things in regulated industries. Governance can’t mean paralysis. The organizations that succeed will be the ones that treat reliability as a discipline, not a one-time project.

AI drift isn’t a bug to be patched; it’s the cost of doing business with systems that learn, adapt and sometimes misfire. Enterprises that plan for that cost, with governance, culture and guardrails, won’t just avoid the headlines. They’ll earn the trust to lead.

AI drift forces us to rethink what resilience really means in the enterprise. It’s no longer about protecting against rare failure; it’s about operating in a world where failure is constant, visible and amplified. In that world, resilience is measured not by how rarely systems falter, but by how quickly leaders recognize the drift, contain it and adapt. That shift in mindset separates organizations that merely experiment with GenAI from those that will scale it with confidence.

My view is straightforward: treat drift as a given, not a surprise. Build governance that adapts in real time. Demand clarity on why your teams are using GenAI and what business outcomes justify it. Insist on accountability at the leadership level, not just within technical teams. And most importantly, invest in culture because the biggest source of drift is not always the algorithm but the people and processes around it.

This article is published as part of the Foundry Expert Contributor Network.

Beyond the cloud bill: The hidden operational costs of AI governance

7 January 2026 at 13:20

In my work helping large enterprises deploy AI, I keep seeing the same story play out. A brilliant data science team builds a breakthrough model. The business gets excited but then the project hits a wall; a wall built of fear and confusion that lives at the intersection of cost and risk. Leadership asks two questions that nobody seems equipped to answer at once: “How much will this cost to run safely?” and “How much risk are we taking on?”

The problem is that the people responsible for cost and the people responsible for risk operate in different worlds. The FinOps team, reporting to the CFO, is obsessed with optimizing the cloud bill. The governance, risk and compliance (GRC) team, answering to the chief risk officer, is focused on legal exposure. And the AI and MLOps teams, driven by innovation under the CTO, are caught in the middle.

This organizational structure leads to projects that are either too expensive to run or too risky to deploy. The solution is not better FinOps or stricter governance in isolation; it is the practice of managing AI cost and governance risk as a single, measurable system rather than as competing concerns owned by different departments. I call this “responsible AI FinOps.”

To understand why this system is necessary, we first have to unmask the hidden costs that governance imposes long before a model ever sees a customer.

Phase 1: The pre-deployment costs of governance

The first hidden costs appear during development, in what I call the development rework cost. In regulated industries, a model must not only be accurate; it must be proven to be fair. It is a common scenario: a model clears every technical accuracy benchmark, only to be flagged for noncompliance during the final bias review.

As I detailed in a recent VentureBeat article, this rework is a primary driver of the velocity gap that stalls AI strategies. This forces the team back to square one, leading to weeks or months of rework, resampling data, re-engineering features and retraining the model; all of which burns expensive developer time and delays time-to-market.

Even when a model works perfectly, regulated industries demand a mountain of paperwork. Teams must create detailed records explaining exactly how the model makes decisions and where its data comes from. You won’t see this expense on a cloud invoice, but it is a major cost, measured in the salary hours of your most senior experts.

These aren’t just technical problems; they’re a financial drain caused by the absence of a standard AI governance process.

Phase 2: The recurring operational costs in production

Once a model is deployed, the governance costs become a permanent part of the operational budget.

The explainability overhead

For high-risk decisions, governance mandates that every prediction be explainable. While the libraries used to achieve this (like the popular SHAP and LIME) are open source, they are not free to run. They are computationally intensive. In practice, this means running a second, heavy algorithm alongside your main model for every single transaction. This can easily double the compute resources and latency, creating a significant and recurring governance overhead on every prediction.
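
The overhead is easy to see in code. In this sketch using the open-source shap library, every single prediction triggers a second, heavier computation for its explanation; on real workloads that second call, not the prediction, dominates compute and latency.

```python
# pip install shap scikit-learn
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)

def predict_with_explanation(row):
    """One governance-grade decision: prediction plus per-feature attribution."""
    pred = model.predict(row.reshape(1, -1))[0]              # the cheap part
    attribution = explainer.shap_values(row.reshape(1, -1))  # the recurring overhead
    return pred, attribution

pred, attribution = predict_with_explanation(X[0])
```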

The continuous monitoring burden

Standard MLOps involves monitoring for performance drift (e.g., is the model getting less accurate?). But AI governance adds a second, more complex layer: governance monitoring. This means constantly checking for bias drift (e.g., is the model becoming unfair to a specific group over time?) and explainability drift. This requires a separate, always-on infrastructure that ingests production data, runs statistical tests and stores results, adding a continuous and independent cost stream to the project.
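
Governance monitoring in miniature: recompute a fairness statistic, here the demographic parity gap, over each window of production predictions and alert when it crosses a tolerance. The 0.10 tolerance is an illustrative policy choice, not a regulatory constant.

```python
import numpy as np

def demographic_parity_gap(preds: np.ndarray, group: np.ndarray) -> float:
    """Difference in positive-outcome rates between two groups (0 = even)."""
    return abs(preds[group == 0].mean() - preds[group == 1].mean())

def check_window(preds, group, alert, tolerance=0.10):
    """Run once per monitoring window on production predictions."""
    gap = demographic_parity_gap(np.asarray(preds), np.asarray(group))
    if gap > tolerance:  # bias drift detected
        alert(f"Bias drift: demographic parity gap {gap:.2f} exceeds {tolerance}")
    return gap
```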

The audit and storage bill

To be auditable, you must log everything. In finance, regulations from bodies like FINRA require member firms to adhere to SEC rules for electronic recordkeeping, which can mandate retention for at least six years in a non-erasable format. This means every prediction, input and model version creates a data artifact that incurs a storage cost, a cost that grows every single day for years.
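
Per prediction, “log everything” amounts to a record like the one below: inputs, model version, prediction and explanation, chained by hash so silent edits are detectable. This is a schematic sketch; actual WORM retention would live on compliant storage, not a local file.

```python
import hashlib, json, time

def append_audit_record(path: str, model_version: str, inputs: dict,
                        prediction, explanation, prev_hash: str) -> str:
    """Append one audit record per prediction; returns its hash for chaining."""
    record = {
        "ts": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "prediction": prediction,
        "explanation": explanation,
        "prev_hash": prev_hash,  # hash chain makes silent edits detectable
    }
    line = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256(line.encode()).hexdigest()
    with open(path, "a") as f:
        f.write(json.dumps({"record": record, "hash": digest}) + "\n")
    return digest
```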

Regulated vs. non-regulated difference: Why a social media app and a bank can’t use the same AI playbook

Not all AI is created equal and the failure to distinguish between use cases is a primary source of budget and risk misalignment. The so-called governance taxes I described above are not universally applied because the stakes are vastly different.

Consider a non-regulated use case, like a video recommendation engine on a social media app. If the model recommends a video I don’t like, the consequence is trivial; I simply scroll past it. The cost of a bad prediction is nearly zero. The MLOps team can prioritize speed and engagement metrics, with a relatively light touch on governance.

Now consider a regulated use case I frequently encounter: an AI model used for mortgage underwriting at a bank. A biased model that unfairly denies loans to a protected class doesn’t just create a bad customer experience, it can trigger federal investigations, multimillion-dollar fines under fair lending laws and a PR catastrophe. In this world, explainability, bias monitoring and auditability are not optional; they are non-negotiable costs of doing business. This fundamental difference is why a one-size-fits-all AI platform dictated solely by the MLOps, FinOps or GRC team is doomed to fail.

Responsible AI FinOps: A practical playbook for unifying cost and risk

Bridging the gap between the CFO, CRO and CTO requires a new operating model built on shared language and accountability.

  1. Create a unified language with new metrics. FinOps tracks business metrics like cost per user and technical metrics like cost per inference or cost per API call. Governance tracks risk exposure. A responsible AI FinOps approach fuses these by creating metrics like cost per compliant decision (see the sketch after this list). In my own research, I’ve focused on metrics that quantify not just the cost of retraining a model, but the cost-benefit of that retraining relative to the compliance lift it provides.
  2. Build a cross-functional tiger team. Instead of siloed departments, leading organizations are creating empowered pods that include members from FinOps, GRC and MLOps. This team is jointly responsible for the entire lifecycle of a high-risk AI product; its success is measured on the overall risk-adjusted profitability of the system. This team should not only define cross-functional AI cost governance metrics, but also standards that every engineer, scientist and operations team has to follow for every AI model across the organization.
  3. Invest in a unified platform. The explosive growth of the MLOps market, which Fortune Business Insights projects will reach nearly $20 billion by 2032, is proof that the market is responding to the need for a unified, enterprise-level control plane for AI. The right platform provides a single dashboard where the CTO sees model performance, the CFO sees its associated cloud spend and the CRO sees its real-time compliance status.
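
The fused metric from the first item above, in sketch form: total run cost, inference plus the governance overheads described earlier, divided by only those decisions that clear compliance checks. All cost components and figures are illustrative.

```python
def cost_per_compliant_decision(inference_cost: float, explainability_cost: float,
                                monitoring_cost: float, storage_cost: float,
                                compliant_decisions: int) -> float:
    """Fuse FinOps and GRC views: spend divided by decisions that were usable."""
    total_cost = inference_cost + explainability_cost + monitoring_cost + storage_cost
    if compliant_decisions == 0:
        return float("inf")  # all spend, no deployable output
    return total_cost / compliant_decisions

# Example: $12k/month run cost, 970k of 1M predictions pass compliance checks.
print(cost_per_compliant_decision(6000, 4000, 1500, 500, 970_000))
```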

The organizational challenge

The greatest barrier to realizing the value of AI is no longer purely technical, it is organizational. The companies that win will be those who break down the walls between their finance, risk and technology teams.

They will recognize that A) You cannot optimize cost without understanding risk; B) You cannot manage risk without quantifying its cost; and C) You can achieve neither without a deep engineering understanding of how the model actually works. By embracing a fused responsible AI FinOps discipline, leaders can finally stop the alarms from ringing in separate buildings and start conducting a symphony of innovation that is both profitable and responsible.

This article is published as part of the Foundry Expert Contributor Network.

AI maturity is what happens when curiosity meets control

6 January 2026 at 07:38

When AI first entered the enterprise, every week brought a new tool, a new headline, a new promise of transformation. The excitement was real, but the results were inconsistent. Now, the conversation has matured.

We’ve learned that success isn’t about chasing every use case. The teams I work with aren’t asking “What can AI do?” anymore. They’re asking, “Where does AI make the most impact?”

That mindset shift is changing how enterprises think about AI adoption and innovation. We’ve moved beyond a ChatGPT-for-everything approach toward embedded, specialized tools for everything from code editing to data modeling to workflow coordination.

Balancing discovery with control

In the push to innovate ahead of the curve, how do you balance technology discovery with responsibility and control? If you’ve built a culture that rewards innovation, your talent won’t wait for permission to start trying out the latest and greatest technology releases. That’s a good thing. The key is to harness that curiosity safely, turning experimentation into transformation.

Encourage AI curiosity by pairing it with structure and disciplined investment in the AI tools that work for your organization. Because without guardrails in place, employees will still explore, just without oversight.

Organizations that fail to orchestrate and communicate clear AI governance may see a flood of shadow AI, so-called workslop, and operational chaos in the place of transformation.

The pillars of safe, scalable AI adoption

AI can finally deliver on much of what vendors have promised and yet, according to BCG, 74% of organizations have yet to show tangible value from their AI investments and only 26% are moving beyond proofs of concept. A separate AI survey from Deloitte found that 31% of board directors and executives felt their organizations weren’t ready to deploy AI — at all.

This isn’t too surprising. Enterprises faced similar challenges during the cloud adoption era. But as with any new technology, the key to capitalizing on it lies in empowered people, clear policies and consistent processes.

Here’s what that looks like in practice.

1. The people pillar: Equip employees to experiment

Treat every employee like a scientist handling experiments that could result in burn or breakthrough. At CSG, we hold regular open forums where employees from various departments come together to authentically share AI use cases, best practices and new tool suggestions.

This upward feedback from the people closest to the technology has been invaluable. It fosters cross-functional learning between teams and leadership, inspires passion and helps shape our AI adoption strategy.

For example, one of our developers proposed switching to a new, AI-driven code generation solution that (after appropriate testing) has become an integral part of our enterprise toolkit.

Once curiosity is sparked, it’s critical to create a protected space for exploration to manage shadow AI effectively.

An EY survey revealed that two-thirds of organizations allow citizen developers to build or deploy AI agents independently. Shockingly, only 60% of those organizations have formal policies to ensure their AI agents follow responsible AI principles. This could be a costly oversight. Breaches involving unauthorized AI use cost an average of $4.63 million, nearly 16% more than the global average.

However, banning these practices outright will just drive usage underground. The better approach is enablement — empowering employees with access to secure, enterprise-grade platforms where they can safely test and build.

The other piece to this puzzle is talent upskilling. Curiosity only delivers value when people have the knowledge and confidence to start testing the waters.

For example, to better train CSG talent, we launched an internal AI academy: a self-guided learning journey that allows employees across the organization to realize the benefits of AI in ways that fit their curiosity. The courses cover role-specific AI use, authorized tools and responsible experimentation. We then check utilization reports to help identify adoption gaps, success stories and further training needs.
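
To make that gap analysis concrete, here is a minimal sketch of the kind of check utilization data supports. The record fields and department names are hypothetical, not drawn from any particular learning platform.

```python
from collections import defaultdict

# Hypothetical utilization records exported from a learning platform;
# field names and departments are illustrative only.
records = [
    {"dept": "Engineering", "course": "Role-specific AI use", "completed": True},
    {"dept": "Engineering", "course": "Responsible experimentation", "completed": False},
    {"dept": "Legal", "course": "Authorized tools", "completed": False},
    {"dept": "Legal", "course": "Role-specific AI use", "completed": False},
]

totals, done = defaultdict(int), defaultdict(int)
for r in records:
    totals[r["dept"]] += 1
    done[r["dept"]] += r["completed"]

# Departments below a 50% completion rate are flagged as adoption gaps.
for dept, total in totals.items():
    rate = done[dept] / total
    flag = "  <- adoption gap" if rate < 0.5 else ""
    print(f"{dept}: {rate:.0%} completion{flag}")
```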

2. The policy pillar: Governance as the guardrails

Trust, governance and risk mitigation are the foundation of enterprise AI maturity. In that previously mentioned EY survey, almost all respondents (99%) reported their organizations suffered financial losses from AI-related risks, with the average loss conservatively estimated to be over $4.4 million. However, the same survey indicated that organizations with real-time monitoring and oversight committees are 34% more likely to see improvements in revenue growth and 65% more likely to improve cost savings.

That’s why we established a governance committee. It brought together leaders across legal, compliance, strategy and the CIO and CTO offices to eliminate silos and ensure every AI initiative has clear ownership, policy alignment and oversight from day one.

The committee wasn’t formed to slow down progress. On the contrary, governance rails keep innovation on track and sustainable.

With the initial structure in place, the committee's focus shifts to protection. Enterprises sit on massive volumes of customer data and intellectual property, and launching AI without controls exposes that data to real risk.

If one of your developers uploads IP into ChatGPT or a lawyer pastes contract text into a public model, the consequences could be devastating. To navigate these concerns, we authorized secure, internal access to popular AI tools with built-in notifications that remind users of approved usage.
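
As an illustration of what that kind of guardrail can look like, here is a minimal sketch of a gateway that blocks unapproved tools and surfaces a usage reminder on every request. The allowlist, notice text, and forwarding stub are all hypothetical.

```python
APPROVED_TOOLS = {"internal-chat", "code-assistant"}  # hypothetical allowlist

USAGE_NOTICE = ("Reminder: do not paste customer data, contracts, or source "
                "IP. See the internal AI usage policy before proceeding.")

def forward_to_tool(tool: str, prompt: str) -> str:
    # Stub for the actual call to an internally hosted model endpoint.
    return f"[{tool}] response to: {prompt[:40]}"

def route_prompt(tool: str, prompt: str) -> str:
    """Gate prompts to approved tools and show the reminder each time."""
    if tool not in APPROVED_TOOLS:
        raise PermissionError(f"{tool!r} is not an approved AI tool")
    print(USAGE_NOTICE)
    return forward_to_tool(tool, prompt)

print(route_prompt("code-assistant", "Refactor this function to..."))
```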

Vendor management is another major focus area for us. With so many vendors embedding AI into their products, it’s easy to lose track of what’s actually in use. That’s where our governance committee will step in. We are working to audit every internal tool to identify risks and avoid vendor sprawl and overlap. Doing so will allow us to maintain visibility into and control over how our data is shared — a crucial piece in safeguarding our customers’ trust.
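
A vendor audit like this can start as something very simple. The sketch below cross-references a hypothetical tool inventory against approval status and flags category overlap; every name in it is illustrative.

```python
from collections import Counter

# Hypothetical AI tool inventory; vendors and categories are illustrative.
inventory = [
    {"tool": "VendorA Copilot", "category": "code generation",   "approved": True},
    {"tool": "VendorB Assist",  "category": "code generation",   "approved": False},
    {"tool": "VendorC Notes",   "category": "meeting summaries", "approved": True},
]

# Unapproved tools are shadow-AI risk; duplicate categories are sprawl.
for item in inventory:
    if not item["approved"]:
        print(f"UNAPPROVED: {item['tool']} ({item['category']})")

for category, n in Counter(i["category"] for i in inventory).items():
    if n > 1:
        print(f"OVERLAP: {n} tools in '{category}' - candidates to consolidate")
```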

Finally, governance also needs to extend to how you reinvest the gains. AI creates efficiencies that free up capital, and those newfound resources require a strategy. As we think about strategy across our organization and balance competing demands, it's important that we reinvest those capital savings responsibly and sustainably, whether into new tools, new markets or further innovations that benefit our business and our customers.

3. The process pillar: Avoid the pilot graveyard

In 2025, 42% of businesses reported scrapping most of their AI initiatives (up from just 17% in 2024), according to an S&P Global survey. On average, organizations sunsetted nearly half (46%) of all AI proofs of concept before they reached production.

The truth is, even the most advanced technology will end up in the ubiquitous AI pilot graveyard without clear decision frameworks and proper procurement processes.

I’ve found success starts with knowing where AI is truly necessary. You don’t have to throw a large language model at a problem when simple automation actually delivers faster, cheaper results.

For example, many back-office workflows, like accounting processes with four or five manual steps, would probably benefit from standard automation. Save your sophisticated, agentic solutions for complex, tightly scoped functions that require contextual understanding and dynamic interaction.
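
To illustrate the point, here is a minimal sketch of such a workflow handled with deterministic automation alone. The invoice fields and steps are hypothetical; the escalation path is where an agentic solution might eventually earn its place.

```python
# A hypothetical four-step invoice workflow: validate, match, post, notify.
# Deterministic rules handle the happy path; exceptions escalate to a human.
def validate(invoice: dict) -> dict:
    assert invoice["amount"] > 0 and invoice["vendor"], "invalid invoice"
    return invoice

def match_po(invoice: dict, purchase_orders: dict) -> dict:
    po = purchase_orders.get(invoice["po_number"])
    if po is None or po["amount"] != invoice["amount"]:
        raise ValueError("PO mismatch - escalate to a human reviewer")
    return invoice

def post_to_ledger(invoice: dict, ledger: list) -> None:
    ledger.append({"vendor": invoice["vendor"], "amount": invoice["amount"]})

ledger = []
purchase_orders = {"PO-1": {"amount": 120.0}}
invoice = {"vendor": "Acme", "amount": 120.0, "po_number": "PO-1"}

invoice = match_po(validate(invoice), purchase_orders)
post_to_ledger(invoice, ledger)
print(f"Posted {invoice['amount']} for {invoice['vendor']}")
```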

As you do so, keep in mind that only 44% of consumers are comfortable letting AI take action on their behalf. Part of building trust with customers is making sure they don’t feel “stuck” with chatbots and agentic experiences that feel out of their control and not personalized to their needs.

Once you’ve identified the right use cases, a rigorous and disciplined selection process will ensure you can successfully bring them to life. We use bake-off style RFPs to evaluate vendors head-to-head, define success metrics before deployment and ensure every pilot aligns with measurable business outcomes.
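
One lightweight way to run that kind of bake-off, sketched here with hypothetical criteria, weights, and scores: define the success metrics and their weights before the pilot, then score vendors head-to-head against them.

```python
# Success metrics and weights agreed *before* deployment (weights sum to 1.0).
criteria = {"accuracy": 0.4, "security_posture": 0.3, "cost": 0.2, "support": 0.1}

# 1-5 scores gathered during the head-to-head pilot; all values hypothetical.
vendors = {
    "Vendor A": {"accuracy": 4, "security_posture": 5, "cost": 3, "support": 4},
    "Vendor B": {"accuracy": 5, "security_posture": 3, "cost": 4, "support": 3},
}

for name, scores in sorted(vendors.items()):
    total = sum(weight * scores[c] for c, weight in criteria.items())
    print(f"{name}: weighted score {total:.2f} / 5")
```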

During the selection process, it’s also important to plan for the future. Free tools can be a tempting way to test capabilities, but beware: if they become integral to your workflow, you may put your company at the mercy of pricing shifts or feature changes outside your control.

Finally, scaling success requires alignment and awareness. Once a platform or process proves itself, it needs to be deployed consistently across the organization. That’s how you turn one good pilot into a repeatable process.

Lead with curiosity, scale with control

When it comes to AI maturity in the enterprise, the best organizations move fast but with intention.

Curiosity fuels innovation, but structure sustains it. Without one, you stall. Without the other, you spiral. The future belongs to those that can balance both: building systems where ideas can move both freely and securely.

This article is published as part of the Foundry Expert Contributor Network.

SOX Compliance and Its Importance in Blockchain & Fintech

26 September 2025 at 07:55

Last Updated on October 8, 2025 by Narendra Sahoo

In an era where technology plays a core part in everything, fintech and blockchain have emerged as transformative forces for businesses. They not only reshape the financial landscape but also promise unparalleled transparency, efficiency, and security as the world moves toward digital currency. That makes staying current on SOX compliance in blockchain and fintech more important than ever.

As per the latest statistics from DemandSage, there are around 29,955 fintech startups in the world, of which over 13,100 are based in the United States. This shows how businesses are increasingly embracing technology to innovate and address evolving financial needs. It also highlights the global shift toward digital-first solutions, driven by demand for greater accessibility and efficiency in financial services.

On the other hand, blockchain technology, also known as Distributed Ledger Technology (DLT), is currently valued at approximately USD 8.70 billion in the US and is estimated to grow to an impressive USD 619.28 billion by 2034, according to data from Precedence Research.

However, as this digital revolution continues, businesses embracing these technologies must also prioritize compliance, security, and accountability. This is where SOX (Sarbanes-Oxley) compliance plays an important role. In today's article, we explore why SOX compliance is crucial for the fintech and blockchain industries. So, let's get started!


Understanding SOX compliance

The Sarbanes-Oxley Act (SOX), passed in 2002, aims to enhance corporate accountability and transparency in financial reporting. It applies to all publicly traded companies in the U.S. and mandates strict adherence to internal controls, accurate financial reporting, and executive accountability to prevent corporate fraud.

To read more about SOX, you may check the introductory guide to SOX compliance.

The Intersection of SOX and Emerging Technologies

Blockchain technology and fintech solutions disrupt traditional financial systems by offering decentralized and automated alternatives. While these innovations bring significant benefits, they can also obscure transparency and accountability, two principles that SOX aims to uphold. SOX compliance focuses on accurate financial reporting, strong internal controls, and prevention of fraud, aligning with both the potential and risks of emerging technologies.

Key reasons why SOX compliance matters

1. Ensuring accurate financial reporting

Blockchain technology is often touted for its transparency and immutability. However, errors in smart contracts, incorrect data inputs, or cyberattacks can lead to inaccurate financial records. SOX compliance mandates stringent controls over financial reporting, ensuring that organizations maintain reliable records even when leveraging blockchain.
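
A simple detective control makes this concrete. The sketch below reconciles hypothetical on-chain transfers against an internal ledger and logs every discrepancy for review; transaction IDs and amounts are illustrative.

```python
# Hypothetical exports: on-chain transfers vs. the internal ledger.
on_chain = {"tx1": 500.00, "tx2": 120.50, "tx3": 75.25}
internal = {"tx1": 500.00, "tx2": 125.50}  # tx2 differs, tx3 was never booked

# Every discrepancy is logged for review -- a basic SOX-style detective control.
for tx_id, amount in sorted(on_chain.items()):
    booked = internal.get(tx_id)
    if booked is None:
        print(f"{tx_id}: recorded on chain but missing from the books")
    elif booked != amount:
        print(f"{tx_id}: amount mismatch (chain {amount} vs books {booked})")
```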

2. Mitigating risks in decentralized systems

Fintech platforms and blockchain ecosystems often operate without centralized oversight, making it challenging to identify and address fraud or anomalies. SOX’s requirement for management’s assessment of internal controls and independent audits provides a critical layer of oversight, helping organizations address vulnerabilities in decentralized environments.

3. Building stakeholder trust

The trust of investors, customers, and regulators is paramount for fintech and blockchain companies. Adhering to SOX requirements demonstrates a commitment to transparency and accountability, promoting confidence among stakeholders and distinguishing compliant organizations from their competitors.

4. Addressing regulatory scrutiny

As blockchain and fintech solutions gain adoption, regulatory scrutiny is intensifying. SOX compliance ensures that organizations are prepared to meet these demands by maintaining rigorous financial practices and demonstrating accountability in their operations.

5. Adapting to hybrid financial models

Many organizations are integrating traditional financial systems with blockchain-based solutions. This hybrid approach can create gaps in controls and reporting mechanisms. Leveraging blockchain in compliance with SOX helps bridge these gaps by enforcing comprehensive internal controls that adapt to both traditional and innovative systems.

6. Promoting operational efficiency

By enforcing stringent controls and systematic processes, SOX compliance encourages better business practices and operational efficiency. This results in more accurate financial reporting, reduced manual interventions, and streamlined processes, which ultimately support better decision-making and resource allocation.

7. Future-proofing against emerging technologies

Blockchain and fintech are continuously evolving, and organizations must adapt to new technologies. SOX compliance offers a flexible framework that can scale and evolve with these changes, ensuring that financial reporting and internal controls remain relevant and effective in the face of new technological challenges and opportunities.

Tips to get SOX compliant for fintech and blockchain companies


1. Understand SOX Requirements

  • Familiarize yourself with the key SOX sections, especially Section 302 (corporate responsibility for financial reports) and Section 404 (internal control over financial reporting).
  • Identify the specific areas that apply to your company’s financial reporting, internal controls, and auditing processes.

2. Form a Compliance Team

  • Assemble an internal team including executives, compliance officers, and IT staff.
  • Consider hiring external experts like auditors to guide the process.

3. Assess Current Financial Processes

  • Review existing financial systems, processes, and internal controls to identify gaps.
  • Document and ensure that these processes are auditable and compliant with SOX.

4. Implement Financial Reporting Systems

  • Automate financial reporting to ensure timely, accurate results.
  • Regularly conduct internal audits to confirm financial controls are working effectively.

5. Strengthen Data Security

  • Implement strong encryption, multi-factor authentication, and role-based access control (RBAC) to secure financial data (a minimal RBAC sketch follows this list).
  • Ensure regular backups and disaster recovery plans are in place.
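
As a minimal illustration of the RBAC item above, the sketch below grants actions only when a role explicitly allows them; roles and permissions are hypothetical.

```python
# Minimal role-based access control over financial data (illustrative roles).
ROLE_PERMISSIONS = {
    "auditor":    {"read_reports"},
    "accountant": {"read_reports", "post_entries"},
    "admin":      {"read_reports", "post_entries", "manage_users"},
}

def check_access(role: str, action: str) -> bool:
    """Allow an action only if the role explicitly grants it (deny by default)."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert check_access("accountant", "post_entries")
assert not check_access("auditor", "post_entries")  # least privilege holds
```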

6. Create and Document Policies

  • Develop formal policies for internal controls, financial reporting, and data handling.
  • Train employees on SOX compliance and ensure clear communication about financial responsibilities.

7. Establish Internal Control Framework

  • Build a solid internal control framework, focusing on accuracy, completeness, and fraud prevention in financial reporting.
  • Regularly test and validate controls, and consider third-party validation for independent assurance.

8. Disclose Material Changes in Real-Time

  • Develop a process for promptly disclosing any material changes to financial data, ensuring transparency with stakeholders (a minimal detection sketch follows below).
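
A minimal sketch of such a process, assuming a simple percentage-based materiality threshold; all figures and the threshold itself are illustrative:

```python
# Hypothetical period-over-period snapshots of reported figures.
previous = {"revenue": 1_000_000, "liabilities": 400_000}
current  = {"revenue": 1_180_000, "liabilities": 405_000}

MATERIALITY = 0.10  # flag changes above 10%; the threshold is illustrative

for line_item, old in previous.items():
    change = (current[line_item] - old) / old
    if abs(change) >= MATERIALITY:
        print(f"DISCLOSE: {line_item} changed {change:+.1%} since last period")
```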

9. Prepare for External Audits

  • Engage an independent auditor to review your financial processes and internal controls.
  • Organize records and ensure a clear audit trail to make the audit process smoother.

10. Monitor and Maintain Compliance

  • Continuously monitor financial systems and internal controls to detect errors or fraud (a monitoring sketch follows this list).
  • Review and update systems regularly to ensure ongoing SOX compliance.
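
As a minimal sketch of what such monitoring can catch, the scan below flags duplicate postings and out-of-policy amounts in a hypothetical journal; the entries and the policy limit are illustrative.

```python
# Hypothetical journal entries scanned on a schedule for common red flags.
entries = [
    {"id": 1, "amount": 250.00,      "approver": "alice"},
    {"id": 2, "amount": 250.00,      "approver": "alice"},  # possible duplicate
    {"id": 3, "amount": 9_999_999.0, "approver": "bob"},    # out of policy range
]

MAX_AMOUNT = 1_000_000  # illustrative policy limit

seen = set()
for e in entries:
    key = (e["amount"], e["approver"])
    if key in seen:
        print(f"entry {e['id']}: possible duplicate posting")
    seen.add(key)
    if e["amount"] > MAX_AMOUNT:
        print(f"entry {e['id']}: amount exceeds policy limit")
```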

11. Develop a Compliance Culture

  • Encourage a company-wide focus on SOX compliance, transparency, and accountability.
  • Provide regular training and leadership to instill a culture of compliance.

Conclusion

In the fast-paced era of blockchain and fintech, SOX compliance has evolved from a regulatory necessity to a strategic cornerstone. By driving accurate financial reporting, minimizing risks, and cultivating trust, it sets the stage for lasting growth and innovation. Companies that prioritize compliance and auditing standards don't just safeguard their operations; they also position themselves as forward-thinking leaders in a rapidly transforming financial landscape.

