
At VA, cyber dominance is in, cyber compliance is out

The Department of Veterans Affairs is moving toward a more operational approach to cybersecurity.

This means VA is applying a deeper focus on protecting the attack surfaces and closing off threat vectors that put veterans’ data at risk.

Eddie Pool, the acting principal assistant secretary for information and technology and acting principal deputy chief information officer at VA, said the agency is changing its cybersecurity posture to reflect a cyber dominance approach.

“That’s a move away from the traditional, exclusively compliance-based approach to cybersecurity, where we put a lot of our time, resources and investments in compliance-based activities,” Pool said on Ask the CIO. “For example, did someone check the box on a form? Did someone file something in the right place? We’re really moving a lot of our focus over to the risk-based approach to security, pushing things like zero trust architecture, microsegmentation of our networks and really doing things that are more focused on the operational landscape. We are more focused on protecting those attack surfaces and closing off those threat vectors in the cyber space.”

A big part of this move to cyber dominance is applying the concepts that make up a zero trust architecture, like microsegmentation and identity and access management.

Pool said as VA modernizes its underlying technology infrastructure, it will “bake in” these zero trust capabilities.

“Over the next several years, you’re going to see that naturally evolve in terms of where we are in the maturity model path. Our approach here is not necessarily to try to map to a model. It’s really to rationalize what are the highest value opportunities that those models bring, and then we prioritize on those activities first,” he said. “We’re not pursuing it in a linear fashion. We are taking parts and pieces and what makes the most sense for the biggest thing for our buck right now, that’s where we’re putting our energy and effort.”

One of those areas that VA is focused on is rationalizing the number of tools and technologies it’s using across the department. Pool said the goal is to get down to a specific set instead of having the “31 flavors” approach.

“We’re going to try to make it where you can have any flavor you want so long as it’s chocolate. We are trying to get that standardized across the department,” he said. “That gives us the opportunity from a sustainment perspective that we can focus the majority of our resources on those enterprise standardized capabilities. From a security perspective, it’s a far less threat landscape to have to worry about having 100 things versus having two or three things.”

The business process reengineering priority

Pool added that redundancy remains a key factor in the security and tool rationalization effort. He said VA will continue to have a diversity of products in its IT investment portfolios.

“Where we are at is we are looking at how do we build that future state architecture as elegantly and simply as possible, so that we can manage it more effectively and protect it more securely,” he said.

In addition to standardizing cybersecurity tools and technologies, Pool said VA is bringing the same approach to business processes for enterprisewide services.

He said over the years, VA has built up a laundry list of legacy technology all with different versions and requirements to maintain.

“We’ve done a lot over the years in the Office of Information and Technology to really standardize on our technology platforms. Now it’s time to leverage that, to really bring standard processes to the business,” he said. “What that does is that really does help us continue to put the veteran at the center of everything that we do, and it gives a very predictable, very repeatable process and expectation for veterans across the country, so that you don’t have different experiences based on where you live or where you’re getting your health care and from what part of the organization.”

As part of the standardization effort, VA will expand its use of automation, particularly in the processing of veterans’ claims.

Pool said the goal is to take more advantage of the agency’s data and use artificial intelligence to accelerate claims processing.

“The richness of our data, the standardization of that data, and how we can eliminate as many steps in these processes as we can, where we have data to make decisions, or automate things that would completely eliminate what would be a paper process: that is our focus,” Pool said. “We’re trying to streamline IT to the point that it’s as fast and as efficient, secure and accurate as possible from a VA processing perspective, and in turn, it’s going to bring a decision back to the veteran a lot faster, and a decision that’s ready to go on to the next step in the process.”

Many of these updates already are having an impact on VA’s business processes. The agency said that it set a new record for the number of disability and pension claims processed in a single year, more than 3 million. That beat its record set in 2024 by more than 500,000.

“We’re driving benefit outcomes. We’re driving technology outcomes. From my perspective, everything that we do here, every product, service capability that the department provides the veteran community, it’s all enabled through technology. So technology is the underpinning infrastructure, backbone to make all things happen, or where all things can fail,” Pool said. “First, on the internal side, it’s about making sure that those infrastructure components are modernized. Everything’s hardened. We have a reliable, highly available infrastructure to deliver those services. Then at the application level, at the actual point of delivery, IT is involved in every aspect of every challenge in the department, to again, bring the best technology experts to the table and look at how can we leverage the best technologies to simplify the business processes, whether that’s claims automation, getting veterans their mileage reimbursement earlier or by automating processes to increase the efficacy of the outcomes that we deliver, and just simplify how the veterans consume the services of VA. That’s the only reason why we exist here, is to be that enabling partner to the business to make these things happen.”

The post At VA, cyber dominance is in, cyber compliance is out first appeared on Federal News Network.

From oversight to intelligence: AI’s impact on project management and business transformation

For CIOs, the conversation around AI has moved from innovation to orchestration, and project management, long a domain of human coordination and control, is rapidly becoming the proving ground for how intelligent systems can reshape enterprise delivery and accelerate transformation.

In boardrooms across industries, CIOs face the same challenge of how to quantify AI’s promise in operational terms: shorter delivery cycles, reduced overhead, and greater portfolio transparency. A 2025 Georgia Institute of Technology-sponsored study of 217 project management professionals and C-level tech leaders revealed that 73% of organizations have adopted AI in some form of project management.

Yet amid the excitement, the question of how AI will redefine the role of the project manager (PM) remains, as does the question of how the future framework for business transformation programs will be defined.

A shift in the PM’s role, not relevance

Across industries, project professionals are already seeing change. Early adopters in the study report project efficiency gains of up to 30%, but success depends less on tech and more on how leadership governs its use. The overwhelming majority found it highly effective in improving efficiency, predictive planning, and decision-making. But what does that mean for the associates running these projects?

Roughly one-third of respondents believed AI would allow PMs to focus more on strategic oversight, shifting from day-to-day coordination to guiding long-term outcomes. Another third predicted enhanced collaboration roles, where managers act as facilitators who interpret and integrate AI insights across teams. The rest envisioned PMs evolving into supervisors of AI systems themselves, ensuring that algorithms are ethical, accurate, and aligned with business goals.

These perspectives converge on a single point: AI will not replace PMs, but it will redefine their value. The PM of the next decade won’t simply manage tasks; they’ll manage intelligence and translate AI-driven insights into business outcomes.

Why PMOs can’t wait

For project management offices (PMOs), the challenge is no longer whether to adopt AI but how. AI adoption is accelerating, with most large enterprises experimenting with predictive scheduling, automated risk reporting, and gen AI for documentation. But the integration is uneven.

Many PMOs still treat AI as an add-on, a set of tools rather than a strategic capability. This misses the point: AI is about augmenting judgment, not just automating activity. The organizations gaining a real competitive advantage are those embedding AI into their project methodologies, governance frameworks, and performance metrics with this five-point approach in mind.

1. Begin with pilot projects

Think small, scale fast. The most successful AI integrations begin with targeted use cases that automate project status reports, predict schedule slippage, or identify resource bottlenecks. These pilot projects create proof points, generate enthusiasm, and expose integration challenges early.
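As one illustration of such a targeted pilot, slippage detection can start much simpler than a trained model. The Python sketch below is a hypothetical, rule-based stand-in (the task names, fields, and threshold are all invented) that flags tasks whose reported progress trails the plan:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    planned_pct: float  # percent complete the plan expects as of today
    actual_pct: float   # percent complete actually reported

def flag_slippage(tasks, threshold=10.0):
    """Flag tasks whose reported progress trails the plan by more than
    `threshold` percentage points -- a crude proxy for schedule slippage."""
    return [t.name for t in tasks if t.planned_pct - t.actual_pct > threshold]

tasks = [
    Task("data migration", planned_pct=60, actual_pct=35),
    Task("UAT sign-off", planned_pct=20, actual_pct=18),
]
at_risk = flag_slippage(tasks)
print(at_risk)  # ['data migration']
```

A pilot that starts this simply still creates a proof point: once the rule surfaces real at-risk tasks, it can be replaced by a predictive model trained on historical project data.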

2. Measure value, not just activity

One common pitfall is adopting AI without clear performance metrics. PMOs should set tangible KPIs such as reduction in manual reporting time, improved accuracy in risk forecasts, shorter project cycle times, and higher stakeholder satisfaction. Communicating these outcomes across the organization is just as important as achieving them. Success stories build momentum, foster buy-in, and demystify AI for skeptical teams.
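A minimal way to operationalize such KPIs is to capture a baseline before the pilot and compute deltas afterward. The Python sketch below is illustrative only; the metric names and figures are invented:

```python
def kpi_deltas(baseline, current):
    """Percent change per KPI versus the pre-AI baseline.
    Negative values are improvements for cost- and time-type metrics."""
    return {k: round((current[k] - baseline[k]) / baseline[k] * 100, 1)
            for k in baseline if k in current}

# Invented figures for a reporting-automation pilot
baseline = {"manual_reporting_hours": 40, "cycle_time_days": 30, "forecast_error_pct": 25}
current  = {"manual_reporting_hours": 22, "cycle_time_days": 24, "forecast_error_pct": 15}
deltas = kpi_deltas(baseline, current)
print(deltas)  # {'manual_reporting_hours': -45.0, 'cycle_time_days': -20.0, 'forecast_error_pct': -40.0}
```

The point is less the arithmetic than the discipline: without the baseline recorded up front, there is nothing to compute deltas against when it is time to report outcomes.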

3. Upskill PMs

AI will only be as valuable as the people who use it. Nearly half of the surveyed professionals cited lack of a skilled workforce as a barrier to AI integration. Project managers don’t need to become data scientists, but they must understand AI fundamentals, how algorithms work, where biases emerge, and what data quality means. In this evolving landscape, the most effective PMs will combine data literacy with human-centered leadership, including critical thinking, emotional intelligence, and communication.

4. Strengthen governance and ethics

Increasing AI use raises pressing ethical questions, especially when algorithms influence project decisions. PMOs must take the lead in establishing AI governance frameworks that emphasize transparency, fairness, and human oversight. Embedding these principles into the PMO’s charter doesn’t just mitigate risk; it builds trust.

5. Evolve from PMO to BTO

The traditional PMO focuses on execution through scope, schedule, and cost. But AI-driven organizations are shifting toward business transformation offices (BTOs), which align projects directly with strategic value creation while driving process improvement in parallel. A PMO ensures projects are done right. A BTO ensures the right projects are done. A crucial element of this framework is the transition from a Waterfall to an Agile mindset. Project management has evolved from rigid plans to iterative, customer-centric, and collaborative methods, with hybrid methodologies becoming increasingly common. This Agile approach is vital for adapting to the rapid changes brought by AI and digital disruption.

The new PM career path

By 2030, AI could manage most routine project tasks, such as status updates, scheduling, and risk flagging, while human leaders focus on vision, collaboration, and ethics. This shift mirrors past revolutions in project management, from the rise of Agile to digital transformation, but at an even faster pace. Yet as organizations adopt AI, the risk of losing the human element persists. Project management has always been about people: aligning interests, resolving conflicts, and inspiring teams. And while AI can predict a delay, it can’t motivate a team to overcome it. The PM’s human ability to interpret nuance, build trust, and foster collaboration remains irreplaceable.

A call to action

AI represents the next frontier in enterprise project delivery, and the next decade will test how well PMOs, executives, and policymakers can navigate the evolution of transformation. To thrive, organizations must invest in people as much as in platforms, adopt ethical, transparent governance, foster continuous learning and experimentation, and measure success by outcomes rather than hype.

For CIOs, the mandate is clear: lead with vision, govern with integrity, and empower teams with intelligent tools. AI, after all, isn’t a threat to the project management profession. It’s a catalyst for its reinvention, and when executed responsibly, AI-driven project management will not only deliver operational gains but also build more adaptive, human-centered organizations ready for the challenges ahead. By embracing it thoughtfully, PMs can elevate their roles from administrators to architects of change.

The CIO’s checklist for getting a positive ROI from artificial intelligence

Earlier this year, MIT made headlines with a report finding that 95% of companies are getting no return from artificial intelligence, despite substantial investments. Why do so many AI initiatives fail to deliver a positive ROI? Because they often lack a clear link to business value, says Neal Ramasamy, global CIO of IT consultancy Cognizant.

“That leads to projects that are technically impressive but don’t solve a real need or create a tangible advantage,” he adds. Technologists often chase the enthusiasm of the moment, diving headfirst into AI experiments without considering business outcomes. “Many start with models and pilots rather than with what they want to achieve,” observes Saket Srivastava, CIO of Asana, a project management application.

“Teams run demos in isolation, without redesigning the underlying workflow or assigning a profit-and-loss owner.”

The combination of a lack of upfront product thinking, poor underlying data practices, nonexistent governance, and minimal cultural incentives for AI adoption can produce bad results. So, to avoid poor outcomes, many of the remedies come down to better change management. “Without process redesign, AI accelerates today’s inefficiencies,” he adds.

Below are five change management practices CIOs can put into action today. By following this checklist, enterprises should begin to reverse the trend of negative AI ROI, learn from anti-patterns, and discover which kinds of metrics validate successful enterprise AI initiatives.

1. Align leadership early by communicating business goals and championing the AI initiative

AI initiatives require executive backing and a clear vision of how they can improve the business. “Strong leadership is essential to translating AI investments into results,” says Adam Lopez, president and CIO of managed IT support provider CMIT Solutions. “Executive sponsorship and oversight of AI programs, ideally at the CEO or board level, correlate with higher ROI.”

At IT services and consulting firm Xebia, for example, a subgroup of executives leads internal AI activities. Chaired by global CIO Smit Shanker, the team includes the global CFO and the heads of AI, automation, IT infrastructure, security, and business operations.

Once top-level leadership is in place, accountability becomes critical. “Start by assigning ownership,” advises Srivastava.

“Every AI use case needs an accountable leader with an objective tied to clear goals and key results.” He then recommends establishing a cross-functional PMO to define reference use cases, set success targets, enforce safeguards, and communicate progress regularly.

Even with leadership in place, however, many employees will need hands-on guidance to apply AI in their daily work. “For most people, even if you give them the tools, they don’t know where to start,” says Orla Daly, CIO of Skillsoft, a learning management system. She recommends identifying who in the company can surface meaningful use cases and share practical tips, such as how to get the most out of tools like Copilot. Those with curiosity and a willingness to learn will make the greatest progress, she argues.

Finally, executives must invest in infrastructure, talent, and training. “Leaders must foster a data-driven culture and a clear vision of how AI will solve business problems,” says Cognizant’s Ramasamy. That requires close collaboration among front-line management, data scientists, and IT to run and measure pilots before moving to scale.

2. Evolve the talent framework and invest in upskilling

Enterprises must be open to changing their talent framework and redesigning roles. “CIOs should adapt their talent management strategies to ensure successful AI adoption and ROI,” says Ramasamy. “That may mean creating new roles and career paths for AI professionals, such as data scientists and prompt engineers, while upskilling existing employees.”

CIOs should also treat talent as a cornerstone of any AI strategy, adds CMIT’s Lopez. “By investing in people through training, communication, and new specialist roles, CIOs can be confident that employees will adopt AI tools and determine their success.” He adds that internal hackathons and training sessions often yield notable gains in skills and confidence.

Upskilling should meet employees where they are, so Asana’s Srivastava recommends tiered paths: all staff need basic training in prompt literacy and safety, while power users require deeper knowledge of workflow design and agent building. “Our approach has been to survey the workforce, invest in enablement, and re-measure to confirm maturity is moving in the right direction,” he notes.

Assessing the current talent structure, however, goes beyond human skills. It also means re-evaluating the work to be done and everyone’s tasks within it. “It’s essential to review business processes for refactoring opportunities, given the new capabilities AI offers,” says Scott Wheeler, cloud practice lead at consulting firm Asperitas Consulting.

For Skillsoft’s Daly, today’s AI era calls for a modern talent management framework that skillfully balances the four Bs: build, buy, borrow, and bots. In other words, leaders should view their company as a set of capabilities and apply the right mix of in-house staff, software, partners, or automation as needed. “That requires breaking activities down into jobs or tasks to be done and looking at everyone’s work in a more granular way,” Daly notes.

For example, her team used GitHub Copilot to quickly code a learning portal for a particular client. The project showed how pairing human developers with AI assistants can dramatically accelerate delivery, while raising new questions about the skills other developers need to be just as productive and efficient.

Still, as AI agents take on more routine tasks, leaders must dispel fears that AI will replace jobs outright. “Communicating the why behind AI initiatives can ease fears and show how these tools can augment human roles,” Ramasamy notes. Srivastava agrees. “The common thread is trust,” he says. “Show people how AI removes drudgery and increases impact; keep humans in the decision loop, and adoption will follow.”

3. Adapt organizational processes to capture AI’s full benefits

Changing headcount is only the beginning: companies must also redesign core processes. “Capturing AI’s full value often requires redesigning how the business operates,” says CMIT’s Lopez, who urges embedding AI in daily operations and supporting it with continuous experimentation rather than treating it as a static add-on.

To that end, one necessary adaptation is to treat AI-driven internal workflows as products and codify the patterns across the enterprise, says Srivastava. “Establish rigorous product management for capturing, prioritizing, and planning AI use cases, with clear owners, problem statements, and value hypotheses,” he stresses.

At Xebia, a governance committee oversees that rigor through a three-stage process: identify and measure value, secure business buy-in, and then hand off to IT for monitoring and support. “A central group is responsible for the organizational and functional streamlining of each use case,” Shanker explains. “That encourages cross-functional processes and helps break down silos.”

Likewise, for Ramasamy, the biggest obstacle is organizational resistance. “Many companies underestimate the change management required for successful adoption,” he says. “The most critical shift is moving from siloed decision-making to a data-centric approach. Business processes should seamlessly integrate AI outputs, automating tasks and giving employees data-driven insights.”

Identifying the right areas to automate also depends on visibility. “This is where most companies fail, because they lack solid, documented processes,” says Skillsoft’s Daly, who recommends engaging subject matter experts from every line of business to review and streamline workflows. “It’s important to appoint people within the business who own figuring out how to embed AI in the workflow,” she adds.

Once the units of work common across functions that AI can streamline have been identified, the next step is to make them visible and standardize how they are applied. Skillsoft is doing this through an agent registry that documents agents’ capabilities, safeguards, and data-handling processes. “We’re formalizing an enterprise AI framework in which ethics and governance are part of how we manage the use case portfolio,” she adds.

Enterprises should then anticipate obstacles and build support structures to help users. “One strategy is to have AI SWAT teams whose purpose is to ease adoption and remove roadblocks,” observes Asperitas’s Wheeler.

4. Measure progress to validate the return on investment

To assess ROI, CIOs must establish a pre-AI baseline and set benchmarks up front. Leaders recommend assigning ownership of metrics such as time-to-value, cost savings, time savings, the workload handled by human agents, and new revenue opportunities generated.

“Baseline measurements should be established before AI projects begin,” argues Wheeler, who advises folding each business unit’s leading indicators into regular leadership performance reviews. A common mistake, he says, is measuring only technical KPIs such as model accuracy, latency, or precision without linking them to business outcomes like savings, revenue, or risk reduction.

The next step, then, is to define clear, measurable objectives that demonstrate tangible value. “Build measurement into projects from day one,” says CMIT’s Lopez. “CIOs should define a set of relevant KPIs for each AI initiative. For example, 20% faster processing time or a 15% increase in customer satisfaction.” Start with small pilots that deliver quick, quantifiable wins, he adds.

One clear measure is time saved.

For example, Eamonn O’Neill, CTO of Lemongrass, a software-enabled services provider, says he has seen customers document SAP development manually, a process that can be very time-consuming. “Using generative AI to create that documentation delivers a clear reduction in human effort, which can be measured and translated into ROI fairly simply,” he comments. Reduced human effort per task is another key signal.
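Translated into arithmetic, that kind of effort reduction maps to ROI in a straightforward way. The Python sketch below uses entirely invented figures (hours saved, hourly rate, and tooling cost are hypothetical) to show the calculation:

```python
def time_savings_roi(hours_saved_per_month, hourly_rate, monthly_tool_cost):
    """Net monthly value and ROI multiple of automating a manual task."""
    gross_value = hours_saved_per_month * hourly_rate
    net_value = gross_value - monthly_tool_cost
    return net_value, net_value / monthly_tool_cost

# Invented figures: 120 engineer-hours of documentation work saved per month
# at $90/hour, against $3,000/month in generative AI tooling
net, roi = time_savings_roi(120, 90, 3000)
print(net, roi)  # 7800 2.6
```

The same baseline-first discipline applies here: the hours-saved figure is only credible if the manual effort was measured before the automation went in.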

“If the goal is to reduce the number of help desk calls handled by human operators, leaders should set a clear metric and track it in real time,” says Ram Palaniappan, CTO of TEKsystems, a full-stack technology services provider. He adds that AI adoption can also surface new revenue opportunities.

Some CIOs track more granular KPIs within individual use cases and adjust strategy based on the results. Asana’s Srivastava, for example, monitors engineering efficiency by tracking cycle times, throughput, quality, cost per transaction, and risk events. He also measures the share of agent-assisted runs, active users, human acceptance, and exception escalations. Analyzing that data, he explains, helps fine-tune prompts and safeguards in real time.

The fundamental point is to set metrics from the start and avoid the mistake of never tracking signals or value delivered. “Measurement is often bolted on later, so leaders can’t demonstrate value or decide what to scale,” says Srivastava. “The fix is to start with a specific mission metric, baseline it, and embed AI directly in the workflow so people can focus on higher-value judgment.”

5. Govern AI culture to avoid breaches and instability

Generative AI tools are now commonplace, but many employees still haven’t received adequate training to use them safely. According to a 2025 SmallPDF study, for example, nearly one in five US employees has entered their login credentials into AI tools. “Good leadership means building governance and guardrails,” says Lopez. That includes setting policies to keep sensitive and confidential data out of tools like ChatGPT.

Heavy AI use also expands the company’s attack surface. Leadership must now take seriously issues such as security vulnerabilities in AI-powered browsers, shadow AI, and LLM hallucinations. As agentic AI becomes more involved in business-critical processes, proper authorization and access controls are essential to prevent the exposure of sensitive data or malicious entry into IT systems.

From a software development standpoint, the risk of leaking passwords, keys, and tokens through AI coding agents is very real. Engineers have adopted MCP servers to give AI coding agents access to external data, tools, and APIs, but Wallarm research found a 270% increase in MCP-related vulnerabilities from the second to the third quarter of 2025, alongside a rise in API vulnerabilities.

Neglecting agent identity, permissions, and audit trails is a common trap CIOs fall into with enterprise AI, says Srivastava. “Introduce agent identity and access management so agents inherit the same permissions and auditability as humans, including logging and approvals,” he says.
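In code, inheriting the invoking user's permissions and auditability can be as simple as scoping every agent action to that user's entitlements and logging each attempt. The Python sketch below is purely illustrative, not any product's API; every name in it is invented:

```python
from datetime import datetime, timezone

class AgentSession:
    """An AI agent acting strictly within the invoking user's permissions,
    with every attempt (allowed or denied) written to an audit trail."""

    def __init__(self, user, permissions, audit_log):
        self.user = user
        self.permissions = set(permissions)  # inherited from the human user
        self.audit_log = audit_log

    def act(self, action, resource):
        allowed = action in self.permissions
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "user": self.user,
            "actor": "agent",
            "action": action,
            "resource": resource,
            "allowed": allowed,
        })
        if not allowed:
            raise PermissionError(f"{self.user}'s agent may not {action} {resource}")
        return f"{action}:{resource}"

log = []
agent = AgentSession("jdoe", permissions={"read"}, audit_log=log)
agent.act("read", "claims-db")     # permitted, and logged
# agent.act("write", "claims-db")  # would raise PermissionError, and still be logged
```

Note that denied attempts are recorded before the exception is raised, so the audit trail captures what the agent tried to do, not only what it succeeded in doing.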

Despite the risks, oversight remains weak. An AuditBoard report found that while 82% of enterprises are deploying AI, only 25% have implemented comprehensive governance programs. With data breaches now costing an average of nearly $4.5 million each, according to IBM, and IDC reporting that organizations building trustworthy AI are 60% more likely to double the ROI of their AI projects, the business case for AI governance is clear. “Pair ambition with solid protections: clear data lifecycle and access controls, evaluation and red teaming, and human-in-the-loop checkpoints where the stakes are high,” says Srivastava.

“Build security, privacy, and data governance into the SDLC so delivery and safety advance together, with no black boxes around data provenance or model behavior.”

It’s not magic

According to BCG, only 22% of companies have taken their AI beyond the proof-of-concept stage, and only 4% are creating substantial value. With those sobering statistics in mind, CIOs shouldn’t harbor unrealistic expectations about return on investment.

Getting ROI from AI will take significant upfront effort and fundamental changes to organizational processes. As George Maddaloni, CTO of operations at Mastercard, put it in a recent interview with Runtime, adopting generative AI apps is largely about change management and adoption.

AI’s pitfalls are nearly endless: it is common for enterprises to chase hype rather than value, launch products without a clear data strategy, scale too fast, and bolt on security as an afterthought. Many AI programs simply lack the executive backing or governance needed to hit their targets. It is also easy to believe vendor hype about productivity gains and overspend, or to underestimate the difficulty of integrating AI platforms with legacy IT infrastructure.

Looking ahead, to maximize AI’s business impact, leaders recommend investing in the data infrastructure and platform capabilities needed to scale, and focusing on one or two high-impact use cases that can clearly remove manual work and increase revenue or efficiency.

Ground the enthusiasm for AI in first principles, and understand the business strategy you intend to pursue on the way to ROI.

Without solid leadership and clear goals, AI is just fascinating technology whose payoff stays forever out of reach.

The CIO of Mitsubishi Materials on the role and appeal of the CIO

A place that reset my career compass: from Mizushima to Silicon Valley, and on to the front lines of management

In 1989, I joined Mitsubishi Kasei (now Mitsubishi Chemical) as a new graduate production engineer. I was assigned to the Mizushima plant in Kurashiki, Okayama Prefecture, on the grounds of a petrochemical complex, and my days in field engineering began.

The turning point came in 1996. The company was setting up new offices in Boston on the US East Coast and San Francisco on the West Coast, and I was posted to Silicon Valley as a founding member of the West Coast operation. It was the era of Windows 95, the democratization of the internet, and the dawn of e-business. I found myself in a place where the world’s most advanced technology and capital intersected, a region said to attract roughly a third of all investment in the United States.

After three years abroad I returned to Mizushima and went back to production engineering, but I was gripped by the feeling that I had seen something I shouldn’t have. Having experienced the speed, innovation, and appetite for the future of Silicon Valley, I could no longer go back to my old work.

So I volunteered for a transfer to the information systems department. From then on, I was involved in many projects, including DX, and came to serve as a bridge between technology and management. In 2021, I moved to Mitsubishi Materials as executive officer and CIO. Today, I continue to take on challenges with an eye to the future, leading the company’s digital strategy.

“The ERP turnaround specialist”: what three comebacks taught me about facing problems head-on

The biggest challenge of my career has been ERP implementation projects. In fact, I have rebuilt stalled ERP projects three times.

In each case, I inherited a project that had run aground under my predecessor, turned it around, and carried it to the goal. These were very large projects in both budget and scale, and they became formative experiences that shaped how I think and act as a CIO.

If I have any originality, I believe it lies in having switched careers from production engineering to IT, and in having worked in the heart of Silicon Valley. After returning to Japan, I also got actively involved in cross-industry activities rather than staying within my own company.

For example, I took part in IT activities at the Japan Petrochemical Industry Association, in the digitization of inter-company transactions (EDI), and in launching a global chemicals e-commerce site that brought together 22 major companies from Japan and abroad, projects that involved the entire industry.

The experience of working across such boundaries, between the shop floor and the office, Japan and overseas, business and IT, underpins the perspective and judgment I bring to the CIO role today.

What I value most is the attitude of focusing on what is in front of me and giving it everything I have. Believing that setting too many goals for myself might narrow my future possibilities, I deliberately avoid fixed targets and concentrate fully on the present moment.

In large projects like ERP, difficulties and unexpected problems appear one after another. Even in those situations, I have held to the principles of not running away and of seeing things through to the end with full responsibility. Accumulate experience, think for yourself, and act from your own convictions: that is the core of my leadership.

Above all, the realization that knowing the world teaches you about Japan, knowing other companies teaches you about your own, and knowing other people teaches you about yourself is my greatest asset and the driving force behind my work as a CIO.

Top-down alone doesn’t move people: redefining governance with the front line in the lead role

Changing jobs at 57 was hardly early. But there was much I could only see after making the move. Two keywords in IT strategy left a particularly strong impression: governance and synergy.

In my previous role, my mission was to oversee the information systems of a large group that included several listed subsidiaries. To bring together companies with strong independence, it wasn’t enough simply to impose governance; I had to carefully explain what benefits each measure would bring to the people on the ground.

Synergy emerges beyond governance, and employees start to move because they are convinced. Building that mechanism, I came to realize, is the key to a sustainable IT strategy.

I also learned that both top-down and bottom-up approaches matter in driving DX. Top-down creates company-wide impact, while bottom-up lets younger people on the front lines take on challenges as their own and grow in the process. When the two work in tandem, DX spreads through the entire organization.

Can the CIO become a business executive? Two types of CIO, from a 37-year career

Looking back now on the CIO’s role, I see two types: the CIO who oversees information systems, and the CIO who shoulders part of the management of the company.

I have aimed to be the latter. Going beyond IT expertise, knowing the outside world, crossing industries, and connecting the front lines with management: I believe that kind of perspective expands what a CIO can be.

What I emphasize to that end is the liberal arts, the accumulated wisdom of humanity.

New things are not born from zero; they emerge from combinations of many kinds of wisdom. With the arrival of generative AI, we have a greater opportunity than ever to create creative value.

I hope everyone in information systems departments will adopt this perspective. At times, cross beyond your specialty, stay close to the front lines, and talk with management. Beyond that, I am convinced, new possibilities open up for the CIO.

We also asked Itano about his view of the CIO’s work in more concrete terms, its rewards and appeal, his leadership, and his advice for IT leaders. For details, please watch the video.

Becoming a CIO with a management backbone: the value of operations and maintenance, and the power to move people

Four years after changing jobs at 57, I am now CIO of Mitsubishi Materials, thinking every day about how IT and DX, viewed from a management perspective, can contribute to the business, and putting that into practice.

The CIO’s role is not just to introduce new technology.

Rather, value emerges only when the systems you introduce are operated stably, maintained, and kept secure. Only when the whole is complete can IT contribute to management.

Over these four years, we have pursued the “MMC Group IT WAY,” a company- and group-wide IT strategy, optimizing organization, people, and budget around the axes of governance and synergy. The information systems department is evolving into an organization that stays close to users while consolidating core functions and aiming for overall optimization.

ERP rollout, centralized purchasing, security consolidation, and legacy system modernization: these measures are designed not as mere technology refreshes but as mechanisms that connect management’s intent with the front line’s ability to execute. Rather than excessive renewal, we identify what needs to change and change only that, appropriately. That is our basic stance on modernization.

The weight of this responsibility is, at the same time, a great source of motivation and fulfillment. In an era when IT carries part of management itself, I strongly believe the CIO should not be a mere head of technology but someone with an executive’s perspective who brings the whole to completion.

Know the world, know Japan: a CIO on the essence of leadership

Across 37 years in business, what I feel most strongly is the importance of how you move people. Among all relationships, with projects, subordinates, colleagues, stakeholders, and superiors, the most difficult and most valuable question is how you move management.

To do that, you first need your own axis: what you want to do and what you want to convey. If your axis wavers, people won’t follow. And you need the ability to put that axis into words. Unless you verbalize it, your intentions won’t come across.

For words to land, trust is a prerequisite. Where there is trust, people empathize, and behavior changes. This chain, holding an axis, verbalizing it, building trust, earning empathy, and prompting action, and how elegantly you keep it turning, is, I feel, a leader’s greatest challenge.

That requires knowing yourself. Knowing yourself is a philosophical matter, and far from easy. But the cycle of knowing the world to know Japan, knowing Japan to know your company, and knowing your company to know yourself elevates your perspective as a leader.

There is also a structural feature of Japan’s IT industry: roughly 70% of systems engineers belong to external partners. Even amid calls for insourcing, there are limits. That is exactly why you need to build relationships with vendors and consultants in which you fight together as comrades-in-arms.

Not just issuing instructions as the client, but humbly learning from each other and pooling wisdom: I believe that is the key to producing truly valuable results in the world of IT and DX from here on.

IT exists to make people happy: the human-centered management philosophy one CIO arrived at

As a personal motto, there are two keywords I believe humanity in the 21st century absolutely must cherish: “awareness” (noticing, consciousness) and “compassion” (altruism, consideration for others).

This way of thinking has been a constant axis for me since I was brought in as CIO of Mitsubishi Materials, both in the duties I owe the company and in my desire to strengthen Japanese manufacturing as a whole.

At Mitsubishi Materials, we have established a set of guidelines called the “Mitsubishi Materials Group IT Way,” and we drive our IT initiatives on that foundation.

Generative AI is now an unavoidable theme, and the question is how people will master it. What matters is not that generative AI creates something, but the relationship in which people create something using generative AI.

Generative AI is, in the end, just one IT tool; the ones who use it are human beings. I keep communicating the idea of putting people at the center, inside and outside the company, and I develop a range of initiatives on that basis.

Strengthening Japanese manufacturing requires collaboration and dialogue that cross company boundaries. Over roughly five years, from before my job change until today, I have engaged in dialogue with more than 6,000 people at about 70 companies through study groups and lectures.

Through this, I have met people who resonate with my thinking, and I feel I am growing through what I learn from them. I believe this accumulation of knowledge becomes the driving force behind new measures and initiatives.

The message I most want to convey is that all technology must exist to make people happy. If, 100 or 200 years from now, our descendants recognize that something changed from this era onward, it will be a transformation as large as the internet, and the generative AI unfolding right now is one such change.

One more challenge we should take on now is for the future to be able to say that this was when business learned to operate while protecting the global environment. Mitsubishi Materials is a company that promotes resource circulation, and that stance is precisely an attempt to realize this vision of the future.

In the business world we are often told not to sound naïve, but when it comes to protecting the global environment, compassion (altruism, consideration for others) is indispensable. I believe this is a keyword all of humanity should be more conscious of.

Many CIOs are strong in their specialist domains, and they are expected to debate on an equal footing with executive management.

But the IT domain alone is not enough. It is important to look beyond IT, to knowledge from overseas, industry trends, and the liberal arts. Circumstances differ by company and industry, but thinking only about IT will not get you to the essence of a problem. That is why I believe the stance required of CIOs from now on is to work in a multifaceted way while leveraging their own strengths.

Agentic AI’s rise is making the enterprise architect role more fluid

At the time of a previous feature about enterprise architects, gen AI had emerged, but its impact on enterprise technology hadn’t yet been felt. Today, gen AI has spawned a plethora of agentic AI solutions from the major SaaS providers, and enterprise architecture and the role of enterprise architect are being redrawn. So what do CIOs and their architects need to know?

Organizations, especially their CEOs, have been vocal about the need for AI to improve productivity and restore growth, and analysts have backed the trend. Gartner, for example, forecasts that 75% of IT work will be completed by human employees using AI over the next five years, which, it says, will demand a proactive approach to identifying new value-creating IT work, like expanding into new markets, creating additional products and services, or adding features that boost margins.

If this radical change in productivity takes place, organizations will need a new plan for business processes and the tech that operates those processes. Recent history shows if organizations don’t adopt new operating models, the benefits of tech investments can’t be achieved.

As a result of agentic AI, processes will change, as well as the software used by the enterprise, and the development and implementation of the technology. Enterprise architects, therefore, are at the forefront of planning and changing the way software is developed, customized, and implemented.

In some quarters of the tech industry, gen AI is seen as a radical change to enterprise software, and to its large, well-known vendors. “To say AI unleashed will destroy the software industry is absurd, as it would require an AI perfection that even the most optimistic couldn’t agree to,” says Diego Lo Giudice, principal analyst at Forrester. Speaking at the One Conference in the fall, Lo Giudice reminded 4,000 business technology leaders that change is taking place, but it’s built on the foundations of recent successes.

“Agile has given better alignment, and DevOps has torn down the wall between developers and operations,” he said. “They’re all trying to do the same thing, reduce the gap between an idea and implementation.” He’s not denying AI will change the development of enterprise software, but like Agile and DevOps, AI will improve the lifecycle of software development and, therefore, the enterprise architecture. The difference is the speed of change. “In the history of development, there’s never been anything like this,” adds Phil Whittaker, AI staff engineer at content management software provider Umbraco.

Complexity and process change

As the software development and customization cycle changes, and agentic applications become commonplace, enterprise architects will need to plan for increased complexity and new business processes. Existing business processes can’t continue if agentic AI is taking on tasks currently done manually by staff.

Again, Lo Giudice adds some levity to a debate that can often become heated, especially in the wake of major redundancies by AI leaders such as AWS. “The view that everyone will get a bot that helps them do their job is naïve,” he said at the One Conference. “Organizations will need to carry out a thorough analysis of roles and business processes to ensure they spend money and resources on deploying the right agents to the right tasks. Failure to do so will lead to agentic technology being deployed that’s not needed, can’t cope with complex tasks, and increases the cloud costs of the business.”

“It’s easy to build an agent that has access to really important information,” says Tiago Azevedo, CIO for AI-powered low-code platform provider OutSystems. “You need segregation of data. When you publish an agent, you need to be able to control it, and there’ll be many agents, so costs will grow.”

The big difference, though, is between deterministic and non-deterministic, says Whittaker: the more random outcomes of non-deterministic agents require guardrails built from deterministic agents that produce the same output every time. Classifying business outcomes as deterministic or non-deterministic is a clear role for enterprise architecture. He adds that this is where AI can help organizations fill in gaps. Whittaker, who’s been an enterprise architect, says it’ll be vital for organizations to experiment with AI to see how it can benefit their architecture and, ultimately, business outcomes.
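The deterministic/non-deterministic split described above can be shown with a toy sketch: a non-deterministic "agent" proposes a value that varies run to run, and a deterministic guardrail applies the same fixed rule every time to bound what gets through. The scenario, function names, and the 10% rule are invented for illustration.

```python
import random

# Toy illustration: a non-deterministic agent proposes a discount
# (its output varies between runs, like an LLM), while a deterministic
# guardrail applies one fixed rule every time: cap discounts at 10%.

def nondeterministic_agent(order_total):
    """Proposes a discount; the output varies from run to run."""
    return round(random.uniform(0, 0.5) * order_total, 2)

def deterministic_guardrail(order_total, proposed_discount):
    """Same inputs always yield the same verdict: never exceed 10%."""
    cap = round(0.10 * order_total, 2)
    return min(proposed_discount, cap)

order_total = 200.0
proposal = nondeterministic_agent(order_total)             # varies each run
approved = deterministic_guardrail(order_total, proposal)  # never above 20.0
print(f"proposed {proposal}, approved {approved}")
```

However random the proposal, the approved amount is bounded by a rule whose outcome is reproducible, which is what makes the guardrail auditable.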

“The path to greatness lies not in chasing hype or dismissing AI’s potential, but in finding the golden middle ground where value is truly captured,” write Gartner analysts Daryl Plummer and Alicia Mullery. “AI’s promise is undeniable, but realizing its full value is far from guaranteed. Our research reveals the sobering odds that only one in five AI initiatives achieve ROI, and just one in 50 deliver true transformation.” Further research also finds just 32% of employees trust the organization’s leadership to drive transformation. “Agents bring an additional component of complexity to architecture that makes the role so relevant,” Azevedo adds.

In the past, enterprise architects were focused on frameworks. Whittaker points out that new technology models will need to be understood and deployed by architects to manage an enterprise that comprises employees, applications, databases, and agentic AI. He cites MCP as one example, as it provides a standard way to connect AI models to data sources and simplifies the current tangle of bespoke integrations and RAG implementations. AI will also help architects with this new complexity. “There are tools for planning, requirements, creating epics, user stories, code generation, documenting code, and translating it,” added Lo Giudice.

New responsibilities

Agentic AI is now a core feature of every major EA tool, says Stéphane Vanrechem, senior analyst at Forrester. “These agents automate data validation, capability mapping, and artifact creation, freeing architects to focus on strategy and transformation.” He cites the technology of Celonis, SAP Signavio, and ServiceNow for their agentic integrations. Whittaker adds that the enterprise architect has become an important human in the loop to protect the organization and be responsible for the decisions and outcomes that agentic AI delivers.

Although some enterprise architects will see this as a collapse of their specialization, Whittaker thinks it broadens the scope of the role and makes them more T-shaped. “I can go deep in different areas,” he says. “Pigeon-holing people is never a great thing to do.”

Traditionally, architecture has suggested that something is planned, built, and then exists. The rise of agentic AI in the enterprise means the role of the enterprise architect is becoming more fluid as they continue to design and oversee construction. But the role will also involve continual monitoring and adjustment to the plan. Some call this orchestration, or perhaps it’s akin to map reading. An enterprise architect may plan a route, but other factors will alter the course. And just like weather or a fallen tree, which can lead to a route deviation, so too will enterprise architects plan and then lead when business conditions change.

Again, this new way of being an enterprise architect will be shaped by technology. Lo Giudice believes there’ll be increased automation, and Azevedo sides with the orchestration view, saying agents are built and a catalogue of them is created across the organization, which is an opportunity for enterprise architects and CIOs to be orchestrators.

Whatever the job title, Whittaker says enterprise architecture is more important than ever. “More people will become enterprise architects as more software is written by AI,” he says. “Then it’s an architectural role to coordinate and conduct the agents in front of you.” He argues that as technologists allow agents and AI to do the development work for them, the responsibility of architecting how agents and processes function broadens and becomes the responsibility of many more technologists.

“AI can create code for you, but it’s your responsibility to make sure it’s secure,” he adds. Rather than developing the code, technology teams will become architecture teams, checking and accepting the technology that AI has developed, and then managing its deployment into the business processes.

With shadow AI already embedded in organizations, Whittaker’s view shows the need for a team of enterprise architects that can help the business align with the AI agents it has deployed, while protecting customer data and the organization’s cybersecurity posture.

AI agents are redrawing the enterprise, and at the same time replanning the role of enterprise architects.


What it takes to step into a C-level technology role

You’ve led several digital transformation initiatives and delivered financial impacts. Executives recognize your change leadership competencies, having improved both customer and employee experiences. The architectures you helped roll out are now platform standards and are foundational to your organization’s data and AI strategies.

Now, you’re asking whether you’re ready for a CIO role, or another C-level role in data, digital, or security. 

CIO.com’s 24th annual State of the CIO reports that over 80% of CIOs say their role is becoming more digital- and innovation-focused, that they are more involved in leading digital transformation, and that the CIO is becoming a changemaker. If you’re checking these boxes, you should be asking how you can step up into a C-level job.

Transformation leaders are excellent C-level candidates

Leading transformation initiatives is an important prerequisite for C-level roles, but it’s not sufficient. There’s a significant step up in responsibilities when you become accountable for outcomes and managing risks across all IT initiatives and operations. C-level technology leaders must define a strategy that the CEO and CFO buy into and they must oversee an evolving digital operating model.

“Aspiring leaders need to shift from managing project-based change execution to taking full ownership and accountability for enterprise technology, architecture, and IT strategy,” says Rani Johnson, CIO of Workday. “They should develop deep, hands-on expertise in IT infrastructure, cybersecurity, AI platforms, core system operations, and data governance. They must demonstrate the ability to translate technical strategy into sustained business value whilst ensuring operational stability.”

To prepare for C-level roles, leaders should develop a lifelong learning program to develop expertise and build confidence. The 70-20-10 learning model is one approach that focuses 70% of efforts on on-the-job experiences, 20% on social learning from peers, and 10% on formal education. Here’s how digital trailblazers can apply the model in their quest for C-level opportunities.

Experience transitioning to the non-expert influencer

Many transformation leaders try to develop expertise across the full scope of their programs, even multi-year enterprise-wide strategic initiatives. Some leaders aim for full visibility into their agile programs to help steer priorities and mitigate risks.

But C-level leaders don’t have the time to get into the weeds on every strategic initiative and are generally not experts on the technology implementation details. The 70% of job experiences that transformation leaders should target require stepping into areas outside their expertise and responsibilities.

“Stepping into a C-level technology role is less about having all the answers and more about learning to lead through ambiguity and complexity,” says Kathy Kay, CIO of Principal. “Some of the most valuable growth comes from taking on stretch assignments, solving high-impact business problems, and building the ability to influence across the enterprise, not just within IT. When that experience is paired with the guidance of strong mentors and peers, it creates a lasting foundation for leadership.”

Here are some tips for on-the-job experiences to seek out.

  • Visit customers with leaders from sales and marketing to develop business acumen, understand buyer needs, and review customers’ end-to-end workflows.
  • Mentor leaders on other initiatives to build confidence in providing advice in areas outside of your expertise.
  • Facilitate a workshop because it’s a great experience for presenting to executive committees and boards, especially if you successfully navigate a blow-up moment.
  • Identify department leaders who are detractors to adopting new technologies and find ways to break through their status-quo thinking.
  • Become a change agent by partnering with select operations teams that lag in using data for decision-making and in adopting AI to drive efficiencies.

A second area to develop is the skills to listen, challenge, adapt, and pivot. Successful C-level leaders have to sell a vision and continuously plan, but also sense when market, customer, investor, and stakeholder needs require a reset of objectives.

“New technologies, shifting business priorities, and unexpected challenges can render even the best-laid plans obsolete overnight,” says Cameron Daniel, CTO of Megaport. “Successful leaders don’t just respond to change as it happens; they anticipate it and make sure their teams are prepared and equipped to handle it. As CTO, you serve as the chief architect of this adaptability, ensuring that your solution evolves alongside innovation while continuing to drive business impact and strategic goals.”

Focus social learning on AI and emerging technologies

There’s a lot of hype around generative AI and when artificial general intelligence will emerge. Boards and executive leaders expect C-level leaders to filter the noise, lead the AI strategy, and establish data and AI governance.

C-level technical leaders can’t rely on press releases and small POCs to develop realistic AI visions that can deliver near-term ROI. Top C-level leaders expand their knowledge by networking with peers and joining communities to learn where others are investing and how they are delivering AI business outcomes. 

Communities to consider joining include:

Many of these communities are open to tech leaders aspiring to C-level roles.

At a recent Coffee With Digital Trailblazers, we discussed how transformation leaders prepare to take the C-level leadership baton and how social learning can happen inside the company as well. For example, Derrick Butts, founder and vCISO at Continuums Strategies, suggested joining the team working on AI threat detection and triaging different types of AI-enabled automated attacks.

Joe Puglisi, growth strategist and fractional CIO, added that being curious and asking many “why” questions is key to unlocking AI opportunities: “If you’re not curious and don’t get to the root of the reason things are done the way they’re done, you’ll never invent that new, better, faster, smarter, cheaper way that’s going to bring new customer satisfaction levels, new products to your customers, new revenue sources, or cost reductions.”

One more area to focus on for social learning about AI opportunities is meeting with subject-matter experts who can fully explain the data underlying a business operation. Jamie Hutton, CTO of Quantexa, says, “As agentic AI becomes a reality, data literacy becomes a core leadership skill. If you can’t explain where your data comes from, you can’t responsibly deploy AI on top of it. Humans and AI agents will be working side by side much sooner than most realize.”

Social learning by asking “why” questions, meeting security teams that respond to AI security issues, and reviewing data from business operations can help formulate ideas on where AI can deliver sizable benefits. “The fastest path to C-level is by seeking out ‘bet-the-company’ problems,” says Miles Ward, CTO of SADA, an Insight company.

Don’t eliminate formal learning

Many C-level leaders find the job too demanding and time-consuming, and leave formal learning activities as a nice-to-have. Lifelong learners recognize that a 10% commitment to reading, listening, viewing, coursework, and other learning experiences can expand their mindsets and expose them to new concepts. Learning is not just about skill development.

“In a time of rapid innovation, the 70-20-10 rule is inadequate, and that 10% formal education needs to increase,” suggests Cindi Howson, chief data and AI strategy officer at ThoughtSpot. “However, it’s critical to look for the right formal education as executive training is rapidly out of date.”

Howson recommends “vibe learning” with hands-on mini classes and timely summits featuring peer-to-peer network from leaders at the cutting edge of AI innovation.

Other learning opportunities include:

  • Reading books such as must-read titles on digital transformation and CIO-recommended reading lists.
  • Listening to popular CIO podcasts such as CIO Leadership Live, CXOTalk, Technovation with Peter High, and CIO in the Know.
  • Reviewing online learning options such as LinkedIn executive leadership courses and Udemy courses for CIOs.

A bigger commitment is to consider CTO degree programs from academic institutions such as Berkeley, Carnegie Mellon University, Wharton, and others.

C-level roles are not for everyone. In CIO.com’s State of the CIO, 43% of CIOs rated their stress level 8 or higher on a scale of 1-10. So, for those aspiring to C-level roles, make sure to thoroughly understand the role before making it a career objective.

5 key alliance strategies to grow CIO influence

A previous article looked at the risk of the CIO becoming invisible within the organization. This article examines the difference having allies makes, the opportunities that open up with a bottom-up approach that works through the practitioner ranks of the organization, and how to avoid the invisible mistakes that can neutralize an alliance.

1. When support runs short: the costs and risks that land on the CIO

In 2020, Daimler decided to spin off Lab1886, the innovation incubator that had been developing future vehicle and mobility technologies. There was no shortage of talent, but the unit lacked internal allies who would own and champion its projects. Without clear internal connections, projects were never properly handed over or executed, and the organization ended up isolated.

CIOs find themselves in the same situation. When internal support is lacking, talent and effort alone are not enough. Even CIOs at large enterprises say they have had to defend proposals from scratch, or have had other departments’ projects land on them unfiltered, without IT ever getting a chance to identify problems in advance.

When this pattern repeats, a perception forms that the CIO is what slows the organization down. It does nothing to expand the CIO’s influence; instead, the CIO is pushed out of the seat that steers digital strategy and into the role of firefighting an endless stream of problems.

CIOs increasingly recognize that this vicious cycle cannot continue. David Walmsley, CDO and CTO of jewelry company Pandora, explained in an interview with Mark Samuels: “What we stressed from day one of the digital transformation is that we don’t exist to take orders; we exist to provide solid collaboration.”

2. Alliances: the force that strengthens the CIO’s position

CIOs need to escape the structure in which they constantly justify projects or absorb other departments’ tasks. What this requires is not authority but cooperation built on shared understanding; in other words, building alliances inside the organization.

The greatest strength of an alliance is that it is a base of support already secured. It speeds up approvals and reduces the attrition that comes from justifying the same work over and over.

The benefits don’t stop there. Allies act as buffers on the strength of their own credibility, reducing internal friction, and they extend the CIO’s network so that new discussions and opportunities can open up even in rooms the CIO never enters. When the CIO is absent, allies advocate for the initiative and lend it their own credibility.

Ultimately, allies are a core asset that strengthens the CIO’s position, multiplies influence, and prevents excessive attrition.

3. Building alliances bottom-up: trust that scales

The value of allies is clear, but how do you win them? Interests don’t always align, and the conditions that make cooperation easy aren’t always given. Every organization, however, has problems that eat away at budgets and time. If the CIO can help solve those problems, that becomes the starting point of an alliance.

Approaching from the top of the organization often looks like the rational move, but in practice it can be the hardest and most roundabout path. Potential allies with urgent problems exist throughout the organization. A financial controller, for instance, is under pressure from slow closing schedules and inaccurate forecasts, while a procurement lead battles duplicate contracts and error-ridden manual invoice processing every day.

The opportunities are already inside the organization. What’s needed is to find the traces these problems leave behind. Duplicate invoices and spreadsheets shared without controls at the end of every month are classic signs of processes not yet under management, and exactly where IT can deliver quick improvements.

When the CIO solves these practitioners’ problems, the influence spreads through the organization far faster than expected. A small win that clears a bottleneck in procurement gets mentioned in a department meeting, and the story travels all the way to the CFO.

4. How to avoid the invisible mistakes that break alliances

Finding an opening to start a relationship is only the first step. What actually decides an alliance’s success is how you run the initiative. Pay attention to the quiet risk factors that can threaten the continuity of collaboration and send the CIO back to square one.

The first risk arises when an initiative isn’t translated into business language. The topic deserves its own discussion, but the gist is simple: if potential allies don’t fully understand the work, or can’t explain it to colleagues in their own words, the project will never spread.

Another risk is when the CIO presents a finished, ideal solution as-is. If prospective allies have no room to participate, contribute opinions, and leave their mark, they won’t treat the project as their own and won’t engage, however beneficial it may be.

A less visible obstacle is the resource of attention. Time is the scarcest resource in any organization. If a project, or the relationship itself, demands too much time and focus, it becomes a burden in its own right and is hard to sustain.

There is another risk, political but critical: overlooking stakeholders with the power to block IT. Compliance officers, legal counsel, and members of various committees fall into this category. If you don’t identify them early and take in their views, opposition surfaces late, when there is little room left to respond.

These risks are all the more dangerous because most of them are hard to see. Their message is clear: technical excellence alone is not enough, and the process of working together is itself what builds trust. The relationships and communication patterns formed this way become the baseline for future collaboration.

5. A strengthened mission and political capital for the future

For a CIO, having allies means no longer having to drive change alone. Thanks to an established base of support and accumulated political capital, the need to constantly defend IT initiatives disappears.

This support network also makes the CIO’s strategy more resilient. The strategy is less exposed to any particular org chart, because it moves on business priorities the organization shares.

Ultimately, the CIO gains broader strategic room. Instead of burning energy firefighting an endless stream of problems, the CIO can focus on the real job: designing digital strategy beyond tactical tasks.

This article was written by Alberto Bellé, principal analyst at Foundry.

“Manage shadow AI, don’t block it”: 6 governance strategies for CIOs

As employees experiment with generative AI tools on their own, CIOs are once again facing a familiar challenge: shadow AI. These experiments are often well-intentioned innovation, but they can create serious risks around data privacy, regulatory compliance, and security.

According to 1Password’s 2025 annual report, “The Access-Trust Gap,” 43% of employees use AI apps for work on personal devices, and 25% use unapproved AI apps at work.

Despite these risks, experts don’t see shadow AI as something to eliminate outright. They see it as something to understand, steer, and manage. Here are six strategies to help CIOs encourage responsible experimentation while keeping sensitive data safe.

1. 실험을 허용하는 명확한 가드레일을 세워라

섀도우 AI를 관리하는 첫 단계는 허용되는 것과 허용되지 않는 것을 명확히 구분하는 일이다. 웨스트 쇼어 홈(West Shore Home)의 CTO 대니 피셔는 CIO에게 AI 도구를 승인, 제한, 금지 3가지 단순한 범주로 분류할 것을 권고한다.

피셔는 “승인된 도구는 검증을 거쳤고 IT가 지원하는 도구이다”라며, “제한된 도구는 더미 데이터만 사용하는 등 명확한 한계를 둔 통제된 공간에서 사용할 수 있다. 일반에 공개됐거나 암호화되지 않은 AI 시스템 같은 금지 도구는 네트워크나 API 수준에서 차단해야 한다”라고 강조했다. 또한, 내부 오픈AI 워크스페이스나 보안 API 프록시 같은 안전한 테스트 공간을 각 AI 활용 유형에 매칭하면 팀이 회사 데이터를 위험에 빠뜨리지 않고 자유롭게 실험할 수 있다고 덧붙였다.

SAP 자회사 리닉스(LeanIX)의 수석 엔터프라이즈 아키텍트 제이슨 테일러는 발전 속도가 빠른 오늘날 AI 환경에서는 명확한 규칙이 필수라고 강조했다. 테일러는 “어떤 도구와 플랫폼이 승인됐고 승인되지 않았는지 분명히 해야 한다”라며, “어떤 시나리오와 사용례가 승인 대상인지, 직원이 AI를 사용할 때 회사 데이터와 정보를 어떻게 다뤄야 하는지, 예를 들어 복사·붙여넣기나 시스템 간 심층 연동이 아니라 일회성 업로드만 허용되는지 등을 명확하게 알려야 한다”라고 설명했다.

테일러는 어떤 유형의 데이터가 어떤 상황에서 사용해도 되는지, 사용하면 안 되는지 설명한 명확한 목록을 만드는 작업도 필요하다고 덧붙였다. 현대적인 데이터 손실 방지(Data Loss Prevention, DLP) 도구는 데이터를 자동으로 찾아 분류하고, 누가 어떤 데이터에 접근할 수 있는지 최소 권한과 제로 트러스트 원칙을 강제하는 데 도움이 될 수 있다.

뱁슨 칼리지(Babson College) CIO 패티 파트리아는 노코드·로코드 AI 도구와 바이브 코딩 플랫폼에 대해 CIO가 별도의 구체적인 가드레일을 세우는 것도 중요하다고 지적했다. 파트리아는 “이런 도구는 직원이 아이디어를 빠르게 프로토타이핑하고 AI 기반 솔루션을 실험하도록 도와주지만, 독점 데이터나 민감한 데이터와 연결할 때는 독특한 위험을 만들어낸다”라고 말했다.

이런 문제를 해결하려면 직원이 스스로 안전하게 실험할 수 있게 해주는 보안 계층을 구축하되, AI 도구를 민감한 시스템에 연결하려 할 때는 추가적인 검토와 승인을 요구해야 한다. 파트리아는 “예를 들어 최근 직원이 어떤 경우에 보안팀에 애플리케이션 검토를 요청해야 하는지, 어떤 경우에 이런 도구를 자율적으로 사용할 수 있는지에 대한 명확한 내부 지침을 마련해 혁신과 데이터 보호를 모두 최우선으로 두고 있다”라고 말했다. 또 “위험 수준이 너무 높다고 판단해 사용을 권장하지 않는 도구와 조직이 공식적으로 지원하는 AI 도구 목록도 유지하고 있다”라고 덧붙였다.

2. 지속적인 가시성과 인벤토리 추적을 유지하라

보이지 않는 것은 관리할 수 없다. 전문가는 정확하고 최신 상태의 AI 도구 인벤토리를 유지하는 일이 섀도우 AI에 대응하는 가장 중요한 방어 수단 가운데 하나라고 말한다.

피셔는 “가장 중요한 것은 직원이 사용 중인 도구를 숨기지 않고 편하게 공유하도록 만드는 문화다”라고 강조했다. 피셔가 이끄는 팀은 분기별 설문조사와 직원이 사용하는 AI 도구를 직접 등록하는 셀프서비스 레지스트리를 함께 운영한다. 이후 IT 부서는 네트워크 스캔과 API 모니터링을 통해 해당 등록 정보를 검증한다.

굿즈 제조 기업 뱀코(Bamko)의 IT 담당 부사장 아리 해리슨은 자신이 이끄는 팀이 가시성을 유지하기 위해 계층적 접근 방식을 취하고 있다고 밝혔다.

해리슨은 “구글 워크스페이스의 연결 앱 보기에서 데이터를 가져와 SIEM 시스템으로 이벤트를 보내면서 연결된 애플리케이션의 실시간 레지스트리를 유지하고 있다”라며, “마이크로소프트 365도 비슷한 텔레메트리를 제공하고, 필요한 곳에서는 CASB(Cloud Access Security Broker) 도구를 활용해 가시성을 보완할 수 있다”라고 설명했다.

이런 계층적 접근 방식 덕분에 뱀코는 어떤 AI 도구가 기업 데이터를 다루는지, 누가 승인했는지, 어떤 권한을 갖고 있는지 한눈에 파악할 수 있다.

iPaaS 업체 부미(Boomi)의 제품 담당 수석 부사장 매니 길은 수작업 감사만으로는 이제 충분하지 않다고 주장한다. 길은 “효과적인 인벤토리 관리는 정기적인 감사 수준을 넘어 전체 데이터 생태계에 대한 지속적이고 자동화된 가시성이 필요하다”라며, 승인된 AI 에이전트이든 다른 도구에 내장된 AI 에이전트이든 모든 AI 에이전트가 하나의 중앙 플랫폼을 통해 데이터를 주고받도록 하는 것이 좋은 거버넌스 정책이라고 강조했다.

엔드포인트 보안 업체 태니엄(Tanium)의 최고 보안 자문역 팀 모리스는 모든 기기와 애플리케이션 전반에 걸친 지속적인 탐지가 핵심이라는 데 동의한다. 모리스는 “AI 도구는 하룻밤 사이에 등장할 수 있다”라고 지적했다. 또 “새로운 AI 앱이나 브라우저 플러그인이 업무 환경에 나타나면 즉시 파악할 수 있어야 한다”라고 덧붙였다.

3. 데이터 보호와 접근 통제를 강화하라

섀도우 AI로 인한 데이터 노출을 막기 위해 전문가가 공통으로 지적하는 기반은 데이터 손실 방지(DLP), 암호화, 최소 권한 원칙이다.

피셔는 “승인되지 않은 도메인으로 개인정보, 계약서, 소스 코드를 업로드하는 행위를 DLP 규칙으로 차단하라”라고 말했다. 또 조직 밖으로 나가기 전에 민감한 데이터를 마스킹하고, 승인된 AI 도구에서는 모든 프롬프트와 응답을 추적할 수 있도록 로깅과 감사 기록을 활성화할 것을 권고했다.

해리슨 역시 이런 접근법을 지지하면서, 뱀코가 실제 현장에서 가장 중요하게 보는 보안 통제는 ▲민감한 데이터가 외부로 나가는 것을 막기 위한 아웃바운드 DLP와 콘텐츠 검사 ▲서드파티 권한을 최소 권한으로 유지하기 위한 OAuth 거버넌스 ▲기밀 데이터를 자사 생산성 제품군 안에서 승인된 AI 커넥터에만 업로드하도록 제한하는 접근 제어라고 설명했다.

또한, 문서나 이메일에 대한 읽기·쓰기 권한처럼 범위가 넓은 권한은 고위험으로 분류해 명시적인 승인을 요구하는 반면, 읽기 전용처럼 범위가 좁은 권한은 더 빠르게 승인하도록 운영하고 있다. 해리슨은 “목표는 일상적인 창의적 작업을 안전하게 허용하면서, 한 번의 클릭으로 AI 도구에 의도보다 더 많은 권한을 부여해 버릴 가능성을 줄이는 것이다”라고 말했다.

테일러는 보안 통제가 모든 환경에서 일관되게 작동해야 한다고 강조했다. 테일러는 “저장 상태, 사용 중, 전송 중인 모든 민감 데이터를 암호화하고, 데이터 접근 권한에는 최소 권한과 제로 트러스트 정책을 적용하며, DLP 시스템이 민감 데이터를 스캔·태깅·보호할 수 있게 하라”라고 권고했다. 또, 이런 통제가 데스크톱, 모바일, 웹 환경에서 똑같이 동작하는지 확인하고, 새로운 상황이 발생할 때마다 점검과 업데이트를 반복해야 한다고 덧붙였다.

4. 위험 허용 범위를 명확히 정하고 소통하라

위험 허용 범위를 정하는 일은 통제 못지않게 커뮤니케이션의 문제이기도 하다. 피셔는 데이터 분류 체계에 위험 허용 범위를 연계하라고 조언한다. 피셔가 이끄는 팀은 단순한 색상 체계를 사용해 마케팅 콘텐츠처럼 위험이 낮은 활동에는 녹색을, 승인된 도구만 사용해야 하는 내부 문서에는 노란색을, AI 시스템과 함께 사용할 수 없는 고객·재무 데이터에는 빨간색을 부여한다.

모리스는 “위험 허용 범위는 비즈니스 가치와 규제 의무를 기반으로 설정해야 한다”라고 말했다. 모리스는 피셔와 마찬가지로 AI 활용을 허용, 승인 필요, 금지 같은 명확한 범주로 나누고, 이 프레임워크를 경영진 브리핑, 신규 입사자 온보딩, 내부 포털을 통해 꾸준히 알릴 것을 권고한다.

뱁슨 칼리지의 AI 거버넌스 위원회(AI Governance Committee)는 이런 과정에서 핵심 역할을 한다. 파트리아는 “잠재적 위험이 포착되면 이를 위원회 안건으로 올려 논의한 뒤, 완화 전략을 함께 마련한다”라고 밝혔다. 또 “일부 경우에는 직원에게는 도구 사용을 차단하되 강의실에서는 허용하기로 결정하기도 한다. 이런 균형 덕분에 혁신을 억누르지 않으면서도 위험을 관리할 수 있다”라고 덧붙였다.

5. 투명성과 신뢰 문화를 키워라

섀도우 AI를 제대로 관리하는 데 핵심은 투명성이다. 직원은 어떤 부분이 왜 모니터링되는지 알 수 있어야 한다.

피셔는 “투명성이란 무엇이 허용되고 무엇이 모니터링 대상인지, 또 그 이유가 무엇인지 직원이 알고 있는 상태를 의미한다”라며, “회사 인트라넷에 AI 거버넌스 방식을 공개하고, 바람직한 AI 사용 사례와 위험한 사용 사례를 실제 예시로 함께 보여주라”라고 조언했다. 또 “목적은 사람을 잡아내는 데 있지 않다. AI를 활용하는 일이 안전하고 공정하다는 믿음을 심어 주는 것이 목적이다”라고 강조했다.

테일러는 공식적으로 승인한 AI 서비스 목록을 공개하고 항상 최신 상태로 유지하라고 권고했다. 또한, “아직 제공되지 않는 기능을 언제, 어떻게 제공할지에 대한 로드맵을 분명히 밝혀라. 예외 승인이나 새로운 도구 도입을 요청할 수 있는 절차도 마련하라”라고 덧붙였다.

이런 개방성은 AI 거버넌스가 혁신을 가로막기 위한 것이 아니라 지원하기 위한 장치라는 점을 보여준다. 파트리아는 기술적 통제와 명확한 정책뿐 아니라 AI 거버넌스 위원회 같은 전담 거버넌스 조직을 두면 섀도우 AI 위험을 관리하는 조직 역량을 크게 높일 수 있다고 말했다.

파트리아는 “딥시크나 파이어플라이즈 같은 도구에 대한 우려처럼 잠재적 위험이 나타나면 완화 전략을 함께 마련한다”라며, 이런 거버넌스 조직이 위험을 검토하고 조치할 뿐 아니라, 의사결정 내용과 그 이유를 설명해 투명성과 공동 책임 문화를 만드는 데도 기여한다고 덧붙였다.

모리스도 같은 의견이다. 모리스는 “투명성이란 예상치 못한 일이 없다는 뜻이다”라고 강조했다. 이어 “어떤 AI 도구가 승인되어 있는지, 의사결정이 어떻게 이뤄지는지, 질문이나 새로운 아이디어가 있을 때 어디로 가야 하는지 직원이 알고 있어야 한다”라고 설명했다.

6. 역할 기반의 지속적인 AI 교육을 구축하라

교육은 AI 도구의 우발적 오용을 막는 가장 효과적인 방법 가운데 하나이다. 핵심은 교육이 짧고, 업무와 관련성이 높고, 반복적으로 이뤄지도록 만드는 것이다.

피셔는 “교육은 짧고 시각적이며 역할별로 설계하라. 긴 슬라이드 자료는 피하고, 대신 사례 중심 스토리, 짧은 데모, 명확한 예시를 활용하라”라고 조언했다.

뱁슨 칼리지는 매년 실시하는 정보보안 교육에 AI 위험 인식을 포함하고, 새로운 도구와 떠오르는 위험에 대한 소식을 정기 뉴스레터로 발송한다. 파트리아는 “직원이 승인된 AI 도구와 새로운 위험을 이해하도록 정기 교육을 제공하고, 부서별 AI 챔피언에게는 AI 도입의 이점과 잠재적 함정을 모두 강조하면서 대화를 촉진하고 실제 경험을 공유하도록 권장하고 있다”라고 밝혔다.

테일러는 교육을 브라우저 안에 녹여 직원이 사용하는 도구 안에서 곧바로 베스트 프랙티스를 학습하도록 할 것을 권고했다. 테일러는 “웹 브라우저에 복사·붙여넣기를 하거나 프레젠테이션 파일을 끌어다 놓는 행위는 겉으로 보기에는 별 문제 없어 보이지만, 민감한 데이터가 이미 자기 조직의 생태계를 떠난 뒤에야 그 위험을 깨닫게 되는 경우가 많다”라고 지적했다.

길은 교육이 책임 있는 AI 활용과 성과를 연결해야 한다고 지적했다. 길은 “직원은 규정 준수와 생산성이 함께 간다는 점을 이해해야 한다. 승인된 도구는 섀도우 AI에 비해 더 빠른 결과, 더 나은 데이터 정확도, 더 적은 보안 사고를 제공한다”라고 강조했다. 또한, “역할 기반의 지속적인 교육을 통해 가드레일과 거버넌스가 데이터와 효율성을 모두 보호해 AI가 위험을 만들기보다 업무 흐름을 가속하는 수단이라는 점을 보여줄 수 있다”라고 설명했다.

책임 있는 AI 활용이 비즈니스 경쟁력

궁극적으로 섀도우 AI 관리는 위험을 줄이는 데서 그치지 않고 책임 있는 혁신을 뒷받침하는 일이다. 신뢰와 커뮤니케이션, 투명성에 집중하는 CIO는 잠재적 문제를 경쟁 우위로 바꿀 수 있다.

테일러는 “사용자가 원하는 것을 제공하고, 특히 섀도우 AI 방식을 택할 때 오히려 더 많은 불편이 따른다면 대체로 시스템에 거스르려 하지 않는다”라고 말했다.

모리스도 같은 의견이다. 모리스는 “목표는 사람을 겁주려는 것이 아니라 행동하기 전에 한 번 더 생각하게 만드는 것이다”라며, “승인된 경로가 쉽고 안전하다는 사실을 알면 자연스럽게 그 길을 선택하게 된다”라고 덧붙였다.

CIO가 지향해야 할 미래는 책임 있는 AI 활용이 단순한 규정 준수를 넘어 비즈니스에 도움이 되는 일이라는 인식 아래, 사람이 안전하게 혁신하고 신뢰 속에서 자유롭게 실험하며 데이터를 계속 보호할 수 있는 환경이다.
dl-ciokorea@foundryco.com

6 strategies for CIOs to effectively manage shadow AI

As employees experiment with gen AI tools on their own, CIOs are facing a familiar challenge with shadow AI. Although it’s often well-intentioned innovation, it can create serious risks around data privacy, compliance, and security.

According to 1Password’s 2025 annual report, The Access-Trust Gap, shadow AI increases an organization’s risk as 43% of employees use AI apps to do work on personal devices, while 25% use unapproved AI apps at work.

Despite these risks, experts say shadow AI isn’t something to do away with completely. Rather, it’s something to understand, guide, and manage. Here are six strategies that can help CIOs encourage responsible experimentation while keeping sensitive data safe.

1. Establish clear guardrails with room to experiment

Managing shadow AI begins with getting clear on what’s allowed and what isn’t. Danny Fisher, chief technology officer at West Shore Home, recommends that CIOs classify AI tools into three simple categories: approved, restricted, and forbidden.

“Approved tools are vetted and supported,” he says. “Restricted tools can be used in a controlled space with clear limits, like only using dummy data. Forbidden tools, which are typically public or unencrypted AI systems, should be blocked at the network or API level.”

Matching each type of AI use with a safe testing space, such as an internal OpenAI workspace or a secure API proxy, lets teams experiment freely without risking company data, he adds.

Jason Taylor, principal enterprise architect at LeanIX, an SAP company, says clear rules are essential in today’s fast-moving AI world.

“Be clear which tools and platforms are approved and which ones aren’t,” he says. “Also be clear which scenarios and use cases are approved versus not, and how employees are allowed to work with company data and information when using AI like, for example, one-time upload as opposed to cut-and-paste or deeper integration.”

Taylor adds that companies should also create a clear list that explains which types of data are or aren’t safe to use, and in what situations. A modern data loss prevention tool can help by automatically finding and labeling data, and enforcing least-privilege or zero-trust rules on who can access what.

Patty Patria, CIO at Babson College, notes it’s also important for CIOs to establish specific guardrails for no-code/low-code AI tools and vibe-coding platforms.

“These tools empower employees to quickly prototype ideas and experiment with AI-driven solutions, but they also introduce unique risks when connecting to proprietary or sensitive data,” she says.

To deal with this, Patria says companies should set up security layers that let people experiment safely on their own but require extra review and approval whenever someone wants to connect an AI tool to sensitive systems.

“For example, we’ve recently developed clear internal guidance for employees outlining when to involve the security team for application review and when these tools can be used autonomously, ensuring both innovation and data protection are prioritized,” she says. “We also maintain a list of AI tools we support, and which we don’t recommend if they’re too risky.”

2. Maintain continuous visibility and inventory tracking

CIOs can’t manage what they can’t see. Experts say maintaining an accurate, up-to-date inventory of AI tools is one of the most important defenses against shadow AI.

“The most important thing is creating a culture where employees feel comfortable sharing what they use rather than hiding it,” says Fisher. His team combines quarterly surveys with a self-service registry where employees log the AI tools they use. IT then validates those entries through network scans and API monitoring.

Ari Harrison, VP of IT at branding manufacturer Bamko, says his team takes a layered approach to maintaining visibility.

“We maintain a living registry of connected applications by pulling from Google Workspace’s connected-apps view and piping those events into our SIEM [security information and event management system],” he says. “Microsoft 365 offers similar telemetry, and cloud access security broker tools can supplement visibility where needed.”
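The layered inventory approach Harrison describes can be sketched as a small normalization step: connected-app records are flattened into events a SIEM can ingest, with broad permissions flagged for review. The field and scope names below are illustrative assumptions, not the actual Google Workspace or Microsoft 365 schemas.

```python
# Minimal sketch: normalize connected-app records (as exported from an admin
# console) into flat SIEM-style events. Field names are placeholders.
import json
from datetime import datetime, timezone

def to_siem_event(app: dict) -> dict:
    """Flatten one connected-app record into a SIEM event dict."""
    scopes = app.get("scopes", [])
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": "connected_app_seen",
        "app_name": app["name"],
        "vendor": app.get("vendor", "unknown"),
        "scopes": sorted(scopes),
        # Broad (write) scopes are worth flagging for explicit review.
        "high_risk": any("write" in s for s in scopes),
    }

apps = [
    {"name": "NotesSummarizer", "vendor": "acme", "scopes": ["drive.read", "mail.write"]},
    {"name": "GrammarHelper", "scopes": ["docs.read"]},
]
events = [to_siem_event(a) for a in apps]
print(json.dumps(events, indent=2))
```

In practice the input would come from the workspace provider's reporting API and the output would be shipped to the SIEM's ingestion endpoint; the point of the sketch is the normalization in between.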

That layered approach gives Bamko a clear map of which AI tools are touching corporate data, who authorized them, and what permissions they have.

Mani Gill, SVP of product at cloud-based iPaaS Boomi, argues that manual audits are no longer enough.

“Effective inventory management requires moving beyond periodic audits to continuous, automated visibility across the entire data ecosystem,” he says, adding that good governance policies ensure all AI agents, whether approved or built into other tools, send their data in and out through one central platform. This gives organizations instant, real-time visibility into what each agent is doing, how much data it’s using, and whether it’s following the rules.

Tanium chief security advisor Tim Morris agrees that continuous discovery across every device and application is key. “AI tools can pop up overnight,” he says. “If a new AI app or browser plugin appears in your environment, you should know about it immediately.”

3. Strengthen data protection and access controls

When it comes to securing data from shadow AI exposure, experts point to the same foundation: data loss prevention (DLP), encryption, and least privilege.

“Use DLP rules to block uploads of personal information, contracts, or source code to unapproved domains,” Fisher says. He also recommends masking sensitive data before it leaves the organization, and turning on logging and audit trails to track every prompt and response in approved AI tools.
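Fisher's rule, blocking uploads of sensitive content to unapproved domains, can be sketched roughly as a content check plus a destination allowlist. The patterns and domain names below are placeholders for illustration, not a real DLP product's configuration.

```python
# Minimal DLP-style sketch: scan outbound text for sensitive patterns and
# check the destination against an allowlist before permitting an upload.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}
APPROVED_DOMAINS = {"ai.internal.example.com"}

def allow_upload(text: str, destination: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns). Sensitive content may only go
    to approved domains; matches are returned either way, for logging."""
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]
    if destination not in APPROVED_DOMAINS and hits:
        return False, hits
    return True, hits

ok, reasons = allow_upload("my SSN is 123-45-6789", "chat.public.example.com")
print(ok, reasons)  # False ['ssn']
```

Returning the matched pattern names even on allowed uploads supports the logging and audit trails Fisher recommends for approved tools.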

Harrison echoes that approach, noting that Bamko focuses on the security controls that matter most in practice: Outbound DLP and content inspection to prevent sensitive data from leaving; OAuth governance to keep third-party permissions to least privilege; and access limits that restrict uploads of confidential data to only approved AI connectors within its productivity suite.

In addition, the company treats broad permissions, such as read and write access to documents or email, as high-risk and requires explicit approval, while narrow, read-only permissions can move faster, Harrison adds.

“The goal is to allow safe day-to-day creativity while reducing the chance of a single click granting an AI tool more power than intended,” he says.

Taylor adds that security must be consistent across environments. “Encrypt all sensitive data at rest, in use, and in motion, employ least-privilege and zero-trust policies for data access permissions, and ensure DLP systems can scan for, tag, and protect sensitive data.”

He notes that companies should ensure these controls work the same on desktop, mobile, and web, and keep checking and updating them as new situations come up.

4. Clearly define and communicate risk tolerance

Defining risk tolerance is as much about communication as it is about control. Fisher advises CIOs to tie risk tolerance to data classification instead of opinion. His team uses a simple color-coded system: green for low-risk activities, such as marketing content; yellow for internal documents that must use approved tools; and red for customer or financial data that can’t be used with AI systems.
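A scheme like Fisher's maps naturally onto a small policy lookup tied to data classification. The class names below are illustrative assumptions, and an unrecognized class deliberately falls back to the most restrictive tier.

```python
# Sketch of a color-coded, classification-driven AI-use policy.
# Data classes and rules are illustrative, not an actual policy.
POLICY = {
    "marketing_content": ("green", "any tool"),
    "internal_document": ("yellow", "approved tools only"),
    "customer_data":     ("red", "no AI systems"),
    "financial_data":    ("red", "no AI systems"),
}

def ai_use_rule(data_class: str) -> tuple[str, str]:
    # Unknown classes default to the most restrictive tier.
    return POLICY.get(data_class, ("red", "no AI systems"))

print(ai_use_rule("internal_document"))  # ('yellow', 'approved tools only')
print(ai_use_rule("employee_records"))   # ('red', 'no AI systems')
```

The design choice worth noting is the default: tying risk tolerance to classification only works if anything unclassified is treated as red until someone classifies it.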

“Risk tolerance should be grounded in business value and regulatory obligation,” says Morris. Like Fisher, Morris recommends classifying AI use into clear categories, what’s permitted, what needs approval, and what’s prohibited, and communicating that framework through leadership briefings, onboarding, and internal portals.

Patria says Babson’s AI Governance Committee plays a key role in this process. “When potential risks emerge, we bring them to the committee for discussion and collaboratively develop mitigation strategies,” she says. “In some cases, we’ve decided to block tools for staff but permit them for classroom use. That balance helps manage risk without stifling innovation.”

5. Foster transparency and a culture of trust

Transparency is the key to managing shadow AI well. Employees need to know what’s being monitored and why.

“Transparency means employees always know what’s allowed, what’s being monitored, and why,” Fisher says. “Publish your governance approach on the company intranet and include real examples of both good and risky AI use. It’s not about catching people. You’re building confidence that utilizing AI is safe and fair.”

Taylor recommends publishing a list of officially sanctioned AI offerings and keeping it updated. “Be clear about the roadmap for delivering capabilities that aren’t yet available,” he says, and provide a process to request exceptions or new tools. That openness shows governance exists to support innovation, not hinder it.

Patria says in addition to technical controls and clear policies, establishing dedicated governance groups, like the AI Governance Committee, can greatly enhance an organization’s ability to manage shadow AI risks.

“When potential risks emerge, such as concerns about tools like DeepSeek and Fireflies.AI, we collaboratively develop mitigation strategies,” she says.

This governance group not only looks at and handles risks, but explains its decisions and the reasons behind them, helping create transparency and shared responsibility, Patria adds.

Morris agrees. “Transparency means there are no surprises. Employees should know which AI tools are approved, how decisions are made, and where to go with questions or new ideas,” he says.

6. Build continuous, role-based AI training

Training is one of the most effective ways to prevent accidental misuse of AI tools. The key is to keep it succinct, relevant, and recurring.

“Keep training short, visual, and role-specific,” says Fisher. “Avoid long slide decks and use stories, quick demos, and clear examples instead.”

Patria says Babson integrates AI risk awareness into annual information security training, and sends periodic newsletters about new tools and emerging risks.

“Routine training sessions are offered to ensure employees understand approved AI tools and emerging risks, while departmental AI champions are encouraged to facilitate dialogue and share practical experiences, highlighting both the benefits and potential pitfalls of AI adoption,” she adds.

Taylor recommends embedding training in-browser, so employees learn best practices directly in the tools they’re using. “Cutting and pasting into a web browser or dragging and dropping a presentation seems innocuous until your sensitive data has left your ecosystem,” he says.

Gill notes that training should connect responsible use with performance outcomes.

“Employees need to understand that compliance and productivity work together,” he says. “Approved tools deliver faster results, better data accuracy, and fewer security incidents compared with shadow AI. Role-based, ongoing training can demonstrate how guardrails and governance protect both data and efficiency, ensuring that AI accelerates workflows rather than creating risk.”

Responsible AI use is good business

Ultimately, managing shadow AI isn’t just about reducing risk, it’s about supporting responsible innovation. CIOs who focus on trust, communication, and transparency can turn a potential problem into a competitive advantage.

“People generally don’t try and buck the system when the system is giving them what they’re looking for, especially when there’s more friction for the user in taking the shadow AI approach,” says Taylor.

Morris concurs. “The goal isn’t to scare people but to make them think before they act,” he says. “If they know the approved path is easy and safe, they’ll take it.”

That’s the future CIOs should work toward: a place where people can innovate safely, feel trusted to experiment, and keep data protected because responsible AI use isn’t just compliance, it’s good business.

The AI in oil: GS Caltex empowers LOB teams to build agents

Caught between change and stability, many companies find themselves hesitating on how to square the two. The pace of change is increasing in the age of AI, and the weight of making inspired choices has only become more critical. GS Caltex, one of Korea’s leading refining companies, faced the same dilemma and recently embraced a new guiding principle of good risk taking — a phrase reportedly often heard in GS Caltex meetings, and initially proposed by company CEO Hur Sae-hong. “Once the word ‘good’ was added to ‘risk-taking,’ a culture began to spread where people are willing to attempt any challenge,” says CIO, CDO, and DX Center head Lee Eunjoo.

Amid growing uncertainties around crude oil prices and product demand, intensifying competition over production scale, and demographic decline, the value of good risk taking is pushing the company to pursue new opportunities and innovation. And a changing mindset is reshaping the organization from within.

The AI platform changing the enterprise

Even without any top-down mandate, it’s common at GS Caltex to see not just IT but LOB teams in production, sales, finance, legal, PR, and HR building and using AI agents in their day-to-day work. Finance, for instance, recently built an FAQ agent and asked Lee’s team to review it. “It’s incredibly rewarding to see employees actively using the new technologies provided by the DX Center,” Lee says.

So far, they’ve created more than 50 agents, including ones that support pre-job safety briefings for partner company staff, review crude oil purchase contracts, automate a complex medical expense reimbursement process, and automatically classify and analyze gas station customer feedback.

All of these agents were developed on AiU, the company’s in-house gen AI service platform launched in June this year, which combines AI with yu, the Korean word for oil, and is also a play on “AI for you,” reflecting its role as AI tailored to each employee.

Lee says AiU is the clearest expression of the company’s approach to transformation. “It’s not just about DX anymore but DAX, combining digital with AI transformation,” she says. “From our production sites to headquarters, we’re rolling out initiatives that let every employee experience it all side by side. That’s how we’re reshaping ourselves into an energy company that uses AI broadly and with confidence.”

A secret to its rapid success is that no one feels pressured to build a perfect agent. “People are much more willing to try things and experiment,” says Lee. From the DX Center’s standpoint, that mindset has made it possible to support a growing number of AI projects with a relatively small team. “Plus, the AiU playground lets employees build and test agents themselves, which makes AI feel far more approachable and familiar in their day-to-day work,” she adds.

An AI agent platform might sound like something only developers can use, but AiU is designed so non-experts can easily work with it. The experience isn’t very different from ChatGPT, and GS Caltex deliberately embedded AiU alongside the core business systems employees check every day, so they’d naturally encounter and use AI in their daily workflows. Even if they don’t build agents themselves, employees can still ask the AI questions using internal company data, and search across both external information and internal systems at once.

Only a few months after AiU officially launched, around 85% of employees are regular users, and nearly the entire workforce has tried it at least once. “Most of our production and technical staff work in a mobile-only environment without desktops,” Lee says. “The fact that 95% of them have already used AiU shows just how fast the platform is spreading.”

Sowing seeds of success

AiU drew strong interest from employees even during its pilot stage. The DX Center began discussing AI service adoption in 2023, and in 2024, the team built a pilot service on AWS in just a few days. Although it was an early version with only basic UI, more than 300 employees participated and shared the features and requirements they needed. This underscored just how many people were eager to bring AI into their work.

Through this pilot, the DX Center was able to clearly identify what kinds of problems employees wanted to solve with AI, and which capabilities they needed most. The team then considered whether to adopt an external solution or develop one in house. In the end, they chose to build on MISO, the AI transformation platform developed by the GS Group, and add GS Caltex–specific capabilities on top. The entire development took about six months.

In designing AiU’s technical architecture, Lee focused most heavily on minimizing dependence on any single LLM. The platform supports multiple models that employees can choose from, including OpenAI and Anthropic.

“AI moves incredibly fast, so we built the system in a way that lets us easily plug in better technologies as they come along,” she says. “The AI layer will keep changing, but the internal data and applications underneath it will remain our core assets, which is why we’ve focused on strengthening the underlying infrastructure. That’s where our DAX philosophy — advancing digital and AI transformation together — comes into play.”
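The model-agnostic design Lee describes can be sketched as a thin routing layer: application code depends on one interface, and providers plug in or swap out behind it. The provider classes below are stubs standing in for real OpenAI and Anthropic clients, not actual SDK calls.

```python
# Sketch of a provider-agnostic LLM layer: callers depend on one interface,
# and concrete providers can be added or replaced without touching app code.
from typing import Protocol

class LLMProvider(Protocol):
    def complete(self, prompt: str) -> str: ...

class StubOpenAI:
    def complete(self, prompt: str) -> str:
        return f"[openai] {prompt}"

class StubAnthropic:
    def complete(self, prompt: str) -> str:
        return f"[anthropic] {prompt}"

class ModelRouter:
    """Registry that lets users pick a model; new providers plug in freely."""
    def __init__(self) -> None:
        self._providers: dict[str, LLMProvider] = {}

    def register(self, name: str, provider: LLMProvider) -> None:
        self._providers[name] = provider

    def complete(self, model: str, prompt: str) -> str:
        return self._providers[model].complete(prompt)

router = ModelRouter()
router.register("openai", StubOpenAI())
router.register("anthropic", StubAnthropic())
print(router.complete("anthropic", "summarize this contract"))
```

The point of the pattern is the one Lee makes: the AI layer keeps changing, so the stable asset is the interface and the data behind it, not any single model binding.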

But AiU has done more than speed up AI adoption. It’s also put new life into existing systems. GS Caltex already had an internal enterprise search platform, but over time, its accuracy and usability declined, and usage dropped. AiU stepped in to augment that system with AI. Employees can now search M365 documents, work rules, and HR information in one go, and have the results summarized for them by the AI.

“All we really did was layer AI on top of what we already had to make it a little easier to use,” Lee says. “But in the end, that AI layer ended up reviving a service that was close to being forgotten.”

The growth engines behind the projects

Rolling out and scaling new IT technologies like AI across an entire organization isn’t easy. It’s common to see transformation stall at the slogan stage, held back by resistance to new tools or the simple reality that people are too busy to change how they work.

GS Caltex, however, has avoided treating DX as a one-off initiative. Instead, the company has built three pillars to sustain company-wide change over the long term: culture, performance management, and education.

The first step was to build a bottom-up DX culture. Traditional IT projects often begin with large-scale planning, writing RFPs, and selecting external vendors — a process so long that customer needs frequently change before anything goes live.

GS Caltex chose a different path: a fast-execution model focused on solving customer needs in real time. Even a small app or a single dashboard is recognized as DX, and each attempt is treated as valuable. One example is an app that automatically collects and organizes external news, built by a frontline business team, not the IT department.

As these small wins accumulated, a voluntary culture of digital innovation took root. Since the establishment of the DX Center in 2019, GS Caltex has carried out hundreds of projects this way.

Behind this transformation is a high level of organizational acceptance. No matter how well something is built, if colleagues don’t respond favorably, it doesn’t advance. That hasn’t been a problem at GS Caltex, though, largely due to the embedded good risk taking philosophy.

“DX inevitably involves a certain level of risk,” says Lee. “For good risk taking to really work, you need to understand the level of risk and have leaders actively backing it. We have that kind of culture in place.”

After joining GS Caltex, Lee learned a new approach to positive communication. Rather than focusing on fixing problems, the company emphasizes recognizing small achievements, celebrating them together, and then building on that foundation to find areas to improve. “I’ve personally experienced the value of a positive feedback culture,” she says. “A culture that openly recognizes achievements has become a natural driving force encouraging frontline employees to participate in DX.”

This philosophy has been embedded into reward and performance management systems, including a performance innovation committee, which selects outstanding DX projects initiated by business teams and presents awards. And presentations are delivered not by team leaders but by the frontline employees who actually led the work. The monthly selected cases are then published on the company’s internal website, making sure their contributions are visibly acknowledged.

These practices give other employees confidence to do the same, which fuels wider voluntary participation. The committee also actively shares failure cases. By openly discussing what was attempted in each project and what could be improved, the company aims to turn failure into an opportunity for learning.

Lee says that GS Caltex only recognizes outcomes that can be proven in financial terms. Common IT metrics such as conversion rates or click-through rates, often used as proxy indicators, aren’t treated as final measures of success. Instead, the company tracks more meaningful indicators such as productivity gains that drive innovation, cost reductions, and improvements in customer satisfaction. These results are all centrally managed through the company-wide performance management system.

But it’s education that the DX Center prioritizes most. Rather than relying on a small group of experts, GS Caltex has chosen a strategy of cultivating hundreds of frontline DX specialists, and it’s seeing strong results. The more business-side DX experts there are who can use digital tools to directly solve on-site problems, the faster digital adoption spreads. Once a technology takes hold in the field, the DX organization provides the necessary development environment and additional support.

This training initiative, called the digital academy, runs as full-day programs ranging from a single day up to three months. It focuses on reskilling and deepening professional expertise to develop DX talent. The curriculum includes low-code developer tracks and in-house DX expert courses, enabling frontline employees to learn technologies themselves and apply them directly to their work. Topics include RPA, Tableau, Python, AI, and data science. Most notably in recent months, every executive has gone through gen AI training themselves, setting the tone from the top and actively championing a culture of continuous learning.

From IT support to proactive DX engine

Two years into her tenure, Lee is now reimagining how DX governance works. Historically, the DX organization operated in reactive mode, fielding requests from business units as they came in. Now, it’s flipping the script. That means taking the lead on company-wide DX priorities, vetting technologies for maturity and feasibility, and consolidating redundant projects.

One clear target is to streamline the system portfolio. Lee also plans to retire underutilized systems and those where operating costs outweigh the value they deliver, cutting waste while boosting efficiency.

At the same time, GS Caltex is leaning into global outsourcing. The company is building a distributed operations model, partnering with offshore teams not just for IT infrastructure, but for internal systems spanning HR, procurement, legal, and beyond. The savings are being funneled back into critical areas, like bolstering disaster recovery capabilities to strengthen business continuity, and reinforcing the DX foundation to deliver more reliable support across the organization.

AI, of course, remains a top priority, and internal demand is surging. “Employees, especially senior leaders, want services that pull together even more data,” Lee says. “Down the road, I’d like AiU to evolve to the point where you can ask what’s been happening with a particular customer lately, and instantly get a unified view of what division A is working on, what division B needs, and live customer inquiries all in one snapshot.”

The CIO’s five-step checklist for raising AI ROI

Earlier this year, MIT published research finding that 95% of organizations are seeing no return on their AI investments, at a time when more than $30 billion had been poured into internal gen AI projects in the US alone. Why do so many AI projects fail to deliver the expected ROI? “Because AI isn’t clearly connected to business value,” says Neil Ramasamy, global CIO at IT consulting firm Cognizant. “It’s often technically impressive but doesn’t translate into solving real problems or producing tangible results.”

IT leaders often get swept up in the hype and dive into AI experiments without considering business outcomes. “Many companies start with models or pilot projects rather than business results,” says Asana CIO Saket Srivastava. “Teams frequently run isolated demos without redesigning workflows or assigning a P&L owner.”

Negative outcomes follow when a lack of product thinking, poor data management, absent governance, and a culture that doesn’t encourage AI use all come together. “If you don’t change the process, AI just repeats today’s inefficiencies faster,” Srivastava warns.

Effective change management is the key to avoiding negative AI ROI. Below are five change management guidelines CIOs can put into practice right away. By following them, organizations can learn from anti-patterns and find the metrics that prove an AI project’s success.

1. Clarify business goals and lead AI projects through leadership alignment

AI projects rarely succeed without C-suite sponsorship and a clear vision. “Strong leadership is essential to converting AI investment into real results. The more CEO- or board-level sponsorship and oversight there is, the higher the ROI,” says Adam Lopez, president and senior vCIO at CMIT Solutions.

IT consulting firm Xebia, for example, runs an AI steering committee chaired by global CIO Smit Shanker. The committee includes the global CFO, the head of AI and automation, the heads of IT infrastructure and security, and business operations leaders. “Assign an accountable leader with goal-based ownership to every AI project,” Srivastava advises. “Stand up an organization-wide program management office to define key use cases, set success metrics and guardrails, and share progress regularly.”

Even with leadership established, though, most employees don’t know how to apply AI in their daily work. “Even when they’re given the tools, most employees don’t know where to start,” says Skillsoft CIO Orla Daly. “Designate champions to lead AI use across the organization and share practical use cases and tips. Spreading efficient ways to use tools like Copilot is especially important.”

“Leaders must foster a data-driven culture and present a vision of AI solving real business problems,” Ramasamy stresses, adding that this requires close collaboration among executives, data scientists, and IT, along with executing pilot projects and measuring their results.

2. 인재 프레임워크를 재편하고 업스킬링(역량 강화)에 투자한다

AI ROI를 높이기 위해 CIO는 인재 전략을 새롭게 설계해야 한다. 라마사미는 “CIO는 인재 및 관리 전략을 조정해 AI 채택과 ROI를 극대화해야 한다”며 “데이터 과학자, 프롬프트 엔지니어 같은 새로운 역할을 만들고 기존 직원을 재교육하는 접근이 필요하다”라고 말했다.

로페즈는 “인재는 모든 AI 전략의 핵심이다. 교육, 커뮤니케이션, 전문 인력 확보에 투자해야 직원이 AI를 받아들이고 성과를 낼 수 있다”라고 강조헸다. 또, 해커톤이나 사내 교육이 직원의 기술과 자신감을 높이는 효과가 크다고 덧붙였다.

스리바스타바는 “모든 직원에게는 기본적인 프롬프트 이해력과 안전 교육이 필요하고, 파워 유저에게는 워크플로우 설계와 AI 에이전트 구축 능력이 필요하다. 우리는 전사 설문을 통해 역량 수준을 파악하고 교육 목표를 설정해 성숙도가 제대로 향상됐는지 재측정했다”라고 밝혔다.

아스페리타스 컨설팅(Asperitas Consulting)의 클라우드 사업 책임자 스콧 휠러는 “AI 도입은 인적 역량뿐 아니라 업무 프로세스 자체를 다시 점검해야 한다”고 말했다. 즉, 어떤 업무를 누가 수행해야 하는지를 재정의해야 한다는 뜻이다.

스킬소프트의 데일리는 “현대의 인재 전략은 4B(Build, Buy, Borrow, Bot) 전략으로 균형을 맞춰야 한다”라며, “조직을 고정된 직무가 아니라 ‘역량의 집합체’로 보고, 내부 인력·소프트웨어·파트너·자동화 기술을 상황에 맞게 조합해야 한다”라고 설명했다.

스킬소프트의 팀은 깃허브 코파일럿을 활용해 고객용 학습 포털을 빠르게 구축했다. 이 경험을 통해 AI 도우미와 인간 개발자가 협업할 때 생산성이 비약적으로 높아진다는 점을 확인했다.

라마사미는 “직원이 AI 때문에 일자리를 잃을 것이라는 불안을 해소하려면, 왜 AI를 도입하는지 명확히 설명해야 한다”라고 지적했다. 스리바스타바 역시 “핵심은 신뢰다. AI가 반복 업무를 줄이고 임팩트를 높인다는 점을 보여주면 자연스럽게 채택이 뒤따른다”라고 말했다.

3. AI의 가치를 온전히 확보하기 위해 조직 프로세스를 재설계한다

인재 프레임워크를 바꾸는 것만으로는 충분하지 않다. 로페즈는 “AI의 잠재력을 완전히 발휘하려면 조직의 운영 방식 자체를 재구성해야 한다”고 조언했다. AI를 단순한 ‘부가 기능’이 아니라 핵심 운영 프로세스에 통합해야 한다는 의미다.

스리바스타바는 “AI 기반 워크플로우를 제품처럼 관리해야 한다. 요청, 우선순위, 로드맵을 체계적으로 운영하고, 문제 정의와 가치 가설을 명확히 설정해야 한다”라고 강조했다.

제비아는 AI 프로젝트마다 세 단계 검증 절차를 거친다. ‘가치 평가→비즈니스 승인→IT 이관 및 모니터링’의 구조다. 샹커는 “이 과정을 통해 부서 간 프로세스가 단순화되고 사일로가 줄어든다”라고 설명했다.

라마사미는 “대부분 기업이 필요한 변화 관리의 범위를 과소평가한다. 사일로형 의사결정에서 데이터 중심 방식으로 전환해야 한다”라며, “AI가 생성한 결과를 업무 프로세스에 자연스럽게 통합하고, 직원이 데이터 기반 통찰로 의사결정을 내릴 수 있어야 한다”라고 지적했다.

데일리는 “AI가 효율화할 수 있는 업무를 찾기 위해서는 현재의 워크플로우를 정확히 파악해야 한다. 업무 전문가가 프로세스를 검토해 최적화할 영역을 찾아야 하며, 각 부서에 AI를 어떻게 녹여낼지 질문을 던지는 인물을 지정해야 한다”라고 말했다.

스킬소프트는 AI 사용례를 체계화하기 위해 ‘에이전트 레지스트리’를 구축했다. AI 에이전트의 기능, 가드레일, 데이터 관리 방식을 문서화해 표준화하고 있으며, 이를 기반으로 윤리와 거버넌스를 포함한 기업 AI 관리 체계를 정립 중이다.

아스페리타스의 휠러는 “AI 채택을 가속화하려면 ‘AI 스왓팀(SWAT team)’을 운영해 초기 장애를 해결하고 사용자 지원을 강화하는 것이 효과적”이라고 조언했다.

4. 성과를 측정해 AI 투자 수익을 검증한다

ROI를 평가하려면 CIO는 AI 도입 이전의 기준선을 설정하고, 초기에 명확한 벤치마크를 세워야 한다. 많은 IT 리더가 가치 실현까지 걸리는 시간, 비용 절감, 시간 절감, 사람 직원이 처리하는 업무량, AI로 새로 창출된 매출 기회 같은 지표에 책임자를 지정할 것을 권고한다.

아스페리타스의 휠러는 “AI 프로젝트를 시작하기 전에 기준 측정값을 반드시 설정해야 한다”라며, 각 사업 부문의 예측 지표를 경영진의 정기 성과 리뷰에 포함시키라고 조언했다. 휠러는 많은 조직이 모델 정확도, 지연 시간, 정밀도 같은 기술 지표만 측정하고, 이 수치를 비용 절감, 매출 증가, 리스크 감소 같은 비즈니스 성과와 연결하지 못하는 실수를 저지른다고 지적했다.

그래서 다음 단계는 실질적 가치를 입증할 수 있는 명확하고 측정 가능한 목표를 세우는 것이다. CMIT 솔루션의 로페즈는 “프로젝트 초기 단계부터 측정 항목을 설계해야 한다”라고 말했다. 로페즈는 CIO가 각 AI 프로젝트마다 ‘처리 속도 20% 개선’, ‘고객 만족도 15% 상승’처럼 구체적인 KPI를 정의해야 한다고 조언했다. 또, 빠르고 정량화 가능한 결과를 낼 수 있는 소규모 파일럿부터 시작하라고 덧붙였다.

한 가지 명확한 측정 지표는 시간 절감이다. 소프트웨어 기반 서비스 회사 레몬그래스(Lemongrass)의 CTO 에이먼 오닐은 고객사가 SAP 개발 문서를 수작업으로 작성하는 장면을 여러 번 봤는데, 이 작업은 엄청난 시간이 소요되는 과정이다. 오닐은 “문서 작성을 생성형 AI로 처리하면 사람의 투입 시간을 분명하게 줄일 수 있고, 이 절감 효과를 매우 간단하게 달러 기준 ROI로 환산할 수 있다”라고 말했다.

업무당 투입되는 노동력의 감소도 중요한 신호다. 풀스택 기술 서비스 회사 TEK시스템즈(TEKsystems)의 CTO 램 팔라니아판은 “목표가 상담원이 처리하는 콜센터 문의 건수를 줄이는 것이라면, 이 수치를 명확한 지표로 정하고 실시간으로 추적해야 한다”라며, AI 도입 과정에서 새로운 매출 기회가 생길 가능성도 크다고 덧붙였다.

일부 CIO는 개별 사용례 별로 세분화된 KPI를 모니터링하며, 결과에 따라 전략을 조정한다. 예를 들어 아사나의 스리바스타바는 개발 효율성을 모니터링할 때 사이클 타임, 처리량, 품질, 트랜잭션당 비용, 리스크 이벤트 발생 건수를 추적한다. 또 에이전트 지원 실행 비율, 활성 사용자, 휴먼 인 더 루프 승인 비율, 예외 상황 에스컬레이션 비율도 함께 본다. 스리바스타바는 이런 데이터를 검토하면 프롬프트와 가드레일을 실시간으로 조정하는 데 도움이 된다고 설명했다.

핵심은 초기부터 측정 지표를 설정하고, 신호나 성과를 추적하지 않는 안티 패턴에 빠지지 않는 것이다. 스리바스타바는 “측정은 종종 프로젝트 후반에 뒤늦게 붙는 바람에 리더가 가치를 입증하지 못하고, 어떤 것을 확장해야 하는지도 결정하지 못한다”라고 설명했다. 또, “처음부터 명확한 핵심 미션 지표를 정하고 기준선을 세운 다음, AI를 업무 흐름 속에 직접 녹여 넣어야 직원이 더 중요한 판단이 필요한 일에 집중할 수 있다”라고 덧붙였다.

5. AI 문화를 거버넌스로 관리해 보안 사고와 불안정을 막는다

생성형 AI 도구는 이제 업무 현장에서 흔하게 쓰이지만, 여전히 상당수 직원은 이를 안전하게 사용하는 방법을 모른다. 스몰PDF(SmallPDF)의 2025년 조사에 따르면, 미국 기반 직원의 거의 1/5는 AI 도구에 로그인 자격 증명을 입력한 경험이 있었다. 로페즈는 “좋은 리더십은 거버넌스와 가드레일을 세우는 것에서 시작된다”라고 말했다. 여기에는 챗GPT 같은 도구에 민감한 비밀 레시피 데이터가 입력되지 않도록 하는 정책 수립도 포함된다.

AI를 많이 쓸수록 기업의 공격 표면도 넓어진다. 경영진은 AI 기반 브라우저의 보안 취약점, 섀도우 AI 사용, LLM의 환각 문제를 진지하게 고려해야 한다. 에이전트형 AI가 비즈니스 핵심 프로세스에 깊이 관여할수록, 제대로 된 권한 관리와 접근 제어 없이는 민감 데이터 노출이나 IT 시스템에 대한 악의적 침투 위험이 커진다.

소프트웨어 개발 관점에서 보면, AI 코딩 에이전트를 통해 비밀번호나 키, 토큰이 유출될 가능성도 매우 크다. 개발자는 외부 데이터나 도구, API에 접근하도록 MCP 서버를 사용해 AI 코딩 에이전트를 강화해 왔다. 그러나 월람(Wallarm) 조사에 따르면, 2025년 2~3분기 MCP 관련 취약점이 270% 증가했고, 동시에 API 취약점도 급증했다.

스리바스타바는 기업이 AI를 도입할 때 에이전트 ID, 권한, 감사 이력을 소홀히 하는 경우가 많다고 지적하며, “에이전트 ID 및 접근 관리를 도입해, 에이전트가 사람과 동일한 권한과 감사 가능성을 갖도록 해야 한다”라고 조언했다. 여기에는 로그 기록과 승인 절차도 포함된다.

위험이 이렇게 큰데도 관리 체계는 여전히 허술한 곳이 많다. 오딧보드(AuditBoard)의 보고서에 따르면, AI를 도입 중인 조직 비중은 82%에 이르지만, 거버넌스 프로그램을 완전히 구현한 곳은 25%에 불과하다. IBM 분석에 따르면, 데이터 유출 1건당 평균 피해액은 거의 450만 달러에 이르며, IDC는 ‘신뢰할 수 있는 AI’를 구축한 조직이 그렇지 않은 조직보다 AI 프로젝트 ROI가 2배 이상 높을 가능성이 60% 더 크다고 밝혔다. AI 거버넌스에 투자해야 하는 비즈니스 논리는 더할 나위 없이 분명하다.

스리바스타바는 “높은 목표 의식에 강력한 가드레일을 짝지어야 한다”라며 “데이터 수명 주기와 접근 제어를 명확히 하고, 평가와 레드팀, 그리고 위험이 큰 구간에는 휴먼 인 더 루프 검증 절차를 두어야 한다”라고 말했다. 또 “보안과 프라이버시, 데이터 거버넌스를 소프트웨어 개발 라이프사이클에 녹여 배포와 보안을 동시에 추진해야 한다. 데이터 계보나 모델 동작을 알 수 없는 블랙박스를 허용해서는 안 된다”라고 덧붙였다.

AI는 마법이 아니다

BCG에 따르면, 기업 가운데 22%만이 AI를 개념 증명 단계 이상으로 진척시켰고, 4%만이 상당한 가치를 창출하고 있다. 이런 냉정한 통계를 감안하면, CIO는 AI 투자 수익에 대해 비현실적인 기대를 세워서는 안 된다.

AI에서 의미 있는 ROI를 얻으려면 상당한 초기 노력이 필요하며, 조직 프로세스를 근본적으로 바꾸는 작업이 뒤따라야 한다. 마스터카드의 운영 CTO 조지 마달로니는 런타임(Runtime)과의 최근 인터뷰에서 생성형 AI 애플리케이션 도입은 본질적으로 변화 관리와 채택의 문제라고 밝혔다.

AI에는 함정이 끝없이 많고, 조직이 가치를 따지기보다 유행을 좇는 경우도 흔하다. 데이터 전략 없이 성급히 프로젝트를 시작하거나 너무 빨리 확장하거나 보안을 사후에 붙이는 경우도 많다. 많은 AI 프로그램이 목표한 수준에 도달하지 못하는 이유는 최고 경영진의 후원이나 거버넌스가 부족하기 때문이다. 반대로, 솔루션 업체의 홍보를 곧이곧대로 믿고 과도하게 지출하거나 AI 플랫폼을 기존 레거시 인프라와 통합하는 난이도를 과소평가하는 경우도 많다.

앞으로 AI의 비즈니스 영향을 극대화하려면, 확장을 뒷받침할 데이터 인프라와 플랫폼 역량에 투자하고, 사람의 반복 작업을 줄이고 매출이나 효율을 분명하게 끌어올릴 수 있는 소수의 핵심 사용례에 집중해야 한다.

AI 열기를 핵심 원칙에 다시 연결하고, 조직이 추구하는 비즈니스 전략을 명확히 이해하는 작업이 있어야만 ROI에 한 걸음씩 다가갈 수 있다. 탄탄한 리더십과 분명한 목표 없이 AI에만 기대면, AI는 손에 잡힐 듯 잡히지 않는 보상을 약속하는 흥미로운 기술에 그칠 뿐이다.
dl-ciokorea@foundryco.com

67% of CIOs see themselves as potential CEOs

According to a recent survey, CIOs now see themselves as business leaders, and most believe they have the skills needed to hold the top job at their companies.

Two-thirds of chief information officers aspire to become CEO at some point, and many say they have the proven leadership abilities and the capacity to drive innovation needed to run a company, according to a survey conducted by Deloitte's CIO Program.

IT also appears to have reached a turning point, with 52% of CIOs now saying they view their IT teams as a source of revenue rather than a service center for the business.

Overall, the survey results underscore the emergence of the top IT executive as a trusted business strategist who fuels growth and reinvents the enterprise's competitiveness, Deloitte's experts say.

"There has never been a better time to be a CIO," explains Anjali Shaikh, who leads Deloitte's CIO and CDAO programs in the US. "Technology is no longer an advisory function, and chief information officers are establishing themselves as strategic catalysts for their enterprises, moving beyond the operational role they held in the past."

Managing the P&L

Beyond attracting the attention of their peers, CIOs are also showing a renewed view of themselves, Shaikh says. 36% of CIOs report that they now manage a profit and loss (P&L) statement, which could fuel new career ambitions.

The 67% of CIOs who said they are interested in holding the CEO role in the future pointed to three key competencies they believe make them fit for the step up.

Nearly four in ten separately identified their proven leadership and management skills, their ability to drive innovation and growth, and their experience building high-performing teams.

By contrast, only about a third of the CTOs and chief digital officers Deloitte surveyed see themselves as future CEOs, and fewer than one in six CISOs and chief data and analytics officers envision making that move.

Amit Shingala, CEO and co-founder of IT services provider Motadata, notes that the shift in the CIO role, from head of IT operations to a key driver of business growth, is increasingly evident across the industry.

"Technology now influences everything from customer experience to revenue models, so CIOs are expected to contribute directly to business results, not just infrastructure stability," observes Shingala, who works closely with several CIOs.

As a result, Shingala isn't surprised that many CIOs aspire to become CEO, and he believes the position is more of a springboard today than ever before.

"CIOs now have a view across the entire business: operations, risk, finance, cybersecurity, and how customers interact with digital services," he says. "That broad understanding, combined with experience leading major transformation initiatives, puts them in a strong position for the CEO role."

Innovation before revenue

Shingala also understands why many CIOs now see their role as a revenue generator. But while driving revenue growth is important, the ultimate goal should be delivering business value, he says.

"When a chief information officer introduces new digital capabilities or enables automation that improves the customer experience, the result often translates into new revenue or cost efficiency," he explains. "Innovation comes first. Revenue is usually the reward for getting it right."

Scott Bretschneider, vice president of client delivery and operations at Cowen Partners Executive Search, agrees that innovation should be the top priority for modern CIOs, who, he says, should act as both innovation catalysts and business operators.

"Innovation means rethinking business processes, enabling data-driven decisions, and building platforms for growth," adds Bretschneider. "Revenue is the result of executing those innovations effectively. A great CIO emphasizes innovation that leads to results, balancing experimentation with measurable returns."

Like Shingala, Bretschneider sees CIOs as emerging CEO candidates. In recent years, a growing number of CIOs and chief digital officers have moved into president, COO, and CEO roles, he notes, particularly in industries where IT is front and center, including financial services, retail, and manufacturing.

"Today's CIOs have many of the qualities that boards and investors look for in CEOs," he adds. "They understand enterprise-wide operations, which in turn span finance, supply chain, customer experience, and risk management. They are used to leading diverse teams and managing substantial budgets."

Changing the narrative

While the survey shows rising expectations and responsibilities for CIOs, the bad news is that nearly half the companies represented still view the role as more about maintenance and support than innovation and revenue, notes Deloitte's Shaikh.

CIOs stuck at companies that hold this outdated view of the position can push to evolve their roles, she argues. They should work hard to keep pace with emerging technologies while pressing to make their positions more innovation-focused, she recommends.

"The hardest part of their job is keeping up with all the emerging technologies, and you can't afford to fall behind," Shaikh says. "How do you create the space on your calendar and build capability across your teams and your own energy?"

CIOs should enlist the help of universities, peers, and other resources to keep pace, she adds.

"You have all the responsibilities of your traditional role to help guide your team and your enterprise through emerging technologies, and that requires staying a step ahead," she concludes. "So: how are you doing that?"

A CIO’s 5-point checklist to drive positive AI ROI

Earlier this year, MIT made headlines with a report that found 95% of organizations are getting no return from AI — and this despite a groundbreaking $30 billion investment, or more, into US-based internal gen AI initiatives. So why do so many AI initiatives fail to deliver positive ROI? Because they often lack a clear connection to business value, says Neal Ramasamy, global CIO at Cognizant, an IT consulting firm. “This leads to projects that are technically impressive but don’t solve a real need or create a tangible benefit,” he says.

Technologists often follow the hype, diving headfirst into AI tests without considering business results. “Many start with models and pilots rather than business outcomes,” says Saket Srivastava, CIO of Asana, the project management application. “Teams run demos in isolation, without redesigning the underlying workflow or assigning a profit and loss owner.”

A combination of a lack of upfront product thinking, poor underlying data practices, nonexistent governance, and minimal cultural incentives to adopt AI can produce negative results. So to avoid poor outcomes, many of the techniques boil down to better change management. “Without process change, AI speeds today’s inefficiencies,” adds Srivastava.

Here, we review five tips to manage change within an organization that CIOs can put into practice today. By following this checklist, enterprises should start to turn the tide on negative AI ROI, learn from anti-patterns, and discover which sort of metrics validate successful company-wide AI ventures.

1. Align leadership upfront by communicating business goals and stewarding the AI initiative

AI initiatives require executive sponsorship and a clear vision for how they improve the business. “Strong leadership is essential to translate AI investments into results,” says Adam Lopez, president and lead vCIO at managed IT support provider CMIT Solutions. “Executive sponsorship and oversight of AI programs, ideally at the CEO or board level, correlates with higher ROI.”

For example, at IT services and consulting company Xebia, a subgroup of executives steers its internal AI efforts. Chaired by global CIO Smit Shanker, the team includes the global CFO, head of AI and automation, head of IT infrastructure and security, and head of business operations.

Once upper leadership is assembled, accountability becomes critical. “Start by assigning business ownership,” advises Srivastava. “Every AI use case needs an accountable leader with a target tied to objectives and key results.” He recommends standing up a cross-functional PMO to define lighthouse use cases, set success targets, enforce guardrails, and regularly communicate progress.

Still, even with leadership in place, many employees will need hands-on guidance to apply AI in their daily work. “For most individuals, even if you give them the tools in the morning, they don’t know where to start,” says Orla Daly, CIO of Skillsoft, a learning management system. She recommends identifying champions across the organization who can surface meaningful use cases and share practical tips, such as how to get more out of tools like Copilot. Those with a curiosity and a willingness to learn will make the most headway, she says.

Finally, executives must invest in infrastructure, talent, and training. “Leaders must champion a data-driven culture and promote a clear vision for how AI will solve business problems,” says Cognizant’s Ramasamy. This requires close collaboration between business leaders, data scientists, and IT to execute and measure pilot projects before scaling.

2. Evolve by shifting the talent framework and investing in upskilling

Organizations must be open to shift their talent framework and redesign roles. “CIOs should adapt their talent and management strategies to ensure successful AI adoption and ROI for the organization,” says Ramasamy. “This could involve creating new roles and career paths for AI-focused professionals, such as data scientists and prompt engineers, while upskilling existing employees.”

CIOs should also view talent as a cornerstone of any AI strategy, adds CMIT’s Lopez. “By investing in people through training, communication, and new specialist roles, CIOs can be assured that employees will embrace AI tools and drive success.” He adds that internal hackathons and training sessions often yield noticeable boosts in skills and confidence.

Upskilling, for instance, should meet employees where they are, so Asana’s Srivastava recommends tiered paths: all staff need basic prompt literacy and safety training, while power users require deeper workflow design and agent-building knowledge. “We took the approach of surveying the workforce, targeting enablement, and remeasuring to confirm that maturity moved in the right direction,” he says.

But assessing today’s talent framework goes beyond human skillsets. It also means reassessing your work to be done, and who or what performs what tasks. “It’s essential to review business processes for opportunities to refactor them, given the new capabilities that AI brings,” says Scott Wheeler, cloud practice lead at cloud consulting firm Asperitas Consulting.

For Skillsoft’s Daly, today’s AI age necessitates a modern talent management framework that artfully balances the four Bs: build, buy, borrow, and bots. In other words, leaders should view their organization as a collection of skills rather than fixed roles, and apply the right mix of in-house staff, software, partners, or automation as needed. “It’s requiring us to break things down into jobs or tasks to be done, and looking at your work in a more fragmented way,” says Daly.

For instance, her team used GitHub Copilot to quickly code a learning portal for a certain customer. The project highlighted how pairing human developers with AI assistants can dramatically accelerate delivery, raising new questions about what skills other developers need to be equally productive and efficient.

But as AI agents take over more routine work, leaders must dispel fears that AI will replace jobs outright. "Communicating the why behind AI initiatives can alleviate fears and demonstrate how these tools can augment human roles," says Ramasamy. Srivastava agrees: "The throughline is trust. Show people how AI removes toil and increases impact; keep humans in the decision loop and adoption will follow."

3. Adapt organizational processes to fully capture AI benefits 

Shifting the talent framework is only the beginning. Organizations must also reengineer core processes. “Fully unlocking AI’s value often requires reengineering how the organization works,” says CMIT’s Lopez, who urges embedding AI into day-to-day operations and supporting it with continual experimentation rather than treating it as a static add-on.

To this end, one necessary adaptation is toward treating internal AI-driven workflows like products and codifying patterns across the organization, says Srivastava. “Establish product‑management rigor for intake, prioritization, and roadmapping of AI use cases, with clear owners, problem statements, and value hypotheses,” he says.

At Xebia, a governance board oversees this rigor through a three-stage tollgate process of identifying and assessing value, securing business acceptance, and then handing off to IT for monitoring and support. “A core group is responsible for organizational and functional simplification with each use case,” says Shanker. “That encourages cross-functional processes and helps break down silos.”

Similarly for Ramasamy, the biggest hurdle is organizational resistance. “Many companies underestimate the change management required for successful adoption,” he says. “The most critical shift is moving from siloed decision-making to a data-centric approach. Business processes should integrate AI outputs seamlessly, automating tasks and empowering employees with data-driven insights.”

Identifying the right areas to automate also depends on visibility. “This is where most companies fall down because they don’t have good, documented processes,” says Skillsoft’s Daly. She recommends enlisting subject-matter experts across business lines to examine workflows for optimization. “It’s important to nominate individuals within the business to ask how to drive AI into your flow of work,” she says.

Once you identify units of work common across functions that AI can streamline, the next step is to make them visible and standardize their application. Skillsoft is doing this through an agent registry that documents agentic capabilities, guardrails, and data management processes. “We’re formalizing an enterprise AI framework in which ethics and governance are part of how we manage the portfolio of use cases,” she adds.
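An agent registry like the one described can start as something very lightweight. The sketch below is a hypothetical illustration, not Skillsoft's actual system: the record fields (owner, capabilities, guardrails, data sources) are assumptions drawn from the practices discussed above.

```python
# Hypothetical sketch of an agent registry entry; all field names and
# values are illustrative assumptions, not a real product's schema.
from dataclasses import dataclass

@dataclass
class AgentRecord:
    name: str
    owner: str                  # accountable business owner
    capabilities: list[str]     # what the agent is allowed to do
    guardrails: list[str]       # policies enforced at runtime
    data_sources: list[str]     # systems the agent may read from
    human_in_loop: bool = True  # require approval for risky actions

registry: dict[str, AgentRecord] = {}

def register(agent: AgentRecord) -> None:
    """Add an agent to the registry, refusing duplicate names."""
    if agent.name in registry:
        raise ValueError(f"agent {agent.name!r} already registered")
    registry[agent.name] = agent

register(AgentRecord(
    name="invoice-triage",
    owner="finance-ops",
    capabilities=["classify_invoice", "draft_approval_email"],
    guardrails=["no PII in prompts", "spend actions require approval"],
    data_sources=["erp-readonly"],
))
```

Even a registry this simple gives governance reviews a single place to see which agents exist, who owns them, and what guardrails apply.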

Organizations should then anticipate roadblocks and create support structures to help users. “One strategy to achieve this is to have AI SWAT teams whose purpose is to facilitate adoption and remove obstacles,” says Asperitas’ Wheeler.

4. Measure progress to validate your return   

To evaluate ROI, CIOs must establish a pre-AI baseline and set benchmarks upfront. Leaders recommend assigning ownership around metrics such as time to value, cost savings, time savings, work handled by human agents, and new revenue opportunities generated.

“Baseline measurements should be established before initiating AI projects,” says Wheeler, who advises integrating predictive indicators from individual business units into leadership’s regular performance reviews. A common fault, he says, is only measuring technical KPIs like model accuracy, latency, or precision, and failing to link these to business outcomes, such as savings, revenue, or risk reduction.
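To make the baseline-first approach concrete, here is a minimal sketch of translating a pre/post metric comparison into a dollar ROI. Every figure in it is a made-up assumption for illustration, not data from the article: a support-triage use case with 20 agents, 250 workdays, a $45 loaded hourly rate, and $150,000 in annual AI costs.

```python
# Hypothetical sketch: baseline vs. post-deployment metrics for one AI
# use case, converted to a simple dollar ROI. All numbers are illustrative.

def roi(benefit: float, cost: float) -> float:
    """Simple ROI: net benefit divided by cost."""
    return (benefit - cost) / cost

# Baseline captured BEFORE the pilot starts.
baseline = {"tickets_per_agent_day": 40, "avg_handle_minutes": 12.0}
# Measured after three months of AI-assisted triage.
after = {"tickets_per_agent_day": 55, "avg_handle_minutes": 8.5}

minutes_saved_per_ticket = baseline["avg_handle_minutes"] - after["avg_handle_minutes"]
tickets_per_year = after["tickets_per_agent_day"] * 250 * 20  # 20 agents, 250 workdays
hours_saved = minutes_saved_per_ticket * tickets_per_year / 60
annual_benefit = hours_saved * 45.0   # assumed $45 fully loaded hourly rate
annual_cost = 150_000.0               # assumed licenses + integration + training

print(f"Hours saved/yr: {hours_saved:,.0f}")
print(f"ROI: {roi(annual_benefit, annual_cost):.0%}")
```

The point of the exercise is the comparison, not the arithmetic: without the `baseline` dict recorded before launch, the `after` numbers prove nothing.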

Therefore, the next step is to define clear, measurable goals that demonstrate tangible value. “Build measurement into projects from day one,” says CMIT’s Lopez. “CIOs should define a set of relevant KPIs for each AI initiative. For example, 20% faster processing time or a 15% boost in customer satisfaction.” Start with small pilots that yield quick, quantifiable results, he adds.

One clear measurement is time savings. For instance, Eamonn O’Neill, CTO at Lemongrass, a software-enabled services provider, shares how he’s witnessed clients documenting SAP development manually, which can be an extremely time-intensive process. “Leveraging generative AI to create this documentation provides a clear reduction in human effort, which can be measured and translated to a dollar ROI quite simply,” he says.

Reduction of human labor per task is another key signal. “If the goal is to reduce the number of support desk calls handled by human agents, leaders should establish a clear metric and track it in real time,” says Ram Palaniappan, CTO at full-stack tech services provider TEKsystems. He adds that new revenue opportunities may also surface through AI adoption.

Some CIOs are monitoring multiple granular KPIs across individual use cases and adjusting strategies based on results. Asana’s Srivastava, for instance, tracks engineering efficiency by monitoring cycle time, throughput, quality, cost per transaction, and risk events. He also measures the percentage of agent-assisted runs, active users, human-in-the-loop acceptance, and exception escalations. Reviewing this data, he says, helps tune prompts and guardrails in real time.

The resounding point is to set metrics early on, and not fall into the anti-patterns of not tracking signals or value gained. “Measurement is often bolted on late, so leaders can’t prove value or decide what to scale,” says Srivastava. “The remedy is to begin with a specific mission metric, baseline it, and embed AI directly in the flow of work so people can focus on higher-value judgment.”

5. Govern your AI culture to avoid breaches and instability

Gen AI tools are now commonplace, yet many employees still lack training to use them safely. For instance, nearly one in five US-based employees has entered login credentials into AI tools, according to a 2025 study from SmallPDF. “Good leadership involves establishing governance and guardrails,” says Lopez. That includes setting policies to prevent sensitive secret sauce data from being fed into tools like ChatGPT.

Heavy AI use also widens the enterprise attack surface. Leadership must now seriously consider things like security vulnerabilities in AI-driven browsers, shadow AI use, and LLM hallucinations. As agentic AI gets more involved in business-critical processes, proper authorization and access controls are essential to prevent exposure of sensitive data or malicious entry into IT systems.

From a software development standpoint, the potential for leaking passwords, keys, and tokens through AI coding agents is very real. Engineers have jumped at MCP servers to empower AI coding agents with access to external data, tools, and APIs, yet research from Wallarm found a 270% rise in MCP-related vulnerabilities from Q2 to Q3 2025, alongside surging API vulnerabilities.

Neglecting agent identity, permissions, and audit trails is a common trap that CIOs often stumble into with enterprise AI, says Srivastava. “Introduce agent identity and access management so agents inherit the same permissions and auditability as humans, including logging and approvals,” he says.

Despite the risks, oversight remains weak. An AuditBoard report found that while 82% of organizations are deploying AI, only 25% have fully implemented governance programs. With data breaches now averaging nearly $4.5 million each, according to IBM, and IDC reporting organizations that build trustworthy AI are 60% more likely to double the ROI of AI projects, the business case for AI governance is crystal clear.

“Pair ambition with strong guardrails: clear data lifecycle and access controls, evaluation and red‑teaming, and human‑in‑the‑loop checkpoints where stakes are high,” says Srivastava. “Bake security, privacy, and data governance into the SDLC so ship and secure move together — no black boxes for data lineage or model behavior.”

It’s not magic

According to BCG, only 22% of companies have advanced their AI beyond the POC stage, and just 4% are creating substantial value. With these sobering statistics in mind, CIOs shouldn’t set unrealistic expectations for getting a return.

Finding ROI from AI will require significant upfront effort, and necessitate fundamental changes to organizational processes. As Mastercard's CTO for operations George Maddaloni noted in a recent interview with Runtime, gen AI app adoption is largely a matter of change management and adoption.

The pitfalls with AI are nearly endless, and it's common for organizations to chase hype rather than value, launch without a clear data strategy, scale too quickly, or implement security as an afterthought. Many AI programs simply don't have the executive sponsorship or governance to get where they need to be, either. Alternatively, it's easy to buy into vendor hype on productivity gains and overspend, or to underestimate the difficulty of integrating AI platforms with legacy IT infrastructure.


Looking ahead, to better maximize AI's business impact, leaders recommend investing in the data infrastructure and platform capabilities needed to scale, and honing in on one or two high-impact use cases that can remove human toil and clearly drive revenue or efficiency.

Grounding AI fervor in core tenets and understanding the business strategy you're aiming for is necessary to inch toward ROI. Without sound leadership and clear objectives, AI remains only a fascinating technology whose reward is always just out of reach.

10 benefits of an optimized third-party IT services portfolio

In today’s rapidly changing digital landscape, CEOs and CIOs are under constant pressure to do more with less, reduce costs, increase agility, and ensure technology investments directly enable business growth. One of the most effective ways to achieve these objectives is by optimizing your third-party IT services portfolio.

An optimized portfolio not only unlocks cost savings but also enhances flexibility, strengthens risk management, and fosters innovation by aligning IT delivery with broader strategic goals. Here are the top 10 benefits of such a strategy:

Cost efficiency

An optimized portfolio can help with cost reduction and better financial management of IT services spend. By outsourcing certain IT functions to specialized vendors, companies can often achieve cost savings compared to in-house solutions. CEOs are always focused on maximizing profits and reducing unnecessary expenses, making cost-efficient IT services a priority.

Optimizing a decentralized portfolio into a centralized model can reduce IT services spend by up to 30% in fees alone. Beyond direct savings, consolidation creates a stronger base of institutional knowledge around systems, culture, and talent, accelerating onboarding and ensuring continuity of delivery.

Concentrating spend among a select set of strategic partners also creates meaningful leverage. Expect sustainable volume discounts, provider-led investments in technology and COEs, and best-in-class commercial terms. The result is a more cost-effective, stable, and performance-driven services ecosystem.

Focus on core business

Outsourcing non-core IT functions allows the organization to concentrate on primary business activities. This aligns with the strategic goals of the CEO, who wants the company to excel in its main areas of expertise.

Technology is advancing at its most aggressive pace in decades, and staying current requires time and specialized skills. By entrusting day-to-day IT operations to trusted providers, organizations can reallocate internal resources toward higher-value initiatives such as digital transformation, automation, and product innovation. This accelerates adoption of emerging technologies, and allows internal teams to deepen business expertise, strengthen cross-functional collaboration, and focus on driving growth where it matters most.

Scalability and flexibility

A well-structured third-party IT services portfolio can provide flexibility to scale up or down based on business needs. This is particularly valuable for CEOs who need to adapt to changing market conditions and seize growth opportunities.

Securing talent in today's market is challenging and time-consuming, so tapping into the talent pools of your strategic IT services partners lets organizations draw on their bench strength to fill immediate staffing needs.

Highly optimized IT service provider portfolios benefit from the institutional knowledge partners build over multiple engagements, ensuring onboarded resources are the right fit for the organization's culture. Provider partners often fill needs with people who have worked with the organization in some capacity on prior engagements, allowing them to hit the ground running thanks to familiarity with the environment, the people, and the processes.

Innovation and expertise

Outsourcing IT services can grant access to specialized expertise and innovative technologies that the organization might not possess in-house. CEOs are often interested in staying ahead of the curve and leveraging the latest advancements to drive competitive advantage. They also increasingly look to IT service provider expertise in IT security solutions, as well as in advancements and innovation by leveraging AI.

IT service providers continuously invest in advanced tech and talent development, enabling clients to benefit from cutting-edge innovations without bearing the full cost of adoption. As AI, automation, and cybersecurity evolve, providers offer the subject matter expertise and tools organizations need to stay ahead of disruption.

By tapping into this ecosystem, businesses can improve stability, enhance operational efficiency, and accelerate transformation, positioning IT as a true driver of competitive differentiation.

Risk management

CIOs and CEOs share a concern for managing and mitigating risks. By partnering with reliable and experienced third-party IT service providers, organizations can offload some risks associated with technology management, cybersecurity, compliance, and regulatory issues.

The largest risks reside within the security of an organization’s data, its platforms, and applications. Providers like Accenture, Wipro, and TCS have built strong security services platforms that allow organizations to leverage the depth and breadth of partner resources to keep up with technology advances.

Focus on strategy

With operational stability ensured through a balance of internal talent and trusted third parties, CIOs can dedicate more focus to long-term strategic initiatives that fuel growth and innovation. As technology evolves, shifts in spend across your provider landscape can reveal new leverage opportunities, whether through volume consolidation, strategic renewals, or rebalanced sourcing models.

A well-optimized portfolio gives CIOs the visibility and flexibility to adjust quickly, align investments with business priorities, and continually extract greater value from every provider relationship.

Agility and time to market

Third-party IT services can accelerate project timelines and improve time to market for new products or services. This aligns with CEO desires to be agile and responsive to market demands. 

An optimized IT services portfolio enables organizations to tap into providers with proven delivery methodologies, agile frameworks, and global delivery centers that operate around the clock. This delivery model shortens development cycles, enhances responsiveness, and ensures critical initiatives move from concept to deployment faster. When providers are strategically aligned to your business priorities, they proactively identify opportunities to streamline workflows and eliminate bottlenecks, turning IT into an enabler of innovation rather than a constraint on progress.

Resource allocation

CEOs and CIOs can allocate internal resources more effectively by leveraging external expertise. This can lead to better resource allocation, improved efficiency, and enhanced overall performance.

Optimized portfolios ensure that resources, both internal and external, are strategically aligned with enterprise goals. By clearly defining roles and responsibilities across your IT ecosystem, internal teams can focus on initiatives that differentiate the business while third-party providers manage standardized or commodity functions. This balance creates organizational clarity, eliminates duplication of effort, and enhances operational efficiency.

Over time, this structure supports workforce planning and succession development, allowing organizations to invest in the right internal skillsets for long-term strategic growth.

Competitive edge

A well-managed third-party IT services portfolio can provide an edge by allowing organizations to leverage external partner expertise and resources to outpace competitors. Organizations that view their IT service providers not merely as vendors but as strategic extensions of their teams usually have the upper hand.

Through continuous engagement, co-innovation, and shared investment models, organizations can pilot emerging technologies faster than peers and bring differentiated offerings to market. Providers with deep domain expertise often introduce industry best practices and benchmark insights that inform strategic decision-making. When these partnerships are managed proactively and built on mutual value, the result is a sustained competitive advantage rooted in speed, innovation, and operational excellence.

Business continuity

Outsourcing certain IT functions can contribute to business continuity planning by having redundancy and backup systems in place through third-party providers. Optimized third-party portfolios enhance resilience by ensuring redundancy across critical infrastructure, applications, and operations.

Leading IT service providers invest heavily in high-availability architectures, disaster recovery capabilities, and geographically diverse data centers, all of which strengthen your organization’s continuity posture. A diversified yet coordinated provider ecosystem ensures rapid recovery in the event of outages, cyber incidents, or natural disasters.

Overall, an optimized third-party IT services portfolio can contribute significantly to achieving the strategic objectives of CEOs and CIOs, including cost savings, efficiency improvements, innovation, risk management, and competitive advantage. However, it’s important to carefully select and manage third-party vendors to ensure they align with the organization’s goals. Otherwise, significant value and cost savings could be left on the table.

67% of CIOs see themselves as potential CEOs

CIOs now see themselves as business leaders, with most believing they have the skills necessary to take the top job within companies, according to a recent survey.

Two-thirds of CIOs aspire to become CEOs at some point, with many saying they possess the proven leadership skills and the ability to drive the innovation that’s needed to lead organizations, according to a survey by Deloitte’s CIO Program.

Moreover, IT appears to have hit a tipping point as well, with 52% of CIOs now saying their IT teams are viewed as a revenue generator rather than a service center for the business.

Overall, the survey results underscore the CIO’s emergence as a business strategist trusted to fuel growth and reimagine enterprise competitiveness, Deloitte experts say.

“It’s never been a better time to be a CIO,” says Anjali Shaikh, leader of Deloitte’s CIO and CDAO Programs US. “Technology is no longer an advisory function, and CIOs are very much showing up as strategic catalysts for their organizations and less of the operator role of the past.”

Managing P&L

In addition to gaining the attention of business colleagues, CIOs are also showing signs of renewed views about themselves, Shaikh says. Thirty-six percent of the CIOs report they now manage a profit-and-loss (P&L) statement, which may be fueling new career ambitions.

The 67% of CIOs who said they’re interested in pursuing a CEO role in the future pointed to three key skillsets that they believe make them qualified for advancement. Nearly four in 10 separately identified their proven leadership and management skills, their ability to drive innovation and growth, and their track record of building high-performing teams.

By contrast, only about a third of CTOs and chief digital officers surveyed by Deloitte see themselves as CEOs in the future, and less than a sixth of CISOs and chief data and analytics officers envision the move.

Amit Shingala, CEO and co-founder of IT service management vendor Motadata, says the CIO role’s shift from primarily running IT operations to becoming a key driver of business growth has become increasingly evident across the industry.

“Technology now influences everything from customer experience to revenue models, so CIOs are being expected to contribute directly to business outcomes, not just infrastructure stability,” says Shingala, who works closely with several CIOs.

As a result, Shingala isn’t surprised that many CIOs aspire to become CEOs, and he believes that the position is a steppingstone now more than ever before.

“CIOs now have visibility into the entire business — operations, risk, finance, cybersecurity, and how customers interact with digital services,” he says. “That broad understanding combined with experience leading major transformation efforts puts them in a strong position for the CEO role.”

Innovation before revenue

Shingala also understands why many CIOs now see their role as a revenue generator. But while driving revenue growth is important, delivering business value should be the ultimate goal, he says.

“When a CIO introduces new digital capabilities or enables automation that improves customer experience, the result often shows up as new revenue or cost efficiency,” he explains. “Innovation comes first. Revenue is usually the reward for getting innovation right.”

Scott Bretschneider, vice president of client delivery and operations at Cowen Partners Executive Search, agrees that innovation should be the top priority for CIOs. Modern CIOs should act as both innovation catalysts and business operators, he says.

“Innovation involves rethinking business processes, enabling data-driven decisions, and creating platforms for growth,” Bretschneider adds. “Revenue is the result of effectively executing those innovations. A great CIO emphasizes innovation that leads to results, striking a balance between experimentation and measurable returns.”

Like Shingala, Bretschneider also sees CIOs as emerging candidates to become CEOs. In recent years, a growing number of CIOs and chief digital officers have transitioned into president, COO, and CEO roles, he says, particularly in industries where IT is at the forefront, including financial services, retail, and manufacturing.

“CIOs today have many of the qualities boards and investors look for in CEOs,” he adds. “They understand enterprise-wide operations, encompassing finance, supply chain, customer experience, and risk management. They’re used to leading diverse teams and managing large budgets.”

Shifting the narrative

While the survey shows growing expectations and responsibilities for CIOs, the bad news is that nearly half of the organizations represented still see the role as more focused on maintenance and service than on innovation and revenue, notes Deloitte’s Shaikh.

CIOs stuck at enterprises focused on this older view of the position can push to evolve their roles, she says. CIOs should work hard to keep up with emerging technologies as they push to make their positions more focused on innovation, she recommends.

“The hardest part of their job is staying ahead of all the emerging technology, and you can’t find yourself on the back foot,” Shaikh says. “How are you creating the space in your schedule and creating the capacity through your teams and the energy?”

CIOs should lean on universities, peers, and other resources to help them keep up, she adds.

“You have all the responsibilities of your traditional role to help guide your team and your organization through the emerging technology, and that requires you to have to stay ahead of it,” she says. “So how are you doing that?”

How CIOs can get a better handle on budgets as AI spend soars

Gartner predicts global AI spending will hit $2 trillion in 2026, up from $1.5 trillion this year. And in a survey of over 300 executives at large companies by management consulting firm West Monroe Partners, 85% said they plan to increase IT budgets next year, with a big chunk going to AI. For 42% of executives, scaling AI and data capabilities is the top priority for technology investment; 91% said AI is causing their tech spend to increase, and nearly three-quarters plan to spend more on contractors as a result of AI.

Over the past couple of years, many companies were doing POCs, just figuring out what AI could do, says Bret Greenstein, CAIO at West Monroe. But that’s all changing now. “I see a lot less discussion of use cases and POCs, and more about phase one, phase two projects,” he says.

It’s not as hard to assess whether or not AI can do something anymore, he adds. “I can look at something and say this is highly addressable by AI.” But that doesn’t mean CIOs get carte blanche to spend all they want.

At the Principal Financial Group, a global investment and insurance company, the focus is now on delivering measurable business value, says Rajesh Arora, the company’s chief data and analytics officer.

“We’re reallocating budgets toward scalable platforms and high-impact use cases,” he says. Plus, the firm is implementing rigorous ROI tracking and cost governance. That’s because the firm is moving past experimental pilots, he says. In addition to looking for platforms that can scale, the company is also looking at lifecycle management tools, data foundations, and operational AI capabilities.

“These are solutions that’ll automate processes, enhance customer experience, build new capabilities, and strengthen risk management,” he says. “Our goal is to make every dollar work harder.” And that means some things have to go.

The company is pausing low-impact investments, for example, in favor of high-value use cases. And they’re tightening up their contract governance and renegotiating terms. There’s also automation. “We’re deploying cost-alerting for LLM ops and feature story versioning to flag anomalies and prevent overruns,” he says.

LLMs can produce different results for the same input, and different versions of a model can have very different performance metrics and costs. Feature story versioning tracks changes to the software, as well as to the model, data, and prompts used. In other words, managing AI costs has itself become a strategic focus.
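The kind of per-request cost alerting Arora describes can be sketched in a few lines. This is a hypothetical illustration, not Principal’s implementation: the per-token prices and the anomaly threshold are made-up assumptions.

```python
# Hypothetical sketch of cost alerting for LLM operations.
# Prices and thresholds are illustrative, not any vendor's actual rates.
PRICE_PER_1K = {"input": 0.0025, "output": 0.01}  # USD per 1,000 tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one LLM call from its token counts."""
    return (input_tokens / 1000) * PRICE_PER_1K["input"] + \
           (output_tokens / 1000) * PRICE_PER_1K["output"]

def flag_anomalies(costs: list[float], multiplier: float = 3.0) -> list[int]:
    """Return indices of requests costing more than `multiplier` x the running mean."""
    flagged, total = [], 0.0
    for i, cost in enumerate(costs):
        if i and cost > multiplier * (total / i):
            flagged.append(i)
        total += cost
    return flagged

# A runaway prompt (huge context, huge completion) gets flagged.
costs = [request_cost(800, 200), request_cost(900, 250), request_cost(40_000, 12_000)]
print(flag_anomalies(costs))  # → [2]
```

A production version would read token counts from the provider’s usage metadata and page an owner instead of printing, but the shape of the check, estimate cost per call and compare it to a running baseline, stays the same.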

Arora’s experience isn’t unique. Enterprises of all sizes and in all verticals are grappling with their AI spending as they move on from POCs to actual deployment at scale, which often means facing new demands for ROI, shifting money from legacy to AI projects, and struggling to get a handle on technical debt.

The push for proof

To prove its AI investments are worth the dough, Principal tracks efficiency gains, reductions in risk, improved customer satisfaction, and better employee experience. This creates a holistic view of the value that AI creates, says Arora.

“Our approach is to maintain a balanced portfolio,” he says. That means both short-term wins that build momentum, and long-term innovation to drive strategic advantage and growth. “As AI capabilities mature, we must be more intentional about how we define success and ensure long-term sustainability,” he adds.

Tech executives at smaller companies are also having to show results from their AI projects. JBGoodwin Realtors, with four offices in Austin and San Antonio, has 800 agents, partners, and employees, and everyone is all-in on AI, says Edward Tull, the company’s VP of technology and operations.

“The CEO uses it every day,” he says. “All the agents use it, too, and we have approval to spend more.” But he has to show ROI. “I have to prove it,” he adds. “I spend a little, prove the use case, and then I get a little more and spend a little more.” For example, if AI promises better efficiency, he might demonstrate it by running two processes in parallel, one the old-fashioned way and the other with AI.

Focusing on AI projects that result in cost savings is a good way to show results and build momentum, agrees Gartner analyst Melanie Freeze. “We know that can lead to other non-cost considerations and long-term value.” For example, in infrastructure and operations, likely wins include cloud cost management, IT service support, and general employee productivity, she says.

“You can get cost optimization, but also all that other value like innovation, efficiency, optimizing talent management,” she says.

A shift in priorities

Another way to pay for AI projects, especially experimental ones that don’t yet have clear ROI, is to take money from other areas. JBGoodwin’s Tull says he does that. “I’ll get rid of other things we spend on, to offset what I spend on AI,” he says.

Everyone wants to become AI-centric or AI-native, says West Monroe’s Greenstein. “But nobody has extra buckets of money to do this unless it’s existential to their company,” he says. So moving money from legacy projects to AI is a popular strategy.

“It’s a shift of priorities within companies,” he says. “They look at their investments and ask how many are no longer needed because of AI, or how many can be done with AI. Plus, they’re putting pressure on vendors to drive down costs. They’re definitely squeezing existing suppliers.”

Even large, tech-forward companies might have to do this kind of juggling.

“We didn’t create a whole new allocation for AI,” says one senior tech executive at a Fortune 500 insurance company. “We’re still working through the mechanics of budgeting for AI.”

Instead, the firm is carving out funds from other areas.

“AI is in a self-funding model at the moment,” he says. “We’re shifting investment from legacy technologies to AI.” For example, he says, if the company was spending a million dollars on a particular technology and used automation to get it down to $900,000 a year, the $100,000 savings could go toward AI.

And sometimes the company can get new AI for free, he says, as vendors add AI functionality or agentic capabilities to existing products. But other platforms charge extra for the new features. “Some of it is inherent in the solution, though, and doesn’t really change the cost,” he says. That might evolve to new funding in 2026 to 2027, he adds. But as the company’s use of AI continues to mature, the funding model will evolve as well, he says.

“We’ll see that change as we demonstrate capabilities that either deliver high business value or efficiency gains,” he says. “Then we’ll shift to additional infusions of investment to accelerate things.”

Planning for the unexpected

Budgeting for IT projects has never been simple, but AI adds its own challenges. The unprecedented pace of change is one of them.

“Whatever modeling I do now is not going to be valid in six months,” says Sheldon Monteiro, chief product officer at Publicis Sapient. This isn’t always a bad thing. For example, the per token prices of some models have dropped dramatically over the past two years, he says. But on the flipside, there are always newer and better models, growing usage, and unpredictable performance.

“With traditional software economics, you have upfront costs like development, engineering, or infrastructure, but once you have those fixed costs, operating costs are relatively predictable and manageable,” he says. With AI, though, the inference costs are variable, and the guardrail and compliance checks might have additional costs, he says. Scaling is also non-linear and the tech itself is in constant flux.
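Monteiro’s contrast between fixed software economics and variable inference costs can be made concrete with a toy model. All figures below are illustrative assumptions, not real vendor pricing:

```python
# Illustrative contrast between fixed-cost software economics and
# usage-metered AI inference. All numbers are made-up assumptions.
FIXED_MONTHLY_OPEX = 10_000.0   # traditional system: flat operating cost
COST_PER_AI_REQUEST = 0.05      # metered inference: cost per call

def traditional_cost(requests: int) -> float:
    """Once built, cost barely moves with usage."""
    return FIXED_MONTHLY_OPEX

def ai_cost(requests: int) -> float:
    """Cost scales with every call, so budgets must track usage."""
    return requests * COST_PER_AI_REQUEST

def breakeven_requests() -> int:
    """Usage level where metered inference overtakes the fixed bill."""
    return int(FIXED_MONTHLY_OPEX / COST_PER_AI_REQUEST)

print(breakeven_requests())  # → 200000 requests/month under these assumptions
```

The point of the exercise is the shape, not the numbers: under a fixed-cost model a usage forecast that is off by 2x barely matters, while under metered inference it doubles the bill, which is why Monteiro argues budgets need room to flex.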

“You need to be able to flex,” says Monteiro, “And to recognize that now, winners and losers are hard to call.”

Another challenge to budgeting is the demands that AI places on people, systems, and data. One of the most significant challenges to managing AI costs is talent, says Principal’s Arora. “Skill gaps and cross-team dependencies can slow deliveries and drive up costs,” he says.

Then there’s the problem of evolving regulations, and the need to continuously adapt governance frameworks to stay resilient in the face of these changes. Organizations also often underestimate how much money will be needed to train employees, and to bring data and other foundational systems in line with what’s needed for AI.

“Legacy environments add complexity and expense,” he adds. “These one-time costs are heavy but essential to avoid long-term inefficiencies.”

Finally, when AI technology actually moves out of POCs into production, it often turns out very different from what companies expected.

“There are so many unknowns right now,” says Karen Panetta, IEEE fellow and dean of graduate engineering at Tufts University. “People think of it as a replacement for a human, and it’s not. And you get new areas you haven’t had to worry about before.” For example, many companies look to use AI agents to replace customer service or support teams.

“It’s really appealing,” she says. “You’ve got 10 people answering phone calls now, and it feels like AI is going to do the job of those 10 people. But I’ve designed it for normal process flows, so what about all the exceptions? Now you have angry customers, or it breaks and is unavailable. And what about security? Before, we had humans to detect these things.”

CIOs have to be thoughtful about what they’re doing with the AI and why, she says.

Many CIOs have already transitioned from managing costs and risks to managing data, becoming enablers of insight and getting closer to the business units. Now they’re in a position to become enablers of AI, doing it safely and cost-effectively.

“There are some CIOs that blocked and firewalled every AI tool the day it came out,” says West Monroe’s Greenstein. “That blocked companies from adoption. The ones who are progressive are being thoughtful, deliberate, are building governance models, and creating a new enterprise architecture around AI. The CIOs who are embracing that are enabling the enterprises of tomorrow.”

How the administration is bringing much needed change to software license management

Over the last 11 months, the General Services Administration has signed 11 enterprisewide software agreements under its OneGov strategy.

The agreements bring both standard terms and conditions as well as significant discounts for a limited period of time to agencies.

Ryan Triplette, the executive director of the Coalition for Fair Software Licensing, said the Trump administration seems to be taking cues from what has been working, or not working, in the private sector around managing software licenses.

Ryan Triplette is the executive director of the Coalition for Fair Software Licensing.

“They seem to be saying, ‘let’s see if we can import that into the federal agencies,’ and ‘let’s see if we can address that to mitigate some of the issues that have been occurring in some of the systemic problems that have been occurring here,’” said Triplette on Ask the CIO. “Now it’s significant, and it’s a challenge, but it’s something that we think is important that you understand any precedent that is set in one place, in this instance, in the public agencies, will have a ripple of impact over into the commercial sector.”

The coalition, which cloud service providers created in 2022 to advocate for less-restrictive rules for buying software, outlined nine principles it would like to see applied to all software licenses, including that terms should be clear and intelligible, that customers should be free to run their on-premise software on the cloud of their choice, and that licenses should cover reasonably expected software uses.

Triplette said that while there still is a lot to understand about these new OneGov agreements, GSA seems to recognize an opportunity to address some longstanding challenges with how the government buys and manages its software.

“You had the Department of Government Efficiency (DOGE) efforts and you had the federal chief information officer calling for an assessment of the top five software vendors from all the federal agencies. And you also have the executive order that established OneGov and having them seeking to establish these enterprisewide licenses, I think they recognize that there’s an opportunity here to effect change and to borrow practices from what they have seen has worked in the commercial sector,” she said. “Now there’s so many moving parts of issues that need to be addressed within the federal government’s IT and systems, generally. But just tackling issues that we have seen within software and just tackling the recommendations that have been made by the Government Accountability Office over the past several years is important.”

Building on the success of the MEGABYTE Act

GAO has highlighted concerns about vendors applying restrictive licensing practices. In November 2024, GAO found vendor practices that limit, impede or prevent agencies’ efforts to use software in cloud computing. Meanwhile, of the six agencies auditors analyzed, none had “fully established guidance that specifically addressed the two key industry activities for effectively managing the risk of impacts of restrictive practices.”

Triplette said the data call by the federal CIO in April and the OneGov efforts are solid initial steps to change how agencies buy and manage software.

The Office of Management and Budget and GSA have tried several times over the past two decades to improve the management of software. Congress also joined the effort, passing the Making Electronic Government Accountable By Yielding Tangible Efficiencies (MEGABYTE) Act in 2016.

Triplette said despite these efforts the lack of data has been a constant problem.

“The federal government has found that even when there’s a modicum of understanding of what their software asset management uses, they seem to find a cost performance improvement within the departments. So that’s been one issue. You have the differing needs of the various agencies and departments. This has led them in previous efforts to either opt out of enterprisewide licenses or to modify them with their own terms. So even when there’s been these efforts, you find, like, a year or two or three years later, it’s all a wash,” she said. “Quite frankly, you have a lack of a central mandate and appropriations line. That’s probably the most fundamental thing and why it also differs so fundamentally from other governments that have some of these more centralized services. For instance, the UK government has a central mandate, it works quite well.”

Triplette said what has changed is what she called a “sheer force of will” by OMB and GSA.

“They are recognizing the significant amount of waste that’s been occurring and that there has been lock-in with some software vendors and other issues that need to be tackled,” she said. “I think you’ve seen where the administration has really leaned into that. Now, what is going to be interesting is because it has been so centralized, like the OneGov effort, it’s still also an opt-in process. So that’s why I keep on saying, it’s still to be determined how effective it will be.”

SAMOSA gaining momentum

In addition to the administration’s efforts, Triplette said she’s hopeful Congress finally passes the Strengthening Agency Management and Oversight of Software Assets (SAMOSA) Act. The Senate ran out of time to act on SAMOSA last session, after the House passed it in December.

The latest version of SAMOSA mirrors the Senate bill the committee passed in May 2023. It also is similar to the House version introduced in March by Reps. Nancy Mace (R-S.C.), the late Gerry Connolly (D-Va.), and several other lawmakers.

The coalition is a strong supporter of SAMOSA.

Triplette said one of the most important provisions in the bill would require agencies to have a dedicated executive overseeing software license asset management.

“There is an importance and a need to have greater expertise within the federal workforce, around software licensing, and especially arguably, vendor-specific software licensing terms,” she said. “I think this is one area that the administration could take a cue from the commercial sector. When they’re engaged in commercial licensing, they tend to work with consultants that are experts in the vendor licensing rules, they understand the policy and they understand the ins and outs. They often have somebody in house that … may not be solely specific to one vendor, but they may do only two or three and so you really have that depth of expertise, that you can understand some great cost savings.”

Triplette added that while finding these types of experts isn’t easy, the return on the investment of either hiring or training someone is well worth it.

She said some estimate that the government could save $50 million a year by improving how it manages its software licenses. This is on top of what the MEGABYTE Act already produced. In 2020, the Senate Homeland Security and Governmental Affairs Committee found that 13 agencies saved or avoided spending more than $450 million between fiscal 2017 and 2019 because of the MEGABYTE Act.

“The MEGABYTE Act was an excellent first step, but this, like everything, [is] part of an iterative process. I think it’s something that needs to have the requirement that it has to be done and mandated,” Triplette said. “This is something that has become new as you’ve had the full federal movement to the cloud, and the discussion of licensing terms between on-premise and the cloud, and the intersection between all of this transformation. That is something that wasn’t around during the MEGABYTE Act. I think that’s where it’s a little bit of a different situation.”

The post How the administration is bringing much needed change to software license management first appeared on Federal News Network.

© Federal News Network


Yeske helped change what complying with zero trust means

The Cybersecurity and Infrastructure Security Agency developed a zero trust architecture that features five pillars.

The Defense Department’s zero trust architecture includes seven pillars.

The one the Department of Homeland Security is implementing takes the best of both architectures and adds a little more to the mix.

Don Yeske, who recently left federal service after serving for the last two-plus years as the director of national security in the cyber division at DHS, said the agency had to take a slightly different approach for several reasons.

Don Yeske is a senior solutions architect federal at Virtru and a former director of national security in the cyber division at the Homeland Security Department.

“If you look at OMB [memo] M-22-09 it prescribes tasks. Those tasks are important, but that itself is not a zero trust strategy. Even if you do everything that M-22-09 told us to do — and by the way, those tasks were due at the beginning of this year — even if you did it all, that doesn’t mean, goal achieved. We’re done with zero trust. Move on to the next thing,” Yeske said during an “exit” interview on Ask the CIO. “What it means is you’re much better positioned now to do the hard things that you had to do and that we hadn’t even contemplated telling you to do yet. DHS, at the time that I left, was just publishing this really groundbreaking architecture that lays out what the hard parts actually are and begins to attack them. And frankly, it’s all about the data pillar.”

The data pillar of zero trust is among the toughest ones. Agencies have spent much of the past two years focused on other parts of the architecture, like improving their cybersecurity capabilities in the identity and network pillars.

Yeske, who now is a senior solutions architect federal at Virtru, said the data pillar challenge for DHS is even bigger because of the breadth and depth of its mission. He said between the Coast Guard, FEMA, Customs and Border Protection and CISA alone, there are multiple data sources, requirements and security rules.

“What’s different about it is we viewed the problem of zero trust as coming in broad phases. Phase one, where you’re just beginning to think about zero trust, and you’re just beginning to adjust your approach, is where you start to take on the idea that my network boundary can’t be my primary, let alone sole line of defense. I’ve got to start shrinking those boundaries around the things that I’m trying to protect,” he said. “I’ve got to start defending within my network architecture, not just from the outside, but start viewing the things that are happening within my network with suspicion. Those are all building on the core tenets of zero trust.”

Capabilities instead of product focused

He said the initial zero trust strategy stopped there, at segmenting networks and protecting data at rest.

But in getting to this point, he said, agencies too often focus on implementing specific products around identity or around authentication and authorization processes.

“It’s a fact that zero trust is something you do. It’s not something you buy. In spite of that, federal architecture has this pervasive focus on product. So at DHS, the way we chose to describe zero trust was as a series of capabilities. We chose, without malice or forethought, to measure those capabilities at the organization, not at the system, not at the component, not as a function of design,” Yeske said. “Organizations have capabilities, and those capabilities are comprised of three big parts. People: Who’s responsible for the thing you’re describing within your organization? Process: How have you chosen to do the thing that you’re describing at your organization? And products: What helps you do that?”

Yeske said the third part is technology, which, too often, is intertwined with the product part.
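The people-process-technology framing Yeske describes could be captured as a simple record per capability. A minimal sketch — the field names and example values are illustrative assumptions, not DHS's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative only: the DHS architecture describes each capability in terms
# of people, process and products/technology; this structure is an assumption.
@dataclass
class Capability:
    name: str                 # what the organization must be able to do
    owner: str                # people: who is responsible for it
    process: str              # process: how the organization chooses to do it
    technologies: list = field(default_factory=list)  # products that help

mfa = Capability(
    name="Phishing-resistant multifactor authentication",
    owner="Identity and Access Management team",
    process="Hardware security keys required for all privileged logins",
    technologies=["hardware security keys", "SSO platform"],
)
print(mfa.name)
```

The point of the structure is that it is measured at the organization: every field describes something the organization does, not a product it bought.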

He said the DHS architecture moved away from focusing on product or technology, and instead tried to answer the simple, yet complex, questions: What’s more important right now? What are the things that I should spend my limited pool of dollars on?

“We built a prioritization mechanism, and we built it on the idea that each of those capabilities, once we understand their inherent relationships to one another, form a sort of Maslow’s hierarchy of zero trust. There are things that are more basic, that if you don’t do this, you really can’t do anything else, and there are things that are really advanced, that once you can do basically everything else you can contemplate doing this. And there are a lot of things in between,” he said. “We took those 46 capabilities based on their inherent logical relationships, and we came up with a prioritization scheme so that you could, if you’re an organization implementing zero trust, prioritize the products, process and technologies.”
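The "Maslow's hierarchy" idea — ordering capabilities so prerequisites always come before the things that need them — is essentially a topological sort over the dependency graph. A hedged sketch; the capability names and dependencies below are invented for illustration, not DHS's actual 46:

```python
from collections import deque

def prioritize(capabilities, depends_on):
    """Order capabilities so every prerequisite appears before the
    capabilities that depend on it (Kahn's topological sort)."""
    indegree = {c: 0 for c in capabilities}
    dependents = {c: [] for c in capabilities}
    for cap, deps in depends_on.items():
        for dep in deps:
            indegree[cap] += 1
            dependents[dep].append(cap)
    queue = deque(c for c in capabilities if indegree[c] == 0)
    order = []
    while queue:
        cap = queue.popleft()
        order.append(cap)
        for nxt in dependents[cap]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    return order

# Invented example: you can't do MFA without strong identity, and you
# can't microsegment what you haven't inventoried.
caps = ["asset inventory", "strong identity", "mfa", "microsegmentation"]
deps = {"mfa": ["strong identity"],
        "microsegmentation": ["asset inventory", "strong identity"]}
print(prioritize(caps, deps))
```

The "basic" capabilities fall out at the front of the ordering and the "advanced" ones at the back, which is exactly the prioritization behavior Yeske describes.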

Understanding cyber tool dependencies

DHS defined those 46 capabilities based on the organization’s ability to perform that function to protect its data, systems or network.

Yeske said, for example, with phishing-resistant multifactor authentication, DHS didn’t specify the technology or product needed, just the end result: the ability to authenticate users using multiple factors that are resistant to phishing.

“We’re describing something your organization needs to be able to do because if you can’t do that, there are other things you need to do that you won’t be able to do. We just landed on 46, but that’s not actually all that weird. If you look at the Defense Department’s zero trust roadmap, it contains a similar number of things they describe as capability, which are somewhat different,” said Yeske, who spent more than 15 years working for the Navy and Marine Corps before coming to DHS. “We calculated a 92% overlap between the capabilities we described in our architecture and the ones DoD described. And the 8% difference is mainly because the DHS one is brand new. So just understanding that the definition of each of these capabilities also includes two types of relationships, a dependency, which is where you can’t have this capability unless you first had a different one.”

Yeske said before he left DHS in July, the zero trust architecture and framework had been approved for use and most of the components had a significant number of cyber capabilities in place.

He said the next step was assessing the maturity of those capabilities and figuring out how to move them forward.

Yeske said other agencies interested in this approach should be able to obtain a copy of the DHS architecture.

The post Yeske helped change what complying with zero trust means first appeared on Federal News Network.


The 2023 Counter Ransomware Initiative Summit | Stepping Up Global Collaboration in Cybersecurity

Ransomware’s transformation from a targeted cybercrime to a significant threat to national security has increasingly drawn attention at international forums like the Counter Ransomware Initiative (CRI) Summit. The 2023 Summit, which brought together representatives from 50 countries, signifies a growing, yet cautious, acknowledgment of the need for collaborative strategies in tackling this complex issue.

In this post, we discuss the key findings emerging from the Summit, shedding light on the collective approach adopted by nations to combat the surge in ransomware attacks. We’ll delve into the role of advancing technologies such as Artificial Intelligence (AI) in fortifying cybersecurity measures, the pivotal role of information sharing in preempting attacks, and the strategic policy initiatives aimed at undermining the operational frameworks of ransomware syndicates.

Furthermore, we’ll reflect on the real-world challenges in countering adaptive cyber threats and highlight the recent law enforcement breakthroughs against notable ransomware groups. This post explores the steps being taken at an international level to address the ransomware menace and the ongoing efforts to shape a more resilient global cybersecurity infrastructure.

Building Collective Resilience Against Ransomware

Member countries gathered in Washington, D.C., from October 31 to November 1 to reinforce the need for a global front against the escalating ransomware crisis. Some of the key areas of discussion to emerge were:

  • Strengthening International Cooperation to Undermine Ransomware Operations:
    • The Summit emphasized the importance of unified efforts across nations. Recognizing that ransomware networks often transcend borders, it called for enhanced cross-border law enforcement collaboration.
    • Delegates discussed the standardization of legal frameworks and law enforcement protocols to ensure swift and coordinated action against ransomware syndicates.
    • The Summit also highlighted the need for streamlined processes for sharing intelligence and cyber forensics across countries to facilitate faster identification and neutralization of ransomware threats.
  • Tackling the Financial Underpinnings of the Ransomware Ecosystem:
    • A lot of discussion centered on disrupting the financial networks that fuel ransomware operations.
    • Experts and policymakers deliberated on strategies to trace and block the flow of ransom payments, which often involve cryptocurrencies and unregulated digital payment platforms.
    • There was a consensus on increasing collaboration with financial institutions and regulatory bodies to monitor and report suspicious transactions linked to ransomware activities.
  • Enhancing Public-Private Partnerships to Combat Ransomware Threats:
    • Recognizing the critical role of the private sector, particularly technology and cybersecurity firms, the Summit pushed for stronger partnerships between governments and private entities.
    • Discussions were held on creating frameworks for regular information exchange and threat intelligence sharing between public agencies and private companies.
    • The Summit also saw proposals for joint initiatives in developing advanced cybersecurity technologies, focusing on AI and machine learning, to stay ahead of ransomware tactics.

The Summit’s approach to building collective resilience against ransomware was multi-dimensional, acknowledging that tackling such a complex issue requires a blend of legal, financial, technological, and cooperative strategies. Concerted effort is needed to create a more robust and unified defense against the burgeoning threat of ransomware, which continues to challenge global security and economic stability.
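To make the financial-disruption strategy concrete, tracing ransom payments can be pictured as following edges in a transaction graph from a known ransom address toward cash-out points. A toy sketch — real blockchain tracing must contend with mixers, chain-hopping and exchange compliance, and every address below is made up:

```python
from collections import deque

# Hypothetical transfer graph: who each address has sent funds to.
transfers = {
    "victim_wallet": ["ransom_addr"],
    "ransom_addr": ["mixer_1", "mixer_2"],
    "mixer_1": ["exchange_acct"],
    "mixer_2": ["exchange_acct"],
}

def downstream(start):
    """Follow funds forward from a known ransom address (simple BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        addr = queue.popleft()
        for nxt in transfers.get(addr, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

print(sorted(downstream("ransom_addr")))
```

In practice the interesting output is the set of addresses held at regulated exchanges, where the collaboration with financial institutions discussed at the Summit can turn a graph traversal into an account freeze or a suspicious-activity report.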

The Evolving Role of AI in Cybersecurity

During the event, a significant spotlight was cast on using Artificial Intelligence (AI) and Machine Learning (ML) in the fight against ransomware. This focus underscores a broader shift in cybersecurity tactics, moving towards more proactive and adaptive defense mechanisms.

AI and ML: Enhancing Threat Detection and Response

  • Advanced Threat Detection: AI and ML algorithms can sift through vast data, identifying patterns and anomalies that may indicate a cybersecurity threat. This allows for early detection of potential ransomware attacks, even before they fully manifest.
  • Automated Response Systems: Integrating AI into cybersecurity systems creates the potential for automated responses to detected threats. This not only speeds up the reaction time but also helps mitigate the impact of attacks, especially in scenarios where every second counts.
  • Adapting to Evolving Threats: The dynamic nature of cyber threats, particularly ransomware, requires tools that can adapt and evolve. AI systems, with their learning capabilities, are well-positioned to meet this need. However, the effectiveness of these AI models in real-world applications is a continuous journey of refinement and improvement, given the ever-advancing tactics of cybercriminals.
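As a rough illustration of the threat-detection idea above — a toy statistical baseline, not any vendor's actual model — even a simple z-score check can flag the kind of anomalous spike that often precedes a ransomware deployment:

```python
import statistics

def anomalies(counts, threshold=2.0):
    """Return indexes of observations whose z-score exceeds the threshold.
    A stand-in for the ML-based anomaly detection discussed above."""
    mean = statistics.mean(counts)
    stdev = statistics.pstdev(counts)
    if stdev == 0:
        return []
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > threshold]

# Invented data: hourly counts of failed logins. The spike at index 5
# could indicate credential stuffing ahead of an intrusion.
hourly_failures = [4, 6, 5, 7, 5, 95, 6, 5]
print(anomalies(hourly_failures))
```

Production systems replace the z-score with learned models over many signals at once, but the shape of the problem — establish a baseline, flag deviations, act quickly — is the same.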

Sharing Information | Building a Proactive Defense Network

The CRI Summit also underscored the importance of information sharing in building a collective defense against ransomware.

Rapid Exchange of Threat Data

  • International Information Sharing Platforms: The establishment of platforms for quick and efficient sharing of threat intelligence among CRI members is a step towards a more unified global response to cyber threats.
  • Enhancing Anticipatory Capabilities: With timely access to shared intelligence, countries and organizations can better anticipate and prepare for potential ransomware attacks.
  • Real-World Application: The true test of these information-sharing initiatives lies in their implementation and effectiveness in diverse real-world scenarios. Ensuring these platforms are accessible, efficient, and secure will be crucial in maximizing their impact.
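Information-sharing platforms of the kind described above generally rest on a common record format so that indicators can be exchanged machine-to-machine. A hypothetical, heavily simplified sketch — real platforms typically use standards such as STIX/TAXII, and the fields here are assumptions:

```python
import hashlib
import json
from datetime import datetime, timezone

# Simplified, invented record format for exchanging an indicator of
# compromise; not a real STIX object.
def make_indicator(file_bytes, description, source):
    return {
        "type": "indicator",
        "sha256": hashlib.sha256(file_bytes).hexdigest(),
        "description": description,
        "source": source,
        "shared_at": datetime.now(timezone.utc).isoformat(),
    }

record = make_indicator(b"malicious payload sample",
                        "Dropper linked to a ransomware campaign",
                        "CRI member CERT")
print(json.dumps(record, indent=2))
```

The value of a shared schema is that every member can ingest, deduplicate and act on another member's indicators without bespoke integration work.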

Policy Initiatives and Ransomware Financing | Striking at the Core

A key outcome of the Summit was the formulation of decisive policy initiatives aimed at disrupting the financial lifeline of ransomware operations.

Disincentivizing Ransom Payments

  • No Ransom Payments: The CRI’s collective stance against paying ransoms aims to weaken the financial incentive for cybercriminals. This policy needs global support and enforcement to be effective.
  • Tracking Illicit Financial Transactions: The U.S. Treasury’s commitment to monitor and share information on illicit financial transactions is a strategic move to disrupt the economic foundations of ransomware operations.
  • Global Enforcement Challenges: Implementing these policies on a global scale presents challenges, particularly in jurisdictions with varying levels of cybercrime laws and enforcement capabilities. The effectiveness of these initiatives hinges on the cooperative efforts and compliance of all member states of the CRI.

Discussions at the Summit highlighted the need for collective effort against ransomware, the importance of AI in cybersecurity, the power of shared intelligence, and the value of robust policy measures. As these strategies are implemented, their real-world effectiveness and adaptability will play a crucial role in shaping the global response to the ransomware threat.

Conclusion

The 2023 Counter Ransomware Initiative (CRI) Summit marks a step in the right direction towards global collaboration against cyber threats. However, the reality remains that many organizations and critical infrastructures are still vulnerable, continuing to fuel the ransomware industry. Despite the advancements and strategic discussions at the Summit, the prevalence of these threats highlights the urgent need for comprehensive and proactive measures.

At SentinelOne, we have been harnessing the power of AI and machine learning for over a decade, staying ahead in the cybersecurity landscape. These technologies, crucial in the fight against ransomware, must be complemented by a stronger alliance between private and public sector leaders. Setting a new standard in cybersecurity and working towards eliminating ransomware as a viable attack method requires a unified effort that transcends individual strategies and recommendations.

If you are ready to experience the advanced protection that SentinelOne offers, our dedicated team is here to assist you. Request a demo and see firsthand how our solutions can safeguard your digital landscape against the evolving cyber threats of today and tomorrow.

SentinelOne Singularity XDR
Supercharge. Fortify. Automate. Extend protection with unfettered visibility, proven protection, and unparalleled response.

The post The 2023 Counter Ransomware Initiative Summit | Stepping Up Global Collaboration in Cybersecurity appeared first on SentinelOne DE.
