
When it comes to AI, not all data is created equal

Gen AI is becoming a disruptive influence on nearly every industry, but using the best AI models and tools isn’t enough. Everybody’s using the same ones. What really creates competitive advantage is the ability to train and fine-tune your own models, or to provide unique context to them, and that requires data.

Your company’s extensive code base, documentation, and change logs? That’s data for your coding agents. Your library of past proposals and contracts? Data for your writing assistants. Your customer databases and support tickets? Data for your customer service chatbot.

But just because all this data exists doesn’t mean it’s good.

“It’s so easy to point your models to any data that’s available,” says Manju Naglapur, SVP and GM of cloud, applications, and infrastructure solutions at Unisys. “For the past three years, we’ve seen this mistake made over and over again. The old adage garbage in, garbage out still holds true.”

According to a Boston Consulting Group survey released in September, 68% of 1,250 senior AI decision makers said the lack of access to high-quality data was a key challenge when it came to adopting AI. Other recent research confirms this. In an October Cisco survey of over 8,000 AI leaders, only 35% of companies had clean, centralized data with real-time integration for AI agents. And by 2027, according to IDC, companies that don’t prioritize high-quality, AI-ready data will struggle to scale gen AI and agentic solutions, resulting in a 15% productivity loss.

Losing track of the semantics

Another problem with using data that’s all lumped together is that the semantic layer gets confused. When data comes from multiple sources, the same type of information can be defined and structured in many ways. And as the number of data sources proliferates due to new projects or new acquisitions, the challenge increases. Even keeping track of customers, the most critical data type, and handling basic data issues is difficult for many companies.

Dun & Bradstreet reported last year that more than half of organizations surveyed have concerns about the trustworthiness and quality of the data they’re leveraging for AI. For example, in the financial services sector, 52% of companies say AI projects have failed because of poor data. And for 44%, data quality is their biggest concern for 2026, second only to cybersecurity, based on a survey of over 2,000 industry professionals released in December.

Having multiple conflicting data standards is a challenge for everybody, says Eamonn O’Neill, CTO at Lemongrass, a cloud consultancy.

“Every mismatch is a risk,” he says. “But humans figure out ways around it.”

AI can also be configured to do something similar, he adds, if you understand what the challenge is, and dedicate time and effort to address it. Even if the data is clean, a company should still go through a semantic mapping exercise. And if the data isn’t perfect, it’ll take time to tidy it up.

“Take a use case with a small amount of data and get it right,” he says. “That’s feasible. And then you expand. That’s what successful adoption looks like.”
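The semantic mapping exercise O’Neill describes can be made concrete with a small sketch. This is a hypothetical example, not any vendor’s tooling: the field names (`cust_nm`, `CustomerName`, and so on) and the `FIELD_MAPS` tables are invented for illustration, standing in for the per-source mappings such an exercise would produce.

```python
# Hypothetical sketch: normalizing the same "customer" concept from two
# source systems into one canonical schema before exposing it to an AI model.
# All field names and mappings here are illustrative, not from a real system.

CANONICAL_FIELDS = {"customer_name", "customer_id", "country"}

# Per-source rename tables produced by a semantic mapping exercise.
FIELD_MAPS = {
    "crm":     {"cust_nm": "customer_name", "cust_id": "customer_id", "ctry": "country"},
    "billing": {"CustomerName": "customer_name", "AccountNo": "customer_id", "Country": "country"},
}

def to_canonical(source: str, record: dict) -> dict:
    """Rename source-specific fields to the canonical schema, dropping unknowns."""
    mapping = FIELD_MAPS[source]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

crm_row = {"cust_nm": "Acme GmbH", "cust_id": "C-1001", "ctry": "DE"}
billing_row = {"CustomerName": "Acme GmbH", "AccountNo": "C-1001", "Country": "DE"}

# After mapping, both sources describe the customer identically.
assert to_canonical("crm", crm_row) == to_canonical("billing", billing_row)
```

Starting with one small use case, as O’Neill suggests, means these mapping tables stay reviewable by hand before the scope expands.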

Unmanaged and unstructured

Another mistake companies make when connecting AI to company information is to point AI at unstructured data sources, says O’Neill. And, yes, LLMs are very good at reading unstructured data and making sense of text and images. The problem is not all documents are worthy of the AI’s attention.

Documents could be out of date, for example. Or they could be early versions of documents that haven’t been edited yet, or that have mistakes in them.

“People see this all the time,” he says. “We connect your OneDrive or your file storage to a chatbot, and suddenly it can’t tell the difference between ‘version 2’ and ‘version 2 final.’”

It’s very difficult for human users to maintain proper version control, he adds. “Microsoft can handle the different versions for you, but people still do ‘save as’ and you end up with a plethora of unstructured data,” O’Neill says.
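One common mitigation for the “version 2” versus “version 2 final” problem is to deduplicate files before they reach the model. The sketch below is a minimal, assumed approach, not a production pipeline: the suffix-stripping rules are illustrative and real document stores need richer heuristics.

```python
# Hypothetical sketch: keep only the newest copy of each document family
# before ingestion, so "report v2" and "report v2 final" don't both reach
# the model. The normalization rules here are deliberately simplistic.
import re

def family_key(filename: str) -> str:
    """Strip version suffixes like ' v2', '_final', '(1)' to group related files."""
    name = filename.lower().rsplit(".", 1)[0]          # drop the extension
    name = re.sub(r"[\s_-]*(v\d+|final|draft|copy|\(\d+\))", "", name)
    return name.strip()

def latest_per_family(files: list[tuple[str, float]]) -> list[str]:
    """files is (filename, modified_timestamp); keep the newest in each family."""
    newest: dict[str, tuple[float, str]] = {}
    for fname, mtime in files:
        key = family_key(fname)
        if key not in newest or mtime > newest[key][0]:
            newest[key] = (mtime, fname)
    return [fname for _, fname in newest.values()]

files = [("report v2.docx", 100.0), ("report v2 final.docx", 200.0), ("budget.xlsx", 50.0)]
assert sorted(latest_per_family(files)) == ["budget.xlsx", "report v2 final.docx"]
```

Even a crude filter like this removes a large share of the stale drafts that otherwise confuse a chatbot connected to raw file storage.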

Losing track of security

When CIOs think about security as it relates to AI systems, they typically consider guardrails on the models, or protections around the training data and the data used for RAG embeddings. But as chatbot-based AI evolves into agentic AI, the security problems get more complex.

Say for example there’s a database of employee salaries. If an employee has a question about their salary and asks an AI chatbot embedded into their AI portal, the RAG embedding approach would be to collect only the relevant data from the database using traditional code, embed it into the prompt, then send the query off to the AI. The AI only sees the information it’s allowed to see and the traditional, deterministic software stack handles the problem of keeping the rest of the employee data secure.
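The chatbot-era pattern described above fits in a few lines. This is a minimal, hypothetical sketch: the `SALARIES` dict stands in for the HR database, and `build_prompt` is an invented name for the deterministic code that controls what the model sees.

```python
# Hypothetical sketch of the RAG-style pattern described above: deterministic
# code fetches only the asking employee's row and embeds it in the prompt,
# so the model never sees other employees' salaries.

SALARIES = {"emp_001": 68_000, "emp_002": 91_000}  # stands in for the HR database

def build_prompt(employee_id: str, question: str) -> str:
    # Traditional, deterministic access control: look up exactly one record.
    salary = SALARIES[employee_id]
    context = f"Employee {employee_id} has an annual salary of {salary}."
    return f"Context: {context}\nQuestion: {question}\nAnswer using only the context."

prompt = build_prompt("emp_001", "What is my current salary?")
# Other employees' data never reaches the model.
assert "68" in prompt and "91" not in prompt
```

The security guarantee lives entirely in the lookup step, which is exactly what disappears once an agent can query the database on its own.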

But when the system evolves into an agentic one, the AI agents can query the databases autonomously via MCP servers. Since they need to be able to answer questions from any employee, they require access to all employee data, and keeping it from getting into the wrong hands becomes a big task.

According to the Cisco survey, only 27% of companies have dynamic and detailed access controls for AI systems, and fewer than half feel confident in safeguarding sensitive data or preventing unauthorized access.

And the situation gets even more complicated if all the data is collected into a data lake, says O’Neill.

“If you’ve put in data from lots of different sources, each of those individual sources might have its own security model,” he says. “When you pile it all into block storage, you lose that granularity of control.”

Trying to add the security layer in after the fact can be difficult. The solution, he says, is to go directly to the original data sources and skip the data lake entirely.

The data lake’s original appeal, he notes, was different. “It was about keeping history forever because storage was so cheap, and machine learning could see patterns over time and trends,” he says. “Plus, cross-disciplinary patterns could be spotted if you mix data from different sources.”

In general, data access changes dramatically when instead of humans, AI agents are involved, says Doug Gilbert, CIO and CDO at Sutherland Global, a digital transformation consultancy.

“With humans, there’s a tremendous amount of security that lives around the human,” he says. “For example, most user interfaces have been written so if it’s a number-only field, you can’t put a letter in there. But once you put in an AI, all that’s gone. It’s a raw back door into your systems.”
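Gilbert’s “raw back door” point suggests an obvious countermeasure: re-impose the validation the UI used to provide, as a schema check on every agent tool call before it reaches the backend. The sketch below is illustrative; the tool schema and field names are invented.

```python
# Hypothetical sketch: validate agent tool-call arguments against a schema,
# restoring the "number-only field" checks a UI would have enforced.
# The schema is illustrative, standing in for a real tool definition.

SCHEMA = {"account_id": int, "amount": float}  # what this hypothetical tool accepts

def validate_tool_call(args: dict) -> dict:
    """Reject unexpected fields and wrong types before the call executes."""
    unknown = set(args) - set(SCHEMA)
    if unknown:
        raise ValueError(f"unexpected fields: {unknown}")
    for field, expected in SCHEMA.items():
        if not isinstance(args.get(field), expected):
            raise TypeError(f"{field} must be {expected.__name__}")
    return args

assert validate_tool_call({"account_id": 42, "amount": 10.5})
try:
    validate_tool_call({"account_id": "42; DROP TABLE", "amount": 10.5})
    raised = False
except TypeError:
    raised = True
assert raised  # a string where a number belongs is rejected, as the UI would have
```

In practice this layer sits between the agent runtime and every backend it can touch, so a malformed or malicious tool call fails closed rather than reaching the system of record.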

The speed trap

But the number-one mistake Gilbert sees CIOs making is they simply move too fast. “This is why most projects fail,” he says. “There’s such a race for speed.”

Too often, CIOs look at data issues as slowdowns, but all those things are massive risks, he adds. “A lot of people doing AI projects are going to get audited and they’ll have to stop and re-do everything,” he says.

So getting the data right isn’t a slowdown. “When you put the proper infrastructure in place, then you speed through your innovation, you pass audits, and you have compliance,” he says.

Another area that might feel like an unnecessary waste of time is testing. It’s not always a good strategy to move fast, break things, and then fix them later on after deployment.

“What’s the cost of a mistake that moves at the speed of light?” he asks. “I would always go to testing first. It’s amazing how many products we see that are pushed to market without any testing.”

Putting AI to work to fix the data

The lack of quality data might feel like a hopeless problem that’s only going to get worse as AI use cases expand.

In an October AvePoint report based on a survey of 775 global business leaders, 81% of organizations have already delayed deployment of AI assistants due to data management or data security issues, with an average delay of six months.

Meanwhile, not only does the number of AI projects continue to grow, but so does the amount of data. Nearly 52% of respondents also said their companies were managing more than 500 petabytes of data, up from just 41% a year ago.

But Unisys’ Naglapur says it’s going to become easier to get a 360-degree view of a customer, and to clean up and reconcile other data sources, because of AI.

“This is the paradox,” he says. “AI will help with everything. If you think about a digital transformation that would take three years, you can do it now in 12 to 18 months with AI.” The tools are getting closer to reality, and they’ll accelerate the pace of change, he says.

The tech leadership realizing more than the sum of parts

Waiting on replacement parts can be more than just an inconvenience. It can be a matter of sharp loss of income and opportunity. This is especially true for those who depend on industrial tools and equipment for agriculture and construction. So to keep things running as efficiently as possible, Parts ASAP CIO John Fraser makes sure end-customer satisfaction is the highest motivation to get the tech implementation and distribution right.

“What it comes down to, in order to achieve that, is the team,” he says. “I came into this organization because of the culture, and the listen first, act later mentality. It’s something I believe in and I’m going to continue that culture.”

Bringing in talent and new products has been instrumental in creating a stable e-commerce model, so Fraser and his team can help digitally advertise to customers, establish the right partnerships to drive traffic, and provide the right amount of data.

“Once you’re a customer of ours, we have to make sure we’re a needs-based business,” he says. “We have to be the first thing that sticks in their mind because it’s not about a track on a Bobcat that just broke. It’s $1,000 a day someone’s not going to make due to a piece of equipment that’s down.”

Ultimately, this strategy helps and supports customers with a collection of highly-integrated tools to create an immersive experience. But the biggest challenge, says Fraser, is the variety of marketplace channels customers are on.

“Some people prefer our website,” he says. “But some are on Walmart or about 20 other commercial channels we sell on. Each has unique requirements, ways to purchase, and product descriptions. On a single product, we might have 20 variations to meet the character limits of eBay, for instance, or the brand limitations of Amazon. So we’ve built out our own product information management platform. It takes the right talent to use that technology and a feedback loop to refine the process.”

Of course, AI is always in the conversation since people can’t write updated descriptions for 250,000 SKUs.

“AI will fundamentally change what everybody’s job is,” he says. “I know I have to prepare for it and be forward thinking. We have to embrace it. If you don’t, you’re going to get left behind.”

Fraser also details practical AI adoption in terms of pricing, product data enhancement, and customer experience, while stressing experimentation without over-dependence.

On consolidating disparate systems: You certainly run into challenges. People are on the same ERP system so they have some familiarity. But even within that, you have massive amounts of customization. Sometimes that’s very purpose-built for the type of process an organization is running, or that unique sales process, or whatever. But in other cases, it’s very hard. We’ve acquired companies with their own custom built ERP platform, where they spent 20 years curating it down to eliminate every button click. Those don’t go quite as well, but you start with a good culture, and being transparent with employees and customers about what’s happening, and you work through it together. The good news is it starts with putting the customer first and doing it in a consistent way. Tell people change is coming and build a rapport before you bring in massive changes. There are some quick wins and efficiencies, and so people begin to trust. Then, you’re not just dragging them along but bringing them along on the journey.

On AI: Everybody’s talking about it, but there’s a danger to that, just like there was a danger with blockchain and other kinds of immersive technologies. You have to make sure you know why you’re going after AI. You can’t just use it because it’s a buzzword. You have to bake it into your strategy and existing use cases, and then leverage it. We’re doing it in a way that allows us to augment our existing strategy rather than completely and fundamentally change it. So for example, we’re going to use AI to help influence what our product pricing should be. We have great competitive data, and a great idea of what our margins need to be and where the market is for pricing. Some companies are in the news because they’ve gone all in on AI, and AI is doing some things that are maybe not so appropriate in terms of automation. But if you can go in and have it be a contributing factor to a human still deciding on pricing, that’s where we are rather than completely handing everything over to AI.

On pooling data: We have a 360-degree view of all of our customers. We know when they’re buying online and in person. If they’re buying construction equipment and material handling equipment, we’ll see that. But when somebody’s buying a custom fork for a forklift, that’s very different than someone needing a new water pump for a John Deere tractor. And having a manufacturing platform that allows us to predict a two and a half day lead time on that custom fork is a different system to making sure that water pump is at your door the next day. Trying to do all that in one platform just hasn’t been successful in my experience in the past. So we’ve chosen to take a bit of a hybrid approach where you combine the data but still have best in breed operational platforms for different segments of the business.

On scaling IT systems: The key is we’re not afraid to have more than one operational platform. Today, in our ecosystem of 23 different companies, we’re manufacturing parts in our material handling business, and that’s a very different operational platform than, say, purchasing overseas parts, bringing them in, and finding a way to sell them to people in need, where you need to be able to distribute them fast. It’s an entirely different model. So we’re not establishing one core platform in that case, but the right amount of platforms. It’s not 23, but it’s also not one. So as we think about being able to scale, it’s also saying that if you try to be all things to all people, you’re going to be a jack of all trades and an expert in none. So we want to make sure when we have disparate segments that have some operational efficiency in the back end — same finance team, same IT teams — we’ll have more than one operational platform. Then through different technologies, including AI, ensure we have one view of the customer, even if they’re purchasing out of two or three different systems.

On tech deployment: Experiment early and then make certain not to be too dependent on it immediately. We have 250,000 SKUs, and more than two million parts that we can special order for our customers, and you can’t possibly augment that data with a world-class description with humans. So we selectively choose how to make the best product listing for something on Amazon or eBay. But we’re using AI to build enhanced product descriptions for us, and instead of having, say, 10 people curating and creating custom descriptions for these products, we’re leveraging AI and using agents in a way that allows people to build the content. Now humans are simply approving, rejecting, or editing that content, so we’re leveraging them for the knowledge they need to have, and whether this is going to be a good product listing or not. We know there are thousands of AI companies, and for us to be able to pick a winner or loser is a gamble. Our approach is to make it a bit of a commoditized service. But we’re also pulling in that data and putting it back into our core operational platform, and there it rests. So if we’re with the wrong partner, or they get acquired, or go out of business, we can switch quickly without having to rewrite our entire set of systems because we take it in, use it a bit as a commoditized service, get the data, set it at rest, and then we can exchange that AI engine. We’ve already changed it five times and we’re okay to change it another five until we find the best possible partner so we can stay bleeding edge without having all the expense of building it too deeply into our core platforms.

Agentic browsing: A real change with a big impact

Three weeks ago, a financial director at my company showed me the morning routine he had been doing for many days. Basically, he transferred data from our ERP to the cloud reporting platform. Every day, he spends an average of fifteen minutes copying, pasting and checking the format. That adds up to a lot of time wasted on a menial task…not to mention the risk of manual operations, which I think we are all familiar with.

When I showed him an example, very quickly, of how a navigation agent could execute the same sequence in two minutes, his expression went from amazement to concern: “What if it makes a mistake that I don’t detect until the end of the quarter?”

AI agents promise to eliminate the friction between intention and digital execution. But in doing so, they introduce a new entity into our infrastructure: autonomous, opaque and capable of acting with our credentials. The question is not whether we will adopt this technology (IDC projects that by 2028, more than 1.3 billion agents will automate business flows that are currently performed by humans), but whether we are prepared to govern it before the market forces us to do so under pressure.

ROI lies in resilience, not efficiency

I hear the prevailing discourse that AI agents should focus solely on saving time and reducing operating costs. I believe this narrative misses the true strategic value.

Sustainable ROI does not lie in doing what we already do faster. It lies in protecting revenue by mitigating systemic risk. According to New Relic’s 2025 Observability Forecast, the average cost of a high-impact IT outage is $2 million per hour. Organizations with full-stack observability in place cut that cost in half. A continuous monitoring agent detects problems that humans would never see until it’s too late, because it operates on a temporal and dimensional scale inaccessible to human cognition.

This distinction separates incremental automation (which improves margins) from systemic resilience (which protects revenue). CIOs who deploy agents seeking the first goal will find modest, short-term ROI. Those who build for the second will find lasting competitive advantage.

The contradiction that must now be resolved

Not all use cases justify web browsing. The correct architectural choice depends on the target system.

Web browsing is appropriate for:

  • systems that only offer a web interface
  • third-party SaaS without infrastructure control
  • decisions based on visual layout
  • manual cross-application workflows

Direct integration is superior for:

  • internal systems with documented APIs
  • structured backend data movement
  • latency-critical scenarios
  • infrastructure observability (logs, metrics, traces)

An observability agent validating microservices does not need a browser; it needs direct access to telemetry. An agent automating data entry in a legacy ERP that only offers a web interface does need one. This architectural clarity must be established before any purchasing decision or project initiative.

Terminology confusion that paralyzes decisions

The current market for “AI agents” suffers from marketing practices that systematically confuse terminology. In June 2025, Gartner projected that more than 40% of agentic AI projects will be canceled before the end of 2027. The causes: escalating costs without clear ROI, underestimated integration complexity and inadequate risk controls.

The root cause goes back further: the vast majority of what is sold as an “agent” is not. According to Gartner’s analysis at the end of 2024, of thousands of vendors claiming agentic capabilities, approximately 130 meet the technical criteria for genuine agents when evaluated against specific benchmarks for autonomy, adaptability and traceability. The rest practice “agent washing”: rebranding chatbots, RPA tools or automation flows without real autonomous planning capabilities.

Criteria to validate agentic AI in minutes

A genuine AI agent has five non-negotiable characteristics:

  1. Autonomous planning: it builds its own sequence of actions to achieve a goal. It does not follow a predefined decision tree.
  2. Tactical adaptability: it adjusts in real time to interruptions (pop-ups, captchas, interface changes) without stopping or requiring manual restart.
  3. Access to environment tools: it operates a virtual browser, terminal or command line like a human.
  4. Persistent memory: it maintains context across multiple sessions, learning from previous interactions.
  5. Auditable traceability: it provides a detailed step-by-step record of its reasoning and actions taken.

If a vendor cannot demonstrate these five capabilities working together during, say, a 15-minute demo with non-predefined tasks, it does not offer true agentic AI.
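The five criteria above amount to an all-or-nothing checklist, which a procurement team can encode directly. The sketch below is a hypothetical evaluation aid; the identifier names are invented and carry no standard meaning.

```python
# Hypothetical sketch: the five non-negotiable agent characteristics as a
# vendor-demo checklist. A product qualifies only if every capability was
# demonstrated, not a subset.

CRITERIA = [
    "autonomous_planning",     # builds its own action sequence toward a goal
    "tactical_adaptability",   # recovers from pop-ups, captchas, UI changes
    "environment_tools",       # drives a browser/terminal like a human would
    "persistent_memory",       # keeps context across sessions
    "auditable_traceability",  # step-by-step record of reasoning and actions
]

def is_genuine_agent(demonstrated: set[str]) -> bool:
    """True only if all five capabilities were shown working together."""
    return all(c in demonstrated for c in CRITERIA)

# Two capabilities out of five is "agent washing", not an agent.
assert not is_genuine_agent({"autonomous_planning", "environment_tools"})
assert is_genuine_agent(set(CRITERIA))
```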

Why the browser solves the integration problem

Agentic browsers are attracting strategic investment from all the big tech companies, including Google with Project Mariner (public demo, December 2024), Microsoft with Copilot Vision, Anthropic with Computer Use, and Perplexity with Comet, because they solve the fundamental problem of business integration.

Integrating AI with enterprise systems using APIs or custom connectors is complex, costly and fragile, even with MCP. The agentic browser circumvents this with a simple principle: if a human can access a system via a web interface and log in, so can the agent. It requires no public API, special vendor permissions or custom code.

This approach offers three critical advantages for organizations with heterogeneous infrastructure:

  • Direct access to authenticated content: emails, internal documents and pages that require a logged-in session.
  • Multidimensional context without configuration: open tabs, browsing history, partially completed forms.
  • Dramatic reduction in “technical plumbing”: eliminates months of integration work to orchestrate multiple legacy systems.

However, this architectural advantage introduces a new risk vector that must be managed with rigor comparable to that applied to employees with privileged access.

Risks that define the scope of responsible implementation

The autonomy of agents with access to authenticated content introduces operational risk that must be proactively managed. According to New Relic, the average annual exposure for high-impact disruptions can reach $76 million.

Operational risk matrix with specific controls

Methodology: Probabilities reflect operational experience from early adopters in 2024-2025. High: more than 30% of implementations experience the event in the first six months without controls. Medium: 10-30%. Low: under 10%. Implementing the controls significantly reduces these probabilities.

  • Tactical error in execution. Probability: high (initial). Impact: operational. Technical control: controlled environments (Windows 365 for Agents) with human-in-the-loop for critical decisions.
  • Accidental leak of PII. Probability: medium. Impact: legal (GDPR). Technical control: unique identity per agent (e.g., Entra Agent ID) with granular access policies and complete logging.
  • Wrong decision due to poor data. Probability: medium. Impact: financial. Technical control: data observability, validation of pre-decision inputs, automatic flagging of anomalies.
  • Unintended privilege escalation. Probability: low. Impact: security. Technical control: least privilege, periodic review of permissions, execution sandboxing.
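The human-in-the-loop control that the risk matrix pairs with tactical execution errors can be sketched as an approval gate: low-stakes actions run automatically, anything above a threshold is queued for a human. This is a hypothetical illustration; the threshold value and action fields are invented.

```python
# Hypothetical sketch of a human-in-the-loop gate: actions below an impact
# threshold execute autonomously, everything else escalates to a reviewer.
# Threshold and action structure are illustrative, not from any product.

APPROVAL_THRESHOLD = 1_000.0       # e.g. estimated financial impact in euros
pending_approvals: list[dict] = []  # queue a human reviewer works through

def execute(action: dict) -> str:
    if action["impact"] >= APPROVAL_THRESHOLD:
        pending_approvals.append(action)  # escalate: outside the autonomy budget
        return "queued_for_approval"
    return "executed"                     # inside the agent's autonomy budget

assert execute({"name": "update_record", "impact": 5.0}) == "executed"
assert execute({"name": "bulk_refund", "impact": 25_000.0}) == "queued_for_approval"
assert len(pending_approvals) == 1
```

The interesting design work is calibrating the threshold per action type, which is where the probability estimates in the matrix feed back into the controls.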

The regulatory imperative that separates leaders from followers

August 2, 2025, marked a critical date for organizations operating in the European Union or processing European citizens’ data. On that date, specific obligations of the EU AI Act for general-purpose AI (GPAI) model providers, related to copyright transparency and opt-out mechanisms, became enforceable under Article 53.

Agentic browsers that rely on scraping web sources for training or operation must have data pipelines that respect opt-outs and can demonstrate compliance. Organizations that build a legally clean data infrastructure will now have an insurmountable competitive advantage over those waiting for the first non-compliance notification. The fines are substantial: up to €15 million or 3% of global annual turnover, with fines of up to €35 million or 7% for prohibited practices.

Beyond compliance: Organizations that establish agent governance standards now, before regulatory mandates, will be positioned to influence the evolution of industry standards, a significant strategic asset.

The cultural change that no technology can automate

I return to the CFO’s initial question: “What if it makes a mistake that I don’t detect?”

The correct answer is not “they won’t make mistakes,” because they will. The correct answer is: “We design systems where agent errors are detectable before they cause irreparable damage, containable when they occur and recoverable through rollback.” In other words, we double-check the agents’ work.
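The detectable/containable/recoverable loop can be sketched in a few lines. This is a minimal, hypothetical illustration: the state dict and validator stand in for real systems of record and real business rules.

```python
# Hypothetical sketch of detect/contain/rollback: every agent step runs
# against a snapshot, is validated before committing, and is discarded
# (rolled back) if validation fails. State and validator are illustrative.
import copy

def run_step(state: dict, step, validate) -> dict:
    snapshot = copy.deepcopy(state)        # recoverable: keep a rollback point
    candidate = step(copy.deepcopy(state)) # the agent proposes a change
    if not validate(candidate):            # detectable: check before committing
        return snapshot                    # containable: the bad change is dropped
    return candidate

state = {"balance": 100}
good = lambda s: {**s, "balance": s["balance"] - 10}
bad = lambda s: {**s, "balance": -999}     # an agent mistake
valid = lambda s: s["balance"] >= 0

state = run_step(state, good, valid)
assert state["balance"] == 90
state = run_step(state, bad, valid)
assert state["balance"] == 90              # the error never persisted
```

Real deployments replace the deep copy with database transactions or versioned storage, but the contract is the same: no agent action commits without a validated path back.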

This requires a cultural change that no technology purchase can automate and that will determine which organizations capture sustainable value from this transformation.

  • The evolution of the professional role: the value of professionals no longer lies primarily in the transactional execution of copying, pasting and verifying, but in the orchestration of AI-augmented systems, the supervision of patterns and exceptions, and strategic decisions that require business, political and human context that cannot be encoded in models. This transition is structurally similar to the impact of industrial automation: human value does not disappear; it shifts to higher levels of abstraction and judgment.
  • The redefinition of supervision: Human supervision moves from the “inner loop” (manually supervising every action of the agent in real time) to the “outer loop” (supervising aggregate patterns, exceptions automatically flagged by observability systems and post-execution results). This change frees up cognitive capacity for higher-value work while maintaining accountability. But it requires new skills: interpreting agent behavior dashboards, calibrating confidence thresholds and designing effective escalation points.
  • The change management challenge: Organizations that treat agent adoption as a technical project will fail. Those that treat it as organizational transformation, investing in role redefinition, development of new oversight competencies and recalibration of performance metrics will build lasting capacity.

The question for every leader is: Is your organization investing as much in cultural readiness as in technical infrastructure?

The leadership decision that will define the next decade

AI agents are not the future; they are the present for organizations that decide to act while others remain inactive. The question is not whether your organization will adopt agents. It is whether you will adopt them as a leader that sets governance standards or as a late follower that accepts standards set by competitors.

For a manager, the imperative is clear: disciplined experimentation now, with limited use cases and robust governance, builds the organizational capacity that will be indispensable when adoption is no longer optional.

Not because the technology is perfect — it isn’t, and it won’t be in the immediate future.

It is because the pace of improvement is measurable and sustained, and organizations that build operational capacity now through disciplined experimentation will be positioned to capture value as the technology matures. Those who wait for absolute certainty will face the double disadvantage of competing against organizations with years of accumulated learning advantage and adopting under competitive pressure without time to develop internal expertise.

The CFO in our opening story implemented the agent. But only after we designed together the controls that allow him to sleep soundly: automatic validation, alerts for deviations and one-click rollback. His question was not about resistance to change. It was a demand for technical professionalism.

That demand must be our standard.

This article is published as part of the Foundry Expert Contributor Network.

Southeast Asia CIOs Top Predictions on 2026: A Year of Maturing AI, Data Discipline, and Redefined Work

As 2026 begins, my recent conversations with Chief Information Officers across Southeast Asia provided me with a grounded view of how digital transformation is evolving. While their perspectives differ in nuance, they converge on several defining shifts: the maturation of artificial intelligence, the emergence of autonomous systems, a renewed focus on data governance, and a reconfiguration of work. These changes signal not only technological advancement but a rethinking of how Southeast Asian organizations intend to compete and create value in an increasingly automated economy.

For our CIOs, the year ahead represents a decisive moment as AI moves beyond pilots and hype cycles. Organizations are expected to judge AI by measurable business outcomes rather than conceptual promise. AI capabilities will become standard features embedded across applications and infrastructure, fundamental rather than differentiating. The real challenge is no longer acquiring AI technology but operationalizing it in ways that align with strategic priorities.

Among the most transformative developments is the rise of agentic AI – autonomous agents capable of performing tasks and interacting across systems. CIOs anticipate that organizations will soon manage not a single AI system but networks of agents, each with distinct logic and behaviour. This shift ushers in a new strategic focus: agentic AI orchestration. Organizations will need platforms that coordinate multiple agents, enforce governance, manage digital identity, and ensure trust across heterogeneous technology environments. As AI ecosystems grow more complex, the CIO’s role evolves from integrator to orchestrator who directs a diverse array of intelligent systems.

As AI becomes more central to operations, data governance emerges as a critical enabler. Technology leaders expect 2026 to expose the limits of weak data foundations. Data quality, lineage, access controls, and regulatory compliance determine whether AI initiatives deliver value. Organizations that have accumulated “data debt” will be unable to scale, while those that invest early will move with greater speed and confidence.

Automation in physical environments is also set to accelerate as CIOs expect robotics to expand across healthcare, emergency services, retail, and food and beverage sectors. Robotics will shift from specialised deployments to routine service delivery, supporting productivity goals, standardizing quality, and addressing persistent labour constraints.

Looking ahead, our region’s CIOs point to the early signals of quantum computing’s relevance. While still emerging, quantum technologies are expected to gain visibility through evolving products and research. In my view, for Southeast Asian organizations, the priority is not immediate adoption but proactive monitoring, particularly in cybersecurity and long-term data protection, without undertaking premature architectural shifts.


Perhaps the most provocative prediction concerns the nature of work. As specialised AI agents take on increasingly complex task chains, one CIO anticipates the rise of “cognitive supply chains” in which work is executed largely autonomously. Traditional job roles may fragment into task-based models, pushing individuals to redefine their contributions. Workplace identity could shift from static roles to dynamic capabilities, a broader evolution in how people create value in an AI-native economy.

One CIO spotlights the changing nature of software development, where natural-language-driven “vibe coding” is expected to mature, enabling non-technical teams to extend digital capabilities more intuitively. This trend will not diminish the relevance of enterprise software, as both approaches will coexist to support different organizational needs.

CIO ASEAN Editorial final take:

Collectively, these perspectives shared by Southeast Asia’s CIO community point to a region preparing for a structurally different digital future, defined by embedded AI, scaled autonomous systems, and disciplined data practices. The opportunity is substantial, but so is the responsibility placed on technology leaders.

As 2026 continues to unfold, the defining question will not simply be who uses AI, but who governs it effectively, integrates it responsibly, and shapes its trajectory to strengthen long-term enterprise resilience. Enjoy reading these top predictions for 2026 from our region’s most influential CIOs, who are also our CIO100 ASEAN & Hong Kong Award 2025 winners:

Ee Kiam Keong
Deputy Chief Executive (Policy & Development)
concurrent Chief Information Officer
InfoComm Technology Division
Gambling Regulatory Authority Singapore
 
Prediction 1
AI will continue at the leading edge. Agentic AI in particular will become more popular and widely used, and AI governance, in terms of AI risks and ethics, will come into sharper focus.
 
Prediction 2
Quantum computing-related products should start to evolve and become more apparent.
 
Prediction 3
Deployment of robotic applications will widen, especially in medical care, emergency response, and everyday activities such as retail and food and beverage.
Ng Yee Pern
Chief Technology Officer
Far East Organization
 
Prediction 4
AI deployments will start to mature, as enterprises confront the disconnect between the inflated promises of AI vendors and the actual value delivered.
 
Prediction 5
Vibe coding will mature and grow in adoption, but enterprise software is not going away. There is plenty of room for both to co-exist.
Athikom Kanchanavibhu
Executive Vice President, Digital & Technology Transformation
& Chief Information Security Officer

Mitr Phol Group
 
Prediction 6
The Next Vendor Battleground: Agentic AI Orchestration
By 2026, AI will no longer be a differentiator; it will be a default feature, embedded as standard equipment across modern digital products. As every vendor develops its own agentic AI, enterprises will manage not one AI but an orchestra of autonomous agents, each optimized for its own ecosystem.
 
The new battleground will be agentic AI orchestration, where platforms can coordinate, govern, and securely connect agentic AIs across vendors and domains. 2026 won’t be about smarter agents, but about who can conduct the symphony best: safely, at scale, and across boundaries.
 
Prediction 7
Enterprise AI Grows Up: Data Governance Takes Center Stage
2026 will mark the transition from AI pilots to AI in production. While out-of-the-box AI will become common, true competitive advantage will come from applying AI to enterprise-specific data and context. Many organizations will face a sobering realization: AI is only as good as the data it is trusted with.
 
As AI moves into core business processes, data governance, management, and security will become non-negotiable foundations. Data quality, access control, privacy, and compliance will determine whether AI scales or stalls. In essence, 2026 will be the year enterprises learn that governing data well is the quiet superpower behind successful AI.
Jackson Ng
Chief Technology Officer and Head of Fintech
Azimut Group
 
Prediction 8
In 2026, organizations will see AI seeking power while humans search for purpose. Cognitive supply chains of specialized AI agents will execute work autonomously, forcing individuals to redefine identity at work, in service, and in society. Roles will disintegrate, giving way to a task-based, AI-native economy.

How tech and strategy align at Videojet

Legacy manufacturing environments are inherently complex. Deep technical expertise, global operations, and precision processes create a level of interdependence that makes transformation challenging to orchestrate. For CIOs, the task isn’t just about deploying new technologies, but untangling that complexity and evolving from old and deeply embedded ways of working.

When Aroon Sehgal joined Videojet Technologies as CIO last year, he became part of an organization with decades of technical excellence and a proud engineering culture. Videojet, a global leader in coding, marking, and printing solutions for product traceability, had long operated as part of healthcare company Danaher. Now, as a key business within Veralto, a $5 billion global tech leader focused on environmental and product quality solutions, Sehgal saw an opportunity to position technology as a source of differentiation and growth.

“When we were part of Danaher, Videojet was a rounding error,” Sehgal says. “Now under Veralto, we’re a meaningful part of the portfolio. That creates both visibility and accountability, and leadership is laser-focused on using technology to drive business outcomes.”

Tech moves to the center of strategy

Following Videojet’s most recent strategic planning cycle, one of the company’s top enterprise-wide initiatives focused on commercial excellence is being led by Sehgal himself. It marks the first time in company history that a technology executive has been chosen to lead one of its most critical strategic programs.

“Historically, these initiatives were owned by product or operations leaders,” Sehgal says. “The fact that technology is now seen as a primary driver of growth says everything about how the organization’s mindset has shifted.”

When he arrived, IT was viewed largely as a service provider. His first move was to rebrand the organization, both in name and purpose. IT became digital and technology solutions, or DTS, a deliberate signal that the function would no longer operate in the background. “We needed to recast technology,” he adds. “That meant aligning to our three most important outcomes: growth, margin expansion, and productivity.”

Embedding tech in the business

To make that shift real, Sehgal restructured how technology partners with the business. His team introduced geography-based business engagement leads, each embedded with regional leadership to ensure direct input into business decisions instead of hearing technology needs second or third hand. He also elevated leaders to run new centers of excellence around Videojet’s most strategic capabilities, including data and AI, e-commerce and web, and ERP transformation.

“It’s about being deliberate,” Sehgal says. “You can’t extract long-term value from AI or automation without first fixing your data strategy and governance. We’re laying the foundation for what I call the multi-agentic future, where workflows are increasingly autonomous.”

Laying the foundation for AI and automation

That foundation is already producing results. In partnership with Sehgal’s team, Videojet is piloting AI and ML applications across multiple fronts. In operations, they’re deploying ML to optimize production scheduling, and improve inventory forecasting and planning. The goal is to digitize their sales and operations planning process using a unified data set.

On the commercial side, Videojet has implemented AI-powered translation tools to create marketing content at scale across global markets, and is working with a startup to design an AI-first ERP system that automates order intake. At the same time, tools like Microsoft Copilot and ChatGPT Enterprise are being deployed widely to improve productivity across the organization.

“We’re not limiting experimentation,” Sehgal says. “Teams across R&D and operations are exploring large language models, and our job is to make sure they have the right data and governance in place to scale.”

Speaking the language of business

Still, Sehgal knows that even the most elegant technology story won’t land unless it’s translated into business terms. “You can’t walk into a leadership meeting and talk about APIs and architectures,” he says. “You have to talk about how technology contributes to growth and profitability.”

Every initiative under his watch is evaluated through a commercial lens, with clear visibility into how it supports both the customer and the company’s strategic and financial goals. Sehgal and his team also forecast how their programs will translate to earnings per share, giving leadership a tangible measure of technology’s targeted contribution to enterprise value. “When we model the impact of our initiatives, we express that impact in business terms that everyone in the organization understands,” he says. “That’s how technology earns its credibility.”

Lessons for tech leaders in legacy industries

For Sehgal, Videojet’s vision for technology holds lessons for every CIO navigating a legacy environment. His advice, shaped by leadership roles held at manufacturing giants Terex, ESAB, and ITT Inc., begins with identifying the business pain points where tech can drive the greatest impact. “In manufacturing, you have to know what holds the business back: labor intensity, asset dependency, supply chain complexity,” he says. “Then, pinpoint where technology can make a difference.”

Building credibility early is equally essential. “The business has to see you as a peer, not a service provider,” he adds. “And you can’t have your CFO reading about a breakthrough before you do.”

Above all, Sehgal believes technology leaders have to be willing to take risks. “In manufacturing or any legacy organization, you have to put skin in the game,” he says. “If you want to drive change, you need to be willing to take on the tough initiatives, own them, and deliver results.” In an industry where efficiency often surpasses innovation, Sehgal is positioning technology to be at the core of a strategy that blends Videojet’s track record of operational rigor with forward-looking ambition, grounded in the language of the business, and aimed squarely at customer growth and innovation. “Ultimately, our success will be measured not by how digital we are, but by how much we move the business forward,” he says.

Aiming beyond the cloud: Arm creates new ‘physical AI’ unit

Chip designer Arm has created a new “physical AI” unit focused on robotics and automotive systems, a sign that enterprise AI is moving out of cloud-centric data centers and into machines that operate in the physical world.

According to Reuters, the reorganization restructures Arm’s business into three core groups: a cloud and AI technology division; an edge products division covering smartphones, PCs, and similar devices; and a new physical AI division combining automotive and robotics.

The decision aligns with robotics technology moving past the pilot stage and into real-world deployment. Autonomous systems are being introduced in factories, warehouses, and logistics operations, environments where real-time decision-making matters more than raw compute performance.

As a result, AI workloads are shifting to the edge, and CIOs must increasingly prioritize device stability and reliability over cloud scalability.

What it means for enterprises

Arm’s reorganization reflects a fundamental shift in how computing resources and architectures are designed around robotics and automotive systems.

“In the three years since ChatGPT appeared, the industry has moved from generative AI through agentic AI to physical AI. Connecting digital agents to physical robots requires massive investment in synthetic data,” said Neil Shah, VP of research at Counterpoint Research. “Unlike agentic AI, which can be trained on text or code, physical AI requires ‘world models’ trained on high-resolution video and physics simulation.”

Shah explained that training robots across diverse real-world scenarios requires enterprises to design, in advance, infrastructure capable of handling heavy, simulation-centric workloads.

Physical AI is also changing where AI workloads run. Arm is focused on moving inference and control to edge and on-device environments, particularly for real-time systems such as robotics.

“These workloads demand ultra-low latency, energy efficiency, and resilience, which centralized cloud cannot always deliver, so hybrid architectures are needed,” said Biswajeet Mahapatra, principal analyst at Forrester. “The right approach is to handle inference and control at the edge and on-device using Arm-based accelerators, while leaving training and large-scale analytics in the cloud.”

Networking is also emerging as a key factor. Physical AI systems need predictable, low-latency connectivity to coordinate sensors and controllers in real time, a requirement that is especially pronounced in factories and warehouses. Many enterprises are therefore likely to revisit their industrial network strategies around technologies such as private 5G, Wi-Fi 7, and time-sensitive networking (TSN).

“This is not a move to replace the cloud but a rebalancing of roles,” said Manish Rawat, semiconductor analyst at TechInsights. “The cloud will remain the center of training and coordination, while Arm-based edge and on-device environments take on real-time perception, decision-making, and physical action.”

How CIOs should prepare

Preparing for physical AI requires change across the technology stack. “IT leaders should optimize operating systems, AI frameworks, and container platforms for Arm architectures, and strengthen security and lifecycle management for distributed robotics systems,” said Forrester’s Mahapatra. “Running pilot projects with Arm-based robot applications helps validate performance and integration potential before scaling.”

Rawat said enterprises should view robotics and physical AI not as limited operational technology (OT) experiments but as an extension of the core IT stack. “Applications need to be designed with a clear separation between training, orchestration, and real-time execution, so components can move without friction between the cloud and Arm-based edge or device platforms,” he explained.

The advice points to an emerging view among enterprises of robotics and physical AI as a long-term infrastructure investment rather than a one-off automation project.

Arm’s enterprise strategy

As the center of AI spending shifts from the cost of generating tokens to the cost of accurate real-time decision-making in physical environments, Arm aims to use its physical AI unit to design highly optimized architectures.

“Arm is designing an end-to-end architecture to support decision-making in the field,” Shah said. “Standardizing on Arm across servers and robots enables a seamless compute fabric that can move AI models from cloud to edge without rebuilding the underlying software stack.” In other words, adopting Arm as the standard can reduce fragmentation across device types, simplify developer skills, and improve workload portability from data center to edge to machine.

Rawat added a caveat, however: “The risk lies less in vendor lock-in itself than in enterprises becoming more exposed to Arm’s licensing policies and future technology direction if it expands into chip design.”

For most enterprises, adoption will be gradual. CIOs are likely to start with limited deployments in controlled environments such as factories and warehouses, then expand robotics and autonomous systems across the organization.
dl-ciokorea@foundryco.com

Perfumes just ‘for you’ and 600 makeup shades: AI for personalized perfumery and cosmetics, and a more resilient industry

Some people are so loyal to certain perfumes that, for those close to them, the scent becomes forever linked to that person. When they pass someone on the street wearing those same olfactory notes, they think of their person of reference. Strictly speaking, the fragrance isn’t exactly theirs, though that line could be crossed at any moment thanks to technology. If artificial intelligence is revolutionizing other industries, it is already doing the same, as the sector confirms, in cosmetics and perfumery, opening the door to products personalized and tailored to each individual.

The perfumery and cosmetics industry has a notable global economic impact, arguably because it cuts across different demographics. In perfume alone, worldwide spending reaches $56.75 billion, according to Grand View Research, and will climb to $78.85 billion by the end of the decade. Add spending on cosmetics and hygiene products, such as soaps and shampoos, for the full picture of global investment in the sector.

In Spain, according to the latest figures from Stanpa, the national perfumery and cosmetics association, the industry accounts for 1.03% of Spanish GDP. Spain consumes €11.2 billion of these products a year, but it also exports a significant share of what it produces. Exports by Spanish brands grew at a rate of 23% in 2024.

The potential of AI

From the outside, fragrances, makeup, and even hygiene products tend to evoke something almost artisanal, driven by emotion and artistic impulse. In reality, this is a sector with a great deal of science, innovation, and technology. Artificial intelligence is one of its emerging pieces.

And, given that association with a kind of creative genius, is it hard to integrate AI into the corporate culture? “As with any major technological transformation, adopting AI is a cultural challenge,” explains Marc Ortega Aguasca, director of data and AI at Bella Aurora Labs. “In our case, we have worked from the start so these tools aren’t perceived as an ‘IT toy,’ but as a real business enabler.” The company has relied on “active listening to the needs of each area and jointly building solutions that deliver tangible value,” he notes. AI thus becomes part of the company’s “creative culture, as an ally and not a substitute.”

Bella Aurora Labs’ experience is a clear example of something the wider industry is perceiving. AI has a “strategic role” and “growing importance,” as participants concluded at a sector event on the technology organized by Stanpa this December. Artificial intelligence is becoming a “lever for competitiveness, efficiency, and industrial modernization.” The use cases are fairly similar to those in other sectors: AI automates tasks, performs data analytics, improves traceability and efficiency, fine-tunes the logistics chain, and supports compliance.

At the same time, it is entering areas unique to the industry, such as improving formulations, quality control, accelerating launches, and work in marketing and customer service. Bella Aurora, for example, has just deployed an internal chatbot that answers natural-language queries about company data. “This frees data owners from repetitive support tasks and, at the same time, gives users faster answers,” says Ortega Aguasca.

Sustainability is another area where the industry sees potential for artificial intelligence. “AI will also have a decisive impact on sustainability, making it possible to simulate environmental scenarios, optimize circular supply chains, and make data-driven decisions about materials, packaging, and logistics,” says Adrià Martínez, general director of the Beauty Cluster, an association of cosmetics, perfumery, and personal care companies.

As Stanpa maintains, the technology is already generating real value in the industry. “AI is already present in the IT infrastructure of many cosmetics and perfumery companies, not just in marketing or sales, but across the board,” Martínez confirms.


The era of hyper-personalization

At the same time, AI is positioned as one of the keys to keeping pace with market developments and consumer preferences. One of the sector trends for 2026, according to Beauty Cluster projections, will be hyper-personalization, the pursuit of what is unique and one’s own. In short, that person with a fragrance that smells like them now wants olfactory notes that are literally theirs alone.

“Hyper-personalization is no longer an aspirational concept but an operational reality,” explains Martínez. The sector is already seeing it in “AI-based skin diagnostics, intelligent recommenders in e-commerce, and virtual assistants capable of adapting messages, routines, and offers in real time based on user behavior, context, and historical data.”

Products now respond specifically to what you want. Nor is this merely future potential; it is already on offer in sales channels. One example is Skinceuticals Custom DOSE, available at only eight points of sale in Spain, which uses an assessment to create a personalized serum; another is Tonework, a foundation from South Korea’s Amorepacific with more than 600 color options. “In Spain we already see brands and companies using AI facial scanners, advanced questionnaires, and predictive models to design personalized routines and adjust assortment and digital promotions,” says Martínez. He adds that this work is not limited to customer experience; it extends behind the scenes. “We already know of cases among our members where it is also applied to formulation processes, production planning, stock management, and logistics, adapting batches, timing, and resources to increasingly fragmented demand,” he says.

The sector is equally well aware of the potential challenges of this bet. “We are at a preliminary, but absolutely necessary, stage for tackling hyper-personalization with guarantees,” says Ortega Aguasca of his company’s work. “To extract maximum value from the models that can drive our personalization strategy, we believe it is essential to first have a solid, well-structured data policy.” Success comes from feeding AI good data, and from doing so securely.

After all, getting it right matters even more looking ahead, as the industry assumes hyper-personalization will keep growing and AI will therefore play an even more central role.


New opportunities

“Everything suggests that cosmetics and perfumery will evolve toward increasingly made-to-order models, in both product and service,” says Martínez. Products will be tailored to “each person, moment, and lifestyle,” which will demand greater production flexibility and “much smarter supply chains.” “AI will act as the ‘brain’ connecting data, formulation, production, and logistics in real time,” he summarizes.

This intense personalization will also bring experiences now considered ultra-luxury (those one-of-a-kind perfumes, for instance) to much broader audiences. Technology is democratizing access. “This will transform perfumery into an intimate but scalable experience, combining sensory exclusivity with industrial efficiency,” he says. The potential is vast, from co-creating fragrances via interactive platforms to adjusting formulas to respect the needs of each skin type.

Made-to-order products are the juiciest headline, but they are not the only potential the sector sees in AI or in integrating other tools, such as the boom in wearables or the robotization of warehouses. Technology is seen “as a strategic engine of differentiation and competitiveness for the sector.” “Whether or not we’re talking about artificial intelligence, technology today is an indispensable pillar for standing out in our sector and continuing to grow as a leading company,” says Ortega Aguasca. Different IT tools identify growth levers and improve operational efficiency.

“Technology is one of the main factors allowing the perfumery and cosmetics sector to build resilience in an increasingly complex and uncertain environment,” adds Martínez. It also drives innovation. “Biotechnology enables more effective and sustainable formulas, augmented reality improves the shopping experience and reduces returns, and the use of sensors and IoT [the internet of things] enables continuous monitoring of industrial processes,” he notes.

“The era of IT management is over”: 7 ways the CIO role will change in 2026

As AI-driven enterprise transformation accelerates, the CIO’s standing is set to rise. Everything is changing, from data pipelines and technology platforms to vendor selection, employee training, and even core business processes, and the CIO sits at the center of orchestrating the enterprise’s path to the future.

In 2024, technology leaders wrestled with whether AI actually works and how to adopt it; in 2025, the core question was what the best use cases for the new technology are. 2026 is different. Attention now shifts to scale and to a fundamental transformation of how work gets done: using AI to change how employees, organizations, and ultimately entire companies actually operate. Whatever role IT was once perceived to play, it has become the engine driving organizational change.

Here are seven ways the CIO role will change over the next 12 months.

“Enough experimenting”: now it’s time to create value

Eric Johnson, CIO of incident management company PagerDuty, believes AI will make the CIO role better in 2026, with enormous business value and opportunity on the table.

“It’s like owning a mine full of gold or precious minerals without being sure how to extract its full value,” Johnson says. The mandate he is hearing, he says, is to “find meaningful value from AI, building on the learning accumulated over the past few years.”

The degree of difficulty has gone up, though, because the pace of change is far faster than before. “Generative AI 12 months ago is completely different from generative AI today,” Johnson adds. “Business leaders watching that change are starting to encounter use cases they hadn’t even heard of a few months ago.”

From ‘IT manager’ to ‘business strategist’

Traditionally, enterprise IT organizations have focused on providing technology support for other departments. “You tell us your requirements, and we build it,” is how Marcus Murph, partner and head of technology consulting at KPMG US, describes the model.

But IT is shifting from back-office order-taker to a partner that co-designs innovation. “For at least the next decade, technology change will be too drastic to go back to the back office,” Murph says, calling it “the fastest hyper-accelerated change cycle since the internet or mobile eras, and perhaps beyond.”

Leadership in change management

As AI changes how work gets done, there are growing calls for CIOs to move beyond technology adoption and take the lead in change management.

“Much of the conversation centers on how to implement AI solutions, how to make them work, and what value they add,” says Ryan Downing, VP and CIO of enterprise business solutions at financial services firm Principal Financial Group. “But AI is actually transforming today’s workplace and fundamentally changing how everyone works.”

Downing foresees a shock that will force a redefinition of roles, expertise, and even the value proposition of work people have done for years. “The technology we’re bringing in is creating the future of work itself,” he says. “We have to be catalysts for change, not just for technology.”

Change management starts inside the IT organization. “Software development is where AI adoption is furthest along, and the tools have been around comparatively longest. The impact of giving AI agents to software developers is very clear,” says Matt Kropp, managing director and CTO at Boston Consulting Group (BCG).

Lessons from the transformation IT experiences first can then be extended to other business units. “What’s happening in AI-based software development is a canary in the coal mine,” Kropp says. “It’s an opportunity to capture productivity gains while building a change management system that can be reused across the enterprise. And that starts with the CIO.”

Best practices at the top of the organization also matter. “Leaders should use AI themselves and show how they apply it to their work, sending the message that AI use is allowed, accepted, and expected,” Kropp advises. CIOs and executives can broaden usage by drafting memos with AI, summarizing meeting minutes, and using it to help shape strategy.

Enterprise-wide AI deployment, however, can become a contentious issue. “Companies spend a lot of time understanding the customer experience, but few focus on the employee experience,” says Carnegie Mellon University professor Ari Lightman. “When you launch an enterprise AI system, some people support it and get excited, but others want to break it. If you don’t solve the problems employees actually have, the project can grind to a halt.”

Data readiness as a precondition for scale

As AI projects scale, so do their data demands. Limited, hand-picked data is no longer enough; companies that have not yet modernized their IT must overhaul their data stacks into a form AI can readily use, while also locking down security and compliance.

“To create value from AI, we’re shoring up our data foundation first and making sure the necessary infrastructure is in place,” says Aaron Rucker, VP of data at Warner Music.

Security concerns grow as AI agents become able to autonomously explore and query data sources. In small pilots or embedded RAG deployments, developers strictly curated which data got attached to prompts, but in the agent era, human control can weaken or disappear. The conclusion: controls have to be applied more tightly to the data itself rather than to the application.
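The principle of attaching controls to the data rather than the application can be sketched in a few lines. This is a minimal illustration with hypothetical names (`Document`, `DataLayer`, the label sets), not any vendor’s API: every query an agent issues is filtered by the caller’s entitlements at the data layer itself, so the chatbot never has to be trusted to pre-curate what it retrieves.

```python
# Minimal sketch (hypothetical names): enforcing access control at the data
# layer, so every retrieval an AI agent makes is filtered by the caller's
# entitlements rather than trusting the application to pre-curate documents.
from dataclasses import dataclass, field


@dataclass
class Document:
    doc_id: str
    body: str
    labels: frozenset  # e.g. {"public"}, {"finance"}


@dataclass
class DataLayer:
    docs: list = field(default_factory=list)

    def query(self, text: str, entitlements: frozenset) -> list:
        """Return only documents whose every label the caller may see.

        The filter lives here, next to the data, not inside the chatbot."""
        return [
            d for d in self.docs
            if text.lower() in d.body.lower() and d.labels <= entitlements
        ]


store = DataLayer([
    Document("d1", "Q3 revenue summary", frozenset({"finance"})),
    Document("d2", "Public press release on Q3", frozenset({"public"})),
])

# An agent acting for a marketing user sees only what that user could see.
visible = store.query("q3", entitlements=frozenset({"public"}))
print([d.doc_id for d in visible])  # only "d2" passes the data-layer filter
```

However an agent phrases its request, the entitlement check runs at the store, which is the property Rucker’s warning about chatbot leaks depends on.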

“You’ll want to use AI to move faster, but at the same time you have to get permissions right so your most important assets don’t leak because someone typed them into a chatbot,” Rucker stresses.

Build it yourself, or buy it as a service

In 2026, deciding whether to build or buy AI will carry far more weight than before. In many cases a vendor can deliver faster, better, and cheaper, and when better technology emerges, switching is easier than replacing a system built in-house from scratch.

Some processes, on the other hand, can be core value and the foundation of competitiveness. “For us, HR isn’t a competitive advantage. Workday is better positioned to build something compliant,” Rucker says. “There’s no reason for us to build that ourselves.”

“But there are areas where Warner Music can create strategic advantage, and defining what that advantage is from an AI perspective will matter,” he adds. “You shouldn’t do AI for AI’s sake. You have to tie it to business value that reflects company strategy.”

Entrusting core processes to an outside vendor also carries the risk that the vendor ends up understanding the industry more deeply than the incumbents do. “When you digitize a business process, you accumulate behavioral capital, network capital, and cognitive capital. It unlocks things that used to exist only in employees’ heads,” explains John Sviokla, executive fellow at Harvard Business School and co-founder of GAI Insights.

Many companies already trade their behavioral capital with Google and Facebook, and their network capital with Facebook and LinkedIn. “Trading cognitive capital for cheap inference or cheap access to technology is a very bad deal,” Sviokla warns. “Even if the AI companies or hyperscalers aren’t in that business today, you’re handing them a starter kit for understanding it. If they decide the opportunity is big enough, they can pour in billions of dollars and enter the market.”

Platform choices where flexibility matters

As AI moves from one-off PoCs and pilots to enterprise-wide rollout, companies face the task of choosing AI platforms. “Things are changing so fast that we don’t yet know who the long-term leaders will be,” says Principal’s Downing. “We’ll start placing meaningful bets, but we’re not at the stage where we can pick one and call it done.”

The key is choosing platforms that are scalable yet decoupled, so the business can pivot quickly while still capturing value. “Right now, flexibility is our top priority,” Downing stresses.

Brett Greenstein, chief AI officer at management consultancy West Monroe Partners, advises CIOs to design their platforms by separating the stable elements from the fast-moving ones. “Keep AI close to the cloud; the cloud will be stable,” he says. “But AI agent frameworks can change in six months, so design to avoid lock-in to any one framework and make sure you can integrate with any of them.”
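One common way to achieve that kind of decoupling is a thin adapter layer. The sketch below is illustrative only, with hypothetical names (`AgentRunner`, `VendorAFramework`, `close_ticket`): business code depends on a small, stable interface, and only the adapter knows the vendor-specific calls, so swapping frameworks means writing a new adapter rather than rewriting callers.

```python
# Minimal sketch (hypothetical names): isolating business logic behind a
# stable interface so the underlying agent framework can be swapped out.
from typing import Protocol


class AgentRunner(Protocol):
    """The stable interface business code depends on."""
    def run(self, task: str) -> str: ...


class VendorAFramework:
    """Stand-in for whichever agent framework is current this quarter."""
    def execute_task(self, prompt: str) -> str:
        return f"[vendor-a] handled: {prompt}"


class VendorAAdapter:
    """Only the adapter knows vendor-specific method names and shapes."""
    def __init__(self, fw: VendorAFramework):
        self.fw = fw

    def run(self, task: str) -> str:
        return self.fw.execute_task(task)


def close_ticket(runner: AgentRunner, ticket_id: str) -> str:
    # Business code calls the interface, never the framework directly.
    return runner.run(f"summarize and close ticket {ticket_id}")


result = close_ticket(VendorAAdapter(VendorAFramework()), "T-1001")
print(result)
```

When the six-month framework churn Greenstein describes arrives, only a new adapter class has to be written; `close_ticket` and everything above it stay untouched.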

Greenstein adds that CIOs should build tomorrow’s infrastructure carefully and deliberately, including establishing governance models.

Generating revenue

AI is likely to reshape business models across industries. For some companies that is a threat; for others, an opportunity. When CIOs help create AI-based products and services, IT can become a revenue-generating organization rather than a cost center.

“Most IT departments will start building technology products that create value in the market, changing how things are made, how services are delivered, even how products are sold in stores,” says KPMG’s Murph. As IT moves closer to the customer, its standing within the organization grows, he explains. “IT used to be a step removed from the customer, providing the technology so other departments could sell products and services. In the AI era, the CIO and IT build the product. It’s a shift from service orientation to product orientation.”

The change is already under way. “We build products in-house and offer them to hospital systems and external customers,” says Amith Nair, CIO of Vituity, a national physician group that cares for 13.8 million patients across the US.

Vituity’s solution uses AI to cut the time physicians spend recording and transcribing conversations with patients. “When a patient comes in, the doctor can just talk, looking at and listening to the patient instead of typing at a computer,” Nair says. “Afterward, a multi-agent AI platform produces the chart, the medical decision-making process, and the discharge summary.”

The tool is a homegrown solution custom-built on Microsoft’s Azure platform, and it has since been spun off into an independently operated startup. “We’ve become an organization that generates revenue,” Nair says.

The self-creating SuperNet

When we imagine the future of artificial intelligence, our minds often conjure images straight from science fiction: legions of humanoid robots walking among us, indistinguishable from their creators. We have been conditioned to see the anthropomorphic form as the pinnacle of robotic evolution. This vision, however, is a profound failure of imagination. The future of embodied intelligence is not a fleet of mechanical butlers; it’s something far more fundamental, powerful and alive.

To find the true future of embodied intelligence, we must look beyond the individual robot and ask a more fundamental question: What is the system that gives it birth?

A failure of imagination

The obsession with the humanoid form factor is a trap. It’s both wildly over-engineered for most tasks and critically under-engineered for others. Why would a factory need a robot with five-fingered hands and two legs to move a pallet when a specialized, wheeled platform can do it with a fraction of the energy and complexity? Why would we send a bipedal robot to inspect an undersea cable when a sleek, aquatic drone is infinitely better suited? Why lumber a bipedal form through a warehouse when a swarm of coordinated drones could reorganize inventory in minutes? Humanoids are a jack-of-all-trades and a master of none; too slow, too weak, too big for some tasks and too small for others.

A common argument is that humanoids are ideal for learning through imitation. This, too, is a fallacy. The key to general robotic capability is not imitation learning but the interactive learning of a world model — an internal, predictive simulation of reality. True intelligence doesn’t just copy actions; it understands principles. A world model captures the causal structure of reality — how objects interact, how forces propagate, how systems respond to intervention.

This is how we operate. When you drive a car or use a power drill, you aren’t retraining your brain from scratch. Your core world model seamlessly adapts, integrating the tool as an extension of your body. The intelligence is in the world model and it allows for the horizontal transfer of skills across different embodiments. The same will be true for AI. We can train universal action models that allow an AI to master a new robotic body with minimal tuning, rendering the need for a single, universal form factor obsolete.

That said, humanoids will have their place as interfaces in spaces designed for humans. But even then, to assume we’ve perfected that form is hubris. New materials, actuators and sensors — many of which will be designed by AI — will give rise to humanoid forms we can’t yet conceive. The humanoids of 2035 may bear as little resemblance to today’s prototypes as a modern smartphone does to a rotary telephone.

The body that builds itself

Instead of designing one robot for every task, we should build the one system that can design every robot for any task. You can imagine the system as a distributed network that acts as a virtual superfactory, which we will call the SuperNet.

Imagine a globally distributed network of automated factories. An AI designs a novel robot perfectly suited for a specific job. Other robots, controlled by the AI, begin to assemble it. The parts are 3D-printed onsite or sourced from other specialized nodes in the network — fully automated facilities that produce chips, motors and sensors — with autonomous vehicles handling all transport. This automated supply chain extends all the way back to the mines.

This system is managed by the emergent intelligence of a vast network of AIs. Think of it as a digital ecosystem operating on market principles, where each node is managed by an autonomous AI (see my book, “The Rise of Superintelligence,” for how these agents can be aligned). Through a shared protocol of resource and information exchange, these AIs collectively orchestrate a complex dance of creation without a central choreographer. One node specializes in precision optics, another in high-torque actuators, a third in radiation-hardened electronics — each contributing its expertise to the collective capability.

And here is the crucial step: The SuperNet can produce the very robots that build, maintain and expand itself. It is a recursively self-improving system — a machine that grows, learns and evolves, making it less like a traditional factory and more like a living organism.

From information to actualization

The internet revolutionized how we access information. You type a query and within milliseconds, a world of information materializes on your screen. The SuperNet represents the next evolutionary leap: from information to actualization.

Imagine expressing any physical need or desire — a custom robot, a car, a feast, a gadget, a home, a base on the moon — and having it realized. The SuperNet interprets your request, analyzes its requirements and orchestrates its fulfillment through a vast network of robots, facilities and services. If the perfect robot for the job doesn’t exist, the network designs and builds it. If specialized materials are needed, it sources or synthesizes them. If the task requires coordination across continents or worlds, autonomous logistics make it seamless. The complexity remains hidden behind a simple interface, just as the internet’s infrastructure of servers, routers and fiber optic cables disappears behind a search box.

This is the internet’s physical manifestation. Where the digital internet routes packets of information to deliver digital reality, the SuperNet routes atoms and energy to deliver physical reality. It translates intention into form, thought into matter. The interface remains simple — a request — but behind it lies a planetary-scale orchestration of physical resources operating with the same fluidity we now take for granted in the digital realm.

Closing the loop

In the digital realm, large language models (LLMs) are already learning to generate their own tools in the form of code, dramatically expanding their capabilities. The SuperNet is the physical manifestation of this principle. It is the machine that allows a superintelligence to generate its own physical tools — robots — to act upon the world.
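The tool-generation idea above can be sketched in a few lines. This is a hypothetical illustration, not a real framework: the "generated" string stands in for an actual LLM call, and the `register_generated_tool` helper is invented for this example.

```python
# Hypothetical sketch: a model emits code for a new tool, which is
# compiled and registered so the system can call it later. The
# "generated" string stands in for a real model call; running exec
# on untrusted model output would need sandboxing in practice.

TOOLS = {}

def register_generated_tool(name: str, source: str):
    namespace = {}
    exec(source, namespace)          # compile the generated definition
    TOOLS[name] = namespace[name]    # expose it as a callable tool

generated = """
def area_of_circle(radius):
    import math
    return math.pi * radius ** 2
"""

register_generated_tool("area_of_circle", generated)
print(round(TOOLS["area_of_circle"](1.0), 2))  # prints 3.14
```

The SuperNet generalizes this loop from code to matter: generate a design, fabricate it, register it as a new capability.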

This approach is not only more capable but also profoundly more efficient. The SuperNet can design robots to be easily recyclable or reconfigurable, breaking them down and using their components to build new forms as needs change. This minimizes waste and optimizes the use of material and energy resources, creating a truly sustainable industrial base. Where today’s manufacturing leaves graveyards of obsolete machines, the SuperNet creates an endlessly reconfigurable pool of matter and energy.

Crucially, the loop closes when the SuperNet begins designing and fabricating the next generation of computer chips — the very hardware that houses the mind of the superintelligence. The body improves the mind and the mind improves the body. Each generation of hardware enables better AI, which in turn designs better hardware, accelerating the cycle of improvement.

This culminates in a powerful conclusion. The popular vision for AI’s embodiment has been misplaced. The focus has been on the puppets, not the puppeteer. The SuperNet is not just a tool for a superintelligence; it is its physical realization. It is an ever-expanding, ever-improving body that’s capable of shaping itself and the world in any way it can imagine. It is the universal translator between human intention and physical manifestation — whether you need a robot, a meal, a building or a journey to another world. It translates from intention to reality, approaching a true magic wand. The true form factor for embodied superintelligence is not a humanoid. It’s the entire, dynamic network of creation itself.

The future is now

The future described here is not a distant dream; it is a project. Our team has already published foundational research on designing robots specifically for automated assembly, proving the core concept is viable.

We believe the SuperNet must be a global, open ecosystem. Crucially, this network doesn’t need to be fully automated from day one. It is designed as a framework that can incorporate human-run nodes initially while providing a clear pathway to automate every component over time. To catalyze this creation, we are developing and open-sourcing the core software that will allow these distributed nodes to coordinate.

This article is published as part of the Foundry Expert Contributor Network.

7 changes to the CIO role in 2026

Everything is changing, from data pipelines and technology platforms to vendor selection and employee training — even core business processes — and CIOs are in the middle of it, guiding their companies into the future.

In 2024, tech leaders asked themselves whether this AI thing even worked and how to do it. Last year, the big question was what the best use cases were for the new technology. This year will be all about scaling up and starting to use AI to fundamentally transform how employees, business units, or even entire companies actually function.

However IT was regarded before, it’s now a driver of restructuring. Here are seven ways the CIO role will change in the next 12 months.

Enough experimenting

The role of the CIO will change for the better in 2026, says Eric Johnson, CIO at incident management company PagerDuty, with a lot of business benefit and opportunity in AI.

“It’s like having a mine of very valuable minerals and gold, and you’re not quite sure how to extract it and get full value out of it,” he says. Now, he and his peers are being asked to do just that: move out of experimentation and into extraction.

“We’re being asked to take everything we’ve learned over the past couple of years and find meaningful value with AI,” he says.

What makes this extra challenging is the pace of change is so much faster now than before.

“What generative AI was 12 months ago is completely different to what it is today,” he says. “And the business folks watching that transformation occur are starting to hear of use cases they never heard of months ago.”

From IT manager to business strategist

The traditional role of a company’s IT department has been to provide technology support to other business units.

“You tell me what the requirements are, and I’ll build you your thing,” says Marcus Murph, partner and head of technology consulting at KPMG US.

But the role is changing from back-office order taker to full business partner, working alongside business leaders to drive innovation.

“My instincts tell me that for at least the next decade, we’ll see such drastic change in technology that they won’t go back to the back office,” he says. “We’re probably in the most rapid hyper cycle of change at least since the internet or mobile phones, but almost certainly more than that.”

Change management

As AI transforms how people do their jobs, CIOs will be expected to step up and help lead the effort.

“A lot of the conversations are about implementing AI solutions, how to make solutions work, and how they add value,” says Ryan Downing, VP and CIO of enterprise business solutions at Principal Financial Group. “But the reality is with the transformation AI is bringing into the workplace right now, there’s a fundamental change in how everyone will be working.”

This transformation will challenge everyone, he says, in terms of roles, value proposition of what’s been done for years, and expertise.

“The technology we’re starting to bring into the workplace is really shaping the future of work, and we need to be agents of change beyond the tech,” he says.

That change management starts within the IT organization itself, adds Matt Kropp, MD and senior partner and CTO at Boston Consulting Group.

“There’s quite a lot of focus on AI for software development because it’s maybe the most advanced, and the tools have been around for a while,” he says. “There’s a very clear impact using AI agents for software developers.”

The lessons that CIOs learn from managing this transformation can be applied in other business units, too, he says.

“What we see happening with AI for software development is a canary in the coal mine,” he adds. And it’s an opportunity to ensure the company is getting the productivity gains it’s looking for, but also to create change management systems that can be used in other parts of the enterprise. And it starts with the CIO.

“You want the top of the organization saying they expect everyone to use AI because they use it, and can demonstrate how they use it as part of their work,” he says. Leaders need to lead by example that the use of AI is allowed, accepted, and expected.

CIOs and other executives can use AI to create first drafts of memos, organize meeting notes, and help them think through strategy. And any major technology initiative will include a change management component, yet few technologies have had as dramatic an impact on work as AI is having, and is expected to have.

Deploying AI at scale in an enterprise, however, is a very contentious issue, says Ari Lightman, a professor at Carnegie Mellon University. Companies have spent a lot of time focusing on understanding the customer experience, he says, but few focus on the employee experience.

“When you roll out enterprise-wide AI systems, you’re going to have people who are supportive and interested, and people who just want to blow it up,” he says. Without addressing the issues that employees have, AI projects can grind to a halt.

Cleaning up the data

As AI projects scale up, so will their data requirements. Instead of limited, curated data sets, enterprises will need to modernize their data stacks if they haven’t already, and make the data ready and accessible for AI systems while ensuring security and compliance.

“We’re thinking about data foundations and making sure we have the infrastructure in place so AI is something we can leverage and get value out of,” says Aaron Rucker, VP of data at Warner Music.

The security aspect is particularly important as AI agents gain the ability to autonomously seek out and query data sources. This was much less of a concern with small pilot projects or RAG embedding, where developers carefully curated the data that was used to augment AI prompts. And before gen AI, data scientists, analysts, and data engineers were the ones accessing data, which offered a layer of human control that might diminish or completely vanish in the agentic age. That means the controls will need to move closer to the data itself.

“With AI, sometimes you want to move fast, but you still want to make sure you’re setting up data sources with proper permissions so someone can’t just type in a chatbot and get all the family jewels,” says Rucker.
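Moving controls closer to the data itself amounts to enforcing permissions at the data-access layer, so an agent's query is filtered before any rows come back. Everything in this sketch (the `GovernedStore` class, the role names, the in-memory rows) is a hypothetical illustration, not any vendor's API.

```python
from dataclasses import dataclass, field

@dataclass
class DataSource:
    name: str
    rows: list
    allowed_roles: set = field(default_factory=set)

class GovernedStore:
    """Permissions live next to the data, not in the agent."""

    def __init__(self):
        self._sources = {}

    def register(self, source: DataSource):
        self._sources[source.name] = source

    def query(self, source_name: str, agent_role: str):
        source = self._sources[source_name]
        # No role, no rows, regardless of what the agent asked for.
        if agent_role not in source.allowed_roles:
            raise PermissionError(f"{agent_role} may not read {source_name}")
        return list(source.rows)

store = GovernedStore()
store.register(DataSource("support_tickets", [{"id": 1}], {"support_bot"}))
print(store.query("support_tickets", "support_bot"))  # permitted
```

In practice the same idea shows up as row-level security in the database or policy checks in a data gateway; the point is that the check happens before the chatbot ever sees the rows.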

Making build vs. buy decisions

This year, the build or buy decisions for AI will have dramatically bigger impacts than they did before. In many cases, vendors can build AI systems better, quicker, and cheaper than a company can itself. And if a better option comes along, switching is a lot easier than when you’ve built something internally from scratch. On the other hand, some business processes represent core business value and competitive advantage, says Rucker.

“HR isn’t a competitive advantage for us because Workday is going to be better positioned to build something that’s compliant,” he says. “It wouldn’t make sense for us to build that.”

But then there are areas where Warner Music can gain a strategic advantage, he says, and it’s going to be important to figure out what this advantage is going to be when it comes to AI.

“We shouldn’t be doing AI for AI’s sake,” says Rucker. “We should attach it to some business value as a reflection of our company strategy.”

If a company uses outside vendors for important business processes, there’s a risk the vendor will come to understand an industry better than the existing players.

Digitizing a business process creates behavioral capital, network capital, and cognitive capital, says John Sviokla, executive fellow at the Harvard Business School and co-founder of GAI Insights. It unlocks something that used to be exclusively inside the minds of employees.

Companies have already traded their behavioral capital to Google and Facebook, and network capital to Facebook and LinkedIn.

“Trading your cognitive capital for cheap inference or cheap access to technology is a very bad idea,” says Sviokla. Even if the AI company or hyperscaler isn’t currently in a particular line of business, this gives them the starter kit to understand that business. “Once they see a massive opportunity, they can put billions of dollars behind it,” he says.

Platform selection

As AI moves from one-off POCs and pilot projects to deployments at scale, companies will have to come to grips with choosing an AI platform, or platforms.

“With things changing so fast, we still don’t know who’s going to be the leaders in the long term,” says Principal’s Downing. “We’re going to start making some meaningful bets, but I don’t think the industry is at the point where we pick one and say that’s going to be it.”

The key is to pick platforms that have the ability to scale, but are decoupled, he says, so enterprises can pivot quickly, but still get business value. “Right now, I’m prioritizing flexibility,” he says.

Bret Greenstein, chief AI officer at management consulting firm West Monroe Partners, recommends CIOs identify aspects of AI that are stable, and those that change rapidly, and make their platform selections accordingly.

“Keep your AI close to the cloud because the cloud is going to be stable,” he says. “But the AI agent frameworks will change in six months, so build to be agnostic in order to integrate with any agent frameworks.”
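Greenstein's advice to stay agnostic about agent frameworks amounts to a thin adapter layer: business logic codes against one stable interface, and each fast-changing framework plugs in behind it. A minimal sketch, with the framework names and adapter classes invented for illustration:

```python
from abc import ABC, abstractmethod

class AgentRuntime(ABC):
    """Stable interface the rest of the enterprise codes against."""

    @abstractmethod
    def run(self, task: str) -> str: ...

class FrameworkAAdapter(AgentRuntime):
    def run(self, task: str) -> str:
        # Translate to whatever today's framework expects.
        return f"[framework-a] {task}"

class FrameworkBAdapter(AgentRuntime):
    def run(self, task: str) -> str:
        return f"[framework-b] {task}"

def handle_request(runtime: AgentRuntime, task: str) -> str:
    # Business logic never imports a framework directly, so swapping
    # runtimes later is a one-line change at the call site.
    return runtime.run(task)

print(handle_request(FrameworkAAdapter(), "summarize ticket backlog"))
```

When the agent framework landscape shifts in six months, only the adapter is rewritten; everything built on `AgentRuntime` is untouched.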

Progressive CIOs are building the enterprise infrastructure of tomorrow and have to be thoughtful and deliberate, he adds, especially around building governance models.

Revenue generation

AI is poised to massively transform business models across every industry. This is a threat to many companies, but also an opportunity for others. By helping to create new AI-powered products and services, CIOs can make IT a revenue generator instead of just a cost center.

“You’re going to see this notion of most IT organizations directly building tech products that enable value in the marketplace, and change how you do manufacturing, provide services, and how you sell a product in a store,” says KPMG’s Murph.

That puts IT much closer to the customer than it had been before, raising its profile and significance in the organization, he says.

“In the past, IT was one level away from the customer,” he says. “They enabled the technology to help business functions sell products and services. Now with AI, CIOs and IT build the products, because everything is enabled by technology. They go from the notion of being services-oriented to product-oriented.”

One CIO already doing this is Amith Nair at Vituity, a national physician group serving 13.8 million patients.

“We’re building products internally and providing them back to the hospital system, and to external customers,” he says.

For example, doctors spend hours a day transcribing conversations with patients, which is something AI can help with. “When a patient comes in, they can just have a conversation,” he says. “Instead of looking at the computer and typing, they look at and listen to the patient. Then all of their charting, medical decision processes, and discharge summaries are developed using a multi-agent AI platform.”

The tool was developed in-house, custom-built on top of the Microsoft Azure platform, and is now a startup running on its own, he says.

“We’ve become a revenue generator,” he says.

How Black & Veatch is democratizing AI expertise across its employee owners

Black & Veatch’s AI strategy demonstrates how thoughtful implementation can drive rapid, meaningful adoption across a large organization. Rather than deploying AI tools companywide and hoping for results, it’s built a cohort-based program that’s driven active and specific AI work usage to nearly half of its employee owners in just one year. The approach addresses the human factors that often derail AI initiatives by building champion networks, eliminating friction, and converting employee passion into tangible workplace and business benefits. By also combining partner-provided AI capabilities with proprietary tools trained on 110 years of engineering data, Black & Veatch is creating a multiplier effect that enables safety improvement, profitability, and increased resource capacity.

How is AI making its way into your business strategy?

We anchor our AI opportunities to three areas: safety, resourcing improvements, and profitable returns for our employee owners. With market demand increasing, particularly the power needs of data centers, we’re using AI to democratize knowledge across our engineers so Black & Veatch can deliver more strategic and accelerated solutions.

How are you embedding this strategy?

We’ve defined our AI capabilities continuum as foundational, differentiating, and enduring with a focus on four themes across gen AI, agentic AI, and MLOps.

The first theme is iterative innovation, which lowers the barriers to effective use of AI for all by driving adoption of Microsoft’s integrated gen AI capabilities.

Second is placing strategic bets on platforms for engineering, construction, HR, sales, and marketing while leveraging our strategic partners’ platform-specific generative and agentic AI strategies. We want the big providers to bring the models to us, so when an employee asks to use Claude, Perplexity AI, or ChatGPT, it’s fine to use a governed user experience like Microsoft 365 Copilot to bring those models to the user.

Third is disruptive innovation, which focuses less on provider AI and more on our own data. We’re rich in unstructured, natural language data from 110 years of documentation to engineer and deliver critical infrastructure. Our new BV ASK platform applies generative models against data, democratizing and improving functional expertise across engineering disciplines. So we’re leveraging AI and our data to create that multiplier effect of expertise.

Our fourth theme is in the MLOps space, turning our project sites into trillions of data points that train models to advance our work. We’re advancing plans to collect telemetry from job site equipment, employee wearables for safety monitoring, geofencing technology, and drones with computer vision to create multivariate models that can help predict the success and profitability of new projects. Rather than turn down good work, we’re creating an AI-driven feedback loop to increase our margins.

The human factor is the sticking point in driving AI adoption. How are you changing minds and behaviors?

I’ve seen CIOs give everybody Microsoft 365 Copilot and watch adoption hover at 5% to 10%. Instead, we started by using early successes with Copilot to build a champion network to influence more adoption. We picked a few powerful use cases, identified personas who’d benefit from those use cases, and created a cohort of early adopters. Then we found another set of use cases and created another cohort, so today, approximately 5,000 employee owners engage in AI cohorts at Black & Veatch, with 97% active usage of our core AI capabilities.

Curriculum within each cohort includes hands-on training and spark sessions to encourage growth and engagement within the community itself. In a few months, we expect to have about 7,500 of our employee owners through a program cohort, and 75% of our employee base actively using generative AI to support their work.

We ask our cohorts for three things: to actively incorporate AI into your daily job, participate enthusiastically in the cohort community, and be a net producer for the community versus a net consumer. The cohorts not only increase AI skill development, but drive a whole new level of collaboration across departments.

What’s your advice for CIOs who need to balance AI innovation and data security?

Just as access to the internet and social media platforms took some time to govern in corporations, AI is bringing similar consumer-driven urgency that we need to understand and use to drive efficiencies. People see that AI provides a tangible way to improve their personal lives, so when our teams come into our offices, they expect to have the same access to AI platforms to improve their work efficiency.

My first piece of advice is to educate your teams about the need for innovation and guardrails. We set up an AI governance committee and launched several campaigns coupled with cybersecurity awareness month to outline what we’re doing to deliver experiences using BV data, but within secure and safe guardrails. We also have a technology showcase every year where we educate ourselves on the why and how: why we need the guardrails and how to use the tools. Rather than restricting access, the approach we’re taking is to eliminate friction and frustration while establishing clear guidance and data security controls.

Also, establish a formal process to increase the overall AI acumen across the entire company. AI is different from earlier innovations like the metaverse and blockchain. People understand AI because it’s so tangible. They can use natural language to create interesting things, so the barriers to innovation are low.

And of course, use every opportunity to shift mindsets. When people express interest in AI tools, I ask them to send me an email with the answer to two questions: Why is this new capability interesting to you, and how will it allow you to do your job better? If the response is thoughtful, we pull them into an earlier cohort immediately. This removes potential frustration by converting their passion into a benefit of a cohort where they can apply their ideas.

What’s the key motivation behind this cohort program?

The most critical factor in AI-driven transformation, and in society, is the human element. Our program helps build a level playing field to enable all Black & Veatch creators to do what they do best — create! This new but foundational knowledge across the company allows us to pursue more advanced opportunities with AI. As collective knowledge increases, this opens even more to further advance AI enablement within engineering, and even out in the field. We’re beginning to see it already.
