โŒ

Normal view

There are new articles available, click to refresh the page.
Today: 9 December 2025 (CIO)

Why standardizing workplace technology is the next competitive advantage for CIOs

9 December 2025 at 12:30

Over the past decade, the enterprise tech stack has expanded dramatically, with hundreds of workplace apps, including numerous overlapping collaboration and productivity tools used across teams. But what began as digital empowerment has evolved into fragmentation, with disconnected systems, duplicate workflows, inconsistent data, and rising governance and security risks.

It couldn't come at a worse time: 93% of executives say cross-functional collaboration is more crucial than ever.1 Yet employees struggle to collaborate across tools, constantly chasing context, toggling between apps, and recreating work, while IT teams face mounting integration, licensing, and security burdens that slow transformation and increase costs.

The result is a silent productivity tax: reduced visibility, fragmented decision-making, and slower execution across the business that ultimately undermines performance. For CIOs, the next competitive edge isn't adopting more tools; it's creating operational excellence by uniting departments on a secure, extensible, standardized digital workplace foundation.

Standardization: the new lever for operational excellence

To reclaim control over costs, risks, and velocity, leading CIOs are bringing teams across the organization together on a unified, extensible collaboration stack that has the flexibility to be tailored to each team's requirements. A consolidated platform unifies teams, systems, and strategy, giving IT visibility and control while empowering business units to execute more effectively and adapt quickly. With one governed foundation, IT reduces redundancy, strengthens security, and improves the employee experience.

The payoff is operational excellence, simplified governance, and more time for IT to focus on innovation rather than maintenance. CIOs gain unified visibility into system governance while delivering a more consistent, reliable user experience across the enterprise.

Driving workplace productivity and business outcomes

On a standardized digital workplace foundation, all team workflows stay connected to enterprise goals. Leaders across the organization gain end-to-end visibility into progress, dependencies, and outcomes, turning work data into actionable intelligence, operational improvements, and velocity. That enterprise-wide visibility accelerates execution, resulting in faster decision cycles, stronger alignment, and measurable improvements in workplace productivity and customer experience.

This organization-wide transformation is made possible by IT. IT moves from maintaining systems to orchestrating outcomes, becoming the bridge between business goals and the technology that powers them.

The foundation for an AI-ready enterprise

AI is quickly becoming embedded into every type of workflow. But AI can only be as effective as the systems and data it draws from. Disconnected and inconsistent information leads to inaccurate results, failed automations, and stalled value.

CIOs who standardize their collaboration ecosystem today can scale AI safely, consistently, and with confidence. Standardization creates the structured, governed data fabric AI depends on, enabling responsible innovation and future-ready operations. It provides the consistent taxonomies, permissions, and workflows that make safe and effective AI deployment possible.

When AI tools and agents have access to consistent, accurate, context-rich data across teams, they can create meaningful insights and outputs that create real business value.

Secure, governed, and future-proof

A unified digital workplace strengthens security and governance across every team. With consistent access controls and audit trails, CIOs can enforce compliance, reduce risk, and adapt to new regulations or technologies with confidence.

Future-proofing isn't about predicting change; it's about building a secure, adaptable foundation that can evolve with it. That foundation doesn't just strengthen today's defenses; it also remains governed and adaptable to tomorrow's technologies and regulations.

Atlassian: A unified base for collaboration

By unifying collaboration and execution on one platform, CIOs empower teams, enable AI success, and secure the enterprise for future innovations.

With Atlassian's Teamwork Collection, organizations can standardize on a single extensible platform connecting teams, goals, work, communication, and knowledge through AI-powered workflows. The result: a simplified, streamlined, secure collaboration ecosystem that empowers every team and positions IT to lead the modern, AI-ready enterprise.

To learn more, visit us here.


1 Atlassian, "The State of Teams 2025"

Salesforce: Latest news and insights

9 December 2025 at 05:12

Salesforce (NYSE:CRM) is a vendor of cloud-based software and applications for sales, customer service, marketing automation, ecommerce, analytics, and application development. Based in San Francisco, Calif., its services include Sales Cloud, Service Cloud, Marketing Cloud, Commerce Cloud, and Salesforce Platform. Its subsidiaries include Tableau Software, Slack Technologies, and MuleSoft, among others.

The company is undergoing a pivot to agentic AI, increasingly focused on blending generative AI with a range of other capabilities to offer customers the ability to develop autonomous decision-making agents for their service and sales workflows. Salesforce has a market cap of $293 billion, making it the world's 36th most valuable company.

Salesforce news and analysis

Salesforceโ€™s Agentforce 360 gets an enterprise data backbone with Informaticaโ€™s metadata and lineage engine

December 9, 2025: While studies suggest that a high number of AI projects fail, many experts argue that it's not the model's fault but the data behind it. Salesforce aims to tackle this problem with the integration of its newest acquisition, Informatica.

Salesforce unveils observability tools to manage and optimize AI agents

November 20, 2025: Salesforce unveiled new Agentforce 360 observability tools to give teams visibility into why AI agents behave the way they do, and which reasoning paths they follow to reach decisions.

Salesforce unveils simulation environment for training AI agents

November 14, 2025: Salesforce AI Research today unveiled a new simulation environment for training voice and text agents for the enterprise. Dubbed eVerse, the environment leverages synthetic data generation, stress testing, and reinforcement learning to optimize agents.

Salesforce to acquire Doti to boost AI-based enterprise search via Slack

November 14, 2025: Salesforce will acquire Israeli startup Doti, aiming to enhance the AI-based enterprise search capabilities offered via Slack. The demand for efficient data retrieval and interpretation has been growing within enterprises, driven by the need to streamline workflows and increase productivity.

Salesforceโ€™s glaring Dreamforce omission: Vital security lessons from Salesloft Drift

October 22, 2025: Salesforce's Dreamforce conference offered a range of sessions on best practices for securing Salesforce environments and AI agents, but it didn't address the weaknesses exposed by the recent spate of Salesforce-related breaches.

Salesforce updates its agentic AI pitch with Agentforce 360

October 13, 2025: Salesforce has announced a new release of Agentforce that, it said, โ€œgives teams the fastest path from AI prototypes to production-scale agentsโ€ โ€” although with many of the new releaseโ€™s features still to come, or yet to enter pilot phases or beta testing, some parts of that path will be much slower than others.

Lessons from the Salesforce breach

October 10, 2025: The chilling reality of a Salesforce.com data breach is a jarring wake-up call, not just for its customers, but for the entire cloud computing industry.

Salesforce brings agentic AI to IT service management

October 9, 2025: Salesforce is bringing agentic AI to IT service management (ITSM). The CRM giant is taking aim at competitors like ServiceNow with Agentforce IT Service, a new IT support suite that leverages autonomous agents to resolve incidents and service requests.

Salesforce Trusted AI Foundation seeks to power the agentic enterprise

October 2, 2025: As Salesforce pushes further into agentic AI, its aim is to evolve Salesforce Platform from an application for building AI to a foundational operating system for enterprise AI ecosystems. The CRM giant took a step toward that vision today, announcing innovations across the Salesforce Platform, Data Cloud, MuleSoft, and Tableau.

Salesforce AI Research unveils new tools for AI agents

August 27, 2025: Salesforce AI Research announced three advancements designed to help customers transition to agentic AI: a simulated enterprise environment framework for testing and training agents, a benchmarking tool to measure the effectiveness of agents, and a data cloud capability for autonomously consolidating and unifying duplicated data.

Attackers steal data from Salesforce instances via compromised AI live chat tool

August 26, 2025: A threat actor managed to obtain Salesforce OAuth tokens from a third-party integration called Salesloft Drift and used the tokens to download large volumes of data from impacted Salesforce instances. One of the attackerโ€™s goals was to find and extract additional credentials stored in Salesforce records that could expand their access.

Salesforce acquires Regrello to boost automation in Agentforce

August 19, 2025: Salesforce is buying Regrello to enhance Agentforce, its suite of tools for building autonomous AI agents for sales, service, and marketing. San Francisco-based startup Regrello specializes in turning data into agentic workflows, primarily for automating supply-chain business processes.

Salesforce adds new billing options to Agentforce

August 19, 2025: In a move that aims to improve accessibility for agentic AI, Salesforce announced new payment options for Agentforce, its autonomous AI agent suite. The new options, built on the flexible pricing the company introduced in May, allow customers to use Flex Credits to pay for the actions agents take.

Salesforce to acquire Waii to enhance SQL analytics in Agentforce

August 11, 2025: Salesforce has signed a definitive agreement to acquire San Francisco-based startup Waii for an undisclosed sum to enhance SQL analytics within Agentforce, its suite of tools aimed at helping enterprises build autonomous AI agents for sales, service, marketing, and commerce use cases.

Could Agentforce 3โ€™s MCP integration push Salesforce ahead in the CRM AI race?

June 25, 2025: "[Salesforce's] implementation of MCP is one of the most ambitious interoperability moves we have seen from a CRM vendor or any vendor. It positions Agentforce as a central nervous system for multi-agent orchestration, not just within Salesforce but across the enterprise," said Dion Hinchcliffe, lead of the CIO practice at The Futurum Group. But it introduces new considerations around security.

Salesforce Agentforce 3 promises new ways to monitor and manage AI agents

June 24, 2025: This is the fourth version of Salesforce Agentforce since its debut in September last year, with the newest, Agentforce 3, succeeding the previous '2dx' release. A new feature of the latest version is Agentforce Studio, which is also available as a separate application within Salesforce.

Salesforce supercharges Agentforce with embedded AI, multimodal support, and industry-specific agents

Jun 18, 2025: Salesforce is updating Agentforce with new AI features and expanding it across every facet of its ecosystem with the hope that enterprises will see the no-code platform as ready for tackling real-world digital execution, shaking its image of being a module for pilot projects.

CIOs brace for rising costs as Salesforce adds 6% to core clouds, bundles AI into premium plans

Jun 18, 2025: Salesforce is rolling out sweeping changes to its pricing and product packaging, including a 6% increase for Enterprise and Unlimited Editions of Sales Cloud, Service Cloud, Field Service, and select Industries Clouds, effective August 1.

Salesforce study warns against rushing LLMs into CRM workflows without guardrails

June 17, 2025: A new benchmark study from Salesforce AI Research has revealed significant gaps in how large language models handle real-world customer relationship management tasks.

Salesforce Industry Cloud riddled with configuration risks

June 16, 2025: AppOmni researchers found 20 insecure configurations and behaviors in Salesforce Industry Cloudโ€™s low-code app building components that could lead to data exposure.

Salesforce changes Slack API terms to block bulk data access for LLMs

June 11, 2025: Salesforceโ€™s Slack platform has changed its API terms of service to stop organizations from using Large Language Models to ingest the platformโ€™s data as part of its efforts to implement better enterprise data discovery and search.

Salesforce to buy Informatica in $8 billion deal

May 27, 2025: Salesforce has agreed to buy Informatica in an $8 billion deal as a way to quickly access far more data for its AI efforts. Analysts generally agreed that the deal was a win-win for both companies' customers, but for very different reasons.

Salesforce wants your AI agents to achieve โ€˜enterprise general intelligenceโ€™

May 1, 2025: Salesforce AI Research unveiled a slate of new benchmarks, guardrails, and models to help customers develop agentic AI optimized for business applications.

Salesforce CEO Marc Benioff: AI agents will be like Iron Manโ€™s Jarvis

April 17, 2025: AI agents are more than a productivity boost; they're fundamentally reshaping customer interactions and business operations. And while there's still work to do on trust and accuracy, the world is beginning a new tech era, one that might finally deliver on the promises seen in movies like Minority Report and Iron Man, according to Salesforce CEO Marc Benioff.

Agentblazer: Salesforce announces agentic AI certification, learning path

March 6, 2025: Hot on the heels of the release of Agentforce 2dx for developing, testing, and deploying AI agents, Salesforce introduced Agentblazer Status to its Trailhead online learning platform.

Salesforce takes on hyperscalers with Agentforce 2dx updates

March 6, 2025: Salesforce's updates to its agentic AI offering, Agentforce, could give the CRM software provider an edge over its enterprise application rivals and hyperscalers including AWS, Google, IBM, ServiceNow, and Microsoft.

Salesforceโ€™s Agentforce 2dx update aims to simplify AI agent development, deployment

March 5, 2025: Salesforce released the third version of its agentic AI offering โ€” Agentforce 2dx โ€” to simplify the development, testing, and deployment of AI agents that can automate business processes across departments, such as sales, service, marketing, finance, HR, and operations.

Salesforceโ€™s AgentExchange targets AI agent adoption, monetization

March 4, 2025: Salesforce is launching a new marketplace named AgentExchange for its agents and agent-related actions, topics, and templates to increase adoption of AI agents and allow its partners to monetize them.

Salesforce and Google expand partnership to bring Agentforce, Gemini together

February 25, 2025: The expansion of the strategic partnership will enable customers to build Agentforce AI agents using Google Gemini and to deploy Salesforce on Google Cloud.

AI to shake up Salesforce workforce with possible shift to sales over IT

February 5, 2025: With the help of AI, Salesforce can probably do without some staff. At the same time, the company needs salespeople trained in new AI products, CEO Marc Benioff has stated.

Salesforceโ€™s Agentforce 2.0 update aims to make AI agents smarter

December 18, 2024: The second release of Salesforceโ€™s agentic AI platform offers an updated reasoning engine, new agent skills, and the ability to build agents using natural language.

Meta creates โ€˜Business AIโ€™ group led by ex-Salesforce AI CEO Clara Shih

November 20, 2024: The ex-CEO of Salesforce AI, Clara Shih, has turned up at Meta just a few days after quitting Salesforce. In her new role at Meta she will set up a new Business AI group to package Metaโ€™s Llama AI models for enterprises.

CEO of Salesforce AI Clara Shih has left

November 15, 2024: The CEO of Salesforce AI, Clara Shih, has left after just 20 months in the job. Adam Evans, previously senior vice president of product for Salesforce AI Platform, has moved up to the newly created role of executive vice president and general manager of Salesforce AI.

Marc Benioff rails against Microsoftโ€™s copilot

October 24, 2024: Salesforceโ€™s boss doesnโ€™t have a good word to say about Microsoftโ€™s AI assistants, saying the technology is basically no better than Clippy 25 years ago.

Salesforceโ€™s Financial Services Cloud targets ops automation for insurance brokerages

October 16, 2024: Financial Services Cloud for Insurance Brokerages will bring new features to help with commissions management and employee benefit servicing, among other things, when it is released in February 2025.

Explained: How Salesforce Agentforceโ€™s Atlas reasoning engine works to power AI agents

September 30, 2024: AI agents created via Agentforce differ from previous Salesforce-based agents in their use of Atlas, a reasoning engine designed to help these bots think like human beings.

5 key takeaways from Dreamforce 2024

September 20, 2024: As Salesforce's 2024 Dreamforce conference rolls up the carpet for another year, here's a look at a few high points as Salesforce pitched a new era for its customers, centered around Agentforce, which brings agentic AI to enterprise sales and service operations.

Alation and Salesforce partner on data governance for Data Cloud

September 19, 2024: Data intelligence platform vendor Alation has partnered with Salesforce to deliver trusted, governed data across the enterprise. It will do this, it said, with bidirectional integration between its platform and Salesforce's to seamlessly deliver data governance and end-to-end lineage within Salesforce Data Cloud. This enables companies to directly access key metadata (tags, governance policies, and data quality indicators) from over 100 data sources in Data Cloud, it said.

New Data Cloud features to boost Salesforceโ€™s AI agents

September 17, 2024: Salesforce added new features to its Data Cloud to help enterprises analyze data from across their divisions and also boost the company's new autonomous AI agents released under the name Agentforce, the company announced at the ongoing annual Dreamforce conference.

Dreamforce 2024: Latest news and insights

September 17, 2024: Dreamforce 2024 boasts more than 1,200 keynotes, sessions and workshops. While this yearโ€™s Dreamforce will encompass a wide spectrum of topics, expect Salesforce to showcase Agentforce next week at Dreamforce.

Salesforce unveils Agentforce to help create autonomous AI bots

September 12, 2024: The CRM giantโ€™s new low-code suite enables enterprises to build AI agents that can reason for themselves when completing sales, service, marketing, and commerce tasks.

Salesforce to acquire data protection specialist Own Company for $1.9 billion

September 6, 2024: The CRM company said Ownโ€™s data protection and data management solutions will help it enhance availability, security, and compliance of customer data across its platform.

Salesforce previews new XGen-Sales model, releases xLAM family of LLMs

September 6, 2024: The XGen-Sales model, which is based on the companyโ€™s open source APIGen and its family of large action models (LAM), will aid developers and enterprises in automating actions taken by AI agents, analysts say.

Salesforce mulls consumption pricing for AI agents

August 30, 2024: Investors expect AI agent productivity gains to reduce demand for Salesforce license seats. CEO Marc Benioff says a per-conversation pricing model is a likely solution.

Coforge and Salesforce launch new offering to accelerate net zero goals

August 27, 2024: Coforge ENZO is designed to streamline emissions data management by identifying, consolidating, and transforming raw data from various emission sources across business operations.

Salesforce unveils autonomous agents for sales teams

August 22, 2024: Salesforce today announced two autonomous agents geared to help sales teams scale their operations and hone their negotiation skills. Slated for general availability in October, Einstein Sales Development Rep (SDR) Agent and Einstein Sales Coach Agent will be available through Sales Cloud, with pricing yet to be announced.

Salesforce to acquire PoS startup PredictSpring to augment Commerce Cloud

August 2, 2024: Salesforce has signed a definitive agreement to acquire cloud-based point-of-sale (PoS) software vendor PredictSpring. The acquisition will augment Salesforceโ€™s existing Customer 360 capabilities.

Einstein Studio 1: What it is and what to expect

July 31, 2024: Salesforce has released a set of low-code tools for creating, customizing, and embedding AI models in your company's Salesforce workflows. Here's a first look at what can be achieved using it.

Why are Salesforce and Workday building an AI employee service agent together?

July 26, 2024: Salesforce and Workday are partnering to build a new AI-based employee service agent based on a common data foundation. The agent will be accessible via their respective software interfaces.

Salesforce debuts gen AI benchmark for CRM

June 18, 2024: The software companyโ€™s new gen AI benchmark for CRM aims to help businesses make more informed decisions when choosing large language models (LLMs) for use with business applications.

Salesforce updates Sales and Service Cloud with new capabilities

June 6, 2024: The CRM software vendor has added new capabilities to its Sales Cloud and Service Cloud with updates to its Einstein AI and Data Cloud offerings, including additional generative AI support.

IDC Research: Salesforce 1QFY25: Building a Data Foundation to Connect with Customers

June 5, 2024: Salesforce reported solid growth including $9.13 billion in revenue or 11% year-over-year growth. The company has a good start to its 2025 fiscal year, but the market continues to shift in significant ways, and Salesforce is not immune to those changes.

IDC Research: Salesforce Connections 2024: Making Every Customer Journey More Personalized and Profitable Through the Einstein 1 Platform

June 5, 2024: The Salesforce Connections 2024 event showcased the company's efforts to revolutionize customer journeys through its innovative artificial intelligence (AI)-driven platform, Einstein 1. Salesforce's strategic evolution at Connections 2024 marks a significant step forward in charting the future of personalized and efficient AI-driven customer journeys.

Salesforce launches Einstein Copilot for general availability

April 25, 2024: Salesforce has announced the general availability of its conversational AI assistant along with a library of pre-programmed โ€˜Actionsโ€™ to help sellers benefit from conversational AI in Sales Cloud.

Salesforce debuts Zero Copy Partner Network to streamline data integration

April 25, 2024: Salesforce has unveiled a new global ecosystem of technology and solution providers geared to helping its customers leverage third-party data via secure, bidirectional zero-copy integrations with Salesforce Data Cloud.

Salesforce-Informatica acquisition talks fall through: Report

April 22, 2024: Salesforce's negotiations to acquire enterprise data management software provider Informatica have fallen through, as the two couldn't agree on the terms of the deal. The disagreement most likely centers on the price of each Informatica share.

Decoding Salesforceโ€™s plausible $11 billion bid to acquire Informatica

April 17, 2024: Salesforce is seeking to acquire enterprise data management vendor Informatica, in a move that could mean consolidation for the integration platform-as-a-service (iPaaS) market and a new revenue stream for Salesforce.

Salesforce adds Contact Center updates to Service Cloud

March 26, 2024: Salesforce has announced new Contact Center updates to its Service Cloud, including features such as conversation mining and generative AI-driven survey summarization.

Salesforce bids to become AIโ€™s copilot building platform of choice

March 7, 2024: Salesforce has entered the race to offer the preeminent platform for building generative AI copilots with Einstein 1 Studio, a new set of low-code/no-code AI tools for accelerating the development of gen AI applications. Analysts say the platform has all the tools to become the platform for building out and deploying gen AI assistants.

Salesforce rebrands its low-code platform to Einstein 1 Studio

March 6, 2024: Salesforce has rebranded its low-code platform to Einstein 1 Studio and bundled it with the companyโ€™s Data Cloud offering. The platform has added a new feature, Prompt Builder, which allows developers to create reusable LLM prompts without the need for writing code.

Salesforceโ€™s Einstein 1 platform to get new prompt-engineering features

February 9, 2024: Salesforce is working on adding two new prompt engineering features to its Einstein 1 platform to speed up the development of generative AI applications in the enterprise. The features include a testing center and the provision of prompt engineering suggestions.


How to link green manufacturing to long-term brand value

9 December 2025 at 10:55

Matsumoto Precision Co., Ltd. is pioneering smart and sustainable machine parts manufacturing in Japan through data-driven carbon tracking. The company specializes in pneumatic control parts for robots and internal combustion engine components for automobiles. In 2022, it launched The Sustainable Factory, a fully renewable-energy-powered facility that marked a major step in its commitment to sustainability.

Since 1948, Matsumoto Precision has focused on operational efficiency and supply chain transparency to better serve customers worldwide. In recent years, the Fukushima-based B2B manufacturer faced growing pressure to increase profitability, strengthen sustainability, and remain competitive. To address these challenges, the company began calculating product-level carbon footprint (PCF) data to provide customers greater visibility into emissions and environmental impact.

At the same time, inefficiencies in cost tracking limited the company's ability to accurately assess profitability. Fragmented systems and outdated processes slowed productivity and made strategic planning difficult. Without real-time insights, employees lacked the information needed to improve operations and drive engagement.

By offering customers carbon footprint data at the product level, Matsumoto Precision aimed to provide credible "proof of sustainability" that could influence purchasing decisions and help customers share emissions information confidently within their own value chains.

A modern ERP system and a solution to link green manufacturing to brand value

To modernize operations, the company implemented a cloud-based ERP system designed to boost efficiency, enhance cost visibility, standardize processes, and improve decision-making. In 2021, Matsumoto Precision deployed SAP S/4HANA, integrating its existing systems to create consistent operational data flows across procurement, logistics, and manufacturing.

SAP S/4HANA also provides the real-time business transaction data required for accurate PCF calculations.

In 2022, the company launched The Sustainable Factory to directly connect green manufacturing with long-term brand value. The initiative provides carbon footprint visibility to B2B customers and transitions operations to 100% renewable energy, helping reduce fossil-fuel dependency and mitigate rising energy costs.

As carbon accountability becomes increasingly important in manufacturing, Matsumoto Precision recognized the need for accurate and trustworthy emissions data. The ERP foundation enabled the calculation of product-level carbon emissions and the sharing of sustainability insights with customers and partners.

To advance its goals, Matsumoto Precision implemented the SAP Sustainability Footprint Management solution in 2023. The solution uses the manufacturing performance data already available in SAP S/4HANA to calculate and visualize product-level CO₂ emissions. These capabilities directly support The Sustainable Factory's objectives by ensuring the emissions data shared with stakeholders is transparent and reliable.
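For readers unfamiliar with the underlying arithmetic, a product carbon footprint is essentially activity data multiplied by emission factors, summed across materials and process steps. The sketch below is a generic illustration with invented figures; it does not represent SAP Sustainability Footprint Management or Matsumoto Precision's actual data.

```python
# Generic illustration of a product carbon footprint (PCF) roll-up:
# sum(activity quantity x emission factor) over materials and process steps.
# All figures are invented for the example; they are not Matsumoto or SAP data.
MATERIALS = [
    # (name, kg per unit, kgCO2e per kg of material)
    ("aluminium housing", 0.120, 8.6),
    ("steel insert",      0.045, 2.1),
]
PROCESS_STEPS = [
    # (name, kWh per unit, kgCO2e per kWh of purchased electricity)
    ("CNC machining", 0.80, 0.0),   # 0.0 reflects 100% renewable electricity
    ("assembly",      0.10, 0.0),
]

def product_carbon_footprint() -> float:
    material_kg = sum(qty * ef for _, qty, ef in MATERIALS)
    process_kg = sum(kwh * ef for _, kwh, ef in PROCESS_STEPS)
    return material_kg + process_kg

print(f"PCF: {product_carbon_footprint():.3f} kgCO2e per unit")
```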

Visualizing product carbon footprints across the entire value chain

By integrating digital and green transformation, Matsumoto Precision can now visualize emissions across the full B2B supply chain, from raw materials to final delivery.

"We are a company that continues to be chosen by the world," says Toshitada Matsumoto, CEO, Matsumoto Precision Co., Ltd. "With SAP S/4HANA and SAP Sustainability Footprint Management, we make smarter, greener decisions while tracking and visualizing CO₂ emissions at the product level. And with this clarity, we can enhance our brand value."

Matsumoto Precision partnered with Accenture to become the first industrial manufacturer to adopt the Connected Manufacturing Enterprises (CMEs) platform, built on SAP S/4HANA. CMEs is a cloud-based regional ERP platform jointly developed by Accenture and SAP, designed to standardize business systems for small and medium-sized manufacturers and enable collaboration across the B2B community. This strong foundation made it possible for Matsumoto Precision to implement the SAP Sustainability Footprint Management (SFM) solution, delivering accurate, product-level emissions data that supports the goals of The Sustainable Factory initiative.

"By visualizing carbon footprints, companies and consumers can choose low-carbon products and contribute to a decarbonized society," says Joichi Ebihara, Sustainability Lead, Japan and Accenture Innovation Center Fukushima Center Co-Lead, Japan. Achieving this ambition, he adds, "requires collaboration across the enterprise."

Matsumoto Precision's transformation now serves as a model for manufacturing communities worldwide.

Productivity up 30% with a 400-ton reduction in annual CO₂ emissions

Through digital and green transformation, Matsumoto Precision has strengthened its leadership in sustainable manufacturing and supply chain decarbonization. The company now has visibility into costs and product-level carbon emissions, enabling informed decision-making and enhanced transparency.

Real-time data access enables employees to work more efficiently, leading to increased job satisfaction. Following the modernization effort, Matsumoto Precision increased employees' wages by 4% annually, enhancing financial security and engagement.

Its optimized manufacturing practices now run entirely on renewable energy through The Sustainable Factory initiative. The company reduced its carbon dioxide (CO₂) emissions by 400 tons annually, and the new ERP system has increased productivity by 30%. Additionally, operating profit margin is up 3% through improved cost tracking and standardization.

Matsumoto Precision Company Limited is a 2025 SAP Innovation Award winner in the Sustainability Hero category for industrial manufacturing. Explore the company's pitch deck to see how its digital transformation enables accurate, product-level visualization of carbon emissions across the value chain. Watch the video to see The Sustainable Factory in action.



67% of CIOs see themselves as potential CEOs

9 December 2025 at 10:45

According to a recent survey, CIOs now see themselves as business leaders, and most believe they have the skills needed to take on the top job of chief executive. Two-thirds of CIOs aspire to become CEO at some point, and many say they possess the proven leadership skills and the capacity to drive innovation required to run organizations, according to a survey from Deloitte's CIO Program.

IT also appears to have reached an inflection point: 52% of CIOs now say their IT teams are viewed as a source of revenue rather than a service center for the business. Overall, the survey results underscore the emergence of the CIO as a business strategist trusted to drive growth and reimagine the company's competitiveness, according to Deloitte's experts.

"There has never been a better time to be a CIO," says Anjali Shaikh, leader of Deloitte's US CIO and CDAO programs. "Technology is no longer an advisory function, and CIOs are becoming strategic catalysts for their organizations, moving away from the operator role they held in the past."

Managing profit and loss

Beyond catching the attention of their colleagues, CIOs are also showing signs of a new self-image, Shaikh says. Thirty-six percent of CIOs say they now manage a P&L, which may be fueling new career ambitions.

The 67% of CIOs who say they are interested in taking on the CEO role in the future point to three key skills that, in their view, qualify them for the step up. Nearly four in ten separately cite their proven leadership and management skills, their ability to drive innovation and growth, and their track record of building high-performing teams.

By contrast, only about a third of the CTOs and chief digital officers surveyed by Deloitte see themselves as future CEOs, and fewer than one in six CISOs and chief data and analytics officers are considering that move.

Amit Shingala, CEO and co-founder of IT service management vendor Motadata, says the CIO role's shift from mainly running IT operations to becoming a key driver of business growth is increasingly evident across the industry. "Technology now influences everything from customer experience to revenue models, so CIOs are expected to contribute directly to business outcomes, not just to the stability of the infrastructure," says Shingala, who works closely with a number of CIOs.

Shingala is therefore not surprised that many CIOs aspire to become CEOs, and he believes the role is now more of a springboard than ever. "CIOs now have a view across the entire business: operations, risk, finance, cybersecurity, and how customers interact with digital services," he says. "That broad understanding, combined with experience leading major transformation initiatives, puts them in a prime position to take on the CEO role."

Innovation before revenue

Shingala also understands why many CIOs now see their role as generating revenue. But although driving revenue growth matters, the ultimate goal should be delivering business value, he says. "When a CIO introduces new digital capabilities or enables automation that improves the customer experience, the result usually translates into new revenue or greater cost efficiency," he explains. "Innovation comes first. Revenue is usually the reward for getting innovation right."

Scott Bretschneider, vice president of client delivery and operations at Cowen Partners Executive Search, agrees that innovation should be CIOs' top priority. Modern CIOs must act as catalysts for innovation and as business operators, he says. "Innovation means rethinking business processes, enabling data-driven decision-making, and building platforms for growth," Bretschneider adds. "Revenue is the result of executing those innovations effectively. A good CIO emphasizes innovation that leads to outcomes, striking a balance between experimentation and measurable returns."

Like Shingala, Bretschneider also sees CIOs as emerging CEO candidates. In recent years, a growing number of CIOs and chief digital officers have moved into president, COO, and CEO roles, he says, especially in sectors where information technology is front and center, such as financial services, retail, and manufacturing. "Today's CIOs have many of the qualities boards and investors look for in CEOs. They understand operations across the whole company, spanning finance, supply chain, customer experience, and risk management. They are used to leading diverse teams and managing large budgets."

A new narrative

Although the survey shows rising expectations and responsibilities for CIOs, the bad news is that nearly half of the organizations represented still view the role as focused more on maintenance and service than on innovation and revenue, notes Deloitte's Shaikh.

CIOs who find themselves stuck in companies that hold this outdated view of the role can push to evolve their positions, she says. CIOs should work to stay current with emerging technologies while pressing for their roles to focus more on innovation, she recommends.

"The hardest part of the job is staying ahead of all the emerging technologies, and you can't fall behind," Shaikh says. "How are you creating the space on your calendar and building the capacity across your teams and your energy?" CIOs should lean on universities, peers, and other resources to help them keep up, she adds. "You have all the responsibilities of your traditional role to help guide your team and your organization through emerging technology, and that requires you to stay ahead of it. So how are you doing that?" she asks.

Green AI: A complete implementation framework for technical leaders and IT organizations

9 December 2025 at 10:10

When we first began exploring the environmental cost of large-scale AI systems, we were struck by a simple realization: our models are becoming smarter, but our infrastructure is becoming heavier. Every model training run, inference endpoint and data pipeline contributes to an expanding carbon footprint.

For most organizations, sustainability is still treated as a corporate initiative rather than a design constraint. However, by 2025, that approach is no longer sustainable, either literally or strategically. Green AI isn't just an ethical obligation; it's an operational advantage. It helps us build systems that do more with less (less energy, less waste and less cost) while strengthening brand equity and resilience.

What if you could have a practical, end-to-end framework for implementing green AI across your enterprise IT? This is for CIOs, CTOs and technical leaders seeking a blueprint for turning sustainability from aspiration into action.

Reframing sustainability as an engineering discipline

For decades, IT leaders have optimized for latency, uptime and cost. It's time to add energy and carbon efficiency to that same dashboard.

A 2025 ITU Greening Digital Companies report revealed that operational emissions from the world's largest AI and cloud companies have increased by more than 150% since 2020. Meanwhile, the IMF's 2025 AI Economic Outlook found that while AI could boost global productivity by 0.5% annually through 2030, unchecked energy growth could erode those gains.

In other words, AI's success story depends on how efficiently we run it. The solution isn't to slow innovation; it's to innovate sustainably.

When sustainability metrics appear beside core engineering KPIs, accountability follows naturally. That's why our teams track energy-per-inference and carbon-per-training-epoch alongside latency and availability. Once energy becomes measurable, it becomes manageable.
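As a rough illustration of what such tracking can look like (the power-draw and grid-intensity constants below are assumptions for the example, not figures from this article; a real setup would read NVML, RAPL or provider telemetry), a small Python helper can log energy and carbon next to latency for every call:

```python
# Minimal sketch: log energy-per-inference and carbon alongside latency.
# Assumptions (not from the article): a fixed average accelerator power draw
# and a fixed grid carbon intensity.
import time
from dataclasses import dataclass, field

AVG_POWER_WATTS = 300.0        # assumed steady-state accelerator draw
GRID_KG_CO2_PER_KWH = 0.35     # assumed regional grid intensity

@dataclass
class InferenceMetrics:
    latencies_s: list = field(default_factory=list)
    energy_wh: list = field(default_factory=list)

    def record(self, fn, *args, **kwargs):
        """Run one inference call and log latency, energy and carbon."""
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        latency_s = time.perf_counter() - start
        energy_wh = AVG_POWER_WATTS * latency_s / 3600.0   # watt-hours
        carbon_g = energy_wh / 1000.0 * GRID_KG_CO2_PER_KWH * 1000.0
        self.latencies_s.append(latency_s)
        self.energy_wh.append(energy_wh)
        print(f"latency={latency_s * 1000:.2f} ms  "
              f"energy={energy_wh * 1000:.4f} mWh  carbon={carbon_g:.5f} gCO2e")
        return result

metrics = InferenceMetrics()
metrics.record(lambda x: x * 2, 21)   # stand-in for model.predict(payload)
```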

The green AI implementation framework

From experience in designing AI infrastructure at scale, we've distilled green AI into a five-layer implementation framework. It aligns with how modern enterprises plan, build and operate technology systems.

1. Strategic layer: Define measurable sustainability objectives

Every successful green AI initiative starts with intent. Before provisioning a single GPU, define sustainability OKRs that are specific and measurable:

  • Reduce model training emissions by 30% year over year
  • Migrate 50% of AI workloads to renewable-powered data centers
  • Embed carbon-efficiency metrics into every model evaluation report

These objectives should sit within the CIO's or CTO's accountability structure, not in a separate sustainability office. The Flexera 2025 State of the Cloud Report found that more than half of enterprises now tie sustainability targets directly to cloud and FinOps programs.

To make sustainability stick, integrate these goals into standard release checklists, SLOs and architecture reviews. If security readiness is mandatory before deployment, sustainability readiness should be, too.

2. Infrastructure layer: Optimize where AI runs

Infrastructure is where the biggest sustainability wins live. In our experience, two levers matter most: location awareness and resource efficiency.

  • Location awareness: Not all data centers are equal. Regions powered by hydro, solar or wind can dramatically lower emissions intensity. Cloud providers such as AWS, Google Cloud and Azure now publish real-time carbon data for their regions. Deploying workloads in lower-intensity regions can cut emissions by up to 40%. The World Economic Forum's 2025 guidance encourages CIOs to treat carbon intensity like latency: something to optimize, not ignore (see the sketch after this list).
  • Resource efficiency: Adopt hardware designed for performance per watt, like ARM, Graviton or equivalent architectures. Use autoscaling, right-sizing and sleep modes to prevent idle resource waste.
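To make the location-awareness lever concrete, here is a minimal sketch of carbon-aware region selection under stated assumptions: the region names and intensity figures are illustrative placeholders, not real provider data, which in practice would come from the provider's published carbon tooling or a grid-intensity service.

```python
# Minimal sketch: pick the eligible cloud region with the lowest carbon
# intensity. The region names and numbers below are illustrative placeholders.
ILLUSTRATIVE_INTENSITY_G_CO2_PER_KWH = {
    "eu-north":   30,   # largely hydro-powered
    "ca-central": 120,
    "us-west":    250,
    "us-east":    380,
    "ap-south":   650,
}

def pick_region(eligible: list[str], latency_ok: set[str]) -> str:
    """Choose the lowest-carbon region that also satisfies latency policy."""
    candidates = [r for r in eligible if r in latency_ok]
    if not candidates:
        raise ValueError("no region satisfies both carbon and latency policy")
    return min(candidates, key=ILLUSTRATIVE_INTENSITY_G_CO2_PER_KWH.__getitem__)

# Example: batch training is latency-tolerant, so every region qualifies.
print(pick_region(
    eligible=list(ILLUSTRATIVE_INTENSITY_G_CO2_PER_KWH),
    latency_ok=set(ILLUSTRATIVE_INTENSITY_G_CO2_PER_KWH),
))  # -> "eu-north"
```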

Small architectural decisions, replicated across thousands of containers, deliver massive systemic impact.

3. Model layer: Build energy-efficient intelligence

At the model layer, efficiency is about architecture choice. Bigger isn't always better; it's often wasteful.

A 2025 study titled "Small is Sufficient: Reducing the World AI Energy Consumption Through Model Selection" found that using appropriately sized models could cut global AI energy use by 27.8% this year alone.

Key practices to institutionalize:

  • Model right-sizing: Use smaller, task-specific architectures when possible.
  • Early stopping: End training when incremental improvement per kilowatt-hour falls below a threshold (see the sketch below).
  • Transparent model cards: Include power consumption, emissions and hardware details.

Once engineers see those numbers on every model report, energy awareness becomes part of the development culture.
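The early-stopping practice above can be expressed as a simple guard in the training loop. This is a schematic sketch: `train_one_epoch` is a hypothetical stand-in for your own training and energy-measurement step, and the threshold is an assumed policy value.

```python
# Minimal sketch of energy-aware early stopping: stop when the accuracy
# gained per kilowatt-hour drops below a policy threshold.
import random

MIN_GAIN_PER_KWH = 0.002   # assumed policy: 0.2 accuracy points per kWh

def train_one_epoch(model, data):
    """Hypothetical stand-in: returns (validation_accuracy, energy_kwh)."""
    return min(0.95, model["acc"] + random.uniform(0.0, 0.02)), 1.5

def train_with_energy_budget(model, data=None, max_epochs=50):
    prev_acc = 0.0
    for epoch in range(max_epochs):
        acc, energy_kwh = train_one_epoch(model, data)
        model["acc"] = acc
        gain_per_kwh = (acc - prev_acc) / max(energy_kwh, 1e-9)
        print(f"epoch {epoch}: acc={acc:.4f}  gain/kWh={gain_per_kwh:.5f}")
        if gain_per_kwh < MIN_GAIN_PER_KWH:
            print("Stopping: improvement per kWh fell below the threshold.")
            break
        prev_acc = acc
    return model

train_with_energy_budget({"acc": 0.70})
```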

4. Application layer: Design for sustainable inference

Training gets the headlines, but inference is where energy costs accumulate. AI-enabled services run continuously, consuming energy every time a user query hits the system.

  • Right-sizing inference: Use autoscaling and serverless inference endpoints to avoid over-provisioned clusters.
  • Caching: Cache frequent or identical queries, especially for retrieval-augmented systems, to reduce redundant computation.
  • Energy monitoring: Add "energy per inference" or "joules per request" to your CI/CD regression suite.
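A regression gate of this kind can be as simple as a test that fails the pipeline when measured joules per request exceed the agreed budget. In the sketch below, `measure_joules` is a hypothetical placeholder for however you sample power (NVML, RAPL, a smart PDU), and the budget value is an assumption.

```python
# Minimal sketch of an energy regression gate for CI/CD: fail the build if
# joules per request drift above the agreed budget.
ENERGY_BUDGET_JOULES = 0.5   # assumed SLO: <= 0.5 J per request

def measure_joules(send_request) -> float:
    """Hypothetical stand-in: return measured energy for one request."""
    send_request()
    return 0.42   # pretend measurement

def test_energy_per_request_within_budget():
    samples = [measure_joules(lambda: None) for _ in range(20)]
    avg_joules = sum(samples) / len(samples)
    assert avg_joules <= ENERGY_BUDGET_JOULES, (
        f"energy regression: {avg_joules:.2f} J/request "
        f"exceeds budget of {ENERGY_BUDGET_JOULES} J"
    )

if __name__ == "__main__":
    test_energy_per_request_within_budget()
    print("energy budget check passed")
```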

When we implemented energy-based monitoring, our inference platform reduced power consumption by 15% within two sprints, without any refactoring. Engineers simply began noticing where waste occurred.

5. Governance layer: Operationalize GreenOps

Sustainability scales only when governance frameworks make it routine. That's where GreenOps comes in: the sustainability counterpart to FinOps or DevSecOps.

A GreenOps model standardizes:

  • Energy and carbon tracking alongside cloud cost reporting
  • Automated carbon-aware scheduling and deployment (see the sketch after this list)
  • Sustainability scoring in architecture and security reviews
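As one concrete example of the carbon-aware scheduling item above, a deferrable batch job can simply wait for a greener window. This is a minimal sketch: `current_grid_intensity` is a stub, and real deployments would poll the cloud provider's carbon data or a grid-intensity service instead.

```python
# Minimal sketch of carbon-aware scheduling: defer a deferrable batch job
# until grid carbon intensity falls below a policy threshold or a deadline
# is reached. The threshold, polling interval and intensity feed are assumed.
import random
import time

INTENSITY_THRESHOLD = 200     # gCO2e/kWh, assumed policy
POLL_SECONDS = 1              # short for the demo; minutes in practice

def current_grid_intensity() -> float:
    """Stub: pretend intensity drifts between 120 and 420 gCO2e/kWh."""
    return random.uniform(120, 420)

def run_when_green(job, deadline_s=6 * 3600):
    start = time.monotonic()
    while time.monotonic() - start < deadline_s:
        intensity = current_grid_intensity()
        if intensity <= INTENSITY_THRESHOLD:
            print(f"intensity {intensity:.0f} gCO2e/kWh: running job now")
            return job()
        print(f"intensity {intensity:.0f} gCO2e/kWh: waiting for greener window")
        time.sleep(POLL_SECONDS)
    print("deadline reached: running job regardless of intensity")
    return job()

run_when_green(lambda: "nightly-embedding-refresh done", deadline_s=5)
```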

Imagine a dashboard that shows "Model X: 75% carbon efficiency vs. baseline" and "Inference service Y: 40% regional carbon optimization." That visibility turns sustainability from aspiration into action.

Enterprise architecture boards should require sustainability justification for every major deployment. It signals that green AI is not a side project but the new normal for operational excellence.

Building organizational capability for sustainable AI

Technology change alone isn't enough; sustainability thrives when teams are trained, empowered and measured consistently.

  1. Training and awareness: Introduce short "sustainability in software" modules for engineers and data scientists. Topics can include power profiling, carbon-aware coding and efficiency-first model design.
  2. Cross-functional collaboration: Create a GreenOps guild or community of practice that brings together engineers, product managers and sustainability leads to share data, tools and playbooks.
  3. Leadership enablement: Encourage every technical leader to maintain an efficiency portfolio: a living document of projects that improve energy and cost performance. These portfolios make sustainability visible at the leadership level.
  4. Recognition and storytelling: Celebrate internal sustainability wins through all-hands or engineering spotlights. Culture shifts fastest when teams see sustainability as innovation, not limitation.

Measuring progress: the green AI scorecard

Every green AI initiative needs a feedback loop. We use a green AI scorecard across five maturity dimensions:

Dimension | Key metrics | Example target
Strategy | % of AI projects with sustainability OKRs | 100%
Infrastructure | Carbon intensity (kg CO₂e / workload) | -40% YoY
Model efficiency | Energy per training epoch | ≤ baseline - 25%
Application efficiency | Joules per inference | ≤ 0.5 J/inference
Governance | % of workloads under GreenOps | 90%

Reviewing this quarterly, alongside FinOps and performance metrics, keeps sustainability visible and actionable.

Turning sustainability into a competitive advantage

Green AI isn't just about responsibility; it's about resilience and reputation.

A 2025 Global Market Insights report projects the green technology and sustainability market to grow from $25.4 billion in 2025 to nearly $74 billion by 2030, driven largely by AI-powered energy optimization. The economic logic is clear: efficiency equals competitiveness.

When we introduced sustainability metrics into engineering scorecards, something remarkable happened: teams started competing to reduce emissions. Optimization sprints targeted GPU utilization, quantization and memory efficiency. What began as compliance turned into competitive innovation.

Culture shifts when sustainability becomes a point of pride, not pressure. That's the transformation CIOs should aim for.

Leading the next wave of sustainable AI innovation

The next era of AI innovation won't be defined by who has the biggest models, but by who runs them the smartest. As leaders, we have the responsibility and opportunity to make efficiency our competitive edge.

Embedding sustainability into every layer of AI development and deployment isn't just good citizenship. It's good business.

When energy efficiency becomes as natural a metric as latency, we'll have achieved something rare in technology: progress that benefits both the enterprise and the planet.

The future of AI leadership is green, and it starts with us.

This article is published as part of the Foundry Expert Contributor Network.

AWS is still chasing a cohesive enterprise AI story after re:Invent

9 December 2025 at 09:49

AWS kicked off re:Invent 2025 with a defensive urgency that is unusual for the cloud leader, arriving in Las Vegas under pressure to prove it can still set the agenda for enterprise AI.

With Microsoft and Google tightening their grip on CIOs' mindshare through integrated AI stacks and workflow-ready agent platforms, AWS CEO Matt Garman and his lieutenants rolled out new chips, models, and platform enhancements, trying to knit the updates into a tighter pitch that AWS can still offer CIOs the broadest and most production-ready AI foundation.

Analysts remain unconvinced that AWS succeeded.

"We are closer, but not done," said David Linthicum, independent consultant and retired chief cloud strategy officer at Deloitte.

Big swing but off target

Garman's biggest swing, at least the one that got it "closer," came in the form of Nova Forge, a new service with which AWS is attempting to confront one of its strategic weaknesses: the absence of a unified narrative that ties data, analytics, AI, and agents into a single, coherent pathway for enterprises to adopt.

It's this cohesion that Microsoft has been selling aggressively to CIOs with its recently launched IQ set of offerings.

Unlike Microsoft's IQ stack, which ties agents to a unified semantic data layer, governance, and ready-made business-context tools, Nova Forge aims to provide enterprises with raw frontier-model training power in the form of a toolkit to build custom models with proprietary data, rather than a pre-wired, workflow-ready AI platform.

But it still requires too much engineering lift to adopt, analysts say.

AWS is finally positioning agentic AI, Bedrock, and the data layer as a unified stack instead of disconnected services, but according to Linthicum, "It's still a collection of parts that enterprises must assemble."

There'll still be a lot of work for enterprises wanting to make use of the new services AWS introduced, said Phil Fersht, CEO of HFS Research.

"Enterprise customers still need strong architecture discipline to bring the parts together. If you want flexibility and depth, AWS is now a solid choice. If you want a fully packaged, single-pane experience, the integration still feels heavier than what some competitors offer," he said.

Powerful tools instead of turnkey solutions

The engineering effort needed to make use of new features and services echoed across other AWS announcements, with the risk that they will confuse CIOs rather than simplify their AI roadmap.

On day two of the event, Swami Sivasubramanian announced new features across Bedrock AgentCore, Bedrock, and SageMaker AI to help enterprises move their agentic AI pilots to production, but the announcements still focused on tools that accelerate tasks for developers rather than offering "plug-and-play agents" by default, Linthicum said.

The story didn't change when it came to AWS's update to vibe-coding tool Kiro or the new developer-focused agents it introduced to simplify devops, said Paul Nashawaty, principal analyst at The Cube Research.

"AWS clearly wants to line up against Copilot Studio and Gemini Agents. Functionally, the gap is closing," said Nashawaty. "The difference is still the engineering lift. Microsoft and Google simply have tighter productivity integrations. AWS is getting there, but teams may still spend a bit more time wiring things together depending on their app landscape."

Similarly, AWS made very little progress toward delivering a more unified AI platform strategy. Analysts had looked to the hyperscaler to address complexity around the fragmentation of its tools and services by offering more opinionated MLops paths, deeper integration between Bedrock and SageMaker, and ready-to-use patterns that help enterprises progress from building models to deploying real agents at scale.

Linthicum was dissatisfied with AWS's efforts to better document and support the connective tissue between Bedrock, SageMaker, and the data plane. "The fragmentation hasn't vanished," he said. "There are still multiple ways to do almost everything."

The approach taken by AWS contrasts sharply with those of Microsoft and Google, which present more opinionated end-to-end stories, Linthicum said, calling out Azure's tight integration around Fabric and Google's around its data and Vertex AI stack.

Build or buy?

CIOs who were waiting to see what AWS delivered before finalizing their enterprise AI roadmap are back at a familiar fork: powerful primitives versus turnkey platforms.

They will need to assess whether their teams have the architectural discipline, MLops depth, and data governance foundation to fully capitalize on AWS's latest additions to its growing modular stack, said Jim Hare, VP analyst at Gartner.

"For CIOs prioritizing long-term control and customization, AWS offers unmatched flexibility; for those seeking speed, simplicity, and seamless integration, Microsoft or Google may remain the more pragmatic choice in 2026," Hare said.

The decision, as so often, comes down to whether the enterprise wants to build its AI platform or just buy one.

How you can turn 2025 AI pilots into an enterprise platform

9 December 2025 at 09:10

Most enterprises right now are running two AIs.

The first AI is the visible, exciting one: developer-led copilots, RAG pilots in customer support, agentic PoCs someone spun up in a cloud notebook, and the AI that quietly arrived inside SaaS apps. It's fast and easy to get up and running, shows impressive potential, and usually lives just outside the formal IT perimeter.

The other AI is the one the CIO has to defend: the one that must be governed, costed, secured and mapped to board expectations. Those two AIs are starting to collide, which is exactly what May Habib described when she said 42% of Fortune 500 executives feel AI is "tearing their companies apart."

As with past waves of innovation, AI follows an inevitable path: new tech starts in the developer's playground, then becomes the CIO's headache and finally matures into a centrally managed platform. We saw that with virtualization, then with cloud, then with Kubernetes. AI isn't the exception.

Application and business teams have been getting access to powerful generative AI tools that help them solve real problems without waiting for a 12-month IT cycle; that is what generative AI has delivered so far. Yet success breeds sprawl, and enterprises are now dealing with multiple RAG stacks, different model providers, overlapping copilots in SaaS and no shared guardrails.

That's the tension showing up in 2025 enterprise reporting: AI value is uneven and organizational friction is high. We have reached the point where IT has to step in and say: this is how our company approaches AI, with a single way to expose models, consistent policies, better economics and plenty of visibility. That's the move McKinsey describes as "build a platform so product teams can consume it."

What's different with AI is where the pain is. With cloud adoption, for example, security and network were the first blockers. With AI, the blocker is inference: the part that delivers the business returns, touches private and confidential data and is now the main source of opex. That's why McKinsey talks about "rewiring to capture value," not just adding more pilots. And this matches the widely reported results of a recent MIT study: 95% of enterprise gen-AI implementations have had no measurable P&L impact because they weren't integrated into existing workflows.

The issue isn't that models don't work; it's that they weren't put on a common, governed path.

Platformization as the path to governance and margin

The biggest mistake we can make today is treating AI infrastructure like a static, dedicated resource. The demands of language models (large and small), the pressure of data sovereignty and the relentless drive for cost reduction all converge on one conclusion: AI inference is now an infrastructure imperative. And the solution is not more hardware; it’s a CIO-led platformization strategy that enforces accountability and control, making AI a strategic infrastructure service. This requires a strong separation of duties and the implementation of a scale-smart philosophy versus just a scale-up approach.

Enforce a separation of duties and create the AI P&L center

We must elevate the management of AI infrastructure to a financial priority. This mandates a clear split: the infrastructure team focuses entirely on the platform — ensuring security, managing the distributed topology and driving down the $/million tokens cost — while the data science teams focus solely on business value and model accuracy.

This framework, which I call the AI P&L center, ensures that resource choices are treated as direct financial levers that increase margin and guarantee compliance. Research highlights that CIOs are increasingly tasked with establishing strong AI governance and cost control frameworks to deliver measurable value.
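As a purely illustrative back-of-the-envelope sketch (every figure below is a hypothetical assumption, not a benchmark), this is the kind of arithmetic the AI P&L center makes routine: the blended serving cost in dollars per million tokens, and how sharply it moves with utilization.

```python
# Illustrative sketch only: every figure is a hypothetical assumption, not a benchmark.

def cost_per_million_tokens(gpu_hourly_cost: float,
                            gpus: int,
                            tokens_per_second_per_gpu: float,
                            utilization: float) -> float:
    """Blended serving cost, in dollars per one million generated tokens."""
    fleet_cost_per_hour = gpu_hourly_cost * gpus
    tokens_per_hour = tokens_per_second_per_gpu * 3600 * gpus * utilization
    return fleet_cost_per_hour / tokens_per_hour * 1_000_000

# Hypothetical fleet: 8 GPUs at $4/hour, 900 tokens/s each, 35% average utilization.
print(round(cost_per_million_tokens(4.0, 8, 900.0, 0.35), 2))  # ~3.53 ($/million tokens)

# Raising average utilization to 70% halves the unit cost (~1.76), which is why this
# metric belongs to the platform team rather than to the data science teams.
print(round(cost_per_million_tokens(4.0, 8, 900.0, 0.70), 2))
```

Treating that number as the platform team’s primary KPI is what turns infrastructure choices into the direct financial levers described above.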

Shift from scale-up to scale-smart optimization

The technical strategy must implement a scale-smart philosophy — a continuous process of monitoring, analyzing, optimizing and deploying models based on economic policy, not just load. This requires deep intelligence to map each model’s needs to the infrastructure’s capabilities. The operational shift is essential because it enables resources to be used effectively in support of two of the most critical innovations in artificial intelligence:

  • Small language models (SLMs). Highly specialized SLMs fine-tuned on proprietary data deliver far greater accuracy and contextual relevance for specific enterprise tasks than giant, generic LLMs. This move saves money not just because the models are smaller, but because their higher precision reduces costly errors. Studies show that enterprises deploying SLMs report better model accuracy and faster ROI compared to those using general-purpose models. Gartner has predicted that by 2027, organizations will use task-specific SLMs three times more often than general-use LLMs.
  • Agentic workflows. Next-generation applications use agentic AI, meaning a single user query cascades through multiple models. Managing these sequential, multimodel workflows requires an intelligent platform that can route requests based on key-value (KV) cache proximity and seamlessly execute optimizations like automatic prefill/decode split, flash attention, quantization, speculative decoding and model sharding across heterogeneous GPUs and CPUs. These are techniques that, in plain terms, drastically reduce latency and cost for complex AI tasks.

In both cases, and more generally any time a model performs inference, a double-digit reduction in $/million tokens is possible only when every request is automatically routed according to cost policy and optimized by techniques that continuously tune the model’s execution against heterogeneous hardware. That, in turn, requires a centralized, unified platform designed and built to support inference across the enterprise.
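As a minimal sketch of what cost-policy routing can look like in code, consider the following; the model names, prices, complexity scores and the on-premises rule are all invented for illustration and do not describe any particular product or platform.

```python
# Hypothetical sketch of cost-policy routing; names, prices and rules are invented.
from dataclasses import dataclass

@dataclass
class ModelTarget:
    name: str
    usd_per_million_tokens: float  # blended serving cost, owned by the platform team
    max_complexity: int            # 1 = simple extraction ... 5 = multi-step reasoning
    runs_on_prem: bool

CATALOG = [
    ModelTarget("slm-finetuned-internal", 0.40, 2, True),
    ModelTarget("mid-size-general", 2.50, 4, False),
    ModelTarget("frontier-llm", 12.00, 5, False),
]

def route(task_complexity: int, data_must_stay_on_prem: bool) -> ModelTarget:
    """Pick the cheapest model that satisfies both capability and residency policy."""
    candidates = [m for m in CATALOG if m.max_complexity >= task_complexity]
    if data_must_stay_on_prem:
        candidates = [m for m in candidates if m.runs_on_prem]
    if not candidates:
        raise ValueError("No compliant model for this request; escalate to the platform team")
    return min(candidates, key=lambda m: m.usd_per_million_tokens)

print(route(2, False).name)  # slm-finetuned-internal (cheapest capable model)
print(route(4, False).name)  # mid-size-general
print(route(2, True).name)   # slm-finetuned-internal (residency constraint)
```

A production router would also weigh latency targets, KV-cache locality and current load, but the principle stays the same: the platform, not each application team, decides which model serves a request and at what cost.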

Addressing todayโ€™s inefficiencies of AI inference serving

The traditional approach we use to manage most of our enterprise infrastructure — what I call the scale-up mentality — is failing when applied to continuous AI inference and can’t be used to build the inference platform needed by CIOs. We’ve been provisioning dedicated, oversized clusters, often purchasing the newest and largest GPUs and replicating the resource-intensive environment required for training.

This is fundamentally inefficient for at least two key reasons:

  1. Inference is characterized by massive variability and idle time. Unlike training, which is a continuous, long-running job, inference requests are spiky, unpredictable and often separated by periods of inactivity. If you’re running a massive cluster to serve intermittent requests, you’re paying for megawatts of wasted capacity. Our utilization rates drop and the finance team asks tough questions. The true cost metric that matters now isn’t theoretical throughput; it’s dollars per million tokens. Gartner research shows that managing the unpredictable and often spiraling cost of generative AI is a top challenge for CIOs. We are optimizing for economics, not just theoretical performance.
  2. The deployment landscape is hybrid by mandate. It’s inconceivable that AI inference will run in a centralized, homogeneous environment. For regulated industries such as financial services and health care, or for operations that rely on proprietary internal data, the data often cannot leave the secure environment. Inference must occur on premises, at the data edge or in secure colocation facilities to meet strict data residency and sovereignty requirements. Trying to force mission-critical workloads through generic cloud API endpoints often cannot satisfy these strict regulatory and security requirements, driving a proven enterprise pattern toward hybrid and edge services. Taking things down one more level, we must keep in mind that the hardware is heterogeneous as well — a mix of CPUs, GPUs, DPUs and specialized processing units — and the platform must manage it all seamlessly.

Mastering the inference platform: An infrastructure imperative for the CIO

A unified platform is not about forcing alignment to a single model; it’s about establishing the governance layer necessary to unlock a much wider variety of models, agents and applications that meet enterprise security and cost management requirements.

The transition from scale-up to scale-smart is the essential, unifying task for the technology leader. The future of AI is not defined by the models we train, but by the margin we capture from the inference we run.

The strategic mandate for every technology leader must be to embrace the function of platform owner and financial architect of the AI P&L center. This structural change ensures that data science teams can continue to innovate at speed, knowing the foundation is secure, compliant and cost-optimized.

By enforcing platformization and adopting a scale-smart approach, we move beyond the wild west of uncontrolled AI spending and secure a durable, margin-driving competitive advantage. The choice for CIOs is clear: Continue to try managing the escalating cost and chaos of decentralized AI or seize the mandate to build the AI P&L center that turns inference into a durable, margin-driving advantage.

This article is published as part of the Foundry Expert Contributor Network.

A no-nonsense framework for cloud repatriation

9 December 2025 at 08:10

In “Why cloud repatriation is back on the CIO agenda,” I discussed why cloud repatriation has returned to strategic conversations and why it is attracting attention across industries. The move is neither a rejection of cloud nor a reversal of the investments of the last decade. It reflects a more balanced posture, where organizations want to place each workload where it delivers predictable value. Rising spend, uneven performance across regions and a more aggressive regulatory stance have placed workload placement on board agendas in a manner not seen in some years. Some executives now question whether certain services still benefit from public cloud economics, while others believe that cloud is still the right place for elasticity, reach and rapid development.

In this article, let’s consider how to execute workload moves without exposing the business to unnecessary risk. I want to set out a practical framework that helps leadership teams treat repatriation as a planned, evidence-led discipline rather than a reactive correction.

A strategy built on clarity rather than sentiment

Repatriation succeeds when it is anchored to clear reasoning. Most organizations already run hybrid estates, so the question is not whether cloud remains viable, but where specific workloads run best over the next cycle. This requires a calm assessment of economics, regulation and operational behavior rather than instinctive reactions to cost headlines.

The challenge for executives is to separate three things that often get blended:

  • The principle of cloud.
  • The experience of running specific workloads.
  • The fundamental drivers behind cost, resilience and compliance strain.

Once separated, the repatriation conversation becomes far easier to manage.

Understanding the economics without being drawn into technical detail

Many organizations are reporting cloud expenditure that is growing and difficult to forecast accurately. In fact, cost management remains the top cloud challenge for large enterprises, according to Flexera. That makes it seem as if the cloud has lost economic discipline, when in reality the discipline is usually lacking in workload shape, optimization or team visibility.

For senior leaders, the question is simple: Why? Which services have steady, predictable usage patterns that consumption-based cloud pricing does not reward?

Steady applications with predictable annual usage gain little from consumption-based billing. Those are the cases where alternatives like private cloud or dedicated infrastructure can offer more stable budgets. In the opposite direction, variable or seasonal workloads benefit from cloud elasticity and automation. No technical analysis is required to make the distinction; the demand patterns, growth expectations and business cycles are usually well understood.
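For illustration only, a rough annualized comparison shows how the demand pattern, rather than the technology, drives the economics; every price and load figure below is a made-up assumption, and real numbers will differ by provider and contract.

```python
# Made-up figures for illustration; the point is the shape of the workload, not the prices.
HOURS_PER_YEAR = 8760

def on_demand_cost(avg_instances: float, usd_per_instance_hour: float) -> float:
    """Pay-as-you-go cloud: you pay for average consumption."""
    return avg_instances * usd_per_instance_hour * HOURS_PER_YEAR

def fixed_capacity_cost(peak_instances: int, usd_per_instance_year: float) -> float:
    """Private cloud or dedicated infrastructure: sized for peak, at a flat yearly rate."""
    return peak_instances * usd_per_instance_year

# Steady workload: 20 instances around the clock.
print(on_demand_cost(20, 0.50))        # 87600.0 -> consumption billing offers no reward
print(fixed_capacity_cost(20, 2_800))  # 56000   -> repatriation candidate

# Seasonal workload: averages 4 instances but peaks at 40.
print(on_demand_cost(4, 0.50))         # 17520.0 -> elasticity pays off
print(fixed_capacity_cost(40, 2_800))  # 112000  -> stays in public cloud
```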

A useful executive lens is to think in terms of financial posture rather than technical design to shift the conversation away from technology preference and keep the focus on business value:

Business Priority | Strategic Approach | Rationale
Predictability of cost and performance | Repatriation | Stable workloads gain from fixed, controlled environments where budgets and behavior are easier to manage.
Volatility, rapid scaling or global access | Public cloud | Variable or internationally distributed workloads benefit from elastic capacity and broad geographic reach.

Placing workloads where they can succeed

Repatriation is not a ‘big-bang’ operation; rather, it is a selective movement pattern in which relocation is justified only for specific workloads. Leaders do not need deep architectural familiarity to guide these decisions; the drivers come across clearly enough in the context of the business.

Workloads tend to fall into three broad groups: data-heavy and predictable services, locality-sensitive workloads, and highly variable or globally oriented services.

A simple classification of workloads across these categories gives executives an intuitive sense of what should move and what should stay:

Workload Type | Preferred Placement | Reasoning
Data-heavy and predictable services | Private cloud or repatriated environments | Large, steady datasets lead to high data-movement costs and require high performance; stable, controlled platforms are better suited.
Locality-sensitive workloads | On-premises or near-site infrastructure | Operations in manufacturing, logistics, financial trading or retail require systems close to physical activity to avoid latency and inconsistency introduced by distant cloud regions.
Highly variable or globally oriented services | Public cloud | These workloads depend on elasticity, rapid provisioning and global reach. Moving them back on-premises usually increases cost and risk.

How regulation shapes repatriation decisions

Regulatory pressure is now one of the strongest signals for placement. Several jurisdictions have raised expectations regarding operational resilience, sovereignty and auditability. For example, resilience expectations are explicit within DORA (EU) and the UK’s supervisory guidance.

This is not a directive for regulated industries to abandon the cloud. Rather, it obliges them to give meaningful consideration to cloud deployment options, including sovereign cloud configurations, restricted-region deployments and customer-controlled encryption. Leaders need to assess whether:

  • Residency controls and administrative requirements can be met effectively.
  • Workloads are subject to regulatory inspection.
  • Exit and continuity processes must be evidenced to a higher standard.

Repatriation is one of several available approaches to meet these obligations, although not necessarily the default one. Repatriation may be preferable when the cloud cannot meet locality or control requirements without excessive complexity.

Keeping optionality at the heart of the strategy

Optionality has become a top executive priority. Boards are sensitive to concentration risk, geopolitical exposure and long-term pricing leverage. What is clearest from discussions with senior technology leaders is that they want the ability to move workloads when cost, regulation or service quality changes.

This is where repatriation fits in as part of a broader strategy. If organizations value optionality, they design systems, contracts and governance so that workloads can move either way. Repatriation becomes easier because the estate is built for change, and cloud adoption itself becomes more disciplined and accountable. Repatriation is then a business decision about autonomy, rather than a technology or engineering imperative.

Rehearsals are too often overlooked

Rehearsals critically demonstrate that workloads can move without drama and that the organization retains control. They also provide the evidence regulators increasingly expect to see.

A rehearsal does three things at the leadership level:

  • It shows that the business can extract its data and rebuild services in a controlled way.
  • It clarifies whether internal teams are operationally ready.
  • It exposes gaps in contracts, documentation or knowledge transfer.

No technical deep-dive is needed. Leaders need to ensure that rehearsals happen, that outcomes are documented and that follow-up actions are tracked. Enterprises that make rehearsals routine find that repatriation, if required, is far less disruptive than expected. More importantly, they discover that their cloud operations improve too, because the estate becomes more transparent and easier to govern.

How to structure a repatriation program without over-engineering it

A repatriation program should be a straightforward and easily repeatable construct. I propose a simple five-step model I call REMAP:

Stage | Focus | Key Activities
R – Recognize | Fact base | Capture and document workload purpose, demand patterns, regulatory exposure, indicative total cost over a reasonable horizon and all business dependencies.
E – Evaluate | Placement choice | Decide whether the workload benefits more from predictability or elasticity, taking regulatory suitability and risk posture into account.
M – Map | Direction and ownership | Set objectives, select target environments, confirm accountable owners and align timelines with operational windows.
A – Act | Execution | Rehearse, agree on change criteria, communicate with stakeholders and manage cutover.
P – Prove | Outcomes and learning | Check whether the move delivered the intended economic, performance or compliance result, and use the insight to guide future placement decisions.

This is not a technical transformation. It is a structured leadership exercise focused on clarity, accountability and controlled execution.

Lessons from sectors where repatriation is accelerating

Different sectors are arriving at similar conclusions about when repatriation makes sense, but the triggers differ depending on regulatory pressure, data sensitivity and operating model. The examples below are not prescriptive rules. They illustrate how industry context influences which workloads move and which remain in the cloud. The common thread is simple: repatriation is selected where it improves control, predictability or compliance.

Sector | What usually moves back | What usually stays in the cloud | Why this pattern appears
Financial services | Stable, sensitive systems such as core ledgers or payment hubs | Elastic services, analytics and customer digital channels | Regulators expect firms to prove failover, exit and recovery. Firms also want tight control and clear audit trails.
Healthcare | Primary patient record systems and other regulated data stores | Research environments, collaboration tools and analytics workspaces | Patient data is highly sensitive and often must remain local. Research and collaboration benefit from cloud scale.
Retail and consumer services | Transaction processing close to stores and distribution centres | Customer apps, marketing platforms and omnichannel services | Local processing reduces latency and improves reliability at sites. Digital engagement benefits from flexible cloud capacity.
Media and entertainment | High-volume rendering and distribution pipelines | Global streaming, content collaboration and partner workflows | Large data transfer costs make local processing attractive. Global reach and partner access suit cloud services.

Why repatriation often delivers less disruption than expected

There are concerns that workload repatriation will introduce instability or complexity. In practice, organizations that approach repatriation with a clear rationale and a steady process often find the opposite. The movement clarifies how systems work, removes unnecessary dependencies, tightens governance and increases cost visibility.

More importantly, repatriation reinforces leadership control. It prevents cloud adoption from drifting into unimportant areas and keeps platform strategy tied to business needs rather than infrastructure momentum.

What this means for CIOs and boards

The mandate for CIOs and boards is to keep repatriation decisions within normal portfolio governance, not outside it. Repatriation is neither a strategy reversal nor a verdict on the validity of the cloud. It is a signal that organizations are reaching a more mature phase in how they use it. Most enterprises will continue to run the majority of their estate in public cloud because it still offers speed, reach and access to managed services that would be expensive or slow to reproduce in-house. Selected workloads, meanwhile, will simply be repatriated when the commercials, regulatory posture or operating model point in that direction.

Repatriation should be a straightforward business decision, supported by evidence, that protects optionality and reassures regulators and investors that exit readiness binds infrastructure choices to cost discipline and compliance. This combination of clarity, control and movement readiness enables organizations to manage regional regulatory divergence, ongoing cost pressures and increasing performance demands without being forced into rushed or defensive decisions about their platforms.

This article is published as part of the Foundry Expert Contributor Network.

US approves Nvidia H200 exports to China, raising questions about enterprise GPU supply

9 December 2025 at 08:03

The US will allow Nvidia’s H200 AI chips to be exported to China with a 25 percent fee, a policy shift that could redirect global demand toward one of the world’s largest AI markets and intensify competition for already limited GPU inventories.

The move raises fresh questions about whether enterprise buyers planning 2026 infrastructure upgrades should brace for higher prices or longer lead times if H200 supply tightens again.

“We will protect National Security, create American Jobs, and keep America’s lead in AI,” US President Donald Trump said in a post on his Truth Social platform.

Trump stopped short of allowing exports of Nvidia’s fastest chips, however, saying, “Nvidia’s US Customers are already moving forward with their incredible, highly advanced Blackwell chips, and soon, Rubin, neither of which are part of this deal.”

He did not say how many H200 units will be cleared or how export vetting will work, leaving analysts to gauge whether even a partial reopening of the Chinese market could tighten availability for buyers in the US and Europe.

Trump added that the Commerce Department is finalizing the details, noting that “the same approach will apply to AMD, Intel, and other GREAT American Companies.”

Shifting demand scenarios

What remains unclear is how much demand Chinese firms will actually generate, given Beijing’s recent efforts to steer its tech companies away from US chips.

Charlie Dai, VP and principal analyst at Forrester, said renewed H200 access is likely to have only a modest impact on global supply, as China is prioritizing domestic AI chips and the H200 remains below Nvidia’s latest Blackwell-class systems in performance and appeal.

“While some allocation pressure may emerge, most enterprise customers outside China will see minimal disruption in pricing or lead times over the next few quarters,” Dai added.

Neil Shah, VP for research and partner at Counterpoint Research, agreed that demand may not surge, citing structural shifts in China’s AI ecosystem.

“The Chinese ecosystem is catching up fast, from semi to stack, with models optimized on the silicon and software,” Shah said. Chinese enterprises might think twice before adopting a US AI server stack, he said.

Others caution that even selective demand from China could tighten global allocation at a time when supply of high-end accelerators remains stretched, and data center deployments continue to rise.

“If Chinese buyers regain access to H200 units, global supply dynamics will tighten quickly,” said Manish Rawat, semiconductor analyst at TechInsights. “China has historically been one of the largest accelerator demand pools, and its hyperscalers would likely place aggressive, front-loaded orders after a prolonged period of restricted access. This injects a sudden demand surge without any matching increase in near-term supply, tightening availability over the next 2–3 quarters.”

Rawat added that such a shift would also reshape Nvidia’s allocation priorities. Nvidia typically favors hyperscalers and strategic regions, and reintroducing China as a major buyer would place US, EU, and Middle East hyperscalers in more direct competition for the limited H200 supply.

“Enterprise buyers, already the lowest priority, would face longer lead times, delayed shipment windows, and weaker pricing leverage,” Rawat said.

Planning for procurement risk

For 2026 refresh cycles, analysts say enterprise buyers should anticipate some supply-side uncertainty but avoid overcorrecting.

Dai said diversifying supply and engaging early with vendors would be prudent, but added that extreme measures such as stockpiling or placing premium pre-orders are unnecessary. “Lead times may tighten marginally, but overall procurement scenarios should assume steady availability of H200,” he said.

Others, however, warn that renewed Chinese demand could still stretch supply in ways enterprises need to factor into their planning.

Renewed Chinese access could extend H200 lead times to six to nine months, driven by hyperscaler competition and limited HBM and packaging capacity, Rawat said. He advised enterprises to pre-book 2026 allocation slots and secure framework agreements with fixed pricing and delivery terms.

“If Nvidia prioritizes hyperscalers, enterprise allocations may shrink, with integrators charging premiums or mixing GPU generations,” Rawat said. “Companies should prepare multi-generation deployment plans and keep fallback SKUs ready.”

A sustained high-pricing environment is likely even without dire shortages, Rawat added. “Enterprises should lock multi-year pricing and explore alternative architectures for better cost-performance flexibility,” he said.

This article first appeared on Network World.

Salesforce’s Agentforce 360 gets an enterprise data backbone with Informatica’s metadata and lineage engine

9 December 2025 at 08:03

While studies suggest that a high number of AI projects fail — potentially as many as 95% — many experts argue that it’s not the model’s fault; it’s the data behind it, which can be fragmented, inadequate, or of poor quality.

Salesforce aims to tackle this problem with the integration of its newest acquisition, Informatica. The cloud data management company’s intelligent data management cloud (IDMC) will be integrated into Salesforce’s Agentforce 360, Data 360, and MuleSoft platforms.

Rebecca Wettemann, CEO of tech analyst firm Valoir.com, called the integration “really critical” for the customer relations management (CRM) giant.

“This really shores up their data piece,” she said. “What Informatica particularly brings to the mix is this rich metadata layer, and also the perception for the market that Agentforce is not limited by CRM data.”

Goal to provide ‘enterprise understanding’

Salesforce’s new unified AI platform, Agentforce 360, is designed to connect humans, AI agents, apps, and data to provide what it calls a 360-degree view. Data 360 (formerly Data Cloud) serves as its foundational layer.

Now, with Informatica incorporated into Agentforce 360, the goal is to provide full ‘enterprise understanding’ by giving agents access to core business data and its intricate relationships.

“We’re combining Salesforce’s metadata model and catalog with Informatica’s enterprise-wide catalog to build a complete data index,” Rahul Auradkar, Salesforce’s EVP and GM of unified data services, Data 360 and AI foundations, said in a briefing.

Enterprise master data management (MDM), powered by Informatica, will provide a “golden record” for data across assets, products, suppliers, and other areas, he explained. Agents will get a map of assets across systems, whether on premises or in data lakes or other repositories. Data lineage capabilities will also trace data journeys, from origin to ingestion. Further, ‘zero copy’ capabilities mean data doesn’t have to be moved around, thus lowering storage costs.

Informatica’s IDMC “replaces guessing” by focusing on the entire data chain: discovering, cleaning, protecting, and unifying data. This reflects Informatica’s mission of “data without boundaries,” Krish Vitaldevara, Informatica chief product officer, said during the briefing.

“It’s the governed power plant that feeds the rest of the enterprise,” he said. “We are going to be the Switzerland of data and the Switzerland of AI.”

To bolster this, Salesforce’s MuleSoft provides real-world operational signals, such as inventory changes or shipment delays. This real-time working memory is critical, said Auradkar. “An agent needs to know what is happening right now.”

Agentforce 360 is built on four layers: the first combines Data 360, Informatica, and MuleSoft to support context; the second is access to 20 years of business logic and workflows (sales, service, marketing, commerce) built into Salesforce; the third is a ‘command center’ where enterprises can build, govern, and orchestrate specialist agents (such as a ‘campaign agent’ or ‘supply chain agent’); and the fourth is where enterprises actually deploy agents.

The platform was built to be open and extensible, so enterprises are not limited to the Salesforce ecosystem. They can use third-party agents, such as those built on OpenAI, Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), Oracle, or hybrid environments, Auradkar explained.

AI can be ‘stupid’

Informatica’s Vitaldevara emphasized the importance of quality, consistency, and context around data, noting that this is what delivers full value. But different systems have their own languages, rules, and truth, and AI can’t always see the full picture because data is scattered, stale, or inconsistent.

“We all know data alone is not enough, and context is a new currency in the world of agentic AI and agentic enterprise,” said Vitaldevara. Lineage, relationships, governance — these all tell AI what a product is, how a process works, where data comes from, and whether it can be trusted.

“Context is this digital equivalent of AI’s working memory and situational awareness,” said Vitaldevara.

Salesforce’s Auradkar agreed that current AI agents just see “fragments”; without shared understanding, they are forced to guess. “The models are incredibly intelligent, but they tend to be stupid,” he said. “They know almost everything about the world, but very little about your businesses.”

But when every system, workflow, and agent operates within the same context, decision-making can be sped up dramatically. “AI becomes more accurate and automation becomes more reliable,” said Vitaldevara.

Going beyond CRM data

Whether building specialized AI agents (for sales, marketing, or customer service), or more general agents intended for broader scenarios, enterprises must go beyond CRM data, Valoir’s Wettemann emphasized. Further, to get to any “reasonable degree of accuracy” with AI, data must be in context and supported by a “real metadata fabric,” she said.

“The kind of data lineage that Informatica provides really lowers both the learning curve and the technology curve,” said Wettemann.

More generally, when it comes to AI agents, she noted that enterprises have moved beyond the fear of missing out (FOMO) to the fear of messing up (FOMU).

They worry: “How do I even conceptualize bringing in ERP data to inform an agent and make sure that A) it’s the right data, B) that it’s not too much, and C) that I’m not overwhelming my infrastructure folks? And, finally, and maybe even most importantly, that I’m not spending a ridiculous amount of money,” she said.

Indeed, pricing continues to be top of mind for enterprises, particularly as Salesforce has discussed raising prices for its AI agent platforms, “monetizing” new AI contracts, and returning to seat-based and usage-based contracts.

“So it’s the pricing, but it’s also the visibility into pricing and the predictability of pricing that people are really paying attention to,” said Wettemann.

To address concerns from Informatica customers about what this integration might mean for them, Wettemann pointed to other recent acquisitions (like Tableau) where Salesforce offered a clear roadmap and strong support mechanisms.

“Judging by the way Salesforce integrated Tableau and Tableau customers into Salesforce, Informatica customers don’t have anything to worry about,” she noted.

During the briefing, however, Salesforce and Informatica did not reveal any details on licensing or pricing for the new integrated platform, nor did they explain how existing Informatica customers would be accommodated.

Keys to landing a top-level technology role

9 December 2025 at 06:32

You have led several digital transformation initiatives and delivered financial gains. Executives recognize your change leadership skills, since you have improved the experience of both customers and employees. The architectures you helped implement are now platform standards and form the foundation of your organization’s data and AI strategies.

Now you are wondering whether you are ready for a CIO position or another senior role in data, digital, or security.

CIO.com’s 24th annual State of the CIO report reveals that more than 80% of CIOs acknowledge that their role is increasingly focused on digital and innovation, that they are more involved in leading digital transformation, and that the CIO is becoming established as a change agent. If you meet these criteria, it makes sense to ask how to move up to a C-level position.

Transformation leaders are excellent candidates, but they need more

Leading transformation initiatives is an important prerequisite for senior roles, but it is not enough. Responsibilities grow when you take on ownership of outcomes and risks across all IT initiatives and operations. Senior technology leaders must therefore define a strategy backed by the CEO and CFO, in addition to overseeing a constantly evolving digital operating model.

Rani Johnson, CIO of Workday, believes that “aspiring leaders must move from managing the execution of project-based change to taking full ownership of the company’s technology, architecture, and IT strategy.” She adds: “They must develop deep, hands-on expertise in IT infrastructure, cybersecurity, AI platforms, core systems operations, and data governance, but also demonstrate the ability to translate technical strategy into sustained business value while ensuring operational stability.”

To prepare, leaders should adopt a program of lifelong learning. The 70-20-10 model (70% work experiences, 20% social learning, and 10% formal education) is a useful reference for those aspiring to senior opportunities.

Experience: from expert to influencer beyond your specialty

Many transformation leaders seek to master every area of their programs, even in enterprise-wide initiatives that span several years. Some aim for total visibility into their programs so they can steer priorities and mitigate risks.

C-level leaders, however, do not have time to dive into every technical detail, nor are they usually experts in them. The 70% of experiential learning they should aim for involves venturing into areas outside their specialty.

For Kathy Kay, CIO of Principal, “taking on a senior technology role is not about having all the answers, but about learning to lead through ambiguity and complexity. The most valuable experiences come from taking on demanding assignments, solving high-impact business problems, and influencing the whole organization, not just IT. Combined with the support of strong mentors and peers, this creates a lasting foundation for leadership.”

Work experiences worth seeking out

  • Visit customers alongside sales and marketing to understand their needs and end-to-end workflows.
  • Advise leaders in other areas to build confidence in coaching outside your own domain.
  • Facilitate workshops, a key experience before presenting to executive committees or boards of directors.
  • Identify leaders who resist technology change and break status-quo mindsets.
  • Become a change agent by supporting teams that lag in the use of data and AI.

A second key area is developing the ability to listen, question, adapt, and pivot. Senior leaders must sell a vision, plan continuously, and know when market or stakeholder needs require rethinking objectives.

Cameron Daniel, CTO of Megaport, explains that “new technologies and shifting priorities can make plans obsolete overnight. Successful leaders anticipate change and prepare their teams to face it, ensuring that solutions evolve with innovation while maintaining business impact.”

Social learning: focus on AI and emerging technologies

There is plenty of buzz about generative AI and when more advanced capabilities will emerge. Executive committees expect C-level leaders to filter out the noise, steer strategy, and establish data and AI governance.

To do so, they cannot rely solely on press releases or small POCs. To broaden their perspective, they need to connect with peers and join communities where real investments and business results are shared.

Recommended communities

  • CIO Association of Canada.
  • Digital Trailblazer community.
  • Gartner Peer Community.
  • Global CIO Forum.
  • CIO HotTopics.
  • IDC CIO Executive Council.
  • Inspire Leadership Network.
  • MIT Sloan CIO community.
  • SIM.
  • Women Tech Network.

Many of these are open to aspiring leaders.

For example, a Coffee With Digital Trailblazers session discussed how leaders prepare for senior roles. Derrick Butts, of Continuums Strategies, suggests joining teams working on AI threat detection or automated attack classification.

Meanwhile, Joe Puglisi, growth strategist and fractional CIO, highlights the importance of curiosity and asking “why” questions: “That is the only way to discover new and better ways of doing things.”

Another avenue is meeting with experts to understand the data behind an operation.

“As agentic AI becomes a reality, data literacy is fundamental,” explains Jamie Hutton, CTO of Quantexa. “If you cannot explain where your data comes from, you cannot deploy AI responsibly.”

Social learning, which includes asking questions, observing, and digging into operational data, helps identify meaningful AI opportunities.

“The fastest path to the executive suite is to look for bet-the-company problems,” adds Miles Ward, CTO of SADA.

Don’t neglect formal learning

Many leaders believe their role leaves no time for formal education, but continuous learning broadens the mindset and exposes them to new ideas.

“In an era of rapid innovation, 70-20-10 is insufficient: that 10% must grow,” suggests Cindi Howson, director of data and AI strategy at ThoughtSpot. She recommends training built around hands-on mini-classes and summits with leading-edge AI leaders.

Formal learning opportunities

  • Read recommended book lists on digital transformation and CIO leadership.
  • Listen to key podcasts such as CIO Leadership Live, CXOTalk, Technovation, or CIO in the Know.
  • Explore online courses, such as the executive leadership and CIO offerings on LinkedIn or Udemy.
  • Consider CTO programs at universities such as Berkeley, Carnegie Mellon, or Wharton.

C-level roles are not for everyone. In the State of the CIO survey, 43% rated the job’s stress level at 8 or higher out of 10. Those who aspire to these roles should fully understand what they entail before making them a career goal.

AI churn has IT rebuilding tech stacks every 90 days

9 December 2025 at 05:01

AI tech churn is becoming a mounting problem for enterprises, which find themselves continually rebuilding their AI infrastructures in response to evolving AI capabilities, as well as AI strategies in flux.

According to a survey from AI data quality vendor Cleanlab, 70% of regulated enterprises — and 41% of unregulated organizations — replace at least part of their AI stacks every three months, with another quarter of both regulated and unregulated companies updating every six months.

The survey, of more than 1,800 software engineering leaders, underscores how organizations still struggle both to keep up with the ever-changing AI landscape and to deploy AI agents into production, says Cleanlab CEO Curtis Northcutt.

Just 5% of those surveyed have AI agents in production or plan to put them into production soon. Based on the surveyed engineers’ answers about technical challenges, Cleanlab estimates that only 1% of represented enterprises have deployed AI agents beyond the pilot stage.

“Enterprise agents are totally not here, and they’re nowhere near what people are saying,” Northcutt says. “There are literally hundreds of startups that have tried to sell components of AI agents for enterprises and have failed.”

The speed of evolution

Even without full production status, the fact that so many organizations are rebuilding components of their agent tech stacks every few months demonstrates not only the speed of change in the AI landscape but also a lack of faith in agentic results, Northcutt claims.

Changes in the agent tech stack range from something as simple as updating the underlying AI model’s version, to moving from a closed-source to an open-source model or changing the database where agent data is stored, he notes. In many cases, replacing one component in the stack sets off a cascade of changes downstream, he adds.

“When you go to an open-source model that you run on your own server, your whole infrastructure changes, and you have to deal with a lot of things you weren’t dealing with before, and then you might go, ‘That was actually worse than we expected,’” Northcutt says. “So you go back to a different model, but then you switch to cloud, and the cloud API is actually totally different than the OpenAI API, because they are not in agreement.”

Cozmo AI, a voice-based AI provider, has also observed a pattern of frequent changes in agent tech stacks, says Nuha Hashem, cofounder and CTO there. The Cleanlab survey matches the churn Cozmo sees across regulated environments, she says.

“Many client teams swap out parts of their stack every quarter because the early setup is often a patchwork that behaves one way in testing and a different way in production,” she adds. “A small shift in a library or a routing rule can change how the agent handles a task, and that forces another rebuild.”

While the speed of AI evolution can drive frequent rebuilds, part of the problem lies in the way AI models are tweaked, she says.

“The deeper issue is that many agent systems rely on behaviors that sit inside the model rather than on clear rules,” Hashem explains. “When the model updates, the behavior drifts. When teams set clear steps and checks for the agent, the stack can evolve without constant breakage.”
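One way to read “clear steps and checks” is as an explicit contract around each agent step, enforced in code rather than left to model behavior. The sketch below is purely illustrative; the field names, categories, and workflow are hypothetical.

```python
# Hypothetical sketch: an explicit output contract for one agent step, so a model
# swap or upgrade that changes the response format fails loudly instead of drifting.
import json

REQUIRED_FIELDS = {"ticket_id": str, "category": str, "refund_amount": float}
ALLOWED_CATEGORIES = {"billing", "shipping", "product_defect"}

def check_agent_output(raw: str) -> dict:
    """Parse and validate one agent response against the step's contract."""
    data = json.loads(raw)  # raises immediately if the model stops returning JSON
    for field, expected_type in REQUIRED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"Contract violation: {field!r} missing or wrong type")
    if data["category"] not in ALLOWED_CATEGORIES:
        raise ValueError(f"Contract violation: unknown category {data['category']!r}")
    return data

# A compliant response passes; anything else halts the workflow for review.
print(check_agent_output('{"ticket_id": "T-1042", "category": "billing", "refund_amount": 12.5}'))
```

With contracts like this in place, a quarterly model or provider swap either passes the same checks or fails visibly, instead of drifting silently in production.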

Low levels of faith

Another problem seems to be low satisfaction with existing components of AI stacks. The Cleanlab survey asked about user experience with several components of agent infrastructure, including agent orchestration, fast inference, and observability. Only about a third of those surveyed say they are happy with any of the five components listed, with about 40% saying they are looking for alternatives for each of them.

Just 28% of respondents are satisfied with the agent security and guardrails they have in place, signaling a lack of trust in agent results.

While the Cleanlab survey may paint a bleak picture of the current state of agents, several AI experts say its conclusions appear accurate.

Jeff Fettes, CEO of AI-based CX provider Laivly, isn’t surprised that many enterprises rebuild part of their agent stacks every few months. He sees a similar phenomenon.

“What separates out the more successful organizations with respect to AI is their ability to iterate,” he says. “What you’re seeing there is companies haven’t let go of the old way of doing things, and they’re really struggling to keep up with how fast AI itself as a technology is evolving.”

For most other major IT platforms, CIOs go through a long evaluation and deployment process, but the rate of AI advancements has destroyed that timeline, he says.

“IT departments used to go through big arcs of planning, and then transform their tech stack, and it would be good for a while,” Fettes says. “Right now, what they’re finding is they get halfway through — or a small way through — the planning process, and the technology has moved so far they have to start over.”

Fettes sees many of his customers scrapping AI pilots as the technology evolves.

“It’s creating a situation where a lot of companies have to abandon existing use cases,” he says. “We know we’re obsoleting our own technology in a very short period of time.”

In addition to the fast-moving technology, the AI marketplace offers so many choices that it’s difficult for CIOs to keep up, Fettes says.

“There have been hundreds and hundreds of new companies that have flooded into the space,” he adds. “There’s a lot of stuff that doesn’t work. Sometimes it’s hard to figure it out.”

The risks of staying put

Tapforce, an app development firm, also sees enterprises rebuilding their AI stacks every few months, driven by constant evolution, says Artur Balabanskyy, cofounder and CTO there.

“What works now may become suboptimal later on,” he says. “If organizations don’t actively keep up to date and refresh their stack, they risk falling behind in performance, security, and reliability.”

Constant rebuilds don’t have to create chaos, however, Balabanskyy adds. CIOs should take a layered approach to their agent stacks, he recommends, with robust version control, continuous monitoring, and a modular deployment approach.

“Modular architectures allow leaders to destabilize the full stack as well as swap out components, when necessary,” he says. “Guardrails, automated testing, and observability are all essential to ensure production systems remain reliable even as tech evolves.”
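A minimal sketch of that modular idea is to put every model provider behind one narrow interface so that a quarterly swap touches a registry entry rather than every caller. The provider classes and task names below are placeholders, not real vendor SDK code.

```python
# Placeholder sketch of a swappable model layer; the provider classes stand in for
# real SDK clients and return canned strings so the example is self-contained.
from typing import Dict, Protocol

class TextModel(Protocol):
    def generate(self, prompt: str) -> str: ...

class HostedModelA:
    """Stand-in for a hosted, closed-source model reached over an HTTP API."""
    def generate(self, prompt: str) -> str:
        return f"[hosted-A] {prompt[:40]}..."

class SelfHostedModelB:
    """Stand-in for an open-source model served on internal GPUs."""
    def generate(self, prompt: str) -> str:
        return f"[self-hosted-B] {prompt[:40]}..."

# The registry is the only place that changes during a swap; callers never do.
REGISTRY: Dict[str, TextModel] = {
    "support-summarizer": HostedModelA(),
    "policy-drafter": SelfHostedModelB(),
}

def run(task: str, prompt: str) -> str:
    # Version pinning, guardrails and observability would wrap this single call site.
    return REGISTRY[task].generate(prompt)

print(run("support-summarizer", "Summarize the last three customer interactions"))
```

Swapping the summarizer to the self-hosted model is then a one-line registry change, which the guardrails and automated tests mentioned above can validate before rollout.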

Cleanlab’s Northcutt recommends IT leaders go through a rigorous process, including a detailed prerequisite description of what an agent is expected to do, before deployment.

“People are like, ‘Let’s have AI do customer support,’ and that’s a very high-level thing,” he says. “The number one step is, ‘Let’s define very precisely, exactly, where does AI start? What do we expect good performance to look like? What do we expect it to accomplish? What tools is it actually going to use?’”

The survey results suggest that widespread deployment of AI agents may still be years away, Northcutt says. He predicts the estimated 1% of organizations with agents in production will rise to 3% or 4% in 2027, with true agents in production reaching 30% of enterprises in 2030.

He believes AI agents will lead to major benefits, but he urges evangelists to cut back their rhetoric in the meantime.

“We can now use AI to get better at our jobs, but the whole idea of enterprise AI automating everything and agents in every product, it’s coming,” he says. “If we can just kind of keep it cool, guys, and set reasonable expectations, then all this money invested might actually play out.”

Time for CIOs to ratify an IT constitution

9 December 2025 at 04:30

IT governance is simultaneously a massive value multiplier and a must-immediately-take-a-nap-boring topic for executives.

For busy moderns, governance is as intellectually palatable as the stale cabbage on the table René Descartes once doubted. How do CIOs get key stakeholders to care passionately and appropriately about how IT decisions are made?

America’s 18th-century would-be constitutionalists — 55 delegates, including George Washington, James Madison, Benjamin Franklin, and Alexander Hamilton — knew something about governance. They understood that if people always made the right choices and did the right things, a constitution would be superfluous. Governance is necessary because humans are flawed.

The authors of the US Constitution knew they did not want the autocracy of a monarchy they had just won independence from, but they were also painfully aware that the anarchy emanating from the Articles of Confederation was not a viable path forward. So, they crafted a constitution. Should CIOs do something similar?

Rethinking IT governance

The delegates to the Philadelphia Constitutional Convention came together because the current system of governance was not working. Has IT governance sunk to such a state of disrepair that a total rethink is necessary? I asked 30 CIOs and thought leaders what they thought about the current state of IT governance and possible paths forward.

The CFO for IT at a state college in the northeast argued that if the CEO, the board of directors, and the CIO were “doing their job, a constitution would not be necessary.”

The CIO at a midsize, mid-Florida city argued that writing an effective IT constitution “would be like pushing water up a wall.”

The CIO at a billion-dollar-plus conglomerate questioned whether most organizations were “sophisticated enough to develop a meaningful constitution.”

The CIO at a southern manufacturer thought that an IT constitution would be a great idea if the right people were on the committee that crafted it — and, very importantly, if there were an “IT Supreme Court” to rule on disputes.

The executive in residence at an AI infrastructure supplier asked, “What would an IT constitution look like?”

The responses of these learned interlocutors gravitated immediately to the question of whether an IT constitution could work — not whether an improved form of IT governance was necessary.

In Democracy in America, Alexis de Tocqueville argued, “A new political science is needed for a world altogether new.”

Everyone I speak to agrees that IT governance can and should be improved. Everyone agrees that one can’t have a totally centralized, my-way-or-the-highway dictatorship or a totally decentralized you-all-do-whatever-you-want, live-in-a-yurt digital commune. Has the stakeholder base become too numerous, too culturally disparate, and too attitudinally centrifugal to be governed at all?

Improving IT decision-making

I believe IT has to operate with a changeable but not mood-based set of core rules, customs, and principles — see Cheryl Smith, The Day Before IT Transformation: 35 Technology Leadership Practices for Transforming IT.

CIOs need to have a conversation regarding IT rights, privileges, duties, and responsibilities. Are they willing to do so?

The CIO at one of the best-governed counties in America polled 268 of his peers, asking whether they were concerned about or implementing any form of governance, risk, and compliance within their organization. Less than 2% replied affirmatively. It appears that IT governance is not a hill that CIOs are willing to expend political capital on.

Aristotle was a big fan of “thinking on thinking.” IT governance is a subject that deserves much more attention than it is getting. I believe a plausible, no-damage-to-professional-credibility case can be made by CIOs stressing the need to improve mutual understanding of stakeholders.

Psychologists tell us that individuals make approximately 35,000 decisions a day. How many of these are technology related? Would having an IT constitution improve individual IT decision-making?

Political scientists tell us that the current dysfunction of America’s three branches of government is in no small way attributable to the fact that citizens, elected representatives, and public servants are essentially working from hundreds of different “realities,” based on situation, education, skill set, and/or aspiration. The folks who crafted the US Constitution thought they had solved this problem. How do we get to the point where we can talk to one another intelligibly?

In contemporary law there is the concept of duty of care — the basic idea that people in specific positions or occupations are responsible for putting in place measures that help ensure, as far as possible, the safety or well-being of others who are under their care.

Putting in place an IT constitution that celebrates subsidiarity — the idea that problems are best solved by the people nearest to them — and that lets stakeholders shape the way governance occurs and authority is exercised is an important agenda item for 2026.

How the BMC Helix spin-off has fared, one year later

9 December 2025 at 04:00

The split of BMC into two companies, announced just over a year ago, marked the evolution of this long-standing business into two players: one, BMC, focused on mainframe automation and software, the firm’s original business; and the other, BMC Helix, focused on IT services and operations.

Raúl Álvarez, vice president of worldwide sales for BMC Helix, explains in an interview with Computerworld Spain how specialization, agility, and a technological focus are driving accelerated growth for the new entity.

The split, he asserts, citing analyst data, has enabled the company to accelerate innovation, strengthen alliances, and position itself in the IT service management (ITSM) market, where other technology companies such as ServiceNow, Ivanti, Jira Service Management (from Atlassian), SolarWinds, and IBM also operate.

The result of this move, he adds, is a faster, more customer-focused organization, ready for a new era of AI-based agent automation.

Here is that conversation, edited for length and clarity.

Why did BMC decide to split into two companies, and what strategic needs does this separation address in the specific case of BMC Helix?

Raúl Álvarez: I’ve been working at BMC for 26, almost 27 years. So, as you can imagine, I’m not at all neutral, because this is the company of my life. I’ve built my career at BMC.

Why do I say this? Because splitting the company has an emotional impact: You stop working day-to-day with half your family. It’s a bittersweet feeling: On the one hand, you stop working with people who have made you who you are; on the other, there’s the professional aspect, which is what you mentioned, and the logic behind splitting the company in two, the main reason for which was focus and specialization.

Our owners rightly considered, in my opinion, and setting aside any emotional considerations, that we [BMC Helix] had already achieved sufficient size and growth to be an independent company. Furthermore, as you can see, focus is essential in such a dynamic market, where AI and innovation must be tangible and rapid. The announcement of the separation was made last year, and it officially began on April 1 with two separate companies. Without a doubt, time has proven that the decision was the right one.

Why do you say that?

Because that’s how our clients perceive it. They tell us that this specialization is helping us provide better service. And not just them, but analysts, too. I don’t know if you’ve seen Forrester’s latest quadrant for ITSM, which places us as leaders. I think the last time we were leaders was in 2011. We’re leaders again in a very competitive market. That’s why, as I said, it’s clear that the focus is making sense.


What role does BMC Helix play within the company’s new ecosystem, and how does it differ from BMC’s historical mainframe business?

The mainframe market is fairly static. There’s a lot of talk about its decline, but the reality is that the large companies using it continue to grow. I come from that world, and I know it works very well, but it’s not a growing market; it’s a replacement market. Consequently, the separation gives us differentiation per se: focus, faster innovation, and the ability to generate more value. It’s a SaaS model, with more aggressive growth, focused on new clients, new brands, adding value to the installed base, and all the artificial intelligence, agent-based AI, and use cases.

What growth and specialization opportunities are opening up for BMC Helix after the spin-off, especially in the ServiceOps and distributed operations market?

Well, to be honest, you could say it was almost step one. When BMC was a single company, everything was aligned in our catalog, but there were always overlapping schedules. Now, being independent gives us agility in decision-making. Before, you had to see how each move fit into a huge portfolio; now it’s a single line, and that accelerates everything: marketing, go-to-market, resources, roadmap, development, etc.

Looking back 10 years, we’ve done a tremendous amount of work evolving from on-premises technologies like Remedy, Proactive, TrueSight, and Patrol to modern cloud and container architectures. That effort has put us in a very good position. Having changes, configurations, service models, assets, and metrics all in one place is invaluable. And the emergence of AI — first generative, now agent-based — is a huge opportunity because this structured data multiplies its value.

How was this whole process put into action?

Itโ€™s very simple: On day one, we reorganized everything. Before, we had product managers for each module; now theyโ€™re aligned with roles: change manager, asset manager, governance manager, and so on, because weโ€™re creating agents for those roles. Each agent automates repetitive tasks that historically required a lot of effort from people. We launch new agents every two, three, or four weeks. And we prioritize based on real impact: which agent can automate the most work for our clients, how much time it saves, and so on. Undoubtedly, having 40 years of experience and so much data makes this strategy much easier.

The separation also opens doors to strategic alliances that were previously impossible. Where will they lead?

Weโ€™re working extensively on these areas. We have several layers to work on. First, we collaborate with large consulting firms, especially in the banking sector, onย compliance matters,ย regulations like DORA, audits, and so on. In this regard, weโ€™re creating dedicated agents for these tasks. Then, working with integrators allows us to gain specialization, scalability, and project capacity.

As we expand our network with new logos, we need well-trained partners. And finally, I can’t forget the hyperscale providers: Google, AWS, and Oracle. Our SaaS platform resides in their clouds, so we’re exploring ways to leverage their LLMs, their datacenters, and establish a presence in their marketplaces. Consequently, both forging technological alliances and strengthening our platform are key actions that were part of the agenda we defined a year ago.

What specific benefits will BMC Helix customers gain in terms of agility, technological focus, and speed of delivery in digital transformation projects?

First, operational robustness: incidents, changes, configurations, IT, availability, capacity. Thatโ€™s fundamental. But the real opportunity lies in AI and agents. And here the benefits are very clear: agility, accuracy, predictability, and quality. Automation reduces hours spent on repetitive tasks. Many clients donโ€™t complete all the projects they want due to a lack of time and resources. This will allow them to rebalance priorities.

What impact is expected from the adoption of agent-based AI and intelligent automation technologies from BMC Helix on the IT operations of companies such as Telefรณnica, BBVA, or the public administration?

I would summarize it in three words: agility, accuracy, and self-service. IT is becoming increasingly complex, which is why AI will allow us to identify and resolve problems faster, make decisions sooner, and free up resources for higher-value tasks. Self-service is also key: enabling users to resolve issues themselves with the help of agents, thus freeing up support teams.

The channel is important to BMC Helix. What messages were conveyed at the event held in Madrid, and how do you see its potential to generate business with AI?

Itโ€™s fundamental. We depend 100% on the channel to grow sustainably and guarantee the success of our projects. Weโ€™ve invested in training and equipping them withย skills, and we need to continue expanding that network: specialized integrators, solidย partners, scalability. Without a doubt, the companyโ€™s future depends on a strong channel.

Finally, how do you foresee 2025 ending and what are your expectations for 2026?

The year is looking good. It’s a very active market, and we’re in a good position. The focus now isn’t just on sales, but on ensuring adoption: customers in production, using the agents, and satisfied. For that, we have a solid pipeline and a highly specialized team. That’s why we’ve developed a growth plan that is aggressive by nature, but we’re optimistic. Product, focus, market, and analysts are all aligned. Now it all depends on us executing well.

The pharmaceutical industry embraces artificial intelligence

9 December 2025 at 03:34

Earlier this year, the University of Washington laboratory group led by David Baker, who won half of the 2024 Nobel Prize in Chemistry for computational protein design, announced a new achievement: proteins generated with artificial intelligence to counteract the deadly toxins in snake venom. Snakebite kills more than 100,000 people every year and leaves three times as many with serious injuries, yet the only treatments available today are expensive and of limited efficacy. The finding opens the way to a new generation of safer, more affordable drugs that would be widely available. Beyond that, the research demonstrates AI’s potential for developing new medicines.

A growing market

The pharmaceutical industry has long been aware of the possibilities these tools open up in research. The figures bear this out: according to Mordor Intelligence, the global market for AI in pharma will be worth $4.35 billion in 2025 and is forecast to grow at almost 43% a year through 2030, when it is expected to reach $25.37 billion. “The leading pharmaceutical companies are transforming their operating models toward cross-industry alliances with technology providers, channeling the value of multibillion-dollar deals into shared R&D pipelines,” the firm explains. The estimates are similar to those of the specialist consultancy Evaluate Pharma, part of the global pharmaceutical technology company Norstella, which calculates that the industry’s AI investment will reach $25 billion within five years, a 600% increase over that period, and that the application of AI in new drug development will grow by more than 40% a year.
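As a rough check on Mordor’s figures, the implied compound annual growth rate from $4.35 billion in 2025 to $25.37 billion in 2030 does work out to roughly the 43% cited:

\[
\left(\frac{25.37}{4.35}\right)^{1/5} - 1 \approx 0.423 \approx 42.3\%\ \text{per year}
\]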

As for the specific technology in use, Mordor points to machine learning as the leading approach, although generative AI is gaining momentum, along with quantum computing. On generative AI specifically, a report by Globant and MIT Technology Review Insights estimates that 73% of pharmaceutical companies worldwide are piloting or implementing GenAI. The field of work, and the set of tools for tackling it, is broad.

And its contribution, or at least the current perception of the results achieved and of what could still be achieved, is very positive. “Artificial intelligence has become a decisive catalyst for the transformation of pharmaceutical R&D,” says Elena Medina, head of Digital at Sanofi in Iberia. Medina highlights several benefits of integrating AI, such as the ability to analyze large volumes of biomedical data, identify patterns, and predict the properties of new molecules, which, she notes, is significantly reducing the time and cost associated with drug development.

Mariluz Amador, medical director at Roche Farma Spain, takes a similar view, stressing that “artificial intelligence has unprecedented transformative potential for biomedical R&D.” The Roche spokesperson expands on that vision, pointing to AI’s ability “to bring greater efficiency to the identification of therapeutic targets, by opening up the possibility of analyzing massive volumes of genomic, proteomic, and systems-biology data to predict and validate new targets far faster and more precisely than traditional methods.” She also highlights compound optimization and the possibility of “virtually designing millions of candidate molecules,” as in the case of Baker’s laboratory, “anticipating their efficacy, toxicity, and pharmacokinetic properties,” as well as its application in study design, boosting “the analysis of safety and efficacy data in real time, which could shorten the length of the clinical phases.”

Mariluz Amador (Roche Farma). Credit: Roche Farma

“Artificial intelligence has unprecedented transformative potential for biomedical R&D,” says Mariluz Amador (Roche Farma)

Amador also sees great potential for advancing personalized precision medicine, precisely because of the ability to “analyze large amounts of data in record time, correlating genetic patterns, digital health data, and treatment response to predict which patient profile could benefit most from a specific drug.”

Use cases

At Roche, AI integration is being carried out “at different levels,” Amador explains, “as a cross-cutting element capable of driving our future strategy. We are a data-driven company with deep experience in everything related to healthcare, and in that sense AI is a very powerful tool for extracting value from all that data.” The company’s medical director in Spain singles out R&D of new medicines, across its various phases, as one of the areas of greatest impact, “where the integration of AI and machine learning platforms can be a genuine revolution.” In this field she adds “the possibility of developing internal capabilities for advanced analysis of real-world data,” known as RWD. The US drug agency, the FDA, includes in this category data derived from electronic health records, medical claims, and product or disease registries, among others. This element, Amador notes, “reinforces our personalized medicine approach, in which data generated outside the clinical setting, and now measurable thanks to new digital technologies, is also enormously important.”

Sanofi likewise regards AI as an already cross-cutting factor with a range of applications. One pillar is “expert AI,” used by R&D and by manufacturing and supply teams, explains Medina, who points to its use in analyzing large amounts of data “to better understand the biology of diseases, drive clinical translation, optimize clinical trial design, and increase the probability of success. This progress allows us to meet patients’ needs faster and more safely.” She gives concrete examples, such as the use of the CodonBERT LLM to design mRNA vaccines. Large language models are also used to draft clinical reports, significantly reducing turnaround times. Medina adds the company’s “snackable AI” approach, in which in-house tools such as Plai are integrated into the daily workflow, for example to support strategic decision-making.

Elena Medina (Sanofi). Credit: Sanofi

“This progress allows us to meet patients’ needs faster and more safely,” says Elena Medina (Sanofi)

โ€œNuestra ambiciรณn es convertirnos en la primera compaรฑรญa biofarma orientada a la I+D impulsada por inteligencia artificial a gran escalaโ€, explica. Esto se traduce en una estrategia basada en una arquitectura integral y no en casos de uso aislados, a travรฉs de โ€œun despliegue generalizado, resultados medibles e impacto diario en todas las funciones del negocioโ€. Se trata, insiste, de โ€œtransformar fundamentalmente nuestra forma de trabajar y pensarโ€. Desde la parte de TI, esto โ€œimplica mucho mรกs que aรฑadir una nueva capa de tecnologรญa. Requiere rediseรฑar procesos, actualizar arquitecturas, fortalecer la gobernanza de datos, adoptar enfoques responsables como garantizar la calidad y la trazabilidad de toda la informaciรณn utilizada por los modelos, etcโ€. Para esto han desarrollado un marco propio, denominado RAISE โ€”siglas de IA responsable para todos en Sanofiโ€”, con el que buscan equilibrar innovaciรณn con gestiรณn de riesgos. Medina aรฑade la necesidad de invertir en aspectos como la formaciรณn de talento o el refuerzo de la cultura digital โ€œpara garantizar un uso responsable y sostenible de la IAโ€.

In the same vein of Sanofi’s commitment to AI is Concierge, an in-house generative AI tool designed to speed up daily tasks. Its global product director is Cyril Zaidan, who explains that, “unlike generalist tools, Concierge operates entirely within a protected environment, understands pharmaceutical terminology and workflows, and provides answers aligned with more than 15,000 internal guidelines and procedures.” The tool is not only for answering questions; it has broader uses, from automating administrative tasks to drafting emails and managing to-do lists.

โ€œEl mayor desafรญo suele ser la complejidad del ecosistema: mรกs de 20.000 puntos de informaciรณn, mรบltiples sistemas heredados y estrictos requisitos de seguridad y cumplimientoโ€, detalla Zaidan, quien destaca tambiรฉn la parte humana de incorporarlo a las rutinas diarias y conseguir que se apoye el cambio. Los retos en la integraciรณn de IA en farma tambiรฉn replican los de otras industrias. Es el caso de la calidad de los datos, โ€œya que la IA puede ser tan buena como los datos con los que se entrenaโ€, recuerda Amador. โ€œAquรญ nos enfrentamos a la necesidad de estandarizar y armonizar datos clรญnicos, genรณmicos y RWD que provienen de fuentes muy diversas, garantizando que sean de alta calidad, completos y anotados correctamente. Tambiรฉn hemos de avanzar en entender cรณmo la IA llega a una conclusiรณn, y por supuesto hemos de ser capaces de adaptar las actuales regulaciones al nuevo escenario marcado por la irrupciรณn de la IAโ€, sintetiza.

Focusing on Concierge, among the benefits detected since its rollout, Zaidan cites specific figures, such as savings of just over two hours of work per week, but he also offers a broader view. “Perhaps the most relevant result is the cultural shift: we are moving toward smarter, more collaborative ways of working, where AI frees up time to focus on what really drives innovation in healthcare, empowering our employees to be more confident, prepared to make smarter decisions, act faster, and collaborate with greater clarity,” he sums up. Medina agrees, saying that AI “entails a profound change” in how people work. “We are moving from linear processes to iterative cycles where experimentation is continuous. And it is not a one-off phase: models require continuous monitoring, updates, and adaptation to new regulations and technological advances,” she summarizes. It is ongoing work in pursuit of better results in a sector on which so many people depend.

๊ณต๊ฒฉ์  ๋ณด์•ˆ, AI ์‹œ๋Œ€ ๋ณด์•ˆ ์ „๋žต์˜ ํ•ต์‹ฌ์œผ๋กœ ๋ถ€์ƒ

9 December 2025 at 03:19

B2B ๊ธˆ์œต ์„œ๋น„์Šค ์—…์ฒด ์ปจ๋ฒ ๋ผ(Convera)์˜ CISO ์‚ฌ๋ผ ๋งค๋“ ์€ ์ž์‚ฌ ๋ณด์•ˆ ์ฒด๊ณ„๋ฅผ ๊ฐ•ํ™”ํ•˜๊ธฐ ์œ„ํ•ด ๋ณด๋‹ค ๊ณต๊ฒฉ์ ์ธ ์ ‘๊ทผ ๋ฐฉ์‹์„ ๋ชจ์ƒ‰ํ•˜๊ณ  ์žˆ๋‹ค. ๋งค๋“ ์€ ๊ธˆ์œต ์„œ๋น„์Šค ์‹œ์Šคํ…œ์„ ๊ทนํ•œ ์ƒํ™ฉ์—์„œ ์ ๊ฒ€ํ•˜๊ณ  ๋ฐฉ์–ด๋ฅผ ๋ณด์™„ํ•ด์•ผ ํ•  ์ง€์ ์„ ํŒŒ์•…ํ•˜๊ธฐ ์œ„ํ•ด ๋ ˆ๋“œํŒ€ ์šด์˜์„ ๋„์ž…ํ•˜๊ณ ์ž ํ•œ๋‹ค. ๋˜ํ•œ ๋ ˆ๋“œํŒ€๊ณผ ๋ธ”๋ฃจํŒ€์ด ํ˜‘๋ ฅํ•ด ์ „๋ฐ˜์ ์ธ ๋ณด์•ˆ ์ˆ˜์ค€์„ ๋Œ์–ด์˜ฌ๋ฆฌ๋Š” ํผํ”ŒํŒ€ ํ™œ๋™๋„ ํฌํ•จํ•  ๊ณ„ํš์ด๋‹ค.

๋งค๋“ ์€ โ€œ๊ณต๊ฒฉ์  ๋ณด์•ˆ์€ ๋ฐ˜๋“œ์‹œ ๋„๋‹ฌํ•ด์•ผ ํ•˜๋Š” ์˜์—ญ์ด๋ผ๊ณ  ์ƒ๊ฐํ•œ๋‹ค. ๊ทธ ๊ณผ์ •์—์„œ ์–ป๋Š” ์ •๋ณด๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ๋ณด์•ˆ ํ”„๋กœ๊ทธ๋žจ๊ณผ ํ†ต์ œ๋ฅผ ๋” ์ •๊ตํ•˜๊ฒŒ ์กฐ์ •ํ•  ์ˆ˜ ์žˆ๊ธฐ ๋•Œ๋ฌธโ€์ด๋ผ๊ณ  ๋งํ–ˆ๋‹ค.

๋งค๋“ ์ด ์‚ฌ์ด๋ฒ„๋ณด์•ˆ ์ „๋žต์„ ๋ฐœ์ „์‹œํ‚ค๊ธฐ ์œ„ํ•ด ๊ณต๊ฒฉ์  ํ”„๋กœ๊ทธ๋žจ์„ ๋„์ž…ํ•˜๋ ค๋Š” ์›€์ง์ž„์€ ๋น„๋‹จ ๋งค๋“ ๋งŒ์˜ ์‹œ๋„๊ฐ€ ์•„๋‹ˆ๋‹ค.

๊ธฐ์—… ๋ณด์•ˆ์˜ ๊ธฐ๋ณธ ์ž„๋ฌด๋Š” ๋ฐฉ์–ด๋‹ค. ํšŒ์‚ฌ์˜ ์‹œ์Šคํ…œ, ๋ฐ์ดํ„ฐ, ํ‰ํŒ, ๊ณ ๊ฐ, ์ง์› ๋“ฑ์„ ๋ณดํ˜ธํ•˜๊ณ  ์ง€ํ‚ค๊ธฐ ์œ„ํ•œ ์—ญํ• ์ด ์ค‘์‹ฌ์ด์—ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ๋งค๋“ ๊ณผ ๊ฐ™์€ CISO๋“ค์€ ์ ์  ๋” ๊ณต๊ฒฉ์  ์š”์†Œ๋ฅผ ์ „๋žต์— ํฌํ•จํ•˜๊ณ  ์žˆ๋‹ค. ์ด๋“ค์€ ๊ณต๊ฒฉ ์‹œ๋ฎฌ๋ ˆ์ด์…˜์„ ํ†ตํ•ด ๊ธฐ์ˆ  ํ™˜๊ฒฝ๊ณผ ๋ฐฉ์–ด ํƒœ์„ธ๋ฅผ ํ‰๊ฐ€ํ•˜๊ณ , ์‹ค์ œ ๊ณต๊ฒฉ ์‹œ ํ•ด์ปค๊ฐ€ ์•…์šฉํ•  ์ˆ˜ ์žˆ๋Š” ์ทจ์•ฝ์ ์„ ์‹๋ณ„ํ•  ์ˆ˜ ์žˆ๋Š” ๊ฐ€์น˜ ์žˆ๋Š” ์ •๋ณด๋ฅผ ์–ป์„ ์ˆ˜ ์žˆ๋‹ค๊ณ  ๋ณด๊ณ  ์žˆ๋‹ค.

์ด์ œ ๋” ๋งŽ์€ CISO๊ฐ€ ๊ณต๊ฒฉ์  ๋ณด์•ˆ์„ ํ•„์ˆ˜ ์š”์†Œ๋กœ ์ธ์‹ํ•˜๋ฉด์„œ, ๊ด€๋ จ ์—ญ๋Ÿ‰์„ ๊ตฌ์ถ•ํ•˜๊ณ  ์ด๋ฅผ ๋ณด์•ˆ ํ”„๋กœ์„ธ์Šค์— ํ†ตํ•ฉํ•ด ๊ณต๊ฒฉ ์—ฐ์Šต ๊ณผ์ •์—์„œ ๋“œ๋Ÿฌ๋‚œ ์ •๋ณด๊ฐ€ ์กฐ์ง ์ „์ฒด์˜ ๋ณด์•ˆ ์ˆ˜์ค€ ํ–ฅ์ƒ์œผ๋กœ ์ด์–ด์ง€๋„๋ก ํ•˜๊ณ  ์žˆ๋‹ค.

๋งค๋“ ์€ โ€œ์œ„ํ˜‘ ์ธํ…”๋ฆฌ์ „์Šค๋ฅผ ํ™œ์šฉํ•˜๊ณ  ํ…Œ์ด๋ธ”ํƒ‘ ํ›ˆ๋ จ์„ ์ˆ˜ํ–‰ํ•˜๋ฉฐ, ๋‚˜์•„๊ฐ€ ํผํ”ŒํŒ€ ์šด์˜ ๋‹จ๊ณ„๊นŒ์ง€ ๊ฐ€๊ธฐ ์œ„ํ•ด ํ•„์š”ํ•œ ์‹œ๊ฐ„๊ณผ ์ž์›์„ ํ™•๋ณดํ•˜๋Š” ๊ฒƒ์ด ๋งค์šฐ ์ค‘์š”ํ•˜๋‹คโ€๋ผ๋ฉฐ โ€œ์ˆ˜์„ธ์—๋งŒ ๋ชฐ๋ฆฌ๋Š” ์ƒํ™ฉ์„ ํ”ผํ•ด์•ผ ํ•œ๋‹คโ€๋ผ๊ณ  ์–ธ๊ธ‰ํ–ˆ๋‹ค.

๊ณต๊ฒฉ์  ๋ณด์•ˆ์˜ ๊ตฌ์„ฑ ์š”์†Œ

๊ณต๊ฒฉ์  ๋ณด์•ˆ์€ ๊ณต๊ฒฉ์ž ๊ด€์ ์˜ ์ „์ˆ ์„ ํ™œ์šฉํ•ด ์กฐ์ง ๋‚ด๋ถ€ IT ํ™˜๊ฒฝ์˜ ์ทจ์•ฝ์ ์„ ์ฐพ์•„๋‚ด๊ณ  ํ•ด๊ฒฐํ•˜๋Š” ํ™œ๋™์„ ๋œปํ•œ๋‹ค. ์ปจ์„คํŒ… ๊ธฐ์—… EY์—์„œ ๊ธ€๋กœ๋ฒŒ ๋ฐ ๋ฏธ๊ตญ ์‚ฌ์ด๋ฒ„ CTO๋ฅผ ๋งก๊ณ  ์žˆ๋Š” ๋Œ„ ๋ฉœ๋Ÿฐ์€ ์ด๋ฅผ โ€œ์ ๋Œ€์ž๊ฐ€ ๋ฐœ๊ฒฌํ•˜๊ธฐ ์ „์— ์ทจ์•ฝ์ ์„ ์‹๋ณ„ํ•˜๊ณ  ํ™œ์šฉํ•ด๋ณด๋Š” ๊ณผ์ •โ€์ด๋ผ๊ณ  ์ •์˜ํ•œ๋‹ค.

๋ฉœ๋Ÿฐ์€ ๊ณต๊ฒฉ์  ๋ณด์•ˆ ํ™œ๋™์ด ์—ฌ๋Ÿฌ ๋‹จ๊ณ„๋กœ ๊ตฌ์„ฑ๋œ๋‹ค๊ณ  ์„ค๋ช…ํ•œ๋‹ค. ์„ฑ์ˆ™๋„ ์Šค์ผ€์ผ์˜ ๊ฐ€์žฅ ๋‚ฎ์€ ๋‹จ๊ณ„์—๋Š” ์ทจ์•ฝ์  ๊ด€๋ฆฌ๊ฐ€ ์žˆ์œผ๋ฉฐ, ๊ทธ๋‹ค์Œ ๊ณต๊ฒฉ ํ‘œ๋ฉด ๊ด€๋ฆฌ์™€ ์นจํˆฌ ํ…Œ์ŠคํŠธ, ์œ„ํ˜‘ ํ—ŒํŒ…, ํ…Œ์ด๋ธ”ํƒ‘ ํ›ˆ๋ จ๊ณผ ๊ฐ™์€ ์ ๋Œ€์  ์‹œ๋ฎฌ๋ ˆ์ด์…˜์ด ๋’ค๋ฅผ ์ž‡๋Š”๋‹ค.

๊ทธ๋Š” โ€œํผํ”ŒํŒ€ ๊ฐœ๋…๋„ ์ค‘์š”ํ•˜๋‹ค. ๊ณต๊ฒฉ ์‹œ๋‚˜๋ฆฌ์˜ค๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์–ด๋–ค ๋ฐฉ์–ด ์ฒด๊ณ„๊ฐ€ ํƒ์ง€๋ฅผ ์ˆ˜ํ–‰ํ–ˆ์–ด์•ผ ํ•˜๋Š”์ง€, ํƒ์ง€๊ฐ€ ์ด๋ค„์ง€์ง€ ์•Š์•˜๋‹ค๋ฉด ๋ฌด์—‡์„ ์–ด๋–ป๊ฒŒ ์ˆ˜์ •ํ•ด์•ผ ํ•˜๋Š”์ง€๋ฅผ ์‚ดํŽด๋ณด๋Š” ๊ณผ์ •โ€์ด๋ผ๊ณ  ๋งํ–ˆ๋‹ค.

๋˜ ๋‹ค๋ฅธ ํ•ต์‹ฌ์  ๊ณต๊ฒฉ์  ๋ณด์•ˆ ๊ตฌ์„ฑ ์š”์†Œ๋Š” ๋‹ค์Œ๊ณผ ๊ฐ™๋‹ค.

โ€ข ๋ ˆ๋“œํŒ€ ์šด์˜: ์œค๋ฆฌ์  ํ•ด์ปค๊ฐ€ ์‹ค์ œ ๊ณต๊ฒฉ์„ ๋ชจ์‚ฌํ•ด ํƒ์ง€ ๋ฐ ๋Œ€์‘ ๋Šฅ๋ ฅ์„ ์‹œํ—˜ํ•œ๋‹ค. ๋ ˆ๋“œํŒ€์€ ์€๋ฐ€ํ•œ ์ „์ˆ ์„ ํ™œ์šฉํ•ด ํ†ต์ œ๋ฅผ ์šฐํšŒํ•˜๊ณ , ๋ฐ์ดํ„ฐ ํƒˆ์ทจ๋‚˜ ๊ถŒํ•œ ์ƒ์Šน๊ณผ ๊ฐ™์€ ๋ชฉํ‘œ ๋‹ฌ์„ฑ์„ ์‹œ๋„ํ•œ๋‹ค.
โ€ข ์ ๋Œ€์ž ์—๋ฎฌ๋ ˆ์ด์…˜: ์•Œ๋ ค์ง„ ๊ณต๊ฒฉ ๊ทธ๋ฃน์˜ ์ „์ˆ ยท๊ธฐ์ˆ ยท์ ˆ์ฐจ(TTP)๋ฅผ ์œ„ํ˜‘ ์ธํ…”๋ฆฌ์ „์Šค๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์žฌํ˜„ํ•ด ๋ฐฉ์–ด ๋„๊ตฌ์˜ ์œ ํšจ์„ฑ์„ ๊ฒ€์ฆํ•˜๊ณ , ์‚ฌ๊ณ  ๋Œ€์‘ํŒ€์„ ์‹ค์ œ์™€ ๊ฐ€๊นŒ์šด ํ™˜๊ฒฝ์—์„œ ํ›ˆ๋ จํ•œ๋‹ค.
โ€ข ์†Œ์…œ ์—”์ง€๋‹ˆ์–ด๋ง ํ‰๊ฐ€: ํ”ผ์‹ฑ, ์‚ฌ์ „๋ฌธ์ž(pretexting), ๊ธฐํƒ€ ๊ต๋ž€ ๊ธฐ๋ฒ•์„ ํ™œ์šฉํ•ด ์‚ฌ๋žŒ๊ณผ ํ”„๋กœ์„ธ์Šค๋ฅผ ์‹œํ—˜ํ•จ์œผ๋กœ์จ ์ทจ์•ฝ์„ฑ๊ณผ ์•ฝ์ ์„ ์‹๋ณ„ํ•œ๋‹ค. ์ด๋Š” ๊ธฐ์ˆ  ์‹œ์Šคํ…œ์„ ์‹œํ—˜ํ•˜๋Š” ์นจํˆฌ ํ…Œ์ŠคํŠธ์˜ ์ธ๊ฐ„ ์ค‘์‹ฌ ๋ฒ„์ „์ด๋ผ ๋ณผ ์ˆ˜ ์žˆ๋‹ค.
โ€ข ๋ณด์•ˆ ๋„๊ตฌ ํšŒํ”ผ ํ…Œ์ŠคํŠธ: ๋‚œ๋…ํ™”, ์•”ํ˜ธํ™”, ์ •์ƒ ๋„๊ตฌ ์•…์šฉ ๊ธฐ๋ฒ•(Living-off-the-land) ๊ฐ™์€ ํšŒํ”ผ ๊ธฐ๋ฒ•์„ ํ™œ์šฉํ•ด ์กฐ์ง์˜ ๋ณด์•ˆ ๊ธฐ์ˆ ์ด ์ด๋ฅผ ํƒ์ง€ยท์ฐจ๋‹จํ•  ์ˆ˜ ์žˆ๋Š”์ง€๋ฅผ ํ™•์ธํ•˜๊ณ , ์•…์„ฑ ๊ธฐ๋ฒ•์œผ๋กœ ์šฐํšŒํ•  ์ˆ˜ ์žˆ๋Š”์ง€๋ฅผ ๊ฒ€์ฆํ•œ๋‹ค.

์ด ๊ฐ€์šด๋ฐ ์ทจ์•ฝ์  ๊ด€๋ฆฌ, ์นจํˆฌ ํ…Œ์ŠคํŠธ, ํ”ผ์‹ฑ ๋ชจ์˜ํ›ˆ๋ จ ๋“ฑ์€ ์˜ค๋žซ๋™์•ˆ ๋Œ€๋ถ€๋ถ„์˜ ์—”ํ„ฐํ”„๋ผ์ด์ฆˆ ๋ณด์•ˆ ํ”„๋กœ๊ทธ๋žจ์—์„œ ๊ธฐ๋ณธ ์š”์†Œ์˜€๋‹ค. ์‚ฌ์ด๋ฒ„๋ณด์•ˆ ์†Œํ”„ํŠธ์›จ์–ด ๊ธฐ์—… ์ฝ”๋ฐœํŠธ(Cobalt)์˜ โ€˜2025 CISO ํผ์ŠคํŽ™ํ‹ฐ๋ธŒ ๋ฆฌํฌํŠธโ€™์— ๋”ฐ๋ฅด๋ฉด ๋ณด์•ˆ ๋ฆฌ๋”์˜ 88%๋Š” ์นจํˆฌ ํ…Œ์ŠคํŠธ๋ฅผ โ€œ์กฐ์ง ์ „์ฒด ๋ณด์•ˆ ๋…ธ๋ ฅ์˜ ํ•ต์‹ฌ ์š”์†Œโ€๋ผ๊ณ  ํ‰๊ฐ€ํ•˜๊ณ  ์žˆ๋‹ค.

๋˜ํ•œ ๋งŽ์€ CISO๋Š” ์˜ค๋žœ ๊ธฐ๊ฐ„ ๊ณต๊ฒฉ์  ๋ณด์•ˆ ์—ญ๋Ÿ‰์„ ๊ฐ–์ถ˜ ์ธ๋ ฅ์„ ํŒ€ ๋‚ด์— ๋‘์–ด ์™”๋‹ค. ์‚ฌ์ด๋ฒ„๋ณด์•ˆ ๊ต์œก ์—…์ฒด์ธ ์˜คํ”„์„น(OffSec)์ด ์ œ๊ณตํ•˜๋Š” OSCP, OSEP, OSCE์™€ ๊ฐ™์€ ์ž๊ฒฉ์ฆ์€ ์˜ค๋žซ๋™์•ˆ ๋†’์€ ์ˆ˜์š”๋ฅผ ๋ณด์—ฌ์™”๋‹ค. ์ตœ๊ทผ์—๋Š” ๊ณต๊ฒฉ์  ๋ณด์•ˆ ยท์นจํˆฌ ํ…Œ์ŠคํŠธยท์œค๋ฆฌ์  ํ•ดํ‚น ๋ถ„์•ผ์˜ ์ž๊ฒฉ์ฆ ์ข…๋ฅ˜๋„ ๋”์šฑ ๋‹ค์–‘ํ•ด์ง€๋Š” ์ถ”์„ธ๋‹ค.

๊ณต๊ฒฉ์  ๋ณด์•ˆ ๊ธฐ์ˆ  ์ž์ฒด๋„ ์ƒˆ๋กœ์šด ๊ฐœ๋…์€ ์•„๋‹ˆ๋‹ค. ๋‹ค๋งŒ ์ „๋ฌธ๊ฐ€๋“ค์€ ์ž๋™ํ™”, ๋ถ„์„, ์ธ๊ณต์ง€๋Šฅ ๊ธฐ์ˆ ์ด ๋ฒค๋” ์ œํ’ˆ์— ๋„์ž…๋˜๋ฉด์„œ ๊ณต๊ฒฉ์  ๋ณด์•ˆ ํ”„๋กœ๊ทธ๋žจ์˜ ํšจ์œจ์ด ํฌ๊ฒŒ ํ–ฅ์ƒ๋์œผ๋ฉฐ, ๋ณด์•ˆํŒ€์ด ๊ณต๊ฒฉ์  ๋ณด์•ˆ์„ ์šด์˜์— ๋„์ž…ํ•˜๋Š” ๋ฐ ํ•„์š”ํ•œ ์žฅ๋ฒฝ๋„ ๋‚ฎ์•„์กŒ๋‹ค๊ณ  ๋ถ„์„ํ•œ๋‹ค.

๋ฉœ๋Ÿฐ์€ โ€œํ˜„์žฌ ๋งŽ์€ ๊ธฐ์ˆ  ๊ณต๊ธ‰์—…์ฒด๊ฐ€ ์ด๋Ÿฐ ์„ ์ œ์ ์ด๊ฑฐ๋‚˜ ๊ณต๊ฒฉ์ ์ธ ์ ‘๊ทผ์„ ์ง€์›ํ•˜๊ธฐ ์œ„ํ•œ ๊ธฐ๋Šฅ์„ ์‹œ์žฅ์— ๋‚ด๋†“๊ณ  ์žˆ๋‹คโ€๋ผ๊ณ  ๋งํ–ˆ๋‹ค.

๊ณต๊ฒฉ์  ๋ณด์•ˆ์˜ ์šด์˜ ๊ณผ์ œ

๊ทธ๋Ÿฌ๋‚˜ ์—ฌ์ „ํžˆ ๋งŽ์€ ๋ณด์•ˆ ๋ถ€์„œ๋Š” ํฌ๊ด„์ ์ธ ๊ณต๊ฒฉ์  ๋ณด์•ˆ ํ”„๋กœ๊ทธ๋žจ์„ ๋„์ž…ํ•˜์ง€ ๋ชปํ•œ ์ƒํƒœ๋‹ค. ๋ฉœ๋Ÿฐ์— ๋”ฐ๋ฅด๋ฉด ํŠนํžˆ ์ค‘์†Œ๊ธฐ์—…์€ ๊ณต๊ฒฉ์  ๋ณด์•ˆ ์š”์†Œ๊ฐ€ ๊ฑฐ์˜ ์—†๊ฑฐ๋‚˜ ์•„์˜ˆ ์—†๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ์œผ๋ฉฐ, ์˜ˆ์‚ฐยท์ธ๋ ฅยท์—ญ๋Ÿ‰ ๋ถ€์กฑ์ด ๊ณต๊ฒฉ์  ๋ณด์•ˆ์„ ๋„์ž…ํ•˜๊ฑฐ๋‚˜ ์„ฑ์ˆ™์‹œํ‚ค๋Š” ๋ฐ ๊ฐ€์žฅ ํ”ํ•œ ์žฅ์• ๋ฌผ๋กœ ์ž‘์šฉํ•œ๋‹ค.

CISO๋“ค์ด ๊ณต๊ฒฉ์  ๋ณด์•ˆ์„ ์ „๋žต์— ์ ๊ทน ํฌํ•จํ•˜์ง€ ๋ชปํ•˜๋Š” ๋˜ ๋‹ค๋ฅธ ์ด์œ ๋Š” ํ•ด๊ฒฐ ๋Šฅ๋ ฅ์ด ์—†๋Š” ์ทจ์•ฝ์ ์ด ๋…ธ์ถœ๋  ๊ฐ€๋Šฅ์„ฑ์— ๋Œ€ํ•œ ์šฐ๋ ค๋‹ค. ๋ฉœ๋Ÿฐ์€ โ€œ์ทจ์•ฝ์ ์„ ๋ฐœ๊ฒฌํ•˜๊ณ ๋„ ๋Œ€์‘ํ•  ์—ฌ๋ ฅ์ด ์—†๋‹ค๋ฉด, ๊ทธ ์‚ฌ์‹ค์„ ๋ชจ๋ฅธ ์ฒ™ํ•  ์ˆ˜ ์—†๋‹ค๋Š” ์ ์ด ๋ถ€๋‹ด์ด ๋œ๋‹ค. ํ•˜์ง€๋งŒ ์กฐ์ง์ด ๋ฐœ๊ฒฌํ•˜๋“  ๋ง๋“  ํ•ด์ปค๋Š” ๊ฒฐ๊ตญ ๊ทธ ์ทจ์•ฝ์ ์„ ์ฐพ์•„๋‚ธ๋‹คโ€๋ผ๊ณ  ๋งํ–ˆ๋‹ค.

๊ทธ๋Ÿผ์—๋„ ๋ฉœ๋Ÿฐ๊ณผ ์—ฌ๋Ÿฌ ์ „๋ฌธ๊ฐ€๋“ค์€ ํ•ด์ปค๊ฐ€ AI๋ฅผ ํ™œ์šฉํ•ด ๋”์šฑ ์ •๊ตํ•˜๊ณ  ๋น ๋ฅด๊ฒŒ ๊ณต๊ฒฉ์„ ์ „๊ฐœํ•˜๋Š” ์ง€๊ธˆ์ด์•ผ๋ง๋กœ CISO๋“ค์ด ๊ณต๊ฒฉ์  ๋ณด์•ˆ ๋„์ž…๊ณผ ํ™•์žฅ์— ๋‚˜์„œ์•ผ ํ•  ์‹œ์ ์ด๋ผ๊ณ  ๊ฐ•์กฐํ•œ๋‹ค. ์ „๋ฌธ๊ฐ€๋“ค์€ ๋ณด์•ˆ ์ด์ฑ…์ž„์ž๊ฐ€ ๋ณด์•ˆ ๊ฒฉ์ฐจ๋ฅผ ๋” ์‹ ์†ํ•˜๊ฒŒ ํŒŒ์•…ํ•ด ํ•ด์†Œํ•  ์ˆ˜ ์žˆ์–ด์•ผ ํ•˜๋ฉฐ, ๊ณต๊ฒฉ์  ๋ณด์•ˆ์ด ๋ฐ”๋กœ ๊ทธ ์—ญ๋Ÿ‰์„ ์ œ๊ณตํ•œ๋‹ค๊ณ  ์„ค๋ช…ํ•œ๋‹ค.

๊ฐœ์ธ์ •๋ณด ๋ณดํ˜ธ ๋ฐ ๋ฐ์ดํ„ฐ ๊ฑฐ๋ฒ„๋„Œ์Šค ์ž๋™ํ™” ํ”Œ๋žซํผ ํŠธ๋žœ์„ผ๋“œ(Transcend)์˜ CISO ์ž๋ฌธ์„ ๋งก๊ณ  ์žˆ๋Š” ์—์ด๋ฏธ ์นด๋“œ์›ฐ์€ โ€œ์œ„ํ˜‘ ํ–‰์œ„์ž๋“ค์€ ์ด์ œ AI ๊ธฐ๋ฐ˜ ๋„๊ตฌ๋กœ ์šฐ๋ฆฌ์—๊ฒŒ ์ „๋ก€ ์—†๋Š” ๊ณต๊ฒฉ์„ ๊ฐœ๋ฐœํ•˜๊ณ  ์žˆ๋‹ค. ๊ณผ๊ฑฐ ์Šคํฌ๋ฆฝํŠธ ํ‚ค๋”” ๊ณต๊ฒฉ์€ ์˜ˆ์ธก ๊ฐ€๋Šฅํ•œ ์ˆ˜์ค€์ด์—ˆ์ง€๋งŒ, ์ง€๊ธˆ์˜ ๊ณต๊ฒฉ์€ ๋„ˆ๋ฌด ๋‚œํ•ดํ•ด ์ดํ•ดํ•˜๊ธฐ๋„ ์–ด๋ ค์šธ ์ •๋„โ€๋ผ๊ณ  ๋งํ–ˆ๋‹ค. ์ด์–ด โ€œ์Šค์บ๋‹์—๋งŒ ์˜์กดํ•œ๋‹ค๋ฉด ์ž ์žฌ์  ์ทจ์•ฝ์ ์„ ๋„ˆ๋ฌด ๋Šฆ๊ฒŒ ํฌ์ฐฉํ•˜๊ฑฐ๋‚˜ ์•„์˜ˆ ๋ฐœ๊ฒฌํ•˜์ง€ ๋ชปํ•  ์ˆ˜ ์žˆ๋‹ค. ๊ณต๊ฒฉ์  ๋ณด์•ˆ์„ ํ†ตํ•ด ์ง€์†์ ์œผ๋กœ ์ทจ์•ฝ์ ์„ ์ฐพ์•„์•ผ ํ•œ๋‹คโ€๋ผ๊ณ  ๊ฐ•์กฐํ–ˆ๋‹ค.

๊ณต๊ฒฉ์  ๋ณด์•ˆ์˜ ๋น„์ฆˆ๋‹ˆ์Šค์  ๊ทผ๊ฑฐ

๋ฉœ๋Ÿฐ์€ ๊ณต๊ฒฉ์  ๋ณด์•ˆ ํ™œ๋™์—์„œ ์–ป์€ ์ •๋ณด๋ฅผ ํ™œ์šฉํ•ด CISO๊ฐ€ ์ถ”๊ฐ€ ๋ณด์•ˆ ํˆฌ์ž์— ๋Œ€ํ•œ ๋น„์ฆˆ๋‹ˆ์Šค ๊ทผ๊ฑฐ๋ฅผ ๋งˆ๋ จํ•  ์ˆ˜ ์žˆ๋‹ค๊ณ  ์„ค๋ช…ํ•œ๋‹ค. ๊ทธ๋Š” โ€œ๋ฐ์ดํ„ฐ ๊ธฐ๋ฐ˜ ์ฆ๊ฑฐ๋Š” ์œ„ํ—˜์„ ์ˆ˜์น˜ํ™”ํ•˜๊ณ , ๋ฌธ์ œ ํ•ด๊ฒฐ์— ํ•„์š”ํ•œ ๋…ธ๋ ฅ๊ณผ ๋น„์šฉ์„ ๋ช…ํ™•ํžˆ ์ œ์‹œํ•˜๋Š” ๋ฐ ํฐ ๋„์›€์ด ๋œ๋‹คโ€๋ผ๊ณ  ๋งํ–ˆ๋‹ค.

ํ†ต์‹  ๊ธฐ์—… ๋ฏธํ…”(Mitel)์˜ CISO ๋นŒ ๋˜๋‹ˆ์–ธ ์—ญ์‹œ ์กฐ์ง ๋‚ด๋ถ€์—์„œ ๊ณต๊ฒฉ์  ๋ณด์•ˆ์„ ํ™•๋Œ€ํ•ด์•ผ ํ•˜๋Š” ์ด์œ ๊ฐ€ ์ถฉ๋ถ„ํ•˜๋‹ค๊ณ  ๋งํ•œ๋‹ค. ๋˜๋‹ˆ์–ธ์€ โ€œ๊ณต๊ฒฉ์  ๋ณด์•ˆ์€ ๊ฒฐ๊ตญ ๊ณต๊ฒฉ์ž์˜ ๊ด€์ ์—์„œ ์ƒ๊ฐํ•˜๋Š” ์ผ์ด๋‹ค. โ€˜๋‚ด๊ฐ€ ๊ณต๊ฒฉ์ž๋ผ๋ฉด ์–ด๋–ป๊ฒŒ ์นจํˆฌํ• ๊นŒ? ์–ด๋””์— ์—ด๋ ค ์žˆ๋Š” ํ‹ˆ์ด ์žˆ์„๊นŒ?โ€™๋ฅผ ์Šค์Šค๋กœ ํ™•์ธํ•จ์œผ๋กœ์จ ์ด๋ฅผ ์ฐพ์•„ ๊ณ ์น  ์ˆ˜ ์žˆ๋‹คโ€๋ผ๊ณ  ๋งํ–ˆ๋‹ค. ๊ทธ๋Š” โ€œ๋ณด์•ˆ์—์„œ๋Š” ๋ชจ๋ฅด๋Š” ๊ฒƒ์ด ๊ฐ€์žฅ ์œ„ํ—˜ํ•˜๋‹ค. ๊ณต๊ฒฉ์  ๋ณด์•ˆ์€ ๋‚ด๊ฐ€ ๋ชจ๋ฅด๋Š” ์˜์—ญ์„ ๋“œ๋Ÿฌ๋‚ด์ฃผ๊ณ , ์•Œ๊ฒŒ ๋˜๋ฉด ๋Œ€์‘ํ•  ์ˆ˜ ์žˆ๋‹คโ€๋ผ๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค.

๋˜๋‹ˆ์–ธ์€ ์ด๋ฏธ ์ทจ์•ฝ์  ๊ด€๋ฆฌ, ์นจํˆฌ ํ…Œ์ŠคํŠธ, ์œ„ํ˜‘ ํ—ŒํŒ… ๋“ฑ ์ผ๋ถ€ ๊ณต๊ฒฉ์  ๋ณด์•ˆ ์š”์†Œ๋ฅผ ๋„์ž…ํ•œ ์ƒํƒœ์ง€๋งŒ, ์ด๋ฅผ ๋”์šฑ ํ™•์žฅํ•˜๊ณ ์ž ํ•œ๋‹ค. ์˜ˆ๋ฅผ ๋“ค์–ด ํ˜„์žฌ ๋น„์ •๊ธฐ์ ์œผ๋กœ ์ˆ˜ํ–‰ํ•˜๋Š” ์œ„ํ˜‘ ํ—ŒํŒ…์„ ์ •์‹ ํ”„๋กœ๊ทธ๋žจ์œผ๋กœ ๊ตฌ์ถ•ํ•˜๋Š” ๋ฐฉ์•ˆ์„ ๊ฒ€ํ†  ์ค‘์ด๋‹ค.

๋”œ๋กœ์ดํŠธ ์บ๋‚˜๋‹ค์˜ IT ๋ณด์•ˆ ์„ ์ž„ ๋งค๋‹ˆ์ € ์šฐํŠธ์นด์‹œ ์ดˆ์šฐ๋‹ค๋ฆฌ๋„ ๋งŽ์€ ๊ธฐ์—…์ด ๊ณต๊ฒฉ์  ๋ณด์•ˆ์„ ๋„์ž…ํ•ด์•ผ ํ•œ๋‹ค๊ณ  ์ถ”์ฒœํ–ˆ๋‹ค. ๊ทธ๋Š” ๊ณต๊ฒฉ์  ๋ณด์•ˆ์„ โ€œ์ •์ฐฐ๋Œ€๋ฅผ ๋ณด๋‚ด ์„ฑ๋ฒฝ๊ณผ ๋ฐฉ์–ด ๊ตฌ์กฐ๊ฐ€ ์ œ๋Œ€๋กœ ์ž‘๋™ํ•˜๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๊ณผ์ •โ€์— ๋น„์œ ํ•˜๋ฉฐ, โ€œ๋” ์ฒด๊ณ„์ ์ด๊ณ  ์ง€์†์ ์ธ ๊ฒ€์ฆ ๋ฐฉ์‹โ€์ด๋ผ๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค. ๊ทธ๋Š” ์ตœ๊ทผ ์—”ํ„ฐํ”„๋ผ์ด์ฆˆ IT ํ™˜๊ฒฝ์ด ๋ณต์žกํ•ด์ง€๊ณ  ๊ณต๊ฒฉ ํ‘œ๋ฉด์ด ํ™•์žฅ๋˜๋ฉด์„œ ๊ณต๊ฒฉ์  ๋ณด์•ˆ์ด ํ•„์ˆ˜ ์š”์†Œ๊ฐ€ ๋๋‹ค๊ณ  ๋ง๋ถ™์˜€๋‹ค.

์ดˆ์šฐ๋‹ค๋ฆฌ๋Š” ์นจํˆฌ ํ…Œ์ŠคํŠธ์™€ ๊ฐ™์€ ๊ณต๊ฒฉ์  ๋ณด์•ˆ ์š”์†Œ๊ฐ€ ISO 27001 ๋“ฑ ๊ทœ์ œ์™€ ํ”„๋ ˆ์ž„์›Œํฌ, ๊ทธ๋ฆฌ๊ณ  ๋น„์ฆˆ๋‹ˆ์Šค ํŒŒํŠธ๋„ˆ์™€ ๊ณ ๊ฐ์˜ ์š”๊ตฌ๋กœ ์ธํ•ด ํ•„์ˆ˜์ ์œผ๋กœ ์š”๊ตฌ๋˜๋Š” ๊ฒฝ์šฐ๋„ ๋งŽ๋‹ค๊ณ  ๊ฐ•์กฐํ–ˆ๋‹ค.

๊ทธ๋Š” โ€œ๊ณต๊ฒฉ์  ๋ณด์•ˆ์€ ์กฐ์ง์ด ์œ„ํ—˜์„ ๋” ์ •ํ™•ํžˆ ์ดํ•ดํ•˜๋„๋ก ๋•๋Š”๋‹ค. ๊ฒฝํ—˜์  ํ‰๊ฐ€๋ฅผ ์ œ๊ณตํ•ด ์กฐ์ง ๋‚ด๋ถ€์˜ ์†”์งํ•จ์„ ๋Œ์–ด๋‚ด๊ณ , ๋ฌด์—‡์„ ์ž˜ํ•˜๊ณ  ๋ฌด์—‡์ด ๋ถ€์กฑํ•œ์ง€๋ฅผ ๋ช…ํ™•ํžˆ ๋ณด์—ฌ์ค€๋‹คโ€๋ผ๋ฉฐ โ€œ์‹ค์ œ ์œ„ํ—˜์„ ์ฆ๋ช…ํ•˜๋Š” ํ™•์‹คํ•œ ๊ทผ๊ฑฐ๊ฐ€ ๋œ๋‹คโ€๋ผ๊ณ  ๋งํ–ˆ๋‹ค.

ํ•˜์ง€๋งŒ ์ดˆ์šฐ๋‹ค๋ฆฌ์™€ ์ „๋ฌธ๊ฐ€๋“ค์€ ์ง„์ •ํ•œ ๊ฐ€์น˜๋ฅผ ์–ป๊ธฐ ์œ„ํ•ด์„œ๋Š” ๊ณต๊ฒฉ์  ๋ณด์•ˆ ์š”์†Œ๋ฅผ ๋‹จ์ˆœํžˆ ๋„์ž…ํ•˜๋Š” ๊ฒƒ์„ ๋„˜์–ด, ๊ณต๊ฒฉ ํ”„๋กœ๊ทธ๋žจ๊ณผ ๋ฐฉ์–ด ํ”„๋กœ๊ทธ๋žจ์„ ํ†ตํ•ฉํ•ด์•ผ ํ•œ๋‹ค๊ณ  ๊ฐ•์กฐํ•œ๋‹ค.

์ดˆ์šฐ๋‹ค๋ฆฌ๋Š” โ€œ๊ณต๊ฒฉ์€ ๋ฐฉ์–ด๋ฅผ ๋Œ€์ฒดํ•˜๋Š” ๊ฒƒ์ด ์•„๋‹ˆ๋ผ ๋ฐฉ์–ด๊ฐ€ ๋†“์นœ ๋ถ€๋ถ„์„ ๊ฐ•ํ™”ํ•œ๋‹ค. ๊ณต๊ฒฉ์  ๋ณด์•ˆ์€ ๋ฐฉ์–ด ํƒœ์„ธ๋ฅผ ํ•œ์ธต ๊ฐ•ํ™”ํ•˜๋ฉฐ, ๊ณต๊ฒฉ๊ณผ ๋ฐฉ์–ด๊ฐ€ ํ•จ๊ป˜ ์ž‘๋™ํ•ด์•ผ ์กฐ์ง์ด ์ˆ˜๋™์  ๋Œ€์‘์—์„œ ๋ฒ—์–ด๋‚˜ ์„ ์ œ์  ๋Œ€์‘์œผ๋กœ ์ „ํ™˜ํ•  ์ˆ˜ ์žˆ๋‹คโ€๋ผ๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค. ๊ทธ๋Š” โ€œ์ด๋ ‡๊ฒŒ ํ•ด์•ผ ํ•ด์ปค๊ฐ€ ์นจํˆฌํ•  ๊ฐ€๋Šฅ์„ฑ์„ ์ค„์ผ ์ˆ˜ ์žˆ๋‹คโ€๋ผ๊ณ  ๋ง๋ถ™์˜€๋‹ค.
dl-ciokorea@foundryco.com

โ€œ์‹ค์‹œ๊ฐ„ ๋ฐ์ดํ„ฐ ๊ธฐ์ˆ  ๊ฒฝ์Ÿ ๋ถ„๊ธฐ์ โ€ IBM, ๋ฐ์ดํ„ฐยท์ž๋™ํ™” ํฌํŠธํด๋ฆฌ์˜ค ํ™•์žฅ ์œ„ํ•ด ์ปจํ”Œ๋ฃจ์–ธํŠธ ์ธ์ˆ˜

9 December 2025 at 02:56

IBM์€ AI ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜ ๊ตฌ์ถ•์„ ์œ„ํ•œ ๋„๊ตฌ ํฌํŠธํด๋ฆฌ์˜ค๋ฅผ ํ™•์žฅํ•˜๊ธฐ ์œ„ํ•ด ํด๋ผ์šฐ๋“œ ๋„ค์ดํ‹ฐ๋ธŒ ๊ธฐ๋ฐ˜ ๊ธฐ์—…์šฉ ๋ฐ์ดํ„ฐ ์ŠคํŠธ๋ฆฌ๋ฐ ํ”Œ๋žซํผ ์ปจํ”Œ๋ฃจ์–ธํŠธ(Confluent) ์ธ์ˆ˜์— ํ•ฉ์˜ํ–ˆ๋‹ค.

IBM์€ 8์ผ ๊ณต์‹ ๋ณด๋„์ž๋ฃŒ๋ฅผ ํ†ตํ•ด ์ปจํ”Œ๋ฃจ์–ธํŠธ๊ฐ€ IBM ํ•˜์ด๋ธŒ๋ฆฌ๋“œ ํด๋ผ์šฐ๋“œ์™€ AI ์ „๋žต์— ์ž์—ฐ์Šค๋Ÿฝ๊ฒŒ ๋ถ€ํ•ฉํ•œ๋‹ค๋ฉฐ, ์ด๋ฒˆ ์ธ์ˆ˜๊ฐ€ โ€˜ํฌํŠธํด๋ฆฌ์˜ค ์ „๋ฐ˜์—์„œ ์ƒ๋‹นํ•œ ์ œํ’ˆ ์‹œ๋„ˆ์ง€โ€™๋ฅผ ์ด๋Œ ๊ฒƒ์œผ๋กœ ๊ธฐ๋Œ€ํ•œ๋‹ค๊ณ  ๋ฐํ˜”๋‹ค.

์ปจํ”Œ๋ฃจ์–ธํŠธ๋Š” ์—ฌ๋Ÿฌ ๋ฐ์ดํ„ฐ ์†Œ์Šค๋ฅผ ์—ฐ๊ฒฐํ•˜๊ณ  ๋ฐ์ดํ„ฐ๋ฅผ ์ •์ œํ•ด ์ผ๊ด€๋œ ํ˜•ํƒœ๋กœ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ์ง€์›ํ•˜๋Š” ๊ธฐ์ˆ ์„ ์ œ๊ณตํ•œ๋‹ค. ์ด๋ฅผ ํ†ตํ•ด ๊ณ ๊ฐ์€ ์•„ํŒŒ์น˜ ์นดํ”„์นด ๊ธฐ๋ฐ˜์˜ ์˜คํ”ˆ์†Œ์Šค ๋ถ„์‚ฐ ์ด๋ฒคํŠธ ์ŠคํŠธ๋ฆฌ๋ฐ ํ”Œ๋žซํผ ์œ„์—์„œ ์„œ๋น„์Šค๋ฅผ ๊ตฌ์ถ•ํ•  ์ˆ˜ ์žˆ์–ด, ์ง์ ‘ ์„œ๋ฒ„ ํด๋Ÿฌ์Šคํ„ฐ๋ฅผ ๊ตฌ๋งคํ•˜๊ฑฐ๋‚˜ ๊ด€๋ฆฌํ•ด์•ผ ํ•˜๋Š” ๋ถ€๋‹ด์„ ๋œ ์ˆ˜ ์žˆ๋‹ค. ์‚ฌ์šฉ ๊ธฐ์—…์€ ํด๋Ÿฌ์Šคํ„ฐ ๋‹จ์œ„๋กœ ์›” ๊ตฌ๋…๋ฃŒ๋ฅผ ์ง€๋ถˆํ•˜๋ฉฐ, ์ €์žฅ ๋ฐ์ดํ„ฐ์™€ ๋ฐ์ดํ„ฐ ์ž…์ถœ๋ ฅ๋Ÿ‰์— ๋”ฐ๋ผ ์ถ”๊ฐ€ ๋น„์šฉ์„ ๋ถ€๋‹ดํ•˜๋Š” ๋ฐฉ์‹์ด๋‹ค.

IBM์€ ์ด๋ฒˆ ๊ฑฐ๋ž˜๋ฅผ 110์–ต ๋‹ฌ๋Ÿฌ(์•ฝ 16์กฐ ์›) ๊ทœ๋ชจ๋กœ ์‚ฐ์ •ํ–ˆ์œผ๋ฉฐ, ์ธ์ˆ˜๋Š” ๋‚ด๋…„ ์ค‘๋ฐ˜ ๋งˆ๋ฌด๋ฆฌ๋  ๊ฒƒ์œผ๋กœ ๋ณด๊ณ  ์žˆ๋‹ค.

์ปจํ”Œ๋ฃจ์–ธํŠธ์˜ ์„ค๋ฆฝ์ž์ด์ž CEO์ธ ์ œ์ด ํฌ๋ ™์Šค๋Š” ์ง์›๋“ค์—๊ฒŒ ๋ณด๋‚ธ ์ด๋ฉ”์ผ์—์„œ โ€œIBM์€ ์šฐ๋ฆฌ๊ฐ€ ๊ทธ๋ฆฌ๋Š” ๋ฏธ๋ž˜์™€ ๋™์ผํ•œ ๋ฐฉํ–ฅ์„ ๋ณด๊ณ  ์žˆ๋‹คโ€๋ผ๋ฉฐ โ€œ๊ธฐ์—…์€ ๋ชจ๋“  ๋น„์ฆˆ๋‹ˆ์Šค ์˜์—ญ์—์„œ ๋ฐ์ดํ„ฐ๊ฐ€ ์ž์œ ๋กญ๊ณ  ์•ˆ์ •์ ์œผ๋กœ ํ๋ฅด๋Š” ์ง€์†์ ์ด๊ณ  ์ด๋ฒคํŠธ ์ค‘์‹ฌ์˜ ์ธํ…”๋ฆฌ์ „์Šค๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์šด์˜๋  ๊ฒƒโ€์ด๋ผ๊ณ  ์ „ํ–ˆ๋‹ค.

์ปจ์„คํŒ… ๊ธฐ์—… ์ธํฌํ…Œํฌ๋ฆฌ์„œ์น˜๊ทธ๋ฃน์˜ ์„ ์ž„ ์ž๋ฌธ๊ฐ€ ์Šค์ฝง ๋น„ํด๋ฆฌ๋Š” ์ด๋ฒˆ ์ธ์ˆ˜๊ฐ€ IBM์— ๋งค์šฐ ์œ ๋ฆฌํ•œ ๊ฒฐ์ •์ด๋ผ๊ณ  ํ‰๊ฐ€ํ–ˆ๋‹ค. ๋น„ํด๋ฆฌ๋Š” โ€œ์ปจํ”Œ๋ฃจ์–ธํŠธ๋Š” ์‹ค์‹œ๊ฐ„ ๋ฐ์ดํ„ฐ๋ฅผ ๋ชจ๋‹ˆํ„ฐ๋งํ•  ์ˆ˜ ์žˆ๋Š” ์—ญ๋Ÿ‰์„ ์ œ๊ณตํ•ด IBM์˜ ์ฐจ์„ธ๋Œ€ AI ํ”Œ๋žซํผ์ธ ์™“์ŠจX ํ”Œ๋žซํผ์˜ ๋ถ€์กฑํ–ˆ๋˜ ๋ถ€๋ถ„์„ ์‹ค์งˆ์ ์œผ๋กœ ๋ณด์™„ํ•œ๋‹คโ€๋ผ๊ณ  ์„ค๋ช…ํ•˜๋ฉฐ, ์ปจํ”Œ๋ฃจ์–ธํŠธ์˜ ๊ธฐ์ˆ ์ด ์—…๊ณ„ ํ‘œ์ค€ ์‹ค์‹œ๊ฐ„ ๋ฐ์ดํ„ฐ ์ŠคํŠธ๋ฆผ ๊ด€๋ฆฌยท์ฒ˜๋ฆฌ ๋ฐฉ์‹์— ๊ธฐ๋ฐ˜ํ•˜๊ณ  ์žˆ๋‹ค๋Š” ์ ๋„ ์–ธ๊ธ‰ํ–ˆ๋‹ค.

๋น„ํด๋ฆฌ๋Š” ๋˜ โ€œIBM์€ ์ด๋ฏธ AI ๋ชจ๋ธ์„ ๊ตฌ์ถ•ํ•˜๊ณ  ํ•™์Šตํ•˜๋Š” ๋ฐ ํ•„์š”ํ•œ ์ฃผ์š” ์š”์†Œ๋“ค์„ ๊ฐ–์ถ”๊ณ  ์žˆ๋‹คโ€๋ผ๋ฉฐ โ€œ์ปจํ”Œ๋ฃจ์–ธํŠธ๋Š” ์กฐ์ง ์ „์ฒด์—์„œ ์ง€์†์ ์œผ๋กœ ์ƒ์„ฑ๋˜๋Š” ์‹ค์‹œ๊ฐ„ ๋ฐ์ดํ„ฐ๋ฅผ ๋ชจ๋ธ์— ๊ณต๊ธ‰ํ•˜๋Š” ์—ฐ๊ฒฐ ์กฐ์ง ์—ญํ• ์„ ํ•˜๊ฒŒ ๋œ๋‹ค. ์ด๋ฅผ ํ†ตํ•ด ์‹ค์‹œ๊ฐ„ ๋ฐ์ดํ„ฐ์— ๋ฐ˜์‘ํ•  ์ˆ˜ ์žˆ๋Š” ๋” ์ •๊ตํ•œ AI ์—์ด์ „ํŠธ์™€ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜ ๊ฐœ๋ฐœ์ด ๊ฐ€์†ํ™”๋  ๊ฒƒโ€์ด๋ผ๊ณ  ๋ถ„์„ํ–ˆ๋‹ค.

๊ทธ๋Š” ์ด๋ฒˆ ์ธ์ˆ˜๊ฐ€ ์ตœ๊ทผ IBM์˜ ์ธ์ˆ˜ ์ค‘ ๊ฐ€์žฅ ํฐ ๊ทœ๋ชจ์ด๋ฉฐ, ์žฅ๊ธฐ์  ์ „๋žต์˜ ์ผํ™˜์ด๋ผ๊ณ  ์ง€์ ํ–ˆ๋‹ค. ๋น„ํด๋ฆฌ๋Š” โ€œIBM์€ ์Šค๋…ธ์šฐํ”Œ๋ ˆ์ดํฌ์™€ ๋ฐ์ดํ„ฐ๋ธŒ๋ฆญ์Šค ๊ฐ™์€ AI ๋„ค์ดํ‹ฐ๋ธŒ ๋น…๋ฐ์ดํ„ฐ ๊ธฐ์—…๊ณผ ๊ฒฝ์Ÿํ•˜๊ธฐ ์œ„ํ•ด ์„ ์ œ์ ์œผ๋กœ ์›€์ง์ด๊ณ  ์žˆ๋‹คโ€๋ผ๋ฉฐ โ€œ์„œ๋กœ ๋‹ค๋ฅธ ๋…๋ฆฝ ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ์กฐํ•ฉํ•˜๋Š” ๋ฐฉ์‹๋ณด๋‹ค, ์™„์ „ํ•œ ์ˆ˜์ง ํ†ตํ•ฉํ˜• AI ํ”Œ๋žซํผ์ธ ์™“์ŠจX๊ฐ€ ๊ธฐ์—… ๊ณ ๊ฐ์—๊ฒŒ ๋” ๋งค๋ ฅ์ ์ผ ๊ฒƒ์ด๋ผ๋Š” ํŒ๋‹จ์„ ๋‚ด๋ฆฐ ์ƒํƒœโ€๋ผ๊ณ  ์ „ํ–ˆ๋‹ค.

์–‘์ธก ๋ชจ๋‘์—๊ฒŒ ์ด์ต

๋น„ํด๋ฆฌ๋Š” ์ด๋ฒˆ ๊ฒฐ์ •์ด ๋ ˆ๋“œํ–‡ ์ธ์ˆ˜(345์–ต ๋‹ฌ๋Ÿฌ)์™€ ์ตœ๊ทผ์˜ ํ•ด์‹œ์ฝ”ํ”„ ์ธ์ˆ˜(64์–ต ๋‹ฌ๋Ÿฌ)์ฒ˜๋Ÿผ IBM์ด ๊ทธ๋™์•ˆ ์ถ”์ง„ํ•ด ์˜จ ์ „๋žต๊ณผ ํ๋ฆ„์„ ๊ฐ™์ดํ•œ๋‹ค๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค. ๋ฆฌ๋ˆ…์Šค, ํ…Œ๋ผํผยท๋ณผํŠธ, ์นดํ”„์นด ๋“ฑ ์ง€๋ฐฐ์ ์ธ ์˜คํ”ˆ์†Œ์Šค ํ‘œ์ค€ ์œ„์— ๊ตฌ์ถ•๋œ ์ด๋“ค ๊ธฐ์ˆ ์„ ๋”ํ•˜๋ฉฐ, IBM์€ ERP ๋ฒค๋”๋‚˜ ๊ฐœ๋ณ„ ๊ธฐ๋Šฅ ์œ„์ฃผ์˜ ํฌ์ธํŠธ ์†”๋ฃจ์…˜๊ณผ ๊ตฌ๋ถ„๋˜๋Š” ๋…๋ฆฝํ˜• ์ˆ˜์งยทํ•˜์ด๋ธŒ๋ฆฌ๋“œ ํด๋ผ์šฐ๋“œ ์ „๋žต์„ ์™„์ „ํ•œ AI ์Šคํƒ ํ˜•ํƒœ๋กœ ์ œ๊ณตํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋๋‹ค.

๊ฐ€ํŠธ๋„ˆ ์ˆ˜์„ ๋””๋ ‰ํ„ฐ ์• ๋„๋ฆฌ์ŠคํŠธ ์•ค๋“œ๋ฃจ ํ—˜ํ”„๋ฆฌ์Šค๋„ IBM MQ๋ฅผ ๋ณด์œ ํ•œ IBM์ด ์ด๋ฒคํŠธ ๋ธŒ๋กœ์ปค ์‹œ์žฅ์—์„œ ์ด๋ฏธ ์ปจํ”Œ๋ฃจ์–ธํŠธ์™€ ๊ฒฝ์Ÿํ•˜๊ณ  ์žˆ๋‹ค๊ณ  ์งš์—ˆ๋‹ค. ๊ทธ๋Š” โ€œ์ผ๋ถ€ ๊ธฐ๋Šฅ์€ ๊ฒน์น˜์ง€๋งŒ IBM MQ์™€ ์นดํ”„์นด๋Š” ์„œ๋กœ ๋‹ค๋ฅธ ๋ฌธ์ œ์™€ ํ™œ์šฉ ์‚ฌ๋ก€๋ฅผ ๋‹ค๋ฃฌ๋‹คโ€๋ผ๋ฉฐ โ€œIBM์€ ๋‘ ์ œํ’ˆ์„ ๊ฒฐํ•ฉํ•ด ์ด๋ฒคํŠธ ๊ธฐ๋ฐ˜ ์•„ํ‚คํ…์ฒ˜ ์ „๋ฐ˜์„ ํฌ๊ด„ํ•˜๋Š” ์™„์„ฑ๋„ ๋†’์€ ์ด๋ฒคํŠธ ๋ธŒ๋กœ์ปค ์ œํ’ˆ๊ตฐ์„ ์ œ๊ณตํ•  ๊ธฐํšŒ๋ฅผ ์–ป๊ฒŒ ๋๋‹คโ€๋ผ๊ณ  ๋งํ–ˆ๋‹ค.

์™“์ŠจX์˜ ํ•ต์‹ฌ ๊ตฌ์„ฑ์š”์†Œ ํ™•๋ณด

์ปจ์„คํŒ… ๊ธฐ์—… ํ“จ์ฒ˜๋Ÿผ๋ฆฌ์„œ์น˜ ๋ถ€์‚ฌ์žฅ ๋ฏธ์น˜ ์• ์А๋ฆฌ๋Š” ์ปจํ”Œ๋ฃจ์–ธํŠธ ์ธ์ˆ˜๊ฐ€ ์™“์Šจx ๊ธฐ์ˆ ์—์„œ ๋น ์ ธ ์žˆ๋˜ ํ•ต์‹ฌ ์š”์†Œ๋ฅผ ์ฑ„์šฐ๋ฉฐ, IBM์— ์‹ค์‹œ๊ฐ„ยท๊ด€๋ฆฌํ˜• ๋ฐ์ดํ„ฐ ํ๋ฆ„์„ ๋’ท๋ฐ›์นจํ•˜๋Š” ์˜คํ”ˆ์†Œ์Šค ๊ธฐ๋ฐ˜ ๊ธฐ์ˆ  ๊ธฐ๋ฐ˜์„ ์ œ๊ณตํ•œ๋‹ค๊ณ  ํ‰๊ฐ€ํ–ˆ๋‹ค.

๊ทธ๋Š” ๋˜ํ•œ ์ด๋ฒˆ ์ธ์ˆ˜๊ฐ€ ์ตœ๊ทผ IBM์˜ ๋ฐ์ดํ„ฐ ๊ด€๋ จ ์ธ์ˆ˜๋“ค์„ ํ•˜๋‚˜์˜ ์ผ๊ด€๋œ ์•„ํ‚คํ…์ฒ˜๋กœ ๋ฌถ๋Š” ์—ญํ• ์„ ํ•œ๋‹ค๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค. ์• ์А๋ฆฌ๋Š” โ€œ์—ฌ๊ธฐ์„œ ์ค‘์š”ํ•œ ๊ฐ€์น˜๋Š” ๋‹จ์ˆœํžˆ ์นดํ”„์นด๋ผ๋Š” ๊ธฐ์ˆ ์ด ์•„๋‹ˆ๋ผ, IBM์˜ AI ํฌํŠธํด๋ฆฌ์˜ค ์ „๋ฐ˜์— ์‹ ์„ ํ•˜๊ณ  ๋ฌธ๋งฅ ์žˆ๋Š” ๋ฐ์ดํ„ฐ๋ฅผ ์ผ๊ด€์„ฑ๊ณผ ํ†ต์ œ๋ ฅ์„ ๊ฐ–์ถ˜ ํ˜•ํƒœ๋กœ ์ „๋‹ฌํ•  ์ˆ˜ ์žˆ๋Š” ๋Šฅ๋ ฅโ€์ด๋ผ๊ณ  ๊ฐ•์กฐํ–ˆ๋‹ค.

๋˜ ๋‹ค๋ฅธ ์ปจ์„คํŒ… ๊ธฐ์—… ๊ทธ๋ ˆ์ดํ•˜์šด๋“œ๋ฆฌ์„œ์น˜ ์ตœ๊ณ  ์• ๋„๋ฆฌ์ŠคํŠธ ์‚ฐ์น˜ํŠธ ๋น„๋ฅด ๊ณ ๊ธฐ์•„๋Š” ์ธ์ˆ˜ ๋ฐœํ‘œ ์งํ›„ ๋ฐœ๊ฐ„ํ•œ ๋ณด๊ณ ์„œ์—์„œ ์ด๋ฒˆ ์ธ์ˆ˜๊ฐ€ โ€œ๊ฐ€๊ฒฉ์ด๋‚˜ ๋‹จ์ˆœํ•œ ํฌํŠธํด๋ฆฌ์˜ค ํ™•์žฅ์„ ๋„˜์–ด์„œ๋Š” ๋ณ€๊ณก์ โ€์ด๋ผ๊ณ  ๋ถ„์„ํ–ˆ๋‹ค. ๊ทธ๋Š” โ€œ์ด ์ธ์ˆ˜๊ฐ€ ๋ณด์—ฌ์ฃผ๋Š” ์ง„์งœ ๋ณ€ํ™”๋Š” ํ˜„๋Œ€ ๋””์ง€ํ„ธ ๊ธฐ์—…์˜ โ€˜์ƒ๋ช…์„ โ€™์ธ ์‹ค์‹œ๊ฐ„ ๋ฐ์ดํ„ฐ๋ฅผ ๋ˆ„๊ฐ€ ํ†ต์ œํ•  ๊ฒƒ์ธ๊ฐ€ ํ•˜๋Š” ๋ฌธ์ œโ€๋ผ๊ณ  ๋งํ–ˆ๋‹ค.

๊ณ ๊ธฐ์•„๋Š” ์ด๊ฒƒ์ด ์ „์ˆ ์  ๊ฒฐ์ •์ด ์•„๋‹ˆ๋ผ ์ˆ˜๋…„๊ฐ„ ๊ตฌ์ถ•ํ•ด ์˜จ ์•„ํ‚คํ…์ฒ˜์˜ ์ „๋žต์  ์™„์„ฑ์ด๋ผ๊ณ  ํ‰๊ฐ€ํ–ˆ๋‹ค. ๊ทธ๋Š” โ€œ๊ธฐ์—… ๋ฆฌ๋”์—๊ฒŒ ์ด๋ฒˆ ๋ณ€ํ™”๋Š” ์ง€๋„๋ฅผ ๋ฐ”๊พธ๋Š” ์ผโ€์ด๋ผ๋ฉฐ โ€œAI๋Š” ๋” ์ด์ƒ ์‹œ์Šคํ…œ์˜ ๊ฐ€์žฅ์ž๋ฆฌ์— ๋จธ๋ฌผ์ง€ ์•Š๊ณ  ์•„ํ‚คํ…์ฒ˜ ์ค‘์‹ฌ์œผ๋กœ ์ด๋™ํ•œ๋‹ค. ์ปจํ”Œ๋ฃจ์–ธํŠธ๋Š” ๊ทธ ์ค‘์‹ฌ์„ ์‹ค์‹œ๊ฐ„ยท์ƒํ™ฉ ์ธ์‹ยท์—ฐ๊ฒฐ ์ƒํƒœ๋กœ ๋งŒ๋“œ๋Š” ๊ณ„์ธต์ด ๋  ๊ฒƒโ€์ด๋ผ๊ณ  ์ „๋งํ–ˆ๋‹ค. ์ด์–ด โ€œ์ด๋ฒˆ ์ธ์ˆ˜๋กœ IBM์€ ์˜ˆ์ธก๋งŒ ํ•˜๋Š” AI๊ฐ€ ์•„๋‹ˆ๋ผ, ๊นจ๋—ํ•˜๊ณ  ์—ฐ๊ฒฐ๋˜๊ณ  ๋Š์ž„์—†์ด ํ๋ฅด๋Š” ๋ฐ์ดํ„ฐ์— ๊ธฐ๋ฐ˜ํ•ด โ€˜๋“ฃ๊ณ  ๋ฐ˜์‘ํ•˜๋Š” AIโ€™๋ฅผ ์ œ๊ณตํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋œ๋‹คโ€๋ผ๊ณ  ๋ถ„์„ํ–ˆ๋‹ค.

์˜ด๋””์•„ ๋ฐ์ดํ„ฐยทAI ์ˆ˜์„ ์• ๋„๋ฆฌ์ŠคํŠธ ์Šคํ‹ฐ๋ธ ์นดํƒ„์ฐจ๋…ธ๋„ โ€œ์ง€๊ธˆ ์ฃผ์š” ๊ธฐ์—…๋“ค์€ ๋ชจ๋‘ ์—”๋“œ ํˆฌ ์—”๋“œ ๋ฐ์ดํ„ฐ ํ”Œ๋žซํผ์„ ๊ตฌ์ถ•ํ•˜๋Š” ๋‹จ๊ณ„์— ์žˆ๋‹คโ€๋ผ๊ณ  ๋งํ–ˆ๋‹ค. ๊ทธ๋Š” โ€œ์ปจํ”Œ๋ฃจ์–ธํŠธ ์ธ์ˆ˜๋Š” IBM์ด ์ •์  ๋ฐ์ดํ„ฐ์™€ ๋™์  ๋ฐ์ดํ„ฐ, ๋น„์ •ํ˜• ๋ฐ์ดํ„ฐ์™€ ์ •ํ˜• ๋ฐ์ดํ„ฐ๋ฅผ ๋ชจ๋‘ ๋‹ค๋ฃฐ ์ˆ˜ ์žˆ๋„๋ก ํผ์ฆ์˜ ๋งˆ์ง€๋ง‰ ์กฐ๊ฐ์„ ์ฑ„์šด ์…ˆโ€์ด๋ผ๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค. ์ด์–ด โ€œ๊ธฐ์—…๋“ค์€ ์ƒ์„ฑํ˜• AI์™€ ์—์ด์ „ํŠธํ˜• AI๋ฅผ ์‹ค์‹œ๊ฐ„ ๋ฐ์ดํ„ฐ์™€ ์ŠคํŠธ๋ฆฌ๋ฐ ๋ฐ์ดํ„ฐ์— ์ ์šฉํ•˜๊ณ ์ž ํ•œ๋‹ค. IBM์€ ์‹œ์žฅ์—์„œ ๊ฐ€์žฅ ํฐ ํ”Œ๋ ˆ์ด์–ด๋ฅผ ํ™•๋ณดํ•œ ๊ฒƒโ€์ด๋ผ๊ณ  ๋ง๋ถ™์˜€๋‹ค.

๋น„ํด๋ฆฌ๋Š” ๋˜ ์ปจํ”Œ๋ฃจ์–ธํŠธ์˜ ์ตœ๊ทผ ๋งค์ถœ ์„ฑ์žฅ ๋‘”ํ™”์™€ ๋งค๊ฐ ๊ฒ€ํ†  ๋ณด๋„ ๋“ฑ์„ ๊ณ ๋ คํ•˜๋ฉด ์ธ์ˆ˜ ์‹œ์ ๋„ ์ ์ ˆํ–ˆ๋‹ค๊ณ  ํ‰๊ฐ€ํ–ˆ๋‹ค. ๊ทธ๋Š” โ€œ๊ฒฐ๊ตญ ์ด๋ฒˆ ๊ฑฐ๋ž˜๋Š” ์–‘์ธก ๋ชจ๋‘์—๊ฒŒ ์ด์ต์ด ๋˜๋Š” ๊ฒฐ์ •โ€์ด๋ผ๋ฉฐ โ€œIBM์€ ์ด์ œ ๊ณ ์œ„ํ—˜ ์ „๋žต์„ ํƒํ•œ ์…ˆ์ด๊ณ , ์ตœ๊ณ ์˜ AI ๋ชจ๋ธ์„ ๋ณด์œ ํ•˜๋Š” ๊ฒƒ๋งŒ์œผ๋กœ๋Š” ์ถฉ๋ถ„ํ•˜์ง€ ์•Š์œผ๋ฉฐ ๋ฐ์ดํ„ฐ ํ๋ฆ„์„ ํ†ต์ œํ•˜๋Š” ๋Šฅ๋ ฅ์ด ํ›จ์”ฌ ์ค‘์š”ํ•ด์งˆ ๊ฒƒ์ด๋ผ๊ณ  ํŒ๋‹จํ–ˆ๋‹คโ€๋ผ๊ณ  ๋งํ–ˆ๋‹ค.
dl-ciokorea@foundryco.com

AI ๋„์ž…, ์™œ ์‹คํŒจํ• ๊นŒ? ์‚ฌ๋žŒ์˜ ํŒ๋‹จ๋ ฅ๊ณผ ๋ฐ์ดํ„ฐ ๊ฑฐ๋ฒ„๋„Œ์Šค๊ฐ€ ํ•„์š”ํ•œ ๋•Œ

9 December 2025 at 02:42

AI๋ฅผ ๋„์ž…ํ•˜๋ ค ํ•  ๋•Œ ๊ธฐ์—…์˜ ๋ชฉํ‘œ์™€ ์‹ค์ œ ์‹คํ–‰ ์‚ฌ์ด์˜ ๊ฐ„๊ทน์„ ๋ฉ”์šฐ๊ธฐ ์–ด๋ ค์šธ ์ˆ˜ ์žˆ๋‹ค. ๊ธฐ์—…์€ ์ œํ’ˆ๊ณผ ์—…๋ฌด ํ๋ฆ„, ์ „๋žต ์ „๋ฐ˜์— AI๋ฅผ ๋…น์—ฌ๋‚ด๋ ค ํ•˜์ง€๋งŒ, ๋ถ„์‚ฐ๋œ ๋ฐ์ดํ„ฐ์™€ ๋ถˆ๋ช…ํ™•ํ•œ ๊ณ„ํš์ด๋ผ๋Š” ๊ฑธ๋ฆผ๋Œ์— ๊ฐ€๋กœ๋ง‰ํ˜€ ์‹คํŒจํ•˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ๋‹ค.

์†Œํ”„ํŠธ์›จ์–ด ๊ฐœ๋ฐœ ๊ธฐ์—… ์•„๋ผ์Šค(Aras)์˜ CTO ๋กญ ๋งฅ์–ด๋ฒ ๋‹ˆ๋Š” โ€œํ˜‘๋ ฅ ์ค‘์ธ ๊ธ€๋กœ๋ฒŒ ์ œ์กฐ๊ธฐ์—…์—์„œ ์ž์ฃผ ๋งˆ์ฃผํ•˜๋Š” ์–ด๋ ค์›€์ด ๋ฐ”๋กœ ์ด ๋ฌธ์ œ๋‹คโ€๋ผ๊ณ  ๋งํ–ˆ๋‹ค. ๊ทธ๋Š” โ€œ๋งŽ์€ ๊ธฐ์—…์ด AI๊ฐ€ ํ•„์š”ํ•˜๋‹ค๊ณ  ๋ง‰์—ฐํžˆ ์ƒ๊ฐํ•œ๋‹ค. ์‹ค์ œ ์ถœ๋ฐœ์ ์€ AI๊ฐ€ ์ง€์›ํ•ด์•ผ ํ•˜๋Š” ์˜์‚ฌ๊ฒฐ์ •์„ ๋จผ์ € ์ •์˜ํ•˜๊ณ , ์ด๋ฅผ ๋’ท๋ฐ›์นจํ•  ์ ์ ˆํ•œ ๋ฐ์ดํ„ฐ๊ฐ€ ์ค€๋น„๋ผ ์žˆ๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๋ฐ ์žˆ๋‹คโ€๋ผ๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค.

์ตœ๊ทผ ๋งฅํ‚จ์ง€๊ฐ€ ์‹ค์‹œํ•œ ๊ธ€๋กœ๋ฒŒ ์กฐ์‚ฌ์— ๋”ฐ๋ฅด๋ฉด, ๊ฒฝ์˜์ง„์˜ ์•ฝ 3๋ถ„์˜ 2๊ฐ€ ์ž์‚ฌ ์กฐ์ง์ด AI๋ฅผ ๋น„์ฆˆ๋‹ˆ์Šค ์ „๋ฐ˜์— ํ™•์žฅํ•˜๋Š” ๋ฐ ์–ด๋ ค์›€์„ ๊ฒช๊ณ  ์žˆ๋‹ค๊ณ  ๋‹ตํ–ˆ๋‹ค. ๋งŽ์€ ๊ธฐ์—…์ด ํŒŒ์ผ๋Ÿฟ ๋‹จ๊ณ„์—์„œ ๋” ๋‚˜์•„๊ฐ€์ง€ ๋ชปํ•˜๊ณ , ํŠนํžˆ ๊ทœ๋ชจ๊ฐ€ ์ž‘์€ ์กฐ์ง์ผ์ˆ˜๋ก ํ•œ๊ณ„๊ฐ€ ๋”์šฑ ๋šœ๋ ทํ•˜๊ฒŒ ๋‚˜ํƒ€๋‚œ๋‹ค. ํŒŒ์ผ๋Ÿฟ์ด ์„ฑ์ˆ™ ๋‹จ๊ณ„๋กœ ๋ฐœ์ „ํ•˜์ง€ ๋ชปํ•˜๋ฉด ํˆฌ์ž ๊ฒฐ์ •์„ ์ •๋‹นํ™”ํ•˜๊ธฐ๋„ ์ ์  ์–ด๋ ค์›Œ์ง„๋‹ค.

AI ๋„์ž…์—์„œ ํ”ํžˆ ๋“œ๋Ÿฌ๋‚˜๋Š” ๋ฌธ์ œ๋Š” ๋ฐ์ดํ„ฐ๊ฐ€ ์• ์ดˆ์— AI ์ ์šฉ์— ์ ํ•ฉํ•œ ์ƒํƒœ๊ฐ€ ์•„๋‹ˆ๋ผ๋Š” ์ ์ด๋‹ค. ๊ธฐ์—…์€ ๋ถ„์ ˆ๋œ ์ถœ์ฒ˜๋‚˜ ์ •์ œ๋˜์ง€ ์•Š์€ ๋ฐ์ดํ„ฐ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ๋ณต์žกํ•œ ๋ชจ๋ธ์„ ๊ตฌ์ถ•ํ•˜๋ ค ์‹œ๋„ํ•˜์ง€๋งŒ, ๊ธฐ์ˆ ์ด ์ด ๊ฒฉ์ฐจ๋ฅผ ๋ฉ”์›Œ์ค„ ๊ฒƒ์ด๋ผ๋Š” ๊ธฐ๋Œ€๋Š” ๋Œ€๋ถ€๋ถ„ ์ถฉ์กฑ๋˜์ง€ ์•Š๋Š”๋‹ค.

๋งฅ์–ด๋ฒ ๋‹ˆ๋Š” โ€œ์˜๋ฏธ ์žˆ๋Š” AI ์„ฑ๊ณผ๋ฅผ ๊ฐ€๋กœ๋ง‰๋Š” ๊ฐ€์žฅ ํฐ ์žฅ์• ๋ฌผ์€ ๋ฐ์ดํ„ฐ ํ’ˆ์งˆ, ๋ฐ์ดํ„ฐ์˜ ์ผ๊ด€์„ฑ, ๊ทธ๋ฆฌ๊ณ  ๋ฐ์ดํ„ฐ์˜ ๋งฅ๋ฝ์ด๋‹ค. ๋ฐ์ดํ„ฐ๊ฐ€ ์‚ฌ์ผ๋กœ์— ๊ฐ‡ํ˜€ ์žˆ๊ฑฐ๋‚˜ ๊ณตํ†ต๋œ ๊ธฐ์ค€์œผ๋กœ ๊ด€๋ฆฌ๋˜์ง€ ์•Š์œผ๋ฉด, AI๋Š” ๊ทธ ๋ถˆ์ผ์น˜๋ฅผ ๊ทธ๋Œ€๋กœ ๋ฐ˜์˜ํ•ด ์‹ ๋ขฐํ•˜๊ธฐ ์–ด๋ ต๊ฑฐ๋‚˜ ์˜คํ•ด๋ฅผ ๋ถˆ๋Ÿฌ์ผ์œผํ‚ค๋Š” ๊ฒฐ๊ณผ๋ฅผ ๋‚ด๋†“๋Š”๋‹คโ€๋ผ๊ณ  ์ง€์ ํ–ˆ๋‹ค.

์ด ๋ฌธ์ œ๋Š” ๊ฑฐ์˜ ๋ชจ๋“  ์‚ฐ์—…์— ์˜ํ–ฅ์„ ๋ฏธ์นœ๋‹ค. ๊ธฐ์—…์€ ์ƒˆ๋กœ์šด AI ๋„๊ตฌ์— ํˆฌ์ž๋ฅผ ํ™•๋Œ€ํ•˜๊ธฐ์— ์•ž์„œ, ๋” ๊ฐ•๋ ฅํ•œ ๋ฐ์ดํ„ฐ ๊ฑฐ๋ฒ„๋„Œ์Šค๋ฅผ ๋งˆ๋ จํ•˜๊ณ  ํ’ˆ์งˆ ๊ธฐ์ค€์„ ๋ช…ํ™•ํžˆ ์ ์šฉํ•˜๋ฉฐ, ํ•ด๋‹น ์‹œ์Šคํ…œ์„ ๊ตฌ๋™ํ•  ๋ฐ์ดํ„ฐ์˜ ์‹ค์งˆ์ ์ธ ์†Œ์œ  ์ฃผ์ฒด๊ฐ€ ๋ˆ„๊ตฌ์ธ์ง€ ๋ถ„๋ช…ํžˆ ํ•ด์•ผ ํ•œ๋‹ค.

AI๊ฐ€ ์ฃผ๋„๊ถŒ์„ ์ฅ์ง€ ์•Š๋„๋ก ํ•˜๊ธฐ

AI ๋„์ž…์„ ์„œ๋‘๋ฅด๋Š” ๊ณผ์ •์—์„œ ๊ธฐ์—…์€ ํ•ด๊ฒฐํ•ด์•ผ ํ•  ํ•ต์‹ฌ ๋ฌธ์ œ๊ฐ€ ๋ฌด์—‡์ธ์ง€ ๋ฌป๋Š” ๊ทผ๋ณธ์ ์ธ ์งˆ๋ฌธ์„ ๋†“์น˜๊ณค ํ•œ๋‹ค. ์ด ์งˆ๋ฌธ์ด ๋ช…ํ™•ํ•˜์ง€ ์•Š์œผ๋ฉด ์˜๋ฏธ ์žˆ๋Š” ์„ฑ๊ณผ์— ๋„๋‹ฌํ•˜๊ธฐ ์–ด๋ ต๋‹ค.

๋ฐ”์ด์Šคํƒ€ ํฌ๋ ˆ๋””ํŠธ์œ ๋‹ˆ์˜จ(VyStar Credit Union)์˜ CTO ์•„๋ˆ„๋ผ๊ทธ ์ƒค๋ฅด๋งˆ๋Š” AI๊ฐ€ ํŠน์ • ๋น„์ฆˆ๋‹ˆ์Šค ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•ด ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋Š” ๋„๊ตฌ์— ์ง€๋‚˜์ง€ ์•Š๋Š”๋‹ค๊ณ  ๋ณธ๋‹ค. ์ƒค๋ฅด๋งˆ๋Š” ๋ชจ๋“  AI ์ด๋‹ˆ์…”ํ‹ฐ๋ธŒ๊ฐ€ ๋จผ์ € ๋‹ฌ์„ฑํ•˜๋ ค๋Š” ๋น„์ฆˆ๋‹ˆ์Šค ๊ฒฐ๊ณผ๋ฅผ ๋ช…ํ™•ํ•˜๊ณ  ๊ฐ„๊ฒฐํ•˜๊ฒŒ ์ •์˜ํ•˜๋Š” ๋ฐ์„œ ์‹œ์ž‘ํ•ด์•ผ ํ•œ๋‹ค๊ณ  ๊ฐ•์กฐํ–ˆ๋‹ค. ๊ทธ๋Š” ํŒ€์— AI๊ฐ€ ํ•ด๊ฒฐํ•  ์ˆ˜ ์žˆ๋Š” ๋ฌธ์ œ๋ฅผ ๋ถ„๋ฆฌํ•ด ์‹๋ณ„ํ•˜๋„๋ก ๋…๋ คํ•˜๊ณ , ์–ด๋–ค ๋ณ€ํ™”๊ฐ€ ๋ฐœ์ƒํ• ์ง€, ๋ˆ„๊ตฌ์—๊ฒŒ ์˜ํ–ฅ์ด ๋ฏธ์น ์ง€๋ฅผ ๊ฒฝ์˜์ง„์ด ์ถฉ๋ถ„ํžˆ ์ดํ•ดํ•œ ๋’ค ํ”„๋กœ์ ํŠธ๋ฅผ ์ถ”์ง„ํ•ด์•ผ ํ•œ๋‹ค๊ณ  ์กฐ์–ธํ–ˆ๋‹ค.

์ƒค๋ฅด๋งˆ๋Š” โ€œCIO์™€ CTO๋Š” ์ด๋Ÿฐ ์›์น™์„ ๊ณ ์ˆ˜ํ•จ์œผ๋กœ์จ AI ์ด๋‹ˆ์…”ํ‹ฐ๋ธŒ๊ฐ€ ๋ณธ์งˆ์—์„œ ๋ฒ—์–ด๋‚˜์ง€ ์•Š๋„๋ก ํ•  ์ˆ˜ ์žˆ๋‹ค. ํ™”๋ คํ•œ ๊ธฐ์ˆ ๊ณผ ์ „๋žต์  ๋ชฉํ‘œ๋ฅผ ๊ตฌ๋ถ„ํ•  ์ˆ˜ ์žˆ์„ ๋งŒํผ ์ถฉ๋ถ„ํžˆ ๋…ผ์˜๋ฅผ ๋Šฆ์ถฐ ํ˜„์‹ค์ ์ธ ๋ฌธ์ œ๋ฅผ ํ•ด๊ฒฐํ•ด์•ผ ํ•œ๋‹คโ€๋ผ๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค.

๊ธฐ์—…์— AI CoE(Center of Excellence)๋‚˜ ์‹ค์ œ ๊ธฐํšŒ๋ฅผ ๋ฐœ๊ตดํ•˜๋Š” ์ „๋‹ด ํŒ€์ด ์žˆ์„ ๋•Œ ์ด๋Ÿฐ ๊ตฌ๋ถ„์ด ํ›จ์”ฌ ๋ช…ํ™•ํ•ด์ง„๋‹ค. ์ด ํŒ€๋“ค์€ ์•„์ด๋””์–ด๋ฅผ ๊ฒ€ํ† ํ•˜๊ณ  ์šฐ์„ ์ˆœ์œ„๋ฅผ ์„ค์ •ํ•˜๋ฉฐ, ํ”„๋กœ์ ํŠธ๊ฐ€ ์œ ํ–‰์ด ์•„๋‹Œ ์‹ค์งˆ์  ๋น„์ฆˆ๋‹ˆ์Šค ์š”๊ตฌ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์ถ”์ง„๋˜๋„๋ก ๋•๋Š”๋‹ค.

์ด ํŒ€์—๋Š” AI ๋„์ž…์œผ๋กœ ์ง์ ‘ ์˜ํ–ฅ์„ ๋ฐ›๋Š” ์‹ค๋ฌด์ž๋ฟ ์•„๋‹ˆ๋ผ, ๋น„์ฆˆ๋‹ˆ์Šค ๋ฆฌ๋”, ๋ฒ•๋ฌด ๋ฐ ์ปดํ”Œ๋ผ์ด์–ธ์Šค ๋‹ด๋‹น์ž, ๋ณด์•ˆํŒ€๋„ ์ฐธ์—ฌํ•ด์•ผ ํ•œ๋‹ค. ์ด๋“ค์€ ํ˜‘๋ ฅํ•ด AI ํ”„๋กœ์ ํŠธ๊ฐ€ ์ถฉ์กฑํ•ด์•ผ ํ•  ๊ธฐ๋ณธ ์š”๊ตฌ์‚ฌํ•ญ์„ ์ •์˜ํ•  ์ˆ˜ ์žˆ๋‹ค.

๋ณด์•ˆยท๊ฑฐ๋ฒ„๋„Œ์Šค ํ”Œ๋žซํผ ์ œ๋‹ˆํ‹ฐ(Zenity)์˜ AI ๋ณด์•ˆยท์ •์ฑ… ์˜นํ˜ธ ์ฑ…์ž„์ž ์ผ€์ผ๋ผ ์–ธ๋”์ฝ”ํ”Œ๋Ÿฌ๋Š” โ€œ์š”๊ตฌ์‚ฌํ•ญ์ด ์‚ฌ์ „์— ๋ช…ํ™•ํ•ด์ง€๋ฉด, ๊ฒ‰๋ณด๊ธฐ์—๋Š” ๋งค๋ ฅ์ ์ด์ง€๋งŒ ์‹ค์งˆ์  ๋น„์ฆˆ๋‹ˆ์Šค ๊ธฐ๋ฐ˜์ด ๋ถ€์กฑํ•œ AI ํ”„๋กœ์ ํŠธ์— ๋ถˆํ•„์š”ํ•˜๊ฒŒ ๋งค๋‹ฌ๋ฆฌ์ง€ ์•Š์„ ์ˆ˜ ์žˆ๋‹คโ€๋ผ๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค.

์–ธ๋”์ฝ”ํ”Œ๋Ÿฌ๋Š” AI CoE์— ํ˜„์žฌ์˜ AI ๋ฆฌ์Šคํฌ ํ™˜๊ฒฝ์„ ์ถฉ๋ถ„ํžˆ ์ดํ•ดํ•˜๋Š” ๋‹ด๋‹น์ž๊ฐ€ ๋ฐ˜๋“œ์‹œ ํ•„์š”ํ•˜๋‹ค๊ณ  ๋ง๋ถ™์˜€๋‹ค. ๋‹ด๋‹น์ž๋Š” ๊ฐ ์ด๋‹ˆ์…”ํ‹ฐ๋ธŒ๊ฐ€ ์‹ค์ œ๋กœ ์šด์˜๋˜๊ธฐ ์ „์— ํ•ด๊ฒฐํ•ด์•ผ ํ•  ์šฐ๋ ค ์‚ฌํ•ญ์„ ํŒŒ์•…ํ•˜๊ณ , ์ค‘์š”ํ•œ ์งˆ๋ฌธ์— ๋‹ตํ•  ์ค€๋น„๊ฐ€ ๋ผ ์žˆ์–ด์•ผ ํ•œ๋‹ค.

๊ทธ๋Š” โ€œํŒ€์ด ์ธ์ง€ํ•˜์ง€ ๋ชปํ•˜๋Š” ํ—ˆ์ ์ด ๊ณ„ํš์— ์ˆจ์–ด์žˆ์„ ์ˆ˜ ์žˆ๋‹ค. ํŠนํžˆ ๋ณด์•ˆ์€ ํ”„๋กœ์ ํŠธ ์ดˆ๊ธฐ๋ถ€ํ„ฐ ๋ฐ˜๋“œ์‹œ ํฌํ•จ๋ผ์•ผ ํ•œ๋‹ค. ๊ทธ๋ž˜์•ผ ๋ณดํ˜ธ ์žฅ์น˜์™€ ๋ฆฌ์Šคํฌ ํ‰๊ฐ€๊ฐ€ ์‚ฌํ›„์— ๋ง๋ถ™์—ฌ์ง€๋Š” ๊ฒƒ์ด ์•„๋‹ˆ๋ผ, ์ฒ˜์Œ๋ถ€ํ„ฐ ์ œ๋Œ€๋กœ ์ž‘๋™ํ•˜๋„๋ก ์„ค๊ณ„ํ•  ์ˆ˜ ์žˆ๋‹คโ€๋ผ๊ณ  ์กฐ์–ธํ–ˆ๋‹ค.

๋˜ํ•œ AI ๋„์ž…์ด ์‹ค์งˆ์  ๊ฐ€์น˜๋กœ ์ด์–ด์ง€๋ ค๋ฉด, ๋ช…ํ™•ํ•˜๊ณ  ์ธก์ • ๊ฐ€๋Šฅํ•œ ๋น„์ฆˆ๋‹ˆ์Šค ์„ฑ๊ณผ๊ฐ€ ๋ฐ˜๋“œ์‹œ ์ •์˜๋ผ์•ผ ํ•œ๋‹ค. ํด๋ผ์šฐ๋“œ ๊ธฐ๋ฐ˜ ํ’ˆ์งˆ ์—”์ง€๋‹ˆ์–ด๋ง ํ”Œ๋žซํผ ๋žŒ๋‹คํ…Œ์ŠคํŠธ(LambdaTest)์˜ ๋ฐ๋ธŒ์˜ต์Šค ๋ฐ ๋ฐ๋ธŒ์„น์˜ต์Šค ๋ถ€๋ฌธ ๋ถ€์‚ฌ์žฅ ์•„์นด์‹œ ์•„๊ทธ๋ผ์™ˆ์€ โ€œ๋ชจ๋“  ์ œ์•ˆ์„œ๊ฐ€ ์„ฑ๊ณต ์ง€ํ‘œ๋ฅผ ์‚ฌ์ „์— ๋ช…ํ™•ํžˆ ๊ทœ์ •ํ•ด์•ผ ํ•œ๋‹ค. AI๋Š” ํƒ์ƒ‰ํ•˜๋Š” ๊ธฐ์ˆ ์ด ์•„๋‹ˆ๋ผ ์ ์šฉํ•˜๋Š” ๊ธฐ์ˆ โ€์ด๋ผ๊ณ  ์–ธ๊ธ‰ํ–ˆ๋‹ค.

์•„๊ทธ๋ผ์™ˆ์€ ๊ธฐ์—…์ด 30์ผ ๋˜๋Š” 45์ผ ๋‹จ์œ„๋กœ ์ •๊ธฐ ์ ๊ฒ€์„ ์ง„ํ–‰ํ•ด ํ”„๋กœ์ ํŠธ๊ฐ€ ๋น„์ฆˆ๋‹ˆ์Šค ๋ชฉํ‘œ์— ๋ถ€ํ•ฉํ•˜๋Š”์ง€ ๊ณ„์† ํ™•์ธํ•ด์•ผ ํ•œ๋‹ค๊ณ  ์กฐ์–ธํ–ˆ๋‹ค. ๊ทธ๋Š” ๊ธฐ๋Œ€์— ๋ฏธ์น˜์ง€ ๋ชปํ•˜๋Š” ๊ฒฐ๊ณผ๊ฐ€ ๋‚˜์˜จ๋‹ค๋ฉด ๊ณผ๊ฐํ•˜๊ฒŒ ์žฌํ‰๊ฐ€ํ•˜๊ณ  ํ˜„์‹ค์ ์ธ ๊ฒฐ์ •์„ ๋‚ด๋ ค์•ผ ํ•œ๋‹ค๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค. ํ•„์š”ํ•˜๋‹ค๋ฉด ํ”„๋กœ์ ํŠธ๋ฅผ ์ค‘๋‹จํ•˜๋Š” ์„ ํƒ๋„ ๊ณ ๋ คํ•ด์•ผ ํ•œ๋‹ค.

์•„์šธ๋Ÿฌ ๊ธฐ์ˆ ์ด ์œ ๋งํ•ด ๋ณด์ด๋Š” ์ƒํ™ฉ์—์„œ๋„ ์‚ฌ๋žŒ์˜ ์—ญํ• ์€ ๋ฐ˜๋“œ์‹œ ์œ ์ง€๋ผ์•ผ ํ•œ๋‹ค. ์ง€์†๊ฐ€๋Šฅ ํ๊ธฐ๋ฌผ ์†”๋ฃจ์…˜ ๊ธฐ์—… ๋ฆฌ์›”๋“œ(Reworld)์˜ CIO ์Šค๋ฆฌ๋‹ค๋ฅด ์นด๋ž„๋ ˆ๋Š” โ€œ์ดˆ๊ธฐ AI ๊ธฐ๋ฐ˜ ๋ฆฌ๋“œ ๋ถ„๋ฅ˜ ํŒŒ์ผ๋Ÿฟ์—์„œ ์‚ฌ๋žŒ์˜ ๊ฒ€ํ†  ๊ณผ์ •์„ ์ƒ๋žตํ–ˆ๋”๋‹ˆ ๋น„ํšจ์œจ์ ์ธ ๋ถ„๋ฅ˜๊ฐ€ ๋ฐœ์ƒํ–ˆ๋‹ค. ์ฆ‰์‹œ ์‚ฌ๋žŒ์˜ ํ”ผ๋“œ๋ฐฑ์„ ๋‹ค์‹œ ๋ชจ๋ธ์— ๋ฐ˜์˜ํ•˜๋„๋ก ์กฐ์ •ํ–ˆ๊ณ , ์‹œ์Šคํ…œ์€ ๋” ์ •๊ตํ•ด์ง€๊ณ  ์ •ํ™•๋„๊ฐ€ ๋†’์•„์กŒ๋‹คโ€๋ผ๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค.

์‚ฌ๋žŒ์˜ ๊ฒ€์ฆ ์—†์ด ๊ฒฐ์ •์ด ์ด๋ค„์ง€๋ฉด, ๊ธฐ์—…์€ ์ž˜๋ชป๋œ ๊ฐ€์ •์ด๋‚˜ ํŒจํ„ด์— ๊ธฐ๋ฐ˜ํ•ด ์›€์ง์ผ ์œ„ํ—˜์ด ์ปค์ง„๋‹ค. ๋ชฉํ‘œ๋Š” ์‚ฌ๋žŒ์„ ๋Œ€์ฒดํ•˜๋Š” ๊ฒƒ์ด ์•„๋‹ˆ๋ผ, ์‚ฌ๋žŒ๊ณผ ๊ธฐ์ˆ ์ด ์„œ๋กœ๋ฅผ ๊ฐ•ํ™”ํ•˜๋Š” ํŒŒํŠธ๋„ˆ์‹ญ์„ ๊ตฌ์ถ•ํ•˜๋Š” ๋ฐ ์žˆ๋‹ค.

๋ฐ์ดํ„ฐ๋ฅผ ์ „๋žต์  ์ž์‚ฐ์œผ๋กœ ๋‹ค๋ฃจ๊ธฐ

AI๊ฐ€ ์˜๋„ํ•œ ๋Œ€๋กœ ์ž‘๋™ํ•˜๋„๋ก ๋งŒ๋“ค๊ธฐ ์œ„ํ•ด์„œ๋Š” ํšจ๊ณผ์ ์ธ ๋ฐ์ดํ„ฐ ๊ด€๋ฆฌ๊ฐ€ ํ•„์ˆ˜์ ์ด์ง€๋งŒ, ์ด ์ „์ œ ์กฐ๊ฑด์€ ์ข…์ข… ๊ฐ„๊ณผ๋œ๋‹ค. ์˜ฌ๋ฐ”๋ฅธ ๊ธฐ๋ฐ˜์„ ๋งˆ๋ จํ•œ๋‹ค๋Š” ๊ฒƒ์€ ๋ฐ์ดํ„ฐ๋ฅผ ์ „๋žต์  ์ž์‚ฐ์œผ๋กœ ์ทจ๊ธ‰ํ•ด์•ผ ํ•œ๋‹ค๋Š” ์˜๋ฏธ๋‹ค. ์ฆ‰ ๋ฐ์ดํ„ฐ๋ฅผ ์ฒด๊ณ„ํ™”ํ•˜๊ณ  ์ •์ œํ•˜๋ฉฐ, ์‹œ๊ฐ„์ด ์ง€๋‚˜๋„ ์‹ ๋ขฐ์„ฑ์„ ์œ ์ง€ํ•  ์ˆ˜ ์žˆ๋„๋ก ์ ์ ˆํ•œ ์ •์ฑ…์„ ๊ฐ–์ถฐ์•ผ ํ•œ๋‹ค.

์•ฐ๋„ค์Šคํ‹ฐ ์ธํ„ฐ๋‚ด์…”๋„(Amnesty International)์˜ CIO ํด ์Šค๋ฏธ์Šค๋Š” โ€œCIO๋Š” ๋ฐ์ดํ„ฐ ํ’ˆ์งˆ, ๋ฐ์ดํ„ฐ์˜ ๋ฌด๊ฒฐ์„ฑ, ๋ฐ์ดํ„ฐ์˜ ์ ํ•ฉ์„ฑ์— ์ง‘์ค‘ํ•ด์•ผ ํ•œ๋‹คโ€๋ผ๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค. ์ด ๋‹จ์ฒด๋Š” ๋งค์ผ ๋น„์ •ํ˜• ๋ฐ์ดํ„ฐ๋ฅผ ๋‹ค๋ฃจ๋Š”๋ฐ, ๋Œ€๋ถ€๋ถ„ ์™ธ๋ถ€์—์„œ ์œ ์ž…๋˜๋ฉฐ ํ’ˆ์งˆ ํŽธ์ฐจ๊ฐ€ ํฌ๋‹ค. ์‚ฌ๋‚ด ๋ถ„์„๊ฐ€๋“ค์€ ์„œ๋กœ ๋‹ค๋ฅธ ํ˜•์‹๊ณผ ์กฐ๊ฑด์—์„œ ์ƒ์„ฑ๋œ ๋ฌธ์„œ, ์˜์ƒ, ์ด๋ฏธ์ง€, ๋ณด๊ณ ์„œ๋ฅผ ๋Š์ž„์—†์ด ๊ฒ€ํ† ํ•œ๋‹ค. ๊ทธ๋Š” ๋ฐฉ๋Œ€ํ•˜๊ณ  ๋‚œํ•ดํ•˜๋ฉฐ ๋•Œ๋กœ๋Š” ๋ถˆ์™„์ „ํ•œ ์ •๋ณด๋ฅผ ๋‹ค๋ค„์˜จ ๊ฒฝํ—˜์„ ํ†ตํ•ด ์ฒ ์ €ํ•œ ๋ฐ์ดํ„ฐ ๊ด€๋ฆฌ์˜ ์ค‘์š”์„ฑ์„ ์ธ์‹ํ•˜๊ฒŒ ๋๋‹ค.

์Šค๋ฏธ์Šค๋Š” โ€œ๊ฒฐ๊ตญ ๋น„์ •ํ˜• ๋ฐ์ดํ„ฐ๋ž€ ์กด์žฌํ•˜์ง€ ์•Š๋Š”๋‹ค. ์•„์ง ๊ตฌ์กฐ๊ฐ€ ์ ์šฉ๋˜์ง€ ์•Š์€ ๋ฐ์ดํ„ฐ๊ฐ€ ์žˆ์„ ๋ฟ์ด๋‹คโ€๋ผ๊ณ  ๋งํ–ˆ๋‹ค. ๊ทธ๋Š” ๋˜ํ•œ ๊ธฐ์—…์ด ๊ฐ•๋ ฅํ•œ ๋ฐ์ดํ„ฐ ๊ฑฐ๋ฒ„๋„Œ์Šค ์›์น™์„ ์ผ์ƒ์ ์œผ๋กœ ์‹ค์ฒœํ•ด์•ผ ํ•œ๋‹ค๊ณ  ๊ฐ•์กฐํ–ˆ๋‹ค. ๋‹ค์‹œ ๋งํ•ด ๊ธฐ์—…์€ ๋ฐ์ดํ„ฐ์˜ ์ ํ•ฉ์„ฑ์„ ์ ๊ฒ€ํ•˜๊ณ , ์™„์ „์„ฑยท์ •ํ™•์„ฑยท์ผ๊ด€์„ฑ์„ ํ™•๋ณดํ•˜๋ฉฐ, ์˜ค๋ž˜๋œ ์ •๋ณด๊ฐ€ ๊ฒฐ๊ณผ๋ฅผ ์™œ๊ณกํ•  ์ˆ˜ ์žˆ๋‹ค๋Š” ์‚ฌ์‹ค์„ ์œ ๋…ํ•ด์•ผ ํ•œ๋‹ค.

์Šค๋ฏธ์Šค๋Š” ๋ฐ์ดํ„ฐ ๊ณ„๋ณด ๊ฒ€์ฆ์˜ ์ค‘์š”์„ฑ๋„ ๊ฐ•์กฐํ–ˆ๋‹ค. ์—ฌ๊ธฐ์—๋Š” ๋ฐ์ดํ„ฐ์˜ ์ถœ์ฒ˜๋ฅผ ๋ช…ํ™•ํžˆ ํŒŒ์•…ํ•˜๊ณ , ๊ทธ ํ™œ์šฉ์ด ๋ฒ•์ ยท์œค๋ฆฌ์  ๊ธฐ์ค€์— ๋ถ€ํ•ฉํ•˜๋Š”์ง€ ํ™•์ธํ•˜๋Š” ๊ณผ์ •์ด ํฌํ•จ๋œ๋‹ค. ๋˜ํ•œ ๋ฐ์ดํ„ฐ๊ฐ€ ์–ด๋–ป๊ฒŒ ์ˆ˜์ง‘๋˜๊ฑฐ๋‚˜ ๋ณ€ํ™˜๋๋Š”์ง€๋ฅผ ์„ค๋ช…ํ•˜๋Š” ๋ฌธ์„œ๋ฅผ ๊ฒ€ํ† ํ•˜๋Š” ๊ณผ์ •๋„ ์š”๊ตฌ๋œ๋‹ค.

๋งŽ์€ ๊ธฐ์—…์—์„œ ๋ถ„์‚ฐ๋œ ๋ฐ์ดํ„ฐ๋Š” ๊ธฐ์กด ์‹œ์Šคํ…œ์ด๋‚˜ ์ˆ˜์ž‘์—… ์ž…๋ ฅ์—์„œ ๋ฐœ์ƒํ•œ๋‹ค. ๋žŒ๋‹คํ…Œ์ŠคํŠธ์˜ ์•„๊ทธ๋ผ์™ˆ์€ โ€œ์šฐ๋ฆฌ๋Š” ์Šคํ‚ค๋งˆ ํ‘œ์ค€ํ™”, ๋ฐ์ดํ„ฐ ๊ณ„์•ฝ ์ค€์ˆ˜, ์ˆ˜์ง‘ ๋‹จ๊ณ„ ์ž๋™ ํ’ˆ์งˆ ์ ๊ฒ€, ์—”์ง€๋‹ˆ์–ด๋ง ์ „๋ฐ˜์˜ ๊ด€์ฐฐ ๊ฐ€๋Šฅ์„ฑ ํ†ตํ•ฉ์„ ํ†ตํ•ด ๋ฐ์ดํ„ฐ์˜ ์‹ ๋ขฐ์„ฑ์„ ๊ฐ•ํ™”ํ•˜๊ณ  ์žˆ๋‹คโ€๋ผ๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค.

๋ฐ์ดํ„ฐ์— ๋Œ€ํ•œ ์‹ ๋ขฐ๊ฐ€ ํ™•๋ฆฝ๋˜๋ฉด AI ๊ฒฐ๊ณผ ์—ญ์‹œ ๊ฐœ์„ ๋œ๋‹ค. ์ƒค๋ฅด๋งˆ๋Š” โ€œ๋ฐ์ดํ„ฐ๊ฐ€ ์–ด๋””์„œ ์™”๋Š”์ง€, ์–ผ๋งˆ๋‚˜ ์‹ ๋ขฐํ•  ์ˆ˜ ์žˆ๋Š”์ง€ ๋ช…ํ™•ํžˆ ๋‹ตํ•˜์ง€ ๋ชปํ•œ๋‹ค๋ฉด AI ๋„์ž… ์ค€๋น„๊ฐ€ ๋˜์ง€ ์•Š์€ ๊ฒƒโ€์ด๋ผ๋ฉฐ โ€œํŠนํžˆ ์‹ ๋ขฐ๊ฐ€ ํ•ต์‹ฌ ๊ฐ€์น˜์ธ ๊ธˆ์œต ์‚ฐ์—…์—์„œ๋Š” ์ž˜๋ชป๋œ ์ธ์‚ฌ์ดํŠธ๋ฅผ ์ซ“๋Š” ๊ฒƒ๋ณด๋‹ค ์ดˆ๊ธฐ ๋‹จ๊ณ„์—์„œ ์†๋„๋ฅผ ๋Šฆ์ถ”๋Š” ๊ฒƒ์ด ๋” ๋‚ซ๋‹คโ€๋ผ๊ณ  ๋งํ–ˆ๋‹ค.

์นด๋ž„๋ ˆ๋Š” ๋ฆฌ์›”๋“œ์—์„œ โ€˜๋‹จ์ผ ์ง„์‹ค ๊ณต๊ธ‰์›(single source of truth)โ€™ ์—ญํ• ์„ ํ•˜๋Š” ๋ฐ์ดํ„ฐ ํŒจ๋ธŒ๋ฆญ์„ ๊ตฌ์ถ•ํ•˜๊ณ , ๊ฐ ๋„๋ฉ”์ธ์— ๋ฐ์ดํ„ฐ ๊ด€๋ฆฌ์ž๋ฅผ ๋ฐฐ์ •ํ–ˆ๋‹ค๊ณ  ์–ธ๊ธ‰ํ–ˆ๋‹ค. ๋˜ํ•œ ์ •์˜์™€ ์ ‘๊ทผ ์ •์ฑ…์„ ์‰ฝ๊ฒŒ ๊ฒ€์ƒ‰ํ•  ์ˆ˜ ์žˆ๋Š” โ€˜์‹ค์‹œ๊ฐ„ ์—…๋ฐ์ดํŠธ ๋ฐ์ดํ„ฐ ์‚ฌ์ „(living data dictionary)โ€™๋„ ์šด์˜ํ•˜๊ณ  ์žˆ๋‹ค. ์นด๋ž„๋ ˆ๋Š” โ€œ๊ฐ ํ•ญ๋ชฉ์—๋Š” ๊ณ„๋ณด์™€ ์†Œ์œ ๊ถŒ ์ •๋ณด๊ฐ€ ํฌํ•จ๋ผ ์žˆ์–ด ๋ชจ๋“  ํŒ€์ด ๋ˆ„๊ฐ€ ์ฑ…์ž„์ž์ธ์ง€ ๋ช…ํ™•ํžˆ ์•Œ ์ˆ˜ ์žˆ๊ณ , ํ™œ์šฉํ•˜๋Š” ๋ฐ์ดํ„ฐ๋ฅผ ์‹ ๋ขฐํ•  ์ˆ˜ ์žˆ๋‹คโ€๋ผ๊ณ  ๋งํ–ˆ๋‹ค.

์ตœ์šฐ์„  ๊ณผ์ œ๋Š” โ€˜์ž๊ฐ€ ์ ๊ฒ€โ€™

AI๋Š” ๋ฐ์ดํ„ฐ์—์„œ ๋ฐœ๊ฒฌํ•œ ํŒจํ„ด์„ ์ฆํญํ•˜๋Š” ํŠน์„ฑ์ด ์žˆ๋‹ค. ์œ ์šฉํ•œ ํŒจํ„ด๋ฟ ์•„๋‹ˆ๋ผ ์˜ค๋ž˜๋œ ํŽธํ–ฅ ์—ญ์‹œ ๊ฐ•ํ™”๋  ์ˆ˜ ์žˆ๋‹ค. ์ด๋Ÿฐ ํ•จ์ •์„ ํ”ผํ•˜๋ ค๋ฉด ํŽธํ–ฅ์ด ๊ตฌ์กฐ์  ๋ฌธ์ œ์—์„œ ๋น„๋กฏ๋œ๋‹ค๋Š” ์‚ฌ์‹ค์„ ๋จผ์ € ์ธ์‹ํ•ด์•ผ ํ•œ๋‹ค.

๋ฌธ์ œ๊ฐ€ ๋ฟŒ๋ฆฌ๋‚ด๋ฆฌ๋Š” ๊ฒƒ์„ ๋ง‰๊ธฐ ์œ„ํ•ด CIO๊ฐ€ ์ทจํ•  ์ˆ˜ ์žˆ๋Š” ์กฐ์น˜๋„ ์žˆ๋‹ค. ์–ธ๋”์ฝ”ํ”Œ๋Ÿฌ๋Š” โ€œํ›ˆ๋ จ์ด๋‚˜ ํŒŒ์ผ๋Ÿฟ ๋‹จ๊ณ„์—์„œ ์‚ฌ์šฉ๋˜๋Š” ๋ชจ๋“  ๋ฐ์ดํ„ฐ๋ฅผ ๋ฉด๋ฐ€ํžˆ ๊ฒ€์ฆํ•˜๊ณ , AI๊ฐ€ ์‹ค์ œ ์—…๋ฌด ํ๋ฆ„์— ํˆฌ์ž…๋˜๊ธฐ ์ „์— ๊ธฐ๋ณธ์ ์ธ ํ†ต์ œ ์žฅ์น˜๊ฐ€ ๋งˆ๋ จ๋ผ ์žˆ๋Š”์ง€ ํ™•์ธํ•ด์•ผ ํ•œ๋‹คโ€๋ผ๊ณ  ์กฐ์–ธํ–ˆ๋‹ค.

๋˜ํ•œ ์—์ด์ „ํ‹ฑ AI๊ฐ€ ๊ธฐ์กด์˜ ๋ฆฌ์Šคํฌ ๋ชจ๋ธ์„ ์–ด๋–ป๊ฒŒ ๋ณ€ํ™”์‹œํ‚ค๋Š”์ง€ ์„ธ๋ฐ€ํ•˜๊ฒŒ ์ดํ•ดํ•˜๋Š” ๊ฒƒ๋„ ์ค‘์š”ํ•˜๋‹ค. ์–ธ๋”์ฝ”ํ”Œ๋Ÿฌ๋Š” โ€œ์ด๋Ÿฐ ์‹œ์Šคํ…œ์€ ์ƒˆ๋กœ์šด ํ˜•ํƒœ์˜ ์ž์œจ์„ฑ, ์˜์กด์„ฑ, ์ƒํ˜ธ์ž‘์šฉ์„ ๋„์ž…ํ•œ๋‹ค. ๋”ฐ๋ผ์„œ ํ†ต์ œ ์ฒด๊ณ„๋„ ์ด์— ๋งž์ถฐ ๋ฐœ์ „ํ•ด์•ผ ํ•œ๋‹คโ€๋ผ๊ณ  ๋งํ–ˆ๋‹ค.

์–ธ๋”์ฝ”ํ”Œ๋Ÿฌ๋Š” ๊ฐ•๋ ฅํ•œ ๊ฑฐ๋ฒ„๋„Œ์Šค ํ”„๋ ˆ์ž„์›Œํฌ๊ฐ€ ๊ธฐ์—…์˜ ๋ชจ๋‹ˆํ„ฐ๋ง, ๋ฆฌ์Šคํฌ ๊ด€๋ฆฌ, ๋ณดํ˜ธ์žฅ์น˜ ์„ค์ •์„ ์ฒด๊ณ„ํ™”ํ•˜๋Š” ์—ญํ• ์„ ํ•œ๋‹ค๊ณ  ์ง„๋‹จํ–ˆ๋‹ค. ์ด ํ”„๋ ˆ์ž„์›Œํฌ๋Š” ๋ˆ„๊ฐ€ AI ์‹œ์Šคํ…œ์„ ๊ฐ๋…ํ•˜๋Š”์ง€, ์˜์‚ฌ๊ฒฐ์ •์€ ์–ด๋–ป๊ฒŒ ๊ธฐ๋ก๋˜๋Š”์ง€, ์–ธ์ œ ์‚ฌ๋žŒ์ด ํŒ๋‹จํ•ด์•ผ ํ•˜๋Š”์ง€๋ฅผ ์ •์˜ํ•œ๋‹ค. ๊ธฐ์ˆ ์ด ์ •์ฑ…๋ณด๋‹ค ๋” ๋น ๋ฅด๊ฒŒ ์ง„ํ™”ํ•˜๋Š” ํ™˜๊ฒฝ์—์„œ ์ด ๊ตฌ์กฐ๋Š” ํŠนํžˆ ์ค‘์š”ํ•˜๋‹ค.

์นด๋ž„๋ ˆ๋Š” AI๋ฅผ ๊ฐ๋…ํ•˜๋Š” ๊ณผ์ •์—์„œ ๊ณต์ •์„ฑ์„ ์ธก์ •ํ•˜๋Š” ์ง€ํ‘œ๊ฐ€ ์ค‘์š”ํ•œ ์—ญํ• ์„ ํ•œ๋‹ค๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค. ์ด๋Ÿฐ ์ง€ํ‘œ๋Š” AI ์‹œ์Šคํ…œ์ด ์„œ๋กœ ๋‹ค๋ฅธ ์ง‘๋‹จ์„ ๊ณต์ •ํ•˜๊ฒŒ ๋Œ€์šฐํ•˜๋Š”์ง€, ํ˜น์€ ํŠน์ • ์ง‘๋‹จ์„ ์˜๋„์น˜ ์•Š๊ฒŒ ์šฐ๋Œ€ํ•˜๊ฑฐ๋‚˜ ๋ถˆ๋ฆฌํ•˜๊ฒŒ ๋งŒ๋“ค๊ณ  ์žˆ๋Š”์ง€๋ฅผ ํŒŒ์•…ํ•˜๋Š” ๋ฐ ๋„์›€์„ ์ค€๋‹ค. ๊ทธ๋Š” ๊ณต์ •์„ฑ ์ง€ํ‘œ๋ฅผ ๋ชจ๋ธ ๊ฒ€์ฆ ํŒŒ์ดํ”„๋ผ์ธ์— ํฌํ•จ์‹œํ‚ฌ ์ˆ˜ ์žˆ๋‹ค๊ณ  ๋ง๋ถ™์˜€๋‹ค.

๋„๋ฉ”์ธ ์ „๋ฌธ๊ฐ€ ์—ญ์‹œ ํŽธํ–ฅ๋˜๊ฑฐ๋‚˜ ์—‰๋šฑํ•œ ๊ฒฐ๊ณผ๋ฅผ ๋‚ด๋Š” ๋ชจ๋ธ์„ ์‹๋ณ„ํ•˜๊ณ  ์žฌํ›ˆ๋ จํ•˜๋Š” ๋ฐ ํ•ต์‹ฌ์ ์ธ ์—ญํ• ์„ ํ•  ์ˆ˜ ์žˆ๋‹ค. ์ด๋“ค์€ ๋ฐ์ดํ„ฐ์˜ ๋งฅ๋ฝ์„ ๋ˆ„๊ตฌ๋ณด๋‹ค ์ž˜ ์ดํ•ดํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ด์ƒ ์ง•ํ›„๋ฅผ ๊ฐ€์žฅ ๋จผ์ € ๋ฐœ๊ฒฌํ•˜๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ๋‹ค. ์นด๋ž„๋ ˆ๋Š” โ€œ์ง€์†์  ํ•™์Šต์€ ๊ธฐ๊ณ„์—๊ฒŒ๋„ ์‚ฌ๋žŒ์—๊ฒŒ๋„ ๋˜‘๊ฐ™์ด ์ค‘์š”ํ•˜๋‹คโ€๋ผ๊ณ  ๊ฐ•์กฐํ–ˆ๋‹ค.

์•ฐ๋„ค์Šคํ‹ฐ์˜ ์Šค๋ฏธ์Šค ์—ญ์‹œ ๊ฐ™์€ ๊ฒฌํ•ด๋ฅผ ๋ฐํžˆ๋ฉฐ, ์ž ์žฌ์  ํŽธํ–ฅ์„ ์‹๋ณ„ํ•  ์ˆ˜ ์žˆ๋„๋ก ์ง์›๋“ค์„ ์ง€์†์ ์œผ๋กœ ๊ต์œกํ•ด์•ผ ํ•œ๋‹ค๊ณ  ๋งํ–ˆ๋‹ค. ๊ทธ๋Š” โ€œ๋ฆฌ์Šคํฌ์™€ ์ž ์žฌ์  ํ”ผํ•ด์— ๋Œ€ํ•œ ์ธ์‹์„ ๋†’์—ฌ์•ผ ํ•œ๋‹ค. ์œ„ํ—˜์„ ๊ฐ€์žฅ ๋จผ์ € ์ฐจ๋‹จํ•˜๋Š” ๋ฐฉ์–ด์„ ์€ ๊ฒฐ๊ตญ ์‚ฌ๋žŒโ€์ด๋ผ๊ณ  ์ง„๋‹จํ–ˆ๋‹ค.
dl-ciokorea@foundryco.com

์นผ๋Ÿผ | AI๋Š” ์ƒˆ๋กœ์šด ํด๋ผ์šฐ๋“œยทยทยทํ”Œ๋žซํผ ํ˜์‹ ์ด ๋งํ•ด์ฃผ๋Š” ๊ธฐ์ˆ  ํ˜์‹ ์˜ ๋ณธ์งˆ

9 December 2025 at 02:30

AI๋Š” ํด๋ผ์šฐ๋“œ ์ปดํ“จํŒ…์ด ๋“ฑ์žฅํ•œ ์ดํ›„ ๊ฐ€์žฅ ๊ฐ•๋ ฅํ•œ ๊ธฐ์ˆ ์  ์ „ํ™˜์ ์œผ๋กœ ํ‰๊ฐ€๋œ๋‹ค. 20๋…„ ์ „ ํด๋ผ์šฐ๋“œ ํ”Œ๋žซํผ์€ ๊ธฐ์—…์ด ์ธํ”„๋ผ๋ฅผ ๋ฐ”๋ผ๋ณด๋Š” ๋ฐฉ์‹์„ ๊ทผ๋ณธ์ ์œผ๋กœ ๋ฐ”๊ฟ” ๋†“์•˜๋‹ค. ์ง€๊ธˆ ์ด ๊ธ€์„ ์ฝ๊ณ  ์žˆ๋Š” ๋ฐ”๋กœ ์ด ์ˆœ๊ฐ„, AI ํ”Œ๋žซํผ์€ ๊ธฐ์—…์ด โ€˜์ง€๋Šฅโ€™์„ ์ธ์‹ํ•˜๋Š” ๋ฐฉ์‹์„ ๋‹ค์‹œ ์“ฐ๊ณ  ์žˆ๋‹ค.

๋‘ ๊ธฐ์ˆ ์˜ ํ๋ฆ„์€ ์งš์–ด๋ณผ ๊ฐ€์น˜๊ฐ€ ์ถฉ๋ถ„ํ•˜๋‹ค. 2000๋…„๋Œ€ ์ดˆ CIO๋“ค์€ ์ž์ฒด ๋ฐ์ดํ„ฐ์„ผํ„ฐ๋ฅผ ๊ตฌ์ถ•ํ• ์ง€, AWS ๊ฐ™์€ ๊ณต์œ  ํ”Œ๋žซํผ์„ ์‹ ๋ขฐํ• ์ง€๋ฅผ ๋‘๊ณ  ๋…ผ์Ÿ์„ ๋ฒŒ์˜€๋‹ค. 20๋…„์ด ์ง€๋‚œ ์ง€๊ธˆ, ๋น„์Šทํ•œ ์งˆ๋ฌธ์ด ๋‹ค์‹œ ๋“ฑ์žฅํ–ˆ๋‹ค. ์ž์ฒด ๋Œ€ํ˜•์–ธ์–ด๋ชจ๋ธ์„ ๊ตฌ์ถ•ํ•ด์•ผ ํ• ๊นŒ, ์•„๋‹ˆ๋ฉด ์ด๋ฏธ ์กด์žฌํ•˜๋Š” ๋ชจ๋ธ ์œ„์—์„œ ๊ตฌ์ถ•ํ•ด์•ผ ํ• ๊นŒ๋ผ๋Š” ๊ณ ๋ฏผ์ด๋‹ค.

ํ•„์ž๋Š” ํด๋ผ์šฐ๋“œ ์‹œ๋Œ€๊ฐ€ ๋‚จ๊ธด ๊ตํ›ˆ์ด ์—ฌ์ „ํžˆ ์œ ํšจํ•˜๋‹ค๊ณ  ๋ณธ๋‹ค. ๊ฒฝ์Ÿ๋ ฅ์€ ์ธํ”„๋ผ๋ฅผ ์†Œ์œ ํ•˜๋Š” ๋ฐ์„œ ๋‚˜์˜ค์ง€ ์•Š๋Š”๋‹ค. ์ด๋ฏธ ์กด์žฌํ•˜๋Š” ํ”Œ๋žซํผ์„ ํ™œ์šฉํ•˜๊ณ , ๊ทธ ์œ„์—์„œ ํ˜์‹ ์„ ์Œ“์•„ ์˜ฌ๋ฆฌ๋Š” ๋ฐ์„œ ๋‚˜์˜จ๋‹ค. ๊ทธ ์ด์œ ๋ฅผ ์‚ดํŽด๋ณผ ํ•„์š”๊ฐ€ ์žˆ๋‹ค.

ํด๋ผ์šฐ๋“œ๊ฐ€ ๋‚จ๊ธด ๊ตํ›ˆ: ์ƒˆ๋กœ ๋งŒ๋“ค์ง€ ๋ง๊ณ  ํ™œ์šฉํ•˜๋ผ

์ดˆ๊ธฐ ํด๋ผ์šฐ๋“œ ์„œ๋น„์Šค๊ฐ€ ๋“ฑ์žฅํ–ˆ์„ ๋•Œ ๊ฐ€์žฅ ๋„๋ฆฌ ์ฃผ๋ชฉ๋ฐ›์€ ๊ฐ€์น˜๋Š” ์†๋„์˜€๋‹ค. ๊ฐœ๋ฐœ์ž๋Š” ๋ช‡ ๋‹ฌ์ด ์•„๋‹ˆ๋ผ ๋ช‡ ๋ถ„ ๋งŒ์— ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜์„ ๊ตฌ๋™ํ•  ์ˆ˜ ์žˆ์—ˆ๋‹ค.

ํ•˜์ง€๋งŒ ์†๋„๋งŒ์ด ์ „๋ถ€๋Š” ์•„๋‹ˆ์—ˆ๋‹ค. ํด๋ผ์šฐ๋“œ์˜ ์ง„์งœ ํ˜์‹ ์€ ์ „๋žต์  ์ฐจ์›์—์„œ ์ผ์–ด๋‚ฌ๋‹ค. ์ธํ”„๋ผ ๊ด€๋ฆฌ๋ฅผ ๋‚ด๋ ค๋†“์ž ๊ธฐ์—…์€ ๊ฒฝํ—˜๊ณผ ์„œ๋น„์Šค ํ˜์‹ ์— ๋” ๋งŽ์€ ์—ญ๋Ÿ‰์„ ํˆฌ์ž…ํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋๋‹ค.

๋ฐ˜๋ฉด ์ผ๋ถ€ ๊ธฐ์—…์€ โ€˜ํ•˜์ดํผ์Šค์ผ€์ผ๋Ÿฌโ€™๋ฅผ ๋”ฐ๋ผ ํ•˜๊ฒ ๋‹ค๋ฉฐ ์ž์ฒด ํด๋ผ์šฐ๋“œ๋ฅผ ์ฒ˜์Œ๋ถ€ํ„ฐ ๊ตฌ์ถ•ํ•˜๋ ค ํ–ˆ์ง€๋งŒ, ํ”Œ๋žซํผ ์ง„ํ™” ์†๋„๋ฅผ ๋”ฐ๋ผ์žก๊ธฐ๊ฐ€ ์–ผ๋งˆ๋‚˜ ์–ด๋ ค์šด์ง€ ๊ณง ๊นจ๋‹ซ๊ฒŒ ๋๋‹ค. ๋น„์šฉ์€ ๋์—†์ด ์ƒ์Šนํ–ˆ๊ณ  ๊ฐœ๋ฐœ ์†๋„๋Š” ์˜คํžˆ๋ ค ๋–จ์–ด์กŒ๋‹ค. ๋ฐ˜๋Œ€๋กœ ๊ณต์šฉ ํ”Œ๋žซํผ์„ ๊ธฐ๋ฐ˜์œผ๋กœ ์‚ผ๋Š” โ€˜ํ™œ์šฉ ๋ชจ๋ธโ€™์„ ์ˆ˜์šฉํ•œ ๊ธฐ์—…์€ ๋” ๋น ๋ฅด๊ฒŒ ์›€์ง์˜€๊ณ  ๋น„์šฉ๋„ ์ ๊ฒŒ ๋“ค์—ˆ๋‹ค.

AI๋„ ์ง€๊ธˆ ๋˜‘๊ฐ™์€ ๊ฐˆ๋ฆผ๊ธธ์— ์„œ ์žˆ๋‹ค. ๋ชจ๋“  ๊ฒƒ์„ ์ฒ˜์Œ๋ถ€ํ„ฐ ์ง์ ‘ ๋งŒ๋“ค๋ ค๋Š” ๋ณธ๋Šฅ์€ ์ต์ˆ™ํ•œ ํ๋ฆ„์ด์ง€๋งŒ, ํด๋ผ์šฐ๋“œ ์‹œ๋Œ€์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ์ตœ์„ ์˜ ์„ ํƒ์ด๋ผ๊ณ  ๋ณด๊ธฐ๋Š” ์–ด๋ ต๋‹ค. ๋Œ€ํ˜•์–ธ์–ด๋ชจ๋ธ์€ ํด๋ผ์šฐ๋“œ ์‹œ๋Œ€์˜ ์ปดํ“จํŒ…ยท์Šคํ† ๋ฆฌ์ง€์™€ ๊ฐ™์€ ์ƒˆ๋กœ์šด ๋””์ง€ํ„ธ ์ธํ”„๋ผ ๊ณ„์ธต์ด ๋๋‹ค. ๊ฐ•๋ ฅํ•˜๊ณ  ํ™•์žฅ ๊ฐ€๋Šฅํ•˜๋ฉฐ, ์ง‘๋‹จ์  ์‚ฌ์šฉ์„ ํ†ตํ•ด ์ง€์†์ ์œผ๋กœ ๊ณ ๋„ํ™”๋˜๋Š” ์ผ์ข…์˜ ๊ธฐ๋ฐ˜ ์œ ํ‹ธ๋ฆฌํ‹ฐ๋‹ค.

์ด์ œ ์ธํ”„๋ผ ์ž์ฒด๋ฅผ ์†Œ์œ ํ•˜๋Š” ๊ฒƒ์€ ๋” ์ด์ƒ ์ฐจ๋ณ„ํ™” ์š”์†Œ๊ฐ€ ๋˜์ง€ ์•Š๋Š”๋‹ค. ์‚ฌ์‹ค ๊ณผ๊ฑฐ์—๋„ ๊ทธ๋žฌ๋‹ค. ๊ธฐ์ˆ  ๋ฆฌ๋”์—๊ฒŒ ํ•„์š”ํ•œ ์งˆ๋ฌธ์€ โ€œ์šฐ๋ฆฌ๊ฐ€ ์ง์ ‘ ๋ชจ๋ธ์„ ๋งŒ๋“ค ์ˆ˜ ์žˆ๋Š”๊ฐ€?โ€๊ฐ€ ์•„๋‹ˆ๋‹ค. โ€œ๊ธฐ์กด ๋ชจ๋ธ ์œ„์—์„œ ์šฐ๋ฆฌ๊ฐ€ ์ œ๊ณตํ•  ์ˆ˜ ์žˆ๋Š” ๊ณ ์œ ํ•œ ๊ฐ€์น˜๋Š” ๋ฌด์—‡์ธ๊ฐ€?โ€๋ผ๋Š” ์ ์ด๋‹ค.

๊ฐœ๋ฐฉํ˜• ์ƒํƒœ๊ณ„์˜ ํž˜

ํด๋ผ์šฐ๋“œ์˜ ๋ถ€์ƒ์€ ํŠน์ • ์ œํ’ˆ์ด ์•„๋‹Œ ์ƒํƒœ๊ณ„ ๊ตฌ์ถ•์—์„œ ์‹œ์ž‘๋๋‹ค. ํ•„์ž๊ฐ€ AWS์—์„œ ์ผํ•  ๋‹น์‹œ ์ง์ ‘ ํ™•์ธํ•œ ๊ฐ€์žฅ ํฐ ํ˜์‹ ์€, ๋ˆ„๊ตฌ๋‚˜ ๊ทธ ์œ„์—์„œ ์ƒˆ๋กœ์šด ๊ฒƒ์„ ๋งŒ๋“ค ์ˆ˜ ์žˆ๋„๋ก ์„ค๊ณ„๋œ ์•„ํ‚คํ…์ฒ˜์˜€๋‹ค. ๋ชจ๋“  API ํ˜ธ์ถœ์ด ๋˜ ๋‹ค๋ฅธ ์„œ๋น„์Šค์˜ ๊ตฌ์„ฑ ์š”์†Œ๊ฐ€ ๋  ์ˆ˜ ์žˆ์—ˆ๋‹ค.

AI ํ”Œ๋žซํผ๋„ ๊ฐ™์€ ํ๋ฆ„์„ ๋”ฐ๋ฅด๊ณ  ์žˆ๋‹ค. ์˜คํ”ˆAI, ์•คํŠธ๋กœํ”ฝ ๋“ฑ์€ ๊ฐœ๋ฐฉํ˜• ์ธํ„ฐํŽ˜์ด์Šค์™€ SDK๋ฅผ ์ œ๊ณตํ•ด โ€˜์ง€๋Šฅโ€™์„ ์ ‘๊ทผ ๊ฐ€๋Šฅํ•œ ์„œ๋น„์Šค ํ˜•ํƒœ๋กœ ๋ฐ”๊พธ๊ณ  ์žˆ๋‹ค. ์ด ๊ฐœ๋ฐฉ์„ฑ์€ ๊ฐœ๋ฐœ์ž, ๋ฐ์ดํ„ฐ ๊ณผํ•™์ž, ๋น„์ฆˆ๋‹ˆ์Šค ๋ถ„์„๊ฐ€ ๋“ฑ ๋ˆ„๊ตฌ๋‚˜ ํ˜์‹  ๊ณผ์ •์— ๊ธฐ์—ฌํ•  ์ˆ˜ ์žˆ๋Š” ์ƒํƒœ๊ณ„๋ฅผ ๋งŒ๋“ค์–ด๋‚ด๋ฉฐ, ์ถ•์ ๋œ ํ˜์‹ ์„ ๊ฐ€์†ํ•œ๋‹ค.

์˜คํ”ˆ ์ƒํƒœ๊ณ„์™€ ๋ณด์กฐ๋ฅผ ๋งž์ถ”๋Š” ๊ธฐ์—…์€ ๊ณต์œ ๋œ ์ง„์ „์˜ ํ˜œํƒ์„ ๊ณ ์Šค๋ž€ํžˆ ๋ˆ„๋ฆฐ๋‹ค. ์ „์ฒด ๊ธฐ์ˆ  ์Šคํƒ์„ ์†Œ์œ ํ•˜์ง€ ์•Š๋”๋ผ๋„ ์ž์œ ๋กญ๊ฒŒ ์‹คํ—˜ํ•  ์ˆ˜ ์žˆ๊ณ , ๊ธฐ๋ฐ˜ ๊ธฐ์ˆ ์ด ๋ฐœ์ „ํ•  ๋•Œ๋งˆ๋‹ค ๋” ๋น ๋ฅด๊ฒŒ ์›€์ง์ผ ์ˆ˜ ์žˆ๋‹ค. ๋ฐ˜๋Œ€๋กœ ํ์‡„ํ˜• ์‹œ์Šคํ…œ์€ ์‰ฝ๊ฒŒ ์ •์ฒด๋œ๋‹ค. ํ˜์‹ ์„ ๋‚ด๋ถ€ ์—ญ๋Ÿ‰์—๋งŒ ์˜์กดํ•ด์•ผ ํ•˜๋ฏ€๋กœ ์„ฑ์žฅ ์†๋„๋Š” ๋А๋ ค์ง€๊ณ  ๋น„์šฉ์€ ์ฆ๊ฐ€ํ•˜๋ฉฐ ์ธ์žฌ ์œ ์ถœ๋„ ์‹ฌํ™”๋œ๋‹ค.

ํ•„์ž๊ฐ€ ์ปค๋ฆฌ์–ด ์ „๋ฐ˜์—์„œ ๊ด€์ฐฐํ•œ ๋ฐ”๋กœ๋Š”, ๋ฏธ๋ž˜๋Š” ์‚ฌ์šฉ์ž๋ฅผ ๊ณต๋™ ์ฐฝ์ž‘์ž๋กœ ๋Œ€์šฐํ•˜๋Š” ํ”Œ๋žซํผ์— ๋Œ์•„๊ฐˆ ๊ฒƒ์ด๋‹ค. ์‚ฌ์šฉ์ž ํ•œ ๋ช… ํ•œ ๋ช…์ด ๊ธฐ์—ฌ์ž๋กœ ๊ธฐ๋Šฅํ•˜๊ธฐ ๋•Œ๋ฌธ์— ์ œํ’ˆ๊ณผ ์ƒํƒœ๊ณ„์˜ ํ™•์žฅ ์†๋„๋Š” ๊ธฐํ•˜๊ธ‰์ˆ˜์ ์œผ๋กœ ๋†’์•„์ง„๋‹ค.

ํ”ผ๋“œ๋ฐฑ ๊ธฐ๋ฐ˜ ์„ฑ์žฅ

ํ”ผ๋“œ๋ฐฑ์€ ๊ธฐ์ˆ  ์ง„ํ™”๋ฅผ ์ถ”์ง„ํ•˜๋Š” ๋ฐ ๋น„ํ•ด ์ง€๋‚˜์น˜๊ฒŒ ํ‰๊ฐ€์ ˆํ•˜๋œ ๋™๋ ฅ์ด๋‹ค. AWS๋Š” ๋กœ๋“œ๋งต์˜ 90%๊ฐ€ ๊ณ ๊ฐ ์š”์ฒญ์—์„œ ๋‚˜์˜จ๋‹ค๊ณ  ๋ฐํžŒ ๋ฐ” ์žˆ๋‹ค. ํ•„์ž ์—ญ์‹œ ๊ทธ๊ณณ์—์„œ ์ผํ•˜๋ฉฐ ๊ฐœ์„ ์ด ์‚ฌ์šฉ์„ ๋Š˜๋ฆฌ๊ณ , ์‚ฌ์šฉ์ด ์ƒˆ๋กœ์šด ํ”ผ๋“œ๋ฐฑ์„ ๋งŒ๋“ค๊ณ , ๊ทธ ํ”ผ๋“œ๋ฐฑ์ด ๋‹ค์‹œ ํ˜์‹ ์„ ์ผ์œผํ‚ค๋Š” ์„ ์ˆœํ™˜ ๊ตฌ์กฐ๋ฅผ ์ง์ ‘ ๊ฒฝํ—˜ํ–ˆ๋‹ค.

AI ์‹œ์Šคํ…œ๋„ ๊ฐ™์€ ๊ตฌ์กฐ๋กœ ์›€์ง์ธ๋‹ค. ๊ฐ•ํ™”ํ•™์Šต, ํŒŒ์ธํŠœ๋‹, ์‚ฌ์šฉ์ž ํ™œ๋™ ๋ฐ์ดํ„ฐ ๋“ฑ ๋ชจ๋“  ์š”์†Œ๊ฐ€ ๋ชจ๋ธ์˜ ์ง„ํ™”๋ฅผ ์ด‰์ง„ํ•œ๋‹ค. ํ•˜๋‚˜์˜ ์งˆ๋ฌธ, ํ•˜๋‚˜์˜ ์ˆ˜์ •, ํ•˜๋‚˜์˜ ํ”„๋กฌํ”„ํŠธ๊ฐ€ ๋‹ค์Œ ์‘๋‹ต์„ ๋” ์ •๊ตํ•˜๊ฒŒ ๋งŒ๋“œ๋Š” ์‹ ํ˜ธ๊ฐ€ ๋œ๋‹ค.

์ด๋Ÿฐ ํ”ผ๋“œ๋ฐฑ ๊ธฐ๋ฐ˜ ์„ฑ์žฅ์€ ์ด์ œ ์—”ํ„ฐํ”„๋ผ์ด์ฆˆ AI ๋„์ž… ์ „๋ฐ˜์œผ๋กœ ํ™•์žฅ๋˜๊ณ  ์žˆ๋‹ค. ๊ฐ ์—…๋ฌด ํ๋ฆ„, ๋Œ€ํ™”ํ˜• ์ƒํ˜ธ์ž‘์šฉ, ๋ชจ๋ธ ์ถœ๋ ฅ ํ•˜๋‚˜ํ•˜๋‚˜๊ฐ€ ํ•™์Šต ๊ธฐํšŒ๊ฐ€ ๋œ๋‹ค. ์‚ฌ์šฉ์žโ€“๋ฐ์ดํ„ฐโ€“๊ฐœ๋ฐœ์ž ๊ฐ„ ํ”ผ๋“œ๋ฐฑ ๋ฃจํ”„๋ฅผ ์˜๋„์ ์œผ๋กœ ์„ค๊ณ„ํ•˜๋Š” ์กฐ์ง์€ AI๋ฅผ ์ •์ ์ธ ๋„๊ตฌ๋กœ ๋‹ค๋ฃจ๋Š” ์กฐ์ง๋ณด๋‹ค ํ›จ์”ฌ ๋น ๋ฅด๊ฒŒ ์‹œ์Šคํ…œ์„ ์ง„ํ™”์‹œํ‚จ๋‹ค. ์ „์ž๊ฐ€ ์—…๊ณ„๋ฅผ ์ฃผ๋„ํ•˜๋Š” ์กฐ์ง์ด ๋˜๊ณ , ํ›„์ž๋Š” ์ž์—ฐ์Šค๋Ÿฝ๊ฒŒ ๋’ค์ฒ˜์ง€๊ฒŒ ๋œ๋‹ค.

ํ˜„์žฅ์—์„œ ์ด๋ฅผ ๊ตฌํ˜„ํ•˜๋ ค๋ฉด AI ํ™œ์šฉ ์‚ฌ๋ก€์— ์ •๋Ÿ‰ ์ง€ํ‘œ๋ฅผ ์‹ฌ๊ณ , ์ •ํ™•๋„์™€ ๋งฅ๋ฝ์„ ์ง€์†์ ์œผ๋กœ ๋ชจ๋‹ˆํ„ฐ๋งํ•˜๋ฉฐ, ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ–ˆ์„ ๋•Œ ๋น ๋ฅด๊ฒŒ โ€˜๋ฃจํ”„๋ฅผ ๋‹ซ๋Š”โ€™ ๊ตฌ์กฐ๋ฅผ ์„ธ์›Œ์•ผ ํ•œ๋‹ค. ํ”ผ๋“œ๋ฐฑ์€ ๋‹จ์ˆœ ์ง€์› ๊ธฐ๋Šฅ์ด ์•„๋‹ˆ๋ผ ์ง€์† ํ•™์Šต์„ ์œ„ํ•œ ์ „๋žต์  ๋ฉ”์ปค๋‹ˆ์ฆ˜์ด๋‹ค.

๊ฐ€์žฅ ์•ž์„œ ์žˆ๋Š” AI ์กฐ์ง์€ ๊ฐ€์žฅ ํฐ ๋ชจ๋ธ์„ ๊ฐ€์ง„ ์กฐ์ง์ด ์•„๋‹ˆ๋‹ค. ๊ฐ€์žฅ ์ด˜์ด˜ํ•œ ํ”ผ๋“œ๋ฐฑ ์ˆœํ™˜ ๊ตฌ์กฐ๋ฅผ ์šด์˜ํ•˜๋Š” ์กฐ์ง์ด๋‹ค.

์—”ํ„ฐํ”„๋ผ์ด์ฆˆ ํ”Œ๋žซํผ ์‚ฌ๊ณ 

์ด ๋ชจ๋“  ๋…ผ์˜๋Š” CIO์™€ ๊ธฐ์ˆ  ๋ฆฌ๋”์—๊ฒŒ ์ค‘์š”ํ•œ ๋ฉ”์‹œ์ง€๋ฅผ ์ „๋‹ฌํ•œ๋‹ค. ์กฐ์ง ๋‚ด๋ถ€์—์„œ๋„ ํ”Œ๋žซํผ ์‚ฌ๊ณ ๋ฅผ ์ ์šฉํ•ด์•ผ ํ•œ๋‹ค๋Š” ์ ์ด๋‹ค. ํ•„์ž๋Š” ๊ธฐ์—…์„ ์—ฌ๋Ÿฌ ์‹œ์Šคํ…œ์˜ ๋ฌถ์Œ์œผ๋กœ ๋ฐ”๋ผ๋ณด๋Š” ๋Œ€์‹ , ๋‹ค์–‘ํ•œ ๋ถ€์„œ๊ฐ€ ๊ทธ ์œ„์—์„œ ์ƒˆ๋กœ์šด ๊ฐ€์น˜๋ฅผ ๊ตฌ์ถ•ํ•  ์ˆ˜ ์žˆ๋Š” ํ•˜๋‚˜์˜ ํ”Œ๋žซํผ์œผ๋กœ ์ธ์‹ํ•˜๋ผ๊ณ  ์กฐ์–ธํ•œ๋‹ค. ๋ฐ์ดํ„ฐ ํŒŒ์ดํ”„๋ผ์ธ, ๊ฑฐ๋ฒ„๋„Œ์Šค ํ”„๋ ˆ์ž„์›Œํฌ, ํ†ตํ•ฉ ํŒจํ„ด์ฒ˜๋Ÿผ ์žฌ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ AI ์—ญ๋Ÿ‰์„ ๋งˆ๋ จํ•ด ๊ฐ ์‚ฌ์—… ๋ถ€๋ฌธ์ด ์•ˆ์ „ํ•˜๊ฒŒ ํ™œ์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•˜๊ณ , ํŒ€์ด ์‹คํ—˜ํ•  ์ˆ˜ ์žˆ๋Š” ๊ฐ€๋“œ๋ ˆ์ผ๊ณผ API๋ฅผ ์ œ๊ณตํ•ด ๋ถ„์‚ฐ๋œ ํ˜์‹ ์„ ์ด‰์ง„ํ•ด์•ผ ํ•œ๋‹ค.

ํด๋ผ์šฐ๋“œ ์‹œ๋Œ€์— ์…€ํ”„์„œ๋น„์Šค ์ธํ”„๋ผ๊ฐ€ ๊ฐœ๋ฐœ ๋ฌธํ™”๋ฅผ ๋ฐ”๊ฟจ๋“ฏ, AI ์‹œ๋Œ€์—๋Š” ์…€ํ”„์„œ๋น„์Šค ์ง€๋Šฅ์ด ๊ฐ™์€ ๋ณ€ํ™”๋ฅผ ์ด๋Œ๊ณ  ์žˆ๋‹ค. ๋งˆ์ผ€ํŒ… ์กฐ์ง์€ ๋น„์ •ํ˜• ๋ฐ์ดํ„ฐ์—์„œ ํ†ต์ฐฐ์„ ์ถ”์ถœํ•˜๊ณ , HR์€ ์˜จ๋ณด๋”ฉ์„ ์œ„ํ•œ ์ง€์‹ ํƒ์ƒ‰์„ ์ž๋™ํ™”ํ•˜๋ฉฐ, ์žฌ๋ฌด ์กฐ์ง์€ AI ๊ธฐ๋ฐ˜ ์˜ˆ์ธก์œผ๋กœ ๊ฒฝ์˜ ์„ฑ๊ณผ๋ฅผ ๋ชจ๋ธ๋งํ•˜๋Š” ์‹์ด๋‹ค. ๊ฐ ๊ธฐ๋Šฅ ์กฐ์ง์€ ๊ณตํ†ต ๊ธฐ๋ฐ˜ ์œ„์—์„œ ์ž์‚ฌ์˜ ๋„๋ฉ”์ธ ์ „๋ฌธ์„ฑ์„ ๋”ํ•ด ๊ณ ์œ ํ•œ ๊ฐ€์น˜๋ฅผ ๋งŒ๋“ค์–ด๋‚ธ๋‹ค.

CIO๋Š” ์ด๋Ÿฌํ•œ ๊ณผ์ •์„ ์กฐ์œจํ•˜๋Š” ํ•ต์‹ฌ ์—ญํ• ์„ ๋งก๋Š”๋‹ค. ์ƒํ˜ธ์šด์šฉ์„ฑ๊ณผ ๋ณด์•ˆ, ์œค๋ฆฌ์  ์‚ฌ์šฉ์„ ๋ณด์žฅํ•˜๋Š” ๋™์‹œ์— ์กฐ์ง๋ณ„ ํ˜์‹ ์ด ์ž์œ ๋กญ๊ฒŒ ์ผ์–ด๋‚  ์ˆ˜ ์žˆ๋„๋ก ํ™˜๊ฒฝ์„ ์กฐ์„ฑํ•ด์•ผ ํ•œ๋‹ค. ํ†ต์ œ์™€ ์ฐฝ์˜์„ฑ ์‚ฌ์ด์˜ ๊ท ํ˜•์„ ์–ด๋–ป๊ฒŒ ์žก๋А๋ƒ๊ฐ€ ์ฐจ์„ธ๋Œ€ ์—”ํ„ฐํ”„๋ผ์ด์ฆˆ ๋ฆฌ๋”์˜ ์—ญ๋Ÿ‰์„ ๊ฐ€๋ฅด๋Š” ๊ธฐ์ค€์ด ๋  ๊ฒƒ์ด๋‹ค.

์žฌ๋ฐœ๋ช… ํ•จ์ • ํ”ผํ•˜๊ธฐ

ํŠนํžˆ ๊ธฐ์ˆ  ์ค‘์‹ฌ ์กฐ์ง์—๋Š” ๋ชจ๋“  ๊ฒƒ์„ ๋‚ด๋ถ€์—์„œ ์ƒˆ๋กœ ๋งŒ๋“ค๊ณ  ์‹ถ์€ ์œ ํ˜น์ด ์กด์žฌํ•œ๋‹ค. ์ด๋ ‡๊ฒŒ ํ•˜๋ฉด ๋” ์•ˆ์ „ํ•˜๊ณ  ํ†ต์ œ ๊ฐ€๋Šฅํ•˜๋‹ค๊ณ  ๋А๋ผ๊ธฐ ๋•Œ๋ฌธ์ด๋‹ค. ๊ทธ๋Ÿฌ๋‚˜ ์—ญ์‚ฌ์  ์‚ฌ๋ก€๋Š” ์ด๋Ÿฌํ•œ ๋ณธ๋Šฅ์ด ์–ผ๋งˆ๋‚˜ ์‰ฝ๊ฒŒ ๋ฐœ์ „ ์†๋„๋ฅผ ๋Šฆ์ถœ ์ˆ˜ ์žˆ๋Š”์ง€๋ฅผ ๋ณด์—ฌ์คฌ๋‹ค.

์ž์ฒด ํ”„๋ผ์ด๋น— ํด๋ผ์šฐ๋“œ๋ฅผ ๊ตฌ์ถ•ํ•˜๋ ค ํ–ˆ์ง€๋งŒ, ํผ๋ธ”๋ฆญ ํด๋ผ์šฐ๋“œ์˜ ๊ทœ๋ชจ์™€ ์†๋„๋ฅผ ๋”ฐ๋ผ๊ฐ€์ง€ ๋ชปํ•ด ์‹คํŒจํ•œ ๊ธฐ์—…์€ ์ ์ง€ ์•Š์•˜๋‹ค. AI๋„ ๋งˆ์ฐฌ๊ฐ€์ง€๋‹ค. ๋…์ž ๋ชจ๋ธ์„ ํ•™์Šตํ•˜๋ ค๋ฉด ๋ง‰๋Œ€ํ•œ ์—ฐ์‚ฐ ์ž์›๊ณผ ์ธ์žฌ๊ฐ€ ํ•„์š”ํ•˜๊ณ , ๊ธฐ๋ฐ˜ ํ”Œ๋žซํผ์€ ๊ฐœ๋ณ„ ๊ธฐ์—…์ด ๋”ฐ๋ผ์žก๊ธฐ ํž˜๋“  ์†๋„๋กœ ๋ฐœ์ „ํ•œ๋‹ค.

๋” ํšจ์œจ์ ์ธ ์ „๋žต์€ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜ ๊ณ„์ธต์—์„œ ์ฐจ๋ณ„ํ™”ํ•˜๋Š” ๊ฒƒ์ด๋‹ค. ๋ฐ์ดํ„ฐ ์ „๋žต, ์‚ฌ์šฉ์ž ๊ฒฝํ—˜, ๋„๋ฉ”์ธ ํŠนํ™” ํ†ตํ•ฉ ๋“ฑ ๊ธฐ์—…๋งŒ์ด ์ œ๊ณตํ•  ์ˆ˜ ์žˆ๋Š” ๊ณ ์œ  ๊ฐ€์น˜๋ฅผ ๊ฐ•ํ™”ํ•˜๋Š” ๋ฐฉํ–ฅ์œผ๋กœ ์ง€๋Šฅํ™” ์—ญ๋Ÿ‰์„ ๊ตฌ์ถ•ํ•˜๊ณ , ๋ฒ”์šฉ์  ์‚ฌ๊ณ  ๋Šฅ๋ ฅ์€ ์ด๋ฏธ ์„ฑ์ˆ™ํ•œ ํ”Œ๋žซํผ์— ๋งก๊ธฐ๋Š” ํŽธ์ด ํ›จ์”ฌ ํšจ๊ณผ์ ์ด๋‹ค.

์„ฑ๊ณตํ•˜๋Š” ์กฐ์ง์€ ์ƒํƒœ๊ณ„ ์ „๋ฐ˜์—์„œ AI๋ฅผ ์กฐ์œจํ•˜๋ฉฐ ํ™œ์šฉํ•˜๋Š” ๊ณณ์ด์ง€, ๋ชจ๋“  ๊ฒƒ์„ ๋‚ด๋ถ€์—์„œ ์žฌ๋ฐœ๋ช…ํ•˜๋ ค๋Š” ๊ณณ์ด ์•„๋‹ˆ๋‹ค.

๋ฆฌ๋”์‹ญ ๊ณผ์ œ

AI๋Š” ํ•œ ์„ธ๋Œ€์— ํ•œ ๋ฒˆ ๋“ฑ์žฅํ•˜๋Š” ๊ธฐ์ˆ  ์ „ํ™˜์ ์ด๋‹ค. ํ•˜์ง€๋งŒ ๊ณผ๊ฑฐ์˜ ์ฃผ์š” ๊ธฐ์ˆ  ์ „ํ™˜์ด ๊ทธ๋žฌ๋“ฏ, ์Šน์ž๋Š” ์—ญ์‚ฌ์˜ ๊ตํ›ˆ์„ ์˜ฌ๋ฐ”๋ฅด๊ฒŒ ํ•ด์„ํ•œ ๊ธฐ์—…์ด ๋œ๋‹ค.

ํด๋ผ์šฐ๋“œ๋Š” โ€˜์†Œ์œ โ€™๋ณด๋‹ค โ€˜ํ™œ์šฉโ€™์ด, โ€˜์‚ฌ์ผ๋กœโ€™๋ณด๋‹ค โ€˜์ƒํƒœ๊ณ„โ€™๊ฐ€, ๊ณ ์ •๋œ ๊ณ„ํš๋ณด๋‹ค โ€˜ํ”ผ๋“œ๋ฐฑโ€™์ด ๊ฐ•๋ ฅํ•˜๋‹ค๋Š” ์ ์„ ๋ถ„๋ช…ํžˆ ๋ณด์—ฌ์คฌ๋‹ค. AI๋Š” ์ด๋Ÿฌํ•œ ๊ตํ›ˆ์„ ์ƒˆ๋กœ์šด ์˜์—ญ์œผ๋กœ ํ™•์žฅํ•˜๊ณ  ์žˆ์„ ๋ฟ์ด๋‹ค.

CIO์™€ ๊ธฐ์ˆ  ๋ฆฌ๋”์—๊ฒŒ ์š”๊ตฌ๋˜๋Š” ๊ณผ์ œ๋Š” ๋ช…ํ™•ํ•˜๋‹ค. ํ•™์Šตํ•˜๋Š” ์•„ํ‚คํ…์ฒ˜๋ฅผ ๊ตฌ์ถ•ํ•˜๊ณ , ์˜คํ”ˆ ์ƒํƒœ๊ณ„๋ฅผ ํ™œ์šฉํ•ด ํ˜์‹  ์†๋„๋ฅผ ๋†’์—ฌ์•ผ ํ•œ๋‹ค. ํ”ผ๋“œ๋ฐฑ์„ ๋ฌธํ™”์  ์Šต๊ด€์œผ๋กœ ๋งŒ๋“ค๊ณ , ์ด๋ฏธ ํ”Œ๋žซํผ์ด ์ œ๊ณตํ•˜๋Š” ๊ธฐ๋Šฅ์„ ๋‚ด๋ถ€์—์„œ ๋ฐ˜๋ณต ๊ฐœ๋ฐœํ•˜๋Š” ๋Œ€์‹  ๊ธฐ์—… ๊ณ ์œ ์˜ ๋น„์ฆˆ๋‹ˆ์Šค ๋ฌธ์ œ ํ•ด๊ฒฐ์— ์ธ์žฌ๋ฅผ ์ง‘์ค‘ํ•ด์•ผ ํ•œ๋‹ค.

AI๊ฐ€ ๊ธฐ์—…์„ ๋ณ€ํ™”์‹œํ‚ฌ ๊ฒƒ์ธ์ง€๋Š” ๋” ์ด์ƒ ์งˆ๋ฌธ์ด ์•„๋‹ˆ๋‹ค. ๋ณ€ํ™”๋Š” ์ด๋ฏธ ์ผ์–ด๋‚˜๊ณ  ์žˆ๋‹ค. ์ง€๊ธˆ ์ค‘์š”ํ•œ ์งˆ๋ฌธ์€, ๊ทธ ๋ณ€ํ™”๊ฐ€ ์ง€์† ๊ฐ€๋Šฅํ•˜๊ณ  ์œค๋ฆฌ์ ์ด๋ฉฐ ๋น ๋ฅด๊ฒŒ ์ผ์–ด๋‚˜๋„๋ก โ€˜์˜ฌ๋ฐ”๋ฅธ ํ”Œ๋žซํผโ€™์„ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•˜๊ณ  ์žˆ๋Š”๊ฐ€์ด๋‹ค.

ํ˜์‹ ์€ ๋ฌด์—‡์„ ์†Œ์œ ํ•˜๋А๋ƒ๊ฐ€ ์•„๋‹ˆ๋ผ ๋ฌด์—‡์„ ๊ฐ€๋Šฅํ•˜๊ฒŒ ํ•˜๋А๋ƒ์—์„œ ์‹œ์ž‘๋œ๋‹ค๋Š” ์‚ฌ์‹ค์„ ์ดํ•ดํ•˜๋Š” ๋ฆฌ๋”๊ฐ€ ๋ฏธ๋ž˜๋ฅผ ์ด๋Œ ๊ฒƒ์ด๋‹ค.
dl-ciokorea@foundryco.com

Embracing sovereign AI in the financial services industry

9 December 2025 at 02:18

Generative AI (GenAI) is reshaping industries worldwide, prompting many nations to adopt sovereign AI strategies to protect data and maintain control over AI development. This shift increases the need for secure, locally integrated server and storage solutions.

Sovereign AI is especially critical for regulated sectors such as government and financial services, where data-intensive workloads continue to grow. ASUS collaborated with AMD to deliver end-to-end solutionsโ€”enterprise servers, AI, cybersecurity, and cloudโ€”to help organizations boost efficiency, strengthen data security, and improve TCO. With experience building national computing centers and partnering with AMD, ASUS enables secure, AI-driven transformation.

Transforming the financial industry

At the ASUS AI Tech event, industry leaders explored how ASUS AI infrastructure enables secure, AI-driven transformation in a rapidly evolving market.

โ€œFrom hardware servers to software platforms, our expertise has helped our customers, particularly in the public sector and financial services industry, explore and leverage the power of sovereign AI,โ€ says Paul Ju, Senior Vice President, Co-Head of Open Platform BG, ASUS.

Modern AI workloads and real-time financial services demand higher performance than traditional storage can deliver. Sovereign AI infrastructure supports real-time risk assessment, market analysis, and high-performance computing while maintaining strong security and compliance.

ASUS and AMD provide end-to-end solutions that combine enterprise-grade servers, AI, cybersecurity, and cloud technologies, helping institutions manage data securely and optimize workloads.

A leading financial ISV in APAC upgraded its securities quotation system using ASUS RS700A-E13-RS4U servers with AMD EPYC™ 9005 processors, achieving full software compatibility, zero downtime, and improved trading efficiency.

Similarly, an Asian brokerage firm built a millisecond-level trading platform by pairing its open-source trading system with ASUS RS501A-E12-RS4U servers powered by AMD EPYC™ 9005 processors, ensuring fast execution, strong stability, and a competitive edge.


ASUS AI infrastructure solutions, powered by AMD EPYC™ 9005 processors

AMD EPYC™ processors deliver powerful and reliable performance for financial workloads, especially in quantitative finance and risk management. Using QuantLib v1.35 for benchmarking, systems powered by AMD EPYC showed superior performance and performance-per-watt, helping IT teams consolidate data center resources, cut software costs, and improve TCO.1
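
For a concrete sense of the kind of workload such a benchmark exercises, the sketch below times a small batch of European option valuations with QuantLib's Python bindings. It is a minimal illustration only, not the QuantLib benchmark cited in the footnote, and the dates, strikes, and market data are hypothetical.

    import time
    import QuantLib as ql

    # Minimal sketch: time a batch of analytic Black-Scholes valuations to
    # illustrate the style of pricing workload a QuantLib-based benchmark runs.
    today = ql.Date(9, ql.December, 2025)
    ql.Settings.instance().evaluationDate = today

    spot = ql.QuoteHandle(ql.SimpleQuote(100.0))
    rates = ql.YieldTermStructureHandle(ql.FlatForward(today, 0.03, ql.Actual365Fixed()))
    dividends = ql.YieldTermStructureHandle(ql.FlatForward(today, 0.01, ql.Actual365Fixed()))
    vol = ql.BlackVolTermStructureHandle(
        ql.BlackConstantVol(today, ql.TARGET(), 0.20, ql.Actual365Fixed()))
    process = ql.BlackScholesMertonProcess(spot, dividends, rates, vol)
    engine = ql.AnalyticEuropeanEngine(process)

    start = time.perf_counter()
    for strike in range(50, 151):
        option = ql.VanillaOption(
            ql.PlainVanillaPayoff(ql.Option.Call, float(strike)),
            ql.EuropeanExercise(today + ql.Period(1, ql.Years)))
        option.setPricingEngine(engine)
        option.NPV()  # analytic valuation of a one-year European call
    elapsed = time.perf_counter() - start
    print(f"Priced 101 options in {elapsed * 1000:.2f} ms")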

Benchmarks with open-source tools such as KX Nano further proved EPYC's ability to handle large-scale time-series data and multi-threaded workloads, enabling higher throughput and efficient resource use. AMD continues to work with partners to meet the evolving needs of the financial services industry.2
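
As a rough analogue, the sketch below aggregates synthetic tick series into open/high/low/close buckets across a thread pool, the kind of multi-threaded, time-series-heavy work such benchmarks stress on many-core CPUs. It is illustrative only and does not use kdb+ or KX Nano; the series count and bucket size are arbitrary assumptions.

    import time
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    # Illustrative sketch, not KX Nano: bucket synthetic tick series in parallel.
    # NumPy releases the GIL for the array work, so threads can use many cores.
    N_SERIES = 64                  # hypothetical number of independent tick series
    TICKS_PER_SERIES = 2_000_000   # hypothetical ticks per series

    rng = np.random.default_rng(0)
    series = [rng.standard_normal(TICKS_PER_SERIES).cumsum() for _ in range(N_SERIES)]

    def ohlc(prices, bucket=1_000):
        """Collapse raw ticks into open/high/low/close buckets."""
        trimmed = prices[: len(prices) // bucket * bucket].reshape(-1, bucket)
        return np.stack(
            [trimmed[:, 0], trimmed.max(axis=1), trimmed.min(axis=1), trimmed[:, -1]],
            axis=1)

    start = time.perf_counter()
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(ohlc, series))
    elapsed = time.perf_counter() - start
    total_ticks = N_SERIES * TICKS_PER_SERIES
    print(f"Aggregated {total_ticks / 1e6:.0f}M ticks in {elapsed:.2f}s "
          f"({total_ticks / elapsed / 1e6:.1f}M ticks/s)")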

Powered by industry-leading AMD EPYC™ 9005 processors, ASUS AI servers deliver strong performance and density for AI-driven, mission-critical data center workloads. In SPEC CPU® 2017 benchmark testing, the RS520QA-E13 server achieved exceptional Peak and Base results in a 2U4N configuration. Together, ASUS and AMD provide the reliability, throughput, and scalability that modern financial services require.

Table 1: 2U4N configuration powered by AMD EPYC™ 9005 processors in the SPEC CPU® 2017 benchmark

ASUS AI infrastructure solutions for financial services include:

  • Real-time insights with HPC data center servers

Accelerate risk modeling and AI forecasting for trading and market analysis with the ASUS RS520QA-E13 multi-node server, designed for HPC, financial analytics, and cloud workloads. It supports faster trading decisions and real-time market response.

  • Compliance-ready with scalable storage

Ensure regulatory compliance with high-capacity, reliable all-flash storage such as the RS501A-E12 with WEKA. Starting from 8 nodes, it offers 1–2.2PB of capacity with high read/write throughput, ideal for multi-pipeline AI, HPC, and financial workloads.

  • Data protection with robust, comprehensive security

Protect sensitive financial data with comprehensive security from infrastructure to applications. ASUS Control Center enables centralized server management, enhancing security and compliance for financial operations.

Engineered for versatile workloads, ASUS servers deliver exceptional performance for demanding AI tasks. With optimized space and power efficiency, the ASUS server portfolio enhances HPC capabilities and accelerates AI-driven financial digitalization.

Learn how ASUS empowers financial institutions with robust hardware, intelligent software, and proactive support from the ASUS AI Infrastructure Solution Group.

1 Introducing a New QuantLib Benchmark

2 Competitive KX Nano Benchmarking on AMD EPYC Processors

โŒ
โŒ