Today — 8 December 2025 · CIO

Meet the MAESTRO: AI agents are ending multi-cloud vendor lock-in

8 December 2025 at 10:17

For today’s CIO, a multi-cloud footprint spanning hyperscalers, enterprise platforms, and AI-native cloud providers is a non-negotiable strategy for business resilience and innovation velocity. Yet this very flexibility can become a liability, often leading to fragmented automation, vendor sprawl, and costly data silos. The next frontier in cloud optimization isn’t better scripting; it’s agentic AI systems.

These autonomous, goal-driven systems, deployed as coordinated multi-agent ecosystems, act as an enterprise’s “MAESTRO.” They don’t just follow instructions; they observe, plan, and execute tasks across cloud boundaries in real-time, effectively transforming vendor sprawl from a complexity tax into a strategic asset.

The architecture of cross-cloud agent interoperability

The core challenge in a multi-cloud environment is not the platforms themselves, but the lack of seamless interoperability between the automation layers running on them. The MAESTRO architecture (referencing the Cloud Security Alliance’s MAESTRO agentic AI threat modeling framework; MAESTRO stands for multi-agent environment, security, threat, risk and outcome) solves this by standardizing the language and deployment of these autonomous agents:

1. The open standards bridge: A2A protocol

For agents to coordinate effectively—to enable a FinOps agent on one cloud to negotiate compute resources with an AIOps agent on another cloud—they must speak a common, vendor-agnostic language. This is where the emerging Agent2Agent (A2A) protocol becomes crucial.

The A2A protocol is an open, universal standard that enables intelligent agents, regardless of vendor or underlying model, to discover, communicate, and collaborate. It provides the technical foundation for:

  • Dynamic capability discovery: Agents can publish their identity and skills, allowing others to discover and connect without hard-coded integrations.
  • Context sharing: Secure exchange of context, intent, and status, enabling long-running, multi-step workflows like cross-cloud workload migration or coordinated threat response.
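The publish-and-discover pattern can be sketched in a few lines of Python. The agent-card fields and registry API below are simplified illustrations of the idea, not the actual A2A schema:

```python
from dataclasses import dataclass

# Illustrative agent "card": field names are simplified assumptions,
# not the exact A2A Agent Card schema.
@dataclass
class AgentCard:
    name: str
    skills: list[str]
    endpoint: str

class Registry:
    """Minimal discovery service: agents publish cards, peers query by skill."""
    def __init__(self):
        self._cards: list[AgentCard] = []

    def publish(self, card: AgentCard) -> None:
        self._cards.append(card)

    def discover(self, skill: str) -> list[AgentCard]:
        return [c for c in self._cards if skill in c.skills]

registry = Registry()
registry.publish(AgentCard("finops-agent", ["cost-arbitrage"], "https://cloud-a.example/agents/finops"))
registry.publish(AgentCard("neocloud-agent", ["gpu-capacity"], "https://neocloud.example/agents/capacity"))

# A FinOps agent looking for GPU capacity discovers the neocloud agent
# without any hard-coded integration.
matches = registry.discover("gpu-capacity")
```

The point is the shape of the interaction: capabilities are advertised as data, so new agents can join the ecosystem without bilateral integration work.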

To see the power of the Maestro architecture, consider a critical cross-cloud workflow: strategic capacity arbitrage and failover. A FinOps agent on a general-purpose cloud continuously monitors an AI inference workload’s service-level objectives (SLOs) and cost-per-inference. When an AIOps agent on the same cloud detects a sudden regional outage, it broadcasts a high-priority “capacity sourcing” intent over the A2A protocol. The Maestro orchestrates an immediate response: the FinOps agent automatically negotiates and provisions the required GPU capacity with a specialized neocloud agent, a security agent verifies that the new data pipeline adheres to the required data sovereignty rules, and the workload migration agent then shifts the portable, Kubernetes-packaged workload to the newly available capacity, all in under a minute, maintaining continuous model performance. This complex, real-time coordination is impossible without the standardized language and interoperability provided by the A2A protocol and a Kubernetes-native deployment foundation.

2. The deployment foundation: Kubernetes-native frameworks

To ensure agents can be deployed, scaled, and managed consistently across clouds, we must leverage a Kubernetes-native approach. Kubernetes is already the de facto orchestration layer for enterprise cloud-native applications. New Kubernetes-native agent frameworks, like kagent, are emerging to extend this capability directly to multi-agent systems.

This approach gives the Maestro:

  • Zero-downtime agent portability: Package agents as standard containers, making it trivial to move a high-value security agent from one cloud to another for resilience or cost arbitrage.
  • Observability and auditability: Leverage Kubernetes’ built-in tools for monitoring, logging, and security to gain visibility into the agent’s actions and decision-making process, a non-negotiable requirement for autonomous systems.
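As a rough illustration of the portability argument, the sketch below renders a plain Kubernetes Deployment for a containerized agent. The image name is hypothetical, and a kagent-style framework would layer agent-specific CRDs on top; this shows only the portable-container idea:

```python
# Hypothetical sketch: render a standard Kubernetes Deployment manifest (as a
# Python dict) for a containerized agent. Only the "cloud" label changes per
# target; the container image, and therefore the agent, is unchanged.
def agent_deployment(name: str, image: str, cloud: str) -> dict:
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name, "cloud": cloud}},
        "spec": {
            "replicas": 1,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# Moving the agent to another cloud is just re-rendering against a new target.
on_cloud_a = agent_deployment("security-agent", "registry.example/security-agent:1.4", "cloud-a")
on_cloud_b = agent_deployment("security-agent", "registry.example/security-agent:1.4", "cloud-b")
```

Because both manifests reference the same image, the "move a high-value security agent between clouds" scenario reduces to applying the same artifact to a different cluster.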

Strategic value: Resilience and zero lock-in

The Maestro architecture fundamentally shifts the economics and risk profile of a multi-cloud strategy.

  • Reduces vendor lock-in: By enforcing open standards like A2A, the enterprise retains control over its core AI logic and data models. The Maestro’s FinOps agents are now capable of dynamic cost and performance arbitrage across a more diverse compute landscape that includes specialized providers. Neoclouds are purpose-built for AI, offering GPU-as-a-Service (GPUaaS) and unique performance advantages for training and inference. By packaging AI workloads as portable Kubernetes containers, the Maestro can seamlessly shift them to the most performant or cost-effective platform—whether it’s an enterprise cloud for regulated workloads, or a specialized AI-native cloud for massive, high-throughput training. As BCG emphasizes, managing the evolving dynamics of digital platform lock-in requires disciplined sourcing and modular, loosely coupled architectures. The agent architecture makes it dramatically easier to port or coordinate high-value AI services, providing true strategic flexibility.
  • Enhances business resilience (AIOps): AIOps agents, orchestrated by the Maestro, can perform dynamic failover, automatically redirecting traffic or data pipelines between regions or providers during an outage. Furthermore, the Maestro can orchestrate strategic capacity sourcing, instantly rerouting critical AI inference workloads to available, high-performance GPU capacity offered by specialized neoclouds to ensure continuous model performance during a regional outage on a general-purpose cloud. They can also ensure compliance by dynamically placing data or compute in the “greenest” (most energy-efficient) cloud or the required sovereign region to meet data sovereignty rules.

The future trajectory

The shift to the Maestro architecture represents more than just a technological upgrade; it signals the true democratization of the multi-cloud ecosystem. By leveraging open standards like A2A, the enterprise is moving away from monolithic vendor platforms and toward a vibrant, decentralized marketplace of agentic services. In this future state, enterprises will gain access to specialized, hyper-optimized capabilities from a wide array of providers, treating every compute, data, or AI service as a modular, plug-and-play component. This level of strategic flexibility fundamentally alters the competitive landscape, transforming the IT organization from a consumer of platform-centric services to a strategic orchestrator of autonomous, best-of-breed intelligence. This approach delivers the “strategic freedom from vendor lock-in” necessary to continuously adapt to market changes and accelerate innovation velocity, effectively turning multi-cloud complexity into a decisive competitive advantage.

Governance: Managing the autonomous agent sprawl

The power of autonomous agents comes with the risk of “misaligned autonomy”: agents doing exactly what they were optimized to do, unconstrained by the guardrails the enterprise forgot to encode. Success requires a robust governance framework to manage the burgeoning population of agents.

  • Human-in-the-loop (HITL) for critical decisions: While agents execute most tasks autonomously, the architecture must enforce clear human intervention points for high-risk decisions, such as a major cost optimization that impacts a business-critical service or an automated incident response that involves deleting a core data store. Gartner emphasizes the importance of transparency, clear audit trails, and the ability for humans to intervene or override agent behavior. In fact, Gartner predicts that by 2028, loss of control—where AI agents pursue misaligned goals—will be the top concern for 40% of Fortune 1000 companies.
  • The 4 pillars of agent governance: A strong framework must cover the full agent lifecycle:
    1. Lifecycle management: Enforcing separation of duties for development, staging, and production.
    2. Risk management: Implementing behavioral guardrails and compliance checks.
    3. Security: Applying least privilege access to tools and APIs.
    4. Observability: Auditing every action to maintain a complete chain of reasoning for compliance and debugging.
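Pillars 3 and 4 can be sketched together as a minimal allow-list gate that records every decision; agent names, tool names, and the audit format here are illustrative assumptions:

```python
# Sketch of least-privilege tool access (pillar 3) with a full audit trail
# (pillar 4). Each agent gets an explicit allow-list; anything else is denied,
# and every attempt is logged either way.
AUDIT_LOG: list[str] = []

TOOL_GRANTS = {
    "finops-agent": {"read_billing", "resize_instance"},
    "security-agent": {"read_logs"},
}

def invoke_tool(agent: str, tool: str) -> bool:
    allowed = tool in TOOL_GRANTS.get(agent, set())
    AUDIT_LOG.append(f"{agent} -> {tool}: {'ALLOW' if allowed else 'DENY'}")
    return allowed
```

A denied attempt still leaves an audit entry, which is exactly the chain-of-reasoning evidence compliance and debugging need.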

By embracing this Maestro architecture, CIOs can transform their multi-cloud complexity into a competitive advantage, achieving unprecedented levels of resilience, cost optimization, and, most importantly, strategic freedom from vendor lock-in.

This article is published as part of the Foundry Expert Contributor Network.

Why cyber resilience must be strategic, not a side project

8 December 2025 at 07:45

As one of the world’s foremost voices on cybersecurity and crisis leadership, Sarah Armstrong-Smith has spent her career at the intersection of technology, resilience and human decision-making. Formerly chief security advisor at Microsoft Europe, and now a member of the UK Government Cyber Advisory Board, she is widely recognized for her ability to translate complex technical challenges into actionable business strategy.

In this exclusive interview with The Cyber Security Speakers Agency, Sarah explores how today’s CIOs must evolve from technology enablers to resilience architects — embedding cyber preparedness into the core of business strategy. Drawing on decades of experience leading crisis management and resilience functions at global organizations, she offers a masterclass in how technology leaders can balance innovation with security, manage disruption with clarity and build cultures of trust in an era defined by volatility and digital interdependence.

For business and technology leaders navigating the next wave of transformation, Sarah’s insights offer a rare blend of strategic depth and practical foresight — a roadmap for leadership in the age of perpetual disruption.

1. As digital transformation accelerates, how can CIOs embed cyber resilience into the very fabric of business strategy rather than treating it as a separate function?

Cyber resilience should be recognised as a strategic enabler, not merely a technical safeguard. CIOs must champion a holistic approach where resilience is woven into every stage of digital transformation — from initial design through to deployment and ongoing operations.

This requires close collaboration with business leaders to ensure risk management and security controls are embedded from the outset, rather than being an afterthought. By aligning cyber resilience objectives with business outcomes, CIOs can work alongside CISOs to help their organizations anticipate threats, adapt rapidly to disruptions and maintain stakeholder trust.

Embedding resilience also demands a shift in organizational mindset. CIOs should help to foster a culture where every employee understands their role in protecting digital assets and maintaining operational service.

This involves education and cross-functional exercises that simulate real-world incidents, aligned to current threats. By making resilience a shared responsibility and a key performance metric, CIOs can ensure their organizations are not only prepared to withstand a range of threats but are also positioned to recover quickly and thrive in the face of adversity.

2. CIOs and CISOs often face tension between innovation and security. What’s your advice for maintaining that balance while still driving progress?

Balancing innovation and security is a constant challenge that requires CIOs to act as both risk managers and business catalysts. The key is to embed security and resilience considerations into the innovation lifecycle, ensuring new technologies and processes are assessed for risk early and often.

CIOs should promote agile governance frameworks that allow for rapid experimentation while maintaining clear guardrails around information protection, compliance and operational integrity. By involving security teams from the outset, organizations can identify potential vulnerabilities before they become systemic issues.

At the same time, CISOs must avoid creating a culture of fear that stifles creativity. Instead, they should encourage responsible risk-taking by providing teams with the tools, guidance and autonomy to innovate securely.

This includes leveraging automation, zero-trust architectures and continuous monitoring to reduce vulnerabilities and enable faster, safer deployment of solutions. Ultimately, the goal is to create an environment where innovation and security are mutually reinforcing, driving competitive advantage and organizational resilience.

3. You’ve led crisis management and resilience teams across major organizations. What leadership lessons can CIOs take from managing incidents under pressure?

Effective crisis leadership is built on preparation, decisiveness and transparent communication. CIOs must ensure their teams are well-versed in incident response and empowered to act swiftly when an incident occurs.

This means investing in due diligence, having clear escalation paths and robust playbooks that outline the critical path, and designated roles and responsibilities. During a crisis, leaders must remain calm, protect critical assets and make informed decisions based on real-time intelligence.

Equally important is the ability to communicate clearly with both internal and external stakeholders. CIOs and CISOs should work in unison to provide timely updates to the board, regulators and customers, balancing transparency with the need to protect vulnerable people and sensitive data.

Demonstrating accountability and empathy during a crisis can help preserve trust and minimise reputational damage. After the incident, leaders should be thoroughly committed to post-mortems to identify ‘no blame’ lessons learned and drive continuous improvement, ensuring the organization emerges stronger and more resilient.

4. With AI transforming both security threats and defences, what role should CIOs play in governing ethical and responsible AI adoption?

CIOs are uniquely positioned to guide the ethical deployment of AI and emerging tech, balancing innovation with risk management and societal responsibility. They should contribute to governance frameworks that address data privacy, algorithmic bias and transparency, ensuring AI systems are designed and operated in accordance with core organizational policies and regulatory requirements. This involves collaborating with legal, compliance and HR teams to develop policies that safeguard against unintended consequences and consequential impact.

Additionally, CIOs should champion ongoing education and awareness around AI ethics, both within IT and across the wider organization. By fostering a culture of accountability and continuous learning, CIOs can help teams identify and mitigate risks associated with AI through the implementation of rigorous engineering principles.

Regular technical and security assessments and ongoing stakeholder engagement are essential to maintaining trust and ensuring AI adoption delivers positive outcomes for those most impacted by it.

5. In your experience, what distinguishes organizations that recover stronger from a cyber incident from those that struggle to regain trust?

Organizations that recover stronger from cyber incidents typically demonstrate resilience through proactive planning, transparent communication and a commitment to continuous improvement. They invest in proactive and reactive capabilities and a positive culture driven by empathetic leadership, empowerment and accountability.

When an incident occurs, these organizations respond swiftly, contain the threat and communicate transparently with stakeholders about the actions being taken to remediate and reduce future occurrences.

Conversely, organizations that struggle often lack preparedness and fail to engage stakeholders effectively. Delayed or inconsistent communication can erode trust and amplify reputational damage.

The most resilient organizations treat incidents and near-misses as learning opportunities, conducting thorough post-incident reviews and implementing changes to strengthen their defences. By prioritising transparency, accountability and a culture of resilience, CIOs can help their organizations not only recover but also enhance their reputation and stakeholder confidence.

6. How can CIOs cultivate a security-first culture across non-technical teams — especially in remote or hybrid work environments?

Cultivating a security-first culture requires CIOs and CISOs to make cybersecurity relevant and accessible to all employees, regardless of technical expertise. This starts with tailored training programmes that address the specific risks faced by different stakeholders, rather than a one-size-fits-all approach.

This should leverage engaging formats, such as interactive workshops, gamified learning and real-world simulations, to reinforce positive behaviors and outcomes.

Beyond training, CIOs and CISOs must embed security into everyday workflows by providing user-friendly tools and clear guidance. Regular communication, visible leadership and recognition of positive security behaviors can help sustain momentum.

In hybrid environments, CIOs should ensure policies are dynamic and adaptive to evolving threats, enabling employees to work securely without sacrificing productivity. By fostering a sense of shared responsibility and empowering non-technical teams, CIOs can build a resilient culture that extends beyond the IT department.

7. Boards are increasingly holding CIOs accountable for resilience and risk. How can technology leaders communicate complex security risks in business language?

To effectively engage boards, CIOs must translate technical issues into enterprise risks, framing cybersecurity and resilience as a strategic imperative rather than a technical challenge. This involves articulating how exposure to specific threats could affect safety, revenue, reputation, regulatory compliance and operational services. CIOs and CISOs should use clear, non-technical language, supported by real-world scenarios, to illustrate the potential consequences of ineffective controls and the value of resilience investments.

Regular, structured and diligent reporting — such as dashboards, heat maps and risk registers — can help boards visualise enterprise risk exposure and track progress over time. CIOs should foster open dialogue, encouraging board members to ask questions and participate in scenario planning.

By aligning security discussions with business objectives and demonstrating the ROI of resilience initiatives, technology and security leaders can build trust and secure the support needed to drive meaningful change.

8. What emerging risks or trends should CIOs be preparing for in 2025 and beyond?

CIOs must stay ahead of a rapidly evolving threat landscape, characterised by the proliferation of AI-enabled attacks, supply chain vulnerabilities and targeted campaigns. The rise of quantum computing poses long-term risks to traditional encryption methods, necessitating understanding and early exploration of quantum-safe solutions.

Additionally, regulatory scrutiny around data sovereignty and ethical AI is intensifying, requiring codes of conduct and governance strategies.

Beyond technology, CIOs should anticipate continuous shifts in workforce dynamics, such as the increase in human-related threats. Societal risks, geopolitical instability and the convergence of physical and cyber threats are also shaping the resilience agenda. By maintaining a forward-looking perspective and investing in adaptive capabilities, leaders can position their organizations to navigate uncertainty and capitalize on emerging opportunities.

9. How important is collaboration between CIOs and other business leaders, such as CFOs and CHROs, in building organizational resilience?

Collaboration across the entire C-suite is essential for building holistic resilience that encompasses people, technology, finance and processes. CIOs must work closely with CFOs to ensure resilience investments are financially sustainable, and with CROs to align risk management strategies with business priorities. Engaging CHROs is equally important, as workforce readiness and culture play a critical role in responding to and recovering from disruptions.

Joint initiatives such as cross-functional crisis simulations, integrated risk assessments and shared accountability frameworks can help break down silos and foster a unified approach to resilience.

By leveraging diverse perspectives and expertise, CIOs can drive more effective decision-making and ensure resilience is embedded throughout the organization. Ultimately, strong collaboration enables organizations to reduce assumptions, anticipate challenges, respond cohesively and emerge stronger in times of adversity.

10. Finally, what personal qualities do you believe future-ready CIOs must develop to lead effectively through constant disruption?

Future-ready CIOs must embody adaptability, strategic vision and emotional intelligence. The pace of technological change and the frequency of disruptive events demand leaders who can pivot quickly, embrace uncertainty and inspire confidence in their teams. CIOs should cultivate an inquisitive mindset, continuously seeking new knowledge and challenging conventional wisdom to stay ahead of emerging trends.

Equally important are communication and collaboration skills. CIOs must be able to articulate complex ideas clearly, build consensus across diverse stakeholders and foster a culture of trust and accountability.

Resilience, empathy and a commitment to ethical leadership will enable CIOs to navigate challenges with integrity and guide their organizations through periods of uncertainty and transformation. By developing these qualities, CIOs can lead with purpose and drive sustainable success in an ever-changing landscape.

This article is published as part of the Foundry Expert Contributor Network.

An engineer’s introduction to implementing LLM agents: from framework selection to building a prototype

8 December 2025 at 07:22

Grasping the overall architecture

The important first step is being able to picture the basic architecture of an LLM agent system. In most cases, the core is an LLM inference API, surrounded by prompt templates, a set of tools, a memory store, a vector database for RAG, and logging and monitoring. The agent itself is implemented as an orchestration layer that combines these components and manages the observe-think-act loop.

A request from the client is first passed through the application server to the agent. The agent builds a prompt from the current context and memory and calls the LLM API. Any tool calls contained in the LLM’s output are parsed, and the corresponding tool functions or external APIs are executed. The results return to the agent, are folded into the next step’s prompt, and the loop continues.
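This observe-think-act loop can be sketched with a stubbed LLM. The `TOOL:name:arg` call format is an assumption for illustration; real frameworks use structured (e.g. JSON) tool calls:

```python
# Minimal agent loop: think (call the LLM), parse any tool call, act (run the
# tool), observe (append the result to context), and repeat until the LLM
# returns a final answer. fake_llm stands in for a real inference API.
def fake_llm(prompt: str) -> str:
    if "result:" not in prompt:
        return "TOOL:search:today's weather"
    return "It looks sunny today."

TOOLS = {"search": lambda q: f"search results for '{q}'"}

def run_agent(user_input: str, max_steps: int = 5) -> str:
    context = f"user: {user_input}"
    for _ in range(max_steps):
        output = fake_llm(context)            # think
        if output.startswith("TOOL:"):        # parse the tool call
            _, name, arg = output.split(":", 2)
            result = TOOLS[name](arg)         # act
            context += f"\nresult: {result}"  # observe, then loop again
        else:
            return output                     # final answer
    return "step limit reached"
```

The `max_steps` cap matters in practice: without it, a misbehaving model can loop on tool calls indefinitely.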

When RAG is incorporated, the agent calls a retrieval tool as needed, fetching documents relevant to the user’s question or task from the vector database. The retrieved text is folded into the LLM’s context to ground fact-based answers and decisions. The memory store holds long-term per-user information and intermediate task state, which is reused in subsequent interactions.

Keeping this structure in mind makes design decisions easier: which parts to build first, and which to keep replaceable later. For example, a staged approach becomes possible where you start with a simple RDBMS as the memory store and add a dedicated vector database or caching layer afterwards.

Framework selection and a small prototype

For implementation, you can use the agent frameworks and workflow engines offered by vendors and the community, or write a thin orchestration layer yourself. Either way, the key to success is not trying to build a perfect foundation from the start.

When choosing a framework, check which LLM providers it supports, how easy tool integration is, how it manages state, and what logging and monitoring it provides. Code readability and extensibility also matter. The moment will inevitably come when you want fine-grained control over the agent’s behavior, so in the long run it is safer to choose a framework whose internals are easy to understand than one that looks like a black box.

For a first prototype, it is best to build an agent specialized for a single clear use case: for example, a research agent that combines web search with internal RAG to draft reports, or a help-desk agent that answers employee inquiries by consulting the internal FAQ. At this stage, keep authentication, complex permission management, and scaling strategy to a minimum; the goal is simply to give the team a shared, hands-on feel for the agent.

Within the prototype, limiting yourself to two or three tools and simple session-scoped memory makes implementation easier. In exchange, log carefully: set up a way to see which prompts produced which outputs, and whether each tool call succeeded or failed. This visibility pays off in later improvement work.

Development process, testing, and evaluation

What tends to trip up engineers in LLM agent development is the difficulty of testing. The same input often does not produce the same response, so conventional unit-test and snapshot-test techniques cannot be applied as-is. What matters instead is scenario-based evaluation, combining automated and human review.

Concretely, prepare several typical task scenarios and, for each, define the conditions for expected behavior, at the level of, say, “for this inquiry, cite the relevant section of the internal regulations and present three options.” Run the agent against these scenarios regularly, judging pass or fail with LLM-based automatic evaluation or rule-based checkers. In addition, conduct human review of the important scenarios to check subjective quality.
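A minimal sketch of such scenario-based evaluation, with rule-based checks and a stubbed agent (the scenario content and checker rules are illustrative; an LLM judge could replace or augment the rules):

```python
# Each scenario pairs an input with predicate checks on the response.
SCENARIOS = [
    {
        "input": "Can I expense a taxi after 10pm?",
        "checks": [
            lambda r: "internal regulations" in r,  # must cite the policy
            lambda r: r.count("option") >= 3,       # must present 3 options
        ],
    },
]

def evaluate(agent, scenarios) -> dict[str, bool]:
    """Run the agent on every scenario; a scenario passes if all checks pass."""
    return {
        s["input"]: all(check(agent(s["input"])) for check in s["checks"])
        for s in scenarios
    }

def stub_agent(question: str) -> str:
    return ("Per the internal regulations, section 4: "
            "option 1, keep the receipt; option 2, get pre-approval; "
            "option 3, use a company car.")

report = evaluate(stub_agent, SCENARIOS)
```

Run as a CI job, the `report` dict becomes the regression signal: a configuration change that flips any scenario from pass to fail is caught immediately.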

For the development process, keep prompts and tool configuration easy to change while building evaluation jobs into CI so the impact of each change is visible. Every time the agent’s configuration changes, run the scenario evaluation and visualize the movement of key metrics. This lets you detect early the situation where improving one use case has quietly degraded another.

Finally, in the operations phase, user feedback and log analysis become key sources of information. Provide an interface that lets users easily report whether an answer was helpful and what was wrong, then tie that feedback to logs to prioritize improvements. Engineers end up continuously improving the whole system: not just tuning models and prompts, but adding and removing tools, revisiting the memory strategy, and hardening error handling.

Implementing LLM agents is not just building a wrapper around API calls; it is something like a mixed martial art where inference systems, workflows, data infrastructure, and UX all intersect. But if you start from a small prototype and expand gradually while keeping the architectural skeleton in mind, you can grow an agent that withstands production use at a realistic cost.

Risks and governance for building safe LLM agents: hallucination, security, and legal liability

8 December 2025 at 07:21

The landscape of risks specific to LLM agents

The first thing to understand is that LLM agent risk is not a single technical problem; it spans multiple layers. One layer is hallucination in the LLM itself. The tendency to state plausible but wrong information with complete confidence is well known, but when the agent can access external tools, such errors can turn into concrete actions. Attempting to call a nonexistent API endpoint, or extracting data under the wrong conditions, directly affects business processes.

Next come security and privacy risks. Agents often access not only user input but also various internal systems and documents, handling confidential information along the way. If that information is sent externally via the model provider or logging systems, it creates information-management risk. Nor can the possibility of attackers abusing the agent be ignored: for example, a prompt-injection attack that rewrites the agent’s policy, causing unintended data transmission or operations.

There is also the question of legal liability. If content the agent generates or actions it takes lead to a legal or contractual violation, who is responsible? The model provider, the service provider that embedded the agent, or the end user? In many areas this question has no clear answer yet, which makes governance design all the harder.

Designing guardrails and permission management

To address these risks, technical and operational guardrails must be designed in multiple layers, and at the center is permission management. Grant agents the minimum permissions necessary; “start read-only” is the safe approach. For a CRM integration, for example, a staged design would begin with read access to customer records only, then, after a period without incident, gradually open up limited record-update permissions.

For high-risk actions, it is essential to build workflows that always insert human approval. Large payment instructions, changes to contract terms, and sending important external documents are things the agent may draft or propose, but a human should perform the final execution. Explicitly embedding this human-approval step into the agent’s flow limits the impact of any misbehavior.
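The human-approval step can be sketched as an explicit gate in the agent’s execution path; the action names and the approval callback are illustrative stand-ins:

```python
# High-risk actions route through an `approve` callback (a human reviewer);
# everything else runs autonomously. The agent drafts, the human decides.
HIGH_RISK = {"send_payment", "amend_contract", "send_external_document"}

def execute_action(action: str, payload: dict, approve) -> str:
    if action in HIGH_RISK:
        if not approve(action, payload):
            return "rejected by human reviewer"
    return f"executed {action}"

# A payment request is held for approval; here the reviewer declines it.
result = execute_action("send_payment", {"amount": 1_000_000},
                        approve=lambda action, payload: False)
```

In a real system `approve` would enqueue the request into a review UI rather than return synchronously, but the control-flow shape is the same.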

Against prompt injection and data leakage, input and output filtering is also indispensable: do not fold user input or text fetched from external sites directly into the system prompt, check that outputs contain no information that must not leave the organization, and stop processing and raise an alert when specific keywords or patterns are detected. These mechanisms can usually be implemented in the application layer, outside the model, and they form an important part of the guardrails.
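As a minimal sketch of such application-layer output filtering (the patterns are illustrative; a production filter would be far more thorough):

```python
# Block responses matching sensitive patterns before they leave the system;
# a blocked response is replaced with an alert message instead of raw text.
import re

BLOCKED_PATTERNS = [
    re.compile(r"\b\d{4}-\d{4}-\d{4}-\d{4}\b"),  # card-number-like strings
    re.compile(r"(?i)api[_-]?key"),               # credential mentions
]

def filter_output(text: str) -> tuple[bool, str]:
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(text):
            return False, "response blocked: sensitive pattern detected"
    return True, text

ok, safe = filter_output("Your balance is 300 USD.")
blocked_ok, msg = filter_output("Here is the API_KEY you asked for.")
```

The same predicate structure works on the input side, e.g. refusing to splice externally fetched text into the system prompt when it matches injection patterns.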

Governance through monitoring and clear accountability

Even with guardrails designed, an agent cannot simply be deployed and left alone. Although it is built on a trained model, its behavior shifts with context and environment, so continuous monitoring and improvement are needed after launch.

Monitoring should cover the ratio of successful to failed tasks, how often users correct the agent, patterns of errors and exceptions, and behavior that looks suspicious from a security standpoint. Especially important is early detection of near misses, the incidents that stopped just short of serious harm. For instance, a log showing that the agent attempted to reach a forbidden external domain but was blocked by a guardrail is a valuable signal pointing to room for design improvement.

Clarifying responsibility is also part of governance. Internally, the organization must name an owner with final responsibility for the agent’s design and operation, and define change-management and incident-response processes. Externally, the terms of service and privacy policy should explain clearly the agent’s capabilities and limits, and what verification is expected of users.

A safe LLM agent is not one with zero risk, but one whose risks are made visible and operated under control. Since hallucination and misjudgment can never be fully eliminated, the governance framework that takes them as a given, deciding where to stop, where to hand off to a human, and how to detect problems and turn them into learning, matters as much as the design itself.

CIOs shift from ‘cloud-first’ to ‘cloud-smart’

8 December 2025 at 05:01

Common wisdom has long held that a cloud-first approach will gain CIOs benefits such as agility, scalability, and cost-efficiency for their applications and workloads. While cloud remains most IT leaders’ preferred infrastructure platform, many are rethinking their cloud strategies, pivoting from cloud-first to “cloud-smart” by choosing the best approach for specific workloads rather than just moving everything off-premises and prioritizing cloud over other considerations for new initiatives.

Cloud cost optimization is one factor motivating this rethink, with organizations struggling to control escalating cloud expenses amid rapid growth. An estimated 21% of enterprise cloud infrastructure spend, equivalent to $44.5 billion in 2025, is wasted on underutilized resources — with 31% of CIOs wasting half of their cloud spend, according to a recent survey from VMware.

The full rush to the cloud is over, says Ryan McElroy, vice president of technology at tech consultancy Hylaine. Cloud-smart organizations have a well-defined and proven process for determining which workloads are best suited for the cloud.

For example, “something that must be delivered very quickly and support massive scale in the future should be built in the cloud,” McElroy says. “Solutions with legacy technology that must be hosted on virtual machines or have very predictable workloads that will last for years should be deployed to well-managed data centers.”

The cloud-smart trend is being influenced by better on-prem technology, longer hardware cycles, ultra-high margins with hyperscale cloud providers, and the typical hype cycles of the industry, according to McElroy. All favor hybrid infrastructure approaches.

However, “AI has added another major wrinkle with siloed data and compute,” he adds. “Many organizations aren’t interested in or able to build high-performance GPU datacenters, and need to use the cloud. But if they’ve been conservative or cost-averse, their data may be in the on-prem component of their hybrid infrastructure.”

These variables have led to complexity or unanticipated costs, either through migration or data egress charges, McElroy says.

He estimates that “only 10% of the industry has openly admitted they’re moving” toward being cloud-smart. While that number may seem low, McElroy says it is significant.

“There are a lot of prerequisites to moderate on your cloud stance,” he explains. “First, you generally have to be a new CIO or CTO. Anyone who moved to the cloud is going to have a lot of trouble backtracking.”

Further, organizations need to have retained and upskilled the talent who manage the datacenter they own or at the co-location facility. They must also have infrastructure needs that outweigh the benefits the cloud provides in terms of raw agility and fractional compute, McElroy says.

Selecting and reassessing the right hyperscaler

Procter & Gamble embraced a cloud-first strategy when it began migrating workloads about eight years ago, says Paola Lucetti, CTO and senior vice president. At that time, the mandate was that all new applications would be deployed in the public cloud, and existing workloads would migrate from traditional hosting environments to hyperscalers, Lucetti says.

“This approach allowed us to modernize quickly, reduce dependency on legacy infrastructure, and tap into the scalability and resilience that cloud platforms offer,” she says.

Today, nearly all P&G’s workloads run on cloud. “We choose to keep selected workloads outside of the public cloud because of latency or performance needs that we regularly reassess,” Lucetti says. “This foundation gave us speed and flexibility during a critical phase of digital transformation.”

As the company’s cloud ecosystem has matured, so have its business priorities. “Cost optimization, sustainability, and agility became front and center,” she says. “Cloud-smart for P&G means selecting and regularly reassessing the right hyperscaler for the right workload, embedding FinOps practices for transparency and governance, and leveraging hybrid architectures to support specific use cases.”

This approach empowers developers through automation, AI, and agentic AI to drive value faster, Lucetti says. “This approach isn’t just technical — it’s cultural. It reflects a mindset of strategic flexibility, where technology decisions align with business outcomes.”

AI is reshaping cloud decisions

AI represents a huge potential spend requirement and raises the stakes for infrastructure strategy, says McElroy.

“Renting servers packed with expensive Nvidia GPUs all day every day for three years will be financially ruinous compared to buying them outright,” he says, “but the flexibility to use next year’s models seamlessly may represent a strategic advantage.”

Cisco, for one, has become far more deliberate about what truly belongs in the public cloud, says Nik Kale, principal engineer and product architect. Cost is one factor, but the main driver is AI data governance.

“Being cloud-smart isn’t about repatriation — it’s about aligning AI’s data gravity with the right control plane,” he says.

IT has parsed out what should be in a private cloud and what goes into a public cloud. “Training and fine-tuning large models requires strong control over customer and telemetry data,” Kale explains. “So we increasingly favor hybrid architectures where inference and data processing happen within secure, private environments, while orchestration and non-sensitive services stay in the public cloud.”

Cisco’s cloud-smart strategy starts with data classification and workload profiling. Anything with customer-identifiable information, diagnostic traces, or model feedback loops is processed within regionally compliant private clouds, he says.

Then there are “stateless services, content delivery, and telemetry aggregation that benefit from public-cloud elasticity for scale and efficiency,” Kale says.

Cisco’s approach also involves “packaging previously cloud-resident capabilities for secure deployment within customer environments — offering the same AI-driven insights and automation locally, without exposing data to shared infrastructure,” he says. “This gives customers the flexibility to adopt AI capabilities without compromising on data residency, privacy, or cost.”

These practices have improved Cisco’s compliance posture, reduced inference latency, and yielded measurable double-digit reductions in cloud spend, Kale says.

One area where AI has fundamentally changed their approach to cloud is in large-scale threat detection. “Early versions of our models ran entirely in the public cloud, but once we began fine-tuning on customer-specific telemetry, the sensitivity and volume of that data made cloud egress both costly and difficult to govern,” he says. “Moving the training and feedback loops into regional private clouds gave us full auditability and significantly reduced transfer costs, while keeping inference hybrid so customers in regulated regions received sub-second response times.”

IT saw a similar issue with its generative AI support assistant. “Initially, case transcripts and diagnostic logs were processed in public cloud LLMs,” Kale says. “As customers in finance and healthcare raised legitimate concerns about data leaving their environments, we re-architected the capability to run directly within their [virtual private clouds] or on-prem clusters.”

The orchestration layer remains in the public cloud, but the sensitive data never leaves their control plane, Kale adds.

AI has also reshaped how telemetry analytics is handled across Cisco’s CX portfolio. IT collects petabyte-scale operational data from more than 140,000 customer environments.

“When we transitioned to real-time predictive AI, the cost and latency of shipping raw time-series data to the cloud became a bottleneck,” Kale says. “By shifting feature extraction and anomaly detection to the customer’s local collector and sending only high-level risk signals to the cloud, we reduced egress dramatically while improving model fidelity.”
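The local-collector pattern Kale describes, computing features at the edge and forwarding only compact risk signals instead of raw time series, can be sketched roughly as follows. The feature set, baseline, and z-score threshold here are illustrative assumptions, not Cisco's actual pipeline:

```python
import statistics

def extract_features(window):
    """Summarize a raw telemetry window into a few features locally."""
    return {
        "mean": statistics.fmean(window),
        "stdev": statistics.pstdev(window),
        "peak": max(window),
    }

def risk_signal(features, baseline_mean, baseline_stdev, threshold=3.0):
    """Flag a window whose peak deviates strongly from the baseline (z-score)."""
    if baseline_stdev == 0:
        return None
    z = (features["peak"] - baseline_mean) / baseline_stdev
    if z > threshold:
        return {"risk": "high", "z_score": round(z, 2)}
    return None  # nothing anomalous: send no payload at all

# Local collector: only anomalous windows produce a small payload for the cloud,
# so the raw window itself never leaves the customer environment.
window = [10.1, 9.8, 10.3, 42.0, 10.0]
signal = risk_signal(extract_features(window), baseline_mean=10.0, baseline_stdev=0.5)
```

The egress saving comes from the return shape: a handful of floats per anomalous window, and nothing at all for the common healthy case.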

In all instances, “AI made the architectural trade-offs clear: Specific workloads benefit from public-cloud elasticity, but the most sensitive, data-intensive, and latency-critical AI functions need to run closer to the data,” Kale says. “For us, cloud-smart has become less about repatriation and more about aligning data gravity, privacy boundaries, and inference economics with the right control plane.”

A less expensive execution path

Like P&G, World Insurance Associates believes cloud-smart translates to implementing a FinOps framework. CIO Michael Corrigan says that means having an optimized, consistent build for virtual machines based on the business use case, and understanding how much storage and compute is required.

Those are the main drivers to determine costs, “so we have a consistent set of standards of what will size our different environments based off of the use case,” Corrigan says. This gives World Insurance what Corrigan says is an automated architecture.

“Then we optimize the build to make sure we have things turned on like elasticity. So when services aren’t used typically overnight, they shut down and they reduce the amount of storage to turn off the amount of compute” so the company isn’t paying for it, he says. “It starts with the foundation of optimization or standards.”
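The overnight shutdown Corrigan describes usually comes down to a schedule that maps timestamps to a desired VM state, which a scheduler then reconciles against reality. A minimal sketch, assuming fixed business hours; the hours and state names are hypothetical, not World Insurance's configuration:

```python
from datetime import datetime, time

BUSINESS_START = time(7, 0)   # assumed business hours; tune per workload
BUSINESS_END = time(20, 0)

def desired_state(now: datetime, always_on: bool = False) -> str:
    """Return the target VM state for a given timestamp.

    Weekends and overnight hours map to 'deallocated' so the
    organization stops paying for idle compute.
    """
    if always_on:
        return "running"
    if now.weekday() >= 5:  # Saturday/Sunday
        return "deallocated"
    if BUSINESS_START <= now.time() < BUSINESS_END:
        return "running"
    return "deallocated"

# A scheduler (cron, Azure Automation, etc.) would compare desired_state()
# with the VM's actual state and start or deallocate it accordingly.
print(desired_state(datetime(2026, 7, 1, 23, 30)))  # overnight -> deallocated
```

The `always_on` flag is where the "consistent set of standards based off of the use case" enters: each environment's build template decides whether it participates in the schedule.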

World Insurance works with its cloud providers on different levels of commitment. With Microsoft, for example, the insurance company has the option to use virtual machines, or what Corrigan says is a “reserved instance.” By telling the provider how many machines they plan to consume or how much they intend to spend, he can try to negotiate discounts.

“That’s where the FinOps framework has to really be in place because obviously, you don’t want to commit to a level of spend that you wouldn’t consume otherwise,” Corrigan says. “It’s a good way for the consumer or us as the organization utilizing those cloud services, to get really significant discounts upfront.”

World Insurance is using AI for automation and alerts. AI tools are typically charged on a compute processing model, “and what you can do is design your query so that if it is something that’s less complicated, it’s going to hit a less expensive execution path” and go to a small language model (SLM), which doesn’t use as much processing power, Corrigan says.

The user gets a satisfactory result, and “there is less of a cost because you’re not consuming as much,” he says.

That’s the tactic the company is taking — routing AI queries to the less expensive model. If there is a more complicated workflow or process, it will be routed to the SLM first “and see if it checks the box,” Corrigan says. If its needs are more complex, it is moved to the next stage, which is more expensive, and generally involves an LLM that requires going through more data to give the end user what they’re looking for.

“So we try to manage the costs that way as well so we’re only consuming what’s really needed to be consumed based on the complexity of the process,” he says.
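A tiered routing policy like the one Corrigan outlines can be reduced to a cheap classifier sitting in front of the models. The sketch below uses an illustrative length-and-keyword heuristic and made-up model names; a production router would also escalate to the LLM when the SLM's answer fails a quality check:

```python
# Hypothetical cost-aware router: names and the complexity heuristic are
# illustrative, not World Insurance's actual implementation.
COMPLEX_HINTS = ("compare", "summarize across", "multi-step", "reconcile")

def estimate_complexity(query: str) -> str:
    """Crude heuristic: long queries or multi-step language count as 'complex'."""
    if len(query.split()) > 40 or any(h in query.lower() for h in COMPLEX_HINTS):
        return "complex"
    return "simple"

def route(query: str) -> str:
    """Send simple queries down the cheap execution path; escalate the rest."""
    return "slm" if estimate_complexity(query) == "simple" else "llm"

print(route("What is our policy renewal date?"))                          # slm
print(route("Reconcile claims across all 2024 policies and flag gaps"))   # llm
```

Because the SLM path is charged at a fraction of the LLM rate per unit of compute, even a rough classifier that catches most simple queries shifts the bulk of traffic onto the cheaper path.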

Cloud is ‘a living framework’

Hylaine’s McElroy says CIOs and CTOs need to be more open to discussing the benefits of hybrid infrastructure setups, and how the state of the art has changed in the past few years.

“Many organizations are wrestling with cloud costs they know instinctively are too high, but there are few incentives to take on the risky work of repatriation when a CFO doesn’t know what savings they’re missing out on,” he says.

Lucetti characterizes P&G’s cloud strategy as “a living framework,” and says that over the next few years, the company will continue to leverage the right cloud capabilities to enable AI and agentic AI for business value.

“The goal is simple: Keep technology aligned with business growth, while staying agile in a rapidly changing digital landscape,” she says. “Cloud transformation isn’t a destination — it’s a journey. At P&G, we know that success comes from aligning technology decisions with business outcomes and by embracing flexibility.”

Get data, and the data culture, ready for AI

8 December 2025 at 05:00

When it comes to AI adoption, the gap between ambition and execution can be impossible to bridge. Companies are trying to weave the tech into products, workflows, and strategies, but good intentions often collapse under the weight of day-to-day realities such as messy data and the lack of a clear plan.

“That’s the challenge we see most often across the global manufacturers we work with,” says Rob McAveney, CTO at software developer Aras. “Many organizations assume they need AI, when the real starting point should be defining the decision you want AI to support, and making sure you have the right data behind it.”

Nearly two-thirds of leaders say their organizations have struggled to scale AI across the business, according to a recent McKinsey global survey. Often they can’t move beyond pilot programs, a challenge that’s even more pronounced among smaller organizations. When pilots fail to mature, investment decisions become harder to justify.

A typical issue is the data simply isn’t ready for AI. Teams try to build sophisticated models on top of fragmented sources or messy data, hoping the technology will smooth over the cracks.

“From our perspective, the biggest barriers to meaningful AI outcomes are data quality, data consistency, and data context,” McAveney says. “When data lives in silos or isn’t governed with shared standards, AI will simply reflect those inconsistencies, leading to unreliable or misleading outcomes.”

It’s an issue that impacts almost every sector. Before organizations double down on new AI tools, they must first build stronger data governance, enforce quality standards, and clarify who actually owns the data meant to fuel these systems.

Making sure AI doesn’t take the wheel

In the rush to adopt AI, many organizations forget to ask the fundamental question of what problem actually needs to be solved. Without that clarity, it’s difficult to achieve meaningful results.

Anurag Sharma, CTO of VyStar Credit Union, believes AI is just another tool that’s available to help solve a given business problem, and says every initiative should begin with a clear, simple statement of the business outcome it’s meant to deliver. He encourages his team to isolate issues AI could fix, and urges executives to understand what will change and who will be affected before anything moves forward.

“CIOs and CTOs can keep initiatives grounded by insisting on this discipline, and by slowing down the conversation just long enough to separate the shiny from the strategic,” Sharma says.

This distinction becomes much easier when an organization has an AI center of excellence (COE) or a dedicated working group focused on identifying real opportunities. These teams help sift through ideas, set priorities, and ensure initiatives are grounded in business needs rather than buzz.

The group should also include the people whose work will be affected by AI, along with business leaders, legal and compliance specialists, and security teams. Together, they can define baseline requirements that AI initiatives must meet.

“When those requirements are clear up front, teams can avoid pursuing AI projects that look exciting but lack a real business anchor,” says Kayla Underkoffler, director of AI security and policy advocacy at security and governance platform Zenity.

She adds that someone in the COE should have a solid grasp of the current AI risk landscape. That person should be ready to answer critical questions, knowing what concerns need to be addressed before every initiative goes live.

“A plan could have gaping cracks the team isn’t even aware of,” Underkoffler says. “It’s critical that security be included from the beginning to ensure the guardrails and risk assessment can be added from the beginning and not bolted on after the initiative is up and running.”

In addition, there should be clear, measurable business outcomes to make sure the effort is worthwhile. “Every proposal must define success metrics upfront,” says Akash Agrawal, VP of DevOps and DevSecOps at cloud-based quality engineering platform LambdaTest, Inc. “AI is never explored, it’s applied.”

He recommends companies build in regular 30- or 45-day checkpoints to ensure the work continues to align with business objectives. And if the results don’t meet expectations, organizations shouldn’t hesitate to reassess and make honest decisions, he says. Even if that means walking away from the initiative altogether.

Yet even when the technology looks promising, humans still need to remain in the loop. “In an early pilot of our AI-based lead qualification, removing human review led to ineffective lead categorization,” says Shridhar Karale, CIO at sustainable waste solutions company Reworld. “We quickly retuned the model to include human feedback, so it continually refines and becomes more accurate over time.”

When decisions are made without human validation, organizations risk acting on faulty assumptions or misinterpreted patterns. The aim isn’t to replace people, but to build a partnership in which humans and machines strengthen one another.

Data, a strategic asset

Ensuring data is managed effectively is an often overlooked prerequisite for making AI work as intended. Creating the right conditions means treating data as a strategic asset: organizing it, cleaning it, and having the right policies in place so it stays reliable over time.

“CIOs should focus on data quality, integrity, and relevance,” says Paul Smith, CIO at Amnesty International. His organization works with unstructured data every day, often coming from external sources. Given the nature of the work, the quality of that data can be variable. Analysts sift through documents, videos, images, and reports, each produced in different formats and conditions. Managing such a high volume of messy, inconsistent, and often incomplete information has taught them the importance of rigor.

“There’s no such thing as unstructured data, only data that hasn’t yet had structure applied to it,” Smith says. He also urges organizations to start with the basics of strong, everyday data-governance habits. That means checking whether the data is relevant, and ensuring it’s complete, accurate, and consistent, since outdated information can skew results.

Smith also emphasizes the importance of verifying data lineage. That includes establishing provenance — knowing where the data came from and whether its use meets legal and ethical standards — and reviewing any available documentation that details how it was collected or transformed.

In many organizations, messy data comes from legacy systems or manual entry workflows. “We strengthen reliability by standardizing schemas, enforcing data contracts, automating quality checks at ingestion, and consolidating observability across engineering,” says Agrawal.
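Data contracts of the kind Agrawal mentions are often enforced as schema checks at the ingestion boundary. A minimal sketch, with an assumed record schema and illustrative field names:

```python
# Minimal data-contract check at ingestion; the schema is an assumption
# for illustration, not a real production contract.
CONTRACT = {
    "customer_id": str,
    "amount": float,
    "currency": str,
}
ALLOWED_CURRENCIES = {"USD", "EUR", "GBP"}

def validate(record: dict) -> list[str]:
    """Return a list of contract violations; an empty list means the record passes."""
    errors = []
    for field, expected in CONTRACT.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            errors.append(f"bad type for {field}: {type(record[field]).__name__}")
    if record.get("currency") not in ALLOWED_CURRENCIES:
        errors.append("unknown currency")
    return errors

clean = {"customer_id": "C-1", "amount": 10.5, "currency": "USD"}
dirty = {"customer_id": "C-2", "amount": "10.5"}
print(validate(clean))  # []
```

Records that fail are quarantined rather than loaded, which is what keeps the downstream AI from quietly inheriting legacy-system inconsistencies.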

When teams trust the data, their AI outcomes improve. “If you can’t clearly answer where the data came from and how trustworthy it is, then you aren’t ready,” Sharma adds. “It’s better to slow down upfront than chase insights that are directionally wrong or operationally harmful, especially in the financial industry where trust is our currency.”

Karale says that at Reworld, they’ve created a single source of truth data fabric, and assigned data stewards to each domain. They also maintain a living data dictionary that makes definitions and access policies easy to find with a simple search. “Each entry includes lineage and ownership details so every team knows who’s responsible, and they can trust the data they use,” Karale adds.

A hard look in the organizational mirror

AI has a way of amplifying whatever patterns it finds in the data — the helpful ones, but also the old biases organizations would rather leave behind. Avoiding that trap starts with recognizing that bias is often a structural issue.

CIOs can do a couple of things to prevent problems from taking root. “Vet all data used for training or pilot runs and confirm foundational controls are in place before AI enters the workflow,” says Underkoffler.

Also, try to understand in detail how agentic AI changes the risk model. “These systems introduce new forms of autonomy, dependency, and interaction,” she says. “Controls must evolve accordingly.”

Underkoffler also adds that strong governance frameworks can guide organizations on monitoring, managing risks, and setting guardrails. These frameworks outline who’s responsible for overseeing AI systems, how decisions are documented, and when human judgment must step in, providing structure in an environment where the technology is evolving faster than most policies can keep up.

And Karale says that fairness metrics, such as disparate impact, play an important role in that oversight. These measures help teams understand whether an AI system is treating different groups equitably or unintentionally favoring one over another. These metrics could be incorporated into the model validation pipeline.
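The disparate impact measure Karale cites is typically computed as the ratio of selection rates between a protected group and a reference group. A toy sketch; the data and the 0.8 "four-fifths rule" cutoff are illustrative:

```python
def selection_rate(outcomes):
    """Fraction of positive (e.g., approved) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(protected, reference):
    """Ratio of selection rates; values below ~0.8 (the 'four-fifths rule')
    are a common flag for potential adverse impact."""
    return selection_rate(protected) / selection_rate(reference)

# 1 = approved, 0 = denied (toy data)
group_a = [1, 0, 1, 1, 0, 1, 1, 1]   # reference: 6/8 = 0.75
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # protected: 3/8 = 0.375

ratio = disparate_impact(group_b, group_a)
print(round(ratio, 2))  # 0.5, well below 0.8, so worth investigating
```

Wired into a model validation pipeline, a check like this would fail the build (or at least raise a review) whenever a candidate model's ratio drops below the agreed threshold.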

Domain experts can also play a key role in spotting and retraining models that produce biased or off-target outputs. They understand the context behind the data, so they’re often the first to notice when something doesn’t look right. “Continuous learning is just as important for machines as it is for people,” says Karale.

Amnesty International’s Smith agrees, saying organizations need to train their people continuously to help them pick out potential biases. “Raise awareness of risks and harms,” he says. “The first line of defense or risk mitigation is human.”

SAS unveils 8 predictions for AI in 2026: accountability and ROI take center stage

8 December 2025 at 03:26

Looking back on 2025, SAS acknowledged AI’s rapid advances and varied achievements, but also pointed to several concerns, including a potential AI bubble, the growing burden of rising energy consumption, and generative AI pilot projects that fell short of expectations. SAS experts forecast that 2026 will be a pivotal year in which organizations must secure real ROI from AI and tackle its ethical and economic challenges in earnest.

The outlook blends concern with cautious optimism. SAS leaders emphasized accountability as the key driver of AI progress, arguing that not only AI providers but every organization deploying the technology must apply it responsibly. They also noted that strengthening the fundamentals of data management and building trustworthy AI are essential foundations for advancing technological maturity, bolstering organizational capability, and accelerating innovation.

The key 2026 predictions from SAS’s data and AI leaders are as follows.

  1. The AI market reckoning: a demand for responsible innovation
    2026 will mark the start of a reckoning in the AI market, the point at which inflated expectations for AI collide with governance and only responsible innovation survives. Demands for consistent ROI and transparent oversight will grow, and unproven, hype-driven projects will be scrapped. Investment will refocus on foundational data orchestration, robust modeling, and explainable governance. Overhyped technology will fade, and responsible AI with measurable impact and operational rigor will take its place. How intense this process will be, and when AI’s true renaissance will begin, remain open questions.
  2. A sea change in AI spending
    After billions of dollars poured into technologies like ChatGPT wrappers, CFOs are now demanding real ROI. Yet for most generative AI projects, achieving ROI will prove difficult. The era of justifying budgets in the name of “AI innovation” is over. Scrutiny of cost per query, accuracy, and measurable business outcomes is now mandatory. Companies that cannot demonstrate concrete cost savings, revenue growth, or productivity gains within 6 to 12 months will see AI initiatives halted or vendors replaced.
  3. Agentic AI will own the P&L
    Fortune 500 companies expect agentic systems to autonomously handle more than a quarter of customer interactions by the end of 2026. These agents will go beyond simple support to generate measurable revenue impact. As a result, new roles such as “Chief Agent Officer” are expected to emerge. On the other hand, once autonomous systems drive revenue, a large-scale “agent outage” could have enormous repercussions, and the resulting downtime would hit corporate revenue directly.
  4. Agentic AI, the new colleague
    In 2026, companies will enter a new ecosystem where AI agents are no longer tools but teammates. Organizations will operate with blended human-AI teams in which agents work as trusted collaborators, share business context, and learn continuously alongside people.
  5. AI augmentation over AI replacement
    Will companies use AI to eliminate jobs, or empower people with AI to create competitive advantage? In 2026, leaders will wrestle with that choice. What is becoming increasingly clear is that AI augments people rather than replaces them. Companies will need bold, proactive leaders who invest in their workforce through continuous change.
  6. Synthetic data becomes the new battleground for AI supremacy
    Synthetic data is not a mere stopgap but a strategic weapon against data scarcity, privacy restrictions, and compliance bottlenecks. 2026 will bring a data arms race, with companies competing not only on multimodal real-world data but on how confidently they can generate data. The winners will be those that achieve lifelike fidelity and succeed in moving synthetic data from experimental feature to business advantage at scale.
  7. CIO? Welcome to the era of the “Chief Integration Officer”
    As CIOs take the lead in preparing for an agentic AI future in 2026, their role will shift from technology provider to integrator for agentic AI, in effect a transition to “Chief Integration Officer.” To architect the future of IT in an agent-driven world, AI governance, integration, and cross-functional leadership will become CIOs’ daily work.
  8. High hopes for quantum
    In 2026, the quantum market will heat up further amid expectations that related technologies will deliver early-stage value by 2030. Investors will shift their weight away from hardware and post-quantum cryptography toward software and applications. Meanwhile, watch the term “quantum architecture,” which spans the full stack, including the software and application layers that realize actual quantum value. Hiring of specialists is expected to surge in preparation for this future.

“With demands for ROI and trustworthiness in AI investment rising worldwide, Korean companies are also shifting from short-term, experimental approaches to AI adoption toward a mid- to long-term, strategic perspective,” said Lee Joong-hyuk, CEO of SAS Korea. “As more organizations question the business-revenue impact of generative AI based on large language models (LLMs) applied to simple tasks, interest in agentic AI as an alternative is spreading.”

On the domestic outlook for 2026, he said, “In financial services, efforts to secure real ROI by expanding AI into specialized areas such as risk management, internal controls, and asset-liability management (ALM) will intensify. In the public sector, AI, cloud, and security investment will strengthen around Digital Platform Government 2.0, while agentic AI-driven operational efficiency and the use of synthetic data will be central to AI investment.”

Regarding next year’s business, Lee emphasized, “Building on global success stories, we will actively support Korean customers in establishing AI governance and creating business value through solutions and professional services for the financial and public sectors.”
dl-ciokorea@foundryco.com

Hiring alone isn’t enough: why CIO leadership is critical to retaining talent

8 December 2025 at 03:20

Technology workers, especially those with specialized skills, remain hard to secure. According to Gi Group’s recent global IT HR trends report, 47% of companies struggle to find and retain suitable talent. Turnover also remains high.

In a survey of 200 Italian heads of information systems conducted by global research firm Cegos, 53% of respondents said attracting and retaining IT talent is a problem they face daily. Cybersecurity was cited as the IT department’s most pressing challenge, but it was also an area most CIOs felt they could address to some degree. By contrast, only 8% were confident they could solve the IT talent shortage. After cybersecurity, Italian CIOs ranked skills development and talent retention within IT teams as major challenges, with only 24% and 9%, respectively, confident they could address them.

“It’s not that the talent doesn’t exist,” said Cecilia Colasanti, CIO of Istat, Italy’s national statistics institute. “The talent is clearly there, but it isn’t properly valued, which is why so many people choose to go abroad. Talent means the right person in the right place. Leaders, including CIOs, must be able to recognize talent, make people feel appreciated, and grow them by providing the right opportunities.”

The CIO as talent manager

Colasanti laid out a clear approach to talent management aimed at building a cohesive, motivated organization. “The goal I set for myself as CIO was to keep delivering higher-quality results to internal and external users of our services,” she said. “Because the IT department is a core engine of operations, it’s important to finish the projects we start and produce concrete outcomes so the institution keeps improving. My role is to advance the IT function itself, raise the quality of the services we deliver, ensure the organization runs as it should, and improve the well-being of our people.”

Istat’s IT department currently numbers 195 people, roughly 10% of the institute’s total workforce. One of the first things Colasanti did after being appointed CIO in October 2023 was to meet and talk with everyone in her organization.

“I’ve worked at Istat since 2001, and most of us know each other,” she said. “I’ve held several roles in the IT department, and since becoming CIO I’ve focused on listening to everyone. Because we know each other well, I sense that colleagues have high expectations for collaboration. So I aim for honest conversations and avoid ambiguity. But listening doesn’t mean handing over responsibility. I accept some proposals and decline others, and I try to explain the reasons for my choices.”

Colasanti also revived the “two problems, two solutions” program that Istat had run long ago, asking employees to voluntarily define two problems and propose two solutions. She reviewed the submissions herself, shared feedback in face-to-face meetings, discussed the merits of each proposal, and assessed what needed follow-up. She found the program highly effective in building trust with colleagues.

Some input concerned career development opportunities or technical issues, but the most common complaints were internal communication problems and staff shortages. Colasanti spoke with everyone, making clear where she could intervene and where she couldn’t. Career paths and hiring in the public sector, for example, follow strict procedures that leave a CIO little room for influence.

“I tried to address every issue proactively,” she said. “Where something felt less like a concrete problem and more like vague resistance to change, I focused on drawing out people’s intrinsic motivation and sense of responsibility. Explaining what the institution’s strategy is and what role each person must play to achieve its goals is crucial. Ultimately, people have a right to know what environment they’re working in and how their work affects the bigger picture.”

Because engagement and commitment aren’t built overnight, Colasanti meets regularly with staff, including department heads and service managers, to keep the dialogue going.

A tougher problem for SMBs

Istat’s IT department is relatively large, but many small and midsize companies run IT with only a handful of people, including the CIO, with much of the work handled by external consultants or vendors. In such structures the burden is heavy: coordinating resources across multiple projects while also handling day-to-day IT operations. Cloud outsourcing can be one answer, but many CIOs want to build more internal capability to avoid vendor lock-in.

“It’s hard to attract talent and hard to keep it, so in the end we have no choice but to outsource,” said the CIO of a midsize healthcare company with an IT staff of just three. “Pushing work outside to free up internal resources means accepting the risk that company know-how leaks out. But right now we have no other option. We can’t offer large-enterprise salaries, and it’s very hard to keep frequently poached IT people motivated. We hire someone, train them, watch them grow little by little, and then they leave, and the cycle keeps repeating. On top of that, healthcare is so specialized that people with the skills we need are rare.”

The market always offers attractive terms to technically skilled talent. Private companies in particular can offer hiring flexibility and varied career paths, making it far easier for them to attract talent than public institutions.

“The public sector offers the chance to research, explore, and go deep on topics that private companies judge unprofitable and won’t invest in,” Colasanti said. “Public institutions aim at the common good and have structures that can sustain long-term investment.”

Training, the key to retention

According to Cegos’s global indicators, CIOs prioritize both hiring new talent and training existing teams to meet IT demand. Reskilling and upskilling can be effective in overcoming many of the problems that arise in attracting and retaining talent.

“The market is so competitive that retaining talent requires ways to prevent attrition,” said Emanuela Pignataro, head of business transformation and execution at Cegos Italy. “If a company creates a stimulating, rewarding environment along with adequate compensation, people can focus on their current work instead of looking for other opportunities. Many employees feel overloaded with more work than they can handle, and these are often the most valuable people, yet the least supported. So when a company hires new staff to support them or invests in training, it creates psychological stability and builds loyalty.”

Indeed, Colasanti values continuous learning, which underpins balanced operations and management capability, above almost everything else. The IT training budget isn’t sufficient, but some practical alternatives are in place to meet the needs staff have raised.

“In cases like this, firm commitment is required,” Colasanti noted. “The institution has invested money, so training has to produce results. Fast-moving areas like cybersecurity demand even greater investment.”

The need for leadership

CIOs see it as essential to properly support their people, empower them, and assign work that clearly motivates them. Working closely with HR to establish well-being programs is also considered a must.

According to the Gi Group survey, Italian IT job seekers prioritize, in order: salary, hybrid work arrangements, work-life balance, roles without excessive stress, and opportunities for career development and growth.

Solving the talent-management challenge, however, requires something more: CIOs must become more conscious of their own leadership role. Italian IT leaders currently rank leadership-related factors lowest among core competencies. In the Cegos survey, technical expertise, strategic vision, and innovation capability came first, while leadership lagged far behind. Yet a CIO’s leadership should be the foundation of how the organization runs, and it matters just as much when there is disagreement over a particular decision.

“As a leader, I value being present in the workplace,” Colasanti said. “Istat institutionalized remote and smart working long ago, and anyone can use it when needed. Personally I prefer working from the office, but I respect the balance between private life and work and don’t oppose flexible working. Still, I come in every day, and my colleagues know I’m here.”

Korea and Arm sign MOU to train 1,400 semiconductor and AI specialists

8 December 2025 at 03:19

The MOU follows a meeting the same day between President Lee Jae-myung, SoftBank Chairman Masayoshi Son, and Arm CEO Rene Haas, where the parties discussed expanding cooperation between Korea, SoftBank, and Arm.

The agreement covers training 1,400 industry-tailored specialists, technology exchange and ecosystem strengthening, expanded university partnerships, and R&D cooperation. The Ministry of Trade, Industry and Energy (MOTIE) and Arm plan to form a working-level council for follow-up discussions and to work out implementation details.

In particular, MOTIE plans to establish an “Arm School” (tentative name) with Arm to train about 1,400 IP design specialists from 2026 through 2030. Arm is a core design platform used by global big tech firms such as Apple, Google, and Microsoft as well as major chipmakers including Samsung, Nvidia, and Qualcomm, and the government expects the partnership to help strengthen the competitiveness of Korea’s system semiconductor industry.

According to the official statement, MOTIE plans to proceed without delay with related procedures, including designating specialized semiconductor graduate schools, with the Gwangju Institute of Science and Technology (GIST) under priority review.

“Through this MOU we have laid the groundwork for developing the core talent that will lead the AI semiconductor industry,” said Kim Jung-kwan, Minister of Trade, Industry and Energy. “We will continue expanding cooperation with global companies in preparation for the AI era.”

Meanwhile, SoftBank Chairman Masayoshi Son, whose company holds roughly 90% of Arm, said at the meeting, “Going forward, every country and company must prepare for the ASI era and concentrate their capabilities on guaranteeing universal access for their citizens. Realizing ASI requires four essential resources: energy, semiconductors, data, and education.”

Son also advised that, given Korea’s circumstances, building ASI will require a major expansion of data centers and stronger efforts to secure the energy to support them reliably.
jihyun.lee@foundryco.com

Q&A | The Mitsubishi Materials CIO on the role and appeal of the CIO

8 December 2025 at 03:03

Q: Tell us about your early years as an engineer and what prompted the change in your career direction.
A: In 1989 I joined Mitsubishi Kasei (now Mitsubishi Chemical) as a new production engineer. I was assigned to the Mizushima plant in Kurashiki, Okayama Prefecture, where I took my first career steps in field engineering at a large petrochemical complex.

The turning point came in 1996. As plans moved forward to establish new sites in Boston on the US East Coast and San Francisco in the West, I was selected as a founding member of the West Coast office and posted to Silicon Valley. It was the era of Windows 95, the popularization of the internet, and the dawn of e-business. I found myself at the global front line, where roughly a third of all US investment was said to be concentrated, amid the world’s leading technology and capital.

After three years I returned to Mizushima and resumed production engineering work, but a feeling had taken root: I had seen a world I couldn’t go back from. Having experienced the speed, innovation, and future-oriented spirit of Silicon Valley, I couldn’t return to my old routine.

So I volunteered for a transfer to the information systems division. I went on to handle a range of projects, including DX, bridging technology and management. Then in 2021 I moved to Mitsubishi Materials as CIO, and I now lead the company’s digital strategy, continuing to take on challenges for the future.

Q: You’ve driven ERP projects three times. What was hardest?
A: The biggest challenge of my career was, without question, ERP implementation. Three times now I have revived ERP projects that had fallen into crisis. In each case my predecessor had run into serious trouble and the project had effectively stalled; I was brought in to restructure it, get it back on track, and see it through to completion. The sums and scale involved were enormous, and those experiences shaped my mindset and principles of action as a CIO.

If anything makes my career distinctive, I think it stems from moving from production engineering into IT, and from working in the heart of Silicon Valley. After returning to Japan I built experience across industries, not just inside the company: IT-related work at the petrochemical industry association, promoting electronic data interchange (EDI) for inter-company transactions, and building a global chemical e-commerce platform involving 22 major domestic and international peers.

Working across boundaries, between the plant floor and headquarters, Japan and overseas, business and IT, has become the perspective and judgment I draw on today as CIO.

What I value is focusing on the task in front of me and giving it my best. I believe overly specific goals can actually narrow long-term possibilities, so I deliberately avoid fixing precise targets and instead stay immersed in the present moment.

In a large-scale project like ERP, unexpected problems and obstacles arise constantly. Through it all I have held to two principles: don’t run away, and see your responsibility through to the end. Accumulating experience, thinking for yourself, and building strategy on your own criteria: that is the foundation of my leadership.

And above all, this realization matters most: only by knowing the world can you understand Japan; only by knowing other companies can you see your own; only by understanding people can you understand yourself. That insight is my greatest asset and what drives me forward as CIO.

Q: What did you learn working as a CIO in a new environment?
A: Changing jobs at 57 was by no means a quick decision. But once I entered the new environment, things I hadn’t been able to see came into sharp focus. What stayed with me most were two keywords central to understanding IT strategy: governance and synergy.

At my previous company, my mission was integrated management of information systems across a large group that included several listed subsidiaries. To align highly independent companies in one direction, simply imposing rules or issuing directives was not enough. I had to explain persuasively what benefit each policy brought to the front lines and why it was necessary.

Synergy emerges on a foundation of governance, and the structure you build must let each person understand, accept, and act on their own initiative. I came to see that designing that structure is the essence of sustainable IT strategy.

I learned the same lesson driving DX. Top-down approaches have the force to drive company-wide change, but bottom-up approaches let young people in the field take on challenges as their own and create opportunities to grow. Only when the two mesh does DX spread through the whole organization and produce real change.

Q: What do you consider the most important element of leadership?
A: Across 37 years in business, what I have felt most deeply is the challenge of how to move people. Among all relationships, with projects, subordinates, colleagues, stakeholders, and even bosses, the hardest and most valuable challenge was ultimately how to move management.

That requires, first, a firm axis: knowing what you want to do and what you want to convey. If that axis wavers, people won’t follow. You also need the ability to express it in clear language. Unless thought and will take shape in words, they are never conveyed.

For words to get through, trust must come first. With trust, people empathize, and empathy drives changes in behavior. How well a leader can sustain that cycle of setting the axis, putting it into words, building trust, winning empathy, and prompting action is, I feel, the greatest challenge a leader faces.

At its base is knowing yourself. Of course, knowing yourself is philosophical and never easy. But the circular realization that knowing the world reveals Japan, knowing Japan reveals your company, and knowing your company reveals yourself broadens a leader’s perspective.

Q: Can a CIO become a business executive?
A: Looking back on the CIO role, I see two types: the CIO who oversees information systems, and the CIO who carries part of the management of the business.

I have aimed for the latter. I believe that not stopping at IT expertise, but understanding the outside world, crossing industries, and connecting the front lines with management, expands what a CIO can be.

That is why I place such importance on the humanities, the accumulated wisdom of humankind.

New things do not spring suddenly from nothing; they emerge when different strands of wisdom combine. With the arrival of generative AI, we have a greater opportunity than ever to create new value.

I hope people in information systems departments adopt this perspective too. When you step beyond your specialty into other fields, get close to the front lines, and talk with management, those accumulated experiences open new possibilities for you as a CIO.

Japan’s IT industry also has a structural characteristic: roughly 70% of systems engineers belong to external partners. Insourcing is often urged, but its limits are clear. So I believe we should treat vendors and consultants not as mere suppliers but as comrades-in-arms, and build cooperative relationships with them.

Moving beyond one-way client directives to a partnership of mutual learning and pooled wisdom: I believe that kind of relationship is the key to creating real value in IT and DX going forward.

Q: What philosophy or values do you hold most important as CIO?
A: There are two keywords I personally value, qualities I believe humanity in the 21st century must cherish: awareness and compassion. These two have been central both to the role the company has expected of me since I became CIO of Mitsubishi Materials and to my personal hope of making Japanese manufacturing as a whole stronger.

Mitsubishi Materials has established a set of principles called the “Mitsubishi Materials Group IT Way” and pursues its IT strategies on that basis. Generative AI is now an unavoidable core topic, and what matters is not what AI produces but what value people create using generative AI. Generative AI is just one IT tool; the human is the protagonist. That is why I have consistently emphasized a people-centered mindset inside and outside the company and built our policies on that philosophy.

For Japanese manufacturing to grow stronger, solidarity and dialogue across corporate boundaries are essential. Over about five years, including before my move, I have exchanged views continuously with more than 6,000 people across some 70 companies through study groups and lectures. Along the way I have met many who share my thinking, and through those exchanges I feel that I, too, am growing. I believe that accumulation of knowledge is an important engine for creating new policies and initiatives.

The message I most want to convey is simple: all technology should aim to make people happy. If our descendants 100 or 200 years from now can look back and say that something changed from this era onward, it will be an innovation on the scale of the internet, and the generative AI now under way is one such inflection point.

Another challenge we must take on now is creating an era in which future generations can say that business became possible while protecting the global environment. As a company centered on resource circulation, Mitsubishi Materials has an important role in realizing that future. In business it is often said that idealism alone isn’t enough, but when it comes to protecting the environment, I believe altruism and consideration are indispensable, values all of humanity needs to take more seriously.

CIOs usually have strengths in a particular specialty and are expected to engage management as equals. But knowing only IT is not enough. You need an international perspective, a feel for industry trends, and an interest in non-IT domains such as the humanities. Challenges differ by company and industry, but thinking only in IT terms will never get you to the root of a problem. That is why the posture required of future CIOs is to apply their expertise while approaching problems from multiple angles.

*This article is adapted in part from a “Leadership Live Japan” session conducted by CIO Japan.

Microsoft signals M365 subscription price hikes; analysts urge exploring alternatives and renegotiating

8 December 2025 at 02:51

M365 customers will pay higher subscription fees starting July 1, 2026. Most plans are affected, including Business, E3 and E5, Frontline, and government subscriptions.

In a blog post on December 4, Microsoft said the increases reflect new features added across plans, including expanded Copilot Chat capabilities, Microsoft Defender for Office in E3, Security Copilot in E5, and Intune’s remote help and advanced analytics features for E3 and E5.

The new subscription prices are as follows:

  • M365 Business Basic rises $1 to $7 per user per month.
  • M365 Business Standard rises $1.50 to $14 per month.
  • Office 365 E3 rises $3 to $26 per month.
  • M365 E3 rises $3 to $39.
  • M365 E5 rises $3 to $60 per month.
  • M365 F1 rises $0.75 to $3.
  • M365 F3 rises $2 to $10 per month.

M365 Business Premium stays at $22 per user per month, and Office 365 E1 stays at $10. Government M365 plans will see increases of 5% to 10% depending on the plan. All prices include the Teams collaboration app; plans without Teams are priced lower.

Gartner analysts Jack Nagle and Steve White said of the move that the recent run of pricing changes "will only deepen customers' concern and fatigue."

Microsoft also raised M365 prices in 2022, by 9% to 25%. More recently, it changed the terms of Enterprise Agreements (EAs) for M365 and other flagship products, phasing out the seat-count-based discounts it had offered large customers.

To limit the financial impact of the increases, the Gartner analysts advised enterprises to "actively use negotiation strategies, evaluate alternatives, and optimize license allocation."

They also suggested, where possible, renewing contracts early, before the July 1 price change, since doing so defers the higher rates until the following renewal.

In a recent Gartner survey of 215 IT leaders, 17% of M365 customers said they are evaluating alternative solutions, while only 5% said they get sufficient value for the subscription cost.

In its earnings announcement earlier this year, Microsoft said commercial M365 users worldwide had passed 430 million.

Jack Gold, analyst at J. Gold Associates, noted that Microsoft has raised prices periodically before. "Given the substantial additional compute required to run AI features, it is understandable that Microsoft wants to recoup the cost of operating the massive cloud infrastructure that supports them," he said.

Gold does not expect the increases to meaningfully dent customer numbers. "Most customers are so deeply tied into the Microsoft ecosystem that they will end up accepting this adjustment," he said. "Price competition with Google is fierce these days, but we are not seeing a mass enterprise migration from Microsoft to Google's office suite. Among small and midsize businesses, though, Google's growth is pronounced."
dl-ciokorea@foundryco.com

Lotte Innovate develops Android-based POS, debuts it in convenience stores

8 December 2025 at 02:40

Lotte Innovate has developed point-of-sale (POS) terminals built on the Android operating system rather than the Windows-based systems that dominate Korean convenience stores. The company says the familiar, smartphone- and tablet-like environment makes the devices more accessible for store owners. The open Android platform was also designed to let stores easily run a variety of business apps beyond POS functions, increasing the flexibility and extensibility of store operations.

The cloud-based Android POS system offers strong data security and simplifies system updates and maintenance. Lotte Innovate says it was designed around store owners' real-world experience and delivers practical benefits for store operations. The tablet-style hardware is lighter and more portable than conventional POS terminals, and user-friendly features include a customizable "My Menu" for frequently used functions and a "dark mode" that reduces eye strain at night.

Starting this year, Lotte Innovate plans to work with 7-Eleven to deploy the Android-based cloud POS, beginning with new stores and eventually expanding to all 7-Eleven stores nationwide.

A Lotte Innovate official said, "The development of this Android cloud POS goes beyond introducing a new system; it will be a milestone that sets a new standard for smart operations in the convenience store industry. Through continued technology innovation, including AI-based personalized advertising, we will deliver a better convenience store experience for both owners and customers and support more efficient store operations."
dl-ciokorea@foundryco.com

Yesterday — 7 December 2025 — CIO

"From PMO to BTO": AI drives a sea change in project management

7 December 2025 at 21:42

For CIOs, the conversation around AI has advanced from innovation to orchestration. Project management, long a domain of human coordination and control, is rapidly becoming the stage where intelligent systems are tested for how they reshape project delivery and outcomes and accelerate transformation.

Across industries, CIOs face the challenge of quantifying AI's promise in operational terms: how to express it in metrics such as shorter project durations, lower overhead, and greater portfolio transparency. A 2025 Georgia Tech study of 217 project management professionals and C-level technology leaders found that 73% of surveyed organizations had adopted AI in some form in their project management.

Yet amid the enthusiasm, questions remain about how AI will redefine the project manager's (PM's) role and how it will shape the framework of future business transformation programs.

The role changes, but the PM stays essential

Project professionals across industries are already feeling the shift. In the survey, early AI adopters reported project efficiency gains of up to 30%, but success depended less on the technology itself than on how leadership governed its use. An overwhelming majority of respondents rated AI as highly effective at improving efficiency, predictive planning, and decision-making. So what does this change mean for the practitioners who actually run projects?

About a third of respondents expected AI to free PMs from day-to-day scheduling and task coordination so they can focus on strategic oversight that drives long-term outcomes. Another third saw PMs evolving into facilitators who interpret and integrate AI insights across teams, strengthening collaboration. The rest predicted PMs would become supervisors of AI systems, managing and overseeing the ethics, accuracy, and business alignment of the algorithms.

These views converge on one conclusion: AI will not replace PMs, but it will redefine their value. The PM of the future manages intelligence, not just task lists, translating AI-driven insights into business outcomes.

Why the PMO needs to move fast

For the PMO (project management office), the question is no longer whether to adopt AI but how. Most large enterprises are already experimenting across areas such as schedule forecasting, automated risk reporting, and generative AI for documentation, so adoption is accelerating, but actual integration maturity varies widely from company to company.

Many PMOs still treat AI as a tool-level add-on rather than a strategic capability. But AI's core value lies in judgment augmentation and automation. The organizations gaining real competitive advantage embed AI deeply into project methodologies, governance frameworks, and performance metrics, guided by five approaches:

1. Start with pilot projects

Start small and scale fast. The most successful AI integrations begin with small pilots aimed at clear use cases: automating project status reporting, predicting schedule slips, identifying resource bottlenecks. These pilots produce visible wins, build organizational momentum, and surface technical and process issues early in the integration.

2. Measure value, not activity

A common mistake is deploying AI without clear success metrics. PMOs should set concrete KPIs: reduced manual reporting time, improved risk-prediction accuracy, shorter project cycles, higher stakeholder satisfaction. Sharing those results across the organization matters just as much; actively publicizing success stories builds momentum and buy-in and helps change the minds of AI-skeptical teams.

3. Upgrade PM skills

AI's value ultimately depends on the people using it. Nearly half of the surveyed professionals cited a shortage of skilled talent as the main barrier to AI integration. Project managers do not need to become data scientists, but they should understand AI fundamentals: how algorithms work, where bias creeps in, and what data quality means. The most influential PMs will combine data literacy with human-centered leadership skills such as critical thinking, emotional intelligence, and communication.

4. Strengthen governance and ethics

As AI use grows, ethical questions arise when algorithms influence project decisions. PMOs should lead in establishing AI governance frameworks that emphasize transparency, fairness, and final human oversight. Embedding these principles into the PMO's charter and processes not only reduces risk but also lays a foundation of trust among project stakeholders.

5. Evolve from PMO to BTO

The traditional PMO focuses on project execution in terms of scope, schedule, and cost. But companies that embrace AI are increasingly evolving into a BTO (business transformation office) that ties projects directly to business value creation. Where the PMO focuses on doing projects right, the BTO focuses on choosing the right projects and delivering outcomes. A key element of this framework is the shift from waterfall to an agile mindset. Project management has moved from rigid, plan-driven approaches to iterative, customer-centric, collaborative ones, with hybrid methodologies increasingly the common choice. That agile approach is essential to keeping pace with the rapid change driven by AI and digital transformation.

A new career path for project managers

By around 2030, it may well be reality that AI handles much of the repetitive project work, such as status updates, scheduling, and risk alerts, while human leaders focus on vision, collaboration, and ethics. The shift resembles past project management transformations such as the spread of agile and digital transformation, but it is unfolding far faster.

Yet the more AI companies adopt, the greater the risk of losing the human element. Project management has always been about people: aligning stakeholders, resolving conflict, motivating teams. AI can predict a schedule slip, but it cannot rally a team to recover from one. The PM's human abilities to read nuance, build trust, and foster collaboration remain irreplaceable.

Time to act

AI will be at the forefront of enterprise project delivery and outcomes. The coming decade will test how well PMOs, executives, and policymakers manage this evolution. To succeed, organizations must invest in people as much as platforms, adopt ethical and transparent governance, foster a culture of continuous learning and experimentation, and judge success by real outcomes rather than hype.

For CIOs, the mandate is already clear: lead with vision, govern to high ethical standards, and empower teams with intelligent tools. AI is not a threat to the project management profession but a catalyst for its reinvention. Executed responsibly, AI-driven project management does more than raise operational efficiency; it builds organizations that respond nimbly to change while staying people-centered. PMs who embrace this shift thoughtfully can rise beyond mere administrators to become architects of change.
dl-ciokorea@foundryco.com

"Non-human identities, a new pillar of the security model": Fortinet's 2026 cyberthreat predictions report

7 December 2025 at 19:48

Fortinet has released its Fortinet Cyberthreat Predictions Report for 2026 through FortiGuard Labs, its threat intelligence organization. The report finds that cybercrime is rapidly industrializing on the back of AI, automation, and specialized supply chains, and predicts that in 2026 the decisive factor in both attack and defense will be not innovation itself but how quickly threat intelligence can be put into action; in other words, speed of execution.

Breach timelines, the report says, are shrinking dramatically thanks to AI and automation tools. Rather than building new tools, attackers are maximizing efficiency by automating and refining techniques already proven to work. AI systems now automate the entire attack process, from reconnaissance and accelerated intrusion to data analysis and the generation of negotiation messages, and autonomous criminal agents that execute full attack playbooks with minimal human input are appearing on the dark web.

As a result, attack throughput is growing exponentially. Criminals who once ran a handful of ransomware operations can now execute dozens of parallel attacks, and the time from initial breach to actual damage is shrinking from days to minutes. The report argues that this sheer speed of attack will be the single biggest risk enterprises face in 2026.

The report also highlights specialized AI agents that automate core stages of the kill chain, including credential theft, lateral movement, and data monetization. These systems analyze stolen data, prioritize victims, and generate personalized extortion messages, creating an environment where data is monetized as quickly as any digital asset.

Underground criminal markets are likewise becoming more structured. Tailored access packages matched to industry, region, and system environment are in circulation; data enrichment and automation are making transactions more sophisticated; and features borrowed from legitimate industries, such as customer support, reputation scores, and automated escrow, are accelerating the industrialization of cybercrime.

Against this backdrop, Fortinet stresses that enterprises must build "machine-speed defense." Machine-speed defense is an operating model that continuously automates the collection, validation, and containment of threat intelligence, compressing detection and response from hours to minutes. It requires a data-driven, continuously operating regime that includes CTEM (continuous threat exposure management), threat mapping based on the MITRE ATT&CK framework, and real-time recovery prioritization.

Meanwhile, the explosion of AI systems, automation agents, and machine-to-machine communication inside organizations is making "non-human identity" management a new pillar of security operations. The point: organizations must authenticate and control not just people but automated processes and machine-to-machine interactions to prevent large-scale privilege escalation and data exposure.

Fortinet also calls international cooperation essential. Interpol's Operation Serengeti 2.0 and the Fortinet–Crime Stoppers international cybercrime rewards program are cited as examples that have actually dismantled criminal infrastructure and strengthened threat reporting. Expanding education and prevention programs to protect young and vulnerable populations also matters from a long-term perspective.

Cybercrime is projected to rival legitimate industries in scale by 2027. Attackers are expected to use swarm-based automation, in which multiple AI agents cooperate and adapt to defender behavior as attacks unfold, while supply chain attacks targeting AI and embedded systems grow more sophisticated. To respond, defenders must strengthen predictive intelligence, automation, and exposure management so they can read attacker movements faster and block them earlier.

"Speed and scale will define the next decade," the report's authors conclude, arguing that only enterprises that fuse intelligence, automation, and the capabilities of their security staff into a single responsive system will hold the initiative in the threat landscape ahead.

On December 16, Fortinet will host a webinar sharing insights into the cybercrime ecosystem and coming trends, featuring FortiGuard Labs director Jonas Walker as a speaker.
dl-ciokorea@foundryco.com

Designing human–LLM agent collaboration: how far to delegate, where to intervene

7 December 2025 at 07:20

Agent design that starts from the human role

The basic premise: an LLM agent should be designed as a collaborative partner, not a replacement for humans. Human strengths lie in value judgments, in bearing responsibility, and in decisions grounded in organizational and personal context; agent strengths lie in finding and organizing information, processing repetitive work at high speed, and weighing many options. Rather than leaning entirely on either side, the point is to deliberately combine these strengths.

That starts with decomposing the work in question and separating the judgment-heavy steps from the clerical ones. In responding to a customer complaint, for example, organizing the facts, searching past cases, and drafting a reply are areas easily delegated to an agent. Judgments such as how far to extend free-of-charge remediation, or how the response affects the ongoing relationship, should remain with humans.

Agent design then takes that decomposition and defines three zones explicitly: the range the agent may complete autonomously, the range that always requires human approval, and the range where the agent only organizes information for a human decision. Adjusting the agent's permissions and interfaces for each zone puts the preconditions for collaboration in place.
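The three zones can be sketched as a small policy table. A minimal Python sketch follows; the task names and zone assignments are illustrative assumptions based on the complaint-handling example above, not a prescribed mapping.

```python
from enum import Enum

class Zone(Enum):
    AUTONOMOUS = "agent may complete the task on its own"
    APPROVAL = "agent prepares the result, a human must approve"
    ADVISORY = "agent only gathers and organizes information"

# Hypothetical mapping for the customer-complaint example:
TASK_ZONES = {
    "summarize_facts": Zone.AUTONOMOUS,
    "search_past_cases": Zone.AUTONOMOUS,
    "draft_reply": Zone.APPROVAL,          # human edits/approves the draft
    "decide_compensation": Zone.ADVISORY,  # judgment stays with the human
}

def handle(task: str) -> str:
    # Unknown tasks default to the safest zone: advisory only.
    zone = TASK_ZONES.get(task, Zone.ADVISORY)
    if zone is Zone.AUTONOMOUS:
        return f"{task}: executed by agent"
    if zone is Zone.APPROVAL:
        return f"{task}: queued for human approval"
    return f"{task}: briefing prepared for human decision"
```

Defaulting unmapped tasks to the advisory zone is the design choice that matters most here: the agent's authority must be an explicit grant, never an accident of omission.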

Designing intervention points and "handles"

For human–agent collaboration to work well, users need the sense that they can intervene at any time. If handing a job to an agent means losing all visibility into what is happening until a wrong result suddenly comes back, users cannot delegate with confidence.

The key is the design of intervention points and handles. An intervention point is a step in the workflow where a human always reviews or approves; a handle is a control through which a human adjusts the agent's behavior. Concretely, this includes a screen that lists the agent's proposed plan and lets the user choose "adopt," "revise," or "reject"; an editor for reworking drafts the agent has written; and a stop button that halts processing midway.

It also matters that the agent presents how it reasoned and acted in a form users can follow. Fully visualizing the agent's internal inference is difficult, but even a brief note such as "I first aggregated the past three months of data, then compared two options based on the results" changes the user's sense of reassurance considerably. This externalization of the thought process resembles the way a human colleague reports on work, and it fosters treating the agent as a member of the team.

A user experience that grows trust, and the scope of "hands-off" operation

The goal of collaborative design is a state in which users gradually come to trust the agent and can permit "hands-off" operation within an appropriate scope. The important thing is to build trust in stages rather than grant high autonomy from the start.

Early on, the agent should be limited to proposals and drafts, with humans always making the final call. In this phase, users observe how useful the agent's proposals are and how often they need correction, and learn for themselves how to work with it. Along the way, a sense gradually develops that for certain kinds of work, the agent can safely be trusted.

In the next phase, automatic execution expands outward from low-risk areas. Updating internal weekly reports or sending routine reminder emails, for example, are easy to automate, while external communication and contract-related processing may long remain areas that require human review. Sharing an organization-wide policy on what level of risk may be delegated to agents, and setting permissions accordingly, is the precondition for a healthy relationship of trust.

Ultimately, the user experience itself has a major influence on trust in the agent. When an error occurs, how quickly can the cause be identified and fixed? When a user feels a result is wrong, can they switch to a human operator in one click? The better such preparations for failure are, the more confidently users can hand work to the agent. Designing human–agent collaboration is not just deciding a division of roles; it is designing the entire arc of user experience through which trust gradually matures.

Before yesterday — CIO

Resops: Turning AI disruption into business momentum

5 December 2025 at 12:23

The world has changed — artificial intelligence (AI) is reshaping business faster than most can adapt


The rise of large language models and agentic AI has created unprecedented scale, speed, and complexity. Enterprises are moving from static infrastructures to hyperplexed, distributed, and autonomous systems. Organizations are pouring more than $400 billion into AI infrastructure, a wave expected to generate more than $2 trillion in new value. But without resilience at the core, that value remains at risk.

As innovation accelerates, new risks emerge just as quickly. Security is lagging behind transformation. Data is exploding, with nearly 40% year-over-year growth across hybrid and multicloud environments. Regulations are tightening, and ransomware and AI-powered attacks are multiplying. The result: Resilience now defines competitive advantage.

Resilience drives velocity

Resilience isn’t just recovery. It’s also the foundation of sustained innovation. Traditional recovery models were built for yesterday’s outages, not today’s AI-driven disruptions, which unfold in milliseconds. In this world, recovery is table stakes. True resilience means that every system runs on clean, verifiable data, and it restores trust when it’s tested.

The most resilient organizations are also the fastest movers. They adopt emerging technologies with confidence, recover with speed and integrity, and innovate at scale. Resilience has evolved from a safety net to the engine of enterprise speed and scalability.

Introducing ResOps, the model for next-generation resilience

ResOps, short for resilience operations, is an operating model that unifies data protection, cyber recovery, and governance into a single intelligent system. It creates an ongoing loop that monitors, validates, and protects data across hybrid and multicloud environments, enabling organizations to detect risks early and recover with confidence.

By integrating resilience into every layer of operations, ResOps transforms it from an isolated function into a proactive discipline — one that keeps businesses secure, compliant, and ready to adapt in the AI era.

To learn more about ResOps, read “ResOps: The future of resilient business in the era of AI.” 


Vertical AI development agents are the future of enterprise integrations

5 December 2025 at 10:58

Enterprise Application Integration (EAI) and modern iPaaS platforms have become two of the most strategically important – and resource-constrained – functions inside today’s enterprises. As organizations scale SaaS adoption, modernize core systems, and automate cross-functional workflows, integration teams face mounting pressure to deliver faster while upholding strict architectural, data quality, and governance standards.

AI has entered this environment with the promise of acceleration. But CIOs are discovering a critical truth:

Not all AI is built for the complexity of enterprise integrations – whether in traditional EAI stacks or modern iPaaS environments.

Generic coding assistants such as Cursor or Claude Code can boost individual productivity, but they struggle with the pattern-heavy, compliance-driven reality of integration engineering. What looks impressive in a demo often breaks down under real-world EAI/iPaaS conditions.

This widening gap has led to the rise of a new category: Vertical AI Development Agents – domain-trained agents purpose-built for integration and middleware development. Companies like CurieTech AI are demonstrating that specialized agents deliver not just speed, but materially higher accuracy, higher-quality outputs, and far better governance than general-purpose tools.

For CIOs running mission-critical integration programs, that difference directly affects reliability, delivery velocity, and ROI.

Why EAI and iPaaS integrations are not a “generic coding” problem

Integrations — whether built on legacy middleware or modern iPaaS platforms — operate within a rigid architectural framework:

  • multi-step orchestration, sequencing, and idempotency
  • canonical data transformations and enrichment
  • platform-specific connectors and APIs
  • standardized error-handling frameworks
  • auditability and enterprise logging conventions
  • governance and compliance embedded at every step

Generic coding models are not trained on this domain structure. They often produce code that looks correct, yet subtly breaks sequencing rules, omits required error handling, mishandles transformations, or violates enterprise logging and naming standards.

Vertical agents, by contrast, are trained specifically to understand flow logic, mappings, middleware orchestration, and integration patterns – across both EAI and iPaaS architectures. They don’t just generate code – they reason in the same structures architects and ICC teams use to design integrations.

This domain grounding is the critical distinction.

The hidden drag: Context latency, expensive context managers, and prompt fatigue

Teams experimenting with generic AI encounter three consistent frictions:

Context Latency

Generic models cannot retain complex platform context across prompts. Developers must repeatedly restate platform rules, logging standards, retry logic, authentication patterns, and canonical schemas.

Developers become “expensive context managers”

A seemingly simple instruction — “Transform XML to JSON and publish to Kafka” — quickly devolves into a series of corrective prompts:

  • “Use the enterprise logging format.”
  • “Add retries with exponential backoff.”
  • “Fix the transformation rules.”
  • “Apply the standardized error-handling pattern.”

Developers end up managing the model instead of building the solution.

Prompt fatigue

The cycle of re-prompting, patching, and enforcing architectural rules consumes time and erodes confidence in outputs.

This is why generic tools rarely achieve the promised acceleration in integration environments.

Benchmarks show vertical agents are about twice as accurate

CurieTech AI recently published comparative benchmarks evaluating its vertical integration agents against leading generic tools, including Claude Code. The tests covered real-world tasks:

  • generating complete, multi-step integration flows
  • building cross-system data transformations
  • producing platform-aligned retries and error chains
  • implementing enterprise-standard logging
  • converting business requirements into executable integration logic

The results were clear: generic tools performed at roughly half the accuracy of vertical agents.

Generic outputs often looked plausible but contained structural errors or governance violations that would cause failures in QA or production. Vertical agents produced platform-aligned, fully structured workflows on the first pass.

For integration engineering – where errors cascade – this accuracy gap directly impacts delivery predictability and long-term quality.

The vertical agent advantage: Single-shot solutioning

The defining capability of vertical agents is single-shot task execution.

Generic tools force stepwise prompting and correction. But vertical agents—because they understand patterns, sequencing, and governance—can take a requirement like:

“Create an idempotent order-sync flow from NetSuite to SAP S/4HANA with canonical transformations, retries, and enterprise logging.”


and return:

  • the flow
  • transformations
  • error handling
  • retries
  • logging
  • and test scaffolding

in one coherent output.

This shift – from instruction-oriented prompting to goal-oriented prompting—removes context latency and prompt fatigue while drastically reducing the need for developer oversight.

Built-in governance: The most underrated benefit

Integrations live and die by adherence to standards. Vertical agents embed those standards directly into generation:

  • naming and folder conventions
  • canonical data models
  • PII masking and sensitive-data controls
  • logging fields and formats
  • retry and exception handling patterns
  • platform-specific best practices

Generic models cannot consistently maintain these rules across prompts or projects.

Vertical agents enforce them automatically, which leads to higher-quality integrations with far fewer QA defects and production issues.

The real ROI: Quality, consistency, predictability

Organizations adopting vertical agents report three consistent benefits:

1. Higher-Quality Integrations

Outputs follow correct patterns and platform rules—reducing defects and architectural drift.

2. Greater Consistency Across Teams

Standardized logic and structures eliminate developer-to-developer variability.

3. More Predictable Delivery Timelines

Less rework means smoother pipelines and faster delivery.

A recent enterprise using CurieTech AI summarized the impact succinctly:

“For MuleSoft users, generic AI tools won’t cut it. But with domain-specific agents, the ROI is clear. Just start.”

For CIOs, these outcomes translate to increased throughput and higher trust in integration delivery.

Preparing for the agentic future

The industry is already moving beyond single responses toward agentic orchestration, where AI systems coordinate requirements gathering, design, mapping, development, testing, documentation, and deployment.

Vertical agents—because they understand multi-step integration workflows—are uniquely suited to lead this transition.

Generic coding agents lack the domain grounding to maintain coherence across these interconnected phases.

The bottom line

Generic coding assistants provide breadth, but vertical AI development agents deliver the depth, structure, and governance enterprise integrations require.

Vertical agents elevate both EAI and iPaaS programs by offering:

  • significantly higher accuracy
  • higher-quality, production-ready outputs
  • built-in governance and compliance
  • consistent logic and transformations
  • predictable delivery cycles

As integration workloads expand and become more central to digital transformation, organizations that adopt vertical AI agents early will deliver faster, with higher accuracy, and with far greater confidence.

In enterprise integrations, specialization isn’t optional—it is the foundation of the next decade of reliability and scale.

Learn more about CurieTech AI here.

Agile isn’t just for software. It’s a powerful way to lead

5 December 2025 at 09:12

In times of disruption, Agile leadership can help CIOs make better, faster decisions — and guide their teams to execute with speed and discipline.

When the first case of COVID hit my home city, it was only two weeks after I’d become president of The Persimmon Group. For more than a decade, I’d coached leaders, teams and PMOs to execute their strategy with speed and discipline.

But now — in a top job for the first time — I was reeling.

Every plan we had in motion — strategic goals, project schedules, hiring decisions — was suddenly irrelevant. Clients froze budgets. Team members scrambled to set up remote work for the first time, many while balancing small children and shared spaces.

Within days, we were facing a dozen high-stakes questions about our business, all with incomplete information. Each answer carried massive operational and cultural implications.

We couldn’t just make the right call. We had to make it fast. And often, we were choosing between a bunch of bad options.

From crisis to cadence

At first, we tried to lead the way we always had: gather the facts, debate the trade-offs and pick the best path forward. But in a landscape that changed daily, that rhythm broke down fast.

The information we needed didn’t exist yet. The more we waited for certainty — or gamed out endless hypotheticals — the slower and more reactive we became.

And then something clicked. What if the same principles that helped software teams move quickly and learn in real time could help lead us through uncertainty?

So we started experimenting.

We shortened our time horizons. Made smaller bets. Created fast feedback loops. We became almost uncomfortably transparent, involving the team directly in critical decisions that affected them and their work.

In the months that followed, those experiments became the backbone of how we led through uncertainty — and how we continue to lead today.

An operating system for change

What emerged wasn’t a formal framework. It was a set of small, deliberate habits that brought the same rhythm and focus to leadership that Agile brings to delivery.

Here’s what that looked like in practice:

Develop a ‘fast frame’ to focus decisions

In the first few months of the pandemic, our leadership meetings were a tangle of what-ifs. What if we lost 20% of planned revenue this year? What if we lost 40%? Would we do layoffs? Furloughs? Salary cuts? And when would we do them — preemptively or reactively?

We were so busy living in multiple possible futures that it was difficult to move forward with purpose. To break out of overthinking mode, we built a lightweight framework we now call our fast frame. It centered on five questions:

  1. What do we know for sure?
  2. What can we find out quickly?
  3. What is unknowable right now?
  4. What’s the risk of deciding today?
  5. What’s the risk of not deciding today?

The fast frame forced us to separate facts from conjecture. It also helped us to get our timing right. When did we need to move fast, even with imperfect information? When could we afford to slow down and get more data points?

The fast frame helped us slash decision latency by 20% to 30%.

It kept us moving when the urge was to stall and it gave us language to talk about uncertainty without letting it rule the room.

Build plans around small, fast experiments

After using our fast frame for a while, we realized something: Our decisions were too big.

In an environment changing by the day, Big Permanent Decisions were impractical — and a massive time sink. Every hour we spent debating a Big Permanent Decision was an hour we weren’t learning something important.

So we replaced them with For-Now Decisions — temporary postures designed to move us forward, fast, while we learned what was real.

Each For-Now Decision had four parts:

  1. The decision itself — the action we’d take based on what we knew at that moment.
  2. A trigger for when to revisit it — either time-based (two weeks from now) or event-based (if a client delays a project).
  3. A few learning targets — what we hoped to discover before the next checkpoint.
  4. An agility signal — how we communicated the decision to the team. We’d say, “This is our posture for now, but we may change course if X. We’ll need your help watching for Y as we learn more.”

By framing decisions this way, we removed the pressure to be right. The goal wasn’t to predict the future but to learn from it faster. By abandoning bad ideas early, we saved 300 to 400 hours a year.

Increase cadence and transparency of communication

In those early weeks, we learned that the only thing more dangerous than a bad decision was a silent one. When information moves slower than events, people fill the gaps with assumptions.

So we made communication faster — and flatter. Every morning, our 20-person team met virtually for a 20-minute standup. The format was simple but consistent:

  • Executive push. We shared what the leadership team was working on, what decisions had been made and what input we needed next.
  • Team pull. Anyone could ask questions, raise issues or surface what they were hearing from clients.
  • Needs and lessons. We ended with what people needed to stay productive and what we were learning that others could benefit from.

The goal wasn’t to broadcast information from the top — or make all our decisions democratically. It was to create a shared operating picture. The standup became a heartbeat for the company, keeping everyone synchronized as conditions changed.

Transparency replaced certainty. Even when we didn’t have all the answers, people knew how decisions were being made and what we were watching next. That openness built confidence faster than pretending we had it all figured out.

That transparency paid off.

While many small consulting firms folded in the first 18 months of the pandemic, Agile leadership helped us double revenue in 24 months.

We stayed fully staffed — no layoffs, no pay cuts beyond the executive team. And the small bets we made during the pandemic helped rapidly expand our client base across new industries and international geographies.

Develop precise language to keep the team aligned

As we increased the speed of communication, we discovered something else: agility requires precision. When everything is moving fast, even small misunderstandings can send people sprinting in different directions.

We started tightening our language. Instead of broad discussions about what needed to get done, we’d ask, “What part of this can we get done by Friday?” That forced us to think in smaller delivery windows, sustain momentum and get specific about what “done” looked like.

We also learned to clarify between two operating modes: planning versus doing. Before leaving a meeting where a direction was discussed, we’d confirm our status:

  • Phase 1 meant we were still exploring, shaping and validating and would need at least one more meeting before implementing anything.
  • Phase 2 meant we were ready to execute.

That small distinction saved us hours of confusion, especially in cross-functional work.

Precise language gave us speed. It eliminated assumptions and kept everyone on the same page about where we were in the process. The more we reduced ambiguity, the faster — and calmer — the team moved.

Protect momentum by insisting on rest

Agility isn’t about moving faster forever — it’s about knowing when to slow down. During the first months of the pandemic, that lesson was easy to forget. Everything felt urgent and everyone felt responsible.

In software, a core idea behind Agile sprints is maintaining a sustainable pace of work. A predictable, consistent level of effort that teams can plan around is far more effective than the heroics often needed in waterfall projects to hit a deadline.

Agile was designed to be human-centered, protecting the well-being and happiness of the team so that performance can remain optimal. We tried to lead the same way.

After the first few frenetic months, I capped my own workday at nine hours. That boundary forced me to get honest about what could actually be done in the time I had — and prioritize ruthlessly. It also set a tone for the team. We adjusted scopes, redistributed work and held one another accountable for disconnecting at day’s end.

The expectation wasn’t endless effort — it was sustainable effort. That discipline kept burnout low and creativity high, even during our most demanding seasons. The consistency of our rest became as important as the intensity of our work. It gave us a rhythm we could trust — one that protected our momentum long after the crisis passed.

Readiness is the new stability

Now that the pandemic has passed, disruption has simply changed shape — AI, market volatility, new business models and the constant redefinition of “normal.” What hasn’t changed is the need for leaders who can act with speed and discipline at the same time.

For CIOs, that tension is sharper than ever. Technology leaders are being asked to deliver transformation at pace — without burning out their people or breaking what already works. The pressures that once felt exceptional have become everyday leadership conditions.

But you don’t have to be a Scrum shop or launch an enterprise Agile transformation to lead with agility. Agility is a mindset, not a method. To put the mindset into practice, focus on:

  • Shorter planning horizons
  • Faster, smaller decisions
  • Radical transparency
  • Language that brings alignment and calm
  • Boundaries that protect the energy of the team

These are the foundations of sustainable speed.

We built those practices in crisis, but they’ve become our default operating system in calmer times. They remind me that agility isn’t a reaction to change — it’s a readiness for it. And in a world where change never stops, that readiness may be a leader’s most reliable source of stability.

This article is published as part of the Foundry Expert Contributor Network.

Product management in the LLM agent era: design the spec from "behavior"

5 December 2025 at 07:19

The paradigm shift from feature-oriented to "behavior-oriented"

In traditional software development, a spec was often a list of features and screens: flowcharts and screen-transition diagrams describing which button press calls which API and how which data is updated. This approach worked very well for deterministic systems whose inputs and outputs could be rigorously defined.

LLM agents, however, are inherently probabilistic systems. Ask the same question and the generated text differs slightly every time; behavior also varies with changing circumstances, memory contents, and responses from external tools. Trying to write a spec that enumerates every input pattern and output for such a system quickly collapses. The common result: "we dropped in a vaguely smart assistant, but we don't know what success looks like."

What is needed instead is behavior-based rather than feature-based specification. The key is to articulate what persona and role the agent holds, and how it should behave from the user's point of view. Beyond conversational behavior (how much jargon to use, how far proposals may go, how to ask clarifying questions rather than fall silent when unsure), the spec also includes rules about authority and responsibility: which external tools may be used in which situations, and beyond what point human approval is always required.

Product managers need to define these as behavioral guidelines written in natural language, and share them with the implementation team as prompts, system messages, and policy files. Picture a traditional requirements document gaining new chapters: persona design, dialogue policy, and tool-usage rules.
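One way such guidelines stay reviewable is to keep them as structured data that renders into the system prompt. A minimal Python sketch, assuming hypothetical field names (the class and its rules are illustrative, not a standard format):

```python
from dataclasses import dataclass, field

@dataclass
class BehaviorSpec:
    """A behavior-based spec fragment that renders into a system prompt."""
    role: str
    tone_rules: list = field(default_factory=list)        # conversational behavior
    escalation_rules: list = field(default_factory=list)  # authority/responsibility

    def to_system_prompt(self) -> str:
        lines = [f"You are {self.role}."]
        lines += [f"- {r}" for r in self.tone_rules]
        lines += [f"- ESCALATE: {r}" for r in self.escalation_rules]
        return "\n".join(lines)

spec = BehaviorSpec(
    role="a customer-support agent",
    tone_rules=["Acknowledge the customer's feelings before answering."],
    escalation_rules=["Any wording implying legal liability goes to a human."],
)
print(spec.to_system_prompt())
```

Because the spec is plain data, it can live in version control and be diffed and reviewed like any other requirements change — which matters once prompt revisions are treated as spec changes.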

Prompts and policy design as the spec

For an LLM agent, a prompt is not a one-off magic incantation; it plays a role close to the spec document itself. System prompts, role definitions, and tool descriptions in particular should reflect behavioral policies the product manager has thought through.

When designing a customer-support agent, for example, nuances can be embedded in the prompt such as "acknowledge the customer's feelings first," "do not lightly concede that the company may be at fault, but never shift blame either," and "always hold back any wording that implies a legal judgment and escalate to a human." What works here is not abstract platitudes but concrete instructions that include realistic conversation examples. Showing good and bad response examples side by side, and stating which to aim for, changes the model's behavior substantially.

Tool-usage policy also needs to be written down as part of the spec: which tools are read-only, which API calls must first ask the user for confirmation, and how rate limits are designed so the agent does not hammer external services in succession. The product manager decides these while balancing business and security interests, and the results are reflected both in the agent's runtime configuration and in the prompt.
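The runtime half of such a policy can be enforced mechanically. A minimal sketch, assuming hypothetical tool names, modes, and limits (none of these are from the article):

```python
import time

# Hypothetical tool policy: modes are read_only (always allowed),
# confirm (needs explicit user approval), or forbidden.
TOOL_POLICY = {
    "crm_lookup":   {"mode": "read_only", "max_calls_per_min": 30},
    "send_email":   {"mode": "confirm",   "max_calls_per_min": 5},
    "issue_refund": {"mode": "forbidden"},
}

_call_log: dict = {}  # tool name -> timestamps of recent calls

def authorize(tool: str, user_confirmed: bool = False) -> bool:
    # Unknown tools are treated as forbidden.
    policy = TOOL_POLICY.get(tool, {"mode": "forbidden"})
    if policy["mode"] == "forbidden":
        return False
    if policy["mode"] == "confirm" and not user_confirmed:
        return False
    # Sliding-window rate limit over the last 60 seconds.
    now = time.time()
    calls = [t for t in _call_log.get(tool, []) if now - t < 60]
    if len(calls) >= policy.get("max_calls_per_min", 0):
        return False
    calls.append(now)
    _call_log[tool] = calls
    return True
```

The same table can be summarized into the system prompt, so the prompt and the runtime enforce one policy rather than drifting apart.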

In this way, prompts and policies are “specifications that are not code,” yet they strongly govern system behavior. A prompt revision is therefore a specification change in itself, and it needs change-management and review processes. Recording who updated which prompt for what purpose, and how each metric moved as a result, becomes important for both quality and governance.

Rethinking evaluation, rollout, and the organization

Even with a behavior-based spec in hand, the question remains of how to judge whether it is “good.” For LLM agents you have to combine multiple metrics: not just the correctness of individual responses, but end-to-end task success rates, the time users save, and the frequency and severity of risky misbehavior.

In practice, it’s realistic to start with a beta run on a limited use case with pilot users. Have users leave logs as raw as possible, then analyze — qualitatively and quantitatively — where the agent helped and where it was irritating. Based on those results, the product manager iterates on prompts, tool configuration, and the interface. Common evaluation metrics include the reduction in steps to task completion, the escalation rate to manual handling, and users’ subjective satisfaction.

Rollout strategy also differs from a conventional feature release. Because risk varies greatly with the agent’s scope of authority, a staged approach is preferable: start in a conservative “suggest-only” or “draft-only” mode, then widen the scope of “auto-execute” only once a track record has been established. User education and a written usage policy should proceed in parallel.
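The staged modes described above can be sketched as an explicit autonomy ladder, with each stage permitting strictly more actions than the last. Stage and action names here are hypothetical illustrations, not a standard taxonomy:

```python
from enum import Enum

class AutonomyLevel(Enum):
    SUGGEST_ONLY = 1   # agent proposes; a human does everything
    DRAFT_ONLY = 2     # agent drafts; a human reviews and sends
    AUTO_EXECUTE = 3   # agent acts; humans audit after the fact

def allowed_action(level: AutonomyLevel, action: str) -> bool:
    """Gate which actions each rollout stage permits."""
    permissions = {
        AutonomyLevel.SUGGEST_ONLY: {"propose"},
        AutonomyLevel.DRAFT_ONLY: {"propose", "draft"},
        AutonomyLevel.AUTO_EXECUTE: {"propose", "draft", "execute"},
    }
    return action in permissions[level]
```

Promotion to a wider level would then be tied to the evaluation metrics already being collected — for example, only move from draft-only to auto-execute once the escalation rate stays below an agreed threshold across the pilot.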

Finally, a word about organization. An LLM-agent product demands diverse expertise: members versed in model tuning and prompt design, business-side members with domain knowledge, and members who can assess risk from security and legal perspectives. The product manager becomes the bridge among them — a “translator” who integrates technology, business, and governance. Recognizing this new role and continuing to learn may be the single most important quality of a PM in the LLM-agent era.

Agents-as-a-service are poised to rewire the software industry and corporate structures

5 December 2025 at 05:00

This was the year of AI agents. Chatbots that simply answered questions are evolving into autonomous agents that can carry out tasks on a user’s behalf, and enterprises continue to invest in agentic platforms as their transformations evolve. Software vendors are investing just as fast.

According to a National Research Group survey of more than 3,000 senior leaders, more than half of executives say their organization is already using AI agents. Among companies that spend at least half their AI budget on AI agents, 88% say they’re already seeing ROI on at least one use case, with top areas being customer service and experience, marketing, cybersecurity, and software development.

On the software provider side, Gartner predicts 40% of enterprise software applications in 2026 will include agentic AI, up from less than 5% today. And agentic AI could drive approximately 30% of enterprise application software revenue by 2035, surpassing $450 billion, up from 2% in 2025. In fact, business users might not have to interact directly with the business applications at all since AI agent ecosystems will carry out user instructions across multiple applications and business functions. At that point, a third of user experiences will shift from native applications to agentic front ends, Gartner predicts.

It’s already starting. Most enterprise applications will have embedded assistants, a precursor to agentic AI, by the end of this year, adds Gartner.

IDC has similar predictions. By 2028, 45% of IT product and service interactions will use agents as the primary interface, the firm says. That’ll change not just how companies work, but how CIOs work as well.

Agents as employees

At financial services provider OneDigital, chief product officer Vinay Gidwaney is already working with AI agents, almost as if they were people.

“We decided to call them AI coworkers, and we set up an AI staffing team co-owned between my technology team and our chief people officer and her HR team,” he says. “That team is responsible for hiring AI coworkers and bringing them into the organization.” You heard that right: “hiring.”

The first step is to sit down with the business leader and write a job description, which is fed to the AI agent, and then it becomes known as an intern.

“We have a lot of interns we’re testing at the company,” says Gidwaney. “If they pass, they get promoted to apprentices and we give them our best practices, guardrails, a personality, and human supervisors responsible for training them, auditing what they do, and writing improvement plans.”

The next promotion is to a full-time coworker, and it becomes available to be used by anyone at the company.

“Anyone at our company can go on the corporate intranet, read the skill sets, and get ice breakers if they don’t know how to start,” he says. “You can pick a coworker off the shelf and start chatting with them.”

For example, there’s Ben, a benefits expert who’s trained on everything having to do with employee benefits.

“We have our employee benefits consultants sitting with clients every day,” Gidwaney says. “Ben will take all the information and help the consultants strategize how to lower costs, and how to negotiate with carriers. He’s the consultants’ thought partner.”

There are similar AI coworkers working on retirement planning, and on property and casualty as well. These were built in-house because they’re core to the company’s business. But there are also external AI agents who can provide additional functionality in specialized yet less core areas, like legal or marketing content creation. In software development, OneDigital uses third-party AI agents as coding assistants.

When choosing whether to sign up for these agents, Gidwaney says he doesn’t think of it the way he thinks about licensing software, but more like hiring a human consultant or contractor. For example, will the agent be a good cultural fit?

But in some cases the stakes are higher than with human hires: a bad hire who turns out to be toxic will only interact with a small number of other employees, while an AI agent might interact with thousands of them.

“You have to apply the same level of scrutiny as how you hire real humans,” he says.

A vendor who looks like a technology company might also, in effect, be a staffing firm. “They look and feel like humans, and you have to treat them like that,” he adds.

Another way that AI agents are similar to human consultants is when they leave the company, they take their expertise with them, including what they gained along the way. Data can be downloaded, Gidwaney says, but not necessarily the fine-tuning or other improvements the agent received. Realistically, there might not be any practical way to extract that from a third-party agent, and that could lead to AI vendor lock-in.

Edward Tull, VP of technology and operations at JBGoodwin Realtors, says he, too, sees AI agents as something akin to people. “I see it more as a teammate,” he says. “As we implement more across departments, I can see these teammates talking to each other. It becomes almost like a person.”

Today, JBGoodwin uses two main platforms for its AI agents: Zapier, which lets the company build its own, and HubSpot, which offers its own pre-built agents as a service. “There are lead enrichment agents and workflow agents,” says Tull.

And the company is open to using more. “In accounting, if someone builds an agent to work with this particular type of accounting software, we might hire that agent,” he says. “Or a marketing coordinator that we could hire that’s built and ready to go and connected to systems we already use.”

With agents, his job is becoming less about technology and more about management, he adds. “It’s less day-to-day building and more governance, and trying to position the company to be competitive in the world of AI,” he says.

He’s not the only one thinking of AI agents as more akin to human workers than to software.

“With agents, because the technology is evolving so fast, it’s almost like you’re hiring employees,” says Sheldon Monteiro, chief product officer at Publicis Sapient. “You have to determine whom to hire, how to train them, make sure all the business units are getting value out of them, and figure when to fire them. It’s a continuous process, and this is very different from the past, where I make a commitment to a platform and stick with it because the solution works for the business.”

This changes how the technology solutions are managed, he adds. What companies will need now is a CHRO, but for agentic employees.

Managing outcomes, not persons

Vituity is one of the largest national, privately-held medical groups, with 600 hospitals, 13,800 employees, and nearly 14 million patients. The company is building its own AI agents, but is also using off-the-shelf ones, as AaaS. And AI agents aren’t people, says CIO Amith Nair. “The agent has no feelings,” he says. “AGI isn’t here yet.”

Instead, it all comes down to outcomes, he says. “If you define an outcome for a task, that’s the outcome you’re holding that agent to.” And that part isn’t different from holding employees accountable to an outcome. “But you don’t need to manage the agent,” he adds. “They’re not people.”

Instead, the agent is orchestrated and you can plug and play them. “It needs to understand our business model and our business context, so you ground the agent to get the job done,” he says.

For mission-critical functions, especially ones related to sensitive healthcare data, Vituity is building its own agents inside a HIPAA-certified LLM environment using the Workato agent development platform and the Microsoft agentic platform.

For other functions, especially ones having to do with public data, Vituity uses off-the-shelf agents, such as ones from Salesforce and Snowflake. The company is also using Claude with GitHub Copilot for coding. Nair can already see that agentic systems will change the way enterprise software works.

“Most of the enterprise applications should get up to speed with MCP, the integration layer for standardization,” he says. “If they don’t get to it, it’s going to become a challenge for them to keep selling their product.”

A company needs to be able to access its own data via an MCP connector, he says. “AI needs data, and if they don’t give you an MCP, you just start moving it all to a data warehouse,” he adds.

Sharp learning curve

In addition to providing a way to store and organize your data, enterprise software vendors also offer logic and functionality, and AI will soon be able to handle that as well.

“All you need is a good workflow engine where you can develop new business processes on the fly, so it can orchestrate with other agents,” Nair says. “I don’t think we’re too far away, but we’re not there yet. Until then, SaaS vendors are still relevant. The question is whether they can charge that much money anymore.”

The costs of SaaS will eventually have to come down to the cost of inference, storage, and other infrastructure, but vendors can’t survive charging the way they do now, he says. So SaaS vendors are building agents to augment or replace their current interfaces. But that approach has its limits: instead of using Salesforce’s agent, for example, a company can use its own agents to interact with the Salesforce environment.

“It’s already happening,” Nair adds. “My SOC agent is pulling in all the log files from Salesforce. They’re not providing me anything other than the security layer they need to protect the data that exists there.”

AI agents are set to change the dynamic between enterprises and software vendors in other ways, too. One major difference between software and agents is software is well-defined, operates in a particular way, and changes slowly, says Jinsook Han, chief of strategy, corporate development, and global agentic AI at Genpact.

“But we expect when the agent comes in, it’s going to get smarter every day,” she says. “The world will change dramatically because agents are continuously changing. And the expectations from the enterprises are also being reshaped.”

Another difference is agents can more easily work with data and systems where they are. Take for example a sales agent meeting with customers, says Anand Rao, AI professor at Carnegie Mellon University. Each salesperson has a calendar where all their meetings are scheduled, and they have emails, messages, and meeting recordings. An agent can simply access those emails when needed.

“Why put them all into Salesforce?” Rao asks. “If the idea is to do and monitor the sale, it doesn’t have to go into Salesforce, and the agents can go grab it.”

When Rao was a consultant having a conversation with a client, he’d log it into Salesforce with a note, for instance, saying the client needs a white paper from the partner in charge of quantum.

With an agent taking notes during the meeting, it can immediately identify the action items and follow up to get the white paper.

“Right now we’re blindly automating the existing workflow,” Rao says. “But why do we need to do that? There’ll be a fundamental shift of how we see value chains and systems. We’ll get rid of all the intermediate steps. That’s the biggest worry for the SAPs, Salesforces, and Workdays of the world.”

Another aspect of the agentic economy is instead of a human employee talking to a vendor’s AI agent, a company agent can handle the conversation on the employee’s behalf. And if a company wants to switch vendors, the experience will be seamless for employees, since they never had to deal directly with the vendor anyway.

“I think that’s something that’ll happen,” says Ricardo Baeza-Yates, co-chair of the US technology policy committee at the Association for Computing Machinery. “And it makes the market more competitive, and makes integrating things much easier.”

In the short term, however, it might make more sense for companies to use the vendors’ agents instead of creating their own.

“I recommend people don’t overbuild because everything is moving,” says Bret Greenstein, CAIO at West Monroe Partners, a management consulting firm. “If you build a highly complicated system, you’re going to be building yourself some tech debt. If an agent exists in your application and it’s localized to the data in that application, use it.”

But over time, an agent that’s independent of the application can be more effective, he says, and there’s a lot of lock-in built into applications. “It’s going to be easier every day to build the agent you want without having to buy a giant license,” he says. “The effort to get effective agents is dropping rapidly, and the justification for getting expensive agents from your enterprise software vendors is getting less.”

The future of software

According to IDC, pure seat-based pricing will be obsolete by 2028, forcing 70% of vendors to figure out new business models.

With technology evolving as quickly as it is, JBGoodwin Realtors has already started to change its approach to buying tech, says Tull. It used to prefer long-term contracts, for example, but that’s no longer the case. “You save more if you go longer, but I’ll ask for an option to re-sign with a cap,” he says.

That doesn’t mean SaaS will die overnight. Companies have made significant investments in their current technology infrastructure, says Patrycja Sobera, SVP of digital workplace solutions at Unisys.

“They’re not scrapping their strategies around cloud and SaaS,” she says. “They’re not saying, ‘Let’s abandon this and go straight to agentic.’ I’m not seeing that at all.”

Ultimately, people are slow to change, and institutions are even slower. Many organizations are still running legacy systems. For example, the FAA has just come out with a bold plan to update its systems by getting rid of floppy disks and upgrading from Windows 95. They expect this to take four years.

But the center of gravity will move toward agents and, as it does, so will funding, innovation, green-field deployments, and the economics of the software industry.

“There are so many organizations and leaders who need to cross the chasm,” says Sobera. “You’re going to have organizations at different levels of maturity, and some will be stuck in SaaS mentality, but feeling more in control while some of our progressive clients will embrace the move. We’re also seeing those clients outperform their peers in revenue, innovation, and satisfaction.”
