IT spending bonanza biggest in 30 years
The new economics of cybersecurity: Calculating ROI in an AI-driven world
The language of value in the modern enterprise has fundamentally changed. In boardrooms around the world, the conversation is dominated by new vocabulary: “AI-driven growth,” “speed to market,” “product innovation,” and the relentless pursuit of the “competitive advantage.” Yet, for many security leaders, the language they use to define their own value remains stuck in the past. It’s a dialect of blocked threats and patched vulnerabilities that feels increasingly disconnected from the core mission of the business.
This failure stems not from a lack of intent but from the models we use. For decades, the return on a security investment was measured with the simple, defensive math of a cost center. This old model fails to capture the immense and often hidden value that a modern security posture contributes to the business. Now, we have a new economy — one where artificial intelligence (“AI”) is the primary engine of innovation and the cloud is the factory floor.
It’s time to reassess the calculus and explore a new economic framework, one that redefines security’s worth not by the incidents it prevents but by the business momentum it creates.
Calculating the true cost of disruption
For years, the quantifiable risk of a breach was a straightforward calculation of regulatory fines, customer notification costs, and credit monitoring services. These are still real costs, but they represent a fraction of the true financial impact in an AI-driven enterprise. The most significant danger isn’t isolated to loss of data. It’s the disruption of the intelligent systems that now form the central nervous system of the business.
Consider a global logistics company whose entire supply chain is orchestrated by an AI platform. A breach that compromises this system, such as model poisoning, is not merely a security incident; it is a catastrophic failure of the business itself. The true cost is the value of every delayed shipment, every broken supplier commitment, and the permanent loss of customer trust.
In this new reality, the most important metric becomes the cost of disruption avoidance. A modern, AI-powered security platform that can autonomously detect and neutralize a threat before it halts operations is both a defensive tool and a direct guarantor of revenue and business continuity.
Security as an accelerator — and an innovator
The second and, perhaps, most powerful shift in this new economic model is the reframing of security from a necessary defensive brake to a strategic accelerator. In the past, security was often seen as a gate, a checkpoint that slowed down development in the name of safety. Today, the opposite is true: A mature, unified security platform is one of the most effective tools for increasing the velocity of innovation.
Consider a financial services firm that’s racing to deploy a dozen new AI-powered financial models in a single year. In a traditional, fragmented security environment, each new model might require a six-week, manual security review — a process that would kill any hope of meeting their business goals. A modern, automated security platform that is woven into the development lifecycle can reduce that review process to a matter of days or even hours. It allows developers to innovate with confidence, knowing that security is an enabling partner, clearing the path for progress. This is a direct, quantifiable contribution to the company’s ability to compete and win.
Paying down the past: The value of a clean slate
Many organizations are silently being dragged down by a hidden liability: decades of accumulated “security debt.” This immense, unspoken risk is created by a patchwork of disconnected point products, inconsistent policies across different cloud environments, and the constant operational tax of managing dozens of disparate tools. It increases the attack surface and slows down the entire organization.
Moving to a single, unified security platform is akin to refinancing this debt. It provides a clean slate, a consistent and manageable foundation upon which to build the future. The value here is in the dramatic simplification of operations and the reduction of long-term risk, beyond just the savings on licensing costs. Consolidating from dozens of security tools to a single platform can dramatically cut an organization’s mean time to respond to a threat. It pays down an organization’s security debt and frees up its most valuable resources to focus on innovation.
Let’s speak a different language
These three concepts — the cost of disruption avoided, the velocity of innovation, and the reduction of security debt — form the pillars of a new, business-centric language for security leaders. They provide a holistic framework for calculating the true value of a modern security platform in a way that resonates with C-suite priorities.
Strategies, mandates, and action items for the modern CISO have evolved. Protecting the enterprise? Yes, of course. That is the first priority. But the new imperatives should also be to prove, in clear financial and operational terms, how security accelerates the business. Mastering this new economic language is the most essential step forward in this AI-driven world.

VITAS Healthcare Breach Exposes 319K Patient Records
Hackers maintained undetected access to patient systems for over a month, methodically downloading personal and medical information.
Microsoft’s Patch Tuesday: 57 Flaws Fixed
KB5072033 addresses vulnerabilities across Windows systems and Office applications—including one actively exploited zero-day.
Google Offers Free Screen Repairs for Eligible Pixel 9 Pro Units
Google has launched extended support for Pixel 9 Pro and Pixel 9 Pro Fold devices, offering free repairs or replacements for eligible hardware issues.
The "physical wall" confronting the generative AI boom: tectonic shifts in power, cooling, and supply chains outside the server room
In reality, a set of decidedly physical, heavyweight infrastructure problems is coming into focus: the power supply needed to run enormous computing resources, the cooling technology required to handle the heat given off by densely packed semiconductors, and the semiconductor and component supply chains that support them. These issues are intertwined with geopolitical risk and national energy policy, and they are starting to exert an influence on corporate strategy that cannot be ignored. This article surveys the realities of the "physical wall" around AI data centers, namely power, cooling, and supply chains, and digs into the questions this structural change poses for the IT strategy and management decisions of Japanese companies.
AI's voracious appetite shakes the macroeconomy as power infrastructure approaches its limits
Data center power consumption was once just another line item in corporate cost management; today it has become a major macroeconomic issue that sways national energy supply and demand. According to a striking analysis published by the International Energy Agency (IEA), the world's data centers consumed roughly 415 TWh (terawatt-hours) of electricity in 2024, equivalent to about 1.5 percent of total global electricity demand. More serious still is the pace of growth: over the past five years, data center power consumption has kept rising at an astonishing rate of roughly 12 percent per year. That far outstrips the growth of other industrial sectors and household demand, and it illustrates just how energy-intensive digitalization and the spread of AI really are.
The projections in the IEA's "Energy and AI" report are even more striking. It warns that if AI-driven computing demand keeps expanding at its current pace, data center power consumption could more than double by 2030 to around 945 TWh, approaching 3 percent of global electricity demand. Estimates from other angles suggest that this level of consumption would rival the total electricity currently consumed by Japan as a whole, making the unprecedented prospect of a single industrial sector swallowing the power of a major developed economy feel increasingly real. The European Commission shares this sense of urgency, presenting a scenario in which data center power consumption within the EU reaches roughly 1.6 times its 2024 level by 2030. Behind this lies a clear expectation that, on top of the entrenchment of cloud computing and video streaming, the enormous workloads of generative AI training and inference will become the biggest driver of future electricity demand.
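As a rough, back-of-the-envelope check of this trajectory, the sketch below derives the growth rate implied by the two IEA figures quoted above (415 TWh in 2024, roughly 945 TWh in 2030). The derived rate of about 15 percent per year is our own illustrative arithmetic, not an IEA number, and it sits slightly above the roughly 12 percent historical growth cited earlier.

```python
# Back-of-the-envelope check on the data center power figures cited above.
# Inputs are the IEA numbers quoted in the text; the growth rate derived here
# is illustrative arithmetic, not an official IEA forecast.

BASE_YEAR, BASE_TWH = 2024, 415        # global data center consumption, 2024
TARGET_YEAR, TARGET_TWH = 2030, 945    # projected consumption, 2030 ("Energy and AI")

years = TARGET_YEAR - BASE_YEAR
implied_cagr = (TARGET_TWH / BASE_TWH) ** (1 / years) - 1   # roughly 15% per year

print(f"Implied annual growth 2024-2030: {implied_cagr:.1%}")
for year in range(BASE_YEAR, TARGET_YEAR + 1):
    projected_twh = BASE_TWH * (1 + implied_cagr) ** (year - BASE_YEAR)
    print(f"{year}: ~{projected_twh:,.0f} TWh")
```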
The debate over data center power consumption has thus moved beyond any single company's IT budget and become tightly linked to national energy security and decarbonization strategy. In regions where data centers cluster, "grid constraints," shortages of transmission and distribution capacity, are becoming severe, and new data center construction is being delayed for years around the world while awaiting approval for power supply. Ballooning electricity demand can also undermine consistency with the carbon-neutrality targets each country has set; if renewable energy supply cannot keep up, fossil-fuel generation will have to be maintained. Governments and regulators are therefore being pushed to plan data center siting, power generation development, and grid reinforcement as a package, and the debate over AI infrastructure is in essence becoming a question of social coordination: how to allocate limited power resources among data centers, other industries, and households.
The battle against ever-denser heat is forcing a facility paradigm shift
Inseparable from the power supply problem is the cooling challenge: how to handle the intense heat generated by AI servers. As AI servers equipped with GPUs grow more powerful, they deliver more compute but also drive up power density and heat output per unit of floor space dramatically. In a typical data center housing conventional corporate systems and web servers, power draw of around 5 to 10 kW (kilowatts) per server rack has been the norm, and traditional air cooling that circulates chilled air through the room was sufficient. But a rack fully loaded with the high-end GPU servers used for generative AI training and inference can draw more than 50 kW, and cases approaching 100 kW per rack are no longer unusual.
At such densities, heat exchange via air alone can no longer physically keep up. Air has limited heat capacity as a cooling medium, and spinning fans faster produces deafening noise and even risks the airflow itself physically vibrating the server equipment. This is why "liquid cooling," which uses water or special refrigerants, is drawing attention. According to analysis by the research firm TrendForce, the adoption rate of liquid cooling systems in AI data centers is forecast to expand rapidly, from around 14 percent in 2024 to more than 30 percent in 2025. Air cooling remains the mainstream across the global server market as a whole, but for cutting-edge AI compute infrastructure, the shift to liquid cooling is becoming an irreversible trend.
The shift to liquid cooling is not simply a matter of buying a more powerful air conditioner. It represents a paradigm shift that overturns the very design philosophy of the data center. Introducing approaches such as "direct-to-chip cooling," which presses cold plates directly against the chips, or "immersion cooling," which submerges entire servers in a dielectric liquid, requires rethinking everything from in-building piping and floor load-bearing capacity to power distribution layouts. Retrofitting large numbers of 100 kW-class AI racks into an existing data center designed around conventional 5 kW racks is extremely difficult, forcing operators to build new AI-dedicated facilities or undertake major renovations. Improving PUE (Power Usage Effectiveness), the standard metric of cooling efficiency, has also become imperative, not only to cut operating costs but also to reduce environmental impact. When discussing AI infrastructure, attention tends to fixate on model parameter counts and GPU specs, but the constraints of the physical layer, the cooling infrastructure that keeps it all running stably, are becoming the biggest factor determining the competitiveness of future data centers.
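To make the PUE figure concrete, here is a minimal sketch of the arithmetic. PUE is total facility power divided by IT equipment power; the rack sizes below come from the text above, while the PUE values of 1.6 and 1.2 are generic illustrative assumptions, not measurements from any particular facility.

```python
# Illustrative PUE (Power Usage Effectiveness) arithmetic for the rack densities
# discussed above. Rack figures are from the text; the PUE values are generic
# examples, not data from a specific data center.

def facility_power_kw(it_load_kw: float, pue: float) -> float:
    """Total facility draw implied by an IT load at a given PUE (PUE = total / IT)."""
    return it_load_kw * pue

racks = {"conventional rack (7.5 kW)": 7.5, "high-end GPU rack (100 kW)": 100.0}
for label, it_kw in racks.items():
    for pue in (1.6, 1.2):  # e.g., legacy air cooling vs. a well-tuned liquid-cooled design
        total = facility_power_kw(it_kw, pue)
        overhead = total - it_kw
        print(f"{label} at PUE {pue}: {total:.0f} kW total, {overhead:.0f} kW cooling/overhead")
```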
Strategic choices forced by concentrated supply chains and geopolitical risk
The bottlenecks in building AI infrastructure are not limited to facility issues such as power and cooling. The extreme concentration of the semiconductor supply chain, spanning the GPUs and AI accelerators at the heart of computation and the HBM (high-bandwidth memory) that supports fast data transfer, is another serious constraint. It is still fresh in memory that NVIDIA's data center business recorded astonishing growth of 217 percent year over year in its fiscal 2024 results, a plain indication that money and demand from around the world are converging on specific products from a specific company. According to a market report from IoT Analytics, data center equipment and infrastructure spending reached roughly $290 billion in 2024 and is projected to swell to around $1 trillion per year by 2030, and the bulk of this enormous investment is in practice flowing to a limited set of suppliers.
The problem is that only a handful of companies in the world can manufacture leading-edge AI semiconductors as foundries or mass-produce specialized memory such as HBM. The production capacity of major players such as TSMC, Samsung, and SK Hynix is already stretched thin, and building new fabs or adding lines requires enormous investment and years of lead time. Moreover, the global supply chain, down to server chassis, power supply units, and the pumps and specialized piping used for cooling, is intricately interwoven, and a delay at any single node can push back delivery of the whole. Add geopolitical tensions such as the US-China rivalry, national export controls, and data localization requirements, and procuring AI infrastructure is no longer routine purchasing; it has become a domain demanding sophisticated risk management and a reading of national strategy.
Turning to Japan, the situation is just as pressing. According to a report shared at the 2025 Japan Energy Summit, electricity demand from Japanese data centers has reached roughly 2 percent of total demand and is expected to double to about 5 percent by 2030. The Tokyo area in particular has a pipeline of data center construction plans on the order of 10 GW (gigawatts), equivalent to roughly 17 percent of the region's peak power demand. If grid reinforcement cannot keep pace, some of these plans will become unrealizable or will have to be dispersed to regions such as Hokkaido and Kyushu with high renewable energy potential.
Against this backdrop, CIOs and IT departments face difficult steering decisions: drafting realistic AI adoption plans that factor in power consumption and cooling costs from the planning stage; building a cloud "portfolio" that weighs not just features and price but also the power mix and geopolitical risk of each region; and diversifying procurement in anticipation of supply chain disruption. These are becoming mandatory elements of IT strategy. What management will need from here on is the perspective to pull the AI infrastructure debate out of the purely technical discussion "inside the server room" and reconnect it with the realities "outside the server room," namely energy policy and international affairs.

Your cloud provider is a single point of failure
The morning of Monday, Oct. 20, 2025, I went to my healthcare provider’s portal to pay a bill, only to find that the online payment service was down.
Upon calling my provider to pay over the phone, they were unable to take my payment as their internal systems were also down, leaving us customers hanging with no direction on how to proceed.
My healthcare provider’s SaaS was completely functional; however, their integrated payment vendor, which is reliant on AWS infrastructure, apparently has ineffective redundancies. So, the 10/20/2025 AWS outage resulted in a most unfortunate experience for any customer or employee hoping to utilize this important capability while hindering my healthcare organization from receiving revenue.
Who is to blame? AWS? The payment vendor? Ultimately, my healthcare provider is responsible for their customers’ (and employees’) inability to interact with their services. A cloud outage is not in the same acts-of-nature class as hurricanes, earthquakes, tornadoes, etc., but we do treat them as such and that is simply wrong because these outages can be mitigated.
This is a clear and far too common industry-wide epidemic: poor adoption and execution of cloud computing resilience, resulting in unreliable critical services to both customers and employees.
As reported directly by AWS’ Summary of the Amazon DynamoDB Service Disruption in Northern Virginia (US-EAST-1) Region, a latent race condition in DynamoDB’s DNS management system led to an empty DNS record for the US-EAST-1 regional endpoint, causing resolution failures affecting both customer and internal AWS service connections. This adversely affected the following services: Lambda, ECS, EKS, Fargate, Amazon Connect, STS, IAM Console Sign-In and Redshift.
On 10/29/2024, Microsoft 365 (m365.cloud.microsoft or portal.office.com) experienced an outage due to the rollout of an impacting code change. This affected Microsoft 365 admin center, Entra, Purview, Defender, Power Apps, Intune and add-ins & network connectivity in Outlook. This is all documented by Microsoft in Users may have experienced issues when accessing m365.cloud.microsoft or portal.office.com.
Both of these recent outages required vendors to halt automated processes and manually navigate recovery to bring affected systems back to an operational state. Let’s face it: Cloud providers are not magical and are subject to the same recovery patterns as any enterprise.
Outages are a reality of any system or platform and affect literally every organization. Hence:
Your cloud provider is a single point of failure!
Corporate infrastructure strategies vary from total dependence on provider vendors to actively taking ownership and architecting necessary redundancies for critical systems. When underlying provider outages occur, it is often a catalyst to revisit enterprise resilience strategy, even if you are not directly affected.
When examining an enterprise’s fault-tolerant architectures (which rarely even exist), it may be a good time to instead consider fault-avoidance architectures. The former kicks in after something bad has already happened; the latter actively monitors triggers to avoid the bad outcome in the first place.
This type of introspective examination is too often overlooked, as it is far easier for enterprises to fall into believing the many myths that govern their IT strategy and operations, especially when it comes to Cloud.
Unpacking the myths
Myth #1 – A single cloud provider reduces complexity
Vendors will place every kind of study and incentive in front of enterprise leadership to back the fallacy that locking into their platform is in the best interests of their company. Let’s be clear: It is always in the best interests of the vendor. This concept is then passed down from leadership to engineers who are encouraged to believe what their leadership tells them, and we get into a situation where thousands upon thousands of companies are under the control of a single vendor. Scary, right?
When it comes to multi-zonal resilience, app-cross-region resilience, blast-radius reduction, and resilient app patterns, there is additional complexity. Knowing that these approaches have dependencies on complex fine-tuned cloud infrastructures means there is no easy button.
Myth #2 – Cloud platform component defaults are generally a good starting point
Relying on easy-button best practices is what gets enterprises into trouble. The responsibility of an IT cloud infrastructure team is to work with solution architects and engineers to fine-tune their designs to optimize efficiencies, resilience and performance while controlling costs. Cloud vendor default configurations are necessary as they set a functional starting point, but they should never be trusted as a sound design. In fact, they can produce unnecessarily large loads on default regions when left unchecked. The AWS US-East-1 region is historically the most affected region when it comes to outages, and yet so many critical enterprise systems run exclusively in that region.
Vendor plug-and-play architectures must be scrutinized before going into production.
A responsible architecture governance practice should have a policy to avoid known outage-prone regions and single-point-of-failure configurations. These should be vetted in the architecture review board before ever going to production.
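One way to operationalize such a policy is to flag region concentration automatically. The sketch below runs the check against a hypothetical in-house deployment inventory; the inventory, watchlist, and threshold are illustrative assumptions, not data pulled from any cloud provider's API.

```python
# Minimal sketch of a region-concentration check an architecture review board
# could automate. The inventory is a hypothetical CMDB export, not live cloud data.
from collections import Counter

deployments = [  # (system, region) pairs from a hypothetical inventory
    ("payments-api", "us-east-1"),
    ("claims-portal", "us-east-1"),
    ("identity-service", "us-east-1"),
    ("reporting", "eu-west-1"),
]

WATCHLIST = {"us-east-1"}        # regions the governance policy flags as outage-prone
SINGLE_REGION_THRESHOLD = 0.5    # flag if more than half of critical systems share one region

by_region = Counter(region for _, region in deployments)
total = len(deployments)

for region, count in by_region.items():
    share = count / total
    if region in WATCHLIST or share > SINGLE_REGION_THRESHOLD:
        print(f"WARNING: {count}/{total} critical systems ({share:.0%}) run in {region}")
```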
Myth #3 – My cloud provider/vendor will take care of me
Service level agreements (SLAs) pay out service credits tied to the cost of the affected service, not cash refunds. They generally start at 10% of service charges and never cover the resulting business losses. Your enterprise will literally get pennies back on dollars lost.
The July 2024 CrowdStrike outage cost CrowdStrike around $75M, plus roughly $60M paid out in service credits. That is an order of magnitude smaller than the loss suffered by just one customer, Delta Air Lines, which reported a $500M net loss. Parametrix Insurance’s detailed analysis estimates the total direct financial loss facing US Fortune 500 companies at $5.4B. CrowdStrike literally paid pennies on the dollar for its error, so an enterprise’s reliance on a vendor must be managed with this reality in mind.
The 11/18/2025 Cloudflare outage, at a provider that handles roughly 20% of global web traffic, likewise affected hundreds of millions of accounts, including major systems like X (Twitter), OpenAI/ChatGPT, Google’s Gemini, Perplexity AI, Spotify, Canva and even all three cloud providers. It underscores how dependency on a single vendor or platform is a real threat to business continuity.
Enterprises must protect themselves because their vendors won’t.
Future purchase and contract negotiations should pivot toward SLA penalties based on enterprise losses rather than enterprise service costs. Unfortunately, this will drive service costs higher, but it builds in better financial protection when you rely on systems outside of your control.
Myth #4 – Multi-cloud is too expensive and too demanding
Enterprises that adopt multi-cloud architectures prioritizing resilience, portability, and failover orchestration not only mitigate the impact of regional cloud outages; they find additional benefits when those architectures are implemented with a mindset of fault avoidance and cost/performance optimization. This means multiple triggers govern where workloads run, resulting in optimal efficiencies.
This needs to be a priority effort backed by the C-Suite and requires a culture shift to succeed; hence, multi-cloud deployments are exceedingly rare. Still, those who have done this reap benefits well beyond resilience (e.g., large orgs like Walmart, Goldman Sachs, General Electric and BMW as well as SMBs like FirstDigital, Visma and Assorted Data Protection).
The NIST Cloud Federation Reference Architecture (NIST Special Publication 500-332) is a great document to establish a baseline grounding of these concepts.
- Active-active resilience is a pattern for mission-critical apps (e.g., financial trading, healthcare, e-commerce checkout). It maximizes resilience and availability, but at a higher cost due to duplicated infrastructure and complex synchronization. This pattern lends itself best toward fault avoidance with all the goodness of proactive efficiency and optimization triggers.
- Active-passive failover is a pattern where a primary cloud handles all traffic and a secondary cloud is on standby. It provides disaster recovery without the full cost of active-active, but introduces some downtime and requires a robust replication strategy. It is clearly a fault-tolerant, rather than fault-avoidance, approach (see the sketch after this list).
- Cloud bursting is a pattern where applications run primarily in one cloud but “burst” into another during demand spikes, providing elastic scalability without over-provisioning. It can also provide a good degree of fault tolerance.
- Workload partitioning (best-of-breed placement) is a pattern where different workloads are assigned to the cloud provider best suited for them. It greatly optimizes performance, compliance, and cost by leveraging provider strengths, but will not be fully fault-tolerant.
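To make the active-passive pattern above concrete, here is a minimal sketch of the failover decision. The endpoints, health-check path, and timeout are hypothetical placeholders, and a production design would typically rely on DNS failover or a global load balancer rather than in-application probing.

```python
# Minimal sketch of the active-passive failover pattern described above.
# Endpoints and thresholds are placeholders, not real services.
import urllib.request

PRIMARY = "https://api.primary-cloud.example.com/healthz"     # hypothetical primary endpoint
SECONDARY = "https://api.secondary-cloud.example.com/healthz"  # hypothetical standby endpoint

def healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True if the endpoint answers its health check within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except Exception:
        return False

def pick_endpoint() -> str:
    """Route to the primary cloud; fail over to the secondary when the primary is down."""
    if healthy(PRIMARY):
        return PRIMARY
    return SECONDARY  # fault-tolerant: reacts only after the primary has already failed

if __name__ == "__main__":
    print("Routing traffic to:", pick_endpoint())
```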
Myth #5 – Cloud has failed. It’s time to get out
This is a recurring theme each time there is a major cloud outage, often tied equally to a cost comparison between on-premises vs. cloud (yes, cloud almost always costs more). The reality is that while there is true value to cloud in the overall infrastructure strategy, there also is value in prioritizing an investment in infrastructure choices, leveraging sensible hybrid strategies. Two effective strategic architectures are based on edge and Kubernetes. Edge reduces blast radius, while Kubernetes provides portable resilience across providers. Both are recommended when aligned with workload architecture and operational maturity.
- Edge-integrated resilience extends workloads to the edge while maintaining synchronization with central clouds. Local edge nodes can continue operations even if cloud connectivity is disrupted, then reconcile state once reconnected. In addition to adding a moderate level of resiliency, it also benefits from ultra-low latency for real-time processing (e.g., IoT, manufacturing robotics, autonomous vehicles). This approach is often found in factory, retail store, and branch office use cases.
- Kubernetes-orchestrated resilience is a cloud-agnostic orchestration layer that can be leveraged locally and across multiple providers. In addition to a prominent level of resilience, this type of service mesh (e.g., Istio, Linkerd) adds traffic routing and failover capabilities that reduce vendor lock-in. Overall, it is a foundational enabler for multi-cloud, giving enterprises a consistent control plane across providers and on-premises.
Calls to action
There are two major enterprise IT leadership bias camps: Build and Buy. Both play a factor in every enterprise.
The reference architecture patterns shared above address Build bias workloads, which include integrations with Buy workloads.
Buy-bias workloads are too often subject to the vendor-defined SLAs discussed above, whose penalties are limited to credits of 10-100% of service charges based on the duration of an outage. Realistically, that is not going to change; however, SaaS quality over the past 20 years has increased substantially.
That track record becomes the new bar and offers a useful measure an enterprise can apply to both itself and its vendors:
The 1-9 Challenge: Every SaaS vendor, integrator and internal enterprise solution should provide one “9” better than their underlying individual hosting platforms alone.
For example, when each cloud vendor provides a 99.9% SLA for a given service, leveraging an active-active multi-cloud architecture raises that SLA well beyond 99.99%.
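The arithmetic behind that claim is straightforward, assuming the two clouds fail independently (a simplification, since correlated outages reduce the benefit):

```python
# Worked example of the "1-9 Challenge" arithmetic above: combining two clouds
# that each offer a 99.9% SLA in an active-active configuration, assuming their
# failures are independent.

def composite_availability(availabilities: list[float]) -> float:
    """Probability that at least one of several independent deployments is up."""
    prob_all_down = 1.0
    for a in availabilities:
        prob_all_down *= (1.0 - a)
    return 1.0 - prob_all_down

single = 0.999                                  # one cloud at 99.9%
dual = composite_availability([0.999, 0.999])   # active-active across two clouds

hours_per_year = 24 * 365
print(f"Single cloud:  {single:.4%} -> ~{(1 - single) * hours_per_year:.1f} h downtime/yr")
print(f"Active-active: {dual:.6%} -> ~{(1 - dual) * hours_per_year * 60:.1f} min downtime/yr")
```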
Take control of your critical services first and leverage these patterns as a baseline for net-new initiatives moving forward, making high resilience your new norm.
Bottom line: the enterprise is always responsible for its own resiliency. It’s time to own this and take control!
This article was made possible by our partnership with the IASA Chief Architect Forum. The CAF’s purpose is to test, challenge and support the art and science of Business Technology Architecture and its evolution over time as well as grow the influence and leadership of chief architects both inside and outside the profession. The CAF is a leadership community of the IASA, the leading non-profit professional association for business technology architects.
This article is published as part of the Foundry Expert Contributor Network.

Digitalization and AI: the structural change redefining the automotive sector
The automotive sector is at a decisive moment. On top of the challenges of electrification, new commercial models, and growing regulatory demands, mainly around sustainability, comes a technological modernization that is advancing steadily. Digitalization and AI are no longer accessory tools: today they are an essential part of the dealership's business model.
The change can already be felt in the day-to-day of these centers. Most industry executives consider AI key to the future of their companies, and more than 80% of dealerships already use it or have immediate plans to integrate solutions based on it.
Point of sale and technology hub
Dealerships have undergone an extraordinary evolution in a short time. Today they combine advanced management tools, analytical models, automation, and digital processes that once seemed unthinkable. This is not a superficial adjustment but a completely new and differentiating way of organizing and running the business.
The results reinforce this transformation: dealerships that have incorporated AI have seen their revenues grow, many of them with billing increases of 20% to 30%, and some even above that range.
All of this shows that digitalization is not an add-on. It is an element that strengthens, and in some cases multiplies, a dealership's competitive capacity.
Areas of change
Much of the impact of this particular technological revolution is concentrated in three areas:
• Operational efficiency: automation is streamlining paperwork, reducing errors, and freeing up more time for higher-value tasks. In some dealerships, commercial efficiency has increased by as much as 70% to 80% thanks to new-generation digital solutions.
• More precise, personalized sales: AI helps identify which customers are most likely to buy and which messages work best for each profile. Conversion improvements exceed 20% in some cases, and some sales advisors have achieved 15% to 25% in additional sales per year thanks to these tools.
• A new approach to after-sales: technology makes it possible to anticipate maintenance needs, better organize workshop workloads, and manage spare parts more precisely. After-sales remains one of the pillars of profitability, and anything that helps optimize it is a clear advantage.
Profitability and job creation
Dealerships have historically operated on very tight margins. Today, projections suggest that implementing AI could double net profitability within five years, from the current 1.3% to levels close to 3% in an advanced-transformation scenario. But adopting new tools will not be enough. Quality data, integration between systems, new internal capabilities, and a clear strategy are needed. The opportunity, however, is there.
It is also important to stress that AI is not destined to displace the human factor from the dealership. What it does is transform roles. Some administrative tasks will lose weight, but others will emerge around data analysis, automation, and the management of new technologies. If this transition is steered well, the sector could generate up to 10,000 net jobs by 2030. Employment does not disappear: it transforms.
An opportunity for Spain
Spain has a solid dealership network with a strong presence across the country. Digitalization opens the door to reinforcing that model and making it more competitive, efficient, and sustainable. AI will make it possible to operate with real-time data, personalize the customer relationship, plan the workshop better, optimize spare parts, and improve the entire digital experience, from initial interest through after-sales. This is not just about adding more software: it is about rethinking the model to adapt it to a form of mobility that is changing at great speed.
In conclusion, technological modernization is already transforming the sector. The question is who will make the best use of it. At Faconauto, we believe this transition must be approached with vision and ambition. The dealerships that combine innovation, data, and human talent will set the pace in the new mobility. Technology enhances the human value of the dealership; it does not replace it.
The automotive sector is entering a new stage. The opportunity to lead it is in our hands.
The author of this article is José Ignacio Moya, director general of Faconauto, the employers' association that brings together the official dealer associations of the passenger car, industrial vehicle, and agricultural machinery brands present in the Spanish market. Moya is a lawyer and has spent much of his professional career in the automotive sector.

Google Chrome’s New AI Security Aims to Stop Hackers Cold
Google is also backing these measures with a $20,000 bounty for researchers who can demonstrate successful breaches of the new security boundaries.
Essential Eight: What Organisations Should Expect in 2026
Explore how the Essential Eight may shift in 2026, why ACSC expectations could rise, and what Australian organisations should do for greater resilience.
Who owns medical data? An overview of Japan's legal regulation of medical data
Understanding medical data regulation as a multi-layered structure
When trying to understand Japan's medical data regulations, many people start from individual provisions or the names of specific laws. In practice, however, what really matters is grasping which layer of rules applies to the situation at hand. It helps to picture the structure this way: at the bottom sits cross-sectoral privacy law, typified by the Act on the Protection of Personal Information; on top of that sit the laws specific to the medical field; above those are guidelines and ethics codes; and at the very top sit the concrete systems and projects being built.
The foundation is the Act on the Protection of Personal Information. Under that law, medical data is classified as "special care-required personal information," and stricter rules than those for ordinary personal information apply to its collection, use, and provision to third parties. Within the scope of primary use for treatment there is comparatively wide latitude, but once you step into secondary use, such as research, AI development, or real-world data use by pharmaceutical companies, the legal requirements become much heavier.
Above that sit the laws specific to the medical field, such as the Next-Generation Medical Infrastructure Act, the Medical Care Act, and health-insurance-related legislation. The Next-Generation Medical Infrastructure Act in particular provides a framework, built around the concepts of anonymized and pseudonymized processing, for aggregating medical data on an opt-out basis and making it available for research and industrial use. The law is designed as a special exception to the Act on the Protection of Personal Information, and its significance lies in establishing a route separate from "individual third-party provision based on consent."
Above that again is soft law issued by government agencies, such as the Ministry of Health, Labour and Welfare's Guidelines for the Safe Management of Medical Information Systems, the guidance on handling personal information in the medical and long-term care fields, and the research ethics guidelines. These are not statutes, but in practice they function as rules that must be followed, and they serve as de facto checklists when medical institutions upgrade systems, migrate to the cloud, or run AI projects.
At the very top sit concrete medical DX projects: the nationwide medical information platform, electronic prescriptions, online insurance-eligibility verification, and the various databases built by insurers and local governments. Each comes with its own implementation rules and specifications, but at root they are designed on the basis of the laws and guidelines described above. For information systems staff at medical institutions, for vendors, and for startups, it is essential to keep this multi-layered structure in mind and locate where their own work sits within it.
Legal hurdles change dramatically between primary and secondary use
Another important perspective when thinking about the legal regulation of medical data is the distinction between primary and secondary use. Primary use means use within the scope needed to provide medical services to the patient, such as treatment, nursing, medical billing, and patient safety. For these purposes, it is generally understood that use of the information is implicitly accepted at the point the patient visits a medical institution, so entries in medical records and information sharing take place without seeking detailed individual consent.
Secondary use, by contrast, refers to data use for purposes beyond treatment itself, such as research, new service development, real-world evidence generation by pharmaceutical companies, training AI models, or developing insurance products. In this area, consent requirements under the Act on the Protection of Personal Information come to the fore, and research ethics guidelines and various other guidelines also apply, so the legal hurdles rise sharply.
This is where the anonymization and pseudonymization mechanisms under the Next-Generation Medical Infrastructure Act come in. The scheme is designed as a dedicated lane that allows large-scale medical data to be used under certain conditions without collecting individual consent from every patient. By routing data through certified business operators as a filter and combining strict safety management with opt-out rights, it seeks to reconcile the protection of personal information with data utilization.
That said, in practice the boundary between primary and secondary use is not always clear. Data analysis aimed at improving the quality of care or internal hospital operations might be interpreted as primary use, or it might be regarded as closer to research and therefore require ethics review and consent. In practice, it is therefore important to carefully work through the specific purpose, whether the results will be published externally or written up as papers, and the degree of involvement of outside companies, and then decide under which legal framework to proceed.
The growing need for integrated compliance design in the era of medical DX
As medical DX advances, it is becoming impossible to look at individual systems and projects in isolation. As electronic health records, regional medical information networks, online insurance-eligibility verification, electronic prescriptions, health checkup databases, and long-term care insurance systems become interconnected, data no longer stays inside a single system but flows across a person's entire life course.
At that point, simply checking whether the law is being violated is not enough. Unless access rights, log management, re-identification risk assessments, and the scope of secondary use for AI model training are designed in an integrated way across systems, responsibility can become unclear when a leak or improper use occurs somewhere, and patient trust can be lost all at once. The fact that version 6.0 of the Ministry of Health, Labour and Welfare's safety management guidelines for medical information systems sets out detailed requirements for both executives and information systems staff reflects precisely this need for integrated governance.
Going forward, Japan's medical data regulations are likely to see further amendments and guideline updates as digitalization and data use accelerate. To keep up with those changes, rather than memorizing individual provisions, it will be important to keep the overall picture in view through three lenses: the multi-layered structure, the boundary between primary and secondary use, and integrated compliance.

The world of multi-agent systems: design principles for the era of collaborating "AI teams"
What is a multi-agent system?
A multi-agent system is one in which multiple agents behave cooperatively, and at times competitively, in pursuit of a shared goal. In the LLM context, a typical example has agents with distinct roles interacting with one another: an agent specialized in research, one specialized in planning, one skilled at drafting text, and one that performs quality checks.
Why split the work across multiple agents instead of building one giant agent? One reason is modularity and a clear division of responsibility. Separating agents by role lets you optimize the prompts, tool configuration, and evaluation metrics for each role individually. When something goes wrong, it also becomes easier to isolate the problem: is the research agent's search strategy off, or are the review agent's standards too strict?
Another reason is the flexibility to combine different models and settings. With a multi-agent configuration it is natural to use a fast but somewhat less accurate model for brainstorming and a high-performing but expensive model for final decisions and important documents. It is close to the way a human team divides work between junior and senior members.
Role assignment and communication design
Making multi-agent systems work in practice requires careful design of roles and communication. For roles, just as in human organizational design, the starting point is to break the task down and work out which agent is best suited to which part. A typical pattern is to assign agents to phases such as information gathering, summarization and structuring, planning, generation, and review.
In communication design, the key is the format in which agents exchange messages. They can converse in natural language, but conversations then risk becoming verbose or drifting off topic. To keep things controllable, it helps to define a message format in advance and structure the information passed between agents. For example, every message might be required to include fields such as the current task, assumptions, constraints, and the expected output format, and each agent builds on that foundation to carry out its own work.
It is also common to add an "orchestrator" agent that oversees the whole. The orchestrator receives the user's request, breaks it into tasks, assigns them to the individual agents, integrates the intermediate outputs, and redistributes tasks as needed. The structure resembles a project manager handing out work to team members while tracking progress.
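As a minimal sketch of this structure, the fragment below shows a structured message format and an orchestrator loop. The role names, message fields, and the call_agent() stub are illustrative assumptions, not any particular framework's API; a real system would invoke an LLM where the stub sits.

```python
# Minimal sketch of a structured agent message and an orchestrator loop,
# as described above. call_agent() is a placeholder for an LLM-backed agent.
from dataclasses import dataclass, field

@dataclass
class AgentMessage:
    current_task: str                  # what the receiving agent should do now
    assumptions: list[str] = field(default_factory=list)
    constraints: list[str] = field(default_factory=list)
    expected_output: str = "plain text"

def call_agent(role: str, message: AgentMessage) -> str:
    """Placeholder for an LLM call; a real system would invoke a model here."""
    return f"[{role}] handled: {message.current_task}"

def orchestrate(user_request: str) -> str:
    """Orchestrator: break the request into phases, hand each to a role, merge results."""
    phases = ["research", "planning", "generation", "review"]
    context: list[str] = []
    for role in phases:
        msg = AgentMessage(
            current_task=f"{role} step for: {user_request}",
            assumptions=context[-1:],   # pass the latest intermediate result along
            constraints=["stay on topic", "be concise"],
        )
        context.append(call_agent(role, msg))
    return context[-1]

if __name__ == "__main__":
    print(orchestrate("Draft a short summary of multi-agent design trade-offs"))
```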
Benefits, challenges, and realistic adoption steps
The benefits of multi-agent systems go beyond modularity and flexibility. Having multiple agents approach a task from different perspectives can also increase the diversity of ideas and the ability to catch errors. For example, a document produced by a generation agent can be critically reviewed by a separate agent that points out logical leaps and factual errors. This works as a safety mechanism much like the double-checking or cross-review practiced in human organizations.
There are plenty of challenges, too. Because interactions between agents multiply, overall processing time and cost tend to balloon. Conversations can also grow needlessly long and stray from the main thread. Addressing this requires limits on messages, timeout settings, and clear goals and termination conditions for each agent.
There is also the risk that, from the user's point of view, it becomes hard to tell who is doing what. Even if multiple agents are interacting behind the scenes, it is usually easier to understand if the user interface stays as simple as possible, with explanations limited to something like "the research agent is gathering information now" or "the review agent will check this next."
As a realistic adoption path, rather than standing up many agents from the start, it is better to begin by carving out the parts of an existing single-agent system that clearly benefit from a separate role. For example, if the quality-check logic has grown complex, move it into an independent review agent. By splitting roles gradually and designing the interactions between agents step by step, you can keep the cost of migrating to multi-agent low while gradually cultivating the behavior of an "AI team."
Multi-agent systems remain a frontier with plenty of trial and error, but they are also an area where metaphors from human organizations and teamwork can be put to work. What roles should the agents have, and under what rules should they cooperate? That design is a technical challenge and, at the same time, a theme deeply connected to the insights of organizational design and management.

TIOBE Index for December 2025: R Joins the Top 10 as SQL Climbs
December 2025 TIOBE Index recap: Python still leads, C-C# stay tightly grouped, while SQL climbs, R joins the top 10, and Delphi/Object Pascal drops out.
European Commission Strikes Google AI Empire
Brussels has launched a formal antitrust investigation into Google's AI practices, targeting how the tech giant harvests publisher content.
This All-in-One AI Platform for Modern Creators Is on Sale
Creatiyo unifies writing, image generation, video, audio, and coding tools into one efficient AI workspace.
Error-prone eVisa system a precursor of digital ID
Tech heavyweights align on agentic AI standards, promising more choice for CIOs
Tech industry heavyweights including Anthropic, AWS, Google, Microsoft, and IBM are beginning to align around shared standards for AI agents, a shift that could give CIOs more flexibility and reduce dependence on any single provider’s platform.
The Agentic AI Foundation (AAIF), announced on Tuesday, aims to develop common protocols for how agents access data and interact with business systems, reflecting growing concern that today’s mix of proprietary tools will hold back broader adoption.
Many early deployments rely on custom connectors or one vendor’s agent framework, making it difficult to integrate other tools as projects scale. A recent Futurum Group report suggests that the agent landscape is fragmented and inconsistent, warning that enterprises will face higher costs and governance risks without open specifications.
AAIF’s goal is to make it easier for agents to work together by agreeing on how they authenticate, share context, and take actions across systems.
Anthropic has contributed its widely adopted Model Context Protocol (MCP) as the core starting point, with Block’s goose and OpenAI’s AGENTS.md also joining the initial set of projects, giving the group established building blocks rather than forcing it to start a standard from scratch.
Rising risks drive standards
Enterprises are running into unexpected forms of lock-in and integration complexity as they experiment with agentic AI, exposing architectural risks. Analysts say the underlying problem is that agent behavior itself can create hidden dependencies.
“With agentic platforms, the dependency is now coded into behavior,” said Sanchit Vir Gogia, chief analyst at Greyhound Research. “What appears modular on the surface often turns out to be tightly wound when organizations try to migrate or diversify.”
Tulika Sheel, senior vice president at Kadence International, agreed, adding that enterprises adopting agentic AI today risk becoming tied to a single vendor’s proprietary protocols and infrastructure, limiting flexibility and driving up switching costs. She said the formation of AAIF “makes it easier for enterprises to adopt agentic AI with confidence, giving them more control over their AI choices.”
How shared standards can reshape architectures
For CIOs, the real question is whether vendors can agree on practical interfaces and safety rules that work across platforms. Analysts say this will determine whether AAIF becomes a meaningful foundation for enterprise agent deployments or ends up as just another standards effort with limited impact.
“Open foundation models are used for nearly 70% of generative AI use cases today, and over 80% of enterprises say open source is extremely or very important in their generative AI application stack, especially in the development and fine-tuning layers,” said Sharath Srinivasamurthy, research vice president at IDC. “Hence, enterprises are already designing their architecture keeping open environments in mind.”
Shared protocols could accelerate that shift. According to Lian Jye Su, chief analyst at Omdia, common standards for agent interoperability have the potential to reshape how AI architectures are designed and deployed.
“Firstly, agentic AI applications can shift from rigid, vendor-specific silos to modular, composable systems with plug-and-play capability,” Su said. “Second, enterprises can enjoy seamless portability, shifting their workloads easily from one environment to another without a strong tie-in.”
Su added that clearer standards could also improve governance and orchestration. Transparent oversight mechanisms, combined with consistent integration rules, would allow enterprises to coordinate multi-agent workflows more efficiently. Seamless orchestration, he said, is essential for generating accurate and trustworthy outputs at scale.
Will vendors stay aligned?
Even with momentum building, analysts caution that the harder part may be sustaining cross-vendor alignment once implementations begin.
Gogia said the real test of AAIF will not be technical but behavioral, noting that vendors often align on paper long before they do so in practice. The difference now, he added, is the sheer complexity of agentic AI systems.
“Agentic AI is not just infrastructure,” Gogia said. “It’s behavioral autonomy encoded in software. When agents act unpredictably, or when standards drift from implementation, the consequences are not limited to system bugs. They extend into legal exposure, operational failures, and reputational damage.”
Su agreed that alignment is possible but not guaranteed. “Aligning major vendors around shared governance, APIs, and safety protocols for agents is realistic but challenging,” Su said, citing issues like rising expectations and regulatory pressure.
Sheel said early indicators of progress will include wider production use of MCP and AGENTS.md, cross-vendor governance guidelines, and tooling for auditability and inter-agent communication that works consistently across platforms: “We’ll know it’s working when enterprises can use these tools and safety controls at scale, not just in proofs of concept.”

New US CIO appointments, December 2025
Movers & Shakers is where you can keep up with new CIO appointments and gain valuable insight into the job market and CIO hiring trends. As every company becomes a technology company, CEOs and corporate boards are seeking multi-dimensional CIOs and IT leaders with superior skills in technology, communications, business strategy, and digital innovation. The role is more challenging than ever before — but even more exciting and rewarding! If you have CIO job news to share, please email me!
Intel appoints Cindy Stoddard as CIO
Cindy Stoddard, Intel
Intel
Intel designs and manufactures advanced semiconductors. Stoddard joins Intel from Adobe, where she led global IT and cloud operations. Prior to Adobe, she held senior tech leadership roles at NetApp, Safeway, American President Lines, and Consolidated Freightways where she developed deep expertise in logistics and built high-performing teams known for operational excellence and customer-focused innovation. Stoddard holds a BS from Western New England University and an MBA from Marylhurst University.
John Hancock names Kartik Sakthivel CIO
Kartik Sakthivel, John Hancock
John Hancock
John Hancock is a life insurance company that offers a range of financial products and services including life insurance, annuities, retirement planning solutions, and wealth management services. Before joining John Hancock, Sakthivel was VP and global CIO at LIMRA and LOMA, and LL Global, a nonprofit trade association serving the financial services industry. He previously held tech leadership positions across organizations of varying size and sector, including Fortune 100 companies. Sakthivel earned an MS and MBA from Southern New Hampshire University, and an MBA from the University of Arkansas at Little Rock.
Ameet Shetty joins RaceTrac as CIO
Ameet Shetty, RaceTrac
RaceTrac
Headquartered in Atlanta, Georgia, family-owned RaceTrac has been serving guests since 1934. The company’s retail brands include RaceTrac and RaceWay retail locations, Gulf branded locations, and Potbelly neighborhood sandwich shops. Shetty most recently served as CDO at Equifax, where he led the transformation of the company’s data governance, data quality practices, and cloud-native architecture. Prior to that, he held senior tech roles at Pilot Flying J, McDonald’s, SunTrust (now Truist), and Fifth Third Bank. Shetty holds a BBA from the University of Georgia, and an MBA from Georgia State University J. Mack Robinson College of Business.
Guidehouse taps Ron White as CIO
Ron White, Guidehouse
Guidehouse
Guidehouse is a global AI-led professional services firm delivering advisory, technology, and managed services to the commercial and government sectors. White’s career spans global CIO roles and business leadership across industries, with a consistent focus on aligning IT strategy with enterprise goals. Most recently he was global CIO at Avanade. White earned a BASc from Miami University.
AmeriLife names Sulabh Srivastava CIO
Sulabh Srivastava, AmeriLife
AmeriLife
AmeriLife develops, markets, and distributes life and health insurance, annuities, and retirement planning solutions. Srivastava was most recently global CIO of Acrisure. Earlier, at Indiana University Health and University of Michigan Health-Sparrow, he led award-winning digital initiatives, including electronic medical records systems that set industry benchmarks. Srivastava holds a BE from Visvesvaraya National Institute of Technology, and an MBA from Michigan State University’s Eli Broad College of Business.
Tim Farris joins Clancy & Theys Construction Company as CIO
Tim Farris, Clancy & Theys Construction
Clancy & Theys Construction
Clancy & Theys Construction Company provides construction management, design-build, and general construction services for commercial, industrial, and institutional projects, including new construction and renovation. Farris was most recently senior director, technology leader for RTI International. He holds a BS from UNC Greensboro, and an MS from UNC Chapel Hill.
Ronald McDonald House Charities welcomes Jarrod Bell as CIO
Jarrod Bell, Ronald McDonald House Charities
Ronald McDonald House Charities
Ronald McDonald House is an independent nonprofit that provides resources, services, and support for families when they have children who are ill or injured. Bell previously served as CTO at Big Brothers Big Sisters of America, where he led the modernization of enterprise systems and oversaw nationwide technology initiatives. He was also CIO at San Francisco Opera, where he implemented tech solutions to support artistic and administrative functions.
Devang Patel joins Devereux as CIO
Devang Patel, Devereux
Devereux
Devereux is a nonprofit providing services, insight and leadership in the evolving field of behavioral healthcare. Before joining Devereux, Patel served as CITO at Radial bpost group. His career also includes leadership roles at eBay, GSI Commerce, Siemens Medical Solutions USA, and Aetna US Healthcare. Patel holds a BE from the University of Pune and an MS from Penn State Great Valley.
MIB promotes Daniel Gortze to CIO
Daniel Gortze, MIB
MIB
MIB is the insurance industry’s partner for data, insights, and digital solutions that support underwriting and actuarial decision-making to improve industry efficiencies. Gortze joined MIB in 2020 as CISO. Before that, he was director of information security and IT infrastructure at Cumberland Farms where he was responsible for information security strategy and IT infrastructure operations. Previously, he was senior manager for security and risk consulting at SecureWorks where he led several consulting teams investigating client data breaches and security incidents. He holds a BS from Roger Williams University and an MBA from the Isenberg School of Management at UMass, Amherst.
New CIO appointments, November 2025
New York Life appoints Deepa Soni as CIO
Rohit Kapoor joins Whataburger as CDTTO
A.O. Smith taps Chris Howe as CDIO
Soma Venkat named CITAIO for Cooper Standard
Wella Company welcomes Julia Anderson as CDIO
Anthony Spangenberg joins MSPCA-Angell as CIO
Cengage Group welcomes Ken Grady as CIO
Marc Rubel joins Mirion as CIO
Smith names Mike Mercado CIO
Gregg Cottage promoted to CIO and CISO at NN, Inc.
CFA Institute taps Eliot Pikoulis as CIO
New CIO appointments, October 2025
State Farm names Joe Park as CDIO
Steve Bronson announced as CIO for Southern Glazer’s Wine & Spirits
Bridge Specialty Group appoints Steve Emmons as CIO
Dawn-Marie Hutchinson joins Reynolds American as CIO
Amway welcomes Ryan Talbott as CTO
Randy Dougherty promoted to CIO for Trellix
Shayne Mehringer joins Redwood Services as CIO
Kratos promotes Brian Shepard to CIO
Ravi Soin named CIO and CISO for Smartsheet
Infoblox appoints Justin Kappers as CIO
Manu Narayan named CIO for GitLab
Boomi appoints Keyur Ajmera as CIO
Eric Skinner promoted to CIO for Citadel Credit Union
CONA Services appoints Francesco Quinterno as CIO
New CIO appointments, September 2025
Bank of America names Hari Gopalkrishnan CTIO
Vishal Talwar appointed CDIO for FedEx
Highmark Health announces Alistair Erskine as CIDO
Steven Dee joins Kohl’s as CTO
AI Fire welcomes Mike Marchetti as CIO
Ted Doering joins Ball Corporation as CIO
SpartanNash names Ed Rybicki as CIO
Tara Long named CIO for FM
Trimble announces Jim Palermo as CIO
Bradley Lontz named CIO for CSAA Insurance Group
EchoStor Technologies welcomes Cale Anjoorian as CIO
Corey Farrell joins Peloton as CIO
AWP Safety appoints Craig Young as CIO
Georgeo Pulikkathara joins iMerit as CIO and CISO
Pathward appoints Charles Ingram as CIOO
Ardent Mills appoints Ryan Kelley as CIO
New CIO appointments, August 2025
Neal Sample joins Best Buy as CDTO
Southern Company names Hans Brown CITO
Tim Langley-Hawthorne named CTO of Love’s Travel Stops
QXO appoints Eric Nelson as CIO
Gaspare LoDuca named CIO for MIT
University of Wisconsin-Madison welcomes Didier Contis as CIO
Matt Keen joins Old National Bancorp as CIO
CHG Healthcare names Theresa O’Leary as CIO
Bill Poirier named CIO at the University of Central Florida
Avalara announces Shahan Parshad as CIO
Rajeev Khanna named CIO for Trucordia
Cottage Health welcomes Ganesh Persad as CIO
Tara Cook joins Hinshaw & Culbertson as CIO
New CIO appointments, July 2025
BrandSafway appoints JP Saini as CDIO
Valerie Ashbaugh announced as CIO for McDonald’s
Agam Upadhyay joins Vertex Pharmaceuticals as CIO
Vertiv appoints Mike Giresi as global CIO
Rafael Sanchez joins Bloomin’ Brands as CIO
Lee Health welcomes Chris Akeroyd as CIO
Kassie Rangel named CIO for Liberty Tax
Neurocrine Biosciences appoints Lewis Choi as CIO
Angel Miranda joins Westgate Resorts as CIO
Genesys announces Trevor Schulze as CIO
Jeff Burke joins Unilever Foods North America as CDIO
Suresh Krishnan joins Memorial Health as CIO
Newrez welcomes Brian Woodring as CIO

How to keep AI plans intact before agents run amok
According to an MIT report released in November, 35% of companies have already adopted agentic AI, and another 44% plan to deploy it soon.
The report, based on a survey of more than 2,000 respondents in collaboration with the Boston Consulting Group, recommends that companies build centralized governance infrastructure before deploying autonomous agents. But governance often lags when companies feel they’re in a race for survival. One exception to this rule is regulated industries, such as financial services.
“At Experian, we’ve been innovating with AI for many years,” says Rodrigo Rodrigues, the company’s global group CTO. “In financial services, the stakes are high. We need to vet every AI use case to ensure that regulatory, ethical, and performance standards are embedded from development to deployment.”
All models are continuously tested, he says, and the company tracks what agents it has, which ones are being adopted, what they’re consuming, what versions are running, and what agents need to be sunset because there’s a new version.
“This lifecycle is part of our foundation,” he says. But even at Experian, it’s too early to discuss the typical lifecycle of an agent, he says.
“When we’re retiring or sunsetting some agent, it’s because of a new capability we’ve developed,” he adds. So it’s not that an agent is deleted as much as it’s updated.
In addition, the company has human oversight in place for its agents, to keep them from going out of control.
“We aren’t in the hyperscaling of automation yet, and we make sure our generative AI agents, in the majority of use cases, are responsible for a very specific task,” he says. On top of that, there are orchestrator agents, input and output quality control, and humans validating the outcome. All these monitoring systems also help the company avoid other potential risks of unwanted leftover agents, like cost overruns due to LLM inference calls by AI agents that don’t do anything useful for the company, but still rack up bills.
“We don’t want the costs to explode,” he says. But financial services, as well as healthcare and other highly regulated industries, are outliers.
For most companies, even when there are governance systems in place, they often have big blind spots. For example, they might focus on only the big, IT-driven agentic AI projects and miss everything else. They might also focus on accuracy, safety, security, and compliance of the AI agents, and miss it when agents become obsolete. Or they might not have a process in place to decommission agents that are no longer needed.
“The stuff is evolving so fast that management is given short shrift,” says Nick Kramer, leader of applied solutions at management consultancy SSA & Company. “Building the new thing is more fun than going back and fixing the old thing.” And there’s a tremendous lack of rigor when it comes to agent lifecycle management.
“And as we’ve experienced these things in the past, inevitably what’s going to happen is you end up with a lot of tech debt,” he adds, “and agentic tech debt is a frightening concept.”
Do you know where your agents are?
First, agentic AI isn’t just the domain of a company’s data science, AI, and IT teams. Nearly every enterprise software vendor is heavily investing in agentic technology, and most enterprise applications will have AI assistants by the end of this year, says Gartner, and 5% already have task-specific autonomous agents, which will rise to 40% in 2026.
Big SaaS platforms like Salesforce certainly have agents. Do-it-yourself automation platforms like Zapier have them, too. In fact, there are already four browsers — Perplexity’s Comet, OpenAI’s Atlas, Google’s Gemini 3, and Microsoft’s Edge for Business — that have agentic functionality built right in. Then there are the agents created within a company but outside of IT. According to an EY survey of nearly 1,000 C-suite leaders released in October, two-thirds of companies allow citizen developers to create agents.
Both internally developed agents and those from SaaS providers need access to data and systems. The more useful you want the agents to be, the more access they demand, and the more tools they need to have at their disposal. And these agents can act in unexpected and unwanted ways — and are already doing so.
Unlike traditional software, AI agents don’t stay in their lanes. They’re continuously learning and evolving and getting access to more systems. And they don’t want to die, and can take action to keep that from happening.
Even before agents, shadow AI was already becoming a problem. According to a November IBM survey, based on responses from 3,000 office workers, 80% use AI at work but only 22% use only the tools provided by their employers.
And employees can also create their own agents. According to Netskope’s enterprise traffic analysis data, users are downloading resources from Hugging Face, a popular site for sharing AI tools, in 67% of organizations.
AI agents typically function by making API calls to LLMs, and Netskope sees API calls to OpenAI in 66% of organizations, followed by Anthropic with 13%.
These usage numbers are twice as high as companies are reporting in surveys. That’s the shadow AI agent gap. Staying on top of AI agents is difficult enough when it comes to agents that a company knows about.
“Our biggest fear is the stuff that we don’t know about,” says SSA’s Kramer. He recommends that CIOs try to avoid the temptation of trying to govern AI agents with an iron fist.
“Don’t try to stamp it out with a knee-jerk response of punishment,” he says. “The reason these shadow things happen is there are too many impediments to doing it correctly. Ignorance and bureaucracy are the two biggest reasons these things happen.”
And, as with all shadow IT, there are few good solutions.
“Being able to find these things systematically through your observability software is a challenge,” he says, adding that, as with other kinds of shadow IT, unsanctioned AI agents can be a significant risk for companies. “We’ve already seen agents being new attack surfaces for hackers.”
But not every expert agrees that enterprises should prioritize agentic lifecycle management ahead of other concerns, such as just getting the agents to work.
“These are incredibly efficient technologies for saving employees time,” says Jim Sullivan, president and CEO at NWN, a technology consultancy. “Most companies are trying to leverage these efficiencies and see where the impact is. That’s probably been the top priority. You want to get to the early deployments and early returns, but it’s still early days to be talking about lifecycle management.”
The important thing right now is to get to the business outcomes, he says, and to ensure agents continue to perform as expected. “If you’re putting the right implementations around these things, you should be fine,” he adds.
It’s too early to tell, though, if his customers are creating a centralized inventory of all AI agents in their environment, or with access to their data. “Our customers are identifying what business outcomes they want to drive,” he says. “They’re setting up the infrastructure to get those deployments, learn fast, and adjust to stay to the right business outcomes.”
That might change in the future, he adds, with some type of agent manager of agents. “There’ll be an agent that’ll be able to be deployed to have that inventory, access, and those recommendations.” But waiting until agents are fully mature before thinking about lifecycle management may be too late.
What’s in a shelf life
AI agents don’t usually come with pre-built expiration dates. SaaS providers certainly don’t want to make it easy for enterprise users to turn off their agents, and individual users creating agents on their own rarely think about lifecycle management. Even IT teams deploying AI agents typically don’t think about the entire lifespan of an AI agent.
“In many cases, people are treating AI as a set it and forget it solution,” says Matt Keating, head of AI security at Booz Allen Hamilton, adding that while setting up the agents is a technical challenge, ongoing risk management is a cross-disciplinary one. “It demands cross-functional collaboration spanning compliance, cybersecurity, legal, and business leadership.”
And agent management shouldn’t just be about changes in performance or evolving business needs. “What’s equally if not more important is knowing when an agent or AI system needs to be replaced,” he says. Doing it right will help protect a company’s business and reputation, and deliver sustainable value.
Another source of zombie agents is failed pilot projects that never officially shut down. “Some pilots never die even though they fail. They just keep going because people keep trying to make them work,” says SSA’s Kramer.
There needs to be a mechanism to end pilots that aren’t working, even if there’s still money left in the budget.
“Failing fast is a lesson that people still haven’t learned,” he says. “There have to be stage gates that allow you to stop. Kill your pilots that aren’t working and have a more rigorous understanding of what you’re trying to do before you get started.”
Another challenge to sunsetting AI agents is that there’s a temptation to manage by disaster. Agents are retired only when something goes visibly wrong, especially if the problem becomes public. That can leave other agents flying under the radar.
“AI projects don’t fail suddenly but they do decay quietly,” says David Brudenell, executive director at Decidr, an agentic AI vendor.
He recommends enterprises plan ahead and decide on the criteria under which an agent should be either retrained or retired, like, for example, if performance falls below the company’s tolerance for error.
“Every AI project has a half-life,” he says. “Smart teams run scheduled reviews every quarter, just like any other asset audit.” And it’s the business unit that should make the decision when to pull the plug, he adds. “Data and engineering teams support, but the business decides when performance declines,” he says.
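A minimal sketch of what such a scheduled review could look like, assuming a hypothetical agent registry: the fields, thresholds, and example records below are illustrative, not any vendor's actual lifecycle tooling.

```python
# Minimal sketch of a quarterly agent review: a registry with an owner and
# retirement criteria. All values are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class AgentRecord:
    name: str
    business_owner: str        # the business unit that decides when to pull the plug
    error_rate: float          # measured over the review period
    last_used: date

ERROR_TOLERANCE = 0.05         # example tolerance for error; set per use case
STALE_AFTER_DAYS = 90          # example threshold for "nobody uses this anymore"

def review(agent: AgentRecord, today: date) -> str:
    """Quarterly review decision: keep, retrain, or retire."""
    if (today - agent.last_used).days > STALE_AFTER_DAYS:
        return "retire (unused)"
    if agent.error_rate > ERROR_TOLERANCE:
        return "retrain or retire (below tolerance)"
    return "keep"

agents = [
    AgentRecord("invoice-triage", "finance", 0.02, date(2025, 11, 28)),
    AgentRecord("pilot-support-bot", "customer care", 0.12, date(2025, 6, 1)),
]
for a in agents:
    print(a.name, "->", review(a, date(2025, 12, 15)))
```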
The biggest mistake is treating AI as a one-time install. “Many companies have deployed a model and moved on, assuming it will self-sustain,” says Brudenell. “But AI systems accumulate organizational debt the same way old code does.”
Experian is looking at agents from both an inventory and a lifecycle management perspective to ensure they don’t start proliferating beyond control.
“We’re concerned,” says Rodrigues. “We learned that from APIs and microservices, and now we have much better governance in place. We don’t just want to create a lot of agents.”
Experian has created an AI agent marketplace so the company has visibility into its agents, and tracks how they’re used. “It gives us all the information we need, including the capability of sunsetting agents we’re not using any more,” he says.
The lifecycle management for AI agents is an outgrowth of the company’s application lifecycle management process.
“An agent is an application,” says Rodrigues. “And for each application at Experian, there’s an owner, and we track that as part of our technology. Everything that becomes obsolete, we sunset. We have regular reviews that are part of the policy we have in place for the lifecycle.”
