NCSC warns of confusion over true nature of AI prompt injection (CIO)
The booming neocloud market: Can AI-specialized clouds become a new option?
What is the neocloud, and why is explosive growth expected?
A neocloud is a cloud platform specialized in GPU-centric, high-performance infrastructure. Its main offerings are GPUaaS (GPU as a Service), GenAI (generative AI) platform services, and high-capacity data centers.
The neocloud market is growing at an astonishing pace. According to research firm Synergy Research Group, revenue in the second quarter of 2025 (April-June) grew 205% year over year, breaking the $5 billion mark, and full-year 2025 revenue is expected to reach $23 billion.
Behind this rapid growth is strong demand for AI infrastructure.
"Enterprise AI demand is surging, and cloud providers such as the hyperscalers are struggling to match supply to that enormous demand," commented Jeremy Duke, founder and chief analyst at Synergy.
Traditionally, enterprises had essentially two options for running AI workloads: on-premises or the public cloud. Both come with significant drawbacks. On-premises, GPUs are expensive and power-hungry, specialized talent is hard to secure, and physical deployment is difficult. The hyperscaler public clouds (Amazon Web Services (AWS), Microsoft Azure, and Google Cloud) offer a broad range of services, but costs are hard to predict, and where industry, use case, or organizational rules impose constraints, data sovereignty concerns remain.
Neoclouds emerged as a third option. Where the hyperscalers offer broad cloud service portfolios, neoclouds differentiate themselves by specializing in GPUs and AI workloads.
Cynthia Ho, associate research director in IDC's Asia/Pacific enterprise server and data center research group, notes: "Neocloud providers have an edge over the hyperscalers in securing resources quickly through their contracts with NVIDIA and delivering high-performance services, and they are winning share in the fast-growing AI market."
The neocloud's appeal: a safe place to experiment
Major players in the neocloud market include CoreWeave (founded 2017), Crusoe (2018), Lambda (2012), Nebius (spun out of Yandex in 2024), and OpenAI (2015). OpenAI, the provider of ChatGPT, is expected to become a significant player through the Stargate AI infrastructure initiative it announced in early 2025. New entrants such as Applied Digital, DataRobot, and Together AI keep arriving. Interestingly, many of these companies, CoreWeave among them, are former crypto-mining firms that have pivoted into high-performance computing services.
The providers named so far are large players operating mainly in the US and Europe, but there are also locally focused neoclouds. One is Sharon AI, which operates in Australia; founded in 2024, it announced a capacity expansion contract of up to 50 MW that November.
Dan Mons, CTO of Sharon AI, discussed the model with Stefan Leitl, Cisco's vice president and general manager for Australia and New Zealand, at an event Cisco held in Melbourne in November. The conversation highlighted the following neocloud advantages.
First, cost. Neoclouds offer more predictable pricing than the hyperscalers, avoiding the risk of unexpected charges. For enterprises wary of surprise bills under pay-as-you-go metering, that is a major benefit.
Second, depth of expertise. Many neocloud providers come from HPC (high-performance computing) and supercomputing backgrounds. Indeed, Mons offered what he called a little-discussed secret of AI: "Much of the knowledge needed for AI infrastructure has been understood in the HPC and supercomputing world for more than 40 years." For people with an HPC background like himself, "none of it is new," he said, and it is precisely that expertise that lets neoclouds handle complex AI workloads.
Third, speed of resource provisioning. Through their contracts with Nvidia, neocloud providers can secure GPU resources faster than the hyperscalers, a decisive advantage while the GPU shortage persists.
Beyond these, Mons raised a more intriguing advantage: a place where you can fail safely. It is often said that 95% of generative AI projects never make it from proof of concept to production (per an MIT report). "What organizations need is a place where they can fail safely. You need to fail fast and learn. Trying new technology matters, and that is the place we can provide," Mons said.
The last advantage raised was data sovereignty. "Data sovereignty is a multidimensional problem," Mons observed. "There is no AI without data; data is a currency. Depending on the kind of data, you have to think about where it should live, which compliance regimes apply, how to find skills in the region, and how to find vendors you can trust." Locally operated neoclouds such as Sharon AI know these region-specific requirements well, offering value the global hyperscalers do not.
Why enterprises should pay attention to neoclouds
So what kinds of organizations actually use neoclouds?
Mons pointed to the Victor Chang Cardiac Research Institute as one early customer. The institute has moved its traditional research onto GPUs and is applying AI. Its reasons for choosing Sharon AI went beyond GPU access: the data a medical research institute handles is highly sensitive, making data sovereignty a critical concern, and Sharon AI runs its infrastructure at two sites in Australia, so data never leaves the country.
Alongside research institutes and universities like Victor Chang, neocloud customers include startups building AI services. "At this point they tend to be companies and organizations with relatively deep technical knowledge," Ho said. In some regions, neoclouds are also said to serve Chinese companies whose access to GPUs is restricted.
Growth looks set to continue. Synergy forecasts the neocloud market will reach roughly $180 billion by 2030, expanding at a 69% compound annual growth rate. The GPUaaS/GenAI platform services market is currently growing at 165% a year, and neoclouds hold a considerable share of it.
But challenges remain. Ho points to utilization: "Enormous infrastructure investment is required, but what is actual utilization? Usage may not be keeping up." For the neocloud to become an established option, providers will need to make the use cases and benefits clearer.
And what about Japan? Kato, a senior research manager at IDC covering Japan's domestic enterprise infrastructure market, acknowledges that no provider in Japan has yet branded itself outright as a neocloud, but notes that large-scale GPU investment is under way through programs such as METI's cloud program, suggesting GPU cloud services could soon take off. "These environments are optimized to use GPUs to the fullest, and the performance benefits are large," he says. Today's use cases are mostly R&D and academic research, workloads that previously ran on HPC, but enterprises should be watching closely. "There are provider-side hurdles before neocloud services take off in earnest in Japan, but for companies in the HPC and AI domains that already use GPUs or have large-scale demand, the cloud could become an efficient and effective infrastructure option," Kato said, adding: "Cost and speed are the neocloud's defining characteristics. In fields that apply AI, where development capability decides competitive advantage, it is worth watching as one of the options."

IBM to buy Confluent to extend its data and automation portfolio
IBM has agreed to acquire cloud-native enterprise data streaming platform Confluent in a move designed to expand its portfolio of tools for building AI applications.
The company said Monday in a release that it sees Confluent as a natural fit for its hybrid cloud and AI strategy, adding that the acquisition is expected to "drive substantial product synergies" across its portfolio.
Confluent connects data sources and cleans up data. It built its service on Apache Kafka, an open-source distributed event streaming platform, sparing its customers the hassle of buying and managing their own server clusters in return for a monthly fee per cluster, plus additional fees for data stored and data moved in or out.
IBM expects the deal, which it valued at $11 billion, to close by the middle of next year.
Confluent CEO and co-founder Jay Kreps stated in an email sent internally to staff about the acquisition, "IBM sees the same future we do: one in which enterprises run on continuous, event-driven intelligence, with data moving freely and reliably across every part of the business."
It's a good move for IBM, noted Scott Bickley, an advisory fellow at Info-Tech Research Group. "[Confluent] fills a critical gap within the watsonx platform, IBM's next-gen AI platform, by providing the ability to monitor real-time data," he said, and is based on the industry standard for managing and processing real-time data streams.
He added, "IBM already has the pieces of the puzzle required to build and train AI models; Confluent provides the connective tissue to saturate those models with continuous live data from across an organization's entire operation, regardless of the source. This capability should pave the road ahead for more complex AI agents and applications that will be able to react to data in real time."
He also pointed out that the company is playing the long game with this acquisition, which is its largest in recent history. "IBM effectively positions itself proactively to compete against the AI-native big data companies like Snowflake and Databricks, who are all racing towards the same 'holy grail' of realizing AI agents that can consume, process, and react to real-time data within the context of their clients' trained models and operating parameters," he said, adding that IBM is betting that a full-stack vertical AI platform, watsonx, will be more appealing to enterprise buyers than a composable solution comprised of various independent components.
The move, he noted, also complements previous acquisitions such as the $34.5 billion acquisition of Red Hat and the more recent $6.4 billion acquisition of Hashicorp, all of which are built upon dominant open source standards including Linux, Terraform/Vault, and Kafka. This allows IBM to offer a stand-alone vertical, hybrid cloud strategy with full-stack AI capabilities apart from the ERP vendor space and the point solutions currently available.
In addition, he said, the timing was right; Confluent has been experiencing a slowing of revenue growth and was reportedly shopping itself already.
"At the end of the day, this deal works for both parties. IBM is now playing a high-stakes game and has placed its bet that having the best AI models is not enough; it is the control of the data flow that will matter," he said.

Tech marketplaces: Solving the last-mile billing barrier to global growth
According to an IoT Analytics report from early 2024, 1.8% of global enterprise software was sold via marketplaces in 2023, a share forecast to grow to nearly 10% by 2030. Although marketplaces remain a minority channel today, the segment is growing far faster than any other IT sales channel.
The concept of a technology marketplace as a central hub for software distribution predates the cloud, but I believe its current surge is driven by a fundamentally new dynamic. Cloud giants, or hyperscalers, have reinvented the model by transforming independent software vendors (ISVs) into a motivated army of sales channels. What are the keys to this accelerated growth? And what is the role of the principal actors in this new era of technology commercialization?
The new hyperscaler-ISV economic symbiosis
This new wave of marketplaces is spearheaded by hyperscalers, whose strategy I see as centered on an economic symbiosis with ISVs. The logic is straightforward: an ISV's software runs on the hyperscaler's infrastructure. Consequently, every time an ISV sells its solution, it directly drives increased consumption of cloud services, generating a dual revenue stream for the platform.
This pull-through effect, where the ISV's success translates directly into the platform's success, is the core incentive that has motivated hyperscalers to invest heavily in developing their marketplaces as a strategic sales channel.
The five players in the marketplace ecosystem
The marketplace ecosystem involves and impacts five key players: the ISV, the hyperscaler, the end customer, the distributor and the reseller or local hyperscaler partner. Let's examine the role of each.
The ISV as the innovative specialist
In essence, I see the ISV as the entity that transforms the hyperscaler's infrastructure into a tangible, high-value business solution for the end customer. For ISVs, the marketplace is a strategic channel that dramatically accelerates their time-to-market. It allows them to simplify transactional complexities, leverage the hyperscaler's global reach and tap into the budgets of customers already under contract with the platform. This can even extend to mobilizing the hyperscaler's own sales teams as an indirect channel through co-selling programs.
However, in my view, this model presents challenges for the ISV, primarily in managing customer relationships and navigating channel complexity. By operating through one or two intermediaries (the hyperscaler or a local partner), the ISV inevitably cedes some control over and proximity to the end customer.
Furthermore, while partner-involved arrangements simplify the transaction for the customer, they introduce a new layer of complexity for the ISV, who must now manage margin agreements, potential channel conflicts and the tax implications of an indirect sales structure, especially in international transactions.
The hyperscaler as the ecosystem enabler
As the ecosystem enabler, the hyperscaler provides the foundational infrastructure upon which ISVs operate. By leveraging their massive global customer base, I see hyperscalers strategically promoting the marketplace with a dual objective: to increase customer loyalty and retention (stickiness) and to drive the cloud consumption generated by these ISVs.
In doing so, the hyperscaler transcends its original role to become the central operator of the ecosystem, assuming what I believe is a new, influential function as a financial and commercial intermediary.
The end customer as the center of gravity
In this ecosystem, the end customer acts as the center of gravity. Their influence stems from their business needs and, most critically, their budget. Both hyperscalers and ISVs align their strategies to meet the customer's primary demand: transforming a traditionally complex procurement process into a centralized and efficient experience.
However, this appeal can be diminished by operational constraints. A primary limitation arises in territories where the customer cannot pay for purchases in the local currency. This entails managing payments in foreign currencies, reintroducing a level of fiscal and exchange-rate complexity that counteracts the very simplicity that drew them to the marketplace.
The partner as the local reseller
The partner acts as a local reseller in the customerâs procurement process, particularly in countries where the hyperscaler does not have a direct billing entity. In this model, the reseller manages the contractual relationship and invoices the end customer in the local currency, simplifying the transaction for the customer.
This arrangement, however, challenges the marketplace model, which was designed for direct transactions between the hyperscaler and the customer. When a local reseller becomes the billing intermediary, the standard model becomes complicated as it does not natively account for the elements the partner introduces:
- Partner margin: The payment flow must accommodate the reseller's commission.
- Credit risk: The partner, not the hyperscaler, assumes the risk if the end customer defaults on payment.
- Tax implications: The partner must manage the complexities of international invoicing and related withholding taxes (WHT).
This disconnect has been, in my analysis, a significant barrier to the global expansion of ISV sales through marketplaces in regions where the hyperscaler lacks a legal entity.
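To make the arithmetic concrete, here is a toy Python sketch of how a single marketplace payment might be split once a local reseller sits in the middle. Every rate below (marketplace fee, partner margin, withholding tax) is an invented example for illustration, not any real platform's terms:

```python
from dataclasses import dataclass

@dataclass
class ResellerDeal:
    """Illustrative split of a marketplace sale routed through a local reseller.

    All percentages are hypothetical examples, not real marketplace terms.
    The reseller also carries the credit risk if the customer defaults.
    """
    list_price: float          # what the end customer is invoiced, in local currency
    marketplace_fee: float     # hyperscaler's cut of the transaction
    partner_margin: float      # reseller's commission for billing and local support
    withholding_tax: float     # WHT withheld on the cross-border remittance to the ISV

    def settlement(self) -> dict:
        platform_cut = self.list_price * self.marketplace_fee
        partner_cut = self.list_price * self.partner_margin
        gross_to_isv = self.list_price - platform_cut - partner_cut
        wht = gross_to_isv * self.withholding_tax
        return {
            "customer_pays": round(self.list_price, 2),
            "hyperscaler_fee": round(platform_cut, 2),
            "partner_margin": round(partner_cut, 2),
            "withholding_tax": round(wht, 2),
            "net_to_isv": round(gross_to_isv - wht, 2),
        }

# Hypothetical deal: a 100,000 (local currency) sale with a 3% platform fee,
# 10% partner margin, and 15% withholding tax on the remittance.
deal = ResellerDeal(list_price=100_000.0, marketplace_fee=0.03,
                    partner_margin=0.10, withholding_tax=0.15)
print(deal.settlement())
```

The point of the sketch is the gap it exposes: an ISV that prices as if it were selling direct silently loses the partner margin and WHT from its net proceeds.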
The distributor as an aggregator being replaced
Historically, distributors have been the major aggregators in the technology ecosystem, managing relationships and contracts with thousands of ISVs and leading the initial wave of software commercialization. In the new era of digital distribution, however, hyperscaler marketplaces have emerged as a formidable competitor.
In my opinion, the marketplace model strikes at the core of the software distribution business by offering a more efficient platform for transacting digital assets. This leaves distributors to compete primarily on their advantage in handling tangible technology assets.
Key trends: Two noteworthy cases in marketplaces
The strategic use of cloud consumption commitments: A key driver accelerating marketplace adoption is its integration with annual and multiyear cloud consumption contracts. These agreements, in which a customer commits to a minimum expenditure, can often be used to purchase ISV solutions from the marketplace. This creates what I see as a threefold benefit:
- The customer can leverage a pre-approved budget to acquire new technology, expediting procurement.
- The ISV can close sales faster by overcoming budget hurdles.
- The hyperscaler ensures the customer fulfills their consumption commitment, thereby increasing retention.
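The drawdown mechanic behind this threefold benefit can be modeled in a few lines of Python. The commitment size and purchase amounts are invented, and real programs differ in which marketplace purchases are eligible to count:

```python
class ConsumptionCommitment:
    """Toy tracker for a committed cloud spend; eligible marketplace ISV
    purchases draw down the same commitment as infrastructure usage."""

    def __init__(self, committed: float):
        self.committed = committed
        self.consumed = 0.0

    def record(self, amount: float, counts_toward_commit: bool = True) -> None:
        # Eligibility rules vary by provider; modeled here as a simple flag.
        if counts_toward_commit:
            self.consumed += amount

    @property
    def remaining(self) -> float:
        return max(self.committed - self.consumed, 0.0)

# Hypothetical $1M annual commitment
deal = ConsumptionCommitment(1_000_000)
deal.record(600_000)   # regular infrastructure usage
deal.record(250_000)   # ISV solution bought through the marketplace
print(deal.remaining)  # 150000.0 of commitment still to burn down
```

This is why the incentives align: the ISV purchase needs no new budget approval, and it moves the customer closer to fulfilling the committed spend.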
The integration of professional services is the missing piece: A traditional limitation of marketplaces was their focus solely on software transactions, excluding the professional services (e.g., consulting, migration, implementation) required to deploy them. This created a process gap, forcing customers to manage a separate services contract.
While I have seen the inclusion of some professional services packages directly in marketplaces, this is not universally available for all ISVs. As a result, professional services remain the key missing link needed to complete the sale and offer the customer a comprehensive solution (software + services) in a single transaction.
Key actions for the ecosystem
This new wave of marketplaces is expected to continue its accelerated growth and capture a significant share of the technology distribution market. Assuming this transition is inevitable, I offer the following strategic recommendations for the ecosystem's key players.
ISVs: Adapt the commercial model to the channel
I believe ISVs must incorporate the costs associated with the partner channel into their marketplace pricing strategy. When a sale requires a local reseller, the ISV's commercial model must account for a clear partner margin and the impact of withholding taxes.
I've seen that failure to do so will disincentivize the partner from promoting the solution, potentially blocking the sale or, more likely, leading them to offer a competing solution that protects their profitability.
Hyperscalers: Resolve global billing friction
To realize the full global growth potential of the marketplace, hyperscalers must overcome the obstacle of international billing. The solution lies in one of two paths:
- Direct investment: Establish local subsidiaries in strategic countries to enable local currency invoicing and ensure compliance with regional tax regulations.
- Channel enablement: Design a financially viable model that empowers and compensates local partners to manage billing, assume credit risk and handle administrative complexity in exchange for a clear margin.
Customers: Establish governance and clarity in the billing model
The very simplicity that makes the marketplace attractive is also its greatest risk. The ease of procurement can lead to uncontrolled spending or the acquisition of redundant solutions if clear governance policies are not implemented.
It is essential to establish centralized controls to manage who can purchase and what can be purchased, thereby preventing agility from turning into a budgetary liability.
Customers must also verify whether a transaction will be billed directly by the hyperscaler (potentially involving an international payment in a foreign currency) or through a local partner. This distinction is critical as it determines the vendor of record and has direct implications for managing local taxes and withholding.
Partners: Proactively protect your profitability
From my analysis, the primary risk for a partner is financial; specifically, a loss of profitability when a managed client purchases directly from the marketplace, as this eliminates the partner's margin and creates tax uncertainty. Attempting to resolve this retroactively with a penalty clause is often contentious and difficult to enforce.
The solution must be preventative and contractual. A partner of record agreement should be established with the client at the outset of the relationship. This agreement must clearly stipulate that, in exchange for the value the partner provides (e.g., consulting, support, local management), they will be the designated channel for all marketplace transactions.
This protects the partner's profitability, prevents losses from unmanaged transactions and aligns the interests of the client and the partner, ensuring the partner's value is recognized and compensated with every purchase.
Distributors: Differentiate your value
Faced with diminishing relevance due to hyperscaler marketplaces, distributors must redefine their value proposition. Their strategy should focus on developing an ecosystem of value-added services on their own platform to encourage direct customer purchases and compete more effectively.
The final frontier of frictionless growth
The shift to marketplace distribution is an undeniable force that will reshape how enterprise technology is bought and sold globally. However, the true promise of this model (frictionless, one-stop procurement for the end customer) remains constrained by the very complexities it seeks to eliminate: international billing, channel compensation and tax adherence.
The transition from a domestic (US-centric), direct-sale mindset to a truly global, indirect channel model is the final frontier. Those who solve the "last mile" of global channel and billing complexity will be the ones to truly own the future of enterprise software distribution.
This article is published as part of the Foundry Expert Contributor Network.

Google Confirms Rising "Account Takeovers": Users Told to Check Chrome Settings
Google warns Chrome users of rising "account takeovers" and urges stronger authentication to keep accounts and synced data safe.
The post Google Confirms Rising "Account Takeovers": Users Told to Check Chrome Settings appeared first on TechRepublic.
How police live facial recognition subtly reconfigures suspicion
New Splunk Windows Flaw Enables Privilege Escalation Attacks
Splunk for Windows has a high-severity flaw that lets local users escalate privileges through misconfigured file permissions. Learn how to fix it.
The post New Splunk Windows Flaw Enables Privilege Escalation Attacks appeared first on TechRepublic.
Meet the MAESTRO: AI agents are ending multi-cloud vendor lock-in
For today's CIO, the multi-cloud landscape, extending across hyperscalers, enterprise platforms, and AI-native cloud providers, is a non-negotiable strategy for business resilience and innovation velocity. Yet this very flexibility can become a liability, often leading to fragmented automation, vendor sprawl, and costly data silos. The next frontier in cloud optimization isn't better scripting; it's Agentic AI systems.
These autonomous, goal-driven systems, deployed as coordinated multi-agent ecosystems, act as an enterprise's "MAESTRO." They don't just follow instructions; they observe, plan, and execute tasks across cloud boundaries in real time, effectively transforming vendor sprawl from a complexity tax into a strategic asset.
The architecture of cross-cloud agent interoperability
The core challenge in a multi-cloud environment is not the platforms themselves, but the lack of seamless interoperability between the automation layers running on them. The MAESTRO architecture (referencing the Cloud Security Alliance's MAESTRO agentic AI threat modeling framework; MAESTRO stands for multi-agent environment, security, threat, risk and outcome) solves this by standardizing the language and deployment of these autonomous agents:
1. The open standards bridge: A2A protocol
For agents to coordinate effectively (to enable a FinOps agent on one cloud to negotiate compute resources with an AIOps agent on another), they must speak a common, vendor-agnostic language. This is where the emerging Agent2Agent (A2A) protocol becomes crucial.
The A2A protocol is an open, universal standard that enables intelligent agents, regardless of vendor or underlying model, to discover, communicate, and collaborate. It provides the technical foundation for:
- Dynamic capability discovery: Agents can publish their identity and skills, allowing others to discover and connect without hard-coded integrations.
- Context sharing: Secure exchange of context, intent, and status, enabling long-running, multi-step workflows like cross-cloud workload migration or coordinated threat response.
To fully appreciate the power of the Maestro architecture, consider a critical cross-cloud workflow: strategic capacity arbitrage and failover. A FinOps agent on a general-purpose cloud continuously monitors an AI inference workload's service-level objectives (SLOs) and cost-per-inference. When a sudden regional outage is detected by an AIOps agent on the same cloud, the AIOps agent broadcasts a high-priority "capacity sourcing" intent using the A2A protocol. The Maestro orchestrates an immediate response, allowing the FinOps agent to automatically negotiate and provision the required GPU capacity with a specialized neocloud agent. Simultaneously, a security agent ensures the new data pipeline adheres to the required data sovereignty rules before the workload migration agent seamlessly shifts the portable Kubernetes container to the new, available capacity, all in under a minute to maintain continuous model performance. This complex, real-time coordination is impossible without the standardized language and interoperability provided by the A2A protocol and the Kubernetes-native deployment foundation.
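The discovery-and-intent flow described above can be sketched in plain Python. This is only an illustration of the pattern, not the actual A2A wire protocol; the agent names, skill labels, and registry are all invented:

```python
from dataclasses import dataclass

@dataclass
class AgentCard:
    """Minimal stand-in for an A2A-style capability advertisement."""
    name: str
    skills: set

class Registry:
    """Toy discovery service: agents publish cards, peers query by skill."""
    def __init__(self):
        self.cards: list[AgentCard] = []

    def publish(self, card: AgentCard) -> None:
        self.cards.append(card)

    def discover(self, skill: str) -> list[AgentCard]:
        return [c for c in self.cards if skill in c.skills]

registry = Registry()
registry.publish(AgentCard("finops-agent", {"cost-arbitrage", "procurement"}))
registry.publish(AgentCard("neocloud-agent", {"gpu-capacity", "provisioning"}))
registry.publish(AgentCard("security-agent", {"sovereignty-check"}))

# An AIOps agent broadcasting a "capacity sourcing" intent amounts to two
# queries: who can provision GPUs, and who can gate the move on sovereignty.
suppliers = registry.discover("gpu-capacity")
guards = registry.discover("sovereignty-check")
plan = [a.name for a in guards + suppliers]
print(plan)  # ['security-agent', 'neocloud-agent']
```

Because agents are matched by advertised skill rather than hard-coded endpoints, a new neocloud supplier joins the negotiation simply by publishing a card.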
2. The deployment foundation: Kubernetes-native frameworks
To ensure agents can be deployed, scaled, and managed consistently across clouds, we must leverage a Kubernetes-native approach. Kubernetes is already the de facto orchestration layer for enterprise cloud-native applications. New Kubernetes-native agent frameworks, like kagent, are emerging to extend this capability directly to multi-agent systems.
This approach gives the Maestro:
- Zero-downtime agent portability: Packaging agents as standard containers makes it trivial to move a high-value security agent from one cloud to another for resilience or cost arbitrage.
- Observability and auditability: Kubernetes' built-in tools for monitoring, logging, and security give visibility into each agent's actions and decision-making process, a non-negotiable requirement for autonomous systems.
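To illustrate the portability claim, the sketch below builds a minimal, provider-agnostic Kubernetes Deployment manifest for an agent as a plain Python dict (the agent name and image are hypothetical). Because nothing in it is cloud-specific, the same manifest can be applied to a cluster on any provider:

```python
def agent_deployment(name: str, image: str, replicas: int = 1) -> dict:
    """Build a cloud-agnostic Kubernetes apps/v1 Deployment manifest for an agent.

    Nothing here references a particular provider, which is what makes the
    agent portable: the identical manifest works on any conformant cluster,
    whether it runs on a hyperscaler or a neocloud.
    """
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template labels, or the
            # Deployment is rejected by the API server.
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# Hypothetical security-agent image; serialize to YAML/JSON and apply anywhere.
manifest = agent_deployment("security-agent", "registry.example.com/security-agent:1.2")
print(manifest["kind"], manifest["spec"]["replicas"])  # Deployment 1
```

Moving the agent to another cloud is then a matter of applying the same manifest to a different cluster context, not rewriting automation.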
Strategic value: Resilience and zero lock-in
The Maestro architecture fundamentally shifts the economics and risk profile of a multi-cloud strategy.
- Reduces vendor lock-in: By enforcing open standards like A2A, the enterprise retains control over its core AI logic and data models. The Maestro's FinOps agents are now capable of dynamic cost and performance arbitrage across a more diverse compute landscape that includes specialized providers. Neoclouds are purpose-built for AI, offering GPU-as-a-Service (GPUaaS) and unique performance advantages for training and inference. By packaging AI workloads as portable Kubernetes containers, the Maestro can seamlessly shift them to the most performant or cost-effective platform, whether that is an enterprise cloud for regulated workloads or a specialized AI-native cloud for massive, high-throughput training. As BCG emphasizes, managing the evolving dynamics of digital platform lock-in requires disciplined sourcing and modular, loosely coupled architectures. The agent architecture makes it dramatically easier to port or coordinate high-value AI services, providing true strategic flexibility.
- Enhances business resilience (AIOps): AIOps agents, orchestrated by the Maestro, can perform dynamic failover, automatically redirecting traffic or data pipelines between regions or providers during an outage. Furthermore, the Maestro can orchestrate strategic capacity sourcing, instantly rerouting critical AI inference workloads to available, high-performance GPU capacity offered by specialized neoclouds to ensure continuous model performance during a regional outage on a general-purpose cloud. They can also ensure compliance by dynamically placing data or compute in the "greenest" (most energy-efficient) cloud or the required sovereign region to meet data sovereignty rules.
The future trajectory
The shift to the Maestro architecture represents more than just a technological upgrade; it signals the true democratization of the multi-cloud ecosystem. By leveraging open standards like A2A, the enterprise is moving away from monolithic vendor platforms and toward a vibrant, decentralized marketplace of agentic services. In this future state, enterprises will gain access to specialized, hyper-optimized capabilities from a wide array of providers, treating every compute, data, or AI service as a modular, plug-and-play component. This level of strategic flexibility fundamentally alters the competitive landscape, transforming the IT organization from a consumer of platform-centric services to a strategic orchestrator of autonomous, best-of-breed intelligence. This approach delivers the "strategic freedom from vendor lock-in" necessary to continuously adapt to market changes and accelerate innovation velocity, effectively turning multi-cloud complexity into a decisive competitive advantage.
Governance: Managing the autonomous agent sprawl
The power of autonomous agents comes with the risk of "misaligned autonomy": agents doing what they were optimized to do, but without the constraints and guardrails the enterprise forgot to encode. Success requires a robust governance framework to manage the burgeoning population of agents.
- Human-in-the-loop (HITL) for critical decisions: While agents execute most tasks autonomously, the architecture must enforce clear human intervention points for high-risk decisions, such as a major cost optimization that impacts a business-critical service or an automated incident response that involves deleting a core data store. Gartner emphasizes the importance of transparency, clear audit trails, and the ability for humans to intervene or override agent behavior. In fact, Gartner predicts that by 2028, loss of control (where AI agents pursue misaligned goals) will be the top concern for 40% of Fortune 1000 companies.
- The four pillars of agent governance: A strong framework must cover the full agent lifecycle:
- Lifecycle management: Enforcing separation of duties for development, staging, and production.
- Risk management: Implementing behavioral guardrails and compliance checks.
- Security: Applying least privilege access to tools and APIs.
- Observability: Auditing every action to maintain a complete chain of reasoning for compliance and debugging.
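One way to picture the HITL, least-privilege, and observability pillars working together is a small policy gate in front of every agent action. The risk labels, action names, and thresholds below are invented for illustration:

```python
from dataclasses import dataclass

# Hypothetical set of actions that always require a human checkpoint (HITL).
HIGH_RISK = {"delete_datastore", "migrate_workload", "change_iam_policy"}

@dataclass
class AgentAction:
    agent: str
    action: str
    allowed_actions: set  # least-privilege grant for this specific agent

def gate(request: AgentAction, audit_log: list) -> str:
    """Route an agent action: deny anything outside the agent's grant,
    escalate high-risk actions to a human, auto-approve the rest.
    Every decision is appended to the audit log (observability pillar)."""
    if request.action not in request.allowed_actions:
        decision = "denied"       # least privilege: not in this agent's grant
    elif request.action in HIGH_RISK:
        decision = "needs_human"  # HITL checkpoint for high-risk decisions
    else:
        decision = "auto_approved"
    audit_log.append((request.agent, request.action, decision))
    return decision

log: list = []
print(gate(AgentAction("finops", "resize_instance", {"resize_instance"}), log))   # auto_approved
print(gate(AgentAction("aiops", "delete_datastore", {"delete_datastore"}), log))  # needs_human
print(gate(AgentAction("finops", "change_iam_policy", {"resize_instance"}), log)) # denied
```

The ordering matters: an action outside the grant is denied before any risk scoring, and the log preserves the full chain of decisions for later audit.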
By embracing this Maestro architecture, CIOs can transform their multi-cloud complexity into a competitive advantage, achieving unprecedented levels of resilience, cost optimization, and, most importantly, strategic freedom from vendor lock-in.
This article is published as part of the Foundry Expert Contributor Network.

IBM Pursues AI Expansion With $11B Confluent Acquisition
The move reflects a rapidly intensifying race among technology giants to strengthen the data foundations required for generative and agentic AI.
The post IBM Pursues AI Expansion With $11B Confluent Acquisition appeared first on TechRepublic.
X Blocks EU Ads Account After €120m Fine, Escalating Clash Between Elon Musk and Brussels
X shut down the European Commission's ad account after a €120M DSA fine, sparking a tense clash over transparency, platform rules, and rising political fallout.
The post X Blocks EU Ads Account After €120m Fine, Escalating Clash Between Elon Musk and Brussels appeared first on TechRepublic.
10 Best Remote Work Tools for Team Collaboration in 2026
Collaboration tools are core to every remote team's success. These remote work solutions improve communication, support smoother workflows, and help distributed teams get more work done efficiently.
The post 10 Best Remote Work Tools for Team Collaboration in 2026 appeared first on TechRepublic.
Why cyber resilience must be strategic, not a side project
As one of the worldâs foremost voices on cybersecurity and crisis leadership, Sarah Armstrong-Smith has spent her career at the intersection of technology, resilience and human decision-making. Formerly chief security advisor at Microsoft Europe, and now a member of the UK Government Cyber Advisory Board, she is widely recognized for her ability to translate complex technical challenges into actionable business strategy.
In this exclusive interview with The Cyber Security Speakers Agency, Sarah explores how today's CIOs must evolve from technology enablers into resilience architects, embedding cyber preparedness into the core of business strategy. Drawing on decades of experience leading crisis management and resilience functions at global organizations, she offers a masterclass in how technology leaders can balance innovation with security, manage disruption with clarity and build cultures of trust in an era defined by volatility and digital interdependence.
For business and technology leaders navigating the next wave of transformation, Sarah's insights offer a rare blend of strategic depth and practical foresight: a roadmap for leadership in the age of perpetual disruption.
1. As digital transformation accelerates, how can CIOs embed cyber resilience into the very fabric of business strategy rather than treating it as a separate function?
Cyber resilience should be recognised as a strategic enabler, not merely a technical safeguard. CIOs must champion a holistic approach where resilience is woven into every stage of digital transformation, from initial design through to deployment and ongoing operations.
This requires close collaboration with business leaders to ensure risk management and security controls are embedded from the outset, rather than being an afterthought. By aligning cyber resilience objectives with business outcomes, CIOs can work alongside CISOs to help their organizations anticipate threats, adapt rapidly to disruptions and maintain stakeholder trust.
Embedding resilience also demands a shift in organizational mindset. CIOs should help to foster a culture where every employee understands their role in protecting digital assets and maintaining operational service.
This involves education and cross-functional exercises that simulate real-world incidents, aligned to current threats. By making resilience a shared responsibility and a key performance metric, CIOs can ensure their organizations are not only prepared to withstand a range of threats but are also positioned to recover quickly and thrive in the face of adversity.
2. CIOs and CISOs often face tension between innovation and security. Whatâs your advice for maintaining that balance while still driving progress?
Balancing innovation and security is a constant challenge that requires CIOs to act as both risk managers and business catalysts. The key is to embed security and resilience considerations early in the innovation lifecycle, ensuring new technologies and processes are assessed for risk early and often.
CIOs should promote agile governance frameworks that allow for rapid experimentation while maintaining clear guardrails around information protection, compliance and operational integrity. By involving security teams from the outset, organizations can identify potential vulnerabilities before they become systemic issues.
At the same time, CISOs must avoid creating a culture of fear that stifles creativity. Instead, they should encourage responsible risk-taking by providing teams with the tools, guidance and autonomy to innovate securely.
This includes leveraging automation, zero-trust architectures and continuous monitoring to reduce vulnerabilities and enable faster, safer deployment of solutions. Ultimately, the goal is to create an environment where innovation and security are mutually reinforcing, driving competitive advantage and organizational resilience.
3. Youâve led crisis management and resilience teams across major organizations. What leadership lessons can CIOs take from managing incidents under pressure?
Effective crisis leadership is built on preparation, decisiveness and transparent communication. CIOs must ensure their teams are well-versed in incident response and empowered to act swiftly when an incident occurs.
This means investing in due diligence, having clear escalation paths and robust playbooks that outline the critical path, and designated roles and responsibilities. During a crisis, leaders must remain calm, protect critical assets and make informed decisions based on real-time intelligence.
Equally important is the ability to communicate clearly with both internal and external stakeholders. CIOs and CISOs should work in unison to provide timely updates to the board, regulators and customers, balancing transparency with the need to protect vulnerable people and sensitive data.
Demonstrating accountability and empathy during a crisis can help preserve trust and minimise reputational damage. After the incident, leaders should commit fully to post-mortems that identify "no blame" lessons learned and drive continuous improvement, ensuring the organization emerges stronger and more resilient.
4. With AI transforming both security threats and defences, what role should CIOs play in governing ethical and responsible AI adoption?
CIOs are uniquely positioned to guide the ethical deployment of AI and emerging tech, balancing innovation with risk management and societal responsibility. They should contribute to governance frameworks that address data privacy, algorithmic bias and transparency, ensuring AI systems are designed and operated in accordance with core organizational policies and regulatory requirements. This involves collaborating with legal, compliance and HR teams to develop policies that safeguard against unintended consequences and their downstream impact.
Additionally, CIOs should champion ongoing education and awareness around AI ethics, both within IT and across the wider organization. By fostering a culture of accountability and continuous learning, CIOs can help teams identify and mitigate risks associated with AI through the implementation of rigorous engineering principles.
Regular technical and security assessments and stakeholder engagement are essential to maintaining trust and ensuring AI adoption delivers positive outcomes for those most impacted by it.
5. In your experience, what distinguishes organizations that recover stronger from a cyber incident from those that struggle to regain trust?
Organizations that recover stronger from cyber incidents typically demonstrate resilience through proactive planning, transparent communication and a commitment to continuous improvement. They invest in proactive and reactive capabilities and a positive culture driven by empathetic leadership, empowerment and accountability.
When an incident occurs, these organizations respond swiftly, contain the threat and communicate transparently with stakeholders about the actions being taken to remediate and reduce future occurrences.
Conversely, organizations that struggle often lack preparedness and fail to engage stakeholders effectively. Delayed or inconsistent communication can erode trust and amplify reputational damage.
The most resilient organizations treat incidents and near-misses as learning opportunities, conducting thorough post-incident reviews and implementing changes to strengthen their defences. By prioritising transparency, accountability and a culture of resilience, CIOs can help their organizations not only recover but also enhance their reputation and stakeholder confidence.
6. How can CIOs cultivate a security-first culture across non-technical teams â especially in remote or hybrid work environments?
Cultivating a security-first culture requires CIOs and CISOs to make cybersecurity relevant and accessible to all employees, regardless of technical expertise. This starts with tailored training programmes that address the specific risks faced by different stakeholders, rather than a one-size-fits-all approach.
These should leverage engaging formats, such as interactive workshops, gamified learning and real-world simulations, to reinforce positive behaviors and outcomes.
Beyond training, CIOs and CISOs must embed security into everyday workflows by providing user-friendly tools and clear guidance. Regular communication, visible leadership and recognition of positive security behaviors can help sustain momentum.
In hybrid environments, CIOs should ensure policies are dynamic and adaptive to evolving threats, enabling employees to work securely without sacrificing productivity. By fostering a sense of shared responsibility and empowering non-technical teams, CIOs can build a resilient culture that extends beyond the IT department.
7. Boards are increasingly holding CIOs accountable for resilience and risk. How can technology leaders communicate complex security risks in business language?
To effectively engage boards, CIOs must translate technical issues into enterprise risks, framing cybersecurity and resilience as a strategic imperative rather than a technical challenge. This involves articulating how exposure to specific threats could affect safety, revenue, reputation, regulatory compliance and operational services. CIOs and CISOs should use clear, non-technical language, supported by real-world scenarios, to illustrate the potential consequences of ineffective controls and the value of resilience investments.
Regular, structured and diligent reporting, such as dashboards, heat maps and risk registers, can help boards visualise enterprise risk exposure and track progress over time. CIOs should foster open dialogue, encouraging board members to ask questions and participate in scenario planning.
By aligning security discussions with business objectives and demonstrating the ROI of resilience initiatives, technology and security leaders can build trust and secure the support needed to drive meaningful change.
8. What emerging risks or trends should CIOs be preparing for in 2025 and beyond?
CIOs must stay ahead of a rapidly evolving threat landscape, characterised by the proliferation of AI-enabled attacks, supply chain vulnerabilities and targeted campaigns. The rise of quantum computing poses long-term risks to traditional encryption methods, necessitating understanding and early exploration of quantum-safe solutions.
Additionally, regulatory scrutiny around data sovereignty and ethical AI is intensifying, requiring codes of conduct and governance strategies.
Beyond technology, CIOs should anticipate continuous shifts in workforce dynamics, such as the increase in human-related threats. Societal risks, geopolitical instability and the convergence of physical and cyber threats are also shaping the resilience agenda. By maintaining a forward-looking perspective and investing in adaptive capabilities, leaders can position their organizations to navigate uncertainty and capitalize on emerging opportunities.
9. How important is collaboration between CIOs and other business leaders, such as CFOs and CHROs, in building organizational resilience?
Collaboration across the entire C-suite is essential for building holistic resilience that encompasses people, technology, finance and processes. CIOs must work closely with CFOs to align resilience investments with business priorities, and with CROs to ensure risk management strategies are financially sustainable. Engaging CHROs is equally important, as workforce readiness and culture play a critical role in responding to and recovering from disruptions.
Joint initiatives such as cross-functional crisis simulations, integrated risk assessments and shared accountability frameworks can help break down silos and foster a unified approach to resilience.
By leveraging diverse perspectives and expertise, CIOs can drive more effective decision-making and ensure resilience is embedded throughout the organization. Ultimately, strong collaboration enables organizations to reduce assumptions, anticipate challenges, respond cohesively and emerge stronger in times of adversity.
10. Finally, what personal qualities do you believe future-ready CIOs must develop to lead effectively through constant disruption?
Future-ready CIOs must embody adaptability, strategic vision and emotional intelligence. The pace of technological change and the frequency of disruptive events demand leaders who can pivot quickly, embrace uncertainty and inspire confidence in their teams. CIOs should cultivate an inquisitive mindset, continuously seeking new knowledge and challenging conventional wisdom to stay ahead of emerging trends.
Equally important are communication and collaboration skills. CIOs must be able to articulate complex ideas clearly, build consensus across diverse stakeholders and foster a culture of trust and accountability.
Resilience, empathy and a commitment to ethical leadership will enable CIOs to navigate challenges with integrity and guide their organizations through periods of uncertainty and transformation. By developing these qualities, CIOs can lead with purpose and drive sustainable success in an ever-changing landscape.

- CIO
- An Engineer's Introduction to Implementing LLM Agents: From Framework Selection to Building a Prototype
Grasping the overall architecture
The important first step is being able to picture the basic architecture of an LLM agent system. In most cases an LLM inference API sits at the core, surrounded by prompt templates, a set of tools, a memory store, a vector database for RAG, and logging and monitoring machinery. The agent itself is implemented as an orchestration layer that ties these together and manages the observe-think-act loop.
A request from the client is first handed to the agent through the application server. The agent builds a prompt from the current context and its memory, then calls the LLM API. Any tool calls contained in the LLM's output are parsed, and the corresponding tool functions or external APIs are executed. The results are returned to the agent, folded into the prompt for the next step, and the loop continues.
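The loop just described can be sketched in a few lines of Python. Everything here, including `call_llm`, the `TOOLS` registry, and the JSON tool-call convention, is illustrative rather than any particular framework's API; a real implementation would call an actual inference endpoint and parse tool calls far more defensively.

```python
import json

# Illustrative tool registry: tool name -> plain Python callable.
TOOLS = {
    "add": lambda args: args["a"] + args["b"],
}

def call_llm(prompt):
    # Stand-in for a real LLM inference API. This fake model requests one
    # tool call, then produces a final answer once it sees a tool result.
    if "TOOL_RESULT" not in prompt:
        return json.dumps({"tool": "add", "args": {"a": 1, "b": 2}})
    return "The answer is 3."

def run_agent(user_request, max_steps=5):
    """Observe-think-act loop: build prompt, call LLM, run tools, repeat."""
    context = [f"USER: {user_request}"]
    for _ in range(max_steps):
        output = call_llm("\n".join(context))
        try:
            call = json.loads(output)       # structured output = tool call
        except json.JSONDecodeError:
            return output                   # plain text = final answer
        result = TOOLS[call["tool"]](call["args"])
        context.append(f"TOOL_RESULT {call['tool']}: {result}")
    return "step limit reached"
```

The `max_steps` cap is the one non-negotiable part of the design: without it, a confused model can loop on tool calls indefinitely.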
When RAG is incorporated, the agent invokes a search tool as needed and fetches documents relevant to the user's question or task from the vector database. The retrieved text is added to the LLM's context to support fact-grounded answers and decisions. The memory store holds long-term, per-user information and intermediate task state, which is reused in later interactions.
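A minimal sketch of that retrieval step, with naive keyword overlap standing in for a real vector-database similarity search; the function names and prompt format are hypothetical, not from any specific RAG library:

```python
def retrieve(query, docs, top_k=2):
    """Naive keyword-overlap scoring standing in for vector similarity."""
    terms = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(terms & set(d.lower().split())),
                    reverse=True)
    return ranked[:top_k]

def build_prompt(question, docs):
    """Fold retrieved snippets into the context ahead of the question."""
    context = "\n".join(f"- {s}" for s in retrieve(question, docs))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {question}"
```

Swapping `retrieve` for an embedding-based search later leaves `build_prompt` untouched, which is exactly the kind of replaceability the architecture section argues for.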
Keeping this structure in mind makes the design decisions easier: what to build first, and what to keep swappable for later. For example, you can start with a plain RDBMS as the memory store and add a dedicated vector database or a caching layer afterwards, taking a step-by-step approach.
Framework selection and a small prototype
For implementation, you can either use one of the agent frameworks and workflow engines offered by vendors and the community, or write a lightweight orchestration layer yourself. Whichever you choose, the key to success is not trying to build a perfect foundation from day one.
When choosing a framework, check which LLM providers it supports, how easy tool integration is, how it manages state, and what logging and monitoring it offers. Readability and extensibility of the code also matter. The moment when you want fine-grained control over the agent's behavior will inevitably come, so in the long run it is safer to pick a framework whose internals you can understand than one that looks like a black box.
A good first prototype is an agent dedicated to a single, clear use case: for example, a research agent that combines web search with internal RAG to draft reports, or a help-desk agent that answers employee questions against the company FAQ. At this stage, keep authentication, complex permission management, and scaling strategy to a minimum; the goal is for the team to get a shared feel for how the agent behaves.
Within the prototype, limiting yourself to two or three tools and a simple, session-scoped memory makes the implementation easier. In exchange, log diligently: a mechanism that shows which prompts produced which outputs, and whether each tool call succeeded or failed, will pay off in later improvements.
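One way to keep such logs is one structured record per event. This sketch assumes nothing beyond the standard library, and the field names are illustrative; in production the sink would be a JSONL file or a log shipper rather than a list:

```python
import json
import time

def log_step(sink, step_type, **fields):
    """Append one structured record per prompt, LLM output, or tool call.

    `sink` is anything with an append() method. The schema here
    (ts / type / free-form fields) is an assumption for illustration.
    """
    record = {"ts": time.time(), "type": step_type, **fields}
    sink.append(json.dumps(record))
    return record
```

Because every record is self-describing JSON, the same log stream can later feed the scenario evaluations and the operational monitoring described below without a format change.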
Development process, testing, and evaluation
What tends to throw engineers in LLM agent development is the difficulty of testing. The same input often does not yield the same response, so conventional unit-test and snapshot-test techniques cannot be applied as they are. What matters instead is scenario-based evaluation, combining automated checks with human review.
Concretely, prepare several typical task scenarios and define the expected behavior for each, at the level of, say, "for this inquiry, cite the relevant section of the internal regulations and present three options." Run the agent against these scenarios regularly, judging pass or fail with LLM-based automatic evaluation or rule-based checkers. For the important scenarios, add human review to confirm subjective quality as well.
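A scenario harness of this kind can be very small. In the sketch below the checks are plain predicates; an LLM-based grader could be added as just another check. The scenario format is an assumption for illustration, not a standard:

```python
def evaluate(agent, scenarios):
    """Run the agent over each scenario and apply its rule-based checks.

    Each scenario is a dict: {"input": str, "checks": [answer -> bool, ...]}.
    """
    results = []
    for scenario in scenarios:
        answer = agent(scenario["input"])
        passed = all(check(answer) for check in scenario["checks"])
        results.append({"input": scenario["input"], "answer": answer,
                        "passed": passed})
    pass_rate = sum(r["passed"] for r in results) / len(results)
    return pass_rate, results
```

Tracking `pass_rate` per configuration change is what lets a CI job flag a regression in one use case caused by an "improvement" to another.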
As for the development process, keep prompts and tool configuration easy to change frequently, while building evaluation jobs into CI so you can see each change's blast radius. Every time the agent's configuration changes, run the scenario evaluations and visualize how the key metrics move. This catches, early, the situation where improving one use case quietly degrades another.
Finally, in the operations phase, user feedback and log analysis become the key sources of information. Provide an interface that lets users easily report whether an answer helped and what went wrong, tie that feedback to the logs, and use the analysis to prioritize improvements. Engineers end up continuously improving the system as a whole: not just tuning the model and prompts, but adding and removing tools, revisiting the memory strategy, and hardening error handling.
Implementing an LLM agent is not just wrapping API calls; it is a mixed-martial-arts discipline where inference systems, workflows, data infrastructure, and UX all intersect. But if you start from a small prototype and expand gradually while keeping the architectural skeleton in mind, you can grow an agent that stands up to production use at a realistic cost.

- CIO
- Risks and Governance for Building Safe LLM Agents: Hallucination, Security, and Legal Liability
The overall risk landscape specific to LLM agents
The first thing to understand is that the risks of LLM agents span multiple layers rather than being a single technical problem. One layer is hallucination in the LLM itself: its well-known habit of stating plausible but wrong information with complete confidence. Once the agent can reach external tools, those errors can turn into concrete actions. Trying to call a nonexistent API endpoint, or extracting data under the wrong conditions, directly affects business processes.
Next come security and privacy risks. An agent typically accesses not only the user's input but also internal systems and documents, handling confidential information along the way. If that information is sent outside the organization via the model provider or the logging pipeline, an information-governance risk arises. The possibility of attackers abusing the agent cannot be ignored either: a prompt injection attack, for instance, could rewrite the agent's operating instructions or trigger unintended data transmission or actions.
Then there is the question of legal liability. If content the agent generates, or an action it executes, leads to a breach of law or contract, who is responsible: the model provider, the service provider that embedded the agent, or the end user? In many areas this question still has no clear answer, which makes governance design all the harder.
How to think about guardrail design and permission management
Coping with these risks requires layering technical and operational guardrails. At the center sits permission management. As a rule, grant the agent only the minimum necessary privileges, and the safe approach is to start read-only. In a CRM integration, for example, a staged design would begin with read-only access to customer records, confirm that nothing goes wrong over a set period, and only then release record-update permissions in a limited way.
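As a sketch, least privilege can be enforced at a single choke point through which every tool call passes. The tool names and the `ALLOWED_TOOLS` set are illustrative, not from any real CRM integration:

```python
# Principle of least privilege: the agent starts read-only. Widening the
# set (e.g. adding "crm.update_record") is a deliberate, later decision.
ALLOWED_TOOLS = {"crm.read_customer"}

def invoke_tool(name, func, *args, **kwargs):
    """Central choke point: every tool call is permission-checked first."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} is not granted to this agent")
    return func(*args, **kwargs)
```

Routing every call through one gate also gives monitoring a single place to observe denied attempts, which matters later for near-miss detection.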
For high-risk actions, it is also essential that the workflow always passes through human approval. Large payment instructions, changes to contract terms, or sending important external documents should at most be drafted by the agent, with a human performing the final execution. Building this human-approval step explicitly into the agent's flow limits the blast radius of any misbehavior.
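A human-approval gate can be made explicit in code along these lines; the action names, return shape, and `approver` callback are hypothetical stand-ins for whatever review workflow an organization actually runs:

```python
# Actions considered high-risk; names are illustrative.
HIGH_RISK = {"send_payment", "change_contract_terms", "send_external_document"}

def execute_action(action, params, approver=None):
    """High-risk actions become drafts awaiting human sign-off.

    `approver` is a callable(action, params) -> bool representing the
    human decision; if absent or declining, nothing is executed.
    """
    if action in HIGH_RISK and (approver is None or not approver(action, params)):
        return {"status": "pending_approval", "action": action, "draft": params}
    return {"status": "executed", "action": action, "params": params}
```

The key property is that the default path for a high-risk action is inaction: the agent can only propose, never silently execute.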
Against prompt injection and data leakage, input and output filtering are indispensable: do not splice user input or text fetched from external sites straight into the system prompt; check that outputs contain no information that must not leave the organization; and when specific keywords or patterns are detected, halt processing and raise an alert. Much of this can be implemented in the application layer outside the model, and it forms an important part of the guardrails.
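Such application-layer filters might look like the following sketch. The patterns are deliberately simplistic placeholders; real injection and leakage detection needs far more than two regexes, and the pattern lists here are assumptions for illustration only:

```python
import re

# Placeholder patterns; production detection must be much broader.
INJECTION_PATTERNS = [r"ignore (all )?previous instructions", r"system prompt"]
SENSITIVE_PATTERNS = [r"\b\d{3}-\d{2}-\d{4}\b"]  # e.g. a US-SSN-like shape

def screen_input(text):
    """Reject suspected injection text before it reaches the prompt."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, text, re.IGNORECASE):
            raise ValueError("possible prompt injection detected")
    return text

def screen_output(text):
    """Halt and alert if the output carries data that must not leave."""
    for pattern in SENSITIVE_PATTERNS:
        if re.search(pattern, text):
            raise ValueError("sensitive pattern detected in output")
    return text
```

Raising an exception, rather than quietly stripping the match, matches the "stop processing and alert" behavior described above and leaves an auditable trace.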
Governance through monitoring and clear accountability
Even with guardrails in place, you cannot simply deploy an agent and leave it alone. Although the agent sits on top of a pre-trained model, its behavior shifts with context and environment, so continuous monitoring and improvement are needed after launch.
What to monitor includes the ratio of successful to failed tasks, how often users have to correct the agent, patterns of errors and exceptions, and security-suspect behavior. Especially important is catching, early, the first near-miss that could lead to a serious incident. A log entry showing that the agent attempted to reach a prohibited external domain and was blocked by a guardrail, for example, is a precious signal of where the design can improve.
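A near-miss scan over structured agent logs can be as simple as counting guardrail blocks. The record schema here is an assumption, not a specific monitoring product's format:

```python
from collections import Counter

def scan_for_near_misses(log_records, alert_threshold=1):
    """Surface guardrail blocks (e.g. denied domain access) from agent logs.

    Records are dicts like {"event": "blocked", "reason": "forbidden_domain"};
    the schema is illustrative.
    """
    blocked = Counter(r["reason"] for r in log_records
                      if r.get("event") == "blocked")
    return {reason: count for reason, count in blocked.items()
            if count >= alert_threshold}
```

Even a threshold of one is defensible here: the whole point of near-miss monitoring is that the first blocked attempt is already a signal worth reviewing.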
Clarifying responsibility is also part of governance. Internally, name an owner who bears final responsibility for the agent's design and operation, and define change-management and incident-response processes. Externally, the terms of service and privacy policy should explain in plain language what the agent can and cannot do, and what verification obligations fall to the user.
A safe LLM agent is not a zero-risk agent; it is one operated so that its risks are visible and controllable. Since hallucination and misjudgment can never be fully eliminated, governance must start from that premise: where to stop, where to hand off to a human, and how to detect problems and turn them into lessons when they occur. Thinking this framework through alongside the design itself, from an early stage, is becoming ever more important.

Apple and Google Alert Users Worldwide After New Spyware Activity Surfaces
Evidence shows that certain people have been targeted by malicious actors, often linked to governments or state-backed groups.
The post Apple and Google Alert Users Worldwide After New Spyware Activity Surfaces appeared first on TechRepublic.
Gartner Warns of Sharp Slowdown in Automotive AI Investment
According to Gartner, only 5% of automakers will continue expanding AI investments at current levels by 2029, a steep fall from more than 95% today.
The post Gartner Warns of Sharp Slowdown in Automotive AI Investment appeared first on TechRepublic.
Forrester: The role of internal developer platforms in DevOps
CIOs shift from "cloud-first" to "cloud-smart"
Common wisdom has long held that a cloud-first approach will gain CIOs benefits such as agility, scalability, and cost-efficiency for their applications and workloads. While cloud remains most IT leaders' preferred infrastructure platform, many are rethinking their cloud strategies, pivoting from cloud-first to "cloud-smart": choosing the best approach for each specific workload rather than moving everything off-premises and prioritizing cloud over other considerations for new initiatives.
Cloud cost optimization is one factor motivating this rethink, with organizations struggling to control escalating cloud expenses amid rapid growth. An estimated 21% of enterprise cloud infrastructure spend, equivalent to $44.5 billion in 2025, is wasted on underutilized resources, and 31% of CIOs waste half of their cloud spend, according to a recent survey from VMware.
The full rush to the cloud is over, says Ryan McElroy, vice president of technology at tech consultancy Hylaine. Cloud-smart organizations have a well-defined and proven process for determining which workloads are best suited for the cloud.
For example, "something that must be delivered very quickly and support massive scale in the future should be built in the cloud," McElroy says. "Solutions with legacy technology that must be hosted on virtual machines or have very predictable workloads that will last for years should be deployed to well-managed data centers."
The cloud-smart trend is being influenced by better on-prem technology, longer hardware cycles, ultra-high margins with hyperscale cloud providers, and the typical hype cycles of the industry, according to McElroy. All favor hybrid infrastructure approaches.
However, "AI has added another major wrinkle with siloed data and compute," he adds. "Many organizations aren't interested in or able to build high-performance GPU datacenters, and need to use the cloud. But if they've been conservative or cost-averse, their data may be in the on-prem component of their hybrid infrastructure."
These variables have led to complexity or unanticipated costs, either through migration or data egress charges, McElroy says.
He estimates that "only 10% of the industry has openly admitted they're moving" toward being cloud-smart. While that number may seem low, McElroy says it is significant.
"There are a lot of prerequisites to moderate on your cloud stance," he explains. "First, you generally have to be a new CIO or CTO. Anyone who moved to the cloud is going to have a lot of trouble backtracking."
Further, organizations need to have retained and upskilled the talent who manage the datacenter they own or at the co-location facility. They must also have infrastructure needs that outweigh the benefits the cloud provides in terms of raw agility and fractional compute, McElroy says.
Selecting and reassessing the right hyper-scaler
Procter & Gamble embraced a cloud-first strategy when it began migrating workloads about eight years ago, says Paola Lucetti, CTO and senior vice president. At that time, the mandate was that all new applications would be deployed in the public cloud, and existing workloads would migrate from traditional hosting environments to hyperscalers, Lucetti says.
"This approach allowed us to modernize quickly, reduce dependency on legacy infrastructure, and tap into the scalability and resilience that cloud platforms offer," she says.
Today, nearly all of P&G's workloads run in the cloud. "We choose to keep selected workloads outside of the public cloud because of latency or performance needs that we regularly reassess," Lucetti says. "This foundation gave us speed and flexibility during a critical phase of digital transformation."
As the company's cloud ecosystem has matured, so have its business priorities. "Cost optimization, sustainability, and agility became front and center," she says. "Cloud-smart for P&G means selecting and regularly reassessing the right hyperscaler for the right workload, embedding FinOps practices for transparency and governance, and leveraging hybrid architectures to support specific use cases."
This approach empowers developers through automation, AI, and agentic AI to drive value faster, Lucetti says. "This approach isn't just technical; it's cultural. It reflects a mindset of strategic flexibility, where technology decisions align with business outcomes."
AI is reshaping cloud decisions
AI represents a huge potential spend requirement and raises the stakes for infrastructure strategy, says McElroy.
"Renting servers packed with expensive Nvidia GPUs all day every day for three years will be financially ruinous compared to buying them outright," he says, "but the flexibility to use next year's models seamlessly may represent a strategic advantage."
Cisco, for one, has become far more deliberate about what truly belongs in the public cloud, says Nik Kale, principal engineer and product architect. Cost is one factor, but the main driver is AI data governance.
"Being cloud-smart isn't about repatriation; it's about aligning AI's data gravity with the right control plane," he says.
IT has parsed out what should be in a private cloud and what goes into a public cloud. "Training and fine-tuning large models requires strong control over customer and telemetry data," Kale explains. "So we increasingly favor hybrid architectures where inference and data processing happen within secure, private environments, while orchestration and non-sensitive services stay in the public cloud."
Cisco's cloud-smart strategy starts with data classification and workload profiling. Anything with customer-identifiable information, diagnostic traces, or model feedback loops is processed within regionally compliant private clouds, he says.
Then there are "stateless services, content delivery, and telemetry aggregation that benefit from public-cloud elasticity for scale and efficiency," Kale says.
Cisco's approach also involves "packaging previously cloud-resident capabilities for secure deployment within customer environments, offering the same AI-driven insights and automation locally, without exposing data to shared infrastructure," he says. "This gives customers the flexibility to adopt AI capabilities without compromising on data residency, privacy, or cost."
These practices have improved Cisco's compliance posture, reduced inference latency, and yielded measurable double-digit reductions in cloud spend, Kale says.
One area where AI has fundamentally changed the company's approach to cloud is large-scale threat detection. "Early versions of our models ran entirely in the public cloud, but once we began fine-tuning on customer-specific telemetry, the sensitivity and volume of that data made cloud egress both costly and difficult to govern," he says. "Moving the training and feedback loops into regional private clouds gave us full auditability and significantly reduced transfer costs, while keeping inference hybrid so customers in regulated regions received sub-second response times."
IT saw a similar issue with its generative AI support assistant. "Initially, case transcripts and diagnostic logs were processed in public cloud LLMs," Kale says. "As customers in finance and healthcare raised legitimate concerns about data leaving their environments, we re-architected the capability to run directly within their [virtual private clouds] or on-prem clusters."
The orchestration layer remains in the public cloud, but the sensitive data never leaves their control plane, Kale adds.
AI has also reshaped how telemetry analytics is handled across Cisco's CX portfolio. IT collects petabyte-scale operational data from more than 140,000 customer environments.
"When we transitioned to real-time predictive AI, the cost and latency of shipping raw time-series data to the cloud became a bottleneck," Kale says. "By shifting feature extraction and anomaly detection to the customer's local collector and sending only high-level risk signals to the cloud, we reduced egress dramatically while improving model fidelity."
In all instances, "AI made the architectural trade-offs clear: Specific workloads benefit from public-cloud elasticity, but the most sensitive, data-intensive, and latency-critical AI functions need to run closer to the data," Kale says. "For us, cloud-smart has become less about repatriation and more about aligning data gravity, privacy boundaries, and inference economics with the right control plane."
A less expensive execution path
Like P&G, World Insurance Associates believes cloud-smart translates to implementing a FinOps framework. CIO Michael Corrigan says that means having an optimized, consistent build for virtual machines based on the business use case, and understanding how much storage and compute is required.
Those are the main drivers to determine costs, "so we have a consistent set of standards of what will size our different environments based off of the use case," Corrigan says. This gives World Insurance what Corrigan says is an automated architecture.
"Then we optimize the build to make sure we have things turned on like elasticity. So when services aren't used, typically overnight, they shut down and they reduce the amount of storage to turn off the amount of compute" so the company isn't paying for it, he says. "It starts with the foundation of optimization or standards."
World Insurance works with its cloud providers on different levels of commitment. With Microsoft, for example, the insurance company has the option to use virtual machines, or what Corrigan calls a "reserved instance." By telling the provider how many machines they plan to consume or how much they intend to spend, he can try to negotiate discounts.
"That's where the FinOps framework has to really be in place … because obviously, you don't want to commit to a level of spend that you wouldn't consume otherwise," Corrigan says. "It's a good way for the consumer, or us as the organization utilizing those cloud services, to get really significant discounts upfront."
World Insurance is using AI for automation and alerts. AI tools are typically charged on a compute processing model, "and what you can do is design your query so that if it is something that's less complicated, it's going to hit a less expensive execution path" and go to a small language model (SLM), which doesn't use as much processing power, Corrigan says.
The user gets a satisfactory result, and "there is less of a cost because you're not consuming as much," he says.
That's the tactic the company is taking: routing AI queries to the less expensive model first. Even a more complicated workflow or process is routed to the SLM first to "see if it checks the box," Corrigan says. If its needs are more complex, it is moved to the next stage, which is more expensive and generally involves an LLM that must go through more data to give the end user what they're looking for.
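The SLM-first routing described here can be sketched as below. The complexity heuristic and the `slm`/`llm` callables are stand-ins for whatever classifier and model endpoints an organization actually uses, not World Insurance's implementation:

```python
def estimate_complexity(query):
    """Crude stand-in for a real complexity classifier: longer,
    multi-part questions score higher and get escalated."""
    return len(query.split()) / 50 + query.count("?") * 0.2

def route_query(query, slm, llm, threshold=0.5):
    """Try the cheaper small model first; escalate only when needed."""
    if estimate_complexity(query) < threshold:
        answer = slm(query)
        if answer is not None:          # the SLM "checks the box"
            return ("slm", answer)
    return ("llm", llm(query))
```

The cost control comes from the asymmetry: every query pays the cheap path, and only the fraction that fails the check pays the expensive one.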
"So we try to manage the costs that way as well so we're only consuming what's really needed to be consumed based on the complexity of the process," he says.
Cloud is "a living framework"
Hylaine's McElroy says CIOs and CTOs need to be more open to discussing the benefits of hybrid infrastructure setups, and how the state of the art has changed in the past few years.
âMany organizations are wrestling with cloud costs they know instinctively are too high, but there are few incentives to take on the risky work of repatriation when a CFO doesnât know what savings theyâre missing out on,â he says.
Lucetti characterizes P&G's cloud strategy as "a living framework," and says that over the next few years, the company will continue to leverage the right cloud capabilities to enable AI and agentic systems for business value.
âThe goal is simple: Keep technology aligned with business growth, while staying agile in a rapidly changing digital landscape,â she says. âCloud transformation isnât a destination â itâs a journey. At P&G, we know that success comes from aligning technology decisions with business outcomes and by embracing flexibility.â

Get data, and the data culture, ready for AI
When it comes to AI adoption, the gap between ambition and execution can be impossible to bridge. Companies are trying to weave the tech into products, workflows, and strategies, but good intentions often collapse under the weight of day-to-day realities: messy data and the lack of a clear plan.
"That's the challenge we see most often across the global manufacturers we work with," says Rob McAveney, CTO at software developer Aras. "Many organizations assume they need AI, when the real starting point should be defining the decision you want AI to support, and making sure you have the right data behind it."
Nearly two-thirds of leaders say their organizations have struggled to scale AI across the business, according to a recent McKinsey global survey. Often they can't move beyond pilot programs, a challenge that's even more pronounced among smaller organizations. When pilots fail to mature, investment decisions become harder to justify.
A typical issue is the data simply isnât ready for AI. Teams try to build sophisticated models on top of fragmented sources or messy data, hoping the technology will smooth over the cracks.
âFrom our perspective, the biggest barriers to meaningful AI outcomes are data quality, data consistency, and data context,â McAveney says. âWhen data lives in silos or isnât governed with shared standards, AI will simply reflect those inconsistencies, leading to unreliable or misleading outcomes.â
Itâs an issue that impacts almost every sector. Before organizations double down on new AI tools, they must first build stronger data governance, enforce quality standards, and clarify who actually owns the data meant to fuel these systems.
Making sure AI doesnât take the wheel
In the rush to adopt AI, many organizations forget to ask the fundamental question of what problem actually needs to be solved. Without that clarity, it's difficult to achieve meaningful results.
Anurag Sharma, CTO of VyStar Credit Union, believes AI is just another tool available to help solve a given business problem, and says every initiative should begin with a clear, simple statement of the business outcome it's meant to deliver. He encourages his team to isolate issues AI could fix, and urges executives to understand what will change and who will be affected before anything moves forward.
âCIOs and CTOs can keep initiatives grounded by insisting on this discipline, and by slowing down the conversation just long enough to separate the shiny from the strategic,â Sharma says.
This distinction becomes much easier when an organization has an AI center of excellence (COE) or a dedicated working group focused on identifying real opportunities. These teams help sift through ideas, set priorities, and ensure initiatives are grounded in business needs rather than buzz.
The group should also include the people whose work will be affected by AI, along with business leaders, legal and compliance specialists, and security teams. Together, they can define baseline requirements that AI initiatives must meet.
âWhen those requirements are clear up front, teams can avoid pursuing AI projects that look exciting but lack a real business anchor,â says Kayla Underkoffler, director of AI security and policy advocacy at security and governance platform Zenity.
She adds that someone in the COE should have a solid grasp of the current AI risk landscape. That person should be ready to answer critical questions, knowing what concerns need to be addressed before every initiative goes live.
âA plan could have gaping cracks the team isnât even aware of,â Underkoffler says. âItâs critical that security be included from the beginning to ensure the guardrails and risk assessment can be added from the beginning and not bolted on after the initiative is up and running.â
In addition, there should be clear, measurable business outcomes to make sure the effort is worthwhile. âEvery proposal must define success metrics upfront,â says Akash Agrawal, VP of DevOps and DevSecOps at cloud-based quality engineering platform LambdaTest, Inc. âAI is never explored, itâs applied.â
He recommends companies build in regular 30- or 45-day checkpoints to ensure the work continues to align with business objectives. And if the results don't meet expectations, organizations shouldn't hesitate to reassess and make honest decisions, he says, even if that means walking away from the initiative altogether.
Yet even when the technology looks promising, humans still need to remain in the loop. "In an early pilot of our AI-based lead qualification, removing human review led to ineffective lead categorization," says Shridhar Karale, CIO at sustainable waste solutions company Reworld. "We quickly retuned the model to include human feedback, so it continually refines and becomes more accurate over time."
When decisions are made without human validation, organizations risk acting on faulty assumptions or misinterpreted patterns. The aim isn't to replace people, but to build a partnership in which humans and machines strengthen one another.
Data, a strategic asset
Ensuring data is managed effectively is an often overlooked prerequisite for making AI work as intended. Creating the right conditions means treating data as a strategic asset: organizing it, cleaning it, and having the right policies in place so it stays reliable over time.
âCIOs should focus on data quality, integrity, and relevance,â says Paul Smith, CIO at Amnesty International. His organization works with unstructured data every day, often coming from external sources. Given the nature of the work, the quality of that data can be variable. Analysts sift through documents, videos, images, and reports, each produced in different formats and conditions. Managing such a high volume of messy, inconsistent, and often incomplete information has taught them the importance of rigor.
"There's no such thing as unstructured data, only data that hasn't yet had structure applied to it," Smith says. He also urges organizations to start with the basics of strong, everyday data-governance habits. That means checking whether the data is relevant, and ensuring it's complete, accurate, and consistent, since outdated information can skew results.
Smith also emphasizes the importance of verifying data lineage. That includes establishing provenance â knowing where the data came from and whether its use meets legal and ethical standards â and reviewing any available documentation that details how it was collected or transformed.
In many organizations, messy data comes from legacy systems or manual entry workflows. âWe strengthen reliability by standardizing schemas, enforcing data contracts, automating quality checks at ingestion, and consolidating observability across engineering,â says Agrawal.
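An ingestion-time quality check of the kind Agrawal describes can be as simple as a data contract that every incoming record must satisfy. The sketch below is hypothetical: the `customer`-style fields, types, and allowed values are invented for illustration, not taken from LambdaTest's pipeline.

```python
# A minimal data contract enforced at ingestion: required fields,
# expected types, and an allowed-values rule. All names are illustrative.
REQUIRED = {"id": int, "email": str, "region": str}
ALLOWED_REGIONS = {"AMER", "EMEA", "APAC"}

def validate_record(record: dict) -> list[str]:
    """Return a list of contract violations; an empty list means the
    record may pass into downstream (AI-facing) storage."""
    errors = []
    for field, ftype in REQUIRED.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            errors.append(f"wrong type for {field}")
    if record.get("region") not in ALLOWED_REGIONS:
        errors.append("unknown region")
    return errors
```

In practice teams often reach for a dedicated validation framework rather than hand-rolled checks, but the principle is the same: reject or quarantine nonconforming records at the door instead of hoping the model smooths over the cracks.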
When teams trust the data, their AI outcomes improve. "If you can't clearly answer where the data came from and how trustworthy it is, then you aren't ready," Sharma adds. "It's better to slow down upfront than chase insights that are directionally wrong or operationally harmful, especially in the financial industry where trust is our currency."
Karale says that at Reworld, they've created a single-source-of-truth data fabric, and assigned data stewards to each domain. They also maintain a living data dictionary that makes definitions and access policies easy to find with a simple search. "Each entry includes lineage and ownership details so every team knows who's responsible, and they can trust the data they use," Karale adds.
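A dictionary entry of the kind Karale describes only needs a handful of fields to be useful. This is a hypothetical sketch, not Reworld's actual schema: the entry fields, team names, and search logic are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class DictionaryEntry:
    term: str
    definition: str
    owner: str          # accountable data steward for this domain
    lineage: list[str]  # upstream sources, in order of transformation
    access_policy: str

def search(entries: list[DictionaryEntry], keyword: str) -> list[DictionaryEntry]:
    # "Easy to find with a simple search": match on term or definition.
    kw = keyword.lower()
    return [e for e in entries
            if kw in e.term.lower() or kw in e.definition.lower()]

catalog = [
    DictionaryEntry("customer_ltv", "Lifetime value of a customer",
                    "finance-team", ["crm.orders", "billing.invoices"], "internal"),
    DictionaryEntry("haul_weight", "Net weight per pickup",
                    "ops-team", ["scale.readings"], "restricted"),
]
hits = search(catalog, "customer")  # finds customer_ltv with its owner and lineage
```

The point of carrying `owner` and `lineage` on every entry is that any consumer of a field can see, in one lookup, who is responsible for it and where it came from.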
A hard look in the organizational mirror
AI has a way of amplifying whatever patterns it finds in the data â the helpful ones, but also the old biases organizations would rather leave behind. Avoiding that trap starts with recognizing that bias is often a structural issue.
CIOs can do a couple of things to prevent problems from taking root. âVet all data used for training or pilot runs and confirm foundational controls are in place before AI enters the workflow,â says Underkoffler.
Also, try to understand in detail how agentic AI changes the risk model. âThese systems introduce new forms of autonomy, dependency, and interaction,â she says. âControls must evolve accordingly.â
Underkoffler also adds that strong governance frameworks can guide organizations on monitoring, managing risks, and setting guardrails. These frameworks outline who's responsible for overseeing AI systems, how decisions are documented, and when human judgment must step in, providing structure in an environment where the technology is evolving faster than most policies can adapt.
And Karale says that fairness metrics, such as disparate impact, play an important role in that oversight. These measures help teams understand whether an AI system is treating different groups equitably or unintentionally favoring one over another. These metrics could be incorporated into the model validation pipeline.
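Disparate impact is commonly computed as the ratio of favorable-outcome rates between a protected group and a reference group, with a widely used rule of thumb flagging ratios below 0.8. The sketch below is illustrative, assuming invented group labels and toy approval data; it is not tied to any specific validation pipeline.

```python
def disparate_impact(outcomes: list[tuple[str, bool]],
                     protected: str, reference: str) -> float:
    """Ratio of favorable-outcome rates: rate(protected) / rate(reference).
    outcomes is a list of (group_label, favorable?) pairs."""
    def rate(group: str) -> float:
        hits = [favorable for g, favorable in outcomes if g == group]
        return sum(hits) / len(hits) if hits else 0.0
    ref = rate(reference)
    return rate(protected) / ref if ref else 0.0

# Toy data: group "A" approved 2 of 4 times, group "B" 3 of 4 times.
data = [("A", True), ("A", True), ("A", False), ("A", False),
        ("B", True), ("B", True), ("B", True), ("B", False)]
ratio = disparate_impact(data, protected="A", reference="B")
# 0.5 / 0.75 is about 0.667, below the common 0.8 threshold,
# so this toy model would be flagged for review.
```

Wiring a check like this into the model validation pipeline, as Karale suggests, turns fairness from an ad hoc audit into a gate every candidate model must pass.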
Domain experts can also play a key role in spotting and retraining models that produce biased or off-target outputs. They understand the context behind the data, so theyâre often the first to notice when something doesnât look right. âContinuous learning is just as important for machines as it is for people,â says Karale.
Amnesty Internationalâs Smith agrees, saying organizations need to train their people continuously to help them pick out potential biases. âRaise awareness of risks and harms,â he says. âThe first line of defense or risk mitigation is human.â
