Elon Musk Warns of ‘All-Out War’ in AI, Starting With Nvidia’s Blackwell
Elon Musk warns of an AI hardware “all-out war” as Nvidia’s Blackwell rollout accelerates, pushing rivals to race on speed, cost, and scale.
The post Elon Musk Warns of ‘All-Out War’ in AI, Starting With Nvidia’s Blackwell appeared first on TechRepublic.
ITER builds global, high-speed data backbone for remote scientific participation
The three cyber trends that will define 2026
In cyber security, basics matter, even in 2025
Stop mimicking and start anchoring
The mimicry trap
CIOs today face unprecedented pressure from boards, businesses and shareholders to mirror Big Tech success stories. The software industry spends 19% of its revenue on IT, while hospitality spends less than 3%.
In our understanding, this isn’t an anomaly; it’s a fundamental truth that most CIOs are ignoring in their rush to emulate Big Tech playbooks. The result is a systematic misallocation of resources based on a fundamental misunderstanding of how value creation works across industries.

(Source: Collated across publications – industry & consulting)
Ankur Mittal, Rajnish Kasat
- The majority gap: Five out of seven industries spend below the cross-industry average, revealing the danger of benchmark-blind strategies
- Context matters: Industries where technology is the product (software) versus where it enables the product (hospitality, real estate) show fundamentally different spending patterns
The gap reveals a critical flaw in enterprise technology strategy: the dangerous assumption that what works for Amazon, Google or Microsoft should work everywhere else. This one-size-fits-all mindset has transformed technology from a strategic asset into an expensive distraction.
| Year | IT Spend Growth Rate (A) | Real GDP Growth Rate (B) | Growth Differential (A-B) |
| --- | --- | --- | --- |
| 2016 | -2.9% | 3.4% | -6.3% |
| 2017 | 2.9% | 3.8% | -0.9% |
| 2018 | 5.7% | 3.6% | 2.1% |
| 2019 | 2.7% | 2.8% | -0.1% |
| 2020 | -5.3% | -3.1% | -2.2% |
| 2021 | 13.9% | 6.2% | 7.7% |
| 2022 | 9.8% | 3.5% | 6.3% |
| 2023 | 2.2% | 3.0% | -0.8% |
| 2024 | 9.5% | 3.2% | 6.3% |
| 2025 | 7.9% | 2.8% | 5.1% |
Table 1 – IT Spend versus Real GDP differential analysis (Source: IT Spend – Gartner, GDP – IMF)
According to Gartner, “global IT spend is projected to reach $5.43 trillion in 2025 (7.9% growth)”. Based on IMF World Economic Outlook (WEO) data, IT spending has consistently outpaced real GDP growth. Over the past decade, global IT expenditure has grown at an average rate of ~5% annually, compared to ~3% for real GDP — a differential of roughly 2 percentage points per year. While this trend reflects increasing digital maturity and technology adoption, it also highlights the cyclical nature of IT investment. Periods of heightened enthusiasm, such as the post-COVID digital acceleration and the GenAI surge in 2023–24, have historically been followed by corrections, as hype-led spending does not always translate into sustained value.
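The decade averages quoted above follow directly from Table 1; a quick Python check (figures transcribed from the table) reproduces them:

```python
# IT spend growth (A) and real GDP growth (B), in %, from Table 1.
it_growth = {2016: -2.9, 2017: 2.9, 2018: 5.7, 2019: 2.7, 2020: -5.3,
             2021: 13.9, 2022: 9.8, 2023: 2.2, 2024: 9.5, 2025: 7.9}
gdp_growth = {2016: 3.4, 2017: 3.8, 2018: 3.6, 2019: 2.8, 2020: -3.1,
              2021: 6.2, 2022: 3.5, 2023: 3.0, 2024: 3.2, 2025: 2.8}

# Per-year growth differential (A - B) and the decade averages.
differential = {y: round(it_growth[y] - gdp_growth[y], 1) for y in it_growth}
avg_it = sum(it_growth.values()) / len(it_growth)     # ~5% per year
avg_gdp = sum(gdp_growth.values()) / len(gdp_growth)  # ~3% per year
```

The averages come out at roughly 4.6% and 2.9%, consistent with the ~5% versus ~3% rates cited in the text.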
Moreover, failure rates for IT programs remain significantly higher than those in most engineered sectors and comparable to FMCG and startup environments. Within this, digital and AI-driven initiatives show particularly elevated failure rates. As a result, not all incremental IT spend converts into business value.
Hence, in our experience, the strategic value of IT should be measured by how effectively it addresses industry-specific value creation. Different industries have vastly different technology intensity and value-creation dynamics. In our view, CIOs must therefore resist trend-driven decisions and view IT investment through their industry’s value-creation lens to sharpen competitive edge. To understand why IT strategies diverge across industries, shaped by sectoral realities and maturity differences, we need to examine how business models shape the role of technology.
Business model maze
We have observed that funding business outcomes rather than chasing technology fads is easier said than done. It’s difficult to unravel the maze created by the relentless march of technological hype versus the grounded reality of business. But the role of IT is not universal; its business relevance changes from one industry to another. Let’s explore how this plays out across industries, starting with hospitality, where service economics dominates technology application.
Hospitality
The service equation in the hospitality industry differs from budget to premium, requiring leaders to understand the different roles technology plays.
- Budget hospitality: Technology reduces cost, which drives higher margins
- Premium hospitality: Technology enables service, but human touch drives value
From our experience, it’s paramount to understand and absorb the above difference: quick digital check-ins serve efficiency, but when a guest at a luxury hotel encounters a maze of automated systems instead of personal service, technology defeats its own purpose.
You might ask why; it’s because the business model in the hospitality industry is built on human interaction. The brand promise centers on human connection — a competitive advantage of a luxury hotel such as Taj — something that excessive automation actively undermines.
This contrast becomes even more evident when we examine the real estate industry. A similar misalignment between technology ambition and business fundamentals can lead to identity-driven risk, such as in the case of WeWork.
Real estate
WeWork was a real estate company that convinced itself, and its investors, that it was a technology company. The result: an identity crisis, and a spectacular collapse when reality met the balance sheet. The core business remained leasing physical space, but the tech-company narrative drove valuations and strategies completely divorced from operational reality, taking WeWork, as we all know, from a $47 billion valuation to bankruptcy.
Essentially, in real estate, the business model is built on physical assets with long transaction cycles pushing IT to a supporting function. Here, IT is about enabling asset operations and margin preservation rather than reshaping the value proposition. From what we have seen, over-engineering IT in such industries rarely shifts the value needle. In contrast, the high-tech industry represents a case where technology is not just an enabler, it is the business.
High tech
In high tech, the technology itself is the product: the business model is built on digital platforms, and technological capabilities determine market leadership. IT spend, core to the business model, is a strategic weapon for automation and data monetization.
While software companies allocate nearly 19% of their revenue to IT, hospitality firms spend less than 3%. We believe that this 16-point difference isn’t just a statistic; it’s a strategic signal. It underscores why applying the same IT playbook across such divergent industries is not only ineffective but potentially harmful. What works for a software firm may be irrelevant, or worse, for a hospitality brand. These industry-specific examples highlight a deeper leadership challenge: the ability to resist trend-driven decisions and instead anchor technology investment to business truths.
Beyond trends: anchoring technology to business truths
In a world obsessed with digital transformation, CIOs need the strategic discernment to reject initiatives that don’t align with business reality. We have observed that competitive advantage comes from contextual optimization, not universal best practices.
This isn’t about avoiding innovation; it’s about avoiding expensive irrelevance. We have seen that the most successful technology leaders understand that their job is not to implement the latest trends but to rationally analyze and choose to amplify what makes their business unique.
For most industries outside of high-tech, technology enables products and services rather than replacing them. Data supports decision-making rather than becoming a monetizable asset. Market position depends on industry-specific factors. And returns come from operational efficiency and customer satisfaction, not platform effects.
Chasing every new frontier may look bold, but enduring advantage comes from knowing what to adopt, when to adopt and what to ignore. The allure of Big Tech success stories (Amazon’s platform dominance, Google’s data monetization, Apple’s closed ecosystem) has created a powerful narrative, but it’s contextually bound. Their playbook works in digital-native business models but can be ill-fitting for others. Therefore, their model is not universally transferable, and blind replication can be misleading.
We believe CIOs must resist that pull and instead align IT strategy with their industry’s core value drivers. All of this leads to a simple but powerful truth — context is not a constraint; it’s a competitive advantage.
Conclusion: Context as competitive advantage
The IT spending gap between software and hospitality isn’t a problem to solve — it’s a reality to embrace. Different industries create value in fundamentally different ways, and technology strategies must reflect this truth.
Winning companies use technology to sharpen their competitive edge — deepening what differentiates them, eliminating what constrains them and selectively expanding where technology unlocks genuine new value, all anchored in their core business logic.
Long-term value from emerging technologies comes from grounded application, not blind adoption. In the race to transform, the wisest CIOs will be those who understand that the best technology decisions are often the ones that honour, rather than abandon, the fundamental nature of their business. The future belongs not to those who adopt the most tech, but to those who adopt the right tech for the right reasons.
Disclosure: This article reflects author(s) independent insights and perspectives and bears no official endorsement. It does not promote any specific company, product or service.
This article is published as part of the Foundry Expert Contributor Network.
What is driving the rise of infostealer malware?
Computer Weekly Buyer's Guide features list 2026
Google Search Live Gets a Gemini Audio Upgrade for Smoother Replies
Search Live gets an upgrade with Gemini 2.5 native audio, delivering faster, more natural voice conversations and hands-free help in the Google app.
Beyond lift-and-shift: Using agentic AI for continuous cloud modernization
The promise of cloud is agility, but the reality of cloud migration often looks more like a high-stakes, one-time project. When faced with sprawling, complex legacy applications — particularly in Java or .NET — the traditional “lift-and-shift” approach is only a halfway measure. It moves the complexity, but doesn’t solve it. The next strategic imperative for the CIO is to transition from periodic, costly overhauls to continuous modernization powered by autonomous agentic AI. This shift transforms migration from a finite, risk-laden project into an always-on optimization engine that continuously grooms your application portfolio, directly addressing complexity and accelerating speed-to-market.
The autonomous engine: Agentic AI for systematic refactoring
Agentic AI systems are fundamentally different from traditional scripts; they are goal-driven and capable of planning, acting and learning. When applied to application modernization, they can operate directly on legacy codebases to prepare them for a cloud-native future.
Intelligent code refactoring
The most significant bottleneck in modernization is refactoring — restructuring existing code without changing its external behavior to improve maintainability, efficiency and cloud-readiness. McKinsey estimates that Generative AI can shave 20–30% off refactoring time and can reduce migration costs by up to 40%. Agentic AI tools leverage large language models (LLMs) to ingest entire repositories, analyze cross-file dependencies and propose or even execute complex refactoring moves, such as breaking a monolith into microservices. For applications running on legacy Java or .NET frameworks, these agents can systematically:
- Identify and flag “code smells” (duplicated logic, deeply nested code).
- Automatically convert aging APIs to cloud-native or serverless patterns.
- Draft and apply migration snippets to move core functions to managed cloud services.
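As a minimal, illustrative sketch of the first bullet (flagging deeply nested code), here is a detector using Python’s standard `ast` module; the article’s Java/.NET context would use an equivalent static-analysis pass, and the nesting threshold is an assumption, not an industry rule:

```python
import ast

# Control-flow node types that each count as one nesting level.
NESTING = (ast.If, ast.For, ast.While, ast.Try, ast.With)

def max_nesting(source: str) -> int:
    """Return the deepest control-flow nesting level in a module."""
    def depth(node: ast.AST, level: int = 0) -> int:
        level += isinstance(node, NESTING)
        return max([level] + [depth(child, level)
                              for child in ast.iter_child_nodes(node)])
    return depth(ast.parse(source))

def flag_smells(source: str, threshold: int = 3) -> list[str]:
    """Flag one simple smell from the list above: deeply nested code."""
    return ["deeply-nested-code"] if max_nesting(source) > threshold else []
```

An agent would run checks like this across a whole repository, then feed the flagged locations to an LLM as refactoring candidates.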
Automated application dependency mapping
Before any refactoring can begin, you need a complete and accurate map of application dependencies, which is nearly impossible to maintain manually in a large enterprise. Agentic AI excels at this through autonomous discovery. Agents analyze runtime telemetry, network traffic and static code to create a real-time, high-fidelity map of the application portfolio. As BCG highlights, applying AI to core platform processes helps to reduce human error and can accelerate business processes by 30% to 50%. In this context, the agent is continuously identifying potential service boundaries, optimizing data flow and recommending the most logical containerization or serverless targets for each component.
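The static half of that discovery step can be sketched in a few lines; the module names below are hypothetical, and a real agent would fuse this with the runtime telemetry and network traffic described above:

```python
import ast
from collections import defaultdict

def dependency_map(modules: dict[str, str]) -> dict[str, set[str]]:
    """Map each module to the *internal* modules it imports (the static
    half of discovery; runtime telemetry supplies the dynamic half)."""
    deps = defaultdict(set)
    for name, source in modules.items():
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Import):
                targets = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom):
                targets = [node.module] if node.module else []
            else:
                continue
            # Keep only edges to modules we own; stdlib/third-party are external.
            deps[name].update(t for t in targets if t in modules)
    return dict(deps)

# Hypothetical three-module portfolio: billing -> ledger -> storage.
portfolio = {
    "billing": "import ledger\nimport os\n",
    "ledger": "from storage import save\n",
    "storage": "import json\n",
}
```

The resulting graph is what lets an agent propose service boundaries: tightly coupled clusters stay together, while isolated nodes are candidates for extraction.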
Practical use cases for continuous value
This agentic approach delivers tangible business value by automating the most time-consuming and error-prone phases of modernization:
| Use Case | AI Agent Action | Business Impact |
| --- | --- | --- |
| Dependency mapping | Analyzes legacy code and runtime data to map component-to-component connections and external service calls. | Reduced risk: Eliminates manual discovery errors that cause production outages during cutover. |
| Intelligent code refactoring | Systematically restructures code for cloud-native consumption (e.g., converting monolithic C# or Java code into microservices). | Cost & speed: Reduces developer toil and cuts transformation timelines by as much as 50%. |
| Continuous security posture enforcement | The agent autonomously scans for new vulnerabilities (CVEs), identifies affected code components and instantly applies security patches or configuration changes (e.g., updating a policy or library version) across the entire portfolio. | Enhanced resilience: Drastically reduces the “time-to-remediation” from weeks to minutes, proactively preventing security breaches and enforcing a compliant posture 24/7. |
| Real-time performance tuning | Monitors live workload patterns (e.g., CPU, latency, concurrent users) and automatically adjusts cloud resources (e.g., rightsizing instances, optimizing database indices, adjusting serverless concurrency limits) to prevent performance degradation. | Maximized ROI: Ensures applications are always running with the optimal balance of speed and cost, eliminating waste from over-provisioning and avoiding customer-impacting performance slowdowns. |
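The real-time tuning row reduces, at its simplest, to a feedback rule; the thresholds and the halve/double actions below are illustrative assumptions, not any cloud provider’s API:

```python
def rightsize(cpu_utilisation: float, current_vcpus: int,
              low: float = 0.25, high: float = 0.75) -> int:
    """Toy rightsizing rule: halve capacity when sustained CPU is low,
    double it when high, otherwise leave the instance alone."""
    if cpu_utilisation < low and current_vcpus > 1:
        return current_vcpus // 2
    if cpu_utilisation > high:
        return current_vcpus * 2
    return current_vcpus
```

A production agent would of course smooth over time windows and respect quotas, but the essential loop is this: observe a workload signal, compare against a target band, emit a resource change.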
Integrating human-in-the-loop (HITL) framework governance
The transition to an agent-driven modernization model doesn’t seek to remove the human role; rather, it elevates it from manual, repetitive toil to strategic governance. The success of continuous modernization hinges on a robust human-in-the-loop (HITL) framework. This framework mandates that while the agent autonomously identifies optimization opportunities (e.g., a component generating high costs) and formulates a refactoring plan, the deployment is always gated by strict human oversight. The role of the developer shifts to defining the rules, validating the agent’s proposed changes through automated testing and ultimately approving the production deployment incrementally. This governance ensures that the self-optimizing environment remains resilient and adheres to crucial business objectives for performance and compliance.
Transforming the modernization cost model
The agentic approach fundamentally transforms the economic framework for managing IT assets. Traditional “lift-and-shift” and periodic overhauls are viewed as massive, high-stakes capital expenditure (CapEx) projects. By shifting to an autonomous, continuous modernization engine, the financial model transitions to a predictable, utility-like operational expenditure (OpEx). This means costs are tied directly to the value delivered and consumption efficiency, as the agent continuously grooms the portfolio to optimize for cost. This allows IT to fund modernization as an always-on optimization function, making the management of the cloud estate a sustainable, predictable line item rather than a perpetual budget shock.
Shifting the development paradigm: From coder to orchestrator
The organizational impact of agentic AI is as critical as the technical one. By offloading the constant work of identifying technical debt, tracking dependencies and executing routine refactoring or patching, the agent frees engineers from being primarily coders and maintainers. The human role evolves into the AI orchestrator or System Architect. Developers become responsible for defining the high-level goals, reviewing the agent’s generated plans and code for architectural integrity and focusing their time on innovation, complex feature development and designing the governance framework itself. This strategic shift not only reduces developer burnout and increases overall productivity but is also key to attracting and retaining top-tier engineering talent, positioning IT as a center for strategic design rather than just a maintenance shop.
The pilot mandate: Starting small, scaling quickly
For CIOs facing pressure to demonstrate AI value responsibly, the adoption of agentic modernization must begin with a targeted, low-risk pilot. The objective is to select a high-value application—ideally, a non-critical helper application or an internal-facing microservice that has a quantifiable amount of technical debt and clear performance or cost metrics. The goal of this pilot is to prove the agent’s ability to execute the full modernization loop autonomously: Discovery > Refactoring > Automated Testing > Human Approval > Incremental Deployment. Once key success metrics (such as a 40% reduction in time-to-patch or a 15% improvement in cost efficiency) are validated in this controlled environment, the organization gains the confidence and blueprint needed to scale the agent framework horizontally across the rest of the application portfolio, minimizing enterprise risk.
The strategic mandate: Self-optimizing resilience
By adopting autonomous agents, the operational model shifts from reactive fixes to a resilient, self-optimizing environment. Gartner projects that autonomous AI agents will be one of the fastest transformations in enterprise technology, with a major emphasis on their ability to orchestrate entire workflows across the application migration and modernization lifecycle. These agents are not just tools; they are continuous improvement loops that proactively:
- Identify a component that is generating high cloud costs.
- Formulate a refactoring plan for optimization (e.g., move to a managed serverless queue).
- Execute the refactoring, run automated tests and deploy the change incrementally, all under strict human oversight.
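The three-step loop above, with its human gate, can be sketched as follows; `Proposal`, `run_tests` and `human_approve` are hypothetical stand-ins for an agent’s plan object, a CI suite and an approval workflow:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """An agent-generated refactoring plan awaiting the HITL gate."""
    component: str
    plan: str

def run_cycle(proposal: Proposal, run_tests, human_approve) -> str:
    """One pass of the improvement loop: a change is deployed only if
    automated tests pass AND a human explicitly approves it."""
    if not run_tests(proposal):
        return "rejected: tests failed"
    if not human_approve(proposal):
        return "held: awaiting human approval"
    return f"deployed: {proposal.component}"
```

The key design choice is that the agent never holds deploy authority itself: both gates must return true before a change reaches production.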
The CIO’s task is to define the strategic goals — cost, performance, resilience — and deploy the agents with the governance and human-in-the-loop controls necessary to allow them to act. This proactive, agent-driven model is the only path to truly continuous modernization, ensuring your cloud estate remains an agile asset, not a perpetual liability.
This article is published as part of the Foundry Expert Contributor Network.
Save Big on Microsoft Project 2024 While Codes Last
Whether you’re a business leader or professional, this project management tool will streamline your workdays.
Cross-border Payments in 2025: How They Work, Costs, and What’s Changing
Cross-border payments are getting faster and more complex. Here’s what’s driving the change, how new technologies like ISO 20022 and SWIFT GPI fit in, and what businesses should do now.
Reinventing the Arabic newsroom: How Al-Masry Al-Youm is harnessing AI, data, and cloud to transform
CERN: How an international research institution manages risk
Few research institutions in the world match the scale and depth of the European Organization for Nuclear Research, CERN. Founded in 1954 by twelve European countries, the European Laboratory for Elementary Particle Physics sits in the Swiss municipality of Meyrin, in the canton of Geneva, though its facilities stretch along the Franco-Swiss border. Among them is the Large Hadron Collider (LHC), the world’s largest particle accelerator. International collaboration has been at its core from the beginning: its permanent staff numbers more than 3,500 people, a small village that swells to 17,000 when you add the scientific personnel from around 950 institutions in more than 80 countries who collaborate on the centre’s projects (or did so in 2024). In an ecosystem like this, managing IT risk is a challenge on the same scale as the institution itself.
“The main problem is that we are managing a huge organization,” explains Stefan Lüders, CISO of CERN. “We are one of the most important particle physics research institutes on the planet. We do sophisticated and interesting things, which makes us a target for attacks from different communities,” he summarizes. He lists several of these potential threats: script kiddies, hackers with only basic knowledge who nevertheless pose a potential security risk; ransomware and data exfiltration; sabotage of CERN’s work; espionage; and criminal groups trying to infiltrate through computers or devices.
“This is where people come in. Because we have a very large, heterogeneous and highly fluctuating research community. Many physicists join the organization every year. They come and go to do their PhDs, they do research at CERN and then they leave,” he says, pointing to the challenge of “taking care of this user community. The other challenge is the flexible, fast-moving world of IT.” He also adds programming (importing open-source libraries, their security and so on) and AI. “The more sophisticated AI becomes, the greater the likelihood that AI-powered security or attack tools will try to infiltrate the organization.”
Securing CERN
Given this starting point, how do you ensure cybersecurity initiatives are implemented effectively without disrupting scientific work? “You can’t,” says Lüders. “Cybersecurity is inconvenient. Let’s face it.” Lüders compares it to locking your front door or using a PIN to withdraw cash from an ATM: it can be annoying, but it is necessary. “We try to explain to our community why security measures are needed,” he notes. “And if we adapt our security measures to our environment, people adopt them. Yes, it makes research somewhat more complicated, but only a little.”
Lüders stresses the nature of research work. “We are not a bank. We don’t have trillions of dollars. We are not a military base, which means we don’t have to protect a country. We do research, which means adjusting the level of security and the level of academic freedom so the two go hand in hand. And that is a constant conversation with our user community.” That community ranges from scientific staff to the people managing industrial control systems, the IT department and human resources. “To meet that challenge, talking to people is essential. That is why, I insist, cybersecurity is a very sociological subject: talking to people, explaining why we do this.” For example, not everyone is happy to use multi-factor systems because, “let’s admit it, they are a pain. It’s much easier to type a password, and even then, who wants to type a password? You just want to get in. But for today’s protection needs, we have passwords and multi-factor authentication. So you explain to people what you are protecting. We tell them why it is important to protect their work, as well as the research results. And the vast majority understand that a certain level of security is needed,” he says. “But it is a challenge because many different cultures, nationalities, opinions, ways of thinking and backgrounds coexist here. That is what we try to adapt to, permanently.”

Stefan Lüders and Tim Bell of CERN.
CERN
Tim Bell, head of CERN’s IT governance, risk and compliance section, responsible for business continuity and disaster recovery, joins the conversation. Bell raises the problem of personal technology. “If you are visiting from a university, you will want to bring your laptop and use it at CERN. We cannot afford to confiscate these electronic devices on arrival at the facilities. It would be incompatible with the nature of the organization. That means we must be able to implement BYOD-style security measures.”
Because at the core of everything remains CERN’s collaborative character. “Academic work, open science, freedom of research are part of our centre. Cybersecurity needs to adapt to this,” Lüders notes. “We have 200,000 devices on our network that are BYOD.” So how is cyber protection adapted? “It’s called defense in depth,” the CISO explains. “We cannot install anything on these end devices because they don’t belong to us, (…) but we have network monitoring.” In this way, even without direct access to each device, the centre can tell when something is being done against its policies, whether a cybersecurity violation or inappropriate use, such as using the technology it provides for private interests.
“Talking to people is essential. Cybersecurity is a very sociological subject,” Lüders reflects
These measures also extend to obsolete systems, which the organization can absorb because its network is resilient enough that, even if one machine is compromised, it cannot damage any other CERN system. The legacy-technology problem extends to the equipment needed for the physics experiments carried out at the centre. “These are protected by dedicated networks, which allows network protection to kick in and shield them against any kind of abuse,” Lüders explains. On IoT devices not designed with cybersecurity in mind, “a problem for every industry,” Lüders is blunt: “You will never get security on IoT devices.” His solution is to connect them to restricted network segments where they are not allowed to communicate with anything else, and then define the destinations they may talk to.
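That segmentation approach amounts to default-deny with a per-segment allowlist of destinations. A toy sketch (the segment and destination names are invented for illustration, not CERN’s):

```python
# Each restricted segment may talk only to an explicit allowlist of
# destinations; everything else is dropped (default-deny).
SEGMENT_POLICY = {
    "iot-sensors": {"ntp.internal", "telemetry.internal"},
    "legacy-controls": {"scada-gw.internal"},
}

def is_allowed(segment: str, destination: str) -> bool:
    """Traffic passes only if the segment has an allowlist entry
    for the destination; unknown segments get an empty allowlist."""
    return destination in SEGMENT_POLICY.get(segment, set())
```

The point of the design is that a compromised device in a restricted segment simply has nowhere to go: lateral movement fails at the network layer, with no agent installed on the device itself.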

The bigger picture
This is part of a larger challenge: aligning the IT and OT sides so that security is continuous across the whole organization, a challenge that runs through centralization. “Today the OT side, CERN’s control systems, is using IT virtualization,” explains Lüders. “The strategy is to bring the IT people and the control people together, so that the control people can use IT services to their advantage.” The technology department provides a central system with different functionalities for operations, used by other areas of the organization as well, accessible through a single point of entry. “That is the power of centralization.” New tools such as LLM-based AI also feed into this system, with a working group already running to find the best way to use them. “We are in a big discovery phase and, later on, we will centralize it through a central IT service. And that is how we do it with all technologies.”
Just as the subjects researched at CERN evolve, so does its IT governance framework. It has kept pace with developments in the sector, Bell explains, aided by audits that let it operate according to best practices. “The governance side is becoming more formal. In general, everything was well organized; it was just a matter of standardizing it and building policy frameworks around it.” Despite these standards, the result is the opposite of rigid, says Bell, pointing to a recent cybersecurity audit in which CERN was assessed against one of the international standards, which helped raise its maturity level. “We are adopting a fairly flexible IT governance policy, learning from others’ experience in adopting industry standards,” he concludes.

Google’s Data Center Dreams Get UK Boost
This marks a significant victory for the company’s European expansion plans, but unsurprisingly there is local resistance.
Apple Releases macOS Sequoia 15.7.3 Security Update
Apple has released macOS Sequoia 15.7.3 with important security fixes. Here’s what to know before installing the update.
IT Sustainability Think Tank: Perspectives on the print industry's net-zero push in 2025
Looking back on US agritech in 2025
Fewer deals, and clearer return designs demanded
To start with the investment mood: reports put agrifood-tech investment in 2024 at around $16 billion, with the year-on-year decline narrowing. The big-picture assessment is that the steep dive is coming to a halt, and some analyses suggest investment has begun to return in advanced markets, including the US.
What matters here, though, is that money coming back does not mean times are good. The reports also suggest deal counts are falling, sustaining a pattern in which investors allocate the same money to fewer companies. In other words, if the operational blueprint is weak (unit economics per deployment, sales channels, a rationale for recurring revenue and, for hardware, the maintenance operation), funding is hard to come by, growth story or not. That is the agritech reality as felt in the US in 2025.
This selection also shows up in the 2025 US agtech funding statistics. Citing PitchBook data, Reuters reported that US agtech VC investment in Q1 2025 came to roughly $1.6 billion across 137 deals, with deal counts falling. At the same time, it noted that precision agriculture, and robotics and smart equipment in particular, is growing in relative terms, painting a picture of capital tilting toward automation that actually works on the farm.
The state of US farming itself plays into this. Trade friction and price volatility are shaking farmers’ cash flow, and while government support props things up in the short term, the uncertainty itself does not go away. The AP reported that the Trump administration floated roughly $12 billion in support for farmers hit by the trade war, suggesting a phase in which policy keeps filling in for market swings. In such an environment, adoption decisions tend to be delayed, strengthening demand for investments that reliably pay back.
In short, while there are signs of recovery in US agritech investment, we have not returned to an era when growth expectations alone attract capital. Only products priced at what farmers can pay, with maintenance that keeps them running after deployment, and results that can be explained in numbers, move forward. “Selection as the normal operating mode” is the most accurate read of where things now stand.
Automation and AI enter the stage of replacing work in the field: the right to repair as the institutional risk shaping adoption of autonomous machines
The most concrete progress in US agritech in 2025 is in automation. Emblematic is John Deere’s CES 2025 announcement: the company unveiled a second-generation autonomy kit, signaling plans to widen its scope by strengthening computer vision, AI and multi-camera environmental perception. The official announcement, with details such as a camera array providing a 360-degree field of view, points to a direction of thickening environmental awareness to increase the range of tasks that can be delegated.
Aiming autonomous machines at jobs like orchard spraying, monotonous but long-duration work that also carries safety risks, is a natural move for labor-short US agriculture. It means the industry has moved past the era of “the tractor drives itself” demos into the stage of process design: which steps, at what cost, with how much sustained uptime.
The same Reuters reporting notes that Monarch Tractor is gaining traction with autonomous work for dairies (feed pushing and the like), and that demand for robot tractors is growing for vast non-farm land, such as land management at solar power plants. The more solar installations grow with the expansion of AI data centers, the more mechanization pays off in vegetation management and upkeep. It is an intriguing sign of agritech stretching beyond agriculture into land management in the broad sense.
At the same time, the more machines are deployed in the field, the more serious the “unrepairability” of software-defined farm equipment becomes. In January 2025, the US FTC announced it was suing Deere, alleging the company effectively restricts access to diagnostic tools and repair software, creating a structure in which farmers and independent repair shops cannot make repairs. The FTC’s concern is that this drives up repair costs and causes downtime while waiting for repairs, ultimately putting farmers at a disadvantage.
This dispute bears directly on adoption because farming is an industry where a breakdown at the wrong moment is fatal. Harvesting and spraying will not wait, and a repair delayed by just a few days can multiply losses. As autonomy advances and machines get more expensive, repair flexibility and downtime become preconditions for return on investment. Competition in agritech, in other words, is becoming not just a technology contest but an institutional one, encompassing the design of maintenance, repair and operating rights.
There is movement at the state level too: in Colorado, a law on the right to repair agricultural equipment took effect in January 2024. The principle that manufacturers should provide the information, parts and so on needed for repairs has been written into law, and in parallel with the federal moves, a trend of the right to repair becoming a real-world rule is visible.
What this means for startups is that operability without depending on the OEM, the design of diagnostics and parts supply, and the question of who owns the data have become part of product value. Research also points out that as digital agriculture platformizes, big tech and agribusiness giants will bundle services and form ecosystems, so the integration of data and services will only deepen.
And precision agriculture, the foundation for AI adoption, already shows a clear divide by scale. USDA ERS shows that adoption of precision agriculture technology correlates strongly with farm size, with larger farms adopting more. The speed at which AI permeates agriculture is ultimately determined by whether a farm has equipment that can capture data and the people to run the operation. That US agritech goes after large-scale, high-revenue crops first is less a technology problem than a problem of adoption conditions.
Policy and profitability get recalculated: the climate-smart reshuffle, an indoor-farming shakeout, and MRV and biologicals as the next revenue axes
Policy churn is unavoidable in any account of US agritech in 2025. In April 2025, USDA announced it would review the climate-smart initiatives advanced under the Biden administration (Partnerships for Climate-Smart Commodities) and reorganize them into a different framework. USDA itself says it is reforming the existing programs to match the current administration's priorities, so business plans premised on subsidies, demonstrations, and market formation face shifting ground rules.
The point is not simply that climate action is retreating, but that which instruments public money rides on is changing. The more policy-dependent a model, the harder it becomes to explain its funding and continuity. Conversely, labor-saving and input-reduction value that hits farmers' profit and loss directly grows relatively stronger even as policy shifts.
Nowhere did this recalculation of profitability land harder than in indoor farming. In 2024, Bowery Farming was reported to be winding down, once again making visible how a major vertical-farming player can run aground on cash flow and profitability.
Then in 2025, Plenty was reported to have entered Chapter 11 proceedings, underscoring that enormous fundraising does not guarantee a sustainable business. The WSJ traced how the company ran into fundraising difficulty and accumulating debt.
At the same time, Plenty has signaled that after restructuring it intends to build out a strawberry production site. Rather than indoor farming being rejected wholesale, it is closer to reality to say the sector is entering a phase in which business design narrows, in a distinctly agricultural way, to which crop, in which location, under what sales contracts.
While indoor farming struggles, MRV (measurement, reporting, and verification) and credits for soil carbon are gaining weight as a way to monetize climate and environmental value. In April 2025, Indigo announced its fourth issuance of carbon credits through the Climate Action Reserve, approaching a cumulative one million tonnes. In May 2025, it said Microsoft had committed to purchasing 60,000 of its soil carbon credits. What matters here is not the transactions themselves but that third-party-verified credits issued at meaningful scale are what make corporate procurement possible.
MRV is the key because the effects of regenerative agriculture vary widely with soil, weather, and field conditions, and uncertainty grows with scale. A technical paper describes the MRV pipeline Indigo built to generate soil carbon credits at scale, and in the US, research and business are starting to mesh around using agricultural data to quantify environmental value and make it tradable.
One more area where field deployment comes comparatively easily is biologically derived inputs. The EPA maintains registration frameworks and guidance covering biopesticides (microbial and biochemical products among them), so at least institutionally a path to market exists for biological products. The tighter the regulation of chemical inputs and the stronger the consumer pressure, the more room biologicals have in pest control and soil improvement.
USDA also continues its grants supporting specialty crops (fruits, vegetables, tree crops, and the like): roughly $72.9 million in projects were compiled for fiscal 2024, with a similar scale indicated for fiscal 2025. Because specialty crops carry high unit prices, managing quality, residues, and pest risk ties directly to revenue, making the payback on digital agriculture and biologicals comparatively easy to explain.
To sum up US agritech at the end of 2025: the growth story has moved from "AI will change agriculture" to "which process, under which institutions, at what profitability." Autonomous machines go straight at the structural problem of labor shortage, and MRV is becoming the infrastructure that turns climate value into cash. Indoor farming, meanwhile, cannot run on factory logic alone; crop selection, contracts, energy, and construction all have to be rebuilt around the realities of agriculture.

8 tips for rebuilding an AI-ready data strategy
Any organization that wants to have a leading AI strategy must first have a winning data strategy.
That’s the message from Ed Lovely, vice president and chief data officer for IBM.
“When you think about scaling AI, data is the foundation,” he says.
However, few organizations have a data architecture aligned to their AI ambitions, he says. Instead, they have siloed data that’s not governed by consistent data standards — the result of longstanding enterprise data strategies that created IT environments application by application to deliver point-in-time decisions rather than to support enterprise-wide artificial intelligence deployments.
The 2025 IBM study AI Ambitions Are Surging, But Is Enterprise Data Ready? shows just how many are struggling with their data. It found that only 26% of 1,700 CDOs worldwide feel confident their data can support new AI-enabled revenue streams.
What’s needed, Lovely says, is an integrated enterprise data architecture, where the same standards, governance, and metadata are applied “regardless of where data is born.”
Lovely is not alone in seeing a need for organizations to update their data strategies.
“Most organizations need to modernize their data strategies because AI changes not just how data is used, but why it’s used and where value is created,” says Adam Wright, research manager for IDC’s Global DataSphere and Global StorageSphere research programs and co-author of the 2025 report Content Creation in the Age of Generative AI.
“Traditional data strategies were built for reporting, BI, and automation, but AI requires far more dynamic, granular, and real-time data pipelines that can fuel iterative, model-driven workflows. This means shifting from static data governance to continuous data quality monitoring, stronger metadata and lineage tracking, and retention policies that reflect AI’s blend of ephemeral, cached, and saved data,” he says. “The AI era demands that organizations evolve from a collect/store everything mentality toward intentional, value-driven data strategies that balance cost, risk, and the specific AI outcomes they want to achieve.”
High-maturity data foundations
Most organizations are far from that objective.
“Many organizations continue to struggle with having the ‘right’ data, whether that means sufficient volume, appropriate quality, or the necessary contextual metadata to support AI use cases,” Wright says. “In IDC research and industry conversations, data readiness consistently emerges as one of the top barriers to realizing AI value, often outranking compute cost or model selection. Most enterprises are still dealing with fragmented systems, inconsistent governance, and limited visibility into what data they actually have and how trustworthy it is.”
Lovely says IBM had faced many such challenges but spent the past three years tackling them to make its data AI ready.
IBM’s data strategy for the AI era included multiple changes to longstanding approaches, enabling it to build what Lovely calls an integrated enterprise data architecture. For example, the company retained the concept of data owners but “helped them understand that the data is an IBM asset, and if we’re able to democratize it in a controlled, secure way, we can run the business in a better, more productive way,” Lovely says.
As a result, IBM moved from multiple teams managing siloed data to a common team using common standards and common architectures. Enterprise leaders also consolidated 300 terabytes of data, selecting needed data based on the outcomes the company seeks and the workflows that drive those outcomes.
“We were deliberate,” Lovely says, adding that its data platform now covers about 80% of IBM workflows. “One of the greatest productivity unlocks for an enterprise today is to create an integrated enterprise data architecture. We’re rapidly deploying AI at our company because of our investment in data.”
8 tips for building a better data strategy
To build high maturity in data foundations and data consumption capabilities, organizations need a data strategy for the AI era — one that enforces data quality, breaks down data siloes, and aligns data capabilities with the AI use cases prioritized by the business.
Experts offer steps to take:
1. Rethink data ownership
“Traditional models that treat data ownership as a purely IT issue no longer work when business units, product teams, and AI platforms are all generating and transforming data continuously,” Wright explains. “Ideally, clear accountability should sit with a senior data leader such as a CDO, but organizations without a CDO must ensure that data governance responsibilities are explicitly distributed across IT, security, and the business.”
It’s critical to have “a single point of authority for defining policies and a federated model for execution, so that business units remain empowered but not unchecked,” he adds.
Manjeet Rege, professor and chair of the Department of Software Engineering and Data Science and director of the Center for Applied Artificial Intelligence at the University of St. Thomas, advises organizations to reframe data owners as data stewards, who don’t own the data but rather own the meaning and quality of the data based on standards, governance, security, and interoperability set by a central data function.
2. Break down siloes
To do this, “CIOs need to align business units around shared AI and data outcomes, because gen AI only delivers value when workflows, processes, and data sources are connected across the enterprise,” Wright says.
“This means establishing cross-functional governance, standardizing taxonomies and policies, and creating incentives for teams to share data rather than protect it,” he adds. “Technology helps through unified platforms, metadata layers, and common security frameworks, but the real unlock comes from coordinated leadership across the C-suite and business stakeholders.”
3. Invest in data technologies for the AI era
These technologies include modern data lakes and data lakehouses, vector databases, and scalable object storage, all of which “can handle high-volume, multimodal data with strong governance,” Wright says.
Organizations also need orchestration and pipeline tools that automate ingestion, cleansing, transformation, and movement so that AI workflows can run reliably end-to-end. Metadata engines and governance layers are essential to enable models to understand context, track lineage, and safely and reliably use both structured and unstructured data.
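The orchestration idea above, composing ingestion, cleansing, and transformation into one observable flow, can be sketched in a few lines. Everything here is a hypothetical illustration (the step names and record shape are invented, not any particular tool's API):

```python
# Minimal sketch of a declarative ingest -> cleanse -> transform pipeline.
# Illustrative only: step names and the record shape are hypothetical.
from typing import Callable

Record = dict
Step = Callable[[list], list]

def ingest(records: list[Record]) -> list[Record]:
    """Pretend-ingest: in practice this would read from a source system."""
    return list(records)

def cleanse(records: list[Record]) -> list[Record]:
    """Drop rows missing required fields and normalize casing."""
    return [
        {**r, "email": r["email"].strip().lower()}
        for r in records
        if r.get("email")
    ]

def transform(records: list[Record]) -> list[Record]:
    """Derive a field that downstream AI features depend on."""
    return [{**r, "domain": r["email"].split("@")[-1]} for r in records]

def run_pipeline(records: list[Record], steps: list[Step]) -> list[Record]:
    """Run steps in order so the whole flow is auditable end to end."""
    for step in steps:
        records = step(records)
    return records

rows = [{"email": " Ada@Example.com "}, {"email": ""}, {"name": "no email"}]
out = run_pipeline(rows, [ingest, cleanse, transform])
# Only the valid record survives, enriched with a derived "domain" field.
```

Real orchestration tools add scheduling, retries, and lineage on top, but the core design choice is the same: each step is a small, testable function, and the runner makes the whole flow observable.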
Build a data platform layer that is “modular, governed, and able to evolve,” Rege advises. “You need architecture that can treat data as a reusable product, and not just for a single pipeline, and can be used for both batch and real-time needs.”
Rege also endorses data lakes and data lakehouses, saying they’re “becoming the backbones of AI because they can handle structured and unstructured data.”
Additionally, Shayan Mohanty, chief AI and data officer at Thoughtworks, advises CIOs to build a composable enterprise, with modular technologies and flexible structures that enable humans and AI to access data and work across the multiple layers.
Experts also advise CIOs to invest in technologies that address emerging data lifecycle needs.
“Generative AI is fundamentally reshaping the data lifecycle, creating a far more dynamic mix of ephemeral, cached, and persistently stored content. Most gen AI outputs are short-lived and used only for seconds, minutes, or hours, which increases the need for high-performance infrastructure like DRAM and SSDs to handle rapid iteration, caching, and volatile workflows,” Wright says.
“But at the same time, a meaningful subset of gen AI outputs does persist, such as finalized documents, approved media assets, synthetic training datasets, and compliance-relevant content, and these still rely heavily on cost-efficient, high-capacity HDDs for long-term storage,” he adds. “As gen AI adoption grows, organizations will need data strategies that accommodate this full lifecycle from ultra-fast memory for transient content to robust HDD-based systems for durable archives, because the storage burden/dynamics is shifting.”
4. Automate and add intelligence to the data architecture
Mohanty blames the poor state of enterprise data on “a rift between data producers and data consumers,” with the data being produced going into a “giant pile somewhere, in what’s called data warehouses” with analytics layers then created to make use of it. This approach, he notes, requires a lot of human knowledge and manual effort to make work.
He advises organizations to adopt a data product mindset “to bring data producers and data consumers closer together” and to add automation and intelligence to their enterprise architecture so that AI can identify and access the right data when needed.
CIOs can use Model Context Protocol (MCP) to wrap data and provide that protocol-level access, Mohanty says, noting that access requires organizations to encode information in its catalog and tools to ensure data discoverability.
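The protocol-level access Mohanty describes can be illustrated with a toy catalog: an agent first discovers what governed datasets exist, then reads only what the catalog exposes. This is a plain-Python sketch of the idea, not the actual MCP SDK, and all dataset names are invented:

```python
# Sketch of the idea behind protocol-level data access: a machine-readable
# catalog an AI agent can query to discover and then fetch governed datasets.
# Illustrative only; this is not the MCP SDK, and the names are hypothetical.

CATALOG = {
    "customer_orders": {
        "description": "Completed orders, one row per order",
        "owner": "sales-data-team",
    },
    "support_tickets": {
        "description": "Open and closed support tickets",
        "owner": "cx-data-team",
    },
}

DATA = {
    "customer_orders": [{"order_id": 1, "customer_id": 7, "total": 99.5}],
    "support_tickets": [{"ticket_id": 11, "status": "open"}],
}

def list_resources() -> list[dict]:
    """Discovery call: what data exists, what it means, who owns it."""
    return [{"name": name, **meta} for name, meta in CATALOG.items()]

def read_resource(name: str) -> list[dict]:
    """Access call: fetch rows only for datasets the catalog exposes."""
    if name not in CATALOG:
        raise KeyError(f"unknown dataset: {name}")
    return DATA[name]
```

An MCP server would expose equivalent discovery and read operations over the protocol; the point of the sketch is that discoverability depends on the catalog metadata being encoded up front, exactly as Mohanty notes.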
5. Ensure structured and unstructured data is AI-ready
“Structured data is AI-ready when it is consistently formatted, well-governed, and enriched with accurate metadata, making it easy for models to understand and use,” Wright says. “Organizations should prioritize strong data quality controls, master data management, and clear ownership so structured datasets remain reliable, interoperable, and aligned to specific AI use cases.”
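Wright's quality controls can be made concrete with a minimal, hypothetical check: validate each row of a structured dataset against a declared schema and report missing values and type errors. The schema and field names here are invented for illustration:

```python
# Minimal sketch of automated data-quality checks for structured data.
# The schema and rules are hypothetical, not a specific tool's API.

SCHEMA = {
    "customer_id": int,
    "email": str,
    "signup_year": int,
}

def quality_report(rows: list[dict]) -> dict:
    """Count values failing required-field and type checks."""
    missing, bad_type = 0, 0
    for row in rows:
        for field, ftype in SCHEMA.items():
            if field not in row or row[field] is None:
                missing += 1
            elif not isinstance(row[field], ftype):
                bad_type += 1
    return {"rows": len(rows), "missing_values": missing, "type_errors": bad_type}

report = quality_report([
    {"customer_id": 1, "email": "a@x.com", "signup_year": 2024},
    {"customer_id": "2", "email": None, "signup_year": 2023},
])
# The second row contributes one missing value and one type error.
```

Running checks like this continuously, rather than at one-off audit time, is what turns "well-governed" from a slogan into a measurable property of each dataset.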
Experts stress the need to bring that same discipline to unstructured data, ensuring that unstructured data is also properly tagged, classified, and enriched with metadata so AI systems can understand and retrieve it effectively.
“You need to treat unstructured data as a first-class data asset,” Rege says. “Many of the most interesting AI use cases live in unstructured data like customer service audio calls, messages, and documents, but for many organizations unstructured data remains a blind spot.”
Rege advises storing it in vector databases where information is searchable.
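The retrieval pattern behind that advice can be sketched without any vector-database product: embed each document as a vector, then return the closest match to a query. For illustration this uses toy word-count vectors and cosine similarity; a real system would use learned embeddings and a dedicated vector store:

```python
# Toy sketch of vector-style retrieval over unstructured text.
# Bag-of-words vectors stand in for real embeddings; illustrative only.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a word-count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

DOCS = [
    "refund policy for damaged items",
    "reset your account password",
    "shipping times for international orders",
]
INDEX = [(doc, embed(doc)) for doc in DOCS]

def search(query: str) -> str:
    """Return the indexed document most similar to the query."""
    qv = embed(query)
    return max(INDEX, key=lambda pair: cosine(qv, pair[1]))[0]
```

The design point is the same at any scale: documents are indexed once as vectors, and a query is answered by nearest-neighbor similarity rather than exact keyword match.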
6. Consider external data sources and synthetic data
“Organizations should absolutely evaluate whether external or synthetic data is needed when their existing data is incomplete, biased, too small, or poorly aligned with the AI use case they’re trying to pursue,” Wright says, noting that “synthetic data becomes especially useful when real data is sensitive, costly to collect, or limited by privacy, regulatory, or operational constraints.”
7. Implement a high-maturity data foundation incrementally
Don’t wait until data is in a perfect place to start, says Shibani Ahuja, senior vice president of enterprise IT strategy at Salesforce.
“There are organizations that feel they have to get all their data right before they can pull the trigger, but they’re also getting pressure to start on the journey,” she says.
As is the case when maturing most enterprise programs, CIOs and their executive colleagues can — and should — take an incremental approach to building a data program for the AI era.
Ahuja recommends maturing a data program by working outcome to outcome, creating a data strategy and architecture to support one AI-driven outcome and then moving onto subsequent ones.
“It’s a way of thinking: reverse engineering from what you need,” Ahuja says. “Put something in production, make sure you have the right guardrails, observe it, and tweak it so it scales, then put in the next one.”
8. Take a cross-functional approach to data team building
“Data should be supported by a cross-functional ecosystem that includes IT, data governance, security, and the business units that actually use the data to drive decisions,” Wright says. “AI-era data strategy works best when these teams share ownership, where IT teams enable the infrastructure, governance teams ensure trust and quality, and business teams define the context and value.”

AI fosters distrust for both tech candidate and employer
AI is creating trust issues for recruiters, hiring managers, and job-seeking tech professionals, according to The Trust Gap in Tech Hiring 2025 report from Dice. The survey found that 80% of tech professionals trust fully human-driven hiring processes, while only 46% trust hybrid approaches combining AI and human review. Just 14% said they’d trust fully AI-driven processes, and nearly half said they would opt out of AI résumé screening if given the choice.
Main concerns from respondents include worries that AI screening tools will favor keywords over qualifications (63%), fears that qualified candidates will be rejected due to narrow criteria (63%), and a belief that a human will never see their résumé (56%).
Distrust is further amplified by a challenging tech hiring market that follows years of layoffs, economic instability, and uncertainty, as well as new concerns around job displacement due to AI. Candidates have a lower frustration tolerance for minor process issues compared to when it was easier to find a tech role, according to the Dice report. However, despite this rise in distrust, the 2025 AI in Hiring Report from Insight Global found that a staggering 99% of hiring managers report using AI in the hiring process, with 98% saying they saw significant improvements in hiring efficiency using AI.
“AI tends to blur the line between confidence and embellishment,” says Dice president Paul Farnsworth. “When candidates feel they have to beat the algorithm, it shifts focus away from real skills and experience.”
This results in a lot of candidates who look the part on paper but may not actually be the right fit, he adds. And an influx of candidates using AI to hone their résumés means more applications to sift through.
“It can also create longer hiring cycles, which means more time spent filtering through inflated résumés and more interviews that don’t lead to the right hire,” Farnsworth says. “Over time, that erodes trust in the system from both sides.”
How AI is used in hiring
In tech hiring, AI is currently most used for scanning responses, ranking candidates, and automating communication or scheduling steps. “It’s helping recruiters manage volume and improve response speed, and that’s a clear win for efficiency,” says Sara Gutierrez, chief science officer at SHL, a global talent assessment company.
AI’s greatest strength in tech hiring, though, lies in automating processes that free up recruiters and hiring managers to focus on interviews and on identifying qualified candidates. Companies need a clear strategy for where AI will be implemented and where humans will lead. Transparency about how AI is used in hiring remains essential so that it doesn’t override the human side.
“The challenge comes when AI decisions are built on data that were never meant to indicate job success, like résumé phrasing, education keywords, or past job titles,” says Gutierrez.
As Farnsworth points out, while AI can help streamline the application process and spot patterns across candidates that might have been missed otherwise, there’s always the chance that AI will also filter out great people for the wrong reasons. For example, if the AI system leans too hard on keywords or rigid templates, he adds, you risk missing out on potential talent.
“We’re seeing AI pop up in a bunch of different ways like job ad targeting and even early-stage interviews, and that can be helpful, but only if it’s used responsibly,” says Farnsworth. “The overall goal is to give hiring teams better signals so they can make smarter decisions. When AI is treated like a tool, not a gatekeeper, that’s when it adds real value.”
Lack of authenticity and individuality
Dice also found that 78% of candidates feel they need to embellish qualifications to get noticed, while 65% say they’ve modified their résumés using AI to improve chances of it being seen. This brings up further concerns. “When candidates feel they have to exaggerate just to stay competitive, it chips away at authenticity and trust,” says Farnsworth.
It’s getting to the point where it’s becoming the hiring version of an arms race, says Gutierrez. Candidates are using AI to tailor résumés for algorithms, and employers are using AI to scan them, so the resulting noise makes it harder to distinguish genuine capability from AI-optimized presentation.
The more candidates use AI to structure their résumés, the more they start to look alike, which only makes it more difficult to sift through applications to find the best fit. AI might polish résumés, but it often doesn’t reflect the person’s real experience, says Farnsworth. “It’s hard to maintain your professional brand and voice if you’re running your résumé through an AI chatbot that’ll potentially strip your personality from the final draft,” he says. “Ideally, hiring should be about who’s genuinely the best fit, not who wrote the most machine-friendly bullet points.”
Gutierrez adds that the future doesn’t require choosing between AI and authenticity, but rather finding the balance where AI handles the administrative burden, freeing humans to focus on connection and context. It’s important to establish exactly how AI will support recruiters, hiring managers, and candidates, as a well-defined strategy builds trust on both sides of the hiring process.
Leading with AI transparency
Transparency is the driving factor when implementing AI in the hiring process while still building trust. Candidates need to understand how and where AI is used, and how to use it as an assistant. There are several steps companies can take to reassure candidates that they’re being evaluated by humans, not overlooked by an algorithm.
“That kind of openness not only eases candidate anxiety, but reinforces that the tech is there to enhance the process, not replace fairness or human judgment,” says Farnsworth.
Companies should focus on transparency around how AI is used in hiring, and according to Dice, this can include:
- Assuring candidates there’s a human review for applications.
- Offering secondary human review options for AI rejections with regular audits of AI decisions.
- Establishing recruiter accountability metrics and mandatory response timelines.
- Using AI to identify potential candidates rather than eliminate them.
- Introducing match scoring to show a fit percentage for candidates.
- Keeping AI focused on admin and humans focused on high-level thinking.
- Confirming with candidates when applications are received and reviewed by humans, and notifying candidates when a position is filled.
- Offering specific feedback to rejected candidates and steering clear of generic form responses.
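The match-scoring idea in the list above can be sketched as a transparent calculation: the share of required skills a candidate covers. This is purely illustrative; production systems weight skills and combine many more signals:

```python
# Hedged sketch of a transparent "fit percentage" match score.
# Purely illustrative: real systems weight skills and use richer signals.

def match_score(candidate_skills: set[str], required_skills: set[str]) -> int:
    """Percent of required skills the candidate covers (0-100)."""
    if not required_skills:
        return 100
    covered = candidate_skills & required_skills
    return round(100 * len(covered) / len(required_skills))

score = match_score(
    {"python", "sql", "airflow"},
    {"python", "sql", "spark", "airflow"},
)  # 3 of 4 required skills covered -> 75
```

A score computed this simply is easy to show candidates alongside the requirements it was derived from, which is exactly the kind of transparency the Dice recommendations call for.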
The fears candidates express, that AI will favor keywords over qualifications, that they’ll be rejected for not fitting narrow criteria, and that a human will never see their résumé, are cause for alarm. Dice found these fears have impacted tech workers to the point that 30% of respondents say they’re considering leaving the industry due to hiring frustrations, while 24% say they’re currently committed but growing frustrated.
“That stat should be a wake-up call,” says Farnsworth. “If people feel like they’re shouting into the void, they won’t stick around. Companies need to make sure the hiring experience doesn’t feel like a black box.”
