Today — 19 December 2025

How Smartphones Stole Christmas (And What Parents Can Do About It)

19 December 2025 at 14:38

Evidence from the UK suggests the biggest distraction from Christmas togetherness won't be toys, television, or even work emails; it will be the smartphone.

The post How Smartphones Stole Christmas (And What Parents Can Do About It) appeared first on TechRepublic.

Why liquid cooling is non-negotiable in the age of AI

19 December 2025 at 13:48

AI is transforming the data center — and straining its limits.

Traditional cooling methods can’t keep up with the rising density and power demands of artificial intelligence (AI) workloads, which are expected to drive a 4.2x increase in data center energy consumption between 2023 and 2028. In response, organizations are modernizing their infrastructure to achieve new performance goals without compromising energy efficiency or sustainability.

This is the story of how Schneider Electric turned inward, using its own liquid cooling and infrastructure offerings to reshape its global IT operations.

The challenge: Cooling for AI-era demands

Schneider Electric manages the data of over 130,000 employees across more than 200 plants and distribution sites worldwide, supporting 7 million compute hours per month with 46 petabytes of live storage. It’s one of the largest internal IT footprints in the world. As AI drove a surge in demand for high-density compute, conventional air cooling became insufficient.

Schneider also faced visibility, efficiency, and uptime challenges. Coordinating and optimizing energy across global locations with different workloads and equipment required new levels of monitoring, insight, and control. These demands led Schneider to pursue liquid cooling alongside new monitoring and infrastructure management tools.

Schneider’s approach: Drink your own champagne

Liquid cooling absorbs and transfers heat away from servers more efficiently than air, allowing data centers to support hotter chips, denser racks, and higher-performance systems without significantly increasing energy use. It also helps reduce cooling energy consumption, improve thermal efficiency, and shrink the physical footprint required to run advanced workloads. These capabilities are increasingly vital for organizations balancing aggressive AI adoption with equally aggressive carbon reduction goals.

Schneider Electric first established a baseline: How much energy was its IT infrastructure consuming, and where were the biggest opportunities to reduce load and emissions? Its EcoStruxure IT Data Center Infrastructure Management platform captured real-time power and emissions data across sites, then its Resource Advisor team built a dashboard to visualize trends over time. This allowed the company to make more informed decisions about refresh cycles and new technology migrations.
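
To make the idea of an energy baseline concrete, here is a minimal sketch of how per-site power readings could be rolled up into PUE and emissions figures of the kind such a dashboard would show. The site names, readings and grid emission factor are invented for illustration and are not Schneider's EcoStruxure data or code.

```python
# Illustrative energy-baseline roll-up. Site names, readings and the grid
# emission factor are invented; this is not EcoStruxure data or code.
from dataclasses import dataclass

@dataclass
class SiteReading:
    site: str
    facility_kwh: float  # total facility energy over the reporting period
    it_kwh: float        # energy drawn by IT equipment alone

GRID_KG_CO2_PER_KWH = 0.4  # assumed average grid emission factor

def baseline(readings):
    """Return PUE and estimated emissions (tCO2e) per site."""
    report = {}
    for r in readings:
        pue = r.facility_kwh / r.it_kwh              # PUE = facility energy / IT energy
        tco2e = r.facility_kwh * GRID_KG_CO2_PER_KWH / 1000
        report[r.site] = {"pue": round(pue, 2), "tCO2e": round(tco2e, 1)}
    return report

print(baseline([SiteReading("plant-a", 120_000, 75_000),
                SiteReading("plant-b", 90_000, 70_000)]))
```

Tracking figures like these over time is what lets a team see whether a cooling upgrade or refresh cycle actually moved the needle.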

Schneider upgraded cooling systems to InRow Cooling units, deployed Smart-UPS devices to field locations to reduce downtime, and modernized its rack infrastructure with NetShelter solutions. Across all global sites, these changes addressed Schneider’s core challenges: modernizing cooling infrastructure, enhancing energy visibility, improving operational efficiency, and increasing uptime.

Results: Efficiency, resilience, and ROI

The benefits came almost immediately. In just one year, Schneider achieved:

  • 30% reduction in energy consumption and carbon emissions;
  • 50% fewer day-to-day IT tickets;
  • 6x increase in business continuity across critical sites; and
  • a payback period of under one year.

These results reinforced Schneider’s belief in liquid cooling as a driver of high-performance, sustainable infrastructure.

Rethinking infrastructure for what’s next

Schneider Electric demonstrated how AI preparations must go beyond capacity planning. Modern organizations need to rethink how they build, cool, and manage infrastructure to meet the challenges of the future.

To learn more, visit us here.

How CIOs can break free from reactive IT

19 December 2025 at 11:30

CIOs are facing rising expectations to improve outcomes across the organization, at a time when the digital workplace is becoming more complex to manage. Hybrid work has expanded the number of tools, devices and dependencies that IT must manage, increasing the strain on teams that still rely on operating models designed for simpler, office-based environments.

Invisible IT is emerging as a practical way for CIOs to minimize disruption and improve the performance of the digital workplace. At its simplest, it’s an approach that prevents many issues from becoming problems in the first place, reducing the need for users to raise tickets or wait for help.

As ecosystems scale, the gap between what organizations expect and what legacy workflows can deliver continues to widen. Lenovo’s latest research highlights invisible IT as a strategic shift toward proactive, personalized support that strengthens the performance of the digital workplace.

Fragmentation is slowing progress on CIO priorities

While the expansion of digital ecosystems has enabled faster collaboration and more flexible work, it has also created operational complexity that slows progress on core priorities. Research from MuleSoft indicates that enterprises typically use 897 applications, while Salesforce reports that only 28% are integrated. This lack of connection forces teams to work around gaps in their tools, which adds unnecessary steps and slows the flow of work across the organization.

Employees navigate a mix of channels such as email, chat and portals when seeking IT help. Each follows different processes and contains varying levels of detail, making it harder to maintain a consistent experience. Industry research adds another layer. One-third of organizations in the UK and Ireland cite too many monitoring tools and siloed data as a barrier to achieving full-stack observability. Without a unified view of their environment, CIOs lack the visibility needed to move from reactive fixes to strategic improvement.

Disconnected systems have become a major barrier to productivity and overall workforce effectiveness. When teams are stuck dealing with day-to-day operational challenges instead of improving performance, CIO priorities lose momentum.

Why reactive support models hold organizations back

In a workplace where devices, applications and services operate across different locations and conditions, a reactive support model leaves CIOs without the early signals needed to prevent interruption. Faults often emerge gradually through performance drift or configuration inconsistencies, but traditional workflows only respond once the impact is visible to users.

Lenovo’s research shows how deeply this reactive pattern is embedded. Detection still occurs late in the cycle, with 19% of organizations stating they rely on manual identification and 65% detecting issues only after they occur. Only 16% identify disruptions ahead of time. Resolution follows a similar structure, with 21% resolved manually and 55% only after an incident has already affected users. Just 24% resolve issues proactively. These cycles increase hidden operational cost, slow productivity and make resource planning difficult.

Another constraint is the limited use of personalized support. Only 27% of organizations adjust assistance to match how employees actually work. Without aligning assistance to real working patterns, problems take longer to resolve and users may face more incidents than they should.

What invisible IT looks like for CIOs

Invisible IT draws on AI to interpret device health, behavioral patterns and performance signals across the organization, giving CIOs earlier awareness of degradation and emerging risks.

Predictive, lower-friction operations
When early indicators surface, automated actions can stabilize systems or route the issue with full context. Lenovo's 2024 pilot testing shows the potential of this approach:

  • 40% of issues resolved before a ticket is created
  • 30% reduction in support costs
  • 50% faster onboarding for new employees

These improvements strengthen operational resilience.

Support aligned to real work patterns
Invisible IT uses AI-driven personas to understand how employees work, which tools they depend on and where friction occurs. Assistance adjusts accordingly, creating more consistent experiences across hybrid and distributed teams and helping people stay productive wherever they are.

Strengthening capability, not cutting headcount
This maturity shift elevates IT teams rather than shrinking them. Only 12% of leaders expect headcount to fall. Automation manages routine tasks, giving IT teams more capacity to focus on long-term transformation, culture change and continuous improvement.

What CIOs can prioritize next

Lenovo’s research shows that fragmented systems are the single biggest barrier to change, cited by 51% of leaders. Addressing this requires a coordinated shift in how information flows through the digital workplace.

Build a unified view of the digital workplace
Connecting device, application and support data creates the conditions for proactive operations. When signals are consolidated, CIOs gain a clearer picture of where new automation can deliver value.

Develop the next generation of IT capability
Roles in IT are shifting away from queue-based resolution toward work that reduces disruption before it reaches employees. CIOs can enable this transition by helping teams build confidence in interpreting early signals and by redesigning workflows so routine faults no longer require manual intervention. As teams become more comfortable with AI-generated insights and automated processes, the organization is better equipped to adapt to the growing complexity of the digital workplace.

Use partners to embed new operating practices
Expert partners can help integrate data sources, test predictive models and validate early outcomes. This reduces the risk associated with modernizing the operating model and helps CIOs embed new ways of working more quickly and consistently.

Practical actions to apply now

  • Identify the top ten recurring pain points and apply pre-ticket detection to reduce noise (see the sketch after this list).
  • Nominate a single leader responsible for proactive support metrics.
  • Update incident taxonomies so AI can classify problems accurately.
  • Pilot one invisible IT use case with a business unit to demonstrate value before scaling.
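
As a rough illustration of the pre-ticket detection idea in the first bullet, the sketch below scans device telemetry for known-bad patterns and opens a proactive work item before any user raises a ticket. The signal names and thresholds are hypothetical and are not Lenovo's implementation.

```python
# Hypothetical pre-ticket detection: flag devices whose telemetry crosses a
# known-bad threshold before users notice and raise tickets.
THRESHOLDS = {"disk_latency_ms": ("max", 150), "battery_health_pct": ("min", 60)}

def pre_ticket_checks(telemetry):
    """telemetry: list of dicts such as {"device": "laptop-042", "disk_latency_ms": 210}."""
    actions = []
    for sample in telemetry:
        for signal, (kind, limit) in THRESHOLDS.items():
            value = sample.get(signal)
            if value is None:
                continue
            if (kind == "max" and value > limit) or (kind == "min" and value < limit):
                # Open a proactive work item instead of waiting for a user ticket.
                actions.append({"device": sample["device"], "signal": signal, "value": value})
    return actions

print(pre_ticket_checks([{"device": "laptop-042", "disk_latency_ms": 210},
                         {"device": "laptop-101", "battery_health_pct": 48}]))
```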

A more proactive future for digital leadership

Invisible IT gives CIOs a clearer path to shaping a digital workplace that strengthens productivity and resilience by design. By shifting from user-reported issues to signal-driven insight, CIOs gain earlier visibility into risks and greater control over how disruptions are managed.

Adopting this model also frees technology leaders to focus on long-term transformation, culture change and strategic improvement rather than day-to-day firefighting. Organizations that invest in proactive capabilities now will be better positioned to guide the next phase of digital workplace evolution.

To explore the full findings and recommendations, read Lenovo’s latest Work Reborn report.

After the cloud: The future of compute is everywhere

19 December 2025 at 11:30

Cloud computing reshaped how organizations build, deploy, and scale digital services, and it’s been the backbone of transformation for nearly two decades. But as 2025 draws to a close, something new is happening. Cloud computing has reached maturity; I’d argue it’s even reached saturation.

Every enterprise operates in some hybrid or multi-cloud form. Costs are rising while SaaS fatigue is setting in. AI is rewriting infrastructure economics and sustainability concerns are mounting.

The cloud isn’t fading away, though. It’s evolving into something broader: a distributed fabric of compute that stretches from hyperscale data centers to on-premises clusters and the edge of networks and devices. The next era of IT will not be defined by where workloads run, but by how intelligently they move.

Peak cloud and the hybrid reality

A decade ago, cloud migration was a mandate. We all saw numerous boards ask, “How fast can we move?” Today, though, that question has changed to, “What should stay and what should come back?”

Enterprises have learned that not all workloads belong in public clouds. Performance, cost and compliance simply vary too widely. The result is a deliberate hybrid reality where compute is distributed by design.

From what I’ve observed, nearly all large enterprises now operate across three or more cloud providers, emphasizing interoperability and cost control as top priorities. Cloud has become the default fabric of IT, but that means it’s no longer a differentiator.

I would assert that the strategic shift here is from migration to optimization. CIOs’ focus now lies in orchestrating across platforms, negotiating value and deciding which workloads create the most impact in each environment.

To put it more succinctly, I would argue that the cloud conversation has matured from “How do we get there?” to “How do we get smarter about what runs where?”

AI and the new compute gravity

Artificial intelligence has introduced a certain gravity to cloud computing. AI workloads are massive, power-hungry, and location-sensitive. They pull data and compute closer together and, as I’ve seen with many of my clients, reshape entire data centers’ economics.

Meanwhile, cloud providers are no longer just service vendors; they’re infrastructure engineers designing GPUs, AI-specific chips and advanced cooling systems. Enterprises are beginning to mirror that behavior at a smaller scale, building private GPU clusters to control costs and manage sensitive data.

The boundary between cloud, hardware and AI is fast becoming a mirage. Compute has become a strategic resource instead of a commodity. For CIOs, this means thinking beyond “cloud services” to a new model of compute availability: the ability to run intelligent workloads wherever they deliver the most value.

In this environment, power, proximity and compute performance are the new currencies of cloud strategy.

The SaaS slowdown and rise of consumption models

As we’ve watched the cloud mature, SaaS has become both indispensable and utterly exhausting. Subscription models once promised predictability, but they’ve now created renewal fatigue. Enterprises pay again and again for capabilities they rarely use, while per-user licensing and seat expansion inflate costs each year.

At the same time, AI services are introducing a different pricing paradigm: pure consumption. Instead of fixed subscriptions, usage is measured per inference, per token and per API call. Organizations pay only for what they consume and see direct correlation between cost and compute value.
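
A toy calculation makes the governance point concrete: under consumption pricing, spend scales directly with traffic and prompt size rather than with seat count. Every price below is a placeholder, not any vendor's actual rate.

```python
# Toy consumption-pricing estimate. All prices are placeholders, not real rates.
PRICE_PER_1K_INPUT_TOKENS = 0.002    # assumed $ per 1,000 input tokens
PRICE_PER_1K_OUTPUT_TOKENS = 0.006   # assumed $ per 1,000 output tokens
PRICE_PER_API_CALL = 0.0001          # assumed flat $ per call

def monthly_cost(calls, avg_input_tokens, avg_output_tokens):
    token_cost = calls * (avg_input_tokens * PRICE_PER_1K_INPUT_TOKENS +
                          avg_output_tokens * PRICE_PER_1K_OUTPUT_TOKENS) / 1000
    return token_cost + calls * PRICE_PER_API_CALL

# 5 million calls a month, averaging 800 input and 300 output tokens per call.
print(f"${monthly_cost(5_000_000, 800, 300):,.0f} per month")
```

Double the average prompt length and the bill roughly doubles with it, which is exactly the variability that new financial governance has to track.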

I believe that this will be the dominant model for software in the decade ahead. It’s a paradigm that aligns incentives for both customers and providers while rewarding efficiency and transparency. However, it also demands new financial governance from IT leaders.

In the coming years, CIOs will need real-time visibility into variable spend, predictive budgeting tools and procurement models built around outcomes instead of licenses. Software will no longer be something you own; it will be something you continually earn through value delivered.

Edge computing and the rise of the micro-cloud

At the same time, compute is moving to the edge. As IoT, analytics and AI converge, organizations can no longer afford the latency or bandwidth of sending every transaction back to a “hyperscaler” region.

Edge computing brings processing closer to where data originates: on factory floors, in hospitals, at retail stores or even within telecom towers. Verizon, AT&T and other providers are investing heavily in distributed “micro-clouds” that operate at network edges, which enable real-time decisioning for connected systems.

Enterprises are following suit, deploying localized compute infrastructure inside their own facilities to handle high-volume or sensitive workloads. These micro-clouds act as intelligent satellites in that they’re connected to the larger ecosystem, but optimized for speed, sovereignty and control.

For CIOs, this distributed architecture introduces new considerations: how to secure and manage assets across hundreds of locations, how to unify governance and how to ensure resilience when workloads shift between core and edge. The infrastructure of the future will look less like a data center and more like a constellation of compute nodes.

Sustainability and the power problem

AI’s explosive growth has exposed compute’s physical limits. Data centers that once measured efficiency in kilowatts are now measuring in megawatts. Water usage and carbon footprints have become board-level topics.

The environmental cost of intelligence is real. A single large-scale AI training run can consume as much energy as hundreds of homes use in a year. For technology leaders, this changes the definition of scalability.
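
A back-of-the-envelope check shows the order of magnitude involved; every figure here is an assumption chosen for illustration, not a measured number.

```python
# Rough energy estimate for a large training run; all inputs are assumptions.
gpus = 10_000
watts_per_gpu = 700            # assumed average draw per accelerator
days = 30                      # assumed run length
home_kwh_per_year = 10_500     # rough annual electricity use of a US household

run_kwh = gpus * (watts_per_gpu / 1000) * 24 * days
print(f"{run_kwh:,.0f} kWh, roughly {run_kwh / home_kwh_per_year:,.0f} household-years of electricity")
```

Under those assumptions a single run lands in the hundreds of household-years, which is the scale the claim describes.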

The next wave of innovation will hinge on efficiency: new cooling methods, renewable microgrids and smarter workload distribution that minimizes compute waste. In many organizations, sustainability officers are now sitting alongside CIOs to shape infrastructure decisions.

This tells me that the future of cloud strategy will be measured not only in uptime or cost, but in carbon per compute. The enterprise that can do more with less energy will have both an operational and reputational advantage.

Leadership in the post-cloud era

The CIO’s role is evolving even faster than usual!

In the first era of cloud, leadership meant driving migration. In the second, it meant managing cost and security. In this third era, where compute reigns supreme, leadership means optimization and orchestration.

The modern CIO must balance five dimensions:

  • Economics: consumption vs. commitment
  • Performance: proximity vs. scale
  • Security: control vs. agility
  • Sustainability: efficiency vs. expansion
  • Innovation: stability vs. experimentation

This is fundamental strategic governance, not some siloed balancing act. It requires partnership with finance, operations, sustainability and product teams to ensure that compute decisions align with business outcomes. It reflects the reality that transformation succeeds as one coordinated motion; it can't happen in a fragmented paradigm.

As AI accelerates, this orchestration mindset will become essential. Not every model needs to live in the cloud, but every enterprise must learn how to manage a distributed fabric of compute that spans cloud, data center and edge.

The cloud is everywhere, and so is opportunity

Cloud computing is not disappearing; it’s decentralizing. The boundaries between public, private and edge are dissolving into a single continuum of compute.

For CIOs, this evolution brings both complexity and opportunity. The challenge is no longer simply to migrate, but to design architectures that can adapt: placing intelligence, data and compute exactly where they create the most value.

Ultimately, the future of cloud is about liberation: the ability to run, learn and scale anywhere. The enterprises that master this flexibility will define the next era of digital leadership.

This article is published as part of the Foundry Expert Contributor Network.

Bridging observability gaps: How modern enterprises stop losing millions

19 December 2025 at 10:10

In today's digital-first era, IT and business teams wake up to alerts that critical services are down: dashboards look fine, but real users are frustrated. Hidden blind spots in legacy monitoring put companies at risk of costly outages, user dissatisfaction and serious revenue losses. In 2025 alone, one in eight enterprises loses over $10 million per month to these issues, while about half lose more than $1 million every month to undetected disruptions.

Why traditional observability fails

Most observability tools were built to watch internal systems: monitoring MELT (metrics, events, logs, traces) on owned servers, containers and apps. But the modern enterprise runs not only on custom code, but also on SaaS platforms, APIs, external cloud providers and multi-region networks. The health of Internet components—DNS, SSL, routing and ISP reliability — now directly impacts the user experience.

Traditional Application Performance Monitoring (APM) tools struggle to capture and interpret issues that arise beyond your infrastructure. As a result, even as backend metrics report “green,” users experience outages, slowdowns and errors that go undiagnosed.

End-to-end visibility: APM plus IPM

To close these blind spots, leading organizations now couple APM with internet performance monitoring (IPM). While APM provides an inside-out view — tracking internal code, system health and traces — IPM offers the outside-in perspective. It actively tracks global Internet health, the performance of APIs, cloud services, regional ISP health and more. Together, they provide teams with real-time, end-to-end visibility from code to end user, regardless of where the disruption occurs.
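
To make the outside-in view concrete, here is a minimal probe using only the Python standard library that measures DNS resolution time, TLS handshake time and certificate expiry for a host. It sketches the kind of signal IPM products collect from many vantage points; it is not any particular vendor's agent.

```python
# Minimal outside-in probe: DNS timing, TLS handshake timing and certificate
# expiry. Real IPM agents probe from many global vantage points and also test
# routing, CDN and ISP paths; this is only an illustrative sketch.
import socket, ssl, time

def probe(host, port=443):
    t0 = time.perf_counter()
    addr = socket.getaddrinfo(host, port)[0][4][0]          # DNS resolution
    dns_ms = (time.perf_counter() - t0) * 1000

    ctx = ssl.create_default_context()
    t1 = time.perf_counter()
    with socket.create_connection((addr, port), timeout=5) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:  # TCP + TLS handshake
            tls_ms = (time.perf_counter() - t1) * 1000
            not_after = tls.getpeercert()["notAfter"]
    days_left = int((ssl.cert_time_to_seconds(not_after) - time.time()) // 86400)
    return {"dns_ms": round(dns_ms, 1), "tls_ms": round(tls_ms, 1), "cert_days_left": days_left}

print(probe("example.com"))
```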

Notably, organizations such as SAP, IKEA and Akamai have leveraged this dual approach to achieve significant improvements, including faster incident detection, reduced downtime and better alignment of IT and business outcomes. Teams can now measure the actual impact of service outages on customer satisfaction and revenue, not just system uptime.

The role of OpenTelemetry for data unification

OpenTelemetry (OTel) has emerged as the glue binding APM and IPM ecosystems. As an open standard, OTel standardizes how traces, metrics and logs are collected across heterogeneous systems. Adopting OTel helps enterprises avoid vendor lock-in, standardize cross-platform monitoring and reduce tool sprawl and complexity, according to CNCF.

For instance, a retail enterprise could deploy OTel SDKs in its mobile and web apps, feeding telemetry data simultaneously to both APM and IPM systems and providing centralized, actionable dashboards for both operations and business analysis.
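
A minimal sketch of that pattern with the OpenTelemetry Python SDK: the application is instrumented once, and the resulting OTLP spans can be shipped to whichever APM or IPM backend sits behind the collector. The endpoint, service name and attributes are placeholders.

```python
# Minimal OpenTelemetry tracing setup: instrument once, export OTLP spans to
# any compatible backend. Endpoint, service name and attributes are placeholders.
# Requires: pip install opentelemetry-sdk opentelemetry-exporter-otlp
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "web-checkout"}))
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://otel-collector:4317", insecure=True)))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout")
with tracer.start_as_current_span("place-order") as span:
    span.set_attribute("cart.items", 3)  # business context travels with the trace
```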

Why centralize observability operations

With increased complexity and the growing cost of fragmented tools, many enterprises are forming centralized observability teams. These teams standardize tool selection, enforce best practices and ensure that observability is aligned with business priorities — not just tech KPIs. This consolidation reduces licensing and training expenses, prevents tool sprawl and improves collaboration and agility.

EMA research and Elastic surveys confirm that centralized teams are most likely to champion IPM adoption, acknowledging that external Internet paths are now as critical as internal code paths to user experience and business outcomes.

Real-world use cases: Preventing blind spots and business losses

1. Retail e-commerce: Outage on Black Friday

Scenario: A global retailer experiences slow-loading web pages for customers in Asia during Black Friday, even as internal dashboards show healthy traffic and low server latencies.

Resolution: With IPM, the team traces the problem to a regional ISP routing issue affecting CDN performance in Asia. Early detection allows rerouting and preemptive communication with customers, saving millions in potential lost transactions and preserving brand reputation.

2. Digital communication platforms: Slack

Scenario: Slack relies on observability to ensure reliable and timely message delivery across the globe. When intermittent message lags are reported, traditional logs offer no clues.

Resolution: Observability tools correlate real user monitoring with backend performance and external API health, enabling rapid diagnosis and resolution of issues, thereby keeping communication flowing and minimizing disruptions.

3. Financial services: Real-time transaction monitoring

Scenario: A bank’s transaction engine periodically fails to update account records after third-party payment API integrations are disrupted by regional Internet outages. Internal APM tools do not register any code exceptions.

Resolution: By integrating IPM, the bank detects anomalies in Internet traffic patterns affecting APIs, resolves issues promptly and strengthens trust in financial operations.

4. Healthcare applications: Telemedicine reliability

Scenario: A telemedicine platform faces sporadic video connection drops for patients in certain rural areas. Internal systems operate normally, leaving staff without clear answers.

Resolution: Combining IPM with APM reveals that last-mile ISP instability and DNS resolution issues are to blame, not core services or app code. With this knowledge, the provider helps users switch to more reliable networks and invests in geo-distributed failovers for critical APIs.

5. Education systems: Data pipeline and grade record integrity

Scenario: An education provider’s grading system silently overwrites student records due to a malfunction in pipeline integrations, undetected for days.

Resolution: With data observability, the team is alerted to anomalies in data freshness and schema changes within minutes, thereby preserving data integrity and protecting student outcomes.
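
A minimal sketch of the freshness and schema checks such data observability tooling automates; the table name, expected columns and staleness threshold are invented for the example.

```python
# Illustrative data observability checks: alert when a table goes stale or its
# schema drifts from what downstream jobs expect. Names and thresholds are invented.
from datetime import datetime, timedelta, timezone

EXPECTED = {"grades": {"columns": {"student_id", "course_id", "grade", "updated_at"},
                       "max_staleness": timedelta(hours=6)}}

def check(table, observed_columns, last_updated):
    spec, alerts = EXPECTED[table], []
    if set(observed_columns) != spec["columns"]:
        alerts.append(f"{table}: schema drift {set(observed_columns) ^ spec['columns']}")
    if datetime.now(timezone.utc) - last_updated > spec["max_staleness"]:
        alerts.append(f"{table}: stale, last updated {last_updated.isoformat()}")
    return alerts

print(check("grades", ["student_id", "course_id", "grade"],
            datetime.now(timezone.utc) - timedelta(hours=26)))
```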

The payoff: Modern observability in action

Organizations embracing centralized observability tools that support OpenTelemetry consistently achieve:

  • Faster incident resolution: Problems are identified and addressed within minutes, not hours.
  • Lower operational costs: Fewer redundant tools and improved efficiency lead to tangible savings.
  • Superior user experience: Monitoring what truly matters to end users closes gaps before they turn into headlines.
  • Greater alignment with business goals: Observability metrics directly support business KPIs, safeguarding revenue and reputation.

In 2025 and beyond, observability is about much more than uptime; it is the foundation for business resilience, customer trust and IT value generation.

This article is published as part of the Foundry Expert Contributor Network.

The rise of invisible AI will redefine CX

19 December 2025 at 09:15

In the next few years, “invisible AI” will fundamentally change how enterprises approach customer experience (CX). The concept is simple yet transformative: the most effective AI will be the least visible — seamlessly integrated into workflows, guiding customer service teams, supporting managers, and surfacing insights in real-time without adding complexity. Workforce Engagement Management (WEM) platforms represent one early example of this shift toward intelligence that empowers rather than interrupts.

At SuccessKPI, we believe invisible AI represents the next great leap in customer experience — one that places humans back at the center of technology. While over 80% of AI projects industrywide fail to meet objectives, invisible AI succeeds because it starts with the end in mind: measurable business outcomes such as ROI, compliance, customer satisfaction and agent empowerment.

This is not about deploying AI for AI’s sake. It is about achieving real results for humans — quietly, efficiently and continuously.

The evolution of engagement: From tools to intelligence

Traditional CX platforms rely on dashboards, manual workflows and sampling-based analytics. As customer expectations, regulatory pressures and hybrid work models grow more complex, these systems can become bottlenecks instead of enablers. Leaders need deeper insights and more automation, but must avoid adding friction for agents or managers.

Invisible AI solves this problem by operating behind the scenes — listening, learning and supporting without requiring users to learn new tools. It continuously monitors calls, chats and interactions to evaluate sentiment, compliance and intent, delivering timely nudges, risk alerts and guidance exactly when needed.

Agents stay focused on customers. Managers stay focused on strategy. AI quietly handles the heavy lifting.
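
A deliberately simplified sketch of that "nudge when sentiment dips" behavior: a production platform would score live transcripts with trained models, so treat the keyword scorer and threshold below as placeholders for that machinery.

```python
# Toy sentiment-dip alert. A real system would use trained models on live
# transcripts; the keyword scorer and threshold are placeholders.
NEGATIVE = {"cancel", "frustrated", "angry", "refund", "terrible"}

def sentiment(utterance):
    return -sum(word.strip(".,!?'") in NEGATIVE for word in utterance.lower().split())

def supervise(turns, dip_threshold=-2):
    running = 0
    for i, turn in enumerate(turns):
        running += sentiment(turn)
        if running <= dip_threshold:
            return f"alert supervisor at turn {i}: running sentiment {running}"
    return "no intervention needed"

print(supervise(["I ordered last week", "this is terrible, I'm frustrated",
                 "I want a refund or I'll cancel"]))
```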

Why cloud-based AI wins over DIY AI

A critical enabler of invisible AI is the shift to cloud-native intelligence. Enterprises and customer service organizations increasingly recognize that:

  • Cloud-based AI delivers faster innovation because models improve continuously without internal rebuilds.
  • It scales instantly to support peaks in interaction volume without costly hardware or engineering.
  • It dramatically reduces total cost of ownership, eliminating the need to hire specialized AI talent, manage infrastructure or retrain models manually.
  • Security, compliance and resilience benefit from the collective investment of leading cloud providers and AI platforms.

Building AI infrastructure in-house may seem appealing, but the pace of model evolution means internal systems become outdated almost immediately. Invisible AI requires constant learning, tuning and deployment — a cycle only cloud-based platforms can realistically sustain at enterprise scale.

From reactive to predictive: Quality elevated at scale

Historically, quality processes relied on random sampling and slow feedback cycles. Invisible AI changes this entirely. Every interaction can be analyzed automatically, scored for compliance and sentiment and grouped by themes or emerging issues.

Leaders gain real-time intelligence about risks, billing issues, product failures or shifts in customer tone — long before they escalate. Agents receive immediate, supportive guidance instead of waiting for quarterly reviews. Quality improves while reducing bias, workload and manual analysis.

Invisible AI thrives when organizations define their desired outcomes from the start, such as:

  • Improving customer satisfaction and loyalty
  • Ensuring compliance and reducing risk
  • Enhancing agent performance and retention
  • Driving measurable ROI through efficiency gains

The technology then supports these goals organically, upgrading itself over time without requiring users to adapt or retrain.

This mindset — building AI results, not AI tools — is what separates successful implementations from the 80% that fail.

Automation paradoxically makes workplaces more human. By removing repetitive tasks and surfacing contextual insights, invisible AI allows people to do what they do best: empathize, problem-solve and build trust.

Imagine an agent who automatically sees emotional signals, historical interactions and account insights without searching or switching screens. Imagine a supervisor alerted instantly when sentiment dips so they can intervene proactively. That’s invisible AI in action.

One hallmark of invisible AI is continuous, silent improvement. Models grow more accurate, compliance frameworks update automatically and sentiment detection adapts to new languages and cultural nuances — all without retraining sessions or system downtime.

For CIOs and technology leaders, this means stability paired with continuous progress — a rare combination in enterprise transformation.

The future: AI that disappears into great experiences

By 2026, invisible AI will be indispensable to customer experience operations. Early adopters will enjoy stronger outcomes, more efficient operations and more empowered employees.

As AI grows more advanced, it will also grow less visible. The future is technology that blends so naturally into workflows that users barely realize it’s there — they simply notice that everything works better.

That is the true promise of invisible AI: not to replace humans, but to elevate them.

The organizations that lead the future won’t be the ones promoting their AI. They’ll be the ones whose customers and employees hardly notice it at all — only that their experiences feel effortless.

And that’s how we’ll know the future has arrived.

This article is published as part of the Foundry Expert Contributor Network.

Amazon’s new AI team will report to CEO

19 December 2025 at 09:10

Amazon has set up a new in-house AI organization reporting directly to the CEO, responsible for its Nova range of AI models, silicon development (the Graviton, Trainium, and Nitro chips) and the company's emerging quantum computing work. The move brings AI and advanced technology research into the heart of Amazon itself, where previously it sat within Amazon Web Services.

The new organization will be headed by Amazon veteran Peter DeSantis, who launched the company's EC2 cloud computing service and is the current leader of AWS's Utility Computing Services.

Amazon CEO Andy Jassy announced the change in a letter to staff, saying DeSantis had a track record of solving problems at the edge of what’s technically possible. “With our Nova 2 models just launched at re:Invent, our custom silicon growing rapidly, and the advantages of optimizing across models, chips, and cloud software and infrastructure, we wanted to free Peter up to focus his energy, invention cycles, and leadership on these new areas.”

DeSantis will report directly to Jassy, rather than to AWS CEO Matt Garman, underscoring the importance of AI to the company.

DeSantis has some history here. Last year, he took to the stage at re:Invent to announce the launch of 10p10u, an enhanced network developed to handle the expected increase in AI-generated traffic.

The more things change…

The introduction of the new division within Amazon itself is a major shift in philosophy. Sanchit Vir Gogia, Chief Analyst at Greyhound Research, said that the move was not a vanity reshuffle. “Amazon is admitting that AI is now inseparable from infrastructure economics, infrastructure control, and infrastructure power. The unit of decision shifts from ‘Which service do I try?’ to ‘Which stack do I align with?’ Within this framing, cloud and AI stop being parallel tracks in your organisation but merge into a single platform governance problem.”

While the reorganization has big consequences within Amazon, customers for its cloud computing services are unlikely to see much difference, at least for now.

… the more they stay the same

Brian Jackson, Research Director at Info-Tech, thinks that, in the immediate term, there will be no difference in the way that companies buy from Amazon.  “I don’t see this affecting the way that customers engage or use their AWS services. AWS has its AI products clearly defined now, with agents that help developers produce code, Bedrock to develop applications that leverage a range of LLM options, and then there is SageMaker and Nova Forge for different aspects of AI training. Those products will remain the same,” he said.

The development of Nova 2 could provide an interesting option for organizations. “Amazon positions the Nova family of LLMs as providing outputs that are almost as good as the very best models at a fraction of the cost,” said Jackson. “If you’re looking for an LLM to solve a specific business problem, Amazon Nova is an option you’ll consider and test to see if its outputs are going to be just as effective for your use case as GPT 5.2, and deliver it at a much lower cost per token.”

Justin Tung, Senior Principal Analyst at Gartner, agrees that the Amazon move will offer its customers a genuinely new option.  “While organizations seeking the most advanced, high-performance models may not view Nova 2 as the leading option, it provides a compelling balance of speed, accuracy and, importantly, cost. For many enterprise use cases, the ideal model is not necessarily the most powerful, but the one that delivers reliable performance at a more accessible price point – and in that regard, Nova 2 remains a strong and practical choice.”

Quantum computing on the horizon

Perhaps the more interesting move is the inclusion of the nascent quantum computing effort within the new organization. Eric Kessler, general manager of AWS Braket, speaking at re:Invent this year, said that Amazon estimates fault-tolerant quantum computing will be possible for scientific use cases by the end of the decade.

“Putting quantum computing in this new division makes sense because Amazon views AI and quantum computing as having a mutually beneficial relationship. That is, AI will be used to advance quantum computer design, and quantum computing will be used to advance AI, by acting as data samplers to generate high-quality training data,” Jackson said.

Greyhound’s Gogia said that by pairing quantum with AI and silicon and bringing the whole division within Amazon itself, the company is signalling that specialized compute will increasingly be consumed as managed infrastructure with integrated services, rather than as stand-alone exotic experiments. “If CIOs treat quantum as a branding exercise without governance, they will burn credibility and budget. If they treat it as disciplined R&D with milestones, they build readiness at low cost.”

He added that “Quantum is not a mainstream production lever yet, but it is strategically rational for Amazon to keep it close to its AI and silicon agenda, because when quantum does become useful for narrow domains, the organizations that will move first are the ones that treated it like a governed capability.”

The integration of these advanced services under the broader Amazon umbrella underlines how important AI is going to be to organizations in the future. Amazon recognizes that it's not just an aspect of IT services but a crucial element in every purchasing decision.

Nvidia and Google Back $6.6B AI Startup Lovable

19 December 2025 at 06:57

Vibe coding firm Lovable has closed a $330 million Series B funding round, catapulting the Swedish company to a $6.6 billion valuation.

The post Nvidia and Google Back $6.6B AI Startup Lovable appeared first on TechRepublic.

Technology for navigating the complexity of the toy market

19 December 2025 at 05:43

December is an intense month for the toy industry, the high point of the Christmas season. Elevated demand makes it crucial for the bottom line: according to projections from the Asociación Española de Fabricantes de Juguetes (AEFJ), 60% of annual sales will be concentrated in this campaign.

Christmas or not, the toy industry is a powerful economic player. Globally, according to 2023 data from Circana for the International Council of Toy Industries, it generates sales worth $108.7 billion. Euromonitor's figures (from 2024, and including video games in the total) already put it at $279 billion. In Spain, the AEFJ forecasts a 2.5% rise in revenue this year, with average Christmas spending on toys of 195 euros per child.

The list of the most-wanted toys includes every kind of product. Amazon's annual Christmas ranking lists construction sets, robots, dolls, play kitchens and classics such as card games. This wide range highlights another point that runs through analyses of the sector: this is a complex market that must respond to many variables. The toy industry's end customers are not only children (and the family members who actually make the purchase), but also adults buying for themselves. This is the emerging kidults category, "a key growth segment," as Euromonitor concludes. In fact, many toys are bought out of nostalgia, emotional comfort, social connection or even escapism, according to the analysis firm.

On top of the variety of the target audience and its reasons for buying come regulation, growing expectations about the scope and values of toys, and the challenges set by shifting market trends. Playing may look simple, but meeting those expectations is not. "The toy industry evolves continuously, integrating new narratives and values without giving up the essence of play," says Marta Salmón, president of the AEFJ, at the presentation of the association's forecasts. "This capacity for constant renewal is key to keeping the market dynamic and competitive," she adds.

Facing such a complex and varied market requires innovation and technology.

Technology to innovate and keep pace

"The toy sector has always been something of a pioneer in adopting certain technologies and incorporating them into its products," Joaquín Vilaplana, director of innovation and sustainability at AIJU Instituto Tecnológico de Productos Infantiles y Ocio, located in the epicenter of the Spanish toy industry, the Valle del Juguete, tells CIO ESPAÑA. "It is a sector that is bold when it comes to product innovation," he adds. That translates not only into launching technology-based toys (which must nonetheless keep prices competitive enough to appeal in a cost-sensitive market: "it has to be good, attractive and cheap"), but also into the use of technology tools in day-to-day operations.

The sector is "moving toward Industry 4.0," adding solutions to monitor and plan production, control systems and other tools for optimized, efficient manufacturing. "Depending on the size of the company, methodologies are continuously being updated," Vilaplana notes. Technology is not only "in the product itself, but also in the earlier design and manufacturing phase." The sector combines "traditional ways of making and producing" with the adoption of cutting-edge new tools.

Data analysis helps the sector understand trends and fine-tune product launches (which, even if they make their money in December, must already be shown at the trade fairs at the start of the year), as well as improve production planning. "It is very important to have good forecasting capability, because otherwise it is impossible to cover everything," the expert says, explaining that risk is becoming harder to manage and that purchasing-commitment models at trade fairs are changing. "Orders are confirmed later and later, and the room to react keeps shrinking," he notes.

Likewise, the very nature of the toy market makes technology even more critical. In a sector with such pronounced seasonality, companies need to handle the peak without anything failing, so logistics management tools are essential. The internet boom has also changed the state of play: the sector has had to adapt to online sales. "The company has to be able to supply or deliver within very tight, narrow windows," the expert adds.

Joaquín Vilaplana, director of innovation and sustainability at AIJU Instituto Tecnológico de Productos Infantiles y Ocio (photo: AIJU)

Toys and the challenges of the 21st century

Technology is also helping toys navigate the challenges of the 21st century, such as becoming more sustainable or dealing with new raw materials, while opening up new opportunities, such as making them potentially more inclusive.

"There is an endless range of product families that, while classics, have potential when it comes to integrating new technologies or achieving that inclusive focus," Vilaplana points out. His technology center runs "usability and playability evaluations" that help to understand how toys work for each population group and to add features that make a product "more inclusive," so that it "can be used by audiences with certain limitations." In doing so, toys also gain educational value around diversity. "A toy is an educational element," he recalls, one that can "strengthen certain capacities and develop certain skills and attitudes."

Another big challenge is raw materials. Some estimates suggest that 90% of toys worldwide are still made of plastic, making toy manufacturing one of the most plastic-intensive industries on the planet. Many companies have launched R&D programs to find much more responsible or more easily recyclable alternatives. Abandoning plastic entirely is not straightforward. Lego, for example, stopped using recycled bottle plastic in 2023 when it found the material did not reduce its carbon footprint (the company insisted at the time that it remained committed to sustainability and was exploring alternatives). Even so, the toy industry points out that toy plastic can be recycled, although it is not easy because there is nowhere near the scale of, say, packaging plastic.

Finally, toys must also contend with an array of regulations, an area where technology plays a major role. "One requirement that will eventually have an impact is the digital product passport," Vilaplana explains. It is a challenge that will touch every manufacturing sector, but one the toy industry will have to tackle with its own peculiarities.

A child with a 3D-glasses toy (photo: AIJU)

The AI era

Another major challenge the toy industry will undoubtedly face is artificial intelligence, which is already reaching finished products. Beyond its role in supporting manufacturing or marketing, AI could settle in as yet another complex element of a toy's own identity, one more piece of the game. Newly minted companies such as Curio are embedding AI directly into toys, creating dolls that double as chatbots. "We combine technology, safety and imagination, creating a play world where science and stories come to life" is how the company introduces itself.

Alongside the startups experimenting with potential products come moves by the big players. This very December, Disney announced a strategic agreement with OpenAI. The headline is that Sora will be able to generate short videos (non-commercial, since they are for fans) featuring characters from the multinational's universe, although the potential could go further: the press release says the technology may be integrated into more areas. Mattel's move is clearer still; it closed its own agreement with OpenAI at the start of the summer. "Each of our products and experiences is designed to inspire fans, entertain audiences and enrich lives through play. AI has the power to expand that mission and broaden the reach of our brands in exciting new ways," said Josh Silverman, Mattel's chief franchise officer, at the time. The announcement already made clear that "AI-powered products" were on the way.

Even so, this industry shift is not free of criticism and doubt. After all, AI already drives these debates outside the toy world, and it is only to be expected that it will do so inside it too. As a test by The Guardian with one of these AI dolls shows, questions are being raised about the impact on privacy, social relationships and even mental health. There are fears it will open what has already been dubbed the empathy gap, with children losing sight of the fact that these are toys and not people, or that a new digital divide will emerge between families able to navigate this new context and those that cannot. In short, toys face the same challenges this technology meets in other sectors.

Digital sovereignty, key to accelerating the safe use of generative AI

19 December 2025 at 05:14

How should companies adapt to the changes that the arrival of generative artificial intelligence brings? What risks does it pose in organizations' day-to-day operations? For many of them, generative AI is "a wave" that is "washing over us," one that is hard to adapt to because it often arrives "without being bought" and without management having established a clear process for fitting it into production workflows. All of this was discussed at a working lunch held on 11 November by CIO ESPAÑA in collaboration with Red Hat and Accenture, moderated by Fernando Muñoz, director of CIO Executive España, at which technology leaders from different organizations shared their views on these tools and on the concept of digital sovereignty, increasingly relevant for companies.

Manuel Tarrasa Sánchez, CIO and CTO of TuringDream, pointed to three big challenges for adopting generative AI. "The first," he noted, "comes from the top, because the board of directors pushes for AI to be used." "The problem is that companies cannot find people able to combine AI and business knowledge," he explained. The solution, in his view, is to set up competence centers to train the right professionals. "The second problem," he continued, "is that AI also arrives from below. Employees realize it can accelerate their careers and, even if companies block access to ChatGPT, workers carry it on their phones."

The third problem, he said, is hallucinations, which he put at between 10% and 30% of model outputs, although in his opinion agentic AI will be able to address this. The expert also pointed to another major risk of standing up pilot projects: that the person who set them up leaves the organization. "The biggest risk lies in guaranteeing continuity of service," he stressed.


Emilio González, head of systems at the Ayuntamiento de Alcorcón, warned of the dangers of 'shadow AI', the unauthorized use of artificial intelligence tools inside an organization without IT department oversight. "We have to train people so they don't upload confidential documents to AI," he said. "With AI we are super civil servants, but data protection remains the main challenge," said González, who also flagged another challenge: how quickly AI investments become obsolete.

Raquel Pardiñas, IT supply and vendor manager at Atradius Crédito y Caución, also spoke about data protection, noting that although generative AI saves time on administrative tasks, its users often do not know whether they are breaching data protection law. "It is the main challenge," she emphasized.

Ana Arredondo Macua, CIO of the Oficina Española de Patentes y Marcas, described the situation facing an institution like hers. "A patent examiner can take 18 months to grant a patent. AI is very useful for cutting that time, but the examiner is afraid of losing their job, of no longer being relevant," she said. She also spoke about the need for interoperability, which is key for an institution that shares information and patents with other countries. "The date of a patent is crucial, so exchanging information is essential," she said, noting that AI provides an information base for working more efficiently.

Fernando Muñoz, director of CIO Executive at Foundry in Spain (photo: Garpress | Foundry)

A value proposition, not a risk proposition

On all of these challenges, Julio Sánchez Miranda, who leads the Red Hat practice at Accenture for EMEA, was emphatic. "Artificial intelligence has to be framed around value, not risk. That will change the narrative," he said. "You have to set the objective clearly, to know what you want. It is a nascent technology that you never fully understand unless you try it and test it," added Mar Santos, corporate sales director at Red Hat.

Julio Sánchez also introduced one of the main topics of the debate: digital sovereignty. On this point, he recalled that 80% of working generative AI models are American, 15% Chinese and only 5% European. "The concept of sovereignty is control, and today it is a geopolitical issue," he stressed. In his view, organizations must be clear about the contractual terms under which AI is used and know which companies guarantee data protection, and how. "If you use your customers' personal data on platforms like ChatGPT, you can have a problem," he warned.

Nilley Gómez Rodíguez, Data & AI lead at Reale Seguros, spoke along the same lines, explaining how her company has cut off access to ChatGPT in order to ensure regulatory compliance through its own generative AI model. "We created a program that can be used for the business, offering training to users," she said, underlining that the main challenge in the coming months will be the widespread use of agentic AI. "For the user it has to be transparent," she asserted.

Mar Santos, corporate sales director at Red Hat (photo: Garpress | Foundry)

From the Universidad Complutense de Madrid, José Arbues Bedia, director of the Centro de Inteligencia Institucional, explained how his institution works on three fronts, using AI for teaching, administration and data analysis. Even so, he voiced his distrust of a model that, in his view, is not yet good enough on its own. "Generative AI works with literal models and tells a lot of lies," he said.

"I am working on not working, but it is still too early for that," he said, stressing that "the only lever for change in public administration is transparency." In his opinion, it is too risky for the university to run its own AI. "With SAP I can guarantee results, but not with artificial intelligence," he said.

For Carlos Maza, director of digitalization and information technology at the Tribunal de Cuentas, AI projects should be paid for by human resources and training departments. "Now is the time to get to know it and to learn," he said, noting first that AI "is coming in without being bought" through the many tools that bundle AI services into their subscriptions.

Digital sovereignty: Europe's role

The attendees also analyzed Europe's place in a landscape where digital sovereignty has become a political priority. "Working out where data resides may sound like a comical question, but it is not for the law that governs us," said Carlos Maza, who noted that for an institution like the Tribunal de Cuentas, which handles third-party data, confidentiality is key to its operation.

"Europe is slow on AI. The approach may be the right one, but it is not competitive, and without resources we are not at the level of other regions," said Emilio González. For Manuel Tarrasa, the main problems lie in speed and scale, which widen the gap day by day. Pierre Pita, IT sales director at Atradius Crédito y Caución, pointed to one course of action: "DORA legislation forced us to do a lot of work, but it gives us an extra layer of security," he said.

Julio Sánchez Miranda, who leads the Red Hat practice at Accenture for EMEA (photo: Garpress | Foundry)

Julio Sánchez explained that Red Hat works on 'confidential computing', a technology that protects data in use by processing it inside secure environments. "It matters to govern models in controlled environments," he emphasized. Along these lines, he cited a study according to which at least a third of AI workloads should run in sovereign environments to deliver value to the organization.

"Looking ahead," he concluded, "we are betting on creating small, customizable vertical models that help optimize infrastructure and reduce costs." "AI is here to stay. It is a challenge for everyone, but Red Hat is on the right path to help organizations use AI with risks and costs under control," added Mar Santos.

How CIOs can win tech investments from CFOs and boards

19 December 2025 at 05:10

When I transitioned from the CFO's desk to being a leadership coach and a director on corporate boards, I observed a truth: securing approval for technology investment isn't just an IT conversation; it's a business conversation.

It's about trust, alignment, language and, most importantly, shared purpose. For every CIO reading this article, note that the money you seek is not simply a line item in a budget; it is the future your business is seeking to build. Getting it backed by your finance team and endorsed by the Board will enable smooth implementation for you as the CIO. Here's how you can go about it.

Understand your CFO’s perspective

Early in my corporate career, I remember a tech leader walking into my office with a slide deck full of diagrams and acronyms. I didn’t reject the idea because it lacked merit, but because I couldn’t see the business outcome.

As CFO, I cared about three things: return on investment, risk management and cash impact. If a technology proposal didn't speak those languages, it would struggle for approval.

Research shows that companies where the CIO-CFO relationship is strong are far more likely to secure digital funding.

Action: Before submitting your request, ask how the project contributes to margin, growth, cost avoidance or productivity. Include a cost-benefit analysis if required. Let the financial pulse of the company sit at the centre of your case.

Align the tech initiative with business strategy

As a Board director, I often ask: "How does this tech initiative support our strategic objectives?" Whether the objective is entering a new market, improving customer experience or managing the cost base, if the technology doesn't map to one of them, the Board will push back.

As highlighted in industry research, the shift from IT function to enterprise strategy means the CIO and CFO must operate almost as co-pilots on the business growth journey.

Action: Create a clear line of sight between the initiative and your company's strategic growth plan. Spell it out with phrases such as "supports strategic priority A," "enables 10% faster time-to-market" or "reduces cost by X%."

Build a compelling business case

In one of my Board roles, I recall a tech leader who had a clear ask. He presented the Board with a clear timeline, an ROI figure and a payback period, and the investment was approved. I've since coached CIOs to think in those terms: the ask must be framed in financial language, such as total cost of ownership (TCO), internal rate of return (IRR) and payback period, not just technical merit.

According to TechTarget, CIOs working under CFO oversight need business cases that translate into straightforward financial justification.

Action: For each major cost you identify, show the offsetting benefit (reduced cost, new revenue, risk mitigation, etc.). Include scenario modelling (base case, optimistic case). Commit to tracking outcomes. If required, educate Board members on the impact of the proposed tech initiative.
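
To make this concrete, here is a minimal sketch, in Python, of the kind of back-of-the-envelope model that could sit behind such a business case: payback period, net present value and a rough internal rate of return for a base case and an optimistic case. The figures, scenario names and the 10% discount rate are illustrative assumptions, not anything prescribed above.

```python
# Illustrative only: toy financial framing of a tech ask (assumed figures).

def payback_period(initial_cost, annual_benefits):
    """Years until cumulative benefits cover the initial cost (None if never)."""
    cumulative = 0.0
    for year, benefit in enumerate(annual_benefits, start=1):
        cumulative += benefit
        if cumulative >= initial_cost:
            return year
    return None

def npv(rate, initial_cost, annual_benefits):
    """Net present value of the investment at a given discount rate."""
    return -initial_cost + sum(
        benefit / (1 + rate) ** year
        for year, benefit in enumerate(annual_benefits, start=1)
    )

def irr(initial_cost, annual_benefits, low=0.0, high=1.0, tol=1e-6):
    """Internal rate of return by bisection (assumes NPV crosses zero in [low, high])."""
    for _ in range(200):
        mid = (low + high) / 2
        if npv(mid, initial_cost, annual_benefits) > 0:
            low = mid
        else:
            high = mid
        if high - low < tol:
            break
    return (low + high) / 2

# Scenario modelling: base case vs optimistic case (assumed numbers).
scenarios = {
    "base case": {"cost": 1_200_000, "benefits": [300_000, 450_000, 500_000, 500_000]},
    "optimistic": {"cost": 1_200_000, "benefits": [400_000, 550_000, 600_000, 600_000]},
}

for name, s in scenarios.items():
    print(
        f"{name}: payback = {payback_period(s['cost'], s['benefits'])} years, "
        f"NPV@10% = {npv(0.10, s['cost'], s['benefits']):,.0f}, "
        f"IRR ~ {irr(s['cost'], s['benefits']):.1%}"
    )
```

Even a rough model like this signals to the CFO that the ask has already been framed in finance's own terms.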

Address risk mitigation and compliance

During my tenure as CFO, I learned early that even the most promising tech initiative could stall if the risk side was invisible. Whether it’s regulatory exposure, cybersecurity vulnerability, legacy-system complexity or integration failures, the Board and finance leadership want to see that you’ve “thought about what can go wrong” as much as “what good will come.”

I recall receiving an email impersonating my CEO, seeking an urgent transfer of funds while he was away on vacation. I realized it was a phishing attempt and had the tech team trace its source and close off the risk of a repeat. Remember that the cost of non-compliance is always higher than the cost of compliance. Better safe than sorry.

A recent study emphasises that IT investments require consistent governance and value measurement to assure the CFO that the initiative isn’t a black box of cost and uncertainty.

Action: In your presentation or business case, include a dedicated section titled “Risk & Mitigation.” Outline the major threats (for example: vendor lock-in, data quality gaps, regulatory change, legacy compatibility, etc.) and map each to a control or plan. Explain how you’ll manage governance, pilot phase, KPIs, etc. Demonstrating governance and transparency converts technology ambition into a credible business investment.

Communicate in a language they understand

I've seen tech leaders lose the CFO or Board audience by using IT jargon. Remember, they are not IT experts and may not be familiar with IT terminology. One Board member summarized it well: "Tell me what it does for the business, not just how you'll build it."

As one guide on executive-level selling points out, "Selling to the C-suite requires shifting from tactical to strategic language, focusing on business outcomes, not features."

Action: Practice your presentation with a finance colleague. Replace geek-speak with business outcomes. Use visuals that show the benefits and outcomes of the technology. Speak about growth, speed, risk and cost, not just features.

Use real-life examples and success stories

In my Board role, when I heard a CIO reference successful deployments at peer companies, it built confidence and momentum, because people believe success stories more than bare assertions. Polish your communication and storytelling skills.

In the broader research, it’s clear that resolving conflicts between CIOs and CFOs often comes through demonstrating tangible results and building trust incrementally.

Action: Include a "Proof points" section in your proposal or request. Show internal wins, even small ones, or external industry benchmarks. Share data such as "reduced downtime from X to Y" or "pilot delivered a 5% cost saving," then link it to the larger roll-out.

Foster a collaborative approach

My transition from CFO to independent director reinforced one thing: tech leaders who view the CFO and Board as adversaries lose before they begin. The high-performing partnerships I saw treated the CFO as a co-owner of the strategy, not an obstacle.

Gartner found that when the CIO and CFO collaborate closely, organizations are much more likely to secure funding and meet business outcomes.

Action: Set up a joint governance forum (CIO + CFO + business leaders). Invite and involve the CFO in your tech roadmap discussions. Ask how you can help them with financial visibility, or which metrics they are tracking. Frame yourself as an ally, not just a spender, to win buy-in from your CFO.

Prepare for the future

Having served as a CFO and now as a Board director, I can say one thing is crystal clear: the investments you bring to the table today can either lock you into yesterday's world or position your organization for tomorrow's. As a CIO, when you come to finance or the Board with a technology ask, it's not enough to show "what this project solves now." You must show "where this project leads us" and point to a destination of capability, scalability and strategic advantage.

A recent survey by Boston Consulting Group found that many technology investments stall not because they lack potential, but because they don’t integrate into a longer-term roadmap or sequence of value. In other words, you need to articulate how this investment serves today and primes the business for the next wave of change.

Whether it’s AI, cloud scalability, ESG reporting or digital ecosystems, your ask should reflect readiness for tomorrow. Are we future-ready? What actions will make us future-ready? How can we educate our people and integrate AI into our business?

Action: In your proposal, outline what each phase will deliver. For example:

  • Phase 1: Deliver core outcome
  • Phase 2: Scale/expand
  • Phase 3: Future-mode

Show how this investment creates optionality: your business is not just solving today's problem but positioning for the next wave. By mapping out the journey, you reassure the CFO and the Board that you're not just spending; you're investing. You're not just solving a problem; you're building a capability that serves the future. That's how technology funding shifts from one-off to strategic.

Bringing it all together

If you came to this article thinking in terms of a transaction, "I need funding for project X," shift your mindset to a partnership: "Here's how, together, we will drive business growth, manage risk and position the company for the future."

I’ve been on the finance side. I’ve sat in the boardroom listening to the voices that approve or veto. I know what makes them lean in and what makes them pause. For you, the CIO, this isn’t just about technology. It’s about business momentum, credibility, trust and shared language.

So before you walk into that boardroom or finance review, rehearse your message for the CFO and Board audience. Use business language. Embed financial metrics. Build governance and risk clarity. Show you're working with them, not against them. And finally, tell a story, one that begins with the business opportunity, weaves through the tech solution and ends with measurable value.

That’s when technology funding moves from permission to partnership. And that’s when you, as CIO, step into the role of a true strategic business enabler.

This article is published as part of the Foundry Expert Contributor Network.
