
2026 Enterprise Predictions: Expect New Heights for Vibe Coding and Retaining Tribal Knowledge

13 January 2026 at 08:02
C. Dunlap
Research Director

Summary Bullets:

• Agentic AI will help document dwindling tribal knowledge

• Vibe coding will become mainstream

The industry should expect much more formalization of vibe coding capabilities, leading to greater opportunities for non-coders across enterprise business units. Interesting applications resulting from agentic AI will help enterprises solve age-old problems such as the loss of institutional knowledge among an aging workforce. Plus, traditional automation platforms will get a major boost from agentic AI advancements. These are some of the 2026 GlobalData Predictions in the agile automation category.

Agentic AI solutions address extinction of institutional knowledge

GlobalData predicts that in 2026, platform providers will discover new opportunities to apply their range of app development and agentic AI services toward addressing the growing loss of tribal knowledge (i.e., valuable expertise developed over many years of real-world experience). GlobalData anticipates several new comprehensive solutions that provide enterprise customers with training and collaboration mechanisms to counter this growing extinction of institutional knowledge. Specifically, across various business units, people's knowledge could be captured through the conversational aspects of GenAI and stored in systems that leverage AI to train the next generation of workers. Agents would then provide additional instructions relevant to a particular job role.

Vibe coding will become mainstream

In 2026, business users will gain greater access to mainstream quick-start app development functionality as part of their business tools. Next-generation UIs will bring with them new business opportunities for non-coders to create and even market attractive applications that go beyond traditional ones. New vibe coding features will present users with more specific and timely apps that improve their day-to-day lives and work productivity. Examples of apps built from vibe coding include business tools, productivity tools, and educational tools. Popular vibe coding platforms, especially for beginners, include ChatGPT, Claude, Lovable, and Zapier Agents; Bolt is popular among developers, as are Cursor and Windsurf. Expect deeper integration of these innovations in traditional business apps.

IPA leaders to disrupt rivals via agentic process automation (APA)

GlobalData predicts that in 2026, intelligent automation leaders will release next-generation agentic AI solutions, or APA, building on evolving GenAI-injected workflow platforms. The APA solutions will disrupt the current state of the agentic AI market through capabilities that include agentic integration, agent builder tools, agent orchestration, and AI gateways to ensure governance guardrails during LLM interactions. APA represents a shift from RPA-based deterministic, predictable, and rules-based workflows to dynamic business processes that are autonomously supported and focused on business outcomes, with the ability to reason and adapt by leveraging the intelligence of LLMs.

For more on agile automation predictions for 2026, please see 2026 Enterprise Predictions: Agile Automation.

I’ve Got a Lot of Problems With You People

30 December 2025 at 16:59
S. Schuchart

Summary Bullets:

  • The technology industry can do better.
  • Let’s just hope that the uneasy feeling about the AI bubble everyone is experiencing is just a bit of leftover holiday undigested beef, a blot of mustard, a crumb of cheese, or a fragment of an underdone potato (with apologies to Dickens).

Festivus took place on December 23, 2025, but despite being late, there are grievances to air in regard to the technology industry as it relates to enterprises in 2025. So, let’s start. “I’ve got a lot of problems with you people!”

Artificial Intelligence
Let’s start with the biggest and likely the most diaphanous super-elephant in the room, the current AI boom/craze/bubble. Everyone in the technology industry has been affected by AI. Talk about AI is ubiquitous. Every technology product or service launch prominently mentions AI. Of course, the talk is two-fold. First, it’s about how much money everyone is going to make with AI. Second is the talk about how much money everyone is spending on AI, including all the letters of intent, acquisitions, stock trades, and projected spending.

Any dissent is quickly quashed – AI is the future. AI is all. AI will make everyone so much money. How? Well, of course, there is talk about AI coding, automation, agentic AI, and reducing staff headcount with AI. That last bit isn’t often talked about on the technology side – but it sure is in boardrooms and on Wall Street. The problem with these use cases is that nobody can seem to point to success, outside of a few highly orchestrated pilots. The grievance is that the enterprise technology industry isn’t being fair to its customers when it comes to AI. There is SO MUCH investment money involved in AI, and so many promises that technology vendors and service providers have a near-fatal case of FOMO, the fear of missing out.

It’s hard to see a path to profitability for AI companies, considering the amount they need to spend in order to make large language model (LLM) AI work. It’s equally hard to see a path for enterprises to the rich gains or savings they were promised – these use cases are either not working out or were specious in the first place.

Is the AI boom a bubble? I’m a technology guy, not a finance guy. But even as my hands itch for a soldering iron rather than the complexities of finance, it seems pretty clear that there is an alarming amount of money being poured into AI companies. These companies are not profitable. Nor does there seem to be a path to profitability, considering the staggering amounts of money invested. It is more than a little reminiscent of the dot-com bubble and the ensuing financial downturn. Oh, and to those who dismiss the dot-com bust with “well, it all worked out”? Clearly, you didn’t live through it or were lucky enough to be insulated from it, because there was real harm done. Businesses died, jobs were lost, careers were ruined, and everybody suffered from the recession the dot-com bust created. Let’s just hope that the uneasy feeling about the AI bubble everyone is experiencing is just a bit of leftover holiday undigested beef, a blot of mustard, a crumb of cheese, or a fragment of an underdone potato (with apologies to Dickens).

Customer-First
Every technology vendor wants to tell customers that they are the vendor’s first priority. Well, there is plenty of grievance to air on this point. More and more, the technology industry is valuing its financial gain over what is right for its customers. Subscription services are now the norm, and the party benefiting from them is rarely the customer. Mega mergers and acquisitions result in widespread reorientation of the acquired firm toward just its most profitable customers. What follows: new terms and conditions, increased prices, forced bundling sold as added value, long-standing product lines ended, longer support queues, and disruption for the customer. Sure, there are good use cases for subscription services and for acquisitions, but the needs of the customer are getting lost more and more.

This extends to changes to foundational software and services. New releases with radically different user interfaces that simply don’t have a better workflow than the previous ones, yet now have to be learned. Unasked-for features, including AI features, that cannot be turned off. Products unnecessarily forced to connect constantly across the internet to the vendor, with no way to shut that off. Critical management tools moved to the cloud without giving enterprises a way to self-host. Forced arbitration agreements that more often than not benefit only the vendor or service provider.

The technology industry can do better. It has in the past, and the technology industry needs to move back toward a system in which power is more equal and both parties have the goal of providing each other with a square deal.

This Industry Analyst
Of course, I’m not going to leave myself out of the grievances. There are more use cases for technology than anyone can keep in their head, and I’m no exception. I need to be more accepting and expansive about possible use cases and less quick to focus on the negative side. I need to help our customers more by refreshing the love of technology that got me here in the first place and letting that enthusiasm lend wings to my writing and humor to my demeanor. I need to ask more questions, probe deeper when speaking with enterprises, vendors, and service providers. I need to let the worry wane so I can see the sun again and regain the wonder I used to have for technology.

I wish you all a safe, happy, healthy, and prosperous 2026.

Slow Your Roll on AI

22 December 2025 at 15:19
S. Schuchart

AI has been the rage for at least three years now, first just generative AI (GenAI), and now agentic AI. AI can be pretty useful; at GlobalData we’ve done some very cool things with AI on our site. Strategic things that serve a defined purpose and add value. The use of AI at GlobalData hasn’t been indiscriminate – it has been thought through in terms of how it could help our customers and ourselves. Even this skeptical author can appreciate what’s been done.

But a lot of what is happening out there with AI is indiscriminate and doesn’t attack problems in a prescriptive way. Instead, it is sold as a panacea. A cure for all business and IT ills. The claims are always huge but strangely lacking in detail. This is particularly true for agentic AI, where the industry only in the last month managed to get MCP (the Model Context Protocol) into the Linux Foundation as a standard. The security issues of agentic AI are still largely unaddressed, and certainly not addressed in any standardized fashion. It’s not that agentic AI is a bad idea; it’s not. But the way it’s being sold has a tinge of irrational hysteria.

Sometimes when a vendor introduces a new capability that proudly uses agentic AI, it’s not clear why that capability being ‘agentic’ makes any difference compared with plain AI. New AI features are appearing everywhere, with vendors jamming AI into every nook and cranny, ignoring the privacy issues, and making it next to impossible to avoid or turn off. The worst part is that these AI features are often half-baked ideas implemented too quickly or, even worse, written by AI itself, with all of the security and code bloat issues that ensue.

The prevailing wind, no scratch that, the hurricane force gale in the IT industry is that AI is everything, AI must be everywhere, and *any* AI is good AI. Any product, service, solution, or announcement must spend at least half of its content on how this is AI and how AI is good.

AI *can* be a wonderful thing. But serious enterprise IT administrators, coders, and engineers know a few things:

1. In a new market like AI, not every company selling AI will continue to sell AI. There will be consolidation, especially in an overhyped trend. Vendors and products will disappear.
2. Version 1.0 hardly ever lives up to its billing. Even Windows wasn’t really minimally viable until 3.1.
3. Aligning IT/business value received vs. costs to implement/continue is a core component of the job.
4. The bigger the hype, the bigger the backlash.
5. The bigger the hype, the bigger the fear of missing out (FOMO) amongst senior management.
6. The problems are in the details, not in the overall concept.

So let’s all slow our roll when it comes to AI. More focus on what matters: what can *demonstrably* provide value vs. what is claimed to provide value. Implementation costs as well as one-, three-, and five-year costs. Risk assessment from a data privacy, cybersecurity, and regulation standpoint. In short, a little bit more due diligence and a lot less FOMO. AI is going to happen; that’s not the issue. The issue is for enterprises to implement AI where it will help, rather than viewing it as a panacea for all problems.

Is Liquid Cooling the Key Now that AI Pervades Everything?

30 September 2025 at 13:13
B. Valle

Summary Bullets:

• Data center cooling has become an increasingly difficult challenge because AI accelerators consume massive amounts of power.

• Liquid cooling adoption is progressively evolving from experimental to mainstream, starting with AI labs and hyperscalers, then moving into the colocation space, and later into enterprises.

As Generative AI (GenAI) takes an ever-stronger hold in our lives, the demands on data centers continue to grow. The heat generated by the high-density computing required to run AI applications that are more resource-intensive than ever is pushing companies to adopt ever more innovative cooling techniques. As a result, liquid cooling, which used to be a fairly experimental technique, is becoming more mainstream.

Eye-watering amounts of money continue to pour into data center investment to run AI workloads. Heat management has become top of mind due to the high rack densities deployed in data centers. GlobalData forecasts that AI revenue worldwide will reach $165 billion in 2025, marking annual growth of 26% over the previous year. The growth rate will accelerate to 34% from 2026 and remain high in subsequent years; in fact, the CAGR for the 2004-2025 period will reach 37%.
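For readers who want to sanity-check growth claims like these, the compound annual growth rate is a one-line formula. The sketch below is illustrative only: the $131 billion base is back-of-envelope (implied by 26% growth into $165 billion), not GlobalData's underlying model.

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: (end/start)^(1/years) - 1."""
    return (end_value / start_value) ** (1 / years) - 1

# Illustrative: growing to $165bn in 2025 at 26% implies a 2024 base of
# roughly 165 / 1.26 ~= $131bn. One year, so CAGR equals the growth rate.
print(round(cagr(131, 165, 1), 2))  # -> 0.26
```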


Source: GlobalData

The powerful hardware designed for AI workloads is growing in density. Although average rack densities are usually below 10 kW, it is feasible to imagine AI training clusters of 200 kW per rack in the not-too-distant future. Of course, the average number of kW per rack varies a lot depending on the application, with traditional IT workloads for mainstream business applications requiring far fewer kW per rack than frontier AI workloads.

Liquid cooling is a heat management technique that uses liquid to remove heat from computing components in data centers. Liquid has a much higher thermal conductivity than air as it can absorb and transfer heat more effectively. By bringing a liquid coolant into direct contact with heat-generating components like CPUs and GPUs, liquid cooling systems can remove heat at its source, maintaining stable operating temperatures.
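The heat-removal relationship behind liquid cooling can be sized with the basic calorimetry formula Q = m_dot * c_p * dT. The sketch below uses assumed illustrative figures (water coolant, a 10 K allowed temperature rise) applied to the 200 kW rack density mentioned above; real systems vary by coolant and design.

```python
# Rough coolant sizing sketch: mass flow needed to carry away rack heat.
SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*K), specific heat of liquid water

def coolant_flow_kg_per_s(heat_load_w: float, delta_t_k: float,
                          cp: float = SPECIFIC_HEAT_WATER) -> float:
    """Mass flow rate from Q = m_dot * c_p * dT, solved for m_dot."""
    return heat_load_w / (cp * delta_t_k)

# A hypothetical 200 kW rack with a 10 K coolant temperature rise:
print(round(coolant_flow_kg_per_s(200_000, 10), 2))  # -> 4.78 kg/s
```

The same load moved by air (specific heat roughly 1005 J/(kg*K)) would need about four times the mass flow, which is one intuition for why high-density racks push operators toward liquid.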

Although there are many types of liquid cooling techniques, direct-to-chip, also known as “cold plate,” is the most popular cooling method, accounting for approximately half of the liquid cooling market. This technique mounts a cold plate directly on the chip inside the server, and the direct contact enables efficient heat dissipation. This method allows high-end, specialized servers to be installed in standard IT cabinets, similar to legacy air-cooled equipment.

There are innovative variations on the cold plate technique currently under experimentation. Microsoft is prototyping a new method that takes the direct-to-chip technique one step further by bringing liquid coolant directly inside the silicon where the heat is generated. The method entails applying microfluidics via tiny channels etched into the silicon chip, creating grooves that allow cooling liquid to flow directly onto the chip and remove heat more efficiently.

Swiss startup Corintis is behind the novel technique, which blends the electronics and the heat management system that have been historically designed and made separately, creating unnecessary obstacles when heat has to propagate through multiple materials. Corintis created a design that blends the electronics and the cooling together from the beginning so the microchannels are right underneath the transistor.

Technology Leaders Can Leverage TBM to Play a More Strategic Role in Aligning Tech Spend with Business Values

19 September 2025 at 12:44
S. Soh

Summary Bullets:

  • Organizations are spending more on technology across business functions, and it is imperative for them to understand and optimize their tech spending through technology business management (TBM).
  • IBM is a key TBM vendor helping organizations to drive their IT strategy more effectively; it is making moves to extend the solution to more customers and partners.

Every company is a tech company. While this is a cliché, especially in the tech industry, it is becoming real in the era of data and AI. For some time, businesses have been gathering data and analyzing it for insights to improve processes and develop new business models. By feeding data into AI engines, enterprises accelerate transformation by automating processes and reducing human intervention. The result is less friction in customer engagement, more agile operations, smarter decision-making, and faster time to market. This is, at least on paper, the promise of AI.

However, enterprises face challenges as they modernize their tech stack, adopt more digital solutions, and move AI from trials to production. Visibility into tech spending, and the ability to forecast costs, especially with many services consumed on a pay-as-you-go basis, is a challenge. While FinOps addresses cloud spend, a more holistic view of technology spend is necessary, including legacy on-premises systems, GenAI costs (pricing is typically token-based), and labor-related costs.
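Token-based pricing is one reason GenAI costs are hard to forecast: spend scales with request volume and token counts rather than a fixed license fee. A minimal cost-model sketch, with entirely hypothetical per-token rates (real vendor pricing differs and changes often):

```python
# Assumed illustrative rates, NOT any vendor's actual price list.
PRICE_PER_1K_INPUT = 0.0025   # USD per 1,000 input tokens
PRICE_PER_1K_OUTPUT = 0.01    # USD per 1,000 output tokens

def monthly_genai_cost(requests: int, in_tokens: int, out_tokens: int) -> float:
    """Estimate monthly spend for a token-priced GenAI service."""
    per_request = (in_tokens / 1000) * PRICE_PER_1K_INPUT \
                + (out_tokens / 1000) * PRICE_PER_1K_OUTPUT
    return requests * per_request

# 1M requests/month, averaging 500 input and 200 output tokens each:
print(round(monthly_genai_cost(1_000_000, 500, 200), 2))  # -> 3250.0
```

Even a toy model like this makes the TBM point: output tokens often cost several times input tokens, so prompt and response length become budget variables, not just engineering details.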

This has made the concept of TBM more crucial today than ever. TBM is a discipline that focuses on enhancing business outcomes by providing organizations with a systematic approach to translating technology investments into business value. It brings financial discipline and transparency to IT expenditures with the aim of maximizing the contribution of technology to overall business success. Technology is now widely used across business functions such as enterprise resource planning (ERP) for finance, human capital management (HCM) for HR, customer relationship management (CRM) for sales, and supply chain management (SCM) for operations. Based on GlobalData’s research, about half of tech spend today already comes from budgets outside the IT department. TBM is becoming more crucial as the use of technology grows even more pervasive across the organization, especially with AI being embedded into workflows. Moreover, TBM capabilities also help elevate tech leaders within an organization into strategic business partners.

IBM is one of the vendors that offer a comprehensive set of solutions to support TBM, in part enabled by acquisitions such as Apptio (which had itself acquired Cloudability and Targetprocess) and Kubecost. Cloudability underpins IBM’s FinOps and cloud cost management, a key component that is already seeing great demand due to the need to optimize cloud workloads and spend as companies continue to expand their cloud usage. Apptio offers IT financial management (ITFM), which helps enterprises gain visibility into their tech spend (including SaaS, cloud, on-premises systems, labor, etc.) as well as usage and performance by app or team. This enables real-time decision-making, facilitates the assessment of IT investments against KPIs, makes it possible to shift IT budget from keeping the lights on to innovation, and supports showback/chargeback to promote fairness and efficient usage of resources. With Targetprocess, IBM also has a strategic portfolio management (SPM) solution that helps organizations plan, track, and prioritize work from the strategic portfolio of projects and products down to the software development team. The ability to track work delivered by teams and determine the cost per unit of work allows organizations to improve time-to-market and align talent spend to strategic priorities.

Besides IBM, ServiceNow’s SPM helps organizations make better decisions about which initiatives to pursue based on resources, people, budgets, etc. ServiceWare is another firm that offers cloud cost management, ITFM, and a digital value model for TBM. Other FinOps and ITSM vendors may also join the fray as market awareness grows.

Moreover, TBM should not be a practice reserved for the largest enterprises; its relevance depends rather on the level of tech spending involved. While IBM/Apptio serves many enterprises (e.g., 60% of Global Fortune 100 companies) with tech spend well over $100 million, other vendors (e.g., MagicOrange and Nicus) have more cost-effective solutions targeting mid-sized enterprises. IBM is now addressing this customer segment with the streamlined IBM Apptio Essentials suite, announced in June 2025, which offers the fundamental building blocks of an ITFM practice that can be implemented quickly and more cost-effectively. Based on GlobalData’s ICT Client Prospector database, in the US alone there are over 5,000 businesses with total spend exceeding $25 million, which expands the addressable market for IBM.

For service providers, TBM is also a powerful avenue for deeper engagement with enterprises, delivering a solution that drives tangible business outcomes. Personas interested in TBM include CIOs, CFOs, and CTOs. While TBM tools and dashboards are readily available, service providers can play a role in managing the stakeholders and designing the processes. Through working with multiple enterprise customers, service providers are also building experience and best practices that help deliver value faster and avoid potential pitfalls. Service providers such as Deloitte and Wipro already offer TBM to enterprise customers. Others should also consider working with TBM vendors to develop a similar practice.

IBM Think on Tour Singapore 2025: An Agentic Enterprise Comes Down to Tech, Infrastructure, Orchestration, and Optionality

28 August 2025 at 17:30
D. Kehoe

Summary Bullets:

• Cloud will have a role in the AI journey, but it is no longer the destination. The world will be hybrid and multi-vendor.

• Agentic AI manifests from this new platform but will be a double-edged sword. Autonomy is proportionate to risk. Any solution that goes to production needs governance.

The AI triathlon is underway. A year ago, the race was about the size of the GenAI large language model (LLM). Today, it is about the number of AI agents connecting to internal systems to automate workflows, moving toward the overall level of preparedness for the agentic enterprise. The latter is about giving much higher levels of autonomy to AI agents to set their own goals, self-learn, and make decisions that impact customers (e.g., approving home loans, dispute resolution, etc.), possibly while managing other agents from other vendors. This, in turn, influences NPS, C-SAT, customer advocacy, compliance, and countless other metrics. It also raises many legitimate legal, ethical, and regulatory concerns.

Blending Tech with Flexible Architectures

While AI in many of its current forms is nascent, getting things right often starts with placing the right bets. The IBM vision, as articulated, aligns tightly to the trends on the ground: broadly, automation, AI, hybrid and multi-cloud environments, and data. Not every customer will take the same flight path, but multiple options are key in the era of disaggregation.

In February 2025, IBM completed its acquisition of HashiCorp, a company that foresaw public cloud and on-prem integration challenges early and invested in developer tools, automation, and infrastructure as code. Contextualized to today’s language models, enterprises will continue to have different needs. While public cloud will likely be the ideal environment for model training, inferencing or fine-tuning may sit better at the edge. Hybrid is the way, and automation is the glue. GlobalData CXO research shows that AI is accelerating edge infrastructure, not cloud, with many considerations such as performance, security, compliance, and cost causing the pendulum to swing back.

Watsonx Orchestrate

The acquisition of Red Hat six years ago helped solidify the open-source approach in IBM’s DNA, which is even more relevant for AI now. Openness also translates to middleware, and one of the standouts of the event is the ‘headless architecture’ with watsonx. Decoupling the UI/UX at the frontend from the backend databases and business logic puts the focus less on the number of agents and more on how well autonomous tasks and actions are synchronized in a multi-vendor environment. Traditional vendors have a rich history of integration challenges. An open platform approach working across many of the established application environments and other frameworks is the most viable option. In this context, IBM shared examples of working with a global SaaS provider using watsonx to support its own global orchestration roll-out, of direct selling to MNCs with a large install base of competing solutions, and of other scenarios involving partners who bring their own agents. IBM likely wants to be seen as having the most open platform, rather than the best technology in a tightly coupled stack.

The Opportunity

Agentic AI’s great potential is a double-edged sword. Autonomy is proportionate to risk, and risk can only be managed with governance. This can include guardrails (e.g., ethics) and process controls (e.g., explainability, monitoring and observability, etc.). Employees will need varying levels of accountability and oversight too. While IBM is a technology company with its own products and infrastructure, it also has its own consulting arm with 160,000 global staff. Most competitors will lean towards a partner-led approach; for IBM, both options are on the table. This is important for balancing risk with technology evolution. Still, very few AI proofs of concept ever make it to production, and great concepts will require the extra consulting muscle, especially through multi-disciplinary teams, to show business value. Claims of internal capability need to walk a tightrope with vendor agnosticism to keep both camps motivated and the markets confident.

GPT-5 Has Had a Rocky Start but Remains an Extraordinary Achievement

15 August 2025 at 12:05
B. Valle

Summary Bullets:

  • OpenAI released GPT-5 on August 7, 2025, a multimodal large language model (LLM) with agentic capabilities.
  • This is the latest iteration of the famous chatbot, and the most important upgrade since the release of the previous generation, GPT-4, in 2023.

As happens sometimes when a product is thrust with such force into the realm of popular culture, the release of GPT-5 sparked a veritable PR crisis, leading CEO Sam Altman to make a public apology and backtrack on the decision to remove access to all previous AI models in ChatGPT. Unlike enterprise customers, which received advance warning of such moves, consumer ChatGPT users did not know their preferred models would disappear so suddenly. The ensuing kerfuffle highlighted the strange co-dependent relationship that some people have developed with the technology, creating no end of background noise surrounding this momentous release.

In truth, OpenAI handled this launch rather clumsily. But GPT-5 remains an extraordinary achievement, in terms of writing, research, analysis, coding, and problem-solving capabilities. The bête noire of generative AI (GenAI), hallucination, has been addressed (to a limited degree, of course), and GPT-5 is significantly less likely to hallucinate than previous generations, according to OpenAI. With web search enabled on anonymized prompts representative of ChatGPT production traffic, GPT-5’s responses are around 45% less likely to contain a factual error than GPT-4o. The startup claims that across several benchmarks, GPT-5 shows a sharp drop in hallucinations, about six times fewer than o3.

However, safety remains a concern. OpenAI has a patchy record in this area: Altman famously lobbied against California Senate Bill 1047 (SB 1047), which aimed to hold AI developers liable for catastrophic harm caused by their models if appropriate safety measures weren’t taken. In 2024, members of OpenAI’s safety team quit after voicing concerns about the company’s record in this area.

Meanwhile, there has been talk in industry circles and trade media outlets of artificial general intelligence (AGI) and GPT-5’s position in this regard. However, the AI landscape remains so dynamic that this is missing the point. Google’s announcement on August 5, 2025 (in limited research preview) of Google DeepMind’s Genie 3 frontier world models, which help users train AI agents in simulation environments, positions the company against AI behemoth Nvidia in the realm of world AI. World AI in this context means technologies that integrate so-called “world models,” i.e., simulations of how the world works from a physics, causality, or behavior perspective. It could be argued that this is where true AGI resides: in real-world representations and in the trenches of the simulation realm.

On the other hand, Google’s latest salvo in the enterprise space has involved a fierce onslaught of partnerships, with several deals announced in the last 48 hours. Oracle will sell Google Gemini models via Oracle’s cloud computing services and business applications through Google’s developer platform Vertex AI, an important step to extend Google’s reach in corporate accounts. With Wipro, Google Cloud is going to launch 200 production-ready agentic AI solutions in different verticals, accessible via Google Cloud Marketplace. And with NTT Data, Google is launching industry-specific cloud and AI solutions, with joint go-to-market investments to support this important launch.

The AI market is advancing at rapid speed, including applications of agentic AI in enterprise environments. This includes a variety of AI-driven applications and platforms that are transforming business processes and interactions. The release of GPT-5 is simply another tool in this direction.

The Season of Agentic AI Brings Bold Promises

31 July 2025 at 16:59
C. Dunlap
Research Director

Summary Bullets:

  • Spring/summer platform conferences led with AI agent news and strategies
  • AI agents represent the leading innovation of app modernization, but DevOps should be wary of over-promising

During this season of cloud platform conferences, rivals are vying to own the headlines and do battle in the cloud wars through their latest campaigns and strategies involving AI agents.

2024’s spring/summer conferences led with GenAI innovations; 2025’s led with agentic AI. AI assistants and copilots have transformed into tools used to create customized agents, unleashing claims of new capabilities for streamlining integrations with workflows, speeding the application development lifecycle, and supporting multi-agent orchestration and management. Vendors are making bold promises based on agentic AI’s ability to eliminate a multitude of tasks currently performed by humans and take workflow automation to new heights.

AI agents, which can autonomously complete tasks on behalf of users leveraging data from sources external to the AI model, are accelerating the transition towards a more disruptive phase of GenAI. Enhanced memory capabilities enable the AI agents to develop a greater sense of context, including the capacity for “planning.” Agents can connect to other systems through APIs, taking actions rather than just returning information or generating content.
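The plan-then-act loop described above can be sketched in a few lines. Everything here is a hypothetical placeholder (the tool name, the canned LLM response, the ticketing "API"): the point is only the pattern of an LLM choosing a tool and the agent executing it against an external system, rather than just returning text.

```python
import json

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a JSON 'tool call' decision."""
    return json.dumps({"tool": "create_ticket",
                       "args": {"title": "Renew cert"}})

# Registry of actions the agent may take via external APIs (placeholder).
TOOLS = {
    "create_ticket": lambda args: f"ticket created: {args['title']}",
}

def run_agent(task: str) -> str:
    decision = json.loads(call_llm(task))           # plan: model picks a tool
    handler = TOOLS[decision["tool"]]               # look up the action
    return handler(decision["args"])                # act through the API

print(run_agent("Our TLS certificate expires next week."))
# -> ticket created: Renew cert
```

Real frameworks add the memory and planning layers mentioned above, plus loops over multiple tool calls; this single-step version just isolates the "take actions, not just answers" distinction.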

Recap of the latest AI agent events:

  • Amazon announced Bedrock AgentCore, a set of DevOps tools and services to help developers design custom applications while easing the deployment and operation of enterprise-grade AI agents. The tools are complemented with new observability features found in AWS CloudWatch.
  • Joining the Google Gemini family of products, including Gemini 2.5 and Pro, Vertex AI Agent, ADK, and Agentspace, is Google Veo 3, a GenAI model providing more accessibility to high quality video production.
  • OpenAI released ChatGPT agent, an AI system with agentic capabilities that can operate a computer, browse the web, write code, use a terminal, write reports, create images, edit spreadsheets, and create slides for users.
  • Anthropic released Claude Code, which uses agentic search to understand an entire codebase without manual context selection and is optimized for code understanding and generation with Claude Opus 4.
  • IBM announced watsonx Orchestrate AI Agent, a suite of agent capabilities that include development tools to build agents on any framework, pre-built agents, and integration with platform partners including Oracle, AWS, Microsoft, and Salesforce.

Cloud platform providers are strategically highlighting their most salient strengths, ranging from the breadth of their cloud stack offerings to mature serverless computing solutions to access to massive developer communities via popular Copilot tools and marketplaces. Yet all are focused on gaining mind share amid heated campaigns not only from traditional platform rivals, but also from an increasingly crowded ecosystem of new platform and digital services providers (in the form of infrastructure providers) vying to catch the enterprise developer’s attention.

Recent vendor announcements aim to strike a chord with over-taxed enterprise IT operations teams, with claims of easing the operational provisioning complexities involved in moving modern apps into production. Use cases supporting these claims remain scarce, and details substantiating new streamlined, low-code methods, particularly around AI agent orchestration, are still vague in some cases. Enterprises should remain vigilant in seeking out technology partners with a deep understanding of an evolving technology that comes with a lot of promises.

Carriers Grow Traffic Significantly While Also Delivering Energy Efficiency

10 July 2025 at 12:25
R. Pritchard

Summary Bullets:

  • Comcast has nearly doubled the energy efficiency of its network ahead of its 2030 target while also carrying 76% more data.
  • Other examples of greater energy efficiency through new technology include BT Global Fabric, where the replacement of legacy platforms will see a 79% energy consumption reduction.

Comcast announced that it is near to reaching its goal of doubling its network energy efficiency ahead of its 2030 target, stating that it is “delivering dramatically more data at faster speeds and greater reliability at the highest quality for our customers, all while conserving the amount of energy needed to power our network.”

Comcast reported that it achieved an 11% reduction in energy usage between 2019 and 2024 while carrying 76% more traffic over the same period, as all customer segments use their connections for applications and services needing higher bandwidths, ranging from streaming video to unified communications. As a result, the energy savings combined with network growth have delivered a 49% reduction in electricity per consumer byte since 2019 (from 18.4 kWh [kilowatt hours] per terabyte to 9.3 kWh in 2024). Like many others, Comcast has noted both the increase in data traffic driven by the artificial intelligence (AI) revolution and AI’s potential to optimize network performance through enhanced monitoring and network diagnostics.
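The reported figures are internally consistent, as a quick arithmetic check shows (all input numbers are taken from the article; only the rounding is mine):

```python
# Sanity check of the Comcast energy figures reported above.
kwh_per_tb_2019 = 18.4
kwh_per_tb_2024 = 9.3

reduction = 1 - kwh_per_tb_2024 / kwh_per_tb_2019
print(f"{reduction:.0%}")  # 49% less electricity per byte delivered

# The same result follows from the two headline numbers:
# 11% less total energy while carrying 76% more traffic.
energy_2024 = 1 - 0.11    # 2024 energy relative to 2019
traffic_2024 = 1 + 0.76   # 2024 traffic relative to 2019
print(f"{1 - energy_2024 / traffic_2024:.0%}")  # also 49%
```

Both routes land on the same ~49% per-byte figure, which is why the headline efficiency claim holds up despite traffic growing faster than energy fell.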

The other trend driving improved sustainability and efficiency in networks is the latest generation of equipment, with decommissioned legacy technology having been far less efficient. GlobalData analysis has found that replacing copper lines with fiber can be up to 85% more efficient, and power-saving measures using AI can lead to energy savings of up to 40%.

Another notable example is BT’s move to the BT Global Fabric Network-as-a-Service (NaaS) platform, which replaces multiple previous technology platforms and will result in a 79% energy consumption reduction. These technology developments and evolutions are all helping to keep telecoms service providers – national and international – in the vanguard of reducing greenhouse gas (GHG) emissions. Given recent flash floods in Texas (US) and wildfires across Europe and Canada, alongside further destructive climate change impacts on society and nature, these examples of progress should be celebrated and encouraged.

Advancing AI 2025 Event: AMD Heeds the AI Opportunity

30 June 2025 at 14:26
B. Valle

Summary Bullets:

• AMD’s “Advancing AI 2025” event, held in San Jose, California (US) in June 2025, helped analysts delve deeper into the company’s strategy for the next few years.

• The chip designer aims to build a fully open ecosystem and stack, supported by a string of acquisitions, including Silo AI and Brium.

AMD has continued to execute on an annual roadmap cadence since it launched the AMD Instinct MI300 GPUs in late 2023. The launch of the AMD Instinct MI350 series, with a fourfold jump in performance over the previous generation, was a highlight of the conference. As AI agents proliferate, compute requirements will grow, driving exponential demand for infrastructure. AMD also focused on its software roadmap and highlighted the importance of an open ecosystem, something the company has invested in through acquisitions.

The chip designer announced the launch of the AMD Instinct MI350 series GPUs, the fourth generation within the AMD Instinct family, and forthcoming rack servers based on these chips, slated for availability in late 2025. The company will also unveil the AMD Instinct MI400 processors in 2026, which will run on AMD’s Helios rack, pitted against Nvidia’s Vera Rubin.

AI is moving beyond the data center to intelligent devices at the edge and PCs. AMD expects to see AI deployed in every device, running on different architectures. From a portfolio standpoint, the company offers a suite of computing elements spanning GPUs, DPUs, CPUs, NICs, FPGAs, and adaptive SoCs. Its strategy is based on delivering a broad portfolio of compute engines so customers can match the right compute to the right use case, and on investing in an open, developer-first ecosystem that supports every major framework, library, and model. The chip designer believes that an open ecosystem is central to the future of AI and claims to be the only company committed to openness across hardware, software, and solutions.

Openness shouldn’t be just a buzzword because it will be critical to scale adoption of AI over the coming years. AMD has invested heavily both organically and through acquisitions to promote its open software stack; in the last year, it made 25 strategic investments in this area, including the Finnish company Silo AI, and more recently, Brium. Other acquisitions across the entire AI value chain include ZT Systems, Pensando, Lamini, Enosemi, and Xilinx. However, there are always risks associated with inorganic growth that the company needs to actively address.

However powerful AMD’s hardware may be, a common criticism in the industry is that its software cannot match Nvidia’s CUDA platform. AMD has pinpointed software as a key AI enabler and therefore a crucial focus, one that is shaping its M&A plans. The ROCm 7 software stack is designed to broaden coverage of AI models by accelerating the pace of updates, and to foster a developer-first mentality with integration with open-source frameworks top of mind. This extends the reach of AMD hardware and makes it easier to scale.

The company highlighted that demand for compute from inference workloads will soon equal that from model training, although training will remain the foundation for developing AI systems. As AI undertakes complex tasks like reasoning, driving demand for more compute, inference will soon account for the majority of the market. AMD is positioning inference as a crucial differentiator, with a focus on “tokens per dollar” as a metric.
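A “tokens-per-dollar” figure is simply throughput normalized by compute cost. The sketch below illustrates the arithmetic; the throughput and hourly price are invented for the example, not figures AMD has published:

```python
# Illustrative tokens-per-dollar calculation for inference hardware.
# The example inputs (5,000 tokens/s, $4/hour) are made up.

def tokens_per_dollar(tokens_per_sec: float, cost_per_hour: float) -> float:
    """Tokens generated per dollar of compute at a given sustained throughput."""
    tokens_per_hour = tokens_per_sec * 3600
    return tokens_per_hour / cost_per_hour

print(f"{tokens_per_dollar(5000, 4.0):,.0f} tokens per dollar")
# 4,500,000 tokens per dollar
```

The metric rewards exactly what AMD is emphasizing: either raising throughput or lowering the cost of the accelerator improves the score, which is why it suits a price/performance pitch against an incumbent.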

Looking ahead, the chip designer believes there is further opportunity in an environment where customers have under-invested in the refresh cycle over the last couple of years. However, with the industry still relatively immature in AI, it is difficult to predict how successful the agentic AI experiment will be. Many enterprises remain in the PoC phase, with lots of projects still in their infancy, and it is difficult to project the real size of the opportunity in this market. For a deeper analysis of the event, please read GlobalData’s report Advancing AI 2025: AMD Announces MI350 GPUs and Targets the Inference Opportunity, June 30, 2025.
