Three steps to build a data foundation for federal AI innovation

America’s AI Action Plan outlines a comprehensive strategy for the country’s leadership in AI. The plan seeks, in part, to accelerate AI adoption in the federal government. However, there is a gap in that vision: agencies have been slow to adopt AI tools to better serve the public. The biggest barrier to adopting and scaling trustworthy AI isn’t policy or compute power — it’s the foundation beneath the surface. How agencies store, access and govern their records will determine whether AI succeeds or stalls. Those records aren’t just for retention purposes; they are the fuel AI models need to power operational efficiencies through streamlined workflows and uncover mission insights that enable timely, accurate decisions. Without robust digitalization and data governance, federal records cannot serve as the reliable fuel AI models need to drive innovation.

Before AI adoption can take hold, agencies must do something far less glamorous but absolutely essential: modernize their records. Many still need to automate records management, beginning with opening archival boxes, assessing what is inside, and deciding what is worth keeping. This essential process transforms inaccessible, unstructured records into structured, connected datasets that AI models can actually use. Without it, agencies are not just delaying AI adoption, they’re building on a poor foundation that will collapse under the weight of daily mission demands.

If you do not know the contents of the box, how confident can you be that the records aren’t crucial to automating a process with AI? In AI terms, if you enlist the help of a model from a provider like OpenAI, the results will only be as good as the digitized data behind it. The greater the knowledge base, the faster AI can be adopted and scaled to positively impact public service. Here is where agencies can start preparing their records — their knowledge base — to lay a defensible foundation for AI adoption.

Step 1: Inventory and prioritize what you already have

Many agencies are sitting on decades’ worth of records, housed in a mix of storage boxes, shared drives, aging databases, and under-governed digital repositories. These records often lack consistent metadata, classification tags or digital traceability, making them difficult to find, harder to govern, and nearly impossible to automate.

This fragmentation is not new. According to NARA’s 2023 FEREM report, only 61% of agencies were rated as low-risk in their management of electronic records — indicating that many still face gaps in easily accessible records, digitalization and data governance. This leaves thousands of unstructured repositories vulnerable to security risks and unable to be fed into an AI model. A comprehensive inventory allows agencies to see what they have, determine what is mission-critical, and prioritize records cleanup. Not everything needs to be digitalized. But everything needs to be accounted for. This early triage is what ensures digitalization, automation and analytics are focused on the right things, maximizing return while minimizing risk.
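
As a rough illustration of what this triage can look like in practice, here is a minimal sketch in Python. It is hypothetical, not any specific agency tool: the repository path, the sidecar metadata format and the required fields are all assumptions. It walks a repository and buckets records by whether they carry the metadata needed for governance.

    import csv
    from pathlib import Path

    # Metadata every record is assumed to need before it can feed automation or AI.
    REQUIRED_METADATA = {"record_series", "classification", "retention_schedule"}

    def load_sidecar(path: Path) -> dict:
        """Read a per-record metadata sidecar (assumed to be '<name>.meta.csv'), if present."""
        sidecar = path.with_name(path.name + ".meta.csv")
        if not sidecar.exists():
            return {}
        with sidecar.open(newline="") as fh:
            return {row["key"]: row["value"] for row in csv.DictReader(fh)}

    def triage(repo_root: str):
        """Split records into AI-ready and needs-cleanup buckets based on missing metadata."""
        ready, needs_work = [], []
        for path in Path(repo_root).rglob("*"):
            if not path.is_file() or path.name.endswith(".meta.csv"):
                continue
            missing = REQUIRED_METADATA - load_sidecar(path).keys()
            (needs_work if missing else ready).append((path, sorted(missing)))
        return ready, needs_work

    if __name__ == "__main__":
        ready, needs_work = triage("./records")  # hypothetical repository root
        print(f"{len(ready)} records are ready; {len(needs_work)} need metadata cleanup first")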

Without this step, agencies risk building powerful AI models on unreliable data, a setup that undermines outcomes and invites compliance pitfalls.

Step 2: Make digitalization the bedrock of modernization

One of the biggest misconceptions around modernization is that digitalization is a tactical compliance task with limited strategic value. In reality, digitalization is what turns idle content into usable data. It’s the on-ramp to AI-driven automation across the agency, including one-click records management and data-driven policymaking.

By focusing on high-impact records — those that intersect with mission-critical workflows, the Freedom of Information Act, cybersecurity enforcement or policy enforcement — agencies can start to build a foundation that’s not just compliant, but future-ready. These records form the connective tissue between systems, workforce, data and decisions.

The Government Accountability Office estimates that up to 80% of federal IT budgets are still spent maintaining legacy systems, resources that, if reallocated, could help fund strategic digitalization and unlock real efficiency gains. The opportunity cost of delay is increasing every day.

Step 3: Align records governance with AI strategy

Modern AI adoption isn’t just about models and computation; it’s about trust, traceability, and compliance. That’s why strong information governance is essential.

Agencies moving fastest on AI are pairing records management modernization with evolving governance frameworks, synchronizing classification structures, retention schedules and access controls with broader digital strategies. The Office of Management and Budget’s 2025 AI Risk Management guidance is clear: explainability, reliability and auditability must be built in from the start.

When AI deployment evolves in step with a diligent records management program centered on data governance, agencies are better positioned to accelerate innovation, build public trust, and avoid costly rework. For example, labeling records with standardized metadata from the outset enables rapid, digital retrieval during audits or investigations, a need that’s only increasing as AI use expands. This alignment is critical as agencies adopt FedRAMP Moderate-certified platforms to run sensitive workloads and meet compliance requirements. These platforms raise the baseline for performance and security, but they only matter if the data moving through them is usable, well-governed and reliable.

Infrastructure integrity: The hidden foundation of AI

Strengthening the digital backbone is only half of the modernization equation. Agencies must also ensure the physical infrastructure supporting their systems can withstand growing operational, environmental, and cybersecurity demands.

Colocation data centers play a critical role in this continuity — offering secure, federally compliant environments that safeguard sensitive data and maintain uptime for mission-critical systems. These facilities provide the stability, scalability and redundancy needed to sustain AI-driven workloads, bridging the gap between digital transformation and operational resilience.

By pairing strong information governance with resilient colocation infrastructure, agencies can create a true foundation for AI, one that ensures innovation isn’t just possible, but sustainable in even the most complex mission environments.

Melissa Carson is general manager for Iron Mountain Government Solutions.

The post Three steps to build a data foundation for federal AI innovation first appeared on Federal News Network.


Twilio Drives CX with Trust, Simple, and Smart

By: siowmeng
S. Soh

Summary Bullets:

  • The combination of omni-channel capability, effective data management, and AI will drive better customer experience.
  • As Twilio’s business evolves from CPaaS to customer experience, the company focuses its product development on three themes: trusted, simple, and smart.

The ability to provide superior customer experience (CX) helps a business gain customer loyalty and a strong competitive advantage. Many enterprises are looking to AI, including generative AI (GenAI) and agentic AI, to further boost CX by enabling faster resolution and personalized experiences.

Communications platform-as-a-service (CPaaS) vendors offer a platform that focuses on meeting omni-channel communications requirements. These players have now integrated a broader set of capabilities to solve CX challenges, involving different touch points including sales, marketing, and customer service. Twilio is one of the major CPaaS vendors that has moved beyond communications application programming interfaces (APIs) into contact center (Twilio Flex), customer data management (Segment), and conversational AI. Twilio’s product development has been focusing on three key themes: Trusted, Simple, and Smart. The company has demonstrated these themes through product announcements throughout 2025 and showcased them at its SIGNAL events around the world.

Firstly, Twilio is winning customer trust through its scalable and reliable platform (e.g., 99.99% API reliability), working with all major telecom operators in each market (e.g., Optus, Telstra, and Vodafone in Australia). More importantly, it is helping clients win the trust of their customers. With rising fraud impacting consumers, Twilio has introduced various capabilities including Silent Network Authentication and FIDO-certified passkeys as part of Verify, its user verification product. The company is also promoting the use of branded communications, which has been shown to build consumer trust and a greater willingness to engage with brands. Twilio has introduced branded calling, RCS for branded messaging, WhatsApp Business Calling, and WebRTC for browsers.
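
For a concrete sense of what a Verify-style flow looks like from the developer side, here is a minimal sketch using Twilio’s Python helper library. The service SID, phone number and environment variable names are placeholders, and the choice of the SMS channel (rather than Silent Network Authentication or a passkey flow) is an assumption made for brevity.

    import os
    from twilio.rest import Client

    # Credentials and the Verify service SID are assumed to be set in the environment.
    client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
    VERIFY_SID = os.environ["TWILIO_VERIFY_SERVICE_SID"]  # e.g., a "VA..." service SID

    def start_verification(phone_number: str) -> str:
        """Send a one-time code to the user's phone via the Verify service."""
        verification = client.verify.v2.services(VERIFY_SID).verifications.create(
            to=phone_number, channel="sms"
        )
        return verification.status  # typically "pending"

    def check_verification(phone_number: str, code: str) -> bool:
        """Confirm the code the user entered; a status of "approved" means the check passed."""
        result = client.verify.v2.services(VERIFY_SID).verification_checks.create(
            to=phone_number, code=code
        )
        return result.status == "approved"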

The second theme is about simplifying the developer experience when using the Twilio platform to achieve better CX outcomes. Twilio has long been in the business of giving businesses the ability to reach their customers through a range of communications channels. With Segment (customer data platform), Twilio enables businesses to leverage their data more effectively to gain customer insights and take action. An example is the recent introduction of Event Triggered Journeys (general availability in July 2025), which allows the creation of automated marketing workflows to support personalized customer journeys. This can be used to enable a responsive approach for real-time use cases, such as cart abandonment, onboarding flows, and trial-to-paid account journeys. Taking action to promptly address issues a customer is facing improves the chance of a successful transaction, and a happy customer.
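
As a rough sketch of the kind of signal that can feed an Event Triggered Journey, the snippet below uses Segment’s Python analytics library to record a cart-abandonment event. The write key, user ID, event name and properties are illustrative placeholders; whether this exact event drives a journey depends entirely on how the Segment workspace is configured.

    import segment.analytics as analytics

    analytics.write_key = "YOUR_SEGMENT_WRITE_KEY"  # placeholder

    # Record that a known user left items in their cart; downstream, a journey
    # configured on this event could trigger a reminder over SMS, email, etc.
    analytics.track(
        user_id="user_1234",  # illustrative identifier
        event="Cart Abandoned",
        properties={
            "cart_value": 89.50,
            "items": ["sku_001", "sku_042"],
        },
    )

    analytics.flush()  # send any queued events before the script exits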

The third theme, ‘smart,’ is about leveraging AI to make better decisions, enable differentiated experiences, and build stronger customer relationships. Twilio announced two conversational AI updates in May 2025. The first is ‘Conversational Intelligence’ (generally available for voice and in private beta for messaging), which analyzes voice calls and text-based conversations and converts them into structured data and insights. This is useful for understanding sentiment, spotting compliance risks, and identifying churn risks. The other AI capability is ‘ConversationRelay’, which enables developers to create voice AI agents using their preferred LLM and integrate them with customer data. Twilio is leveraging speech recognition technology and interrupt handling to enable human-like voice agents. Cedar, a financial experience platform for healthcare providers, is leveraging ConversationRelay to automate inbound patient billing calls. Healthcare providers receive large volumes of calls from patients seeking clarity on their financial obligations, and the use of ConversationRelay enables AI-powered voice agents to provide quick answers and reduce wait times. This provides a better patient experience and quantifiable outcomes compared to traditional chatbots. It is also said to reduce costs. The real test is whether such capabilities impact customer experience metrics, such as net promoter score (NPS).

Today, many businesses use Twilio to enhance customer engagement. At the Twilio SIGNAL Sydney event, for example, Twilio customers spoke about their success with Twilio solutions. Crypto.com reduced onboarding times from hours to minutes, Lendi Group (a mortgage FinTech company) highlighted the use of AI agents to engage customers after hours, and Philippine Airlines was exploring Twilio Segment and Twilio Flex to enable personalized customer experiences. There was general excitement about the use of AI to further enhance CX. However, while businesses are aware of the benefits of using AI to improve customer experience, the challenge has been the ability to do it effectively.

Twilio is simplifying the process with Segment and conversational AI solutions. The company is tackling another major challenge around AI security, through the acquisition of Stytch (completed on November 14, 2025), an identity platform for AI agents. AI agent authentication becomes crucial as more agents are deployed and given access to data and systems. AI agents will also collaborate autonomously through protocols such as Model Context Protocol, which can create security risks without an effective identity framework.

The industry has come a long way from legacy chatbots to GenAI-powered voice agents, and Twilio is not alone in pursuing AI-powered CX solutions. The market is still a long way from producing quantifiable feedback from customers. Technology vendors enabling customer engagement (e.g., Genesys, Salesforce, and Zendesk) have developed AI capabilities including voice AI agents. The collective efforts and competition within the industry will help to drive awareness and adoption. But it is crucial to get the basics right around data management, security, and the cost of deploying AI.

Expert Edition: How to modernize data for mission impact

By: wfedstaff

Federal tech leaders are turning data into mission power.

Deliver faster. Operate smarter. Spend less. That’s the challenge echoing across federal C-suites, and data modernization is central to the answer.

In our latest Federal News Network Expert Edition, leaders from across government and industry share how agencies are transforming legacy systems into mission-ready data engines:

  • Alyssa Hundrup, health care director at the Government Accountability Office, urges DoD and VA to go beyond “just having agreements” to share health care services and start measuring the impact of these more than 180 agreements: “There’s more … that could really take a data-informed approach.”
  • Duncan McCaskill, vice president of data at Maximus, reminds us that governance is everything: “Governance is your policy wrapper. … Data management is the execution of those rules every day. If you give AI terrible data, you’re going to get terrible results.”
  • Stuart Wagner, chief data and AI officer at the Navy, calls out the risks of inconsistent classification: “If the line is unclear, they just go, ‘Well, we can’t share.’ ”
  • Vice Adm. Karl Thomas, deputy chief of Naval operations for information warfare, highlights the power of AI and open architectures: “Let machines do what machines do best … so humans can make the decisions they need.”
  • And from the Office of Personnel Management, a full overhaul of FedScope is underway to make federal workforce data more transparent and actionable.

In every case: Data is the mission driver.

Download the full ebook to explore how these agencies are modernizing their data strategies!

The post Expert Edition: How to modernize data for mission impact first appeared on Federal News Network.


What happens when a government gives an algorithm a seat at the cabinet table?

Interview transcript

Eric White Tell me about Diella. What was your experience and how you came to find out about her?

Sam Adebayo Yeah, so as a technology reporter, I’m always trying to sniff out and see what’s going on in the technology industry. And it’s not often that you find, you know, something that jumps at you right away and you’re like, I have to write about this. So when I saw the announcement that Albania was launching sort of like an AI minister, I’m like, that sounds interesting, you know. It seems like something to write about. And it is not the first time really that a government has tried to use AI in its operations. The US Treasury, for instance, has this AI system for check fraud detection, which has been very helpful in checking for fraud. And as a matter of fact, there was about, I think, $400 million that was recovered in 2023. Canada has some sort of AI system for its immigration tasks. The UK has also dabbled in using some sort of AI systems. This is not the first time, but this is the first time we’re seeing a nation-state actually elevate an AI system up to the point where it’s making high-profile decisions in something as important as public procurement. When I saw the news, I’m like, I have to really look at what’s happening here. And see what could be the potential impact across governments around the world.

Eric White Yeah. And it’s the first time I’ve seen, myself, of one being actually personified. Right. I mean … it’s a character. It’s an avatar.

Sam Adebayo Yeah, yeah, dressed in all nice Albanian costumes.

Eric White Yeah. Does that add another layer to it where, you know, I’m sure that in meetings and conversations, and even when you were writing about it, did you feel as if you’re writing about an actual person sometimes?

Sam Adebayo I mean, there’s some sort of anthropomorphism here where you sort of, like, allude some sense of humanity to or animate the tool in a way. So it sort of felt like that when I was writing about it. But of course, I’ve been writing about AI for a while, so I know that it’s still a machine. It was trained on specific data sets, right? But it was very interesting to actually write about this development, especially with everything that’s happening in AI today.

Eric White Let’s get into the nitty gritty a little bit. What is it that Diella can actually do and what is her main job? As well as you can explain it, what was curious about what you found?

Sam Adebayo The Albanian government actually rolled out — but at the time it wasn’t exactly called Diella. It was like an AI system that was supposed to help people who were Albanian citizens who were looking to benefit from like citizen services. There was this e-Albania website that you would log into if you wanted to like skip some bureaucratic processes and get maybe a document provided for you or something like covering services in general. And so, because the Albanian prime minister has actually been looking forward to showing the world that Albania is becoming more and more a transparent society, because experts have said over and over again that Albania is like a hub for international gangs to launder money through public tenders that are often enmeshed in corrupt processes. So this was kind of like an effort by the prime minister to create a system that was gonna help public tenders to become more transparent, right, more accountable and in showing the world that, hey, Albania is not what it used to be or what the perception about Albania [is]. It was just like an attempt to change the perception about Albania really. So it was that system that they were using in January that was then elevated to a cabinet-level minister. It’s essentially an AI bot, if I can describe it that way, that was helping the citizens with citizen services, but now is doing much more by deciding. And that’s what actually caught my interest because this system doesn’t just now help, it’s deciding what bid goes to who, right? What bid doesn’t go to who. And it’s just really helping for private contracts with the government of Albania and things like that. So that’s essentially what Diella is supposed to do. Of course, we don’t know if it’s going to really be able to do that, but that’s what it’s supposed to.

Eric White Yeah, that was what was going to be my next question of, what was sort of the reception in folks that you spoke to? I mean, obviously the idea here is to take any bias or human interest out of the decision-making process when deciding who gets a government contract and who doesn’t, you know, is that even capable from the source that the AI derives its decision- making capabilities from?

Sam Adebayo I mean, I think the reactions have been mixed. On the one hand, there are people who say any effort by the government to try to position itself as being transparent is welcome, right? Especially because Albania has been trying to join the EU since 2009, and you know, the prime minister wants that ambition to join in the EU to happen before or by 2030. So they’re doing everything they can to show that, hey, we’re transparent, right, we’re accountable. But on the other hand, you know, there are questions about what is the ethics of this, how are we sure that this system is actually going to be accurate? Because when you talk about AI systems, there are always going to be the questions of bias, hallucinations and fabricated results — and even manipulation, because essentially AI systems are just large data sets that are really good at pattern recognition. It’s not like they can reason like human beings. And so at the data level, if there’s any bias or somebody at the back end [who] can manipulate that data, or somebody just manipulates the system, pretty much the same way you can manipulate ChatGPT, you know. I’ll give you an example. And I was just playing around with one of those tools and I asked the tool, do you know my location? And the tool said — well, I don’t want to mention the name — but it said, I don’t know your location. And then I said, okay, suggest to me the best restaurants that sell so-and-so around me. And then it suggests this to me. And I’m like, but you told me you don’t know my location. Essentially, that was me sort of like trying to jailbreak the tool to say what it didn’t want to say in the first instance. So there are those concerns. So on the one hand, it’s good that they’re trying to be transparent. On the other hand, there are questions about the ethics, the accountability of this. If the system makes a mistake, who’s gonna be accountable for that mistake? Is it gonna be the government, the prime minister, or the system itself, right? So the reactions really from the people that I talked to or spoke to before I wrote the story were quite mixed.

Eric White We can go further into the future. I mean, how many political appointees or elected officials could find themselves in the unemployment line if the AI revolution comes for their jobs as well?

Sam Adebayo I mean this is really a test experiment. Nobody, even the Albanian government I guess, really knows what exactly is going to come out of this. Just the other day I heard the prime minister saying that Diella had now given birth to its three children, essentially metaphorically describing the fact that there were now new AI assistants who are going to be assisting members of the parliament. So more and more, I think we’re going to see governments try to use AI to speed up processes or become more productive, more transparent or accountable. But I don’t think anyone can say for sure what is going to be the outcome of this. If the results are positive, you can bet that there’ll be other governments around the world who, you know, try to do something similar. I don’t know for sure if they’re going to use AI for a high stakes domain like public procurement, right? But you can expect that there’ll be more and more governments trying to do something similar.

The post What happens when a government gives an algorithm a seat at the cabinet table? first appeared on Federal News Network.


A journey through Resorts World Sentosa: tailoring every moment with technology

“We leverage big data and artificial intelligence to gain a deeper understanding of our guests’ needs. Smart systems help us optimize queue management, visitor flow, and energy efficiency. By integrating online and offline experiences, we enable visitors to conveniently plan and interact through their mobile devices. Our ultimate pursuit is to make Resorts World Sentosa more than just a resort—it is an evolving destination that blends innovation with seamless hospitality.”  – Lee Shi Ruh, CEO of RWS

The digital revolution is bringing cutting-edge technologies like AI, IoT, and big data to every corner of modern life, fundamentally transforming how we live and interact with the world. Nowhere is this shift more evident than in the tourism sector, where innovation is redefining the very essence of exploration. Far from being mere hype, smart tourism solutions have emerged as game-changers; they have refreshed guest experiences, streamlined operations, and paved the way for sustainable growth in one of the world’s most dynamic industries.

Nestled on Singapore’s vibrant Sentosa Island, Resorts World Sentosa (RWS) stands as Asia’s premier integrated resort and a pioneer in creating immersive travel experiences with modern technology.

Transforming from a tropical resort to an entire smart ecosystem

Since its launch in 2010, RWS has captivated over 200 million guests with its unique attractions. Spanning 49 hectares, this award-winning destination is home to world-class attractions like Universal Studios Singapore, the mesmerizing Adventure Cove Waterpark, and the thrilling Singapore Oceanarium. Guests have luxurious boutique hotels and state-of-the-art facilities at their disposal throughout the resort, like the International Conference Centre. For ten years, RWS has proudly held the prestigious title of Best Comprehensive Resort, solidifying its status as a global leader in entertainment and hospitality.

Behind the honour is RWS’ relentless pursuit of sustainable development and technological innovation. By harmonizing tourism with environmental stewardship and fostering mutual growth between businesses and communities, RWS sets a transformative standard for progress. To this end, RWS unveiled its ambitious RWS 2.0 expansion plan, investing SGD 6.8 billion into transformative development. Beyond iconic additions like Minion Land, The Laurus, and WEAVE, the initiative prioritizes the adoption of extensive digital and intelligent innovations, aiming to create new surprises and lasting memories for every visitor.


Smart services: Seamless connectivity for every step

At RWS, technology is no longer a background tool but a core part of the customer experience.

Huawei Wi-Fi 7 rethinks connectivity with intelligent converged scheduling, dynamic zooming smart antennas, and precision signal pre-correction. These innovations deliver unmatched signal strength, broader coverage, and better network performance.

RWS now boasts comprehensive Wi-Fi 7 coverage across its entire park, creating an incredible high-speed network experience. Guests can effortlessly stream live content at bustling Universal Studios Singapore, upload crystal-clear HD footage from the Oceanarium, or conduct smooth remote meetings from their hotel rooms, all with lightning-fast bandwidth and minimal latency.

“Because as we all know, connectivity now is the new ingredient in terms of elevating customer experience,” emphasized Alvin Tan, Divisional President of RWS. “We will be able to have the ability to even take in or ingest more data from all our different systems, supporting our visitors’ pre-arrival, during their stay in Resorts World, to be able to respond better to their needs.” Guests can effortlessly organize itineraries, secure bookings, and access tailored suggestions via apps on their mobiles. Leveraging behavioral insights, the system dynamically delivers hyper-personalized content relevant to the guest.

Smart operations: Efficient, data-driven collaboration

Behind the scenes, RWS uses its smart operations center to deliver unparalleled oversight, holistic service management, and real-time event control by integrating critical systems like passenger flow monitoring, energy optimization, security response protocols, and facility upkeep into one cohesive platform.

Huawei CampusInsight reinvents network management by monitoring the quality of VIP services and preemptively detecting network issues, whether they be device access failures or subpar network performance. By shifting from traditional passive maintenance to proactive operations, the solution slashes the time spent in fault alerting, diagnosis, localization, and resolution. By identifying rogue or unauthorized devices, it fortifies network security, safeguarding against data breaches with precision. “The other thing that we are looking forward to with this kind of new technology is, of course, how we can improve our business operations with this kind of connectivity, this kind of network bandwidth,” said Alvin Tan. “This will allow our business management to run the operations in actually a more seamless manner to be able to respond to customers’ needs in a very rapid fashion.”

Smart security: Building a foundation of digital resilience

RWS prioritizes security and operations management at every level. Huawei’s networks exceed stringent industry benchmarks for waterproofing, electrical resistance, and lightning defence, delivering unwavering performance under extreme weather conditions. With Wi-Fi 7 utilizing advanced WPA3 protocols alongside optional end-to-end MACsec encryption and Wi-Fi Shield, wireless connectivity remains ironclad. The system effectively protects all payment transactions and network traffic, offering visitors complete protection when they make purchases.

“We would be able to have better failover in any contingencies to keep our operations just running smoothly for our visitors,” Alvin Tan emphasized. “This kind of new capabilities, I think it just improves the overall resiliency and the security aspects of our resorts.”

Smart future: Joint innovation and continuous evolution

The partnership between RWS and Huawei is more than just technology procurement. The two companies are working together to build a new ecosystem.

“Working with Huawei has actually allowed us to create an ecosystem where we want to actually allow seamless connectivity for our customers,” said Alvin Tan. “I’ve also been introduced to an ecosystem of their partners, to which we are starting to see good progress because from their ecosystem, we are also finding partners that can bring good technology, good innovation.”

In the coming five years, RWS will embark on a transformative journey, weaving big data, AI, and smart systems into every facet of their services. Whether crafting visitor routes and streamlining queues or optimizing energy efficiency, they will elevate both products and services to extraordinary levels of excellence.

“Our ultimate pursuit is to make Resorts World Sentosa more than just a resort. It is an evolving destination that blends innovation with seamless hospitality,” said Lee Shi Ruh, CEO of RWS.

Making the new smart tourism ecosystem a reality

RWS demonstrates that the heart of smart tourism lies not in technological overload but in the seamless fusion of innovation with human-centric values. Through crafting an intelligent, eco-friendly, secure, and streamlined tourism ecosystem, Sentosa elevates visitor experiences while setting a gold standard for sustainability in the worldwide travel sector.

State-of-the-art technology is breathing fresh life into this tropical paradise, blending past and present, merging digital innovation with tangible experiences, and transforming each stay into an unforgettable voyage brimming with wonder.

Verizon Mobile Security Index: In the AI Era, the Human Element Remains the Weak Link


Amy Larsen DeCarlo – Principal Analyst, Security and Data Center Services

Summary Bullets:

  • To protect an expansive mobile environment attack surface in the face of a very dangerous threat environment, organizations are ramping up their security investments, with 75% of the 762 polled in a recent Verizon study reporting they had increased spending this year.
  • But concerns still loom large about threat actors using AI and other technologies and tactics to breach the enterprise, and only 17% have implemented security controls to stave off AI-driven attacks.

Mobile and IoT devices play an essential role in most organizations’ operations today. However, the convenience and flexibility they bring come with risk, opening new points of exposure to enterprise assets. Organizations that were quick to embrace bring your own device (BYOD) strategies often didn’t have a solid plan for safeguarding this environment when so many of these devices were under-secured. Enterprises have made progress in layering their defenses to better protect mobile and IoT environments, but there is still room for progress.

In Verizon’s eighth annual Mobile Security Index report, 77% of the people surveyed said deepfake attacks that tap AI-generated voice and video content to impersonate staff or executives, as well as SMS text phishing campaigns, are likely to accomplish their objective. Approximately 38% think AI will make ransomware even more effective.

Despite the increase in cybersecurity spending in most organizations, only 12% have deployed security controls to safeguard their enterprise from deepfake-enhanced voice phishing. Just 16% have implemented protections against zero-day exploits.

Enterprise employees are welcoming AI-driven apps to their mobile devices – with 93% using GenAI as part of their workday routine. Respondents also raised red flags, with 64% calling data compromise via GenAI their number one mobile risk. Of the 80% of enterprises that ran employee smishing tests, 39% reported that employees fell for the scam.

AI aside, user error is the most frequently noted contributor to breaches in general, followed by application threats and network threats. Some 80% said they had documented mobile phishing attempts aimed at staff.

While prioritizing cybersecurity spending is important, organizations need to look at whether they are allocating this investment to the right areas. Just 45% said their organization provides comprehensive education on the potential risks mobile AI tools bring. Only half have formal policies regarding GenAI use on mobile devices, and 27% said those policies aren’t strictly enforced.

Govini founder charged with 4 felonies

The founder and executive chairman of Govini, a provider of acquisition data and software to the government, has been arrested and charged with four felonies, including multiple counts of unlawful contact with a minor.

Eric T. Gillespie, 57, of Pittsburgh, allegedly used an online chat platform to attempt to solicit sexual contact with a pre-teenage girl.


The Pennsylvania Attorney General’s Office says that at arraignment, a magisterial district judge denied Gillespie bail, citing flight risk and public safety concerns.

The attorney general’s office says one of its agents “posed as an adult in an online chat platform often utilized by offenders attempting to arrange meetings with children, and engaged in a conversation with Gillespie. Gillespie then made attempts to arrange a meeting with a pre-teenage girl (in Lebanon County).”

Govini said in an updated statement late on Wednesday that it had fired Gillespie.

“On November 12, 2025, the Govini Board of Directors terminated Eric Gillespie from the organization, including as a member of the Board, effective immediately. Mr. Gillespie stepped down from the role of CEO almost a decade ago and had no access to classified information,” a company spokesperson said. “Govini is an organization that has been built by over 250 people who share a profound commitment to America’s national security, including veterans, reservists, and people who have dedicated their lives to causes greater than themselves. The actions of one depraved individual should not in any way diminish the hard work of the broader team and their commitment to the security of the United States of America.”

Poplicus Inc., which does business as Govini, had 26 contracts with the government in fiscal 2025 worth about $52 million, according to the USASpending.gov platform. The vast majority of the awards came from the Defense Department, with two other smaller contracts coming from the departments of Commerce and Energy.

Govini’s main DoD customers include the Army, the Defense Information Systems Agency and the Navy.

Since 2021, Govini has won 107 awards worth more than $255 million.

The company said in October that it surpassed $100 million in annual recurring revenue (ARR) and secured a $150 million investment from Bain Capital.

Gillespie launched Govini in 2013 after launching Recovery.org back in the early days of the American Recovery and Reinvestment Act.

If convicted, Gillespie would spend a minimum of seven years in jail and face up to $15,000 in fines. After serving time, he would have to register as a sex offender for at least 10 years under Pennsylvania law.

The post Govini founder charged with 4 felonies first appeared on Federal News Network.


ANS’ Sci-Net Acquisition Positioned as Driving UK AI Readiness

R. Pritchard

Summary Bullets:

  • ANS’ acquisition of Sci-Net Solutions expands its portfolio of value-added enterprise technology solutions in a highly competitive UK B2B market
  • AI is a hook everyone latches on to – there are even products and solutions out there – but this is an acquisition of a service provider with current revenues

The ANS acquisition of Sci-Net Business Solutions is positioned as a complement to previous acquisitions such as Makutu as part of the ANS strategy to exploit and deliver the opportunities presented by artificial intelligence (AI). Sci-Net is an Oxford-based business solutions specialist with expertise in ERP, CRM, and cloud infrastructure solutions (e.g., 365 Business Central, Microsoft Dynamics NAV, CRM, and Microsoft Azure).

With ANS already having a strong relationship with Microsoft (Services Partner of the Year in 2024 and over 100 certified Microsoft specialists), the combination makes sense and grows the ANS talent base to over 750 people, including 65 technology consultants from Sci-Net. It offers opportunities to cross- and up-sell to the companies’ existing customer bases, and to continue to move up the value chain as a managed services provider (MSP).

The move also underlines some key trends in the UK marketplace. Competition remains fierce, so being able to act as a trusted advisor is becoming more important to win and retain business. At the same time, technology continues to become more complex, therefore offering a full portfolio of services ‘above and beyond’ connectivity is vital. MSPs and value-added resellers (VARs) recognize this and represent an ever-stronger force in the market as they can work closely with customers to develop technology solutions that directly address their business needs.

That is not to say that the ‘Big Three’ B2B service providers – BT, Vodafone, and O2 Daisy – do not also recognize this. All of them are positioning to become more solutions-oriented with a focus on areas like cloud, security and, increasingly, AI. They have the advantage of significant existing customer bases, deep human and partnership resources, strong brands, and nationwide fixed and mobile networks from which to deliver their services. By contrast, the likes of ANS and other VARs/MSPs can exploit their agility to differentiate themselves in the market.

It will continue to be a highly competitive market to win the custom of enterprises of all sizes in the UK, which is a tough challenge for all service providers. But it is good news for UK plc as businesses stand to benefit from innovation and value.

Is Liquid Cooling the Key Now that AI Pervades Everything?

B. Valle

Summary Bullets:

  • Data center cooling has become an increasingly daunting challenge because AI accelerators consume massive amounts of power.

  • Liquid cooling adoption is progressively evolving from experimental to mainstream, starting with AI labs and hyperscalers, then moving into the colocation space and later enterprises.

As Generative AI (GenAI) takes an ever-stronger hold in our lives, the demands on data centers continue to grow. The heat generated by the high-density computing required to run AI applications that are more resource-intensive than ever is pushing companies to adopt ever more innovative cooling techniques. As a result, liquid cooling, which used to be a fairly experimental technique, is becoming more mainstream.

Eye-watering amounts of money continue to pour into data center investment to run AI workloads. Heat management has become top of mind due to the high rack densities deployed in data centers. GlobalData forecasts that AI revenue worldwide will reach $165 billion in 2025, marking an annual growth of 26% over the previous year. The growth rate will accelerate from 2026 at 34%, and in subsequent years; in fact, the CAGR in the period 2004-2025 will reach 37%.


[Chart: GlobalData worldwide AI revenue forecast. Source: GlobalData]

The powerful hardware designed for AI workloads is growing in density. Although average rack densities are usually below 10 kW, it is feasible to think of AI training clusters of 200 kW per rack in the not-too-distant future. Of course, the average number of kW per rack varies a lot, depending on the application, with traditional IT workloads for mainstream business applications requiring far fewer kW per rack than frontier AI workloads.

Liquid cooling is a heat management technique that uses liquid to remove heat from computing components in data centers. Liquid has a much higher thermal conductivity than air as it can absorb and transfer heat more effectively. By bringing a liquid coolant into direct contact with heat-generating components like CPUs and GPUs, liquid cooling systems can remove heat at its source, maintaining stable operating temperatures.
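
A back-of-the-envelope calculation shows why liquid is attractive at these densities. Assuming a hypothetical 200 kW rack, water as the coolant and a 10°C rise between inlet and outlet (all assumptions, not vendor figures), the required flow rate follows from Q = ṁ·c_p·ΔT:

    # Rough sizing of coolant flow for a dense AI rack (illustrative assumptions only).
    rack_heat_w = 200_000.0   # assumed rack load, per the forward-looking figure above
    delta_t = 10.0            # assumed coolant temperature rise, inlet to outlet, in kelvin
    cp_water = 4186.0         # specific heat of water, J/(kg*K)
    rho_water = 997.0         # density of water, kg/m^3

    mass_flow = rack_heat_w / (cp_water * delta_t)       # kg/s
    volume_flow_lpm = mass_flow / rho_water * 1000 * 60  # litres per minute

    print(f"~{mass_flow:.1f} kg/s, roughly {volume_flow_lpm:.0f} L/min of water per rack")
    # ~4.8 kg/s, roughly 288 L/min; moving the same heat in air would take a far larger
    # volumetric flow, since air's volumetric heat capacity is roughly 3,500 times lower.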

Although there are many diverse types of liquid cooling techniques, direct-to-chip, also known as “cold plate,” is the most popular cooling method, accounting for approximately half of the liquid cooling market. This technique uses a cold plate mounted directly on the chip inside the server; the direct contact enhances heat transfer efficiency and enables effective heat dissipation. This method allows high-end, specialized servers to be installed in standard IT cabinets, similar to legacy air-cooled equipment.

There are innovative variations on the cold plate technique that are currently under experimentation. Microsoft is prototyping a new method that takes the direct-to-chip technique one step further by bringing liquid coolant directly inside the silicon where the heat is generated. The method entails applying microfluidics via tiny channels etched into the silicon chip, creating grooves that allow cooling liquid to flow directly onto the chip and more efficiently remove heat.

Swiss startup Corintis is behind the novel technique, which blends the electronics and the heat management system that have been historically designed and made separately, creating unnecessary obstacles when heat has to propagate through multiple materials. Corintis created a design that blends the electronics and the cooling together from the beginning so the microchannels are right underneath the transistor.

Cisco Quantum – Simply Network All the Quantum Computers

S. Schuchart

Cisco’s Quantum Labs research team, part of Outshift by Cisco, has announced that it has completed a full software solution prototype. The latest part is the Cisco Quantum Compiler prototype, designed for distributed quantum computing across networked processors. In short, it allows a network of quantum computers, of all types, to participate in solving a single problem. Even better, this new compiler supports distributed quantum error correction. Instead of a quantum computer needing to have a huge number of qubits itself, the load can be spread out among multiple quantum computers. This coordination is handled across a quantum network, powered by Cisco’s Quantum Network entanglement chip, which was announced in May 2025. The same network could also be used to secure communications for traditional servers.

For some quick background – one of the factors holding quantum computers back is the lack of quantity and quality when it comes to qubits. Most of the amazing things quantum computers can in theory do require thousands or millions of qubits. Today we have systems with around a thousand qubits. But those qubits need to be high-quality qubits, and qubits are extremely susceptible to outside interference; they need to be available in quantity as well as quality. To fix the quality problem, there has been a considerable amount of work on error correction for qubits. But again, most quantum error correction routines require even more physical qubits to create ‘stable’ logical qubits. Research has been ongoing across the industry – everyone is looking for a way to create large numbers of stable qubits.
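
To put the overhead in perspective, here is a rough, illustrative calculation. It assumes a surface-code-style scheme in which one logical qubit costs on the order of 2·d² physical qubits for code distance d; the specific numbers are assumptions for the sake of the arithmetic, not Cisco’s figures.

    # Back-of-the-envelope: how many ~1,000-qubit processors might a useful machine need?
    code_distance = 17                             # assumed distance for low logical error rates
    physical_per_logical = 2 * code_distance ** 2  # ~surface-code overhead: 578 physical qubits

    logical_qubits_needed = 1000                   # assumed size of a "useful" fault-tolerant workload
    physical_needed = logical_qubits_needed * physical_per_logical

    qubits_per_processor = 1000                    # roughly today's largest single processors
    processors_needed = -(-physical_needed // qubits_per_processor)  # ceiling division

    print(f"{physical_needed:,} physical qubits -> about {processors_needed} networked processors")
    # 578,000 physical qubits -> about 578 networked processors, ignoring networking overhead.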

What Cisco is proposing is that instead of making a single quantum processor bigger to have more qubits, multiple quantum processors can be strung together with its quantum networking technology, with the quality of the transmitted qubits ensured through distributed error correction. It’s an intriguing idea – as Cisco more or less points out, we didn’t achieve scale with traditional computing by simply making a single CPU bigger and bigger until it could handle all tasks. Instead, multiple CPUs were integrated on a server and then those servers were networked together to share the load. That makes good sense, and it’s an interesting approach. Just like with traditional CPUs, quantum processors will not suddenly stop growing – but if this works, it will allow quantum systems to scale out across smaller individual processors, possibly ushering in useful, practical quantum computing sooner.

Is this the breakthrough needed to bring about the quantum computing revolution? At this point it’s a prototype – not an extensively tested method. Quantum computing requires so much fundamental physics research and is so complicated that it’s extremely hard to say if what Cisco is suggesting can usher in that new quantum age. But it is extremely interesting, and it will certainly be worth watching this approach as Cisco ramps up its efforts in quantum technologies.

Technology Leaders Can Leverage TBM to Play a More Strategic Role in Aligning Tech Spend with Business Values

By: siowmeng
S. Soh

Summary Bullets:

  • Organizations are spending more on technology across business functions, and it is imperative for them to understand and optimize their tech spending through technology business management (TBM).
  • IBM is a key TBM vendor helping organizations to drive their IT strategy more effectively; it is making moves to extend the solution to more customers and partners.

Every company is a tech company. While this is a cliché, especially in the tech industry, it is becoming real in the era of data and AI. For some time, businesses have been gathering data and analyzing it for insights to improve processes and develop new business models. By feeding data into AI engines, enterprises accelerate transformation by automating processes and reducing human intervention. The result is less friction in customer engagement, more agile operations, smarter decision-making, and faster time to market. This is, at least on paper, the promise of AI.

However, enterprises face challenges as they modernize their tech stack, adopt more digital solutions, and move AI from trials to production. Visibility into tech spending and the ability to forecast costs, especially with many services consumed on a pay-as-you-go basis, is a challenge. While FinOps addresses cloud spend, a more holistic view of technology spend is necessary, including legacy on-premises systems, GenAI costs (pricing is typically based on tokens), as well as labor-related costs.

This has made the concept of TBM more crucial today than ever. TBM is a discipline that focuses on enhancing business outcomes by providing organizations with a systematic approach to translating technology investments into business value. It brings financial discipline and transparency to IT expenditures with the aim of maximizing the contribution of technology to overall business success. Technology is now widely used across business functions such as enterprise resource planning (ERP) for finance, human capital management (HCM) for HR, customer relationship management (CRM) for sales, and supply chain management (SCM) for operations. Based on GlobalData’s research, about half of tech spend today already comes from budgets outside of the IT department. TBM is becoming more crucial as the use of technology becomes even more pervasive across the organization, especially with AI being embedded into workflows. Moreover, TBM capabilities also help to elevate the role of tech leaders within an organization as strategic business partners.

IBM is one of the vendors that offer a comprehensive set of solutions to support TBM, enabled in part by acquisitions such as Apptio (which had itself acquired Cloudability and Targetprocess) and Kubecost. Cloudability underpins IBM’s FinOps and cloud cost management, a key component that is already seeing great demand due to the need to optimize cloud workloads and spend as companies continue to expand their cloud usage. Apptio offers IT financial management (ITFM), which helps enterprises gain visibility into their tech spend (including SaaS, cloud, on-premises systems, labor, etc.) as well as usage and performance by app or team. This enables real-time decision-making, facilitates the assessment of IT investments against KPIs, makes it possible to shift IT budget from keeping the lights on to innovation, and supports showback/chargeback to promote fairness and efficient usage of resources. With Targetprocess, IBM also has a strategic portfolio management (SPM) solution that helps organizations to plan, track, and prioritize work from the strategic portfolio of projects and products down to the software development team. The ability to track work delivered by teams and determine the cost per unit of work allows organizations to improve time-to-market and align talent spend to strategic priorities.
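
As a simplified illustration of the showback idea (not Apptio’s data model; the teams, cost categories and dollar figures below are invented for the example), a minimal sketch might aggregate spend from several sources and attribute it to the consuming team:

    from collections import defaultdict

    # Hypothetical monthly cost records pulled from cloud bills, SaaS invoices and HR systems.
    cost_records = [
        {"team": "claims-processing", "category": "cloud", "usd": 182_000},
        {"team": "claims-processing", "category": "labor", "usd": 240_000},
        {"team": "claims-processing", "category": "genai_tokens", "usd": 13_500},
        {"team": "customer-portal", "category": "cloud", "usd": 96_000},
        {"team": "customer-portal", "category": "saas", "usd": 22_000},
    ]

    def showback(records):
        """Roll costs up by team so each business owner sees what their workloads consume."""
        totals = defaultdict(lambda: defaultdict(float))
        for rec in records:
            totals[rec["team"]][rec["category"]] += rec["usd"]
        return totals

    for team, breakdown in showback(cost_records).items():
        detail = ", ".join(f"{cat}: ${usd:,.0f}" for cat, usd in sorted(breakdown.items()))
        print(f"{team}: ${sum(breakdown.values()):,.0f} total ({detail})")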

Besides IBM, ServiceNow’s SPM helps organizations make better decisions about which initiatives to pursue based on resources, people, budgets, etc. Serviceware is another firm that offers cloud cost management, ITFM, and a digital value model for TBM. Other FinOps and ITSM vendors may also join the fray as market awareness grows.

Moreover, TBM should not be a practice reserved for the largest enterprises; rather, its relevance depends on the level of tech spending involved. While IBM/Apptio serves many enterprises (e.g., 60% of Global Fortune 100 companies) that have tech spend well over $100 million, there are other vendors (e.g., MagicOrange and Nicus) that have more cost-effective solutions targeting mid-sized enterprises. IBM is now addressing this customer segment with a streamlined IBM Apptio Essentials suite, announced in June 2025, which offers the fundamental building blocks of an ITFM practice that can be implemented quickly and more cost-effectively. Based on GlobalData’s ICT Client Prospector database, in the US alone there are over 5,000 businesses with total spend exceeding $25 million, which expands the addressable market for IBM.

For service providers, TBM is also a powerful way to engage more deeply with enterprises and deliver a solution that drives tangible business outcomes. Personas interested in TBM include CIOs, CFOs, and CTOs. While TBM tools and dashboards are readily available, service providers can play a role in managing stakeholders and designing the processes. Through working with multiple enterprise customers, service providers are also building experience and best practices that help deliver value faster and avoid potential pitfalls. Service providers such as Deloitte and Wipro already offer TBM to enterprise customers. Others should also consider working with TBM vendors to develop a similar practice.

Mistral AI’s Independence from US Companies Lends it a Competitive Edge

B. Valle

Summary Bullets:

  • Mistral AI’s valuation went up to EUR11.7 billion after a funding round of EUR1.7 billion spearheaded by Netherlands-based ASML.

  • The French company has the edge in open source and is well positioned to capitalize on the sovereign AI trend sweeping Europe right now.

Semiconductor equipment manufacturer ASML and Mistral AI announced a partnership to explore the use of AI models across ASML’s product portfolio to enhance its holistic lithography systems. In addition, ASML was the lead investor in the AI startup’s latest funding round and now holds an 11% share in Mistral AI on a fully diluted basis.

The deal holds massive symbolic weight in the era of sovereign AI and trade barriers. Although not big in the great scheme of things, especially compared with the eye-watering sums usually exchanged in the bubbly AI world, it brings together Europe’s AI superstar Mistral with the world’s only manufacturer of EUV lithography machines for AI accelerators. ASML may not be a well-known name outside the industry, but the company is a key player in global technology. Although not an acquisition, the deal is reminiscent of the many alliances between AI accelerator companies and AI software companies, as Nvidia and AMD continue to buy startups such as Silo AI and others. Moreover, Mistral, which has never been short of US funding through VC activity, has received a financial boost at the right time, when US bidders were rumored to be circling like sharks. Even Microsoft was said to be considering buying the company at some point. For GlobalData’s take on this, please see Three is a Crowd: Microsoft Strikes Sweetheart Deal with Mistral while OpenAI Trains GPT-5. ASML now becomes its main shareholder, helping keep the threat of US ownership at bay at a critical time and reinforcing one of Mistral AI’s unique selling points: its “sovereign AI” credentials, built on remaining independent from US companies.

From a technological perspective, Mistral AI has also developed a unique modus operandi, leveraging open-source models and targeting only enterprise customers, setting it apart from US competitors. Last June, it launched its first reasoning model, Magistral, focused on domain-specific multilingual reasoning, code, and maths. Using open source from the outset has helped it build a large developer ecosystem, long before DeepSeek’s disruption in the landscape drove competitors such as OpenAI to adopt open-source alternatives.

The company’s use of innovative mixture of experts (MoE) architectures and other optimizations means that its models are efficient in terms of computational resources while maintaining high performance, a key competitive differentiator. This means its systems achieve high performance per compute cost, making them more cost effective. Techniques such as sparse MoE allow scaling capacity without proportional increases in resource usage.
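
A minimal sketch of the sparse MoE idea (generic, not Mistral AI’s actual architecture; the layer sizes and top-k value are arbitrary) shows why compute scales with the number of experts activated per token rather than the total number of experts:

    import numpy as np

    rng = np.random.default_rng(0)
    d_model, n_experts, top_k = 64, 8, 2  # arbitrary illustrative sizes

    # One feed-forward "expert" per index; only the top-k gated experts run per token.
    experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
    gate_w = rng.standard_normal((d_model, n_experts)) * 0.02

    def moe_layer(token: np.ndarray) -> np.ndarray:
        logits = token @ gate_w                     # router score for each expert
        chosen = np.argsort(logits)[-top_k:]        # indices of the top-k experts
        weights = np.exp(logits[chosen]) / np.exp(logits[chosen]).sum()  # renormalized gates
        # Only top_k of the n_experts matrices are used, so FLOPs scale with top_k, not n_experts.
        return sum(w * (token @ experts[i]) for w, i in zip(weights, chosen))

    out = moe_layer(rng.standard_normal(d_model))
    print(out.shape)  # (64,) -- same output shape as a dense layer, at a fraction of the expert compute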

In February 2024, Mistral AI launched Le Chat, a multilingual conversational assistant, positioning itself against OpenAI’s ChatGPT and Google Cloud’s Gemini but with more robust privacy credentials. The company has intensified efforts to expand its business platform and tools around Le Chat, recently releasing free enterprise features such as advanced memory capabilities and capacity, and extensive third-party integrations at no cost to users. The latter includes a connectors list, built on MCP, supporting platforms such as Databricks, Snowflake, GitHub, Atlassian, and Stripe, among many others. This move will help Mistral AI penetrate the enterprise market by democratizing access to advanced features, and signals an ambitious strategy to achieve market dominance through integrated suites, not just applications.

Of course, the challenges are plentiful: Mistral AI’s scale is far behind its US counterparts, and estimates of LLM usage seem to indicate that it is not yet nibbling market share away from them. It has a mammoth task ahead. But this deal can carve a path for European ambitions in AI, and for the protection of European assets in an increasingly polarized world divided across geopolitical lines. Some of the largest European tech companies, including SAP and Capgemini, have tight links to Mistral AI. They could make bids in the future to expand their ecosystems with acquisitions of European AI labs, which have so often fallen into US hands. For ASML, which has so many Asian customers and whose revenues are going through a rough patch, the geopolitical turmoil of late has not been good news: this partnership brings a much-needed push in the realm of software, a key competitive enabler. After the US launched America’s AI Action Plan last July to strengthen US leadership in AI by removing red tape and regulation, the stakes are undoubtedly higher than ever.

The EU is a Trailblazer, and the AI Act Proves It

B. Valle

Summary Bullets:

• On August 2, 2025, the second stage of the EU AI Act came into force, including obligations for general purpose models.

• The AI Act’s first set of obligations took effect in February 2025; the legislation follows a staggered approach, with the last wave expected on August 2, 2027.

August 2025 has been marked by the enforcement of a new set of rules as part of the AI Act, the world’s first comprehensive AI legislation, which is being implemented in gradual stages. Like GDPR was for data privacy in the 2010s, the AI Act will be the global blueprint for governance of the transformative technology of AI, for decades to come. Recent news of the latest case of legal action, this time against OpenAI, by the parents of 16-year-old Adam Raine, who ended his life after months of intensive use of ChatGPT, has thrown into stark relief the potential for harm and the need to regulate the technology.

The AI Act follows a risk management approach; it aims to regulate transparency and accountability for AI systems and their developers. Although it was enacted into law in 2024, the first wave of enforcement proper was implemented last February (please see GlobalData’s take on The AI Act: landmark regulation comes into force) covering “unacceptable risk,” including AI systems considered a clear threat to societal safety. The second wave, implemented this month, covers general purpose AI (GPAI) models and arguably is the most important one, at least in terms of scope. The next steps are expected to follow in August 2026 (“high-risk systems”) and August 2027 (final steps of implementation).

From August 2, 2025, GPAI providers must comply with transparency and copyright obligations when placing their models on the EU market. This applies not only to EU-based companies but to any organization with operations in the EU. GPAI models already on the market before August 2, 2025 must be brought into compliance by August 2, 2027. For the purposes of the law, GPAI models include those trained with over 10^23 floating point operations (FLOP) and capable of generating language (whether text or audio), text-to-image, or text-to-video.
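To give a rough sense of where that compute threshold sits, the sketch below applies the common heuristic that dense transformer training costs roughly 6 x parameters x training tokens in FLOP (an approximation from the scaling-law literature, not part of the regulation; the model sizes are hypothetical). For comparison, the Act’s separate presumption of systemic risk kicks in at a much higher bar of 10^25 FLOP of cumulative training compute.

```python
# Back-of-the-envelope check against the AI Act's 10^23 FLOP threshold for GPAI
# models, using the ~6 * parameters * training-tokens heuristic for dense
# transformer training compute (a rule of thumb, not part of the Act).
GPAI_THRESHOLD_FLOP = 1e23

def training_flop(params: float, tokens: float) -> float:
    """Rough training compute estimate for a dense transformer."""
    return 6.0 * params * tokens

candidates = {
    # (parameters, training tokens) -- hypothetical model sizes for illustration
    "1B model, 300B tokens": (1e9, 3e11),
    "7B model, 2T tokens": (7e9, 2e12),
    "70B model, 10T tokens": (7e10, 1e13),
}

for name, (params, tokens) in candidates.items():
    flop = training_flop(params, tokens)
    verdict = "above" if flop > GPAI_THRESHOLD_FLOP else "below"
    print(f"{name}: ~{flop:.1e} FLOP -> {verdict} the GPAI threshold")
```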

Providers of GPAI systems must keep technical documentation about the model, including a sufficiently detailed summary of its training corpus. In addition, they must implement a policy to comply with EU copyright law. Within the group of GPAI models there is a special tier considered to pose “systemic risk”: very advanced models that only a small handful of providers develop. Firms within this tier face additional obligations, for instance notifying the European Commission when developing a model deemed to pose systemic risk and taking steps to ensure the model’s safety and security. The classification of which models pose systemic risk can change over time as the technology evolves. There are exceptions: AI used for national security, military, and defense purposes is exempted in the act. Some open-source systems are also outside the reach of the legislation, as are AI models developed using publicly available code.

The European Commission has published a template to help providers summarize the data used to train their models, alongside the GPAI Code of Practice, developed by independent experts as a voluntary tool for AI providers to demonstrate compliance with the AI Act. Signatories include Amazon, Anthropic, Cohere, Google, IBM, Microsoft, Mistral AI, OpenAI, and ServiceNow, but there are some glaring absences, including Meta (at the time of writing). The code covers transparency and copyright rules that apply to all GPAI models, with additional safety and security rules for the systemic-risk tier.

The AI Act has drawn criticism for its disproportionate impact on startups and SMBs, with some experts arguing that it should include exceptions for technologies that have yet to gain a hold on the general public and do not have a wide impact or potential for harm. Others say it could slow down progress among European organizations in the process of training their AI models, and that the rules are confusing. Last July, several tech lobbies, including CCIA Europe, urged the EU to pause implementation of the act, arguing that the roll-out had been too rushed, without weighing the potential consequences… Sound familiar?

However, the act has been developed with the collaboration of thousands of stakeholders in the private sector, at a time when businesses are craving regulatory guidance. It is also introducing standard security practices across the EU during a critical period of adoption, and it sets a global benchmark for others to follow in a time of great upheaval. After the AI Act, the US and other countries will find it increasingly hard to keep ignoring the calls for more responsible AI, a commendable effort that will make history.

IBM Think on Tour Singapore 2025: An Agentic Enterprise Comes Down to Tech, Infrastructure, Orchestration, and Optionality

D. Kehoe

Summary Bullets:

• Cloud will have a role in the AI journey, but it is no longer the destination. The world will be hybrid and multi-vendor.

• Agentic AI emerges from this new platform but will be a double-edged sword. Autonomy is proportionate to risk; any solution that goes to production needs governance.

The AI triathlon is underway. A year ago, the race was about the size of the GenAI large language model (LLM). Today, it is about the number of AI agents connecting to internal systems to automate workflows, and it is moving toward the overall level of preparedness for the agentic enterprise. The latter is about giving AI agents much higher levels of autonomy to set their own goals, self-learn, and make decisions, and possibly to manage agents from other vendors, in ways that directly affect customers (e.g., approving home loans, dispute resolution, etc.). This, in turn, influences NPS, C-SAT, customer advocacy, compliance, and countless other metrics. It also raises many legitimate legal, ethical, and regulatory concerns.

Blending Tech with Flexible Architectures

While AI in many of its current forms is nascent, getting things right often starts with placing the right bets. The IBM vision, as articulated, aligns tightly with the trends on the ground: broadly, automation, AI, hybrid and multi-cloud environments, and data. Not every customer will follow the same flight path, but multiple options are key in the era of disaggregation.

In February 2025, IBM acquired HashiCorp, a company that foresaw public cloud and on-premises integration challenges early on and invested ahead of the curve in developer tools, automation, and infrastructure as code. Contextualized to today’s language models, enterprises will continue to have different needs. While public cloud will likely be the ideal environment for model training, inferencing or fine-tuning may be better suited to the edge. Hybrid is the way, and automation is the glue. GlobalData’s CXO research shows that AI is accelerating edge infrastructure, not cloud, and considerations such as performance, security, compliance, and cost are causing the pendulum to swing back.

Watsonx Orchestrate

The acquisition of Red Hat six years ago helped embed the ‘open source’ approach into IBM’s DNA, which is even more relevant for AI now. Openness also translates to middleware, and one of the standouts of the event was the ‘headless architecture’ approach with Watsonx. Decoupling the frontend UI/UX from the backend databases and business logic shifts the focus away from the number of agents and toward how well autonomous tasks and actions are synchronized in a multi-vendor environment. Traditional vendors have a rich history of integration challenges, so an open platform that works across many of the established application environments and other frameworks is the most viable option. In this context, IBM shared examples ranging from a global SaaS provider using Watsonx to support its own global orchestration roll-out, to direct selling to MNCs with large install bases of competing solutions, to partners bringing their own (BYO) agents. IBM likely wants to be seen as having the most open platform, rather than the best technology in a tightly coupled stack.
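As a rough illustration of what a headless, vendor-agnostic orchestration layer looks like, the Python sketch below defines a neutral agent contract that both in-house and bring-your-own agents implement, with a backend orchestrator that any frontend can call. All class, skill, and vendor names here are hypothetical and do not reflect Watsonx Orchestrate’s actual API.

```python
# Hypothetical sketch of the "headless" pattern: agent execution sits behind a
# vendor-neutral interface, so any frontend (web, chat, voice) and any vendor's
# agent can plug in without coupling UI to backend business logic.
from abc import ABC, abstractmethod

class Agent(ABC):
    """Vendor-neutral contract every agent adapter must satisfy."""
    @abstractmethod
    def run(self, task: str) -> str: ...

class InHouseHRAgent(Agent):
    def run(self, task: str) -> str:
        return f"[in-house HR agent] handled: {task}"

class ThirdPartyCRMAgent(Agent):
    """Adapter wrapping a bring-your-own agent from another vendor."""
    def run(self, task: str) -> str:
        return f"[third-party CRM agent] handled: {task}"

class Orchestrator:
    """Headless backend: routes tasks to registered agents; any UI calls submit()."""
    def __init__(self) -> None:
        self._registry: dict[str, Agent] = {}

    def register(self, skill: str, agent: Agent) -> None:
        self._registry[skill] = agent

    def submit(self, skill: str, task: str) -> str:
        agent = self._registry.get(skill)
        return agent.run(task) if agent else f"No agent registered for '{skill}'."

orchestrator = Orchestrator()
orchestrator.register("hr.leave_request", InHouseHRAgent())
orchestrator.register("crm.update_contact", ThirdPartyCRMAgent())
print(orchestrator.submit("hr.leave_request", "Book annual leave for next week"))
```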

The Opportunity

Agentic AI’s great potential is a double-edged sword. Autonomy is proportionate to risk, and risk can only be managed with governance. This can include guardrails (e.g., ethics) and process controls (e.g., explainability, monitoring, and observability). Employees will need varying levels of accountability and oversight, too. While IBM is a technology company with its own products and infrastructure, it also has its own consulting arm with 160,000 global staff. Most competitors will lean towards a partner-led approach; whichever path is taken, both options are on the table for IBM. This is important for balancing risk with technology evolution. Still, very few AI proofs of concept ever make it to production, and great concepts will require the extra consulting muscle, especially through multi-disciplinary teams, to show business value. Claims of internal capability need to walk a tightrope with vendor agnosticism to keep both camps motivated and the markets confident.

GPT-5 Has Had a Rocky Start but Remains an Extraordinary Achievement

B. Valle

Summary Bullets:

  • OpenAI released GPT-5 on August 7, 2025, a multimodal large language model (LLM) with agentic capabilities.
  • This is the latest iteration of the famous chatbot, and the most important upgrade since the release of the previous generation, GPT-4, in 2023.

As sometimes happens when a product is thrust with such force into the realm of popular culture, the release of GPT-5 sparked a veritable PR crisis, leading CEO Sam Altman to make a public apology and backtrack on the decision to remove access to all previous AI models in ChatGPT. Unlike enterprise customers, which received advance warning of such moves, consumer ChatGPT users did not know their preferred models would disappear so suddenly. The ensuing kerfuffle highlighted the strange co-dependent relationship that some people have developed with the technology, creating no end of background noise around this momentous release.

In truth, OpenAI handled this launch rather clumsily. But GPT-5 remains an extraordinary achievement, in terms of writing, research, analysis, coding, and problem-solving capabilities. The bête noire of generative AI (GenAI), hallucination, has been addressed (to a limited degree, of course), and GPT-5 is significantly less likely to hallucinate than previous generations, according to OpenAI. With web search enabled on anonymized prompts representative of ChatGPT production traffic, GPT-5’s responses are around 45% less likely to contain a factual error than GPT-4o. The startup claims that across several benchmarks, GPT-5 shows a sharp drop in hallucinations, about six times fewer than o3.

However, safety remains a concern. OpenAI has a patchy record in this area: Altman famously lobbied against California Senate Bill 1047 (SB 1047), which aimed to hold AI developers liable for catastrophic harm caused by their models if appropriate safety measures weren’t taken. In 2024, members of OpenAI’s safety team quit after voicing concerns about the company’s record in this area.

Meanwhile, there has been talk in industry circles and trade media outlets of artificial general intelligence (AGI) and GPT-5’s position in this regard. However, the AI landscape remains so dynamic that this is missing the point. Google’s announcement on August 5, 2025 (in limited research preview) of Google DeepMind’s Genie 3 frontier world models, which help users train AI agents in simulation environments, positions the company against AI behemoth Nvidia in the realm of world AI. World AI in this context means technologies that integrate so-called “world models,” i.e., simulations of how the world works from a physics, causality, or behavior perspective. It could be argued that this is where true AGI resides: in real-world representations and in the trenches of the simulation realm.

On the other hand, Google’s latest salvo in the enterprise space has involved a fierce onslaught of partnerships, with several deals announced in the last 48 hours. Oracle will sell Google Gemini models via Oracle’s cloud computing services and business applications through Google’s developer platform Vertex AI, an important step in boosting Google’s reach in corporate accounts. With Wipro, Google Cloud is set to launch 200 production-ready agentic AI solutions across different verticals, accessible via Google Cloud Marketplace. And with NTT Data, Google is launching industry-specific cloud and AI solutions, with joint go-to-market investments to support the launch.

The AI market is advancing at rapid speed, including applications of agentic AI in enterprise environments. This includes a variety of AI-driven applications and platforms that are transforming business processes and interactions. The release of GPT-5 is simply another tool in this direction.

The Season of Agentic AI Brings Bold Promises

C. Dunlap, Research Director

Summary Bullets:

  • Spring/summer platform conferences led with AI agent news and strategies.
  • AI agents represent the leading innovation in app modernization, but DevOps teams should be wary of vendor over-promises.

During this season of cloud platform conferences, rivals are vying to own the headlines and do battle in the cloud wars through their latest campaigns and strategies involving AI agents.

2024’s spring/summer conferences led with GenAI innovations; 2025’s led with agentic AI. AI assistants and copilots have evolved into tools for creating customized agents, unleashing claims of new capabilities for streamlining integrations with workflows, speeding the application development lifecycle, and supporting multi-agent orchestration and management. Vendors are making bold promises about agentic AI’s ability to eliminate a multitude of tasks currently handled by humans and to take workflow automation to new heights.

AI agents, which can autonomously complete tasks on behalf of users by leveraging data from sources external to the AI model, are accelerating the transition towards a more disruptive phase of GenAI. Enhanced memory capabilities give AI agents a greater sense of context, including a capacity for “planning.” Agents can connect to other systems through APIs, taking actions rather than just returning information or generating content.
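A minimal sketch of that loop is shown below, assuming a deliberately simplified stub in place of the LLM and two hypothetical tools (get_order_status and refund_order). A real agent would pass tool schemas to a model API, but the plan-act-observe cycle, the accumulated memory, and the step limit are the essential pattern.

```python
# Minimal sketch of an agentic loop: the model decides on a tool call, the agent
# executes it against an external system, observes the result, and repeats until
# it can finish. The "model" here is a stub standing in for a real LLM.
from typing import Callable

def get_order_status(order_id: str) -> str:           # hypothetical external system
    return f"Order {order_id} shipped on 2025-08-01."

def refund_order(order_id: str) -> str:                # hypothetical action-taking tool
    return f"Refund issued for order {order_id}."

TOOLS: dict[str, Callable[[str], str]] = {
    "get_order_status": get_order_status,
    "refund_order": refund_order,
}

def stub_model(goal: str, memory: list[str]) -> tuple[str, str]:
    """Stand-in for the LLM: pick the next tool call (or finish)."""
    if not memory:
        return "get_order_status", "A-123"             # first, gather context
    if "shipped" in memory[-1] and "refund" in goal.lower():
        return "refund_order", "A-123"                 # then take the action
    return "finish", memory[-1]

def run_agent(goal: str, max_steps: int = 5) -> str:
    memory: list[str] = []                             # persistent context ("planning")
    for _ in range(max_steps):
        tool, arg = stub_model(goal, memory)
        if tool == "finish":
            return arg
        observation = TOOLS[tool](arg)                 # act via an external API
        memory.append(observation)
    return "Stopped: step limit reached."

print(run_agent("Refund order A-123 if it already shipped"))
```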

Recap of the latest AI agent events:

  • Amazon announced Bedrock AgentCore, a set of DevOps tools and services to help developers design custom applications while easing the deployment and operation of enterprise-grade AI agents. The tools are complemented with new observability features found in AWS CloudWatch.
  • Joining the Google Gemini family of products, which includes Gemini 2.5 and Pro, Vertex AI Agent, ADK, and Agentspace, is Google Veo 3, a GenAI model that makes high-quality video production more accessible.
  • OpenAI released ChatGPT agent, an AI system infused with agentic capabilities that can operate a computer, browse the web, write code, use a terminal, write reports, create images, edit spreadsheets, and create slides for users.
  • Anthropic released Claude Code, which uses agentic search to understand an entire codebase without manual context selection and is optimized for code understanding and generation with Claude Opus 4.
  • IBM announced watsonx Orchestrate AI Agent, a suite of agent capabilities that include development tools to build agents on any framework, pre-built agents, and integration with platform partners including Oracle, AWS, Microsoft, and Salesforce.

Cloud platform providers are strategically highlighting their most salient strengths, ranging from the breadth of their cloud stacks to mature serverless computing solutions to access to massive developer communities via popular Copilot tools and marketplaces. Yet all are focused on gaining mind share amid heated campaigns from not only traditional platform rivals but also an increasingly crowded ecosystem of new platform and digital services providers (in the form of infrastructure providers) vying to catch the enterprise developer’s attention.

Recent vendor announcements aim to strike a chord with over-taxed enterprise IT operations teams, with claims of easing the operational and provisioning complexities involved in moving modern apps into production. Use cases supporting these claims remain scarce, and details to substantiate the new streamlined, low-code methods, particularly around AI agent orchestration, are still vague in some cases. Enterprises should remain vigilant in seeking out technology partners with a deep understanding of an evolving technology that comes with a lot of promises.
