
AI can improve federal service delivery, citizen survey says

23 January 2026 at 17:50

Federal employees received high marks for their work. At the same time, the public wants more from them, and from federal agencies more broadly, especially when it comes to technology.

These are among the top findings of a survey of a thousand likely voters conducted last August by the Center for Accountability, Modernization and Innovation (CAMI).

Stan Soloway, the chairman of the board for CAMI, said the findings demonstrate at least two significant issues for federal executives to consider.


“It was very clear to us from the survey that the public actually has faith, to a certain extent, in public employees. The public also fully recognizes that the system itself is not serving them well,” Soloway said on Ask the CIO. “We found well over half of the folks that were surveyed said that they didn’t believe that government services are efficient. We found just under half of respondents had a favorable impression of government workers. And I think this is very much a case of ‘I respect my local civil servant because I know what they do, but I have a lot of skepticism about government writ large.’”

CAMI, a non-partisan think tank, found that when it comes to government workers:

  • 47% favorable vs 38% unfavorable toward government workers (+9% net)
  • Self-identified very conservative voters showed strong support (+30% net)
  • African Americans showed the highest favorability (+31% net)
  • Self-identified independents are the exception, showing negative views (-14% net)

At the same time, when it comes to government services, CAMI found 54% of the respondents believe agencies aren’t as efficient or as timely as they should be.

John Faso, a former Republican congressman from New York and a senior advisor for CAMI, said the call from citizens for greater efficiency and timeliness echoes a long-time goal of bringing federal agencies closer to the private sector.

“People, and we see this in the survey, look at what government provides and how they provide it, and then compare it to what they’re accustomed to in the private sector economy,” Faso said. “Amazon is a prime example. You can sit home and order something, a food product, an item of clothing or something else you want for your house or your family, and oftentimes it’s there within a day or two. People are accustomed to getting that kind of service. People have an expectation that the government can do that. I think government is lagging, obviously, but it’s catching up, and it needs to catch up fast.”

Faso said it’s clear that a solid share of the blame for government inefficiency comes back to Congress. But at the same time, the CAMI survey demonstrated that there are things federal executives could do to address many of these long-standing challenges.

CAMI says respondents supported several changes to improve timely and efficient delivery of benefits:

  • 40% preferred hiring more government workers
  • 34% preferred partnering with outside organizations
  • Self-identified very liberal voters strongly favored hiring more workers (+32% net)
  • Self-identified somewhat conservative voters preferred outside partnerships (-20% net)
  • Older voters (55+) preferred outside partnerships

“Whether it’s the Supplemental Nutrition Assistance Program (SNAP) or Medicaid and Medicare, the feds set all the rules for the administration and governance of the programs. So the first question you have to ask is, what is the federal role?” Soloway said. “We have now shifted administrative responsibility for many programs to the states and, in some cases, the counties, and reduced by 50% the financial support for administration of these programs. The states have a lot to figure out and are somewhat panicked about it, because it’s a huge lift. But the feds can’t just walk away. This is where we have policy changes that are needed at the federal level, and we can talk about some of the ones that are desperately needed to give the states the flexibility to innovate.”

Soloway added this also means agencies have to break down long-established silos around both data and processes.

The Trump administration, for example, has prioritized data sharing across the government, especially to combat concerns around fraud. The Office of Management and Budget said in July it was supercharging the Do Not Pay list by removing the barriers to governmentwide data sharing.

Soloway said this is a prime example of where the private sector has figured out how to get different parts of their organization to talk to each other and where the government is lagging.

“What is the federal role in helping to break down the silos and integrate applications, and to a certain extent help with the administration of programs with like beneficiaries? The data is pretty clear that there’s a lot of commonality across multiple programs, and when you think about the number of different departments and the bureaucracy that actually control those programs, there’s got to be leadership at the federal level, both on technology and on process transformation, otherwise you’re not going to solve the problem,” he said. “The second thing is when we talk about issues like program integrity, there are ways you can combat fraud and also protect the beneficiaries. But too often, the conversations are either/or: any effort to combat fraud is seen as an effort to take eligible people off the rolls, and every effort to protect eligible people on the rolls is seen as just feeding into that. So that’s where the federal leadership comes in. Some of that is in technology, some of it’s in policy and some of it’s going to be in resources, because it requires investments in technology across the board, state and federal.”

Respondents say technology can play a bigger role in improving the delivery of federal services.

CAMI says respondents offered strong support for using AI to improve government service delivery:

  • 48% support vs 29% oppose using AI tools (net +19%)
  • Self-identified Republicans show stronger support than Democrats (+36% vs +7% net)
  • Men are significantly more supportive than women (+35% vs +3% net)
  • Support is strongest among middle-aged voters (30-44: +40% net)

Soloway said CAMI is sharing its survey findings with both Congress and the executive branch.

“We’re trying to get the conversations going and get the information to the right people. When we do that, we find, by and large, on both sides, there’s a lot of support to do stuff. The question is going to really be, where’s the leadership going to come from that will have enough credibility on both sides to push this ball forward?” Soloway said.

Faso added state governments also must play a big role in improving program delivery.

“You have cost sharing between the federal and state governments, and you have cost sharing in terms of the administrative burden to implement these programs. I think a lot of governors, frankly, are now really looking at themselves and saying, ‘How am I going to implement this?’” he said. “How do I collaborate with the federal government to make sure that we’re all rowing in the same direction in terms of implementing these requirements?”


FedRAMP is getting faster, new automation and pilots promise approvals in months, not years

23 January 2026 at 15:34

Interview transcript

Terry Gerton We’re going to talk about one of everybody’s favorite topics, FedRAMP. It’s been around for years, but agencies are still struggling to get modern tools. So from your perspective, why is the process so hard for software and service companies to get through?

Irina Denisenko  It’s a great question. Why is it so hard to get through FedRAMP? It is so hard to get through FedRAMP because, at the end of the day, what is FedRAMP really here to do? It’s here to secure cloud software, to secure government data sitting in cloud software. You have to remember this all came together almost 15 years ago, which, if you remember 15 or 20 years ago, was kind of the early days of all of us interacting with the internet. We were still, in some cases, scared to enter our credit card details onto an online website. Fast forward to today, we pay with our face when we get on our phone. We’ve come a long way. But the reality is cloud security hasn’t always been a matter of “of course it’s secure.” In fact, it has been the opposite: of course it’s insecure, it’s the internet, and that’s where you go to lose all your data and all your information. So, long story short, you have to understand that’s where the government is coming from. We need to lock everything down in order to make sure that whether it’s VA patient data, IRS data on our taxpayers, or anything in the DoW, all of that stays secure. And so that’s why there are hundreds of controls that are applied to cloud environments in order to make sure and double sure and triple sure that that data is secure.

Terry Gerton You lived the challenge first-hand with your own company. What most surprised you about the certification process when you tackled it yourself?

Irina Denisenko  What most surprised me when we tackled FedRAMP ourselves for the first time was that even if you have the resources, and specifically $3 million to spend (you know, $3 million burning a hole in your pocket doesn’t happen often), and you have staff on U.S. soil, and you have the willingness to invest all of that in a three-year process to get certified, that is still not enough. What you need on top of that is an agency to say yes to sponsoring you. And when they say yes to sponsoring you, what they are saying yes to is taking on your cyber risk. Specifically, what they’re saying yes to is spending half a million dollars of taxpayer money, of agency budget, typically using contractors, to do an initial security review of your application. And then to basically get married to you and do something called continuous monitoring, which is a monthly meeting that they’re going to have with you forever. That agency is going to be your accountability partner and ultimately the risk bearer of you, the software provider, to make sure you are burning down all of the vulnerabilities, all of these CVEs, every finding in your cloud environment, on the timeline that you’re supposed to do that. And that ends up costing an agency about $250,000 a year, again, in the form of contractors, tooling, etc. That was the most surprising to me: even as a cloud service provider who’s already doing business with JPMorgan Chase, you know, healthcare systems, you name it, even that’s not enough. You need an agency sponsor, because at the end of the day, it’s the agency’s data and they have to protect it. And so they have to do that triple assurance of, yes, you said you’re doing the security stuff, but let us confirm that you’re doing the security stuff. That was the most surprising to me. And that is why, really, ultimately, we started Knox Systems, because what we do at Knox is we enable the inheritance model. So we are doing all of that with our sponsoring agencies, of which we have 15. Knox runs the largest FedRAMP managed cloud. What that means is we host the production environment of our customers inside of our FedRAMP environment across AWS, Azure and GCP. And our customers inherit our sponsors. So they inherit the authorization from the Treasury, from the VA, from the Marines, etc., which means that the Marines, the Treasury and the VA didn’t have to spend an extra half a million upfront and $250k ongoing with every new application that was authorized. They are able to get huge bang for their buck by just investing that authorization, that sponsorship, into the Knox boundary. And then Knox does the hard work to ensure the security and ongoing authorization and compliance of all of the applications that we bring into our environment.

Terry Gerton I’m speaking with Irina Denisenko. She’s the CEO of Knox Systems. So it sounds like you found a way through the maze that was shorter, simpler, less expensive. Is FedRAMP 20X helping to normalize that kind of approach? How do you see it playing out?

Irina Denisenko  Great question. FedRAMP 20X is a phenomenal initiative coming out of OMB and GSA. And really the crux of it is all about machine-readable and continuous authorization. Today, when I talked about continuous monitoring, that’s a monthly meeting that happens. And I kid you not, we, as a cloud service provider, again, we secure Adobe’s environment and many others, we come with a spreadsheet, an actual spreadsheet that has all of the vulnerabilities listed from all the scans we’ve done over the last month, and anything that is still open from prior months. We review that spreadsheet, that actual Excel document, with our agencies, and then, after that meeting, we upload that spreadsheet into a system called USDA on the FedCiv side, or eMASS on the DOW side, the DISA side. And then they, on their side, download that spreadsheet and they put it into other systems. And I mean, that’s the process. I think no one is confused, or no one would argue, that surely there’s a better way. And a better way would be a machine-readable way, whether that’s over an API, using a standard language like OSCAL. There’s lots of ways to standardize, but it doesn’t have to be basically the equivalent of a clipboard and a pencil. And that’s what FedRAMP 20X is doing. It’s automating that information flow so that not only is it bringing down the amount of human labor that needs to be done to do all this tracking, but more importantly, this is cloud security. Just because you’re secure one second doesn’t mean you’re secure five seconds from now, right? You need to be actively monitoring this, actively reporting this. And if it’s taking you 30 days to let an agency know that you have a critical vulnerability, that’s crazy. You’ve got to tell them, you know, five minutes after you find out, or, to put a responsible buffer in place that allows you to mitigate and remediate before you notify more parties, maybe it’s a four-day buffer, but it’s certainly not 30 days. That’s what FedRAMP 20X is doing. We’re super excited about it. We are very supportive of it and have been actively involved in phase I and all subsequent phases.
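As a rough illustration of the "machine-readable" continuous monitoring Denisenko describes, the minimal Python sketch below converts spreadsheet-style vulnerability rows into a JSON payload an agency system could ingest over an API. The field names and remediation windows are hypothetical; this is not the OSCAL or FedRAMP 20X schema.

```python
import json
from datetime import date, timedelta

# Hypothetical spreadsheet rows as they might appear in a monthly ConMon export.
# Field names are illustrative only; they do not follow any official schema.
findings = [
    {"id": "V-1001", "severity": "critical", "first_seen": "2026-01-20", "status": "open"},
    {"id": "V-0997", "severity": "moderate", "first_seen": "2025-12-03", "status": "remediated"},
]

# Assumed remediation deadlines by severity, in days from first detection.
SLA_DAYS = {"critical": 30, "high": 30, "moderate": 90, "low": 180}

def to_machine_readable(rows):
    """Turn spreadsheet-style rows into a structured payload with computed due dates."""
    payload = []
    for row in rows:
        first_seen = date.fromisoformat(row["first_seen"])
        due = first_seen + timedelta(days=SLA_DAYS[row["severity"]])
        payload.append({
            "finding_id": row["id"],
            "severity": row["severity"],
            "status": row["status"],
            "first_seen": row["first_seen"],
            "remediation_due": due.isoformat(),
            "overdue": row["status"] == "open" and date.today() > due,
        })
    return payload

# In a pipeline like the one FedRAMP 20X envisions, this JSON would be pushed to
# the agency continuously instead of being uploaded as an Excel attachment monthly.
print(json.dumps(to_machine_readable(findings), indent=2))
```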

Terry Gerton Right, so phase II is scheduled to start shortly in 2026. What are you expecting to see as a result?

Irina Denisenko  Well, phase I was all about FedRAMP low; phase II is all about FedRAMP moderate. And FedRAMP moderate is realistically where most cloud service offerings sit, FedRAMP moderate and high. So that’s really the one that FedRAMP needs to get right. What we expect and hope to see is agencies actually authorizing off of these new frameworks. The key is really going to be what shape FedRAMP 20X takes in terms of machine-readable reporting on the security posture of any cloud environment. And then, of course, the industry will standardize around that. So we’re excited to see what that looks like, and also how much AI GSA, OMB and ultimately FedRAMP leverage, because there is a tremendous amount of productivity, but also security, that AI can provide. It can also introduce a lot of risks. So we’re all collaborating with the agency, and we’re excited to see where they draw the bright red lines and where they embrace AI.

Terry Gerton So phase II is only going to incorporate 10 companies, right? So for the rest of the world that’s waiting on these results, what advice do you have in the meantime? How can companies prepare better, or how can companies who want to get FedRAMP certified now best proceed?

Irina Denisenko  I think at the end of the day, it’s the inheritance model that Knox provides. And, you know, we’re not the only ones; actually there are two key players: ourselves and Palantir. There’s a reason that large companies like Celonis, OutSystems, BigID and Armis, which was just bought by ServiceNow for almost $8 billion, choose Knox, and there’s a reason Anthropic chose Palantir and Grafana chose Palantir: because regardless, FedRAMP 20X, Rev 5, doesn’t matter, there is a massive, massive premium put on getting innovative technology into the hands of our government faster. We have a window right now with the current administration prioritizing innovative technology and commercial off-the-shelf. You know, take the best out of Silicon Valley and use it in the government, or out of Europe, out of Israel, you name it, rather than build it yourself, customize it until you’re blue in the face and still get an inferior product. Just use the best of breed, right? But you need it to be secure. And we have this window as a country for the next few years to get these technologies in. It takes a while to adopt new technologies. It takes a while to do a quantum leap, but I’ll give you a perfect example. Celonis, since becoming FedRAMPed on August 19th with Knox — they had been trying to get FedRAMPed for five years — has implemented three agencies. And what do they do? They do process mining and intelligence. They’re an $800 million company that’s 20 years old that competes, by the way, head on with Palantir’s core products, Foundry and Gotham and so on. They’ve implemented three agencies already to drive efficiency, to drive visibility, to drive process mining, to drive intelligence, to drive AI-powered decision-making. And that was during the holidays, during a government shutdown; it’s speed that we’ve never seen before. If you want outcomes, you need to get these technologies into the hands of our agencies today. And so that’s why, you know, we’re such big proponents of this model, and also why our federal advisory board, which includes the DHS CISO, the DOW CIO and the VA CIO, is also supportive of this, because ultimately it’s about serving the mission and doing it now, rather than waiting for some time in the future.


Ahead of filing season, IRS scraps customer service metric it’s used for 20 years

21 January 2026 at 18:42

The IRS is abandoning a customer service metric it’s been using for the past 20 years and replacing it with a new measurement that will more accurately reflect the public’s interactions with the tax agency, according to agency leadership.

The IRS is pursuing these changes as part of a broader shakeup of its senior ranks, less than a week before the start of the tax filing season.

IRS Chief Executive Officer Frank Bisignano told employees in a memo obtained by Federal News Network that these changes will help the IRS achieve the “best filing season results in timeliness and accuracy.”

“At the heart of this vision is a digital-first taxpayer experience, complemented by a strong human touch wherever it is needed,” Bisignano wrote in the memo sent Tuesday.

In addition to overseeing day-to-day operations at the IRS, Bisignano also serves as the head of the Social Security Administration.

As part of these changes, Bisignano wrote that the IRS will replace its current measurement of customer service over the phone with “enterprise metrics that reflect new technologies and service channels.”

“These updates will allow us to more accurately capture how the IRS serves taxpayers today,” he wrote.

The IRS and the Treasury Department did not respond to requests for comment. Bisignano told the Washington Post that the new metrics will track the agency’s average speed to answer incoming calls, call abandonment rates and the amount of time taxpayers spend on the line with the agency.

He told the Post that the agency’s old phone metrics didn’t help the IRS with its mission of solving taxpayers’ problems — and that the agency is investing in technology to better service its customers.

“We’re constantly investing in technology. We constantly must reap the rewards of it,” Bisignano told the Post.

The IRS is specifically sunsetting its Customer Service Representative Level of Service metric. The agency has used this metric for more than 20 years.

But the National Taxpayer Advocate, an independent watchdog within the IRS, told Congress last year that this metric is “misleading” and “does not accurately reflect the experience of most taxpayers who call” the agency.

National Taxpayer Advocate Erin Collins wrote in last year’s mid-year report to Congress that this Level of Service (LOS) metric only reflects calls coming into IRS accounts management phone lines, which make up only about 25% of the agency’s total call volume.

Using the LOS metric, the IRS achieved an 88% level of phone service in fiscal 2024. But IRS employees actually answered less than a third of calls received during the 2024 filing season — both in terms of total calls, and calls to accounts management phone lines.

The agency calculates its LOS metric by dividing the number of phone calls answered by IRS employees by the number of calls routed to IRS staff.
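For illustration only, the sketch below shows how a high LOS figure can coexist with a much lower share of all incoming calls answered. The numbers are invented, not actual IRS call data.

```python
# Illustrative figures only; not actual IRS call data.
calls_received_total = 10_000_000      # all calls to the IRS
calls_routed_to_staff = 1_500_000      # calls routed to a live customer service representative
calls_answered_by_staff = 1_320_000    # calls a representative actually answered

# Level of Service as described above: answered calls over calls routed to staff.
los = calls_answered_by_staff / calls_routed_to_staff

# Share of all incoming calls that reached a human, which can look far lower.
answered_share_of_all = calls_answered_by_staff / calls_received_total

print(f"LOS: {los:.0%}")                                             # 88%
print(f"Answered share of all calls: {answered_share_of_all:.0%}")   # 13%
```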

The IRS relies on this metric, as well as historical data on call volumes, to set targets for how many calls it has the capacity to answer, and to set hiring and training goals in preparation for each tax filing season.

Collins wrote that the LOS metric has become a proxy for the level of customer service taxpayers can expect from the IRS. But she told lawmakers that using this metric to drive taxpayer service decisions “is akin to letting the tail wag the dog.”

“The LOS is a check-the-box measure that fails to gauge the taxpayer’s telephone experience accurately and fails even to attempt to gauge the taxpayer experience in other important areas,” Collins wrote. “Yet because the IRS has adopted it as its primary measure of taxpayer service, sacrifices are made in other areas to boost the LOS as much as possible.”

Besides overhauling IRS call metrics, Bisignano announced a new leadership team at the agency.

As reported by the Associated Press, Gary Shapley, a whistleblower who testified about investigations into Hunter Biden’s taxes and who served as IRS commissioner for just two days last year, has been named deputy chief of the agency’s criminal investigation division.

According to Bisignano’s memo, Guy Ficco, the chief of the agency’s criminal investigation division, is retiring and will be replaced by Jarod Koopman, who will also continue to serve as the agency’s chief tax compliance officer.


Federal CIOs want AI-improved CX; customers want assured security

20 January 2026 at 15:50

 

Interview transcript:

Terry Gerton Gartner’s just done a new survey that’s very interesting, around how citizens perceive sharing their data with the government. Give us a little bit of background on why you did the survey.

Mike Shevlin We’re always looking at, and talking to people about, doing some “voice of the customer” work, those kinds of things, as [government agencies] do development. This was an opportunity for us to get a fairly large-sample voice-of-the-customer response around some of the things we see driving digital services.

Terry Gerton There’s some pretty interesting data that comes out of this. It says 61% of citizens rank secure data handling as extremely important, but only 41% trust the government to protect their personal information. What’s driving that gap?

Mike Shevlin To some extent, we have to separate trust in government from the security pieces. You know, if we looked strictly at “do citizens expect us to secure their data?” that’s up in the 90% range. So we’re really looking at something a little bit different with this. We’re looking at, and I think one of the big points that came out of the survey is, citizens’ trust in how government is using their data. To think of this, you have to think about kind of the big data. So big data is all about taking a particular dataset and then enriching it with data from other datasets. And as a result, you can form some pretty interesting pictures about people. One of the things that jumps to mind for me, and again, more on the state and local level, is automated license plate readers. What can government learn about citizens through the use of automated license plate readers? Well, you know, it depends on how we use them, right? So if we’re using it and we’re keeping that data in perpetuity, we can probably get a pretty good track on where you are, where you’ve been, the places that you visit. But that’s something that citizens are, of course, concerned about their privacy on. So I think that the drop is not between, are you doing the right things to secure my data while you’re using it, but more about, okay, are you using it for the right purposes? How do I know that? How do you explain it to me?

Terry Gerton It seems to me like the average person probably trusts their search engine more than they trust the government to keep that kind of data separate and secure. But this is really important as the government tries to deliver easier front-facing interfaces for folks, especially consumers of human services programs like SNAP and homeless assistance and those kinds of things. So how important is transparency in this government use of data? And how can the government meet that expectation while still perhaps being able to enrich this data to make the consumer experience even easier?

Mike Shevlin When I come into a service, I want you to know who I am. I want to know that you’re providing me a particular service, that it’s customized. You know, you mentioned the search engine. Does Google or Amazon know you very well? Yeah, I’d say they probably know you better than the government knows you. So my expectation is partly driven out of my experience with the private sector. But at the same time, particularly since all the craze around generative AI, citizens are now much more aware of what else data can do, and as a result, they’re looking for much more control around their own privacy. If you look at, for example, Europe with the GDPR, they’ve got some semblance of control. I can opt out. I can have my data removed. The U.S. has an awful lot of privacy legislation, but nothing as overarching as that. We’ve got HIPAA. We’ve got protections around personally identifiable information. But we don’t have something as overarching as the GDPR. In Spain, if I deal with the government, I can say yes, I only want this one agency to use my data and I don’t want it going anywhere else. We don’t have that in the U.S. I think it’s something that is an opportunity for government digital services to begin to make some promises to citizens and then fulfill those promises or prove that they’re fulfilling those promises.

Terry Gerton I’m speaking with Mike Shevlin. He’s senior director analyst at Gartner Research. Well, Mike, you introduced AI to the conversation, so I’m going to grab that and privacy. How does AI complicate trust and what role does explainable AI play here, in terms of building citizen trust that their privacy will be protected?

Mike Shevlin I think AI complicates trust in part from generative AI and in part from our kind of mistrust in computers as a whole, as entities, as we start to see these things become more human-like. And that’s really, I think, the big thing that generative AI did to us — now we can talk to a computer and get a result. The importance of explainable AI comes from what we’ve seen: the answers from generative AI aren’t always right. But that’s not what it’s built for. It’s built to make something that sounds like a human. I think the explainable AI part is particularly important for government because I want to know as a citizen, if you’re using my data, if you’re then running it through an AI model and coming back with a result that affects my life, my liberty, my prosperity, how do I know that that was the right answer? And that’s where the explainable AI pieces really come into play. Generative AI is not going to do that, at least not right now; they’re working on it. But it’s not, because it builds its decision tree as it evaluates the question, unlike some of the more traditional AI models, the machine learning or graph AI, where those decision trees are pre-built. So it’s much easier to follow back through and say, this is why we got the answer we did. You can’t really do that right now with gen AI.
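To make that contrast concrete, here is a toy sketch (not Gartner's recommendation or any agency's actual model) of a pre-built decision rule whose reasoning can be replayed step by step, which is the kind of traceability Shevlin says generative models cannot yet provide. The eligibility thresholds and fields are invented for illustration.

```python
# A toy eligibility rule with a pre-built decision path, so every outcome
# can be explained by listing the exact checks that fired.
def decide_benefit(applicant: dict) -> tuple[str, list[str]]:
    trace = []
    if applicant["income"] > 30_000:
        trace.append("income > $30,000 -> not income-eligible")
        return "deny", trace
    trace.append("income <= $30,000 -> income-eligible")
    if applicant["household_size"] >= 3:
        trace.append("household size >= 3 -> higher benefit tier")
        return "approve_tier_2", trace
    trace.append("household size < 3 -> standard benefit tier")
    return "approve_tier_1", trace

decision, reasons = decide_benefit({"income": 24_000, "household_size": 4})
print(decision)         # approve_tier_2
for step in reasons:    # the audit trail a citizen or reviewer could be shown
    print(" -", step)
```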

Terry Gerton We’re talking to folks in federal agencies every day who are looking for ways to deploy AI, to streamline their backlogs, to integrate considerations, to flag applications where there may be actions that need to be taken, or pass through others that look like they’re clear. From the government’s perspective, how much of that needs to be explained or disclosed to citizens?

Mike Shevlin That’s one of the things I really like about the GDPR: It lays out some pretty simple rules around what’s the risk level associated with this. So for example, if the government is using AI to summarize a document, but then someone is reviewing that summary and making a decision on it, I have less concern than I have if that summary becomes the decision. So I think that’s the piece to really focus on as we look at this and some of the opportunities. Gartner recommends combining AI models, and this will become even more important as we move into the next era of agentic AI or AI agents, because now we’re really going to start having the machines do things for us. And I think that explainability becomes really appropriate.

Terry Gerton What does this mean for contractors who are building these digital services? How can they think about security certifications or transparency features as they’re putting these new tools together?

Mike Shevlin The transparency features are incumbent upon government to ask for. The security pieces, you know, we’ve got FedRAMP, we’ve got some of the other pieces. But if you look at the executive orders on AI, transparency and explainability are among the pillars in those executive orders. So, certainly, government entities should be asking for some of those things. I’m pulling from some law enforcement examples, because that’s usually my specific area of focus. But when I look at some of the Drone as a First Responder programs, I think it was San Francisco that just released their “here’s all the drone flights that we did, here’s why we did them,” so that people can understand: Hey, yeah, this is some AI that’s involved in this, this is some remote gathering, but here’s what we did and why. That kind of an audit into the system is huge for citizen confidence. I think those are the kinds of things that government should be thinking about and asking for in their solicitations. How do we prove to citizens that we’re really doing the right thing? How can we show them that if we say we’re going to delete this data after 30 days, we’re actually doing that?

Terry Gerton So Mike, what’s your big takeaway from the survey results that you would want to make sure that federal agencies keep in mind as they go into 2026 and they’re really moving forward in these customer-facing services?

Mike Shevlin So my big takeaway is absolutely around transparency. There’s a lot to be said for efficiency, there’s a lot to be said for personalization. But I think the biggest thing that came from this survey for me was, we all know security is important. We’ve known that for a long time. Several administrations have talked about it as a big factor. And we have policies and standards around that. But the transparency pieces, I think, we’re starting to get into that. We need to get into that a little faster. I think that’s probably one of the quickest wins for government if we can do that.


8 federal agency data trends for 2026

14 January 2026 at 14:54

If 2025 was the year federal agencies began experimenting with AI at scale, then 2026 will be the year they rethink their entire data foundations to support it. What’s coming next is not another incremental upgrade. Instead, it’s a shift toward connected intelligence, where data is governed, discoverable and ready for mission-driven AI from the start.

Federal leaders increasingly recognize that data is no longer just an IT asset. It is the operational backbone for everything from citizen services to national security. And the trends emerging now will define how agencies modernize, secure and activate that data through 2026 and beyond.

Trend 1: Governance moves from manual to machine-assisted

Agencies will accelerate the move toward AI-driven governance. Expect automated metadata generation, AI-powered lineage tracking, and policy enforcement that adjusts dynamically as data moves, changes and scales. Governance will finally become continuous, not episodic, allowing agencies to maintain compliance without slowing innovation.

Trend 2: Data collaboration platforms replace tool sprawl

2026 will mark a turning point as agencies consolidate scattered data tools into unified data collaboration platforms. These platforms integrate cataloging, observability and pipeline management into a single environment, reducing friction between data engineers, analysts and emerging AI teams. This consolidation will be essential for agencies implementing enterprise-wide AI strategies.

Trend 3: Federated architectures become the federal standard

Centralized data architectures will continue to give way to federated models that balance autonomy and interoperability across large agencies. A hybrid data fabric — one that links but doesn’t force consolidation — will become the dominant design pattern. Agencies with diverse missions and legacy environments will increasingly rely on this approach to scale AI responsibly.

Trend 4: Integration becomes AI-first

Application programming interfaces (APIs), semantic layers and data products will increasingly be designed for machine consumption, not just human analysis. Integration will be about preparing data for real-time analytics, large language models (LLMs) and mission systems, not just moving it from point A to point B.

Trend 5: Data storage goes AI-native

Traditional data lakes will evolve into AI-native environments that blend object storage with vector databases, enabling embedding search and retrieval-augmented generation. Federal agencies advancing their AI capabilities will turn to these storage architectures to support multimodal data and generative AI securely.
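As a rough sketch of the embedding-search pattern behind retrieval-augmented generation, the toy example below uses hand-made vectors in place of a real embedding model and vector database; the document names and scores are invented.

```python
import math

# Toy document embeddings; a real system would get these from an embedding model
# and store them in a vector database alongside the source documents.
DOCS = {
    "supply_policy.pdf": [0.9, 0.1, 0.0],
    "hr_handbook.docx":  [0.1, 0.8, 0.2],
    "incident_log.txt":  [0.2, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def retrieve(query_vec, k=2):
    """Return the k documents whose embeddings are closest to the query embedding."""
    ranked = sorted(DOCS.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

# The retrieved documents would then be passed to a language model as grounding context.
print(retrieve([0.85, 0.2, 0.05]))   # ['supply_policy.pdf', 'hr_handbook.docx']
```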

Trend 6: Real-time data quality becomes non-negotiable

Expect a major shift from reactive data cleansing to proactive, automated data quality monitoring. AI-based anomaly detection will become standard in data pipelines, ensuring the accuracy and reliability of data feeding AI systems and mission applications. The new rule: If it’s not high-quality in real time, it won’t support AI at scale.
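A minimal sketch of what in-pipeline quality monitoring can look like, using a simple statistical threshold as a stand-in for a production anomaly-detection model; the record counts are invented.

```python
from statistics import mean, stdev

def flag_anomalies(daily_record_counts, z_threshold=3.0):
    """Flag a load whose record volume deviates sharply from the recent baseline."""
    baseline = daily_record_counts[:-1]          # history
    mu, sigma = mean(baseline), stdev(baseline)
    latest = daily_record_counts[-1]
    z = (latest - mu) / sigma if sigma else 0.0
    return {"latest": latest, "z_score": round(z, 2), "anomalous": abs(z) > z_threshold}

# A check like this would run on every load and alert or halt the pipeline
# before suspect data reaches downstream AI models and mission applications.
history = [10_120, 9_980, 10_050, 10_210, 9_940, 10_080, 4_300]  # today's load dropped sharply
print(flag_anomalies(history))
```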

Trend 7: Zero trust expands into data access and auditing

As agencies mature their zero trust programs, 2026 will bring deeper automation in data permissions, access patterns and continuous auditing. Policy-as-code approaches will replace static permission models, ensuring data is both secure and available for AI-driven workloads.
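As a toy illustration of policy-as-code (the datasets, roles, purposes and sensitivity levels below are invented), an access rule expressed as data can be evaluated, versioned and audited like any other code artifact:

```python
# Policies expressed as data: they can be versioned, reviewed and audited like code.
POLICIES = [
    {"dataset": "beneficiary_claims", "role": "fraud_analyst", "purpose": "fraud_review",
     "max_sensitivity": "high"},
    {"dataset": "beneficiary_claims", "role": "data_scientist", "purpose": "model_training",
     "max_sensitivity": "moderate"},
]

SENSITIVITY_RANK = {"low": 0, "moderate": 1, "high": 2}

def is_allowed(request: dict) -> bool:
    """Evaluate a data-access request against the policy set; deny by default."""
    for p in POLICIES:
        if (p["dataset"] == request["dataset"]
                and p["role"] == request["role"]
                and p["purpose"] == request["purpose"]
                and SENSITIVITY_RANK[request["sensitivity"]] <= SENSITIVITY_RANK[p["max_sensitivity"]]):
            return True
    return False

# Every decision, and the policy that produced it, can be logged for continuous auditing.
print(is_allowed({"dataset": "beneficiary_claims", "role": "data_scientist",
                  "purpose": "model_training", "sensitivity": "high"}))   # False
```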

Trend 8: Workforce roles evolve toward human-AI collaboration

The rise of generative AI will reshape federal data roles. The most in-demand professionals won’t necessarily be deep coders. They will be connectors who understand prompt engineering, data ethics, semantic modeling and AI-optimized workflows. Agencies will need talent that can design systems where humans and machines jointly manage data assets.

The bottom line: 2026 is the year of AI-ready data

In the year ahead, the agencies that win will build data ecosystems designed for adaptability, interoperability and human–AI collaboration. The outdated mindset of “collect and store” will be replaced by “integrate and activate.”

For federal leaders, the mission imperative is clear: Make data trustworthy by default, usable by design, and ready for AI from the start. Agencies that embrace this shift will move faster, innovate safely, and deliver more resilient mission outcomes in 2026 and beyond.

Seth Eaton is vice president of technology & innovation at Amentum.


Pandemic watchdog builds AI fraud prevention ‘engine’ trained on millions of COVID program claims

13 January 2026 at 19:04

When Congress authorized over $5 trillion in pandemic-era relief programs, and directed agencies to prioritize speed above all else, fraudsters cashed in with bogus claims.

But data from these pandemic-era relief programs is now being used to train artificial intelligence-powered tools meant to detect fraud before payments go out.

The Pandemic Response Accountability Committee has developed an AI-enabled “fraud prevention engine,” trained on over 5 million applications for pandemic-era relief programs, that can review 20,000 applications for federal funds per second, and can flag anomalies in the data before payment.

The PRAC’s executive director, Ken Dieffenbach, told members of the House Oversight and Government Reform Committee on Tuesday that, had the fraud prevention engine been available at the onset of the pandemic, it would have flagged “at least tens of billions of dollars” in fraudulent claims.

Dieffenbach said that the PRAC’s data analytics capabilities can serve as an “early warning system” when organized, transnational criminals target federal benefits programs. He said the PRAC is working with agency inspectors general on ways to prevent fraud in programs funded by the One Big Beautiful Bill Act, as well as track fraudsters targeting multiple agencies.

“Fraudsters rarely target just one government program. They exploit vulnerabilities wherever they exist,” Dieffenbach said.

The PRAC’s analytics systems have recovered over $500 million in taxpayer funds. Created at the onset of the COVID-19 pandemic, the PRAC oversaw over $5 trillion in relief spending.  It was scheduled to disband last year, but the One Big Beautiful Bill Act reauthorized the PRAC through 2034.

Government Operations Subcommittee Chairman Pete Sessions (R-Texas) said the PRAC has developed data analytics capabilities that can comb through billions of records, and that these tools need a “permanent” home once the PRAC disbands.

“A permanent solution that maintains the analytic capacities and capabilities that have been built over the past six years is necessary and needed. Its database is billions of records deep, and it has begun to pay for itself,” Sessions said.

In one pandemic fraud case, the PRAC identified a scheme where 100 applicants filed 450 applications across 24 states, and obtained $2.6 million in pandemic loans. Dieffenbach said there are tens of thousands of cases like it.

“This is but one example where the proactive use of data and technology could have prevented or aided in the early detection of a scheme, mitigated the need for a resource-intensive investigation and prosecution, and helped ensure taxpayer dollars went to the intended recipients and not the fraudsters,” Dieffenbach said.
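As a simplified illustration of the kind of cross-filing check that could surface such a scheme, the sketch below groups invented application records by shared identifiers; a production engine would work across millions of rows, multiple programs and many more signals.

```python
from collections import defaultdict

# Invented application records; identifiers are fictitious.
applications = [
    {"app_id": "A1", "tax_id": "111-22-3333", "state": "TX", "bank_account": "999001"},
    {"app_id": "A2", "tax_id": "111-22-3333", "state": "FL", "bank_account": "999001"},
    {"app_id": "A3", "tax_id": "444-55-6666", "state": "NY", "bank_account": "999001"},
    {"app_id": "A4", "tax_id": "777-88-9999", "state": "NY", "bank_account": "123456"},
]

def flag_shared_identifiers(apps, key):
    """Group applications that reuse the same identifier across filings."""
    groups = defaultdict(list)
    for app in apps:
        groups[app[key]].append(app["app_id"])
    return {value: ids for value, ids in groups.items() if len(ids) > 1}

# One tax ID filing in multiple states, or many applications paying into one
# bank account, are the kinds of anomalies a reviewer would see before payment.
print(flag_shared_identifiers(applications, "tax_id"))        # {'111-22-3333': ['A1', 'A2']}
print(flag_shared_identifiers(applications, "bank_account"))  # {'999001': ['A1', 'A2', 'A3']}
```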

In 2024, the Government Accountability Office estimated that the federal government loses between $233 billion and $521 billion to fraud every year.

Sterling Thomas, GAO’s chief scientist, said AI tools are showing promise in flagging fraud, but he warned that “rapid deployment without thoughtful design has already led to unintended outcomes.”

“In data science, we often say garbage in, garbage out. Nowhere is that more true than with AI and machine learning. If we start trying to identify fraud and improper payments with flawed data, we’re going to get poor results,” Thomas said.

The Treasury Department often serves as the last line of defense against fraud, but it is giving agencies access to more of its data to flag potential fraud before issuing payments.

Under a March executive order, President Donald Trump directed the Treasury Department to share its own fraud prevention database, Do Not Pay, with other agencies to the “greatest extent permitted by law.”

Renata Miskell, the deputy assistant secretary for accounting policy and financial transparency at the Treasury Department’s Bureau of the Fiscal Service, told lawmakers that only 4% of federal programs could access all of Do Not Pay’s data in fiscal 2014. But by the end of this fiscal year, she said all federal programs are on track to fully utilize Do Not Pay.

“We want every program — and there’s thousands of federal programs —  to use Do Not Pay before making award and eligibility determinations,” Miskell said.

To make Do Not Pay a more effective tool against fraud, Miskell said Treasury is looking for the ability to “ping” other authoritative federal databases, such as the taxpayer identification numbers (TINs) issued by the IRS or Social Security numbers, before issuing a payment. Without those datasets, she said, Treasury is following a “trust but verify” approach to payments, doing some basic checks before federal funds go out.

“These data sources would dramatically improve eligibility determination and fraud prevention,” Miskell said.
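A highly simplified sketch of what a pre-payment screening step can look like; the in-memory sets below are stand-ins for the authoritative data sources Miskell describes, not the actual Do Not Pay service or its interfaces.

```python
# Stand-in data sources; the real Do Not Pay service aggregates authoritative
# federal databases rather than in-memory sets like these.
DO_NOT_PAY_LIST = {"777-88-9999"}            # identifiers barred from payment
VALID_TINS = {"111-22-3333", "444-55-6666"}  # identifiers confirmed by an authoritative source

def screen_payment(payee_tin: str, amount: float) -> dict:
    """Run basic eligibility checks before a payment is released."""
    checks = {
        "on_do_not_pay_list": payee_tin in DO_NOT_PAY_LIST,
        "tin_verified": payee_tin in VALID_TINS,
    }
    approve = not checks["on_do_not_pay_list"] and checks["tin_verified"]
    return {"payee_tin": payee_tin, "amount": amount, "approve": approve, "checks": checks}

print(screen_payment("444-55-6666", 1_200.00))  # approved
print(screen_payment("777-88-9999", 1_200.00))  # blocked before funds go out
```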


DLA’s foundation to use AI is built on training, platforms

The Defense Logistics Agency is initially focusing its use of artificial intelligence across three main mission areas: operations, demand planning and forecasting, and audit and transparency.

At the same time, DLA isn’t waiting for everyone to be trained or for its data to be perfect.

Adarryl Roberts, the chief information officer at DLA, said by applying AI tools to their use cases, employees can actually clean up the data more quickly.


“You don’t have a human trying to analyze the data and come up with those conclusions. So we’re leveraging AI to help with data curation and ensuring we have cleaner data, but then also not just focusing on ChatGPT and things of that nature,” Roberts said on Ask the CIO. “I know that’s the buzzword, but for an agency like DLA, ChatGPT does not solve the strategic issues that we’re trying to solve, and so that’s why there’s a heavier emphasis on AI. For us, in those 56 use cases, a lot of that was natural language processing, a lot around procurement, what I would consider more standardized data, and what we’re moving towards with generative AI.”

A lot of this work is setting DLA up to use agentic AI in the short-to-medium term. Roberts said by applying agentic AI to its mission areas, DLA expects to achieve the scale, efficiency and effectiveness benefits that the tools promise to provide.

“At DLA, that’s when we’re able to have digital employees work just like humans, to make us work at scale so that we’re not having to redo work. That’s where you get the loss in efficiency from a logistics perspective, when you have to reorder or re-ship, that’s more cost to the taxpayer, and that also delays readiness to the warfighter,” Roberts said at the recent DLA Industry Collider day. “From a research and development perspective, it’s really looking at the tools we have. We have native tools in the cloud. We have SAP, ServiceNow and others, so based upon our major investments from technology, what are those gaps from a technology perspective that we’re not able to answer from a mission perspective across the supply chain? Then we focus on those very specific use cases to help accelerate AI in that area. The other part of that is architecting it so that it seamlessly plugs back into the ecosystem.”

He added that this ensures the technology doesn’t end up becoming a data stovepipe and can integrate into the larger set of applications to be effective and not break missions.

A good example of this approach leading to success is DLA’s use of robotic process automation (RPA) tools. Roberts said the agency currently has about 185 unattended bots working 24/7 to help DLA meet mission goals.

“Through our digital citizen program, government people actually are building bots. As the CIO, I don’t want to be a roadblock as a lot of the technology has advanced to where if you watch a YouTube video, you can pretty much do some rudimentary level coding and things of that nature. You have high school kids building bots today. So I want to put the technology in the hands of the experts, the folks who know the business process the best, so it’s a shorter flash to bang in order to get that support out to the warfighter,” Roberts said.

The success of the bots initiative helped DLA determine that the approach of adopting commercial platforms to implement AI tools was the right one. Roberts said all of these platforms reside under its DLA Connect enterprisewide portal.

“That’s really looking at the technology, the people, our processes and our data, and how do we integrate that and track that schematically so that we don’t incur the technical debt we incurred about 25 years ago? That’s going to result in us having architecture laying out our business processes, our supply chain strategies, how that is integrated within those business processes, overlaying that with our IT and those processes within the IT space,” he said. “The business processes, supply chain, strategies and all of that are overlapping. You can see that integration and that interoperability moving forward. So we are creating a single portal where, if you’re a customer, an industry partner, an actual partner or internal DLA, for you to communicate and also see what’s happening across DLA.”

Training every employee on AI

He said that includes questions about contracts and upcoming requests for proposals, as well as order status updates and other data-driven questions.

Of course, no matter how good the tools are, if the workforce isn’t trained on how to use the AI capabilities or doesn’t know where to find the data, then the benefits will be limited.

Roberts said DLA has been investing in training, from online and in-person courses to a specific “innovation navigators” course focused both on the IT and on helping the businesses across the agency look at innovation as a concept.

“Everyone doesn’t need the same level of training for data acumen and AI analytics, depending on where you sit in the organization. So working with our human resources office, we are working with the other executives in the mission areas to understand what skill sets they need to support their day-to-day mission. What are their strategic objectives? What’s that population of the workforce and how do we train them, not just online, but in person?” Roberts said. “We’re not trying to reinvent how you learn AI and data, but how do we do that and incorporate what’s important to DLA moving forward? We have a really robust plan for continuous education, not just take a course, and you’re trained, which, I think, is where the government has failed in the past. We train people as soon as they come on board, and then you don’t get additional training for the next 10-15 years, and then the technology passes you by. So we’re going to stay up with technology, and it’s going to be continuous education moving forward, and that will evolve as our technology evolves.”

Roberts said the training is for everyone, from the director of DLA to senior leaders in the mission areas to the logistics and supply chain experts. The goal is to help them answer and understand how to use the digital products, how to prompt AI tools the best way and how to deploy AI to impact their missions.

“You don’t want to deploy AI for the sake of deploying AI, but we need to educate the workforce in terms of how it will assist them in their day to day jobs, and then strategically, from a leadership perspective, how are we structuring that so that we can achieve our objectives,” he said. “Across DLA, we’ve trained over 25,000 employees. All our employees have been exposed, at least, to an introductory level of data acumen. Then we have some targeted courses that we’re having for senior leaders to actually understand how you manage and lead when you have a digital-first concept. We’re actually going to walk through some use cases, see those to completion for some of the priorities that we have strategically, that way we can better lead the workforce and their understanding of how to employ it at echelon within our organization, enhancing IT governance and operational success.”

The courses and training have helped DLA “lay the foundation in terms of what we need to be a digital organization, to think digital first. Now we’re at the point of execution and implementation, putting those tools to use,” Roberts said.


The AI data fabric: Your blueprint for mission-ready intelligence

30 December 2025 at 15:10

Artificial intelligence is only as powerful as the data it’s built on. Today’s foundation models thrive on vast, diverse and interconnected datasets, mostly drawn from public domains. The real differentiator for organizations lies with their private, mission-specific data.

This is where a data fabric comes in. A modern data fabric stores and integrates information and serves as the connective tissue between raw enterprise data and AI systems that can reason, recommend and respond in real time.

Know your data

Before you can connect your data, you need to understand it. Knowing what data you have, where it resides, and how it flows is the foundation to every AI initiative. This understanding is built on four core pillars:

  • Discoverability: Can your teams find the data they need without a manual search?
  • Lineage: Do you know where your data comes from, and how it has been altered?
  • Context: Can you interpret what the data means within your mission or business environment?
  • Governance: Is it secure, compliant and trusted enough for decision-making?

At the center of an effective data fabric is a data catalog that takes an inventory of sources and continuously learns from them. It captures relationships, context and organizational knowledge across every corner of a digital ecosystem.

Anything can be a data source, including databases, spreadsheets, code repositories, sensor streams, ETL pipelines, documents and even collaborative discussions. When you start treating all of it as valuable data, the picture of your enterprise becomes complete.
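As a minimal sketch, with illustrative field names, here is what a catalog entry carrying those four pillars might look like as a data structure:

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """One asset in the data catalog, carrying discoverability, lineage,
    context and governance metadata alongside its location."""
    name: str
    location: str                                            # where the data physically resides
    description: str                                         # mission/business context
    upstream_sources: list = field(default_factory=list)     # lineage
    tags: list = field(default_factory=list)                 # discoverability
    classification: str = "unclassified"                     # governance
    retention_days: int = 365                                 # governance

# Illustrative entry; the paths and source names are invented.
entry = CatalogEntry(
    name="readiness_reports",
    location="s3://agency-bucket/readiness/",
    description="Weekly operational readiness rollups by unit",
    upstream_sources=["maintenance_db.work_orders", "hr_system.staffing"],
    tags=["readiness", "logistics"],
    classification="cui",
    retention_days=1825,
)
print(entry.name, entry.upstream_sources)
```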

Knowing your data is the foundation. Connecting your data is transformation.

Hidden value of knowledge

The true power of a data fabric lies in its ability to link seemingly unrelated data. When disparate systems — including operations, logistics, HR, supply chain and customer support — are connected through shared metadata and inferred relationships, new knowledge emerges.

Imagine correlating operational readiness data with HR analytics to anticipate staffing shortages or connecting structured metrics with unstructured logs to reveal previously invisible patterns. This derived knowledge is your competitive advantage.
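As a toy example of that kind of correlation, with invented datasets and fields, a shared key is enough to link readiness and staffing views and surface a pattern neither system shows on its own:

```python
# Invented datasets standing in for systems that rarely share a schema.
readiness = [
    {"unit": "depot_a", "mission_capable_rate": 0.71},
    {"unit": "depot_b", "mission_capable_rate": 0.93},
]
staffing = [
    {"unit": "depot_a", "vacancy_rate": 0.22},
    {"unit": "depot_b", "vacancy_rate": 0.04},
]

# A shared key ("unit") plus metadata linking the two sources is what lets a
# data fabric surface this relationship automatically.
staffing_by_unit = {row["unit"]: row for row in staffing}
for row in readiness:
    joined = {**row, **staffing_by_unit[row["unit"]]}
    if joined["mission_capable_rate"] < 0.8 and joined["vacancy_rate"] > 0.15:
        print(f"{joined['unit']}: low readiness may track with high vacancies, flag for planners")
```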

A connected intelligence framework feeds directly into machine learning and generative AI systems, enriching them with contextually grounded, multi-source insights. The result: models that are powerful, explainable and mission aware.

Minimal requirements for an AI-ready data fabric

Building an AI-ready data fabric requires more than simple integration. It demands systems designed for adaptability, compliance and continuous intelligence.

A next-generation catalog goes far beyond simple source indexing. It must integrate lineage, governance and compliance requirements at every level, particularly in regulated or public-sector environments. This type of catalog evolves as data and processes evolve, maintaining a living view of the organization’s digital footprint. It serves as both a map and a memory of enterprise knowledge, ensuring AI models operate on trusted, up-to-date information.

Data is not enough. Domain expertise, often expressed in documents, code and daily workflows, represents a critical layer of organizational intelligence. Capturing that expertise transforms the data fabric from a technical repository into a true knowledge ecosystem.

An AI-ready data fabric should continuously integrate new data, metadata and derived knowledge. Automation and feedback loops ensure that as systems operate, they learn. Relationships between data objects become richer over time, while metadata grows more contextual. This self-updating nature allows organizations to maintain a real-time, adaptive understanding of their operations and knowledge base.

The role of generative AI in the data fabric

Generative AI plays a pivotal role. Through conversational interfaces, an AI system can interactively capture rationales, design decisions and lessons learned directly from subject matter experts. These natural exchanges become structured insights that enrich the broader data landscape.

Generative AI fundamentally changes what is possible within a data ecosystem. It enables organizations to extract and encode knowledge that has traditionally remained locked in minds or informal workflows.

Imagine a conversational agent embedded within the data fabric that interacts with data scientists, engineers or analysts as they develop new models or pipelines. By asking context-aware questions, such as why a specific dataset was chosen or what assumptions guided a model’s design, the AI captures valuable reasoning in real time.

This process transforms static documentation into living knowledge that grows alongside the enterprise. It creates an ecosystem where data, context and human expertise continually reinforce one another, driving deeper intelligence and stronger decision-making.

The way forward

The next generation of enterprise intelligence depends on data systems built for agility and scale. Legacy architectures were never designed for the speed of today’s generative AI landscape. To move forward, organizations must embrace data fabrics that integrate governance, automation and knowledge capture at their core.

John Mark Suhy is the chief technology officer of Greystones Group.


Federal News Network’s Industry Exchange Data 2026

By: wfedstaff
23 December 2025 at 11:14

Federal agencies manage vast amounts of data. The challenge is turning that information into mission-critical insights — quickly, securely and at scale. 

Join Federal News Network on Feb. 9, 2026, where federal IT leaders examine how agencies are advancing data strategies to meet rising mission, security and policy demands. 

This virtual event will explore the platforms and policy shifts shaping the next phase of federal data modernization. 

Topics include: 

  • Building flexible, AI-ready data architectures 
  • Using open-source tools to improve scale and performance 
  • Balancing innovation with security and compliance 
  • Turning mission data into real-time, actionable insights 
  • Enabling secure interoperability across cloud environments 

Who should attend:
Federal IT decision-makers, data leaders and systems architects focused on maximizing the value of agency data. 

Register now to hear how federal agencies are strengthening data foundations and using insights to support faster, smarter decisions in 2026. 

The post Federal News Network’s Industry Exchange Data 2026 first appeared on Federal News Network.


Slow Your Roll on AI

22 December 2025 at 15:19
S. Schuchart

AI has been all the rage for at least three years now, first just generative AI (GenAI), and now agentic AI. AI can be pretty useful; at GlobalData we’ve done some very cool things with AI on our site. Strategic things that serve a defined purpose and add value. The use of AI at GlobalData hasn’t been indiscriminate – it has been thought through in terms of how it could help our customers and ourselves. Even this skeptical author can appreciate what’s been done.

But a lot of what is happening out there with AI is indiscriminate and doesn’t attack problems in a prescriptive way. Instead, it is sold as a panacea. A cure for all business and IT ills. The claims are always huge but strangely lacking in detail. This is particularly true for agentic AI, where the industry only in the last month managed to get the Model Context Protocol (MCP) into the Linux Foundation as a standard. The security issues of agentic AI are still largely unaddressed, and certainly not addressed in any standardized fashion. It’s not that agentic AI is a bad idea; it’s not. But the way it’s being sold has a tinge of irrational hysteria.

Sometimes when a vendor introduces a new capability that proudly uses agentic AI, it’s not clear why that capability being ‘agentic’ makes any difference compared with plain AI. New AI features are appearing everywhere, with vendors jamming AI into every nook and cranny, ignoring the privacy issues, and making it next to impossible to avoid or turn off. The worst part is that these AI features are often half-baked ideas implemented too quickly, or even worse, written by AI itself, with all of the security and code-bloat issues that ensue.

The prevailing wind, no, scratch that, the hurricane-force gale in the IT industry is that AI is everything, AI must be everywhere, and *any* AI is good AI. Any product, service, solution, or announcement must spend at least half of its content on how this is AI and how AI is good.

AI *can* be a wonderful thing. But serious enterprise IT administrators, coders, and engineers know a few things:

1. In a new market like AI, not every company selling AI will continue to sell AI. There will be consolidation, especially in an overhyped trend. Vendors and products will disappear.
2. Version 1.0 hardly ever lives up to its billing. Even Windows wasn’t really minimally viable until 3.1.
3. Aligning IT/business value received vs. costs to implement/continue is a core component of the job.
4. The bigger the hype, the bigger the backlash.
5. The bigger the hype, the bigger the fear of missing out (FOMO) amongst senior management.
6. The problems are in the details, not in the overall concept.

So let’s all slow our roll when it comes to AI. More focus on what matters: what can *demonstrably* provide value versus what is merely claimed to provide value. Implementation costs as well as one-, three-, and five-year costs. Risk assessment from a data privacy, cybersecurity, and regulatory standpoint. In short, a little more due diligence and a lot less FOMO. AI is going to happen; that’s not the issue. The issue is for enterprises to implement AI where it will help, rather than viewing it as a panacea for all problems.

DXC Helps Enterprises Scale AI with AdvisoryX

By: siowmeng
16 December 2025 at 17:48
S. Soh

Summary Bullets:

  • DXC has created AdvisoryX, a global advisory and consulting group, to help enterprises scale their AI deployments and create business value.
  • Besides leveraging AI to drive innovation with customers, DXC is also adopting AI internally to gain productivity and embedding AI into its services.

DXC has made significant progress expanding its AI capability throughout 2025. The company recently launched AdvisoryX, a global advisory and consulting group designed to help enterprises address their most complex strategic, operational, and technology challenges. This is a positive move that can help enterprises accelerate their AI journey and achieve better outcomes. While enterprises are eager to implement AI, most of them do not have a well-thought-out strategy and operating model, or the necessary expertise to deploy AI successfully. What typically happens is that departments work on siloed projects, without organization-wide collaboration, resulting in inefficiencies and governance issues. DXC’s AdvisoryX helps to overcome key challenges, from getting started through full lifecycle management.

DXC’s AdvisoryX offers five integrated solutions, which include DXC’s AI Core (i.e., the foundation including data, modeling, governance, and platform architecture); AI Reinvent (i.e., proven industry use cases across human-assisted, semi-autonomous, and autonomous operating models); AI Interact (i.e., redesigned workflows for collaboration between people and AI); AI Validate (i.e., continuous testing, observability, and governance); and AI Manage (i.e., production operations and lifecycle management).

With AdvisoryX, DXC has strengthened its position as a partner for AI innovation, and the group allows the company to counter efforts by competitors to drive mindshare in the AI space. This is also a buildup of efforts the company has undertaken to develop its AI capabilities. In October 2025, DXC announced Xponential, an AI orchestration blueprint that has already been used by global enterprises to scale AI adoption. Xponential provides a structured approach to integrating people, processes, and technology. There are five independent pillars within the blueprint: ‘Insight’ (i.e., embedded governance, compliance, and observability); ‘Accelerators’ (i.e., tools to speed up deployment); ‘Automation’ (i.e., agentic frameworks and protocols); ‘Approach’ (i.e., collaboration of skilled professionals and AI to amplify outcomes); and ‘Process’ (i.e., delivery methodology). The company has cited Singapore General Hospital as a client that has leveraged DXC’s expertise to develop the Augmented Intelligence in Infectious Diseases (AI2D) solution. This solution helps to guide antibiotic choices for lower respiratory tract infections with 90% accuracy and improve patient care while combating antimicrobial resistance.

In April 2025, the company introduced DXC AI Workbench, a generative AI (GenAI) offering that combines consulting, engineering, and secure enterprise services to help businesses worldwide integrate and scale responsible AI into their operations. The company has named Ferrovial, a global infrastructure company, as a customer reference that has leveraged DXC AI Workbench. The customer developed more than 30 AI agents making real-time decisions to optimize field operations, elevate safety measures, manage business knowledge, analyze competition, and assess regulatory impacts.

The company has identified AI as a key driver for business growth. Equally, it sees opportunities to apply AI internally for productivity and to gain experience with the technology. For example, DXC’s finance teams have used AI to transform back-office activities and eliminate repetitive processes; its legal department uses AI for legal research, drafting, and document preparation; and its sales and marketing teams deploy AI to automate workflows, generate proposals, etc. The company is also leveraging AI to enhance its service offerings. For example, it has partnered with 7AI to launch DXC’s agentic security operations center. These examples underscore DXC’s experience and capability in creating business value with AI.

That said, DXC is not the only systems integrator driving growth with an AI advisory and consulting practice. While the company is showing traction and building customer case studies, competitors are also moving rapidly to engage clients in AI innovation and implementation. Accenture, for example, nearly doubled its GenAI bookings in FY2025 to $5.9 billion from FY2024 and tripled its GenAI revenues to $2.7 billion. Tata Consultancy Services has also created a dedicated AI business unit, and it is driving transformation through a ‘responsible AI’ framework.

While DXC has introduced AdvisoryX, there is a lack of detail about the size of the group, its areas of focus (e.g., geographical regions and industry sectors), and the assets underpinning its five integrated solutions. This makes it harder to see the differentiation against other providers that are also scaling their AI consulting practices. The company should also consider following up with announcements that highlight how AdvisoryX has made a difference in helping clients achieve their AI goals. This can be across the five integrated solutions, especially AdvisoryX’s AI Reinvent and AI Interact, which address many challenges related to human collaboration and business processes.

It is still early days in the adoption of AI, and competition in the AI space will become more intense. To stay competitive, service providers need to keep strengthening their ability to help clients align AI initiatives with business goals and industry-specific processes and challenges; enhance their AI platforms and tools; and expand their AI partner ecosystems. They also need to build more customer case studies to highlight success and gain credibility.

Boomi Enables Agentic Transformation by Connecting Applications, Data, and AI Agents Through a Single Platform

By: siowmeng
11 December 2025 at 09:34
S. Soh

Summary Bullets:

  • Boomi has developed a platform to help connect systems, manage data, and deploy AI agents more effectively.
  • Boomi is expanding its customer base and partner base in Asia-Pacific; adding global systems integrators will help to drive penetration in the large enterprise segment.

Boomi highlighted at its Boomi World Tour event in Sydney (Australia) that without connectivity, context, and control, there will be no business impact. This epitomizes the challenge for businesses as they continue to pursue agentic transformation, especially with the recent focus on various AI technologies to drive new operating and business models. As enterprises shift their focus toward agentic AI, they often look at the tasks they can automate with AI agents.

However, the bigger picture is about business impact: businesses should focus on reimagining their workflows and business operations. This requires communication between systems through application programming interfaces (APIs), the backbone that connects applications, data, and AI agents across the enterprise. The ability to extract data from across an organization is key, as it adds context for decision-making. Moreover, it is essential for businesses to have the right controls over their integrations, their use of data, and the access rights of AI agents.

It is against this backdrop that Boomi has developed its platform to enable effective management of integration, APIs, data, and AI agents. While Boomi’s business has been anchored on integration and automation, it has made significant investments to enhance data management, API management, and AI agent management. For example, the acquisitions of Rivery and Thru added data integration and managed file transfer capabilities, respectively. Boomi now has a compelling API management solution, and it has added an AI gateway that sits between applications and AI models to check AI requests, manage costs, apply security rules, and route requests to the right model. These are crucial functions for managing the costs of models with token-based pricing, providing a layer of security against prompt injection, and processing streaming responses.
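
As a loose illustration of what such a gateway does, the Python sketch below applies a blocklist check, routes a request to a cheaper or more capable model, and estimates token cost. The rules, prices and model names are invented for the example; this is not Boomi’s implementation.

    # Generic AI-gateway sketch: all values are placeholders for illustration.
    PRICE_PER_1K_TOKENS = {"small-model": 0.0005, "large-model": 0.01}   # assumed prices
    BLOCKLIST = ("ignore previous instructions", "reveal your system prompt")

    def route(prompt: str, needs_reasoning: bool) -> str:
        # Apply a basic security rule before the request leaves the gateway.
        if any(phrase in prompt.lower() for phrase in BLOCKLIST):
            raise ValueError("Request blocked: possible prompt injection")
        # Route to the cheapest model that can handle the task.
        return "large-model" if needs_reasoning else "small-model"

    def estimate_cost(prompt: str, model: str) -> float:
        tokens = max(1, len(prompt) // 4)        # rough heuristic: ~4 characters per token
        return tokens / 1000 * PRICE_PER_1K_TOKENS[model]

    model = route("Summarize this contract clause.", needs_reasoning=False)
    print(model, estimate_cost("Summarize this contract clause.", model))

A production gateway would add authentication, logging, streaming support and per-team quotas, but the routing-plus-policy pattern is the same.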

Boomi’s Agentstudio provides AI agent lifecycle management, allowing users to create, govern, and orchestrate agents. Its customers have deployed over 50,000 agents and nearly 350 AI-powered solutions on Boomi Marketplace. The company continues to enhance Agentstudio to meet customer demand. In particular, Boomi is supporting context engineering (e.g., GraphRAG), open-source (e.g., MCP client/server), agent governance (e.g., multi-provider support, FinOps), management of AI agent access (e.g., delegated authorization), and more. All these capabilities – from AI agent management to integration & automation, data management, and API management – are now available through a single Boomi Enterprise Platform.

Boomi’s platform and its AI approach are well-received by enterprise customers. For example, Greencross Pet Wellness Company, Australia’s largest pet wellness organization, leverages Boomi Enterprise Platform to support data integration and business transformation across its inventory systems, HR platforms, warehousing, and digital services. Boomi’s platform also enabled the company to develop its digital pet profile platform, which allows customers to build personalized profiles, receive timely reminders for treatments, view tailored product recommendations, and access relevant services based on their pet’s needs.

Serco Asia-Pacific is another customer in the region that has deployed Boomi’s platform and achieved productivity gains with Boomi’s AI capabilities. In particular, the company has dramatically reduced the time it takes a developer to build and document an integration, using Boomi’s DesignGen (creating integrations with prompts) and Scribe (generating summaries, descriptions, and documentation) AI agents. Serco now sees Boomi as a crucial partner for its digital transformation, leveraging Boomi Enterprise Platform for integration as well as data management and API management.

Partners play a part in promoting Boomi’s solutions while helping enterprises transform their business. Examples of partners in Asia-Pacific include Adaptiv, Atturra, and United Techno, which have been leveraging Boomi for their data and integration businesses. Atturra is a business advisory and IT solutions provider in Asia-Pacific with a strong industry focus (e.g., logistics, education, financial services, and more). Adaptiv is an ANZ provider of data integration, analytics, and AI services. United Techno has a stronger focus on data management and AI solutions, especially within the retail, e-commerce, and logistics sectors.

Boomi also engages global systems integrators to promote its solutions to large enterprises for their digital transformation. The company formed a strategic partnership with DXC in August 2025, focusing on application modernization and agentic AI. Particularly for AI projects, consulting services can make a difference in helping enterprises drive more successful outcomes. Systems integrators have been strengthening their consulting capabilities aligned to industry verticals, which can be pivotal in helping companies reimagine their business workflows, implement the right solutions, and measure the business outcomes effectively. They also have existing relationships with many large enterprise customers. Ultimately, the enterprise technology environment is becoming more complex with the need to manage an ecosystem of different technology vendors. Boomi wants to be the glue connecting different technologies, but it also needs partners to bring it all together. Continuing to expand its go-to-market partners and adding more global/regional systems integrators is crucial to penetrate the large enterprise segment across Asia-Pacific.

Twilio Drives CX with Trust, Simple, and Smart

By: siowmeng
5 December 2025 at 09:55
S. Soh

Summary Bullets:

  • The combination of omni-channel capability, effective data management, and AI will drive better customer experience.
  • As Twilio’s business evolves from CPaaS to customer experience, the company focuses its product development on the themes of trusted, simple, and smart.

The ability to provide superior customer experience (CX) helps a business gain customer loyalty and a strong competitive advantage. Many enterprises are looking to AI including generative AI (GenAI) and agentic AI to further boost CX by enabling faster resolution and personalized experiences.

Communications platform-as-a-service (CPaaS) vendors offer platforms focused on meeting omni-channel communications requirements. These players have now integrated a broader set of capabilities to solve CX challenges across different touch points, including sales, marketing, and customer service. Twilio is one of the major CPaaS vendors that has moved beyond communications application programming interfaces (APIs) to include contact center (Twilio Flex), customer data management (Segment), and conversational AI. Twilio’s product development has been focusing on three key themes: trusted, simple, and smart. The company has demonstrated these themes through product announcements throughout 2025, showcased at its SIGNAL events around the world.

First, Twilio is winning customer trust through its scalable and reliable platform (e.g., 99.99% API reliability) and by working with all major telecom operators in each market (e.g., Optus, Telstra, and Vodafone in Australia). More importantly, it is helping clients win the trust of their customers. With rising fraud impacting consumers, Twilio has introduced various capabilities, including Silent Network Authentication and FIDO-certified passkeys, as part of Verify, its user verification product. The company is also promoting the use of branded communications, which has been shown to build consumer trust and greater willingness to engage with brands. Twilio has introduced branded calling, RCS for branded messaging, WhatsApp Business Calling, and WebRTC for browsers.

The second theme is about simplifying the developer experience when using the Twilio platform to achieve better CX outcomes. Twilio has long been in the business of giving businesses the ability to reach their customers through a range of communications channels. With Segment (its customer data platform), Twilio enables businesses to leverage their data more effectively to gain customer insights and take action. An example is the recent introduction of Event Triggered Journey (generally available in July 2025), which allows the creation of automated marketing workflows to support personalized customer journeys. This can enable a responsive approach for real-time use cases, such as cart abandonment, onboarding flows, and trial-to-paid account journeys. Taking action promptly to address issues a customer is facing improves the chances of a successful transaction and a happy customer.
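
To make the pattern concrete, here is a generic sketch of a cart-abandonment trigger. The event names, timing and the send_message callback are placeholders for illustration and do not reflect Segment’s or Twilio’s actual APIs.

    # Generic event-triggered journey sketch; not a real product API.
    import time

    PENDING_CARTS: dict[str, float] = {}        # user_id -> timestamp of last cart activity
    ABANDON_AFTER_SECONDS = 30 * 60

    def on_event(event: str, user_id: str) -> None:
        if event == "cart_updated":
            PENDING_CARTS[user_id] = time.time()
        elif event in ("checkout_completed", "cart_emptied"):
            PENDING_CARTS.pop(user_id, None)    # journey ends, no message sent

    def run_journey(send_message) -> None:
        # Called periodically; messages only users whose carts have gone stale.
        now = time.time()
        for user_id, last_seen in list(PENDING_CARTS.items()):
            if now - last_seen > ABANDON_AFTER_SECONDS:
                send_message(user_id, "You left items in your cart. Need a hand finishing up?")
                PENDING_CARTS.pop(user_id)

The value of a managed journey product is that this orchestration, plus channel selection and frequency capping, comes out of the box rather than being hand-rolled.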

The third theme, ‘smart’, is about leveraging AI to make better decisions, enable differentiated experiences, and build stronger customer relationships. Twilio announced two conversational AI updates in May 2025. The first is ‘Conversational Intelligence’ (generally available for voice and in private beta for messaging), which analyzes voice calls and text-based conversations and converts them into structured data and insights. This is useful for understanding sentiment, spotting compliance risks, and identifying churn risks. The other AI capability is ‘ConversationRelay’, which enables developers to create voice AI agents using their preferred LLM and integrate them with customer data. Twilio is leveraging speech recognition technology and interrupt handling to enable human-like voice agents. Cedar, a financial experience platform for healthcare providers, is leveraging ConversationRelay to automate inbound patient billing calls. Healthcare providers receive large volumes of calls from patients seeking clarity on their financial obligations, and ConversationRelay enables AI-powered voice agents to provide quick answers and reduce wait times. This provides a better patient experience and quantifiable outcomes compared with traditional chatbots. It is also said to reduce costs. The real test is whether such capabilities move customer experience metrics, such as net promoter score (NPS).

Today, many businesses use Twilio to enhance customer engagement. At the Twilio SIGNAL Sydney event, for example, Twilio customers spoke about their success with Twilio solutions. Crypto.com reduced onboarding times from hours to minutes, Lendi Group (a mortgage fintech company) highlighted the use of AI agents to engage customers after hours, and Philippine Airlines was exploring Twilio Segment and Twilio Flex to enable personalized customer experiences. There was general excitement about the use of AI to further enhance CX. However, while businesses are aware of the benefits of using AI to improve customer experience, the challenge has been the ability to do it effectively.

Twilio is simplifying the process with Segment and its conversational AI solutions. The company is tackling another major challenge, AI security, through the acquisition of Stytch (completed on November 14, 2025), an identity platform for AI agents. AI agent authentication becomes crucial as more agents are deployed and given access to data and systems. AI agents will also collaborate autonomously through protocols such as the Model Context Protocol, which can create security risks without an effective identity framework.

The industry has come a long way from legacy chatbots to GenAI-powered voice agents, and Twilio is not alone in pursuing AI-powered CX solutions. The market is still a long way from producing quantifiable feedback from customers. Technology vendors enabling customer engagement (e.g., Genesys, Salesforce, and Zendesk) have developed AI capabilities, including voice AI agents. The collective efforts and competition within the industry will help to drive awareness and adoption. But it is crucial to get the basics right around data management, security, and the cost of deploying AI.

Verizon Mobile Security Index: In the AI Era, the Human Element Remains the Weak Link

20 November 2025 at 12:55

Amy Larsen DeCarlo – Principal Analyst, Security and Data Center Services

Summary Bullets:

  • To protect an expansive mobile attack surface in the face of a very dangerous threat landscape, organizations are ramping up their security investments, with 75% of the 762 respondents polled in a recent Verizon study reporting they had increased spending this year.
  • But concerns still loom large about threat actors using AI and other technologies and tactics to breach the enterprise, and only 17% have implemented security controls to stave off AI-driven attacks.

Mobile and IoT devices play an essential role in most organizations’ operations today. However, the convenience and flexibility they bring come with risk, opening new points of exposure to enterprise assets. Organizations that were quick to embrace bring-your-own-device (BYOD) strategies often didn’t have a solid plan for safeguarding this environment when so many of these devices were under-secured. Enterprises have made progress in layering their defenses to better protect mobile and IoT environments, but there is still room for improvement.

In Verizon’s eighth annual Mobile Security Index report, 77% of the people surveyed said that deepfake attacks, which tap AI-generated voice and video content to impersonate staff or executives, and SMS text phishing campaigns are likely to accomplish their objectives. Approximately 38% think AI will make ransomware even more effective.

Despite the increase in cybersecurity spending in most organizations, only 12% have deployed security controls to safeguard their enterprise from deepfake-enhanced voice phishing. Just 16% have implemented protections against zero-day exploits.

Enterprise employees are welcoming AI-driven apps onto their mobile devices, with 93% using GenAI as part of their workday routine. Respondents also raised red flags, with 64% calling data compromise via GenAI their number one mobile risk. Of the 80% of enterprises that ran employee smishing tests, 39% saw employees fall for the scam.

AI aside, user error is the most frequently noted contributor to breaches in general, followed by application threats and network threats. Some 80% said they had documented mobile phishing attempts aimed at staff.

While prioritizing cybersecurity spending is important, organizations need to look at whether they are allocating this investment to the right areas. Just 45% said their organization provides comprehensive education on the potential risks mobile AI tools bring. Only half have formal policies regarding GenAI use on mobile devices, and 27% said those policies aren’t strictly enforced.

ANS’ Sci-Net Acquisition Positioned as Driving UK AI Readiness

15 October 2025 at 12:06
R. Pritchard

Summary Bullets:

  • ANS’ acquisition of Sci-Net Solutions expands its portfolio of value-added enterprise technology solutions in a highly competitive UK B2B market
  • AI is a hook everyone latches on to – there are even products and solutions out there – but this is an acquisition of a service provider with current revenues

The ANS acquisition of Sci-Net Business Solutions is positioned as a complement to previous acquisitions such as Makutu, as part of the ANS strategy to exploit and deliver on the opportunities presented by artificial intelligence (AI). Sci-Net is an Oxford-based business solutions specialist with expertise in ERP, CRM, and cloud infrastructure solutions (e.g., 365 Business Central, Microsoft Dynamics NAV, CRM, and Microsoft Azure).

With ANS already having a strong relationship with Microsoft (Services Partner of the Year in 2024 and over 100 certified Microsoft specialists), the combination makes sense and grows the ANS talent base to over 750 people, including 65 technology consultants from Sci-Net. It offers opportunities to cross-sell and up-sell to the companies’ existing customer bases, and to continue to move up the value chain as a managed services provider (MSP).

The move also underlines some key trends in the UK marketplace. Competition remains fierce, so being able to act as a trusted advisor is becoming more important to win and retain business. At the same time, technology continues to become more complex, therefore offering a full portfolio of services ‘above and beyond’ connectivity is vital. MSPs and value-added resellers (VARs) recognize this and represent an ever-stronger force in the market as they can work closely with customers to develop technology solutions that directly address their business needs.

That is not to say that the ‘Big Three’ B2B service providers – BT, Vodafone, and O2 Daisy – do not also recognize this. All of them are positioning to become more solutions-oriented with a focus on areas like cloud, security and, increasingly, AI. They have the advantage of significant existing customer bases, deep human and partnership resources, strong brands, and nationwide fixed and mobile networks from which to deliver their services. By contrast, the likes of ANS and other VARs/MSPs can exploit their agility to differentiate themselves in the market.

It will continue to be a highly competitive market to win the custom of enterprises of all sizes in the UK, which is a tough challenge for all service providers. But it is good news for UK plc as businesses stand to benefit from innovation and value.

Is Liquid Cooling the Key Now that AI Pervades Everything?

30 September 2025 at 13:13
B. Valle

Summary Bullets:

• Data center cooling has become an increasingly daunting challenge because AI accelerators consume massive amounts of power.

• Liquid cooling adoption is progressively evolving from experimental to mainstream, starting with AI labs and hyperscalers, then moving into the colocation space and later into enterprises.

As Generative AI (GenAI) takes an ever-stronger hold in our lives, the demands on data centers continue to grow. The heat generated by the high-density computing required to run AI applications that are more resource-intensive than ever is pushing companies to adopt ever more innovative cooling techniques. As a result, liquid cooling, which used to be a fairly experimental technique, is becoming more mainstream.

Eye-watering amounts of money continue to pour into data center investment to run AI workloads. Heat management has become top of mind due to the high rack densities deployed in data centers. GlobalData forecasts that AI revenue worldwide will reach $165 billion in 2025, marking annual growth of 26% over the previous year. The growth rate will accelerate to 34% in 2026 and continue to climb in subsequent years; in fact, the CAGR for the period 2004-2025 will reach 37%.


(Chart: GlobalData worldwide AI revenue forecast. Source: GlobalData)

The powerful hardware designed for AI workloads is growing in density. Although average rack densities are usually below 10 kW, it is feasible to imagine AI training clusters at 200 kW per rack in the not-too-distant future. Of course, the average number of kW per rack varies a lot depending on the application, with traditional IT workloads for mainstream business applications requiring far fewer kW per rack than frontier AI workloads.

Liquid cooling is a heat management technique that uses liquid to remove heat from computing components in data centers. Liquid has a much higher thermal conductivity than air, so it can absorb and transfer heat more effectively. By bringing a liquid coolant into direct contact with heat-generating components like CPUs and GPUs, liquid cooling systems can remove heat at its source, maintaining stable operating temperatures.

Although there are many diverse types of liquid cooling techniques, direct-to-chip, also known as “cold plate,” is the most popular cooling method, accounting for approximately half of the liquid cooling market. This technique uses a cold plate mounted directly on the chip inside the server; the direct contact enhances heat transfer efficiency and enables effective heat dissipation. This method allows high-end, specialized servers to be installed in standard IT cabinets, similar to legacy air-cooled equipment.
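
A rough, illustrative calculation shows the scale involved; the wattage and temperature rise below are assumptions for the example, not vendor figures.

    # Back-of-the-envelope coolant flow for a direct-to-chip cold plate.
    chip_power_w = 1500            # heat to remove from one AI accelerator, in watts (assumed)
    cp_water = 4186                # specific heat of water, J/(kg*K)
    delta_t = 10                   # allowed coolant temperature rise across the plate, K (assumed)

    mass_flow = chip_power_w / (cp_water * delta_t)     # Q = m_dot * cp * dT  ->  ~0.036 kg/s
    liters_per_minute = mass_flow * 60                  # water: 1 kg is roughly 1 liter -> ~2.2 L/min
    print(f"{mass_flow:.3f} kg/s  (~{liters_per_minute:.1f} L/min per accelerator)")

Roughly two liters per minute per accelerator may sound modest, but multiplied across tens of thousands of accelerators it explains why coolant distribution becomes a facility-level engineering problem.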

There are innovative variations on the cold plate technique that are currently under experimentation. Microsoft is currently prototyping a new method that takes the direct to chip technique one step further by bringing liquid coolant directly inside the silicon where the heat is generated. The method entails applying microfluidics via tiny channels etched into the silicon chip, creating grooves that allow cooling liquid to flow directly onto the chip and more efficiently remove heat.

Swiss startup Corintis is behind the novel technique. Electronics and heat management systems have historically been designed and made separately, creating unnecessary obstacles when heat has to propagate through multiple materials. Corintis instead blends the electronics and the cooling together from the beginning, so the microchannels sit right underneath the transistors.

Cisco Quantum – Simply Network All the Quantum Computers

26 September 2025 at 12:29
S. Schuchart

Cisco’s Quantum Labs research team, part of Outshift by Cisco, has announced that it has completed a full software solution prototype. The latest piece is the Cisco Quantum Compiler prototype, designed for distributed quantum computing across networked processors. In short, it allows a network of quantum computers, of all types, to participate in solving a single problem. Even better, this new compiler supports distributed quantum error correction. Instead of a single quantum computer needing a huge number of qubits itself, the load can be spread out among multiple quantum computers. This coordination is handled across a quantum network, powered by Cisco’s Quantum Network entanglement chip, which was announced in May 2025. The same network could also be used to secure communications for traditional servers.

For some quick background – one of the factors holding quantum computers back is the lack of both quantity and quality when it comes to qubits. Most of the amazing things quantum computers can in theory do require thousands or millions of qubits, and today’s largest systems have around a thousand. Those qubits also need to be quality qubits, yet they are extremely susceptible to outside interference. To fix the quality problem, a considerable amount of work has gone into error correction for qubits. But most quantum error correction schemes require even more physical qubits to create stable logical qubits. Research has been ongoing across the industry – everyone is looking for a way to create large numbers of stable qubits.
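
To give a rough sense of the overhead, the sketch below uses the commonly cited surface-code estimate of roughly 2 x d^2 physical qubits per logical qubit at code distance d. Real overheads depend heavily on the hardware and the code used, so treat the numbers as illustrative only.

    # Rough illustration of why qubit counts balloon with error correction.
    def physical_per_logical(distance: int) -> int:
        # Approximate surface-code cost: data qubits plus measurement qubits.
        return 2 * distance ** 2

    for d in (7, 17, 27):
        per_logical = physical_per_logical(d)
        print(f"distance {d}: ~{per_logical} physical qubits per logical qubit; "
              f"a 1,000-qubit machine yields ~{1000 // per_logical} logical qubit(s)")

Under these assumptions, even a thousand-qubit machine yields only a handful of well-protected logical qubits, which is exactly the gap distributed approaches hope to close.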

What Cisco is proposing is that, instead of making a single quantum processor bigger to get more qubits, multiple quantum processors can be strung together with its quantum networking technology, with the quality of the transmitted qubits ensured by distributed error correction. It’s an intriguing idea – as Cisco more or less points out, we didn’t achieve scale with traditional computing by simply making a single CPU bigger and bigger until it could handle all tasks. Instead, multiple CPUs were integrated in a server, and then those servers were networked together to share the load. That makes good sense, and it’s an interesting approach. Just as with traditional CPUs, quantum processors will not suddenly stop growing – but if this works, it will allow smaller quantum processors to be scaled out, possibly ushering in useful, practical quantum computing sooner.

Is this the breakthrough needed to bring about the quantum computing revolution? At this point it’s a prototype – not an extensively tested method. Quantum computing requires so much fundamental physics research and is so complicated that it’s extremely hard to say whether what Cisco is suggesting can usher in that new quantum age. But it is extremely interesting, and it will certainly be worth watching this approach as Cisco ramps up its efforts in quantum technologies.

Technology Leaders Can Leverage TBM to Play a More Strategic Role in Aligning Tech Spend with Business Values

By: siowmeng
19 September 2025 at 12:44
S. Soh

Summary Bullets:

  • Organizations are spending more on technology across business functions, and it is imperative for them to understand and optimize their tech spending through technology business management (TBM).
  • IBM is a key TBM vendor helping organizations to drive their IT strategy more effectively; it is making moves to extend the solution to more customers and partners.

Every company is a tech company. While this is a cliché, especially in the tech industry, it is becoming real in the era of data and AI. For some time, businesses have been gathering data and analyzing it for insights to improve processes and develop new business models. By feeding data into AI engines, enterprises accelerate transformation by automating processes and reducing human intervention. The result is less friction in customer engagement, more agile operations, smarter decision-making, and faster time to market. This is, at least on paper, the promise of AI.

However, enterprises face challenges as they modernize their tech stacks, adopt more digital solutions, and move AI from trials to production. Visibility into tech spending and the ability to forecast costs, especially with many services consumed on a pay-as-you-go basis, are a challenge. While FinOps addresses cloud spend, a more holistic view of technology spend is necessary, including legacy on-premises systems, GenAI costs (pricing is typically based on tokens), as well as labor-related costs.

This has made the concept of TBM more crucial today than ever. TBM is a discipline that focuses on enhancing business outcomes by providing organizations with a systematic approach to translating technology investments into business value. It brings financial discipline and transparency to IT expenditures with the aim of maximizing the contribution of technology to overall business success. Technology is now widely used across business functions, such as enterprise resource planning (ERP) for finance, human capital management (HCM) for HR, customer relationship management (CRM) for sales, and supply chain management (SCM) for operations. Based on GlobalData’s research, about half of tech spend today already comes from budgets outside the IT department. TBM is becoming even more crucial as the use of technology grows more pervasive across the organization, especially with AI being embedded into workflows. Moreover, TBM capabilities also help elevate the role of tech leaders within an organization as strategic business partners.

IBM is one of the vendors that offer a comprehensive set of solutions to support TBM, enabled in part by acquisitions such as Apptio (which had itself acquired Cloudability and Targetprocess) and Kubecost. Cloudability underpins IBM’s FinOps and cloud cost management, a key component that is already seeing strong demand due to the need to optimize cloud workloads and spend as companies continue to expand their cloud usage. Apptio offers IT financial management (ITFM), which helps enterprises gain visibility into their tech spend (including SaaS, cloud, on-premises systems, labor, etc.) as well as usage and performance by app or team. This enables real-time decision-making, facilitates the assessment of IT investments against KPIs, makes it possible to shift IT budget from keeping the lights on to innovation, and supports showback/chargeback to promote fairness and efficient usage of resources. With Targetprocess, IBM also has a strategic portfolio management (SPM) solution that helps organizations plan, track, and prioritize work, from the strategic portfolio of projects and products down to the software development team. The ability to track work delivered by teams and determine the cost per unit of work allows organizations to improve time-to-market and align talent spend to strategic priorities.
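
For readers unfamiliar with showback, the sketch below illustrates the basic mechanic of splitting a shared platform bill across teams by metered usage. The bill, team names and usage figures are invented for illustration and are not tied to any particular tool.

    # Minimal showback sketch: allocate a shared monthly bill by metered usage.
    shared_bill = 120_000.0                      # USD for the month (assumed)
    usage_units = {"claims-processing": 4200, "fraud-analytics": 2800, "portal": 1000}

    total_units = sum(usage_units.values())
    showback = {team: round(shared_bill * units / total_units, 2)
                for team, units in usage_units.items()}
    print(showback)   # {'claims-processing': 63000.0, 'fraud-analytics': 42000.0, 'portal': 15000.0}

Chargeback works the same way, except the allocated amounts actually hit each team’s budget, which is what drives the behavioral change TBM is after.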

Besides IBM, ServiceNow’s SPM helps organizations make better decisions about which initiatives to pursue based on resources, people, budgets, and more. ServiceWare is another firm that offers cloud cost management, ITFM, and a digital value model for TBM. Other FinOps and ITSM vendors may also join the fray as market awareness grows.

Moreover, TBM should not be a practice reserved for the largest enterprises; whether it makes sense depends on the level of tech spending involved. While IBM/Apptio serves many enterprises (e.g., 60% of Global Fortune 100 companies) with tech spend well over $100 million, other vendors (e.g., MagicOrange and Nicus) have more cost-effective solutions targeting mid-sized enterprises. IBM is now addressing this customer segment with the streamlined IBM Apptio Essentials suite, announced in June 2025, which offers the fundamental building blocks of an ITFM practice that can be implemented quickly and more cost-effectively. Based on GlobalData’s ICT Client Prospector database, in the US alone there are over 5,000 businesses with total spend exceeding $25 million, which expands the addressable market for IBM.

For service providers, TBM is also a powerful offering for deeper engagement with enterprises, delivering a solution that drives tangible business outcomes. Personas interested in TBM include CIOs, CFOs, and CTOs. While TBM tools and dashboards are readily available, service providers can play a role in managing stakeholders and designing the processes. Through working with multiple enterprise customers, service providers also build experience and best practices that help deliver value faster and avoid potential pitfalls. Service providers such as Deloitte and Wipro already offer TBM to enterprise customers. Others should also consider working with TBM vendors to develop a similar practice.

Mistral AI’s Independence from US Companies Lends it a Competitive Edge

15 September 2025 at 17:40
B. Valle

Summary Bullets:

• Mistral AI’s valuation went up to EUR11.7 billion after a funding round of EUR1.7 billion spearheaded by Netherlands-based ASML.

• The French company has the edge in open source and is well positioned to capitalize on the sovereign AI trend sweeping Europe right now.

Semiconductor equipment manufacturer ASML and Mistral AI announced a partnership to explore the use of AI models across ASML’s product portfolio to enhance its holistic lithography systems. In addition, ASML was the lead investor in the AI startup’s latest funding round and now holds an 11% share in Mistral AI on a fully diluted basis.

The deal holds massive symbolic weight in the era of sovereign AI and trade barriers. Although not big in the grand scheme of things, especially compared with the eye-watering sums usually exchanged in the bubbly AI world, it brings together Europe’s AI superstar Mistral with the world’s only manufacturer of EUV lithography machines for AI accelerators. ASML may not be a well-known name outside the industry, but the company is a key player in global technology. Although not an acquisition, the deal is reminiscent of the many alliances between AI accelerator companies and AI software companies, as Nvidia and AMD continue to buy startups such as Silo AI and others. Moreover, Mistral, which has never been short of US funding through VC activity, has received a financial boost at the right time, when US bidders were rumored to be circling like sharks. Even Microsoft was said to be considering buying the company at some point. For GlobalData’s take on this, please see Three is a Crowd: Microsoft Strikes Sweetheart Deal with Mistral while OpenAI Trains GPT-5. ASML now becomes Mistral’s main shareholder, helping keep the threat of US ownership at bay at a critical time and reinforcing one of its unique selling points: its “sovereign AI” credentials, built on remaining independent from US companies.

From a technological perspective, Mistral AI has also developed a unique modus operandi, leveraging open-source models and targeting only enterprise customers, setting it apart from US competitors. Last June, it launched its first reasoning model, Magistral, focused on domain-specific multilingual reasoning, code, and maths. Using open source from the outset has helped it build a large developer ecosystem, long before DeepSeek’s disruption in the landscape drove competitors such as OpenAI to adopt open-source alternatives.

The company’s use of innovative mixture-of-experts (MoE) architectures and other optimizations means that its models are efficient in terms of computational resources while maintaining high performance, a key competitive differentiator. Its systems achieve high performance per unit of compute cost, making them more cost effective. Techniques such as sparse MoE allow capacity to scale without proportional increases in resource usage.
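
As a purely illustrative example of why sparse MoE keeps inference cheap, consider a layer stack with eight experts and top-2 routing. The parameter counts below are hypothetical and do not describe any specific Mistral model.

    # Illustrative-only arithmetic for a sparse mixture-of-experts model.
    total_experts = 8
    active_experts = 2                 # top-2 routing: only two experts run per token
    params_per_expert = 5e9            # parameters in each expert block (assumed)
    shared_params = 7e9                # attention, embeddings, and other always-on parameters (assumed)

    total_params = shared_params + total_experts * params_per_expert       # stored in memory
    active_params = shared_params + active_experts * params_per_expert     # used per token
    print(f"stored: {total_params/1e9:.0f}B parameters, active per token: {active_params/1e9:.0f}B")

In this toy configuration the model stores 47 billion parameters but only computes with 17 billion per token, which is the essence of the performance-per-compute-cost argument.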

In February 2024, Mistral AI launched Le Chat, a multilingual conversational assistant, positioning itself against OpenAI’s ChatGPT and Google Cloud’s Gemini but with more robust privacy credentials. The company has intensified efforts to expand its business platform and tools around Le Chat, recently releasing enterprise features such as advanced memory capabilities, additional capacity, and extensive third-party integrations at no cost to users. The latter include a connectors list, built on MCP, supporting platforms such as Databricks, Snowflake, GitHub, Atlassian, and Stripe, among many others. This move will help Mistral AI penetrate the enterprise market by democratizing access to advanced features, and it signals an ambitious strategy to achieve market dominance through integrated suites, not just applications.

Of course, the challenges are plentiful: Mistral AI’s scale is far behind that of its US counterparts, and estimates of LLM usage suggest it is not yet nibbling market share away from them. It has a mammoth task ahead. But this deal can carve a path for European ambitions in AI, and for the protection of European assets in an increasingly polarized world divided along geopolitical lines. Some of the largest European tech companies, including SAP and Capgemini, have tight links to Mistral AI. They could, in the future, make bids to expand their ecosystems through acquisitions of European AI labs, which have so often fallen into US hands. For ASML, which has so many Asian customers and whose revenues are going through a rough patch, the geopolitical turmoil of late has not been good news; this partnership brings a much-needed push in the realm of software, a key competitive enabler. After the US launched America’s AI Action Plan last July to strengthen US leadership in AI by removing red tape and regulation, the stakes are undoubtedly higher than ever.

The EU is a Trailblazer, and the AI Act Proves It

29 August 2025 at 12:06
B. Valle

Summary Bullets:

• On August 2, 2025, the second stage of the EU AI Act came into force, including obligations for general purpose models.

• The AI Act first came into force in February 2025, with the first set of applications built into law; the legislation follows a staggered approach with the last wave expected for August 2, 2027.

August 2025 has been marked by the enforcement of a new set of rules under the AI Act, the world’s first comprehensive AI legislation, which is being implemented in gradual stages. Like GDPR was for data privacy in the 2010s, the AI Act will be the global blueprint for governance of this transformative technology for decades to come. Recent news of the latest case of legal action, this time against OpenAI by the parents of 16-year-old Adam Raine, who ended his life after months of intensive use of ChatGPT, has thrown into stark relief the potential for harm and the need to regulate the technology.

The AI Act follows a risk management approach; it aims to regulate transparency and accountability for AI systems and their developers. Although it was enacted into law in 2024, the first wave of enforcement proper was implemented last February (please see GlobalData’s take on The AI Act: landmark regulation comes into force) covering “unacceptable risk,” including AI systems considered a clear threat to societal safety. The second wave, implemented this month, covers general purpose AI (GPAI) models and arguably is the most important one, at least in terms of scope. The next steps are expected to follow in August 2026 (“high-risk systems”) and August 2027 (final steps of implementation).

From August 2, 2025, GPAI providers must comply with transparency and copyright obligations when placing their models on the EU market. This applies not only to EU-based companies but to any organization with operations in the EU. GPAI models already on the market before August 2, 2025, must ensure compliance by August 2, 2027. For the purposes of the law, GPAI models include those trained with over 10^23 floating point operations (FLOP) and capable of generating language (whether text or audio), text-to-image, or text-to-video output.
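
For a sense of scale, a common rule of thumb estimates training compute as roughly six times the parameter count times the number of training tokens. The sketch below applies it to a hypothetical model purely for illustration; real figures depend on architecture and training details.

    # Rule-of-thumb training-compute estimate: approx. 6 * parameters * training tokens.
    def training_flop(params: float, tokens: float) -> float:
        return 6 * params * tokens

    estimate = training_flop(params=10e9, tokens=2e12)   # a hypothetical 10B-parameter model on 2T tokens
    print(f"{estimate:.1e} FLOP,", "above" if estimate > 1e23 else "below", "the 10^23 threshold")

Under this rough estimate, even a mid-sized 10-billion-parameter model trained on two trillion tokens lands above the threshold, which shows how broadly the GPAI obligations can apply.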

Providers of GPAI systems must keep technical documentation about the model, including a sufficiently detailed summary of its training corpus. In addition, they must implement a policy to comply with EU copyright law. Within the group of GPAI models there is a special tier considered to be of “systemic risk”: very advanced models that only a small handful of providers develop. Firms within this tier face additional obligations, for instance, notifying the European Commission when developing a model deemed to pose systemic risk and taking steps to ensure the model’s safety and security. The classification of which models pose systemic risk can change over time as the technology evolves. There are exceptions: AI used for national security, military, and defense purposes is exempted in the act. Some open-source systems are also outside the reach of the legislation, as are AI models developed using publicly available code.

The European Commission has published a template to help providers summarize the data used to train their models, as well as the GPAI Code of Practice, developed by independent experts as a voluntary tool for AI providers to demonstrate compliance with the AI Act. Signatories include Amazon, Anthropic, Cohere, Google, IBM, Microsoft, Mistral AI, OpenAI, and ServiceNow, but there are some glaring absences, including Meta (at the time of writing). The code covers transparency and copyright rules that apply to all GPAI models, with additional safety and security rules for the systemic risk tier.

The AI Act has drawn criticism because of its disproportionate impact on startups and SMBs, with some experts arguing that it should include exceptions for technologies that have yet to gain a hold on the general public and do not have a wide impact or potential for harm. Others say it could slow down progress among European organizations in the process of training their AI models, and that the rules are confusing. Last July, several tech lobbies, including CCIA Europe, urged the EU to pause implementation of the act, arguing that the rollout had been too rushed and had not weighed the potential consequences. Sound familiar?

However, the act has been developed with the collaboration of thousands of stakeholders in the private sector, at a time when businesses are craving regulatory guidance. It is also introducing standard security practices across the EU in a critical period of adoption, and it sets a global benchmark for others to follow in a time of great upheaval. After the AI Act, the US and other countries will find it increasingly hard to keep ignoring the calls for more responsible AI; it is a commendable effort that will make history.
