
National Design Studio looks to overhaul 27,000 federal websites — and is hiring a team to do it

23 January 2026 at 17:26

A private-sector tech leader tapped by the Trump administration to improve the federal government’s online presence is setting an ambitious goal — overhauling about 27,000 dot-gov websites.

Joe Gebbia, chief design officer of the United States and co-founder of Airbnb, said in a podcast interview Tuesday that the White House set out this goal when President Donald Trump signed an executive order last summer creating the National Design Studio.

“We’re fixing all of them,” Gebbia said Tuesday on the American Optimist show. Many of the federal government’s websites, he added, “look like they’re from the mid-90s.”

Gebbia began working with the Department of Government Efficiency in the early days of the Trump administration. At the Office of Personnel Management, he oversaw a long-anticipated modernization of the federal employee retirement system.

The National Design Studio so far has launched several new websites that serve as landing pages for some of the Trump administration’s policies on immigration, law enforcement and prescription drug prices.

As for next steps, Gebbia said his office will deliver “major updates,” including a refresh of existing federal websites, by July 4.

“It’s working because we are really pulling in veterans of Silicon Valley from a talent perspective. I think it’s working because this president really deeply cares about how things look, because he knows that esthetics matter,” he said.

The White House estimates that only 6% of federal websites are rated “good” for use on mobile devices. About 45% of federal websites are not mobile-friendly.

As part of the President’s Management Agenda, the Trump administration is looking to leverage technology to “deliver faster, more secure services” and “reduce the number of confusing government websites.”

The administration has already taken steps to eliminate websites that it deems unnecessary. Federal News Network first reported that the 24 largest federal agencies are preparing to eliminate more than 330 websites — about 5% of an inventory of 7,200 websites reviewed.

The National Design Studio is still recruiting new hires. Gebbia estimated that his office will eventually have a team of about 15 engineers and 15 designers.

“We’re still ramping up the team,” he said, adding that the National Design Studio has been able to “recruit some of the best and brightest minds of our era.”

“This is a once-in-a-lifetime moment where we have a shot on goal to actually upgrade the U.S. government, the way we present ourselves to the nation and to the world,” Gebbia said.

The idea for the National Design Studio began when Interior Secretary Doug Burgum asked Gebbia to improve Recreation.gov, a website for booking campsites, scheduling tours and obtaining hunting and fishing permits on federal lands. The site serves as an outdoor recreation system for 14 federal agencies.

“There’s a lot to be desired for when you have this incredible feature of the American experience, our national parks. They were being undersold in a way that they were showcased,” Gebbia said.

After working on Recreation.gov, Gebbia said he was getting similar requests from other Cabinet secretaries.

“I started to see there’s demand here for better design. There’s demand here for modernizing the digital surfaces of the government,” he said.

At that point, Gebbia said he made his pitch for the National Design Studio to Trump during a meeting at the Oval Office.

“What would it look like to have a national initiative to actually go in and up level and upgrade, not just one agency, not just one website, all the websites, all the agencies, all of the digital touch points between us, government and the American people?” he recalled.

According to the America by Design website, the White House is drawing inspiration from the Nixon administration’s beautification project in the 1970s. That project led to the creation of NASA’s iconic logo, branding for national parks and signage for the national highway system.

“My vision is that, at some point, somebody’s working at a startup and they go look at a dot-gov website to see how they did it. And we can actually create references for good design in the government, rather than be the butt of a joke,” Gebbia said.

So far, the National Design Studio has launched SafeDC.gov, a website meant to facilitate the Trump administration’s surge of federal law enforcement agents to Washington, D.C. It’s also launched TrumpCard.gov, a program meant to fast-track the green-card process for noncitizens seeking permanent residency in the United States — and who are able to pay a $15,000 processing fee and a $1 million or $5 million “gift” to the Commerce Department.

Its most recent website, https://trumprx.gov/, is still in the works. The website supports an administration goal of connecting consumers with lower-priced prescription drugs.

Gebbia said private-sector tech experts are interested in working with the National Design Studio and overcoming institutional barriers to change.

“Of course, you bump into things and all the processes and people saying, ‘Well, it’s always been done this way. Why would we change it?’ I think, though, there’s an incredible amount of momentum behind this — the excitement around America by Design, the excitement around the National Design Studio, and the excitement on the demand side of secretaries and people and agencies — ‘Yes, please fix this for us. We’re so happy you’re here to make us make this look good,'” he said.

The post National Design Studio looks to overhaul 27,000 federal websites — and is hiring a team to do it first appeared on Federal News Network.

© AP Photo/Alex Brandon

This U.S. Department of Education website page is seen on Jan. 24, 2025 in Washington. (AP Photo/Alex Brandon, File)

FedRAMP is getting faster, new automation and pilots promise approvals in months, not years

23 January 2026 at 15:34

Interview transcript

Terry Gerton We’re going to talk about one of everybody’s favorite topics, FedRAMP. It’s been around for years, but agencies are still struggling to get modern tools. So from your perspective, why is the process so hard for software and service companies to get through?

Irina Denisenko It’s a great question. Why is it so hard to get through FedRAMP? It is so hard to get through FedRAMP because at the end of the day, what is FedRAMP really here to do? It’s here to secure cloud software, to secure government data sitting in cloud software. You have to remember this all came together almost 15 years ago, which if you remember 15 years ago, 20 years ago, was kind of early days of all of us interacting with the internet. And we were still even, in some cases, scared to enter our credit card details onto an online website. Fast forward to today, we pay with our face when we get on our phone. We’ve come a long way. But the reality is cloud security hasn’t always been the “of course, it’s secure.” In fact, it has been the opposite: Of course, it’s unsecure, and it’s the internet, and that’s where you go to lose all your data and all your information. And so long story short, you have to understand that’s where the government is coming from. We need to lock everything down in order to make sure that whether it’s VA patient data, IRS data on our taxpayers, obviously anything in the DoW, any sort of information or data there, all of that stays secure. And so that’s why there are hundreds of controls that are applied to cloud environments in order to make sure and double sure and triple sure that that data is secure.

Terry Gerton You lived the challenge first-hand with your own company. What most surprised you about the certification process when you tackled it yourself?

Irina Denisenko What most surprised me when we tackled FedRAMP ourselves for the first time was that even if you have the resources (and specifically if you have $3 million to spend; you know, $3 million burning a hole in your pocket doesn’t happen often), and you have staff on U.S. soil, and you have the willingness to invest all of that in a three-year process to get certified, that is still not enough. What you need on top of that is an agency to say yes to sponsoring you. And when they say yes to sponsoring you, what they are saying yes to is to take on your cyber risk. And specifically what they’re saying yes to is to spend half a million dollars of taxpayer money, of agency budget, typically using contractors, to do an initial security review of your application. And then to basically get married to you and do something called continuous monitoring, which is a monthly meeting that they’re going to have with you forever. That agency is going to be your accountability partner and ultimately the risk bearer of you, the software provider, to make sure you are burning down all of the vulnerabilities, all of these CVEs, every finding in your cloud environment on the timeline that you’re supposed to do that. And that ends up costing an agency about $250,000 a year, again, in the form of contractors, tooling, etc. That was the most surprising to me, that again, even as a cloud service provider who’s already doing business with JPMorgan Chase, you know, healthcare systems, you name it, even that’s not enough. You need an agency sponsor, because at the end of the day, it’s the agency’s data and they have to protect it. And so they have to do that triple assurance of, yes, you said you’re doing the security stuff, but let us confirm that you’re doing the security stuff. That was the most surprising to me. And why, really, ultimately, we started Knox Systems, because what we do at Knox is we enable the inheritance model. So we are doing all of that with our sponsoring agencies, of which we have 15. Knox runs the largest FedRAMP managed cloud. And what that means is we host the production environment of our customers inside of our FedRAMP environment across AWS, Azure and GCP. And our customers inherit our sponsors. So they inherit the authorization from the Treasury, from the VA, from the Marines, etc., which means that the Marines, the Treasury, the VA didn’t have to spend an extra half a million upfront and $250K ongoing with every new application that was authorized. They are able to get huge bang for their buck by just investing that authorization, that sponsorship, into the Knox boundary. And then Knox does the work, the hard work, to ensure the security and ongoing authorization and compliance of all of the applications that we bring into our environment.

Terry Gerton I’m speaking with Irina Denisenko. She’s the CEO of Knox Systems. So it sounds like you found a way through the maze that was shorter, simpler, less expensive. Is FedRAMP 20X helping to normalize that kind of approach? How do you see it playing out?

Irina Denisenko Great question. FedRAMP 20X is a phenomenal initiative coming out of OMB and GSA. And really the crux of that is all about machine-readable and continuous authorization. Today, when I talked about continuous monitoring, that’s a monthly meeting that happens. And I kid you not, we, as a cloud service provider (again, we secure Adobe’s environment and many others), come with a spreadsheet, an actual spreadsheet that has all of the vulnerabilities listed from all the scans we’ve done over the last month, and anything that is still open from any prior months. And we review that spreadsheet, that actual Excel document, in the meeting with our agencies, and then, after that meeting, we upload that spreadsheet into a system called USDA on the FedCiv side, and into eMASS on the DOW/DISA side. And then they, on their side, download that spreadsheet and they put it into other systems. And I mean, that’s the process. I think no one is confused, or no one would argue that surely there’s a better way. And a better way would be a machine-readable way, whether that’s over an API, using a standard language like OSCAL. There are lots of ways to standardize, but it doesn’t have to be basically the equivalent of a clipboard and a pencil. And that’s what FedRAMP 20X is doing. It’s automating that information flow so that not only is it bringing down the amount of human labor that needs to be done to do all this tracking, but more importantly, this is cloud security. Just because you’re secure one second doesn’t mean you’re secure five seconds from now, right? You need to be actively monitoring this, actively reporting this. And if it’s taking you 30 days to let an agency know that you have a critical vulnerability, that’s crazy. You’ve got to tell them, you know, five minutes after you find out, or, to put a respectable buffer, a responsible buffer in place to allow you to mitigate and remediate before you notify more parties, maybe it’s a four-day buffer, but it’s certainly not 30 days. That’s what FedRAMP 20X is doing. We’re super excited about it. We are very supportive of it and have been actively involved in phase I and all subsequent phases.
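To make the contrast with the monthly spreadsheet concrete, here is a minimal sketch of what a single machine-readable vulnerability finding might look like. It is illustrative only; the field names and values are assumptions, not the actual OSCAL or FedRAMP 20X schema.

```python
import json
from datetime import datetime, timezone

# Hypothetical example: one row of the monthly vulnerability spreadsheet,
# expressed instead as a machine-readable finding (field names are illustrative).
finding = {
    "finding_id": "VULN-2026-0042",
    "system": "example-cloud-offering",
    "severity": "critical",
    "cve": "CVE-2026-12345",  # placeholder identifier
    "detected_at": datetime.now(timezone.utc).isoformat(),
    "status": "open",
    "remediation_due": "2026-02-15",
}

# Serialized this way, a finding can be exchanged over an API within minutes
# of discovery, rather than waiting for a monthly spreadsheet upload.
print(json.dumps(finding, indent=2))
```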

Terry Gerton Right, so phase II is scheduled to start shortly in 2026. What are you expecting to see as a result?

Irina Denisenko Well, phase I was all about FedRAMP low; phase II is all about FedRAMP moderate. And we expect that, you know, it’s going to really matter, because FedRAMP moderate is realistically where most cloud service offerings sit, FedRAMP moderate and high. And so that’s really the one that FedRAMP needs to get right. What we expect to see and hope to see is agencies actually authorizing off of these new frameworks. The key is really going to be what shape FedRAMP 20X takes in terms of machine-readable reporting on the security posture of any cloud environment. And then of course, the industry will standardize around that. So we’re excited to see what that looks like. And also how much AI the agencies, GSA, OMB and ultimately FedRAMP leverage, because there is a tremendous amount of productivity, but also security, that AI can provide. It can also introduce a lot of risks. And so we’re all collaborating with that agency, and we’re excited to see where they draw the bright red lines and where they embrace AI.

Terry Gerton So phase II is only gonna incorporate 10 companies, right? So for the rest of the world who’s waiting on these results, what advice do you have for them in the meantime? How can companies prepare better or how can companies who want to get FedRAMP certified now best proceed?

Irina Denisenko I think at the end of the day, it’s the inheritance model that Knox provides. And, you know, we’re not the only ones; actually, there are two key players: ourselves and Palantir. There’s a reason that large companies like Celonis, like OutSystems, like BigID, like Armis (which was just bought by ServiceNow for almost $8 billion) all choose Knox, and there’s a reason Anthropic chose Palantir and Grafana chose Palantir: because regardless of FedRAMP 20X, Rev 5, it doesn’t matter, there is a massive, massive premium put on getting innovative technology in the hands of our government faster. We have a window right now with the current administration prioritizing innovative technology and commercial off-the-shelf. You know, take the best out of Silicon Valley and use it in the government, or out of Europe, out of Israel, you name it, rather than build it yourself, customize it until you’re blue in the face and still get an inferior product. Just use the best of breed, right? But you need it to be secure. And we have this window as a country. We have a window as a country for the next few years here to get these technologies in. It takes a while to adopt new technologies. It takes a while to do a quantum leap, but I’ll give you a perfect example. Celonis, since becoming FedRAMPed on August 19th with Knox (they had been trying to get FedRAMPed for five years), has implemented three agencies. And what do they do? They do process mining and intelligence. They’re an $800 million company that’s 20 years old that competes, by the way, head on with Palantir’s core products, Foundry and Gotham and so on. They’ve implemented three agencies already to drive efficiency, to drive visibility, to drive process mining, to drive intelligence, to drive AI-powered decision-making. And that’s during the holidays, during a government shutdown. It’s speed that we’ve never seen before. If you want outcomes, you need to get these technologies into the hands of our agencies today. And so that’s why, you know, we’re such big proponents of this model, and also why our agencies and our federal advisory board, which includes the DHS CISO, the DOW CIO and the VA CIO, are also supportive of this, because ultimately it’s about serving the mission and doing it now, rather than waiting for some time in the future.

The post FedRAMP is getting faster, new automation and pilots promise approvals in months, not years first appeared on Federal News Network.

© Getty Images/iStockphoto/Kalawin

Cloud

Workforce, supply chain factor into reauthorizing National Quantum Initiative

House lawmakers are discussing a reauthorization of the National Quantum Initiative, with lawmakers eyeing agency prize challenges, workforce issues and supply chain concerns among other key updates.

During a hearing hosted by the House Committee on Science, Space and Technology on Thursday, lawmakers sought input from agencies leading quantum information science efforts. Chairman Brian Babin (R-Texas) said he is working with Ranking Member Zoe Lofgren (D-Calif.) on a reauthorization of the NQI.

“This effort seeks to reinforce U.S. leadership in quantum science, technology and engineering, address workforce challenges, and accelerate commercialization,” Babin said.

The National Quantum Initiative Act of 2018 created a national plan for quantum technologies spearheaded by agencies including the National Institute of Standards and Technology, the National Science Foundation and the Energy Department.

As the House committee works on its bill, Senate lawmakers earlier this month introduced a bipartisan National Quantum Initiative Reauthorization Act. The bill would extend the initiative for an additional five years through 2034 and reauthorize key agency programs.

The Senate bill would also expand the NQI to include the National Aeronautics and Space Administration’s (NASA) research initiatives, including quantum satellite communications and quantum sensing.

Meanwhile, in September, the White House named quantum information sciences as one of six priority areas in governmentwide research and development budget guidance. “Agencies should deepen focused efforts, such as centers and core programs, to advance basic quantum information science, while also prioritizing R&D that expands the understanding of end user applications and supports the maturation of enabling technologies,” the guidance states.

During the House hearing on Thursday, lawmakers sought feedback on several proposals to include in the reauthorization bill. Rep. Valerie Foushee (D-N.C.) said the Energy Department had sent lawmakers technical assistance in December, including a proposal to provide quantum prize challenge authority to agencies that sit on the quantum information science subcommittee of the National Science and Technology Council.

Tanner Crowder, quantum information science lead at Energy’s Office of Science, said the prize challenges would help the government use “programmatic mechanisms” to drive the field forward.

“We’ve talked a little bit about our notices of funding opportunities, and the prize challenge would just be another, another mechanism to drive the field forward, both in potential algorithmic designs, hardware designs, and it just gives us more flexibility to push the forefront of the field,” Crowder said.

Crowder was also asked about how the reauthorization bill should direct resources for sensor development and quantum network infrastructure.

“We want to be able to connect systems together, and we need quantum networks to do that,” Crowder responded. “It is impractical to send quantum information over classical networks, and so we need to continue to push that forefront and look to interconnect heterogeneous systems at the data scale level, so that we can actually extract this information and compute upon it.”

Lawmakers also probed the witnesses on supply chain concerns related to quantum information sciences. James Kushmerick, director of the Physical Measurement Laboratory at the National Institute of Standards and Technology, was asked about U.S. reliance on Europe and China for components like lasers and cooling equipment.

“One of the things we are looking for within the reauthorization is to kind of refocus and kind of onshore or develop new supply chains, not even just kind of duplicate what’s there, but move past that,” Kushmerick said. “Through the Quantum Accelerator Program, we’re looking to focus on chip-scale lasers and modular, small cryo-systems that can be deployed in different ways, as a change agent to kind of move forward.”

Several lawmakers also expressed concerns about the workforce related to quantum information sciences, with several pointing out that cuts to the NSF and changes to U.S. immigration policy under the Trump administration could hamper research and development.

Kushmerick said the NIST-supported Quantum Economic Development Consortium polled members in the quantum industry to better understand workforce challenges.

“It’s not just in quantum physicists leading the efforts,” Kushmerick said. “It’s really all the way through to engineers and technicians and people at all levels. So I really think we need a whole government effort to increase the pipeline through certificates to degrees and other activities.”

The post Workforce, supply chain factor into reauthorizing National Quantum Initiative first appeared on Federal News Network.

© AP Photo/Seth Wenig

This Feb. 27, 2018, photo shows electronics for use in a quantum computer in the quantum computing lab at the IBM Thomas J. Watson Research Center in Yorktown Heights, N.Y. Describing the inner workings of a quantum computer isn’t easy, even for top scholars. That’s because the machines process information at the scale of elementary particles such as electrons and photons, where different laws of physics apply. (AP Photo/Seth Wenig)

DLA turns to AI, ML to improve military supply forecasting

The Defense Logistics Agency — an organization responsible for supplying everything from spare parts to food and fuel — is turning to artificial intelligence and machine learning to fix a long-standing problem of predicting what the military needs on its shelves.

While demand planning accuracy currently hovers around 60%, DLA officials aim to push that baseline figure to 85% with the help of AI and ML tools. Improved forecasting will ensure the services have access to the right items exactly when they need them. 

“We are about 60% accurate on what the services ask us to buy and what we actually have on the shelf.  Part of that, then, is we are either overbuying in some capacity or we are under buying. That doesn’t help the readiness of our systems,” Maj. Gen. David Sanford, DLA director of logistics operations, said during the AFCEA NOVA Army IT Day event on Jan. 15.

Rather than relying mostly on historical purchase data, the models ingest a wide range of data that DLA has not previously used in forecasting. That includes supply consumption and maintenance data, operational data gleaned from wargames and exercises, as well as data that impacts storage locations, such as weather.

The models are tied to each weapon system and DLA evaluates and adjusts the models on a continuing basis as they learn. 

“We are using AI and ML to ingest data that we have just never looked at before. That’s now feeding our planning models. We are building individual models, we are letting them learn, and then those will be our forecasting models as we go forward,” Sanford said.
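As a rough illustration of the approach Sanford describes (not DLA’s actual models or data), the sketch below trains one forecasting model for a single weapon system on synthetic features such as maintenance actions, exercise tempo and a weather index, then scores its accuracy.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

# Illustrative only: synthetic monthly features for one weapon system.
# Columns: historical orders, maintenance actions, exercise tempo, storage-site weather index.
rng = np.random.default_rng(0)
X = rng.normal(size=(240, 4))
y = 100 + 25 * X[:, 0] + 10 * X[:, 1] + 5 * X[:, 2] + rng.normal(scale=8, size=240)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# One model per weapon system, retrained as new data arrives.
model = GradientBoostingRegressor(random_state=0)
model.fit(X_train, y_train)

accuracy = 1 - mean_absolute_percentage_error(y_test, model.predict(X_test))
print(f"Forecast accuracy: {accuracy:.0%}")
```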

Some early results already show measurable improvements. Forecasting accuracy for the Army’s Bradley Infantry Fighting Vehicle, for example, has improved by about 12% over the last four months, a senior DLA official told Federal News Network.

The agency has made the most progress working with the Army and the Air Force and is addressing “some final data-interoperability issues” with the Navy. Work with the Marine Corps is also underway. 

“The Army has done a really nice job of ingesting a lot of their sustainment data into a platform called Army 360. We feed into that platform live data now, and then we are able to receive that live data. We are ingesting data now into our demand planning models not just for the Army. We’re on the path for the Navy, and then the Air Force is next. We got a little more work to do with Marines. We’re not as accurate as where we need to be, and so this is our path with each service to drive to that accuracy,” Sanford said.

Demand forecasting, however, varies widely across the services — the DLA official cautioned against directly comparing forecasting performance.

“When we compare services from a demand planning perspective, it’s not an apples-to-apples comparison.  Each service has different products, policies and complexities that influence planning variables and outcomes. Broadly speaking, DLA is in partnership with each service to make improvements to readiness and forecasting,” the DLA official said.

The agency is also using AI and machine learning to improve how it measures true administrative and production lead times. By analyzing years of historical data, the tools can identify how industry has actually performed — rather than how long deliveries were expected to take — and factor that into DLA stock levels.  

“When we put out requests, we need information back to us quickly. And then you got to hold us accountable to get information back to you too quickly. And then on the production lead times, they’re not as accurate as what they are. There’s something that’s advertised, but then there’s the reality of what we’re getting and is not meeting the target that that was initially contracted for,” Sanford said.

The post DLA turns to AI, ML to improve military supply forecasting first appeared on Federal News Network.

© Federal News Network


With a new executive order clearing the path for federal AI standards, the question now is whether Congress will finish the job

21 January 2026 at 17:12

Interview transcript:

Terry Gerton Last time we spoke, we were talking about the potential of a patchwork of state laws that might stifle AI innovation. Now we don’t have a federal law, but we have an executive order from the president that creates a federal preemption framework and a task force that will specifically challenge those state laws. Last time we talked, we were worried about constitutionality. What do you think about this new construct?

Kevin Frazier Yeah, so this construct really tries to set forth a path for Congress to step up. I think everyone across the board at the state level in the White House is asking Congress to take action. And this, in many ways, is meant to smooth that path and ease the way forward for Congress to finally set forth a national framework. And by virtue of establishing an AI Litigation Task Force, the president is trying to make sure that Congress has a clear path to move forward. This AI Litigation Task Force is essentially charging the Department of Justice under the Attorney General to challenge state AI laws that may be unconstitutional or otherwise unlawful. Now, critically, this is not saying that states do not have the authority to regulate AI in certain domains, but merely giving and encouraging the AG to have a more focused regulatory agenda, focusing their litigation challenges on state AI laws that may have extra-territorial ramifications, that may violate the First Amendment, other things that the DOJ has always had the authority to do.

Terry Gerton Where do you think, then, that this sets the balance between innovation and state autonomy and federal authority?

Kevin Frazier So the balance is constantly being weighed here, Terry. I’d say that this is trying to strike a happy middle ground. We see that in the executive order, there’s explicit recognition that in many ways there may be state laws that actually do empower and encourage innovation. We know that in 2026, we’re going to see Texas, my home state, develop a regulatory sandbox that allows for AI companies to deploy their tools under fewer regulations, but with increased oversight. Utah has explored a similar approach. And so those sorts of state laws that are very much operating within their own borders, that are regulating the end uses of AI, or as specified in the executive order, things like data center locations, things like child safety protections and things like state government use of AI, those are all cordoned off and recognized by the EO as the proper domain of states. And now, the EO is really encouraging Congress to say, look, we’re trying to do our best to make sure that states aren’t regulating things like the frontier of AI, imposing obligations on AI development, but Congress, you need to step up because it is you, after all, that has the authority under the Constitution to regulate interstate commerce.

Terry Gerton Let’s go back to those sandboxes that you talked about, because we talked about those before and you talked about them as a smart way of creating a trial and error space for AI governance. Does this EO then align with those and do you expect more states to move in that direction?

Kevin Frazier Yes, so this EO very much encourages and welcomes state regulations that, again, aren’t running afoul of the Constitution, aren’t otherwise running afoul of federal laws or regulations that may preempt certain regulatory postures by the states. If you’re not doing something unconstitutional, if you’re not trying to violate the Supremacy Clause, there’s a wide range for states to occupy with respect to AI governance. And here, those sorts of sandboxes are the sort of innovation-friendly approaches that I think the White House and members of Congress and many state legislators would like to see spread and continue to be developed. And these are really the sorts of approaches that allow us to get used to and start acclimating to what I like to refer to as boring AI. The fact of the matter is most AI isn’t something that’s going to threaten humanity. It’s not something that’s going to destroy the economy tomorrow, so on and so forth. Most AI, Terry, is really boring. It’s things like improving our ability to detect diseases, improving our ability to direct the transmission of energy. And these sorts of positive, admittedly boring, uses of AI are the very sorts of things we should be trying to experiment with at the state level.

Terry Gerton I’m speaking with Dr. Kevin Frazier. He is the AI innovation and law fellow at the University of Texas School of Law. Kevin, one of the other things we’ve talked about is that the uncertainty around AI laws and regulations really creates a barrier to entry for innovators or startups or small businesses in the AI space. How do you think the EO affects that concern?

Kevin Frazier So the EO is very attentive to what I would refer to, not only as a patchwork, but increasingly what’s looking like a Tower of Babel approach that we’re seeing at the state level. So most recently in New York, we saw that the governor signed legislation that looks a lot like SB 53. Now for folks who aren’t spending all of their waking hours thinking about AI, SB 53 was a bill passed in California that regulates the frontier AI companies and imposes various transparency requirements on them. Now, New York in some ways copy and pasted that legislation. Folks may say, oh, this is great, states are trying to copy one another to make sure that there is some sort of harmony with respect to AI regulation. Well, the problem is how states end up interpreting those same provisions, what it means, for example, to have a reasonable model or what it means to adhere to certain transparency requirements, that may vary in terms of state-by-state enforcement. And so that’s really where there is concern among the White House with respect to extra-territorial laws, because if suddenly we see that an AI company in Utah or Texas feels compelled or is compelled to comply with New York laws or California laws, that’s where we start to see that concern about a patchwork.

Terry Gerton And what does that mean for innovators who may want to scale up? They may get a great start in Utah, for example, but how do they scale up nationwide if there is that patchwork?

Kevin Frazier Terry, this is a really important question because there’s an argument to be made that bills like SB 53 or the RAISE Act in New York include carve-outs for smaller AI labs. And some folks will say, hey, look, it says if you’re not building a model of this size or with this much money, or if you don’t have this many users, then great, you don’t have to comply with this specific regulation. Well, the problem is, Terry, I have yet to meet a startup founder who says, I can’t wait to build this new AI tool, but the second I hit 999,000 users, I’m just going to stop building. Or the second that I want to build a model that’s just one order of magnitude more powerful in terms of compute, I’m just going to turn it off, I’m going to throw in the towel. And so even when there are carve-outs, we see that startups have to begin to think about when they’re going to run into those regulatory burdens. And so even with carve-outs applied across the patchwork approach, we’re going to see that startups find it harder and harder to convince venture capitalists, to convince institutions, to bet and gamble on them. And that’s a real problem if we want to be the leaders in AI innovation.

Terry Gerton So let’s go back then to the DOJ’s litigation task force. How might that play into this confusion? Will it clarify it? Will it add more complexity? What’s your prognostication?

Kevin Frazier Yes, I always love to prognosticate, and I think that here we’re going to see some positive litigation be brought forward that allows some of these really important, difficult debates to finally be litigated. There’s questions about what it means to regulate interstate commerce in the AI domain. We need experts to have honest and frank conversations about this, and litigation can be a very valuable forcing mechanism for having folks suddenly say, hey, if you regulate this aspect of AI, then from a technical standpoint, it may not pose any issues. But if you regulate this aspect, now we’re starting to see that labs would have to change their behavior. And so litigation can be a very positive step that sends the signals to state legislators, hey, here are the areas where it’s clear for you to proceed and here are areas where the constitution says, whoa, that’s Congress’s domain. And so I’m optimistic that under the leadership of the attorney general and seeing folks like David Sacks, the AI and crypto czar, lend their expertise to these challenges as well, that we’re going to get the sort of information we need at the state and federal level for both parties to be more thoughtful about the sorts of regulations they should impose.

Terry Gerton All right, Kevin, underlying all of the things you’ve just talked about is the concern you raised at the beginning. Will Congress step up and enact national legislation? What should be at the top of their list if they’re going to move forward on this?

Kevin Frazier So the thing at the top of Congress’s list, in my opinion, has to be novel approaches, number one, to AI research. We just need to understand better how AI works, things like that black box concept we talk about frequently with respect to AI, and things like making sure that if AI ends up in the hands of bad actors, we know how to respond. Congress can really put a lot of energy behind those important AI research initiatives. We also need Congress to help make sure more data is available to more researchers and startups so that we don’t find ourselves just operating under the AI world of OpenAI, Microsoft and Anthropic. But we want to see real competition in this space. And Congress can make sure that the essential inputs to AI development are more broadly available. And finally, I think Congress can do a lot of work with respect to improving the amount of information we’re receiving from AI companies. So SB 53, for example, is a great example of a state bill that’s trying to garner more information from AI labs that can then lead to smarter, better regulation down the road. But the best approach is for Congress to take the lead on imposing those requirements, not states.

The post With a new executive order clearing the path for federal AI standards, the question now is whether Congress will finish the job first appeared on Federal News Network.

© Getty Images/Khanchit Khirisutchalual


Governing the future: A strategic framework for federal HR IT modernization

21 January 2026 at 15:27

The federal government is preparing to undertake one of the most ambitious IT transformations in decades: Modernizing and unifying human resources information technology across agencies. The technology itself is not the greatest challenge. Instead, success will hinge on the government’s ability to establish an effective, authoritative and disciplined governance structure capable of making informed, timely and sometimes difficult decisions.

The central tension is clear: Agencies legitimately need flexibility to execute mission-specific processes, yet the government must reduce fragmentation, redundancy and cost by standardizing and adopting commercial best practices. Historically, each agency has evolved idiosyncratic HR processes — even for identical functions — resulting in one of the most complex HR ecosystems in the world.

We need a governance framework that can break this cycle. It has to include a structured requirements-evaluation process, a systematic approach to modernizing outdated statutory constraints, and a rigorous mechanism to prevent “corner cases” from derailing modernization. The framework is based on a three-tiered governance structure to enable accountability, enforce standards, manage risk and accelerate decision making.

The governance imperative in HR IT modernization

Modernizing HR IT across the federal government requires rethinking more than just systems — it requires rethinking decision making. Technology will only succeed if governance promotes standardization, manages statutory and regulatory constraints intelligently, and prevents scope creep driven by individual agency preferences.

Absent strong governance, modernization will devolve into a high-cost, multi-point, agency-to-vendor negotiation where each agency advocates for its “unique” variations. Commercial vendors, who find arguing with or disappointing their customers to be fruitless and counterproductive, will ultimately optimize toward additional scope, higher complexity and extended timelines — that is, unless the government owns the decision framework.

Why governance is the central challenge

The root causes of this central challenge are structural. Agencies with different missions evolved different HR processes — even for identical tasks such as onboarding, payroll events or personnel actions. Many “requirements” cited today are actually legacy practices, outdated rules or agency preferences. And statutes and regulations are often more flexible than assumed, but agencies apply them rigidly in order to avoid any risk of perceived noncompliance or litigation.

Without centralized authority, modernization will replicate fragmentation in a new system rather than reduce it. Governance must therefore act as the strategic filter that determines what is truly required, what can be standardized and what needs legislative or policy reform.

A two-dimensional requirements evaluation framework

Regardless of the rigor associated with the requirements outlined at the outset of the program, implementers will encounter seemingly unique or unaccounted for “requirements” that appear to be critical to agencies as they begin seriously planning for implementation. Any federal HR modernization effort must implement a consistent, transparent and rigorous method for evaluating these new or additional requirements. The framework should classify every proposed “need” across two dimensions:

  • Applicability (breadth): Is this need specific to a single agency, a cluster of agencies, or the whole of government?
  • Codification (rigidity): Is the need explicitly required by law/regulation, or is it merely a policy preference or tradition?

This line of thinking leads to a decision matrix of sorts. For instance, identified needs that are found to be universal and well-codified are likely legitimate requirements and solid candidates for productization on the part of the HR IT vendor. For requirements that apply to a group of agencies or a single agency, or that are really based on practice or tradition, there may be a range of outcomes worth considering.
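One possible way to encode that decision matrix is sketched below. The enum names and dispositions are assumptions drawn from the two dimensions above, not a prescribed implementation.

```python
from enum import Enum

class Applicability(Enum):
    SINGLE_AGENCY = 1
    AGENCY_CLUSTER = 2
    GOVERNMENTWIDE = 3

class Codification(Enum):
    PRACTICE_OR_PREFERENCE = 1
    POLICY = 2
    LAW_OR_REGULATION = 3

def triage(applicability: Applicability, codification: Codification) -> str:
    """One possible reading of the two-dimensional decision matrix."""
    if applicability is Applicability.GOVERNMENTWIDE and codification is Codification.LAW_OR_REGULATION:
        return "Treat as a legitimate requirement; candidate for vendor productization."
    if codification is Codification.LAW_OR_REGULATION:
        return "Validate the statutory basis; consider configuration or an exception request."
    if codification is Codification.POLICY:
        return "Challenge against commercial best practice; pursue a policy waiver if cheaper."
    return "Default to the standard process; document the preference but do not build it."

print(triage(Applicability.SINGLE_AGENCY, Codification.PRACTICE_OR_PREFERENCE))
```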

Prior to an engineering discussion, the applicable governance body must ask of any new requirement: Can this objective be achieved by conforming to a recognized commercial best practice? If the answer is yes, the governance process should strongly favor moving in that direction.

This disciplined approach is crucial to keeping modernization aligned with cost savings, simplification and future scalability.

Breaking the statutory chains: A modern exception and reform model

A common pitfall in federal IT is the tendency to view outdated laws and regulations as immutable engineering constraints. There are in fact many government “requirements” — often at a very granular and prescriptive level — embedded in written laws and regulations that are either out of date or that simply do not make sense when viewed in the larger context of how HR gets done. The tendency is to look at these cases and say, “This is in the rule books, so we must build the software this way.”

But this is the wrong answer, for several reasons. Reform typically lags years behind technology, and changing laws or regulations is an arduous and lengthy process, but the government cannot afford to encode obsolete statutes into modern software. Treating every rule as a software requirement guarantees technical debt before launch.

The proposed mechanism: The business case exception

The Office of Management and Budget and the Office of Personnel Management have demonstrated the ability to manage simple, business-case-driven exception processes. This capability should be operationalized as a core component of HR IT modernization governance:

  • Immediate flexibility: OMB and OPM should grant agencies waivers to bypass outdated procedural requirements if adopting the standard best practice reduces administrative burden and cost.
  • Batch legislative updates: Rather than waiting for laws to change before modernizing, OPM and OMB can “batch up” these approved exceptions. On a periodic basis, they can submit these proven efficiencies through standard processes to modify laws and regulations to match the new, modernized reality.

This approach flips the traditional model. Instead of software lagging behind policy, the modernization effort drives policy evolution.

Avoiding the “corner case” trap: ROI-driven decision-making

In large-scale HR modernization, “corner cases” can become the silent destroyer of budgets and timelines. Every agency can cite dozens of rare events — special pay authorities, unusual personnel actions or unique workforce segments — that occur only infrequently.

The risk is that building system logic for rare events is extraordinarily expensive. These edge cases disproportionately consume design and engineering time. And any customization or productization can increase testing complexity and long-term maintenance cost.

Governance should enforce a strict return-on-investment rule: If a unique scenario occurs infrequently and costs more to automate than to handle manually, it should not be engineered into the system.

For instance, if a unique process occurs only 50 times a year across a 2-million-person workforce, it is cheaper to handle it manually outside the system than to spend millions customizing the software. If the government does not manage this evaluation itself, it will devolve into a “ping-pong” negotiation with vendors, leading to scope creep and vulnerability. The government must hold the reins, deciding what gets built based on value, not just request.
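A back-of-the-envelope screen along those lines might look like the following sketch; the dollar figures are hypothetical and only illustrate the comparison, not actual program costs.

```python
def should_automate(occurrences_per_year: int,
                    manual_cost_per_case: float,
                    build_cost: float,
                    annual_maintenance: float,
                    horizon_years: int = 5) -> bool:
    """Crude ROI screen: automate only if it beats manual handling over the horizon."""
    manual_total = occurrences_per_year * manual_cost_per_case * horizon_years
    automated_total = build_cost + annual_maintenance * horizon_years
    return automated_total < manual_total

# Illustrative numbers (not from the article): a process hit 50 times a year at
# $400 of staff time per manual case, versus a $2M build and $200K/year upkeep.
print(should_automate(50, 400, 2_000_000, 200_000))  # False: handle it manually
```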

Recommended governance structure

To operationalize the ideas above, the government should implement a three-tiered governance structure designed to separate strategy from technical execution.

  1. The executive steering committee (ESC)
  • Composition: Senior leadership from OMB, OPM and select agency chief human capital officers and chief information officers (CHCOs/CIOs).
  • Role: Defines the “North Star.” They hold the authority to approve the “batch exceptions” for policy and regulation. They handle the highest-level escalations where an agency claims a mission-critical need to deviate from the standard.

The ESC establishes the foundation for policy, ensures accountability, and provides air cover for standardization decisions that may challenge entrenched agency preferences.

  2. The functional control board (FCB)
  • Composition: Functional experts (HR practitioners) and business analysts.
  • Role: The “gatekeepers.” They utilize the two-dimensional framework to triage requirements. Their primary mandate is to protect the standard commercial best practice. They determine if a request is a true “need” or just a preference.

The FCB prevents the “paving cow paths” phenomenon by rigorously protecting the standard process baseline.

  3. The architecture review board (ARB)
  • Composition: Technical architects and security experts.
  • Role: Ensures that even approved variations do not break the data model or introduce technical debt. They enforce the return on investment (ROI) rule on corner cases — if the technical cost of a request exceeds its business value, they reject it.

The ARB enforces discipline on engineering choices and protects the system from fragmentation.

Federal HR IT modernization presents a rare opportunity to reshape not just systems, but the business of human capital management across government. The technology exists. The challenge — and the opportunity — lies in governance.

The path to modernization will not be defined by the software implemented, but by the discipline, authority, and insight of the governance structure that guides it.

Steve Krauss is a principal with SLK Executive Advisory. He spent the last decade working for GSA and OPM, including as the Senior Executive Service (SES) director of the HR Quality Service Management Office (QSMO).

The post Governing the future: A strategic framework for federal HR IT modernization first appeared on Federal News Network.

© Getty Images/iStockphoto/metamorworks


Securing AI in federal and defense missions: A multi-level approach

20 January 2026 at 17:07

As the federal government accelerates artificial intelligence adoption under the national AI Action Plan, agencies are racing to bring AI into mission systems. The Defense Department, in particular, sees the potential of AI to help analysts manage overwhelming data volumes and maintain an advantage over adversaries.

Yet most AI projects never make it out of the lab — not because models are inadequate, but because the data foundations, traceability and governance around them are too weak. In mission environments, especially on-premises and air-gapped cloud regions, trustworthy AI is impossible without secure, transparent and well-governed data.

To deploy AI that reaches production and operates within classification, compliance and policy constraints, federal leaders must view AI security in layers.

Levels of security and governance

AI covers a wide variety of fields such as machine learning, robotics and computer vision. For this discussion, let’s focus on one of AI’s fastest-growing areas: natural language processing and generative AI used as decision-support tools.

Under the hood, these systems, based on large language models (LLMs), are complex “black boxes” trained on vast amounts of public data. On their own, they have no understanding of a specific mission, agency or theater of operations. To make them useful in government, teams typically combine a base model with proprietary mission data, often using retrieval-augmented generation (RAG), where relevant documents are retrieved and used as context for each answer.

That’s where the security and governance challenges begin.

Layer 1: Infrastructure — a familiar foundation

The good news is that the infrastructure layer for AI looks a lot like any other high-value system. Whether an agency is deploying a database, a web app or an AI service, the ATO processes, network isolation, security controls and continuous monitoring apply.

Layer 2: The challenge of securing AI-augmented data

The data layer is where AI security diverges most sharply from commercial use. In RAG systems, mission documents are retrieved as context for model queries. If retrieval doesn’t enforce classification and access controls, the system can generate results that cause security incidents.

Imagine a single AI system indexing multiple levels of classified documents. Deep in the retrieval layer, the system pulls a highly relevant document to augment the query, but it’s beyond the analyst’s classification access levels. The analyst never sees the original document; only a neat, summarized answer that is also a data spill.

The next frontier for federal AI depends on granular, attribute-based access control.

Every document — and every vectorized chunk — must be tagged with classification, caveats, source system, compartments and existing access control lists. This is often addressed by building separate “bins” of classified data, but that approach leads to duplicated data, lost context and operational complexity. A safer and more scalable solution lies within a single semantic index with strong, attribute-based filtering.
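A minimal sketch of that attribute-based filter, assuming hypothetical metadata fields on each chunk, might look like this:

```python
from dataclasses import dataclass, field

# Hypothetical metadata carried by every vectorized chunk; names are illustrative,
# not a specific product's schema.
@dataclass
class Chunk:
    text: str
    classification: str                     # e.g. "UNCLASSIFIED", "SECRET"
    caveats: set = field(default_factory=set)
    acl: set = field(default_factory=set)   # groups allowed to see the source

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

def authorized(chunk: Chunk, user_clearance: str, user_groups: set) -> bool:
    """Attribute-based check applied before any chunk is passed to the model."""
    return (LEVELS[chunk.classification] <= LEVELS[user_clearance]
            and chunk.caveats <= user_groups          # user holds every caveat
            and bool(chunk.acl & user_groups))        # and is on the source ACL

def retrieve_for_user(candidates: list, user_clearance: str, user_groups: set) -> list:
    # Filter on attributes first; only the surviving context is sent to the LLM,
    # so a summarized answer cannot become a data spill.
    return [c for c in candidates if authorized(c, user_clearance, user_groups)]
```

The point of the design is that filtering happens on attributes before generation, so a single semantic index can serve every user without duplicating data into separate classified “bins.”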

Layer 3: Models and the AI supply chain

Agencies may use managed models, fine-tune their own, or import third-party or open-source models into air-gapped environments. In all cases, models should be treated as part of a software supply chain:

  • Keep models inside the enclave so prompts and outputs never cross uncontrolled boundaries.
  • Protect training pipelines from data poisoning, which can skew outputs or introduce hidden security risks.
  • Rigorously scan and test third-party models before use (a minimal sketch follows this list).
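As a small illustration of the scanning point above (file names, digests and paths are hypothetical), a pre-import check might verify a model artifact against an approved manifest before it ever enters the enclave:

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist of approved model artifacts and their digests,
# recorded when the model was scanned and tested during acquisition.
APPROVED_MODELS = {
    "llm-base-v1.safetensors": "<sha256-digest-recorded-at-approval>",
}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large model artifacts are not loaded into memory at once."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()

def verify_model_artifact(path: Path) -> bool:
    """Refuse to load any model file whose hash is not on the approved manifest."""
    return APPROVED_MODELS.get(path.name) == sha256_of(path)

# Usage (illustrative):
# if not verify_model_artifact(Path("/models/llm-base-v1.safetensors")):
#     raise RuntimeError("Model artifact is not on the approved manifest; do not load.")
```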

Without clear policy around how models are acquired, hosted, updated and retired, it’s easy for “one-off experiments” to become long-term risks.

The challenge at this level lies in the “parity gap” between commercial and government cloud regions. Commercial environments receive the latest AI services and their security enhancements much earlier. Until those capabilities are authorized and available in air-gapped regions, agencies may be forced to rely on older tools or build ad hoc workarounds.

Governance, logging and responsible AI

AI governance has to extend beyond the technical team. Policy, legal, compliance and mission leadership all have a stake in how AI is deployed.

Three themes matter most:

  1. Traceability and transparency. Analysts must be able to see which sources informed a result and verify the underlying documents.
  2. Deep logging and auditing. Each query should record who asked what, which model ran, what data was retrieved, and which filters were applied (a minimal sketch follows this list).
  3. Alignment with emerging frameworks. DoD’s responsible AI principles and the National Institute of Standards and Technology’s AI risk guidance offer structure, but only if policy owners understand AI well enough to apply them — making education as critical as technology.
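A minimal sketch of such an audit entry, with assumed field names rather than any mandated schema, could look like this:

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(user_id: str, query: str, model_id: str,
                 retrieved_doc_ids: list, filters_applied: dict) -> dict:
    """One illustrative audit entry per query; field names are assumptions."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        # Hash the query so the raw (possibly classified) text is not stored in the log.
        "query_sha256": hashlib.sha256(query.encode()).hexdigest(),
        "model_id": model_id,
        "retrieved_doc_ids": retrieved_doc_ids,
        "filters_applied": filters_applied,
    }

entry = audit_record("analyst-042", "example question", "llm-enclave-v1",
                     ["doc-17", "doc-23"], {"clearance": "SECRET", "caveats": []})
print(json.dumps(entry, indent=2))
```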

Why so many pilots stall — and how to break through

Industry estimates suggest that up to 95% of AI projects never make it to full production. In federal environments, the stakes are higher, and the barriers are steeper. Common reasons include vague use cases, poor data curation, lack of evaluation to detect output drift, and assumptions that AI can simply be “dropped in.”

Data quality in air-gapped projects is also a factor. If your query is about “missiles” but your system is mostly indexed with documents about “tanks,” analysts can expect poor results, also called “AI hallucinations.” They won’t trust the tool, and the project will quietly die. AI cannot invent high-quality mission data where none exists.

There are no “quick wins” for AI in classified missions, but there are smart starting points:

  • Build on a focused decision-support problem.
  • Inventory and tag mission data.
  • Bring security and policy teams in early.
  • Establish an evaluation loop to test outputs.
  • Design for traceability and explainability from day one.

Looking ahead

In the next three to five years, we can expect AI platforms, both commercial and government, to ship with stronger built-in security, richer monitoring, and more robust audit features. Agent-based AI pipelines with autonomous security checks that can pre-filter queries and post-process answers (for example, to enforce sentiment policies or redact PII) will become more common. Yet even as these security requirements and improvements accelerate, national security environments face a unique challenge: The consequences of failure are too high to rely on blind automation.

Agencies that treat AI as a secure system — grounded in strong data governance, layered protections and educated leadership — will be the ones that move beyond pilots to real mission capability.

Ron Wilcom is the director of innovation for Clarity Business Solutions.

The post Securing AI in federal and defense missions: A multi-level approach first appeared on Federal News Network.

© Getty Images/ThinkNeo


Federal CIOs want AI-improved CX; customers want assured security

20 January 2026 at 15:50

 

Interview transcript:

Terry Gerton Gartner’s just done a new survey that’s very interesting around how citizens perceive how they should share data with the government. Give us a little bit of background on why you did the survey.

Mike Shevlin We’re always looking at, and talking to people about, doing some “voice of the customer” work, those kinds of things, as [government agencies] do development. This was an opportunity for us to get a fairly large-sample voice-of-the-customer response around some of the things we see driving digital services.

Terry Gerton There’s some pretty interesting data that comes out of this. It says 61% of citizens rank secure data handling as extremely important, but only 41% trust the government to protect their personal information. What’s driving that gap?

Mike Shevlin To some extent, we have to separate trust in government from the security pieces. You know, if we looked strictly at “do citizens expect us to secure their data?” you know, that’s up in the 90% range. So we’re really looking at something a little bit different with this. We’re looking at, and I think one of the big points that came out of the survey, is citizens’ trust in how government is using their data. To think of this, you have to think about kind of the big data. So big data is all about taking a particular dataset and then enriching it with data from other datasets. And as a result, you can form some pretty interesting pictures about people. One of the things that jumps to mind for me, and again, more on the state and local level, is automated license plate readers. What can government learn about citizens through the use of automated license plate readers? Well, you know, it depends on how we use them, right? So if we’re using it and we’re keeping that data in perpetuity, we can probably get a pretty good track on where you are, where you’ve been, the places that you visit. But that’s something that citizens are, of course, concerned about their privacy on. So I think that the drop is not between, are you doing the right things to secure my data while you’re using it, but more about, okay, are you using it for the right purposes? How do I know that? How do you explain it to me?

Terry Gerton It seems to me like the average person probably trusts their search engine more than they trust the government to keep that kind of data separate and secure. But this is really important as the government tries to deliver easier front-facing interfaces for folks, especially consumers of human services programs like SNAP and homeless assistance and those kinds of things. So how important is transparency in this government use of data? And how can the government meet that expectation while still perhaps being able to enrich this data to make the consumer experience even easier?

Mike Shevlin When I come into a service, I want you to know who I am. I want to know that you’re providing me a particular service, that it’s customized. You know, you mentioned the search engine. Does Google or Amazon know you very well? Yeah, I’d say they probably know you better than the government knows you. So my expectation is partly driven out of my experience with the private sector. But at the same time, particularly since all the craze around generative AI, citizens are now much more aware of what else data can do, and as a result, they’re looking for much more control around their own privacy. If you look, for example, at Europe with the GDPR, they’ve got some semblance of control. I can opt out. I can have my data removed. The U.S. has an awful lot of privacy legislation, but nothing as overarching as that. We’ve got HIPAA. We’ve got protections around personally identifiable information. But we don’t have something as overarching as what they have in, say, Spain. In Spain, if I deal with the government, I can say yes, I only want this one agency to use my data and I don’t want it going anywhere else. We don’t have that in the U.S. I think it’s something that is an opportunity for government digital services to begin to make some promises to citizens and then fulfill those promises or prove that they’re fulfilling those promises.

Terry Gerton I’m speaking with Mike Shevlin. He’s senior director analyst at Gartner Research. Well, Mike, you introduced AI to the conversation, so I’m going to grab that and privacy. How does AI complicate trust and what role does explainable AI play here, in terms of building citizen trust that their privacy will be protected?

Mike Shevlin I think AI complicates trust in part from generative AI and in part from our kind of mistrust in computers as a whole, as entities, as we start to see these things become more human-like. And that’s really, I think, the big thing that generative AI did to us — now we can talk to a computer and get a result. Explainable AI matters because what we’ve seen is that the answers from generative AI aren’t always right. But that’s not what it’s built for. It’s built to make something that sounds like a human. I think the explainable AI part is particularly important for government because I want to know as a citizen, if you’re using my data, if you’re then running it through an AI model and coming back with a result that affects my life, my liberty, my prosperity, how do I know that that was the right answer? And that’s where the explainable AI pieces really come into play. Generative AI is not going to do that, at least not right now, though they’re working on it, because it builds its decision tree as it evaluates the question, unlike some of the more traditional AI models, the machine learning or graph AI, where those decision trees are pre-built. With those, it’s much easier to follow back through and say, this is why we got the answer we did. You can’t really do that right now with gen AI.
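To illustrate the point about pre-built decision trees, the short sketch below trains a small scikit-learn tree on invented eligibility data and prints the exact rules behind a single prediction; the features, data and labels are assumptions made purely for the example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Invented benefits-eligibility data: [age, income_in_thousands]; labels keyed loosely to income
X = np.array([[25, 20], [40, 85], [67, 30], [30, 45], [72, 18], [50, 120]])
y = np.array([1, 0, 1, 1, 1, 0])  # 1 = eligible, 0 = not eligible (toy labels)
features = ["age", "income_k"]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Trace exactly which rules fired for one hypothetical applicant
applicant = np.array([[68, 25]])
for node in clf.decision_path(applicant).indices:
    f = clf.tree_.feature[node]
    if f < 0:  # leaf node: report the final decision
        print("Decision:", "eligible" if clf.predict(applicant)[0] == 1 else "not eligible")
    else:      # internal node: report the rule that was applied
        op = "<=" if applicant[0, f] <= clf.tree_.threshold[node] else ">"
        print(f"{features[f]} = {applicant[0, f]} {op} {clf.tree_.threshold[node]:.1f}")
```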

Terry Gerton We’re talking to folks in federal agencies every day who are looking for ways to deploy AI, to streamline their backlogs, to integrate considerations, to flag applications where there may be actions that need to be taken, or pass through others that look like they’re clear. From the government’s perspective, how much of that needs to be explained or disclosed to citizens?

Mike Shevlin That’s one of the things I really like about the GDPR: It lays out some pretty simple rules around what’s the risk level associated with this. So for example, if the government is using AI to summarize a document, but then someone is reviewing that summary and making a decision on it, I have less concern than I have if that summary becomes the decision. So I think that’s the piece to really focus on as we look at this and some of the opportunities. Gartner recommends combining AI models, and this will become even more important as we move into the next era of agentic AI or AI agents, because now we’re really going to start having the machines do things for us. And I think that explainability becomes really appropriate.

Terry Gerton What does this mean for contractors who are building these digital services? How can they think about security certifications or transparency features as they’re putting these new tools together?

Mike Shevlin The transparency features are incumbent upon government to ask for. The security pieces, you know, we’ve got FedRAMP, we’ve got some of the other pieces. But if you look at the executive orders on AI, transparency and explainability are among the pillars in those executive orders. So, certainly, government entities should be asking for some of those things. I’m pulling from some law enforcement examples, because that’s usually my specific area of focus. But when I look at some of the Drone as a First Responder programs, and I think it was San Francisco that just released their “here’s all the drone flights that we did, here’s why we did them,” so that people can understand: Hey, yeah, this is some AI that’s involved in this, this is some remote gathering, but here’s what we did and why. And that kind of an audit into the system is huge for citizen confidence. I think those are the kinds of things that government should be thinking about and asking for in their solicitations. How do we prove to citizens that we’re really doing the right thing? How can we show them that if we say we’re going to delete this data after 30 days, we’re actually doing that?

Terry Gerton So Mike, what’s your big takeaway from the survey results that you would want to make sure that federal agencies keep in mind as they go into 2026 and they’re really moving forward in these customer-facing services?

Mike Shevlin So my big takeaway is absolutely around transparency. There’s a lot to be said for efficiency, there’s a lot to be said for personalization. But I think the biggest thing that came from this survey for me was, we all know security is important. We’ve known that for a long time. Several administrations have talked about it as a big factor. And we have policies and standards around that. But the transparency pieces, I think, we’re starting to get into that. We need to get into that a little faster. I think that’s probably one of the quickest wins for government if we can do that.

The post Federal CIOs want AI-improved CX; customers want assured security first appeared on Federal News Network.


Why Uncle Sam favors AI-forward government contractors — and how contractors can use that to their advantage

16 January 2026 at 17:03

Read between the lines of recent federal policies and a clear message to government contractors begins to emerge: The U.S. government isn’t just prioritizing deployment of artificial intelligence in 2026. It wants the contractors to whom it entrusts its project work to do likewise.

That message, gleaned from memoranda issued by the White House Office of Management and Budget, announcements out of the Defense Department’s Chief Digital and Artificial Intelligence Office, statements from the General Services Administration and other recent actions, suggests that when it comes to evaluating government contractors for potential contract awards, the U.S. government in many instances will favor firms that are more mature in their use and governance of AI.

That’s because, in the big picture, firms that are more AI-mature — that employ it with strong governance and oversight — will tend to use and share data to make better decisions and communicate more effectively, so their projects and business run more efficiently and cost-effectively. That in turn translates into lower risk and better value for the procuring agency. Agencies apparently are recognizing the link between AI and contractor value. Based on recent contracting trends along with my own conversations with contracting executives, firms that can demonstrate they use AI-driven tools and processes in key areas like project management, resource utilization, cost modeling and compliance are winning best-value assessments even when they aren’t the cheapest.

To simply dabble in AI is no longer enough. Federal agencies and their contracting officers are putting increased weight on the maturity of a contractor’s AI program, and the added value that contractor can deliver back to the government in specific projects. How, then, can contractors generate extra value using AI in order to be a more attractive partner to federal contracting decision-makers?

Laying the groundwork

Let’s dig deeper into the “why” behind AI. For contractors, it’s not just about winning more government business. Big picture: It’s about running projects and the overall business more efficiently and profitably.

What’s more, being an AI-forward firm isn’t about automating swaths of a workforce out of a job. Rather, AI is an enabler and multiplier of human innovation. It frees people to focus on higher-value work by performing tasks on their behalf. It harnesses the power of data to surface risks, opportunities, trends and potential issues before they escalate into larger problems. Its predictive power promotes anticipatory actions rather than reactive management. The insights it yields, when combined with the collective human experience, institutional knowledge and business acumen inside a firm, lead to better-informed human decision-making.

For AI to provide benefits and value both internally and to customers, it requires a solid data foundation underneath it. Clean, connected and governed data is the lifeblood that AI models must have to deliver reliable outputs. If the data used to train those models is incomplete, siloed, flawed or otherwise suspect, the output from AI models will tend to be suspect, too. So in building a solid foundation for AI, a firm would be wise to ensure it has an integrated digital environment in place (with key business systems like enterprise resource planning [ERP], customer relationship management [CRM] and project portfolio management [PPM] connected) to enable data to flow unimpeded. Nowadays, federal contracting officers and primes are evaluating contractors based on the maturity of their AI programs, as well as on the maturity of their data-management programs in terms of hygiene, security and governance.
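As a rough sketch of what a connected, governed data foundation means in practice, the example below joins hypothetical ERP and CRM exports on a shared project key and flags records that fail basic quality checks before they feed any AI model; all field names and values are assumptions.

```python
import pandas as pd

# Hypothetical exports from ERP and CRM systems (field names are illustrative)
erp = pd.DataFrame({
    "project_id": ["P-100", "P-101", "P-102"],
    "budget": [250_000, 480_000, None],          # missing budget -> quality issue
    "spend_to_date": [120_000, 510_000, 75_000],
})
crm = pd.DataFrame({
    "project_id": ["P-100", "P-101", "P-103"],
    "agency_customer": ["DLA", "GSA", "DHS"],
})

# Integrated view: one governed record per project, with a marker showing where each row came from
merged = erp.merge(crm, on="project_id", how="outer", indicator=True)

# Basic quality gates before the data feeds any AI model
issues = merged[(merged["budget"].isna()) | (merged["_merge"] != "both")]
clean = merged[(merged["budget"].notna()) & (merged["_merge"] == "both")]

print("Records ready for AI use:")
print(clean[["project_id", "budget", "spend_to_date", "agency_customer"]])
print("Records flagged for data stewards:", issues["project_id"].tolist())
```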

They’re also looking closely at the guardrails contractors have around their AI program: appropriate oversight, human-in-the-loop practices and governance structures. Transparency, auditability and explainability are paramount, particularly in light of regulations such as the Federal Acquisition Regulations, Defense Federal Acquisition Regulation Supplement, and Cybersecurity Maturity Model Certification. It’s worth considering developing (and keeping up-to-date) an AI capabilities and governance statement that details how and where your firm employs AI, and the structures it uses to oversee its AI capabilities. A firm then can include that statement in the proposals it submits.

AI use cases that create value

Having touched on the why and how behind AI, let’s explore some of the areas where contractors could be employing intelligent automation, predictive engines, autonomous agents, generative AI co-pilots and other capabilities to run their businesses and projects more efficiently. With these approaches in mind, contractors can deliver more value to their federal government customers.

  1. Project and program management: AI has a range of viable use cases that create value inside the project management office. On the process management front, for example, it can automate workflows and processes. Predictive scheduling, cost variance forecasting, automated estimate at completion (EAC) updates, and project triage alerts are also areas where AI is proving its value. For example, AI capabilities within an ERP system can alert decision-makers to cost trends and potential overruns, and offer suggestions for how to address them. They also can provide project managers with actionable, up-to-the-minute information on project status, delays, milestones, cost trends, potential budget variances and resource utilization.

Speaking of resources, predictive tools (skills graphs, staffing models, et cetera) can help contractors forecast talent needs and justify salary structures. They also support resource allocation and surge requirements. Ultimately, these tools help optimize the composition of project teams by analyzing project needs across the firm, changing circumstances and peoples’ skills, certifications and performance. It all adds up to better project outcomes and better value back to the government agency customer.

  2. Finance and accounting: From indirect rate modeling to anomaly detection in timesheets and cost allowability, AI tools can minimize the financial and accounting risk inside a contract. They can alert teams to issues related to missing, inconsistent or inaccurate data, helping firms avoid compliance issues. Using AI, contractors also can expedite invoicing on the accounts receivable side as well as processes on the accounts payable side to provide clarity to both the customer and internal decision-makers.
  3. Compliance: Contractors carry a heavy reporting and compliance burden and live under the constant shadow of an audit. AI is proving valuable as a compliance support tool, with its ability to interpret regulatory language and identify compliance risks like mismatched data or unallowable costs. AI also can create, then confirm compliance with, policies and procedures by analyzing and applying rules, monitoring time and expense entries, gathering and formatting data for specific contractual reporting requirements, and detecting and alerting project managers to data disparities.
  4. Business development and capture: AI can help firms uncover and win new business by identifying relevant and winnable opportunities, and through proposal development that harnesses business data tailored to solicitation requirements. Using AI-driven predictive analytics, companies can develop a scoring system and decision matrix to apply to their go or no-go decisions (a simple version is sketched after this list). Firms can also use AI to handle much of the heavy lifting with proposal creation, significantly reducing time-to-draft and proposal-generation costs, while boosting a firm’s proposal capacity substantially. Intelligent modeling capabilities can recommend optimal pricing and rate strategies for a proposal.
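A weighted go/no-go scoring matrix of the kind described above might look like the following sketch; the criteria, weights and threshold are purely illustrative.

```python
# Hypothetical go/no-go scoring matrix: criteria, weights and 1-5 scores are illustrative only
CRITERIA_WEIGHTS = {
    "customer_relationship": 0.30,
    "technical_fit": 0.25,
    "incumbent_position": 0.20,
    "price_competitiveness": 0.15,
    "resource_availability": 0.10,
}
GO_THRESHOLD = 3.5  # assumed cutoff for pursuing an opportunity


def score_opportunity(scores: dict[str, float]) -> tuple[float, str]:
    """Return the weighted score and a go/no-go recommendation."""
    total = sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)
    return round(total, 2), ("GO" if total >= GO_THRESHOLD else "NO-GO")


opportunity = {
    "customer_relationship": 4,
    "technical_fit": 5,
    "incumbent_position": 2,
    "price_competitiveness": 3,
    "resource_availability": 4,
}
print(score_opportunity(opportunity))  # prints (3.7, 'GO') for these toy scores
```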

As much as the U.S. government is investing to become an AI-forward operation, logic suggests that it would prefer that its contractors be similarly AI-savvy in their use — and governance — of intelligent tools. In the world of government contracting, we’re approaching a point where winning business from the federal government could depend on how well a firm can leverage the AI tools at hand to demonstrate and deliver value.

 

Steve Karp is chief innovation officer for Unanet.

The post Why Uncle Sam favors AI-forward government contractors — and how contractors can use that to their advantage first appeared on Federal News Network.


8 federal agency data trends for 2026

14 January 2026 at 14:54

If 2025 was the year federal agencies began experimenting with AI at scale, then 2026 will be the year they rethink their entire data foundations to support it. What’s coming next is not another incremental upgrade. Instead, it’s a shift toward connected intelligence, where data is governed, discoverable and ready for mission-driven AI from the start.

Federal leaders increasingly recognize that data is no longer just an IT asset. It is the operational backbone for everything from citizen services to national security. And the trends emerging now will define how agencies modernize, secure and activate that data through 2026 and beyond.

Trend 1: Governance moves from manual to machine-assisted

Agencies will accelerate the move toward AI-driven governance. Expect automated metadata generation, AI-powered lineage tracking, and policy enforcement that adjusts dynamically as data moves, changes and scales. Governance will finally become continuous, not episodic, allowing agencies to maintain compliance without slowing innovation.
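One way to picture machine-assisted governance is lineage capture that happens automatically whenever a transformation runs, rather than being documented by hand. The decorator below is a simplified, hypothetical sketch; a real catalog or lineage service would replace the in-memory log.

```python
import functools
import json
from datetime import datetime, timezone

LINEAGE_LOG = []  # in practice this would be a governed catalog, not an in-memory list


def track_lineage(source: str, destination: str):
    """Record what ran, when, and which datasets it touched, every time the step executes."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            LINEAGE_LOG.append({
                "step": func.__name__,
                "source": source,
                "destination": destination,
                "ran_at": datetime.now(timezone.utc).isoformat(),
            })
            return result
        return wrapper
    return decorator


@track_lineage(source="raw_claims", destination="curated_claims")
def curate_claims(rows):
    # Toy transformation: drop records with no claim ID
    return [r for r in rows if r.get("claim_id")]


curate_claims([{"claim_id": "C-1"}, {"claim_id": None}])
print(json.dumps(LINEAGE_LOG, indent=2))
```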

Trend 2: Data collaboration platforms replace tool sprawl

2026 will mark a turning point as agencies consolidate scattered data tools into unified data collaboration platforms. These platforms integrate cataloging, observability and pipeline management into a single environment, reducing friction between data engineers, analysts and emerging AI teams. This consolidation will be essential for agencies implementing enterprise-wide AI strategies.

Trend 3: Federated architectures become the federal standard

Centralized data architectures will continue to give way to federated models that balance autonomy and interoperability across large agencies. A hybrid data fabric — one that links but doesn’t force consolidation — will become the dominant design pattern. Agencies with diverse missions and legacy environments will increasingly rely on this approach to scale AI responsibly.

Trend 4: Integration becomes AI-first

Application programming interfaces (APIs), semantic layers and data products will increasingly be designed for machine consumption, not just human analysis. Integration will be about preparing data for real-time analytics, large language models (LLMs) and mission systems, not just moving it from point A to point B.

Trend 5: Data storage goes AI-native

Traditional data lakes will evolve into AI-native environments that blend object storage with vector databases, enabling embedding search and retrieval-augmented generation. Federal agencies advancing their AI capabilities will turn to these storage architectures to support multimodal data and generative AI securely.
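The retrieval half of retrieval-augmented generation can be sketched with nothing more than vectors and cosine similarity; in the toy example below the "embeddings" are hand-written stand-ins rather than output from a real embedding model.

```python
import numpy as np

# Toy document store: in an AI-native environment these vectors would come from an embedding model
documents = {
    "doc_logistics": np.array([0.9, 0.1, 0.0]),
    "doc_benefits":  np.array([0.1, 0.8, 0.2]),
    "doc_cyber":     np.array([0.0, 0.2, 0.9]),
}


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def retrieve(query_vec: np.ndarray, k: int = 2) -> list[str]:
    """Return the k most similar documents to pass to a generative model as context."""
    ranked = sorted(documents.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [name for name, _ in ranked[:k]]


query = np.array([0.05, 0.85, 0.15])  # a query "about benefits," as a toy vector
print(retrieve(query))                # doc_benefits should rank first
```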

Trend 6: Real-time data quality becomes non-negotiable

Expect a major shift from reactive data cleansing to proactive, automated data quality monitoring. AI-based anomaly detection will become standard in data pipelines, ensuring the accuracy and reliability of data feeding AI systems and mission applications. The new rule: If it’s not high-quality in real time, it won’t support AI at scale.
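A very small version of automated quality monitoring is a z-score check applied to each incoming batch; the threshold of 3 is a common default, and the record counts below are synthetic.

```python
import statistics


def detect_anomalies(history: list[float], new_values: list[float], z_threshold: float = 3.0):
    """Flag incoming values that sit far outside the historical distribution."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid divide-by-zero on a flat history
    return [v for v in new_values if abs(v - mean) / stdev > z_threshold]


# Synthetic pipeline metric: daily record counts from a data feed
history = [10_200, 10_150, 10_310, 10_280, 10_190, 10_240]
incoming = [10_260, 2_050, 10_300]  # 2_050 looks like a partial load

bad = detect_anomalies(history, incoming)
if bad:
    print(f"Quarantine batch, anomalous values: {bad}")
```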

Trend 7: Zero trust expands into data access and auditing

As agencies mature their zero trust programs, 2026 will bring deeper automation in data permissions, access patterns and continuous auditing. Policy-as-code approaches will replace static permission models, ensuring data is both secure and available for AI-driven workloads.
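Policy-as-code can be as simple as access rules expressed in version-controlled code and evaluated on every request; the attributes and rules in the sketch below are hypothetical, not any agency's actual policy.

```python
from dataclasses import dataclass


@dataclass
class AccessRequest:
    role: str
    clearance: str
    dataset_sensitivity: str
    purpose: str


# Hypothetical rules; in practice these would live in a policy engine and be audited on every evaluation
ALLOWED_PURPOSES = {"mission_analytics", "audit"}
CLEARANCE_RANK = {"public": 0, "cui": 1, "secret": 2}


def evaluate(req: AccessRequest) -> bool:
    """Grant access only if the clearance covers the data and the purpose is approved."""
    covers_data = CLEARANCE_RANK[req.clearance] >= CLEARANCE_RANK[req.dataset_sensitivity]
    return covers_data and req.purpose in ALLOWED_PURPOSES


print(evaluate(AccessRequest("analyst", "cui", "cui", "mission_analytics")))  # True
print(evaluate(AccessRequest("analyst", "public", "secret", "audit")))        # False
```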

Trend 8: Workforce roles evolve toward human-AI collaboration

The rise of generative AI will reshape federal data roles. The most in-demand professionals won’t necessarily be deep coders. They will be connectors who understand prompt engineering, data ethics, semantic modeling and AI-optimized workflows. Agencies will need talent that can design systems where humans and machines jointly manage data assets.

The bottom line: 2026 is the year of AI-ready data

In the year ahead, the agencies that win will build data ecosystems designed for adaptability, interoperability and human–AI collaboration. The outdated mindset of “collect and store” will be replaced by “integrate and activate.”

For federal leaders, the mission imperative is clear: Make data trustworthy by default, usable by design, and ready for AI from the start. Agencies that embrace this shift will move faster, innovate safely, and deliver more resilient mission outcomes in 2026 and beyond.

Seth Eaton is vice president of technology & innovation at Amentum.

The post 8 federal agency data trends for 2026 first appeared on Federal News Network.


A data mesh approach: Helping DoD meet 2027 zero trust needs

13 January 2026 at 16:54

As the Defense Department moves to meet its 2027 deadline for completing a zero trust strategy, it’s critical that the military can ingest data from disparate sources while also being able to observe and secure systems that span all layers of data operations.

Gone are the days of secure moats. Interconnected cloud, edge, hybrid and services-based architectures have created new levels of complexity — and more avenues for bad actors to introduce threats.

The ultimate vision of zero trust can’t be accomplished through one-off integrations between systems or layers. For critical cybersecurity operations to succeed, zero trust must be based on fast, well-informed risk scoring and decision making that consider a myriad of indicators that are continually flowing from all pillars.

Short of rewriting every application, protocol and API schema to support new zero trust communication specifications, agencies must look to the one commonality across the pillars: They all produce data in the form of logs, metrics, traces and alerts. When brought together into an actionable speed layer, the data flowing from and between each pillar can become the basis for making better-informed zero trust decisions.
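Conceptually, that speed layer feeds signals from every pillar into a single risk score that gates each access decision. The sketch below shows the idea with illustrative weights and a made-up threshold; none of these values come from DoD guidance.

```python
# Illustrative zero trust risk scoring built from signals each pillar already emits
PILLAR_WEIGHTS = {          # assumed weights, not DoD guidance
    "identity": 0.35,
    "device": 0.25,
    "network": 0.20,
    "application": 0.10,
    "data": 0.10,
}
DENY_THRESHOLD = 0.6        # assumed cutoff


def risk_score(signals: dict[str, float]) -> float:
    """Combine per-pillar risk signals (0 = clean, 1 = highly suspicious) into one score."""
    return sum(PILLAR_WEIGHTS[p] * signals.get(p, 0.0) for p in PILLAR_WEIGHTS)


def access_decision(signals: dict[str, float]) -> str:
    score = risk_score(signals)
    return f"score={score:.2f} -> " + ("deny / step-up auth" if score >= DENY_THRESHOLD else "allow")


# Example: identity looks OK, but device, network and data signals are all elevated
print(access_decision({"identity": 0.2, "device": 1.0, "network": 0.9, "application": 0.5, "data": 0.6}))
```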

The data challenge

According to the DoD, achieving its zero trust strategy results in several benefits, including “the ability of a user to access required data from anywhere, from any authorized and authenticated user and device, fully secured.”

Every day, defense agencies are generating enormous quantities of data. Things get even more tricky when the data is spread across cloud platforms, on-prem systems, or specialized environments like satellites and emergency response centers.

It’s hard to find information, let alone use it efficiently. And with different teams working with many different apps and data formats, the interoperability challenge increases. The mountain of data is growing. While it’s impossible to calculate the amount of data the DoD generates per day, a single Air Force unmanned aerial vehicle can generate up to 70 terabytes of data within a span of 14 hours, according to a Deloitte report. That’s about seven times more data output than the Hubble Space Telescope generates over an entire year.

Access to that information is becoming a bottleneck.

Data mesh is the foundation for modern DoD zero trust strategies

Data mesh offers an alternative approach to organizing data effectively. Put simply, a data mesh overcomes silos, providing a unified and distributed layer that simplifies and standardizes data operations. Data collected from across the entire network can be retrieved and analyzed at any or all points of the ecosystem — so long as the user has permission to access it.

Instead of relying on a central IT team to manage all data, data ownership is distributed across government agencies and departments. The Cybersecurity and Infrastructure Security Agency uses a data mesh approach to gain visibility into security data from hundreds of federal agencies, while allowing each agency to retain control of its data.

Data mesh is a natural fit for government and defense sectors, where vast, distributed datasets have to be securely accessed and analyzed in real time.

Utilizing a scalable, flexible data platform for zero trust networking decisions

One of the biggest hurdles with current approaches to zero trust is that most zero trust implementations attempt to glue together existing systems through point-to-point integrations. While it might seem like the most straightforward way to step into the zero trust world, those direct connections can quickly become bottlenecks and even single points of failure.

Each system speaks its own language for querying, security and data format; the systems were also likely not designed to support the additional scale and loads that a zero trust security architecture brings. Collecting all data into a common platform where it can be correlated and analyzed together, using the same operations, is a key solution to this challenge.

When implementing a platform that fits these needs, agencies should look for a few capabilities, including the ability to monitor and analyze all of the infrastructure, applications and networks involved.

In addition, agencies must have the ability to ingest all events, alerts, logs, metrics, traces, hosts, devices and network data into a common search platform that includes built-in solutions for observability and security on the same data without needing to duplicate it to support multiple use cases.

This latter capability allows the monitoring of performance and security not only for the pillar systems and data, but also for the infrastructure and applications performing zero trust operations.

The zero trust security paradigm is necessary; we can no longer rely on simplistic, perimeter-based security. But the requirements demanded by the zero trust principles are too complex to accomplish with point-to-point integrations between systems or layers.

Zero trust requires integration across all pillars at the data level; in short, the government needs a data mesh platform to orchestrate these implementations. By following the guidance outlined above, organizations will not just meet requirements, but truly get the most out of zero trust.

Chris Townsend is global vice president of public sector at Elastic.

The post A data mesh approach: Helping DoD meet 2027 zero trust needs first appeared on Federal News Network.


In a crowded federal contracting space, Hive Group bets on innovation

12 January 2026 at 13:11

Interview transcript

Terry Gerton Let’s start with talking about Hive. You’ve positioned yourself really as a disrupter in the federal contracting space. Are there specific technology or process innovations that have helped you most transform your organization?

Will Fortier Well, I think that’s something that every organization is going through at this moment in time. I think some of the enablers that we’ve certainly benefited from would be the cloud service providers. That has completely revolutionized how we interact with our solutions. When I entered the GovCon space, we were still on print and file servers, and version control was a huge issue. Just moving toward collaborative tools like Google Suite and Microsoft Teams has allowed us to actually collaborate on the artifacts that we’re building for our clients and communicate. Could you imagine going through a pandemic without those cloud service providers? That has been a big game changer for us in how we affect our clients’ missions. But what we’ve been focusing on are things like improving decision support systems across our clients’ platforms, getting the right information into the decision-makers’ hands and making that decision-making process much faster and more efficient. Trying to take out errors, trying to increase automation and the reliability of that data.

Terry Gerton Do you find that your innovations are driven more by your client demands or by Hive anticipating what they will want?

Will Fortier Yeah, I think it’s always a mixture of both. I think that just in our name itself, Hive Group, we’ve got a lot — our people are in the field, our labs are in the field, we’re responding to client demand. But we also have this collection of subject matter expertise in house. All of our leadership are practitioners in the industry, meaning they’ve delivered solutions to clients themselves. And so when we’re brought into discussions with our teams on the delivery side, what we end up doing is synthesizing what they’re doing and then using our experience to pull that information in-house and try to build something on top of it that might be useful to the clients, might give some benefit. And not only that, but sharing what we’re doing on one client engagement with other client engagements as well. So we’ve developed things like Hexacore, which is a delivery excellence framework that we employ across all our delivery teams. And then HiveIQ is something else that we’ve instituted, which is really about information sharing and getting best practices from what one team is doing into another team’s skillset.

Terry Gerton You started to talk about some of your specific products there. It’s a crowded federal contracting space. How do you use those kinds of products and other techniques or strategies to differentiate Hive?

Will Fortier Yeah, that’s interesting. So there’s been a lot of innovation in this space, and certainly there’s been a lot of top-down pressure, which really has a lot of people taking stabs at trying to innovate. Whether we’re building the tools ourselves or we’re developing partnerships with those that actually have really good tools, it’s a mixture of both sides of the coin. A lot of what we do is focused around reinvesting in what we’re doing. I mean, we’ve had a quick and fast growth trajectory since the start of the company. And so what we’re doing is taking that money, and it’s allowing us to reinvest in what we’re able to offer as far as our solutions to our clients. That also allows us to get deeper and specialize in some of those areas that we’re focusing on.

Terry Gerton You talked about that rapid scaling and growth. Are there any particular critical decisions that have enabled you to scale without losing your quality control?

Will Fortier Yes, scaling. That is something that, as a practitioner in the field, I think a lot of us don’t really get faced with those kinds of questions until you’re either in the C-suite or in the ownership seat, yourself. And so what we do, I think it’s kind of two-part. One is internal, it’s making sure that the right people are in the right seats on the bus. Because what got you to 50 people isn’t necessarily the same configuration that’s gonna work when you’re scaling up to 200. And so you gotta keep that in mind. And that’s something that you learn when you are actually sitting in that seat. And then the next is externally. I think that looking at your support providers is a real game changer and getting the right support providers can be a real force multiplier. And making those mistakes can also cost you a lot of effort and time down the road. And there’s certainly some examples of that, one would be getting good advice. Not surprising to anybody, but having good legal advice is a really obvious thing to have, but one that kind of caught me by surprise was banking relationships. And so, when you start out a company, whoever will give you a line of credit and some treasury is usually who you end up starting with. But as you start to scale up, your needs do change. And so I remember our CFO, Ryan Fuller, came to me and said, hey, I think it’s time that we kind of look at some other options, got into that, got into the why. We ended up going with JP Morgan Chase. Once we did that, I really started realizing how important banking relationships are. Bringing people, hearing from the SMEs on what direction the market is going, getting to meet with other business owners and sort of collaborating with them on what they’re going through — all those kinds of, those offerings that the right support provider can provide is just, it certainly has affected who we are as a business and how we operate.

Terry Gerton That is such important insight, and it sounds like a lesson learned maybe the hard way through experience.

Will Fortier Yeah, you can start with QuickBooks, and it’s a great tool, but it’s probably not something you want to scale on. I mean, the tools that you start out with are great and they certainly have a place, but the partnerships need to evolve along with how you scale.

Terry Gerton You also mentioned the importance of having the right people in the right seats on the bus, everybody’s favorite leadership analogy. But what are your tricks of the trade in terms of making sure that you have the right people in those right seats, especially in cleared positions, and how do you maintain culture as you’re growing your organization?

Will Fortier The culture question is one we could spend an hour on alone, and that has certainly faced some challenges with COVID and the separation of a lot of folks going remote. But when it comes to talent retention, I think it’s one of the most important things that we’ve focused on. It’s important not to just drop your team member off at a client’s site and say good luck. When our employees look around, what we focus on is putting an emphasis around lifting our people. This is actually one of our core pillars as an organization. And in doing that, we try to focus and work with our client sets on finding opportunities for people to have a growth path. And when we do that, it’s more than just a sales pitch. I mean, the employees that are coming in can take a quick look around and see some folks who might have joined our organization at an entry level and are now a part of the leadership team. Finding those kinds of candidates is certainly what we strive for, and that’s the kind of culture that we want to build and support. Then really a part of the secret sauce is how you hire, which is very different depending on the organizations that you go to. I think that what we try to focus on is more the total picture. I mean, some of the bigger mistakes that we see, both on the government side as well as the industry side, are making a hiring decision just because the person has done similar work before, instead of focusing on the culture fit: Is this person professionally curious? Are they gonna emulate the qualities and the core values that you’re trying to establish throughout the organization? And look, nobody bats a thousand in doing that, but it is something that we consciously try to keep in mind when we’re building out our team.

Terry Gerton So we’ve talked a little bit about what makes Hive different, your scaling strategy or your workforce strategy, but there’s a lot that’s changing in the GovCon space right now. So if you’re looking forward five years, how do you think Hive will change to continue to be competitive in the market?

Will Fortier Yeah, so to say that things are changing right now doesn’t feel like a strong enough phrase. We could go on about all the different kinds of capabilities that we’re trying to implement for clients, but one that’s near and dear to my heart is certainly the acquisition process. You’re seeing a lot of discussion about that nowadays, and certainly with the revolutionary FAR overhaul, you’re seeing a lot of recommended change for how the government goes about procuring things. So I think there’s a lot of discussion that needs to happen, and there’s certainly a lot of innovation that we can work towards. In that realm, one of the things that we’ve started at Hive Group is the Industry Partner Council, which is a collection of really the big 20, and altogether there are 200 businesses throughout the GovCon space participating to varying degrees. What we aim to do is really just strengthen that conversation between contracting activities and industry, and try to find ways to educate each other, you know, break down those barriers, get us talking more. With as much change as is going on right now, we’ve got to also keep those avenues open to talk and discuss what’s working and what’s not working. So that’s one avenue that we’re hoping continues to stick, and we certainly hope to get more contracting activities on board with it.

The post In a crowded federal contracting space, Hive Group bets on innovation first appeared on Federal News Network.


Expert Edition: Modernization that delivers: Real tools, real people, real impact

By: wfedstaff
9 January 2026 at 15:38

Modernization isn’t just about tech — it’s about ensuring transformation that helps people.

In the latest Federal News Network Expert Edition, leaders from Tyler Technologies share what it really takes to modernize government systems in ways that stick.

  • CTO Russell Gainford reminds us that modernization must empower people — not just upgrade infrastructure. If tools don’t work for the workforce, they won’t work at all, he says.
  • SVP Mike Cerniglia breaks down how “thinking smaller” leads to bigger wins. He explains how incremental modernization reduces risk, builds momentum and delivers value faster.
  • Federal Portfolio Manager Darreisha Harper emphasizes that communication is the key to adoption. Engaging knowledge workers early and often ensures tech aligns with real-world workflows, she says.
  • VP of Engineering Sonia Sanghavi shows how mobile-first fieldwork platforms are transforming inspections, disaster response and compliance — without replacing human intuition.

Be sure to download our exclusive e-book now!

The post Expert Edition: Modernization that delivers: Real tools, real people, real impact first appeared on Federal News Network.


DLA’s foundation to use AI is built on training, platforms

The Defense Logistics Agency is initially focusing its use of artificial intelligence across three main mission areas: operations, demand planning and forecasting, and audit and transparency.

At the same time, DLA isn’t waiting for everyone to be trained or for its data to be perfect.

Adarryl Roberts, the chief information officer at DLA, said by applying AI tools to their use cases, employees can actually clean up the data more quickly.


“You don’t have a human trying to analyze the data and come up with those conclusions. So leveraging AI to help with data curation and ensuring we have cleaner data, but then also not just focusing on ChatGPT and things of that nature,” Roberts said on Ask the CIO. “I know that’s the buzzword, but for an agency like DLA, ChatGPT does not solve our strategic issues that we’re trying to solve, and so that’s why there’s a heavier emphasis on AI. For us in those 56 use cases, there’s a lot of that was natural language processing, a lot around procurement, what I would consider more standardized data, what we’re moving towards with generative AI.”

A lot of this work is setting DLA up to use agentic AI in the short-to-medium term. Roberts said by applying agentic AI to its mission areas, DLA expects to achieve the scale, efficiency and effectiveness benefits that the tools promise to provide.

“At DLA, that’s when we’re able to have digital employees work just like humans, to make us work at scale so that we’re not having to redo work. That’s where you get the loss in efficiency from a logistics perspective, when you have to reorder or re-ship, that’s more cost to the taxpayer, and that also delays readiness to the warfighter,” Roberts said at the recent DLA Industry Collider day. “From a research and development perspective, it’s really looking at the tools we have. We have native tools in the cloud. We have SAP, ServiceNow and others, so based upon our major investments from technology, what are those gaps from a technology perspective that we’re not able to answer from a mission perspective across the supply chain? Then we focus on those very specific use cases to help accelerate AI in that area. The other part of that is architecting it so that it seamlessly plugs back into the ecosystem.”

He added that this ensures the technology doesn’t end up becoming a data stovepipe and can integrate into the larger set of applications to be effective and not break missions.

A good example of this approach leading to success is DLA’s use of robotic process automation (RPA) tools. Roberts said the agency currently has about 185 unattended bots that are working 24/7 to help DLA meet mission goals.

“Through our digital citizen program, government people actually are building bots. As the CIO, I don’t want to be a roadblock as a lot of the technology has advanced to where if you watch a YouTube video, you can pretty much do some rudimentary level coding and things of that nature. You have high school kids building bots today. So I want to put the technology in the hands of the experts, the folks who know the business process the best, so it’s a shorter flash to bang in order to get that support out to the warfighter,” Roberts said.
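Unattended bots of this kind typically follow a simple watch-process-log loop. The sketch below shows that generic pattern only; the folder names and the processing step are made up, and this is not DLA's actual tooling.

```python
import time
from pathlib import Path

INBOX = Path("inbox")      # hypothetical drop folder the bot watches
DONE = Path("processed")   # where handled files are moved


def process(file_path: Path) -> None:
    """Placeholder business step, e.g., validating an order record."""
    print(f"Processed {file_path.name} ({file_path.stat().st_size} bytes)")


def run_unattended_bot(poll_seconds: int = 30, max_cycles: int = 2) -> None:
    """Poll the inbox, process any new CSV files, then move them aside."""
    INBOX.mkdir(exist_ok=True)
    DONE.mkdir(exist_ok=True)
    for _ in range(max_cycles):          # a real bot would loop indefinitely under a scheduler
        for path in sorted(INBOX.glob("*.csv")):
            process(path)
            path.rename(DONE / path.name)
        time.sleep(poll_seconds)


if __name__ == "__main__":
    run_unattended_bot(poll_seconds=1)
```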

The success of the bots initiative helped DLA determine that the approach of adopting commercial platforms to implement AI tools was the right one. Roberts said all of these platforms reside under its DLA Connect enterprisewide portal.

“That’s really looking at the technology, the people, our processes and our data, and how do we integrate that and track that schematically so that we don’t incur the technical debt we incurred about 25 years ago? That’s going to result in us having architecture laying out our business processes, our supply chain strategies, how that is integrated within those business processes, overlaying that with our IT and those processes within the IT space,” he said. “The business processes, supply chain, strategies and all of that are overlapping. You can see that integration and that interoperability moving forward. So we are creating a single portal where, if you’re a customer, an industry partner, an actual partner or internal DLA, for you to communicate and also see what’s happening across DLA.”

Training every employee on AI

He said that includes questions about contracts and upcoming requests for proposals, as well as order status updates and other data-driven questions.

Of course, no matter how good the tools are, if the workforce isn’t trained on how to use the AI capabilities or knows where to find the data, then the benefits will be limited.

Roberts said DLA has been investing in training from online and in person courses to creating a specific “innovation navigators course” that is focused on both the IT and how to help the businesses across the agency look at innovation as a concept.

“Everyone doesn’t need the same level of training for data acumen and AI analytics, depending on where you sit in the organization. So working with our human resources office, we are working with the other executives in the mission areas to understand what skill sets they need to support their day-to-day mission. What are their strategic objectives? What’s that population of the workforce and how do we train them, not just online, but in person?” Roberts said. “We’re not trying to reinvent how you learn AI and data, but how do we do that and incorporate what’s important to DLA moving forward? We have a really robust plan for continuous education, not just take a course, and you’re trained, which, I think, is where the government has failed in the past. We train people as soon as they come on board, and then you don’t get additional training for the next 10-15 years, and then the technology passes you by. So we’re going to stay up with technology, and it’s going to be continuous education moving forward, and that will evolve as our technology evolves.”

Roberts said the training is for everyone, from the director of DLA to senior leaders in the mission areas to the logistics and supply chain experts. The goal is to help them answer and understand how to use the digital products, how to prompt AI tools the best way and how to deploy AI to impact their missions.

“You don’t want to deploy AI for the sake of deploying AI, but we need to educate the workforce in terms of how it will assist them in their day to day jobs, and then strategically, from a leadership perspective, how are we structuring that so that we can achieve our objectives,” he said. “Across DLA, we’ve trained over 25,000 employees. All our employees have been exposed, at least, to an introductory level of data acumen. Then we have some targeted courses that we’re having for senior leaders to actually understand how you manage and lead when you have a digital-first concept. We’re actually going to walk through some use cases, see those to completion for some of the priorities that we have strategically, that way we can better lead the workforce and their understanding of how to employ it at echelon within our organization, enhancing IT governance and operational success.”

The courses and training have helped DLA “lay the foundation in terms of what we need to be a digital organization, to think digital first. Now we’re at the point of execution and implementation, putting those tools to use,” Roberts said.

The post DLA’s foundation to use AI is built on training, platforms first appeared on Federal News Network.


OPM data overhaul reveals deeper federal workforce insights

Clearer numbers on the federal workforce are coming into view, after the Office of Personnel Management launched a major update to one of its largest data assets on Thursday.

A new federal workforce data website from OPM aims to deliver information on the federal workforce faster, with more transparency and more frequent updates, than its predecessor, FedScope.

“This is a major step forward for accountability and data-driven decision-making across government,” OPM Director Scott Kupor said Thursday in a press release.

OPM’s new platform also reaffirms the significant reshaping the federal workforce experienced over the last year. The latest workforce data, now publicly available up to November 2025, shows governmentwide staffing levels at a decade low. According to OPM’s numbers, the government shed well over 300,000 federal employees last year, impacting virtually all executive branch agencies. When accounting for hiring numbers, there has been a net loss of nearly 220,000 federal employees since January 2025.

Data on federal employees’ bargaining unit status has also shifted significantly under the Trump administration. OPM’s new data platform shows that the share of the federal workforce represented by unions dropped from about 56% to about 38% over the last year, as a result of President Donald Trump’s orders to end collective bargaining at most agencies.

And agencies reported a 75% decrease in telework hours between January and October 2025, due to Trump’s on-site requirements for the federal workforce, which the president initiated on his first day in office.

The new website is the result of a major update to OPM’s legacy data asset, FedScope, which had been in need of significant modernization for years. In a report from 2016, the Government Accountability Office recommended that OPM update the FedScope platform and improve the availability of workforce data.

Users of OPM’s new public-facing website can filter the workforce data by geographic location, agency, age, education level, bargaining unit status — and much more.

Additional data that was not accessible on the legacy FedScope platform is also now readily available, including information on retirement eligibility, telework levels, performance ratings and hiring activities for the federal workforce.

Information on race and ethnicity across the federal workforce, however, is not featured on OPM’s new platform. That’s due to Trump’s executive order last year to eliminate diversity, equity, inclusion and accessibility (DEIA) across government.

OPM had been working to update several of its workforce data assets since at least the end of the Biden administration. Federal News Network reported in early January 2025 that the agency was already in the process of building out its data management capabilities for FedScope and the Enterprise Human Resources Integration system (EHRI).

OPM, under the Trump administration, then announced plans last July to relaunch FedScope with “immediate enhancements.”

“OPM will continue releasing new data, visuals and features on the site each month and will iterate on the platform as user feedback is received,” OPM said in its press release Thursday. “This launch represents just the beginning, with regular updates and new enhancements planned on an ongoing basis.”

The post OPM data overhaul reveals deeper federal workforce insights first appeared on Federal News Network.


From gift lists to government systems, agentic AI is changing how we plan and prepare

Interview transcript

Terry Gerton The Information Technology Industry Council, or ITI, has a new paper out, Understanding Agentic AI. Help us with the basics there. What is the difference between agentic AI and some of the most common AI tools? We’re all familiar with ChatGPT and its partners.

Jason Oxman Well, agentic AI is the next generation of AI, if you will. Agentic refers to the use of an agent. And an agent is someone who does something on your behalf. So when we talk about agentic AI, essentially what we’re saying is giving the AI some autonomy to help you, as the human user of AI, accomplish a task. So for example, you can set your agentic AI to perform a task for you that you otherwise might have to do yourself. If you tell the agentic AI, for example, I want to go travel somewhere, the agentic AI can go search out different travel options and book some tickets for you. You can set the level of autonomy that it has. But the general idea of agentic AI is that it is out there using AI tools to perform a task on your behalf.
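A stripped-down way to picture that autonomy is an agent loop that searches on the user's behalf and either acts or waits for approval, depending on the autonomy level it has been given. The tools and data in the sketch below are hard-coded stand-ins, not a real booking API.

```python
# Minimal agent-loop sketch: the tools and the "plan" are hard-coded stand-ins, not a real travel API
def search_flights(destination: str) -> list[dict]:
    return [{"flight": "UA101", "price": 320}, {"flight": "DL202", "price": 285}]


def book_flight(option: dict) -> str:
    return f"Booked {option['flight']} for ${option['price']}"


TOOLS = {"search_flights": search_flights, "book_flight": book_flight}


def run_agent(goal: str, autonomy: str = "ask_before_booking") -> str:
    """Carry out a travel goal, deferring to the human when autonomy is limited."""
    options = TOOLS["search_flights"](goal)
    cheapest = min(options, key=lambda o: o["price"])
    if autonomy == "full":
        return TOOLS["book_flight"](cheapest)
    return f"Found {cheapest['flight']} at ${cheapest['price']}; waiting for your approval to book."


print(run_agent("Denver"))                    # human stays in the loop
print(run_agent("Denver", autonomy="full"))   # agent completes the task itself
```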

Terry Gerton And your paper argues that AI can boost productivity, streamline processes and enhance security. So beyond me using it to put my Christmas shopping list together or book my airline tickets, tell us more about how it can be used in a more particular business sense, I guess.

Jason Oxman I think that’s a great way to think about it. The business-sense uses of AI are ones that we think of as enhancing productivity for the human beings that are involved in a particular task. The best example I like, and this applies to both the private sector and certainly to government, is in customer service use cases. If you’re, for example, filing a claim with the VA because of an insurance event or because of the passing of a veteran, you’re going to need to call somebody to get some help. And a lot of the help that you’re going to need is with very basic tasks. The VA has a lot of information about service records, but it doesn’t necessarily have all the information you need to process the claim. Agentic AI can help by telling the agent, here’s what I’m doing. Here’s what I’m trying to accomplish on behalf of someone. Can you go out and find this information for me? I need to compile service records and insurance claims. I need to find a death certificate. I need to find the copy of the policy. Can you go out and find all those things for me? A human being would have a lot of work to do to do that. AI can help. And then the human being can focus on the more complex and more important tasks, not those kinds of rudimentary tasks that AI can help with. So that’s a good example of a business use case for AI.

Terry Gerton So that sounds like it’s a bit more extensive and involved than a chat bot that we might all sort of be familiar with. And you gave there a VA claim example. Talk us through some more cases where government, maybe the federal government or state government should be thinking about agentic AI and what it’s gonna take to put it out there.

Jason Oxman Yeah, and you’re right to highlight that this is a little different than the usual use cases that we think of with ChatGPT or Gemini or Claude or any of the other GPT-oriented AIs that we use for various tasks. What the government can do with AI is very different and, I think, very exciting, and I think about this in the public sector arena in particular. Think about how public sector procurement works. The United States government is the single largest purchaser of information technology in the world. And those technology purchases need to happen across agencies, in many cases for different use cases. And those agencies need to talk to one another and coordinate what they’re doing, so that agencies can buy technology that interacts effectively across agencies. AI can help with that, agentic AI in particular, by looking at procurement cases across different agencies in a way that a human being wouldn’t have access to, because human beings work for one agency at a time, and it can help with those procurement coordination efforts. We also see the government using AI across use cases where consumers need to access information. Tax information is a great example. You usually have to call a human being, and sometimes wait a very long time to get access to that human being, to get information on prior tax filings or your current tax filings or to ask questions. Agentic AI can help with that because you can assign it a task and let it know what you’re trying to do. It can also help with very simple things like setting up appointments. If you need to talk to somebody in government, you set up an appointment to do it, whether it’s a healthcare appointment at the VA hospital or an appointment with somebody at the IRS to talk about your tax situation. A human being shouldn’t have to spend time on that. They should spend time on actually helping with what the customer needs. The agentic AI can do the things that are pretty basic, like setting up those appointments. These are all the kinds of use cases that we see. They are really about improving the efficiency of employees.

Terry Gerton I’m speaking with Jason Oxman. He’s president and CEO of ITI. Jason, you gave a couple of different examples there, some that are customer-facing or citizen-facing, some that back office, internal work. But when the government takes on agentic AI, does it face any particular risks or challenges compared to, say, private sector?

Jason Oxman I think the government’s primary challenge is with the customer data that governments have. Governments have a lot of information about us, and a lot of that information is sensitive information, and we want to make sure that information is protected. So I think that’s a challenge, certainly in the private sector, but it’s highlighted particularly in government because government just has a lot more information about us. And that’s not to say that that information isn’t useful and shouldn’t be deployed on our behalf, but we want to make sure when agentic AI is accessing information that it’s protecting the personal information that we all have. You’re a former government employee, I’m a former government employee. They have a lot of information about us. We wanna make sure it’s protected, that’s one thing. A second thing is cybersecurity. There are a lot of bad actors out there that really wanna access information. And in some cases, AI can help the criminals in that use case. We’ve seen recently, for example, a lot of fraud that retailers are facing, AI being used to trick customer service representatives into providing refunds to AI, not to actual human beings. So cybersecurity is the second big area. And the third area is the usual challenge of government, which is making sure that government agencies talk to each other. IT modernization is a big theme that we talk about at ITI, and it’s because government systems are old, they’re antiquated, they don’t do a good job of interacting with one another. So we really wanna make sure that the government updates systems to get the best value out of IT investment.

Terry Gerton One of the other things that your paper brings out is the need to train the workforce for this revolution in operations. What are some of the most critical recommendations that you have for government workforce training?

Jason Oxman I’m really glad you raised that because there is this tendency to think of AI, and agentic AI in particular, as posing a threat to people’s jobs. I think it actually enhances and increases the opportunity in people’s jobs. But as you noted, the only way that works is if people are trained in AI. So the way I like to frame it is, AI is not going to take anybody’s job. The only job at risk is that of somebody who doesn’t know how to use AI, and it will go to somebody else who does know how to use AI. And that’s what we need to make sure we focus on, training the workforce on how to use it. It’s the same way that government workers have, over the last decades and even centuries, had to adjust to new technologies: learning how to use the internet, learning how to use email, or, if you go farther back, learning how to use typewriters and phones. All of those technologies have improved the efficiency and the effectiveness of the government workforce, but we need to do training. So we need to make sure that the tools are available to government workers, that they know how to use them and can make use of them, and that they can improve the way in which they provide value to their employers, the American people, by using those tools.

Terry Gerton AI policy has been a bit of a flash point lately as the federal government and state governments debate who should be in charge of it and how centralized we should make it. Your paper recommends developing a national AI strategy and updating government IT infrastructure to prepare for agentic AI. What would you put at the top of the list for policymakers and legislators as they move into this agentic time frame?

Jason Oxman Top of the list is absolutely a national strategy. And the reason I say that is technologies do not recognize state borders. We wanna make sure that there aren’t 50 different regimes governing the adoption of AI. And the threat of that is that technology will not work as effectively in some states as it will in others if different regulatory regimes are adopted. So the reason we really emphasize the importance of a national AI strategy is because we want one strategy, everyone knows what the rules are, everyone knows what protections are in place, and that is of primary importance to ensure the success of AI. And then within that national strategy, protection of data, a national privacy law, which is something we also don’t have in the United States, is really important to ensure that everyone is protected in the way that they want their data to be safeguarded. We need cybersecurity measures to make sure that AI is protected from foreign intrusion and from criminal intrusion. All of this really needs to happen at the federal level. And as you noted, it hasn’t happened at the federal level yet, so we’re really urging Congress to make this a priority, the Trump administration to make this a priority, so we get that national roadmap in place, and then federal agencies and operations can adopt AI knowing what the rules of the road are, and deploy them on behalf of citizens knowing that they’re protected.

Terry Gerton The president has issued an AI policy or strategy. What needs to be added to that to complete the picture that you’re describing?

Jason Oxman Yeah, the president has adopted that national AI strategy and we’re strongly supportive of it. It’s a great-looking document. It’s a great strategy. It has a lot of things to do and accomplish, and a lot of different federal agencies are looking at different pieces of implementation. You know, one thing we think is really important to focus on is certainly having Congress adopt a national AI law that replaces the possibility of 50 different state laws. But also, there’s a lot of implementation work to be done within government. NIST within the Department of Commerce, for example, is working very hard on adopting AI standards so that there are voluntary, consensus-driven industry standards in place for the adoption of AI so, again, we know what the rules of the road are. That’s really important. We’re also seeing a lot of work being done on driving energy policy that will power data centers and make AI even more productive. That’s in the AI strategy that the president adopted, and that’s really important. And then the other thing that we think is really, really important is to make sure that the U.S. is globally competitive and is making the technology and exporting it to the world. That’s a part of the strategy that we think needs to move forward as well.

 



The hidden vulnerability: Why legacy government web forms demand urgent attention

Government agencies face a security challenge hiding in plain sight: outdated web forms that collect citizen data through systems built years — sometimes decades — ago. While agencies invest heavily in perimeter security and advanced threat detection, many continue using legacy forms lacking modern encryption, authentication capabilities and compliance features. These aging systems process Social Security numbers, financial records, health information and security clearance data through technology that falls short of current federal security standards.

The scale of this challenge is substantial. Government organizations allocate 80% of IT budgets to maintaining legacy systems, leaving modernization efforts chronically underfunded. Critical legacy systems cost hundreds of millions annually to maintain, with projected spending reaching billions by 2030. Meanwhile, government data breaches cost an average of $10 million per incident in the United States — the highest globally.

The encryption gap that persists

Despite the 2015 federal mandate establishing HTTPS as the baseline for all government websites, implementation gaps continue. The unencrypted HTTP protocol exposes data to interception, manipulation and impersonation attacks. Attackers positioned on the network can read Social Security numbers, driver’s license numbers, financial account numbers and login credentials transmitted in plain text.

Legacy government web forms that do implement encryption often use outdated protocols no longer meeting regulatory requirements. Older systems rely on deprecated hashing algorithms like SHA-1 and outdated TLS versions vulnerable to known exploits. Without proper security header enforcement, browsers don’t automatically use secure connections, allowing users to inadvertently access unencrypted form pages.
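To make that concrete, here is a minimal sketch, assuming a Python service that terminates its own TLS, of how a server can refuse the outdated protocol versions described above; the certificate and key paths are placeholders, and real deployments would typically enforce this at the load balancer or web server instead.

    import ssl

    def build_tls_context(cert_file: str, key_file: str) -> ssl.SSLContext:
        """Server-side TLS context that refuses TLS 1.0 and 1.1."""
        context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        # Require TLS 1.2 at minimum; TLS 1.3 is negotiated where both sides support it.
        context.minimum_version = ssl.TLSVersion.TLSv1_2
        context.load_cert_chain(certfile=cert_file, keyfile=key_file)
        # Restrict cipher suites to modern AEAD options with forward secrecy.
        context.set_ciphers("ECDHE+AESGCM:ECDHE+CHACHA20")
        return context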

Application-layer vulnerabilities

Beyond transmission security, legacy web forms suffer from fundamental application vulnerabilities. Testing reveals that over 80% of government web applications remain prone to SQL injection attacks. Unlike private sector organizations that remediate 73% of identified vulnerabilities, government departments remediate only 27% — the lowest among all industry sectors.

SQL injection remains one of the most dangerous attacks against government web forms. Legacy forms constructing database queries using string concatenation rather than parameterized queries introduce serious vulnerabilities. This insecure practice allows attackers to inject malicious SQL code, potentially gaining unauthorized access to national identity information, license details and Social Security numbers. Attackers exploit these vulnerabilities to alter or delete identity records, manipulate data to forge official documents, and exfiltrate entire databases containing citizen information.
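As an illustration of the difference, the hypothetical snippet below uses Python’s built-in sqlite3 module to contrast the vulnerable concatenation pattern with a parameterized query; the citizens table and its columns are invented for the example.

    import sqlite3

    def find_citizen_unsafe(conn: sqlite3.Connection, ssn: str):
        # VULNERABLE: user input becomes part of the SQL text, so input such as
        # "' OR '1'='1" changes the meaning of the query.
        return conn.execute(
            "SELECT name, ssn FROM citizens WHERE ssn = '" + ssn + "'"
        ).fetchall()

    def find_citizen_safe(conn: sqlite3.Connection, ssn: str):
        # SAFE: the driver binds the value separately from the SQL text, so
        # attacker-supplied input is never interpreted as SQL.
        return conn.execute(
            "SELECT name, ssn FROM citizens WHERE ssn = ?", (ssn,)
        ).fetchall()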

Cross-site scripting (XSS) affects 75% of government applications. XSS attacks enable attackers to manipulate users’ browsers directly, capture keystrokes to steal credentials, obtain session cookies to hijack authenticated sessions, and redirect users to malicious websites. Legacy forms also lack protection against CSRF attacks, which trick authenticated users into performing unwanted actions without their knowledge.
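A minimal sketch of two of those defenses, using only the Python standard library, follows: HTML-escaping user input before it is rendered, and issuing and verifying a per-session anti-CSRF token. The session dictionary and form wiring are assumed to be provided by whatever web framework the agency uses.

    import hmac
    import html
    import secrets

    def render_comment(user_text: str) -> str:
        # Escaping <, >, & and quotes keeps user input from running as script
        # in other visitors' browsers.
        return "<p>" + html.escape(user_text) + "</p>"

    def issue_csrf_token(session: dict) -> str:
        # Store a random token in the server-side session and embed the same
        # value in the form as a hidden field.
        token = secrets.token_urlsafe(32)
        session["csrf_token"] = token
        return token

    def verify_csrf_token(session: dict, submitted: str) -> bool:
        # Constant-time comparison; reject the request if the tokens differ.
        expected = session.get("csrf_token", "")
        return bool(expected) and hmac.compare_digest(expected, submitted)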

Compliance imperative

Federal agencies must comply with the Federal Information Security Management Act (FISMA), which requires implementation of National Institute of Standards and Technology SP 800-53 security controls including access control, configuration management, identification and authentication, and system and communications protection. Legacy web forms fail FISMA compliance when they cannot implement modern encryption for data in transit and at rest, lack multi-factor authentication capabilities, don’t maintain comprehensive audit logs, use unsupported software without security patches, and operate with known exploitable vulnerabilities.

Federal agencies using third-party web form platforms must ensure vendors have appropriate FedRAMP authorization. FedRAMP requires security controls compliance incorporating NIST SP 800-53 Revision 5 controls, impact level authorization based on data sensitivity, and continuous monitoring of encryption methods and security posture. Legacy government web forms implemented through non-FedRAMP-authorized platforms represent unauthorized use of non-compliant systems.

Real-world transmission failures

The gap between policy and practice is stark. Federal agencies commonly require contractors to submit forms containing Social Security numbers, dates of birth, driver’s license numbers, criminal histories and credit information via standard non-encrypted email as plain PDF attachments. When contractors offer encrypted alternatives, badge offices often resist changing established procedures.

Most federal agencies lack basic secure portals for PII submission, forcing reliance on email despite policies requiring encryption. Standard Form 86 for national security clearances and other government forms are distributed as fillable PDFs that can be completed offline, saved unencrypted, and transmitted through insecure channels — despite containing complete background investigation data for millions of federal employees and contractors.

Recent breaches highlight ongoing vulnerabilities. Federal departments have suffered breaches where hackers accessed networks through compromised credentials. Congressional offices have been targeted by suspected foreign actors. Private contractors providing employee screening services have confirmed massive data breaches affecting millions, with unauthorized access lasting months before detection.

What agencies must do now

Government agencies must immediately enforce HTTPS encryption for all web form pages using HTTP strict transport security, deploy server-side input validation to prevent SQL injection and XSS attacks, implement anti-CSRF tokens for each form session, add bot protection, enable comprehensive access logging, and conduct regular vulnerability scanning for Open Worldwide Application Security Project Top 10 vulnerabilities.
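As a rough illustration of the first few items on that list, this hypothetical Flask-style sketch adds a strict transport security header and other hardening headers to every response and records each form submission in an audit log; it assumes the Flask framework and is a starting point rather than a complete control set.

    import logging

    from flask import Flask, request

    app = Flask(__name__)
    audit_log = logging.getLogger("form_audit")

    @app.after_request
    def add_security_headers(response):
        # Instruct browsers to use HTTPS only, for one year, including subdomains.
        response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
        response.headers["X-Content-Type-Options"] = "nosniff"
        response.headers["Content-Security-Policy"] = "default-src 'self'"
        return response

    @app.before_request
    def log_form_submissions():
        # Comprehensive access logging: which form endpoint was touched, and by whom.
        if request.method == "POST":
            audit_log.info("form submission: path=%s ip=%s", request.path, request.remote_addr)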

Long-term security requires replacing legacy forms with FedRAMP-authorized platforms that provide end-to-end encryption using AES-256 for data at rest and TLS 1.3 for data in transit, multi-factor authentication for both citizens and government staff, role-based access control with granular permissions, comprehensive audit trails capturing all data access events, and automated security updates addressing emerging vulnerabilities.
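For the encryption-at-rest piece, here is a minimal sketch using the widely used cryptography package’s AES-256-GCM implementation; key management, which in practice belongs in a key management service or hardware security module, is assumed away, and the record identifier is hypothetical.

    import os

    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def encrypt_record(key: bytes, plaintext: bytes, record_id: bytes) -> bytes:
        # AES-256-GCM provides confidentiality and integrity; binding the record ID
        # as associated data keeps a ciphertext from being swapped between records.
        nonce = os.urandom(12)
        return nonce + AESGCM(key).encrypt(nonce, plaintext, record_id)

    def decrypt_record(key: bytes, blob: bytes, record_id: bytes) -> bytes:
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(key).decrypt(nonce, ciphertext, record_id)

    # Example usage with a fresh 256-bit key (in production the key comes from a KMS).
    key = AESGCM.generate_key(bit_length=256)
    sealed = encrypt_record(key, b"123-45-6789", b"record-42")
    assert decrypt_record(key, sealed, b"record-42") == b"123-45-6789"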

Secure data collection

The real question is not whether government agencies can afford to modernize outdated web forms, but whether they can afford the consequences of failing to do so. Every unencrypted submission, each SQL injection vulnerability, and each missing audit trail represents citizen data at risk and regulatory violations accumulating. Federal mandates established the security standards years ago. Implementation can no longer wait.

The technology to solve these problems exists today. Modern secure form platforms offer FedRAMP authorization, end-to-end encryption, multi-factor authentication, comprehensive audit logging, and automated compliance monitoring. These platforms can replace legacy systems while improving user experience, reducing operational costs, and meeting evolving security requirements.

Success requires more than technology adoption — it demands organizational commitment. Agency leadership must prioritize web form security, allocate adequate budgets for modernization, and establish clear timelines for legacy system replacement. Security and IT teams need the resources and authority to implement proper controls.

Government web forms represent the primary interface between citizens and their government for countless critical services. When these forms are secure, they enable efficient, trustworthy digital government services. When they’re vulnerable, they undermine public confidence in government’s ability to protect sensitive information. The path forward is clear: Acknowledge the severity of legacy web form vulnerabilities, commit resources to address them systematically, and implement modern secure solutions. The cost of action is significant, but the cost of inaction — measured in breached data, compromised systems, regulatory penalties and lost public trust — is far higher.

 

Frank Balonis is chief information security officer and senior vice president of operations and support at Kiteworks.



AI may not be the federal buzzword for 2026

Let’s start with the good news: artificial intelligence may NOT be the buzzword for 2026.

What will be the most talked about federal IT and/or acquisition topic for this year remains up for debate. While AI will definitely be part of the conversation, at least some experts believe other topics will emerge over the next 12 months. These range from the Defense Department’s push for “speed to capability” to resilient innovation to workforce transformation.

Federal News Network asked a panel of former federal technology and procurement executives for their opinions on what federal IT and acquisition storylines they are following over the next 12 months. If you’re interested in previous years’ predictions, here is what experts said about 2023, 2024 and 2025.

The panelists are:

  • Jonathan Alboum, federal chief technology officer for ServiceNow and former Agriculture Department CIO.
  • Melvin Brown, vice president and chief growth officer at CANI and a former deputy CIO at the Office of Personnel Management.
  • Matthew Cornelius, managing director of federal industry at Workday and former OMB and Senate staff member.
  • Kevin Cummins, a partner with the Franklin Square Group and former Senate staff member.
  • Michael Derrios, the new executive director of the Greg and Camille Baroni Center for Government Contracting at George Mason University and former State Department senior procurement executive.
  • Julie Dunne, a principal with Monument Advocacy and former commissioner of GSA’s Federal Acquisition Service.
  • Mike Hettinger, founding principal of Hettinger Strategy Group and former House staff member.
  • Nancy Sieger, a partner at Guidehouse’s Financial Services Sector and a former IRS CIO.

What are two IT or acquisition programs/initiatives that you are watching closely for signs of progress and why?

Brown: Whether AI acquisition governance becomes standard: templates, clauses and evaluation norms. 2026 is where agencies turn OMB AI memos into repeatable acquisition artifacts through solicitation language, assurance evidence, testing/monitoring expectations, and privacy and security gates. The 2025 memos are the anchor texts. I’m watching for signals such as common clause libraries, governmentwide “minimum vendor evidence” and how agencies operationalize “responsible AI” in source selections.

The Cybersecurity Maturity Model Certification (CMMC) phased rollout and how quickly it becomes a de facto barrier to entry. Because the rollout is phased over multiple years starting in November 2025, 2026 is the first full year where you can observe how often contracting officers insert the clause and how primes enforce flow-downs. The watch signals include protest activity, supply-chain impacts and whether smaller firms get crowded out or supported.

Hettinger: Related to the GSA OneGov initiative, there’s continuing pressure on the middlemen, that is to say resellers and systems integrators, to deliver more value for less. This theme emerged in early 2025, but it will continue to be front and center throughout 2026. How those facing the pressure respond to the government’s interests will tell us a lot about how IT acquisition is going to change in the coming years. I’ll be watching that closely.

Mike Hettinger is president and founding principal of Hettinger Strategy Group and former staff director of the House Oversight and Government Reform Subcommittee on Government Management.

The other place to watch more broadly is how the government is going to leverage AI. If 2025 was about putting the pieces in place to buy AI tools, 2026 is going to be about how agencies are able to leverage those tools to bring efficiency and effectiveness in a host of new areas.

Cornelius: The first is watching the Hill to see if the Senate can finally get the Strengthening Agency Management and Oversight of Software Assets (SAMOSA) Act passed and to the President’s desk. While a lot of great work has already happened — and will continue to happen — at GSA around OneGov, there is only so much they can do on their own. If Congress forces agencies to do the in-depth analysis and reporting required under SAMOSA, it will empower GSA, as well as OMB and Congress, to have the type of data and insights needed to drive OneGov beyond just cost savings to more enterprise transformation outcomes for their agency customers. This would generate value at an order of magnitude beyond what they have achieved thus far.

The second is the implementation of the recent executive order that created the Genesis Mission initiative. The mission is focused on ensuring that the Energy Department and the national labs can hire the right talent and marshal the right resources to help develop the next generation of biotechnology, quantum information science, advanced manufacturing and other critical capabilities that will empower America’s global leadership for the next few generations. Seeing how DOE and the Office of Science and Technology Policy (OSTP) partner with industry to execute this aspirational, but necessary, nationwide effort will be revelatory and insightful.

Cummins: Will Congress reverse its recent failure to reauthorize the Technology Modernization Fund (TMF)? President Donald Trump stood up the TMF during his first term and it saw a significant funding infusion by President Joe Biden. Watching the TMF just die with a whimper will make me pessimistic about reviving the longstanding bipartisan cooperation on modernizing federal IT that existed before the Department of Government Efficiency (DOGE).

I will be closely watching how well the recently-announced Tech Force comes together. Its goal of recruiting top engineers to serve in non-partisan roles focused on technology implementation sounds a lot like the U.S. Digital Service started by President Barack Obama, which then became the U.S. DOGE Service. I would like to see Tech Force building a better government with some of the enthusiasm that DOGE showed for cutting it.

Sieger: I’m watching intensely how agencies manage the IT talent exodus triggered by DOGE-mandated workforce reductions and return-to-office requirements. The unintended consequence we’re already observing is the disproportionate loss of mid-career technologists, the people who bridge legacy systems knowledge with modern cloud and AI capabilities.

Agencies are losing their most marketable IT talent first, while retention of personnel managing critical legacy infrastructure creates technical debt time bombs. At Guidehouse, we’re fielding unprecedented requests for cybersecurity, cloud architecture and data engineering services. The question heading into 2026 is whether agencies can rebuild sustainable IT operating models or whether they become permanently dependent on contractor support, fundamentally altering the government’s long-term technology capacity.

My prediction is that the real risk is mission-critical systems losing institutional knowledge faster than documentation or modernization can compensate. Agencies need to watch for, and mitigate, increased system outages, security incidents and failed modernization projects as this workforce disruption cascades through 2026.

Sticking with the above theme, it does bear watching how the new federal Tech Force hiring initiative succeeds. The federal Tech Force initiative signals a major shift in how the federal government sources and deploys modern technology talent. As agencies bring in highly skilled technologists focused on AI, cloud, cybersecurity and agile delivery, the expectations for speed, engineering rigor and product-centric outcomes will rise. This will reshape how agencies engage industry partners, favoring firms that can operate at comparable technical and cultural velocity.

The initiative also introduces private sector thinking into government programs, influencing requirements, architectures and vendor evaluations. This creates both opportunity and pressure. Organizations aligned to modern delivery models will gain advantage, while legacy approaches may struggle to adapt. Federal Tech Force serves as an early indicator of how workforce decisions are beginning to influence acquisition approaches and modernization priorities across government.

Dunne: Title 41 acquisition reform. The House Armed Services Committee and House Oversight Committee worked together to pass a 2026 defense authorization bill out of the House with civilian or governmentwide (Title 41) acquisition reform proposals. These proposals in the House NDAA bill included increasing various acquisition thresholds (micro-purchase and simplified acquisition thresholds and cost accounting standards) and language on advance payments to improve buying of cloud solutions. Unfortunately, these governmentwide provisions were left out of the final NDAA agreement, leaving, in some cases, different rules for the civilian and defense sectors. I’m hopeful that Congress will try again on governmentwide acquisition reform.

Office of Centralized Acquisition Services (OCAS). GSA launched OCAS late this year to consolidate and streamline contracting for common goods and services in accordance with the March 2025 executive order (14240). Always a good exercise to think about how to best consolidate and streamline contracting vehicles. We’ve been here before and I think OCAS has a tough mission as agencies often want to do their own thing.  If given sufficient resources and leadership attention, perhaps it will be different this time.

FedRAMP 20x. Earlier this year, GSA’s FedRAMP program management office launched FedRAMP 20x to reform the process, bring efficiencies through automation and expand the availability of cloud service provider products for agencies. All great intentions, but as we move into the next phase of the effort and into FedRAMP Moderate-type solutions, I hope the focus remains on the security mission and the original intent to measure once, use many times for the benefit of agencies. Also, FedRAMP authorization expires in December 2027, which is not that far away in congressional time.

Alboum: In the coming year, I’m paying close attention to how agencies manage AI efficiency and value as they move from pilots to production. As budgets tighten, agencies need a clearer picture of which models are delivering results, which aren’t, and where investments are being duplicated.

I’m also watching enterprise acquisition and software asset management efforts. The Strengthening Agency Management and Oversight of Software Assets (SAMOSA) Act has been floating around Congress for the last few years. I’m curious to see whether it will ultimately become law. Its provisions reflect widely acknowledged best practices for controlling software spending and align with the administration’s PMA objective to “consolidate and standardize systems, while eliminating duplicative ones.” How agencies manage their software portfolios will be a crucial test of whether efficiency goals are turning into lasting structural change, or just short-term fixes.

Derrios: How GSA’s OneGov initiative shapes up will be important to watch, because contract consolidation without an equal focus on demand forecasting, standardization and potential requirements aggregation may not yield the intended results. There needs to be a strong focus on acquisition planning between GSA and its federal agency customers, in addition to any movement of contracts.

In 2025, the administration revamped the FAR, which hadn’t been reviewed holistically in 40 years. So in 2026, what IT/acquisition topic(s) would you like to see the administration take on that has long been overlooked and/or underappreciated for the impact that change and improvements could have, and why?

Cummins: Despite the recent Trump administration emphasis on commercialization, it is still too hard for innovative companies to break into the federal market. Sometimes agencies will move mountains to urgently acquire a new technology, like we have seen recently with some artificial intelligence and drone initiatives. But a commercial IT company generally has to partner with a reseller and get third-party accreditation (CMMC, FedRAMP, etc.) just to get access to a federal customer. Moving beyond the FAR rewrite, could the government give up some of the intellectual property and other requirements that make it difficult for commercial companies to bid as a prime or sell directly to an agency outside of an other transaction agreement (OTA)? It would also be helpful to see more FedRAMP waivers for low-risk cloud services.

Cornelius: It’s been almost 50 years since foundational law and policy set the parameters we still follow today around IT accessibility. During my time in the Senate, I drafted the provision in the 2023 omnibus appropriations bill that required GSA and federal agencies to perform comprehensive assessments of accessibility compliance across all IT and digital assets throughout the government. Now, with a couple of years of analysis and with many thoughtful recommendations from GSA and OMB, it is time for Congress to make critical updates in law to improve the accessibility of any capabilities the government acquires or deploys. 2026 could be a year of rare bipartisan, bicameral collaboration on digital accessibility, which could then underpin the administration’s America by Design initiative and ensure that important accessibility outcomes from all vendors serving government customers are delivered and maintained effectively.

Derrios: The federal budgeting process really needs a reboot. Static budgets do not align with multi-year missions where risks are continuous, technology changes at lightning speed, and world events impact aging cost estimates. And without a real “return on investment” mentality incorporated into the budgeting process, under-performing programs with high sunk-costs will continue to be supported. But taxpayers shouldn’t have to sit through a bad movie just because they already paid for the ticket.

Brown: I’m watching how agencies continue to move toward the implementation of zero trust and how the data layer becomes the budget fight. With federal guides emphasizing data security, the 2026 question becomes, do programs converge on fewer, interoperable controls, or do they keep buying overlapping tools? My watch signals include requirements that prioritize data tagging/classification, attribute-based access, encryption/key management and auditability as “must haves” in acquisitions.

Alboum: Over the past few years, the federal government has made significant investments in customer experience and service delivery. The question now is whether those gains can be sustained amid federal staffing reductions.

Jonathan Alboum is a former chief information officer at the Agriculture Department and now federal chief technology officer for ServiceNow.

This challenge is closely tied to the “America by Design” executive order, which calls for redesigned websites where people interact with the government. A beautiful, easy-to-use website is an excellent start. However, the public expects a great end-to-end experience across all channels, which aligns directly with the administration’s PMA objective to build digital services for “real people, not bureaucracy.”

So, I’ll be watching to see if we meet these expectations by investing in AI and other technologies to lock in previous gains and improve the way we serve the public. With the proper focus, I’m confident that we can positively impact the public’s perception and trust in government.

Hettinger: Setting aside the known and historic challenges with the TMF, we really do need to figure out how to more effectively buy IT at a pace consistent with the needs of agencies. Maybe some of that is addressed in the FAR changes, but those are only going to take us so far (no pun intended). If we think outside the box, maybe we can find a way to make real progress in IT funding and acquisition in a way that gets the right technology tools in the hands of the right people more quickly.

Dunne: I think follow-through on the initiatives launched in 2025 will be important to focus on in 2026. The formal rulemaking process for the RFO will launch in 2026 and will be an important part of that follow-through. And now that we have a confirmed Office of Federal Procurement Policy administrator, I think 2026 will be an important year for industry engagement on topics like the RFO.

Sieger: If the administration could tackle one long-overlooked issue with transformative impact, it should be modernizing how security clearances are granted, maintained and reciprocally recognized for contractor personnel supporting federal IT initiatives.

The current clearance system regularly creates 6-to-12 month delays in staffing critical IT programs, particularly in cybersecurity and AI. Agencies lose qualified contractors to private sector opportunities during lengthy adjudication periods. The lack of true clearance reciprocity means contractors moving between agency projects often restart the process, wasting resources and creating knowledge gaps on programs.

This is a strategic vulnerability. Federal IT modernization depends on contractor expertise for specialized skills government cannot hire directly. When clearance processes take longer than typical IT project phases, agencies either compromise on talent quality or delay mission-critical initiatives. The opportunity cost is measured in delayed outcomes and increased cyber risk.

Implementing continuous vetting for contractor populations, establishing true cross-agency clearance reciprocity, and creating “clearance portability” would benefit emerging technology areas such as AI, quantum and advanced cybersecurity, where talent competition is fiercest. From Guidehouse’s perspective, we see that clients are repeatedly unable to staff approved projects because cleared personnel aren’t available, not because talent doesn’t exist.

This reform would have cascading benefits: faster modernization, better talent retention, reduced costs and improved security through continuous monitoring rather than point-in-time investigations.

If 2025 has been all about cost savings and efficiencies, what do you think will emerge as the buzzword of 2026?

Brown: “Speed to capability” acquisition models spreading beyond DoD. The drone scaling example is a concrete indicator of a broader push. The watch signals for me are increased use of rapid pathways, shorter contract terms, modular contracting and more frequent recompetes to keep pace with technology change.

Cornelius: Governmentwide human resource transformation.

Julie Dunne, a former House Oversight and Reform Committee staff member for the Republicans, a former commissioner of the Federal Acquisition Service at the General Services Administration, and now a principal at Monument Advocacy.

Dunne: AI again. How the government uses it to facilitate delivery of citizen services, how AI tools will assist with the acquisition process, and AI-enabled cybersecurity attacks. I know that’s not one word, but it’s a huge risk to watch, and it’s only a matter of time before our adversaries find success in attacking federal systems with an AI-enabled cyberattack. Federal contractors will be on the hook to mitigate such risks.

Cummins: Fraud prevention. While combating waste, fraud and abuse is a perennial issue, the industrial scale fraud revealed in Minnesota highlights a danger from how Congress passed COVID pandemic-era spending packages without the same level of checks and balances that were put in place for earlier Obama-era stimulus spending. Federal government programs generally still have a lot of room for improvement when it comes to preventing improper payments, such as by using better identity and access management and other security tools. Stopping fraud is also one of the few remaining areas of bipartisan agreement among policymakers.

Hettinger: DOGE may be gone, or maybe it’s not really gone, but I don’t know that cost savings and efficiencies are going to be pushed to the backburner. This administration comes at everything — at least from an IT perspective — as believing it can be done better, faster and cheaper. I expect that to continue not just into 2026 but for the rest of this administration.

Derrios: I think there will have to be a focus on how government needs and requirements are defined and how the remaining workforce can upskill to use technology as a force multiplier. If you don’t focus on what you’re buying and whether it constitutes a legitimate mission support need, any cost savings gained in 2025 will not be sustainable long-term. Balancing speed-to-contract and innovative buying methodologies with real requirements rigor is critical. And how your federal workforce uses the tools in the toolbox to yield maximum outcomes while trying to do more with less is going to take focused leadership. To me, all of this culminates in one word for 2026, and that’s producing “value” for federal missions.

Sieger: Resilient innovation. While 2025 focused intensely on cost savings and efficiencies, particularly through DOGE-mandated cuts, 2026’s emerging buzzword will be “resilient innovation.” Agencies are recognizing the need to continue advancing technological capabilities while maintaining operational continuity under constrained resources and heightened uncertainty.

The efficiency drives of 2025 exposed real vulnerabilities. Agencies lost institutional knowledge, critical systems became more fragile, and the pace of modernization actually slowed in many cases as talent departed and budgets tightened. Leaders now recognize that efficiency without resilience creates brittleness: systems that work well under ideal conditions but fail catastrophically when stressed.

Resilient innovation captures the dual mandate facing federal IT in 2026: Continue modernizing and adopting transformative technologies like AI, but do so in ways that don’t create new single points of failure, vendor dependencies or operational risks. It’s about building systems and capabilities that can absorb shocks — whether from workforce turnover, budget cuts, cyber incidents or geopolitical disruption — while still moving forward.

Alboum: Looking ahead, governance will take the center stage across government. As AI, data and cybersecurity continue to scale, agencies will need stronger oversight, greater transparency and better coordination to manage complexity and maintain public trust. Governance won’t be a side conversation — it will be the foundation for everything that comes next.

Success will no longer be measured by how much AI is deployed, but by whether it is secure, compliant and delivering tangible mission value. The conversation will shift from “Do we have AI?” to “Is our AI safe, accurate and worth the investment?”



FAA ramps up billions in spending as ‘down payment’ for air traffic overhaul

31 December 2025 at 14:57

The Federal Aviation Administration has been working to update its aging air traffic control system, literally, for decades now. But 2026 is looking to be a big year on the FAA modernization front. The One Big Beautiful Bill Congress passed earlier this year puts more than $12 billion toward air traffic control modernization, and the FAA’s new administrator expects to obligate about half of that by the end of this fiscal year.

The agency is on an aggressive schedule to completely replace the air traffic control system within the next three years, and the billions of dollars in reconciliation funding targeted for expenditure in fiscal 2026 is meant to lay the foundation for that long-term plan. The agency says it is using the funds to modernize its telecommunications and air surveillance systems, including by replacing aging copper circuits with fiber optics.

“When we talk about modernizing telco, most people think about moving from copper to fiber, going from analog to digital. And that’s all true, but there’s another element of modernization that we aren’t doing today,” Bryan Bedford, the FAA’s administrator, told the Senate Commerce Committee this month. “The second round of funding that we’re asking for will be to re-architect how the fiber is laid. For example, Dallas-Fort Worth recently had a significant outage. There, the system theoretically was modernized, but the architecture of that system had not been modernized. So there’s really a two-step process here. There is still another step that has to happen to get from analog to digital, which will drive the resilience and our capabilities to increase bandwidth in our facilities.”

Bedford told lawmakers the telco modernization work is now about 35% complete, and that portion of the overall project should be finished by the third quarter of fiscal 2027.

New prime integrator

But to finish all the work the agency believes will fully modernize the system, officials say they’ll need another $20 billion on top of the $12.5 billion they refer to as a “down payment.” And to manage the overall project, the FAA earlier this month hired Peraton to serve as the prime integrator for the new system.

In order to meet that three-year target, Bedford said the agency built in specific incentives to reward performance and on-time delivery across 14 areas the FAA has identified as “critical needs.”

“We set up a series of needs packages that clearly articulate what the work streams are and the estimated timeline to completion,” he said. “Peraton’s profit is essentially broken into three different elements. There’s a fixed profit element of 3% and then there’s a variable element of 6%. The variable element is contingent upon completing the plan on budget and on time with our satisfaction, and we will hold back 3% of the potential profits for any potential damages that might happen for failure to comply with our work packages. So it’s a very strenuous agreement, and we have vigilant oversight on it.”

Unsustainable legacy systems

Meanwhile, the agency says it will also need additional funding to keep the current system up and running while the new one is being built. The Government Accountability Office has identified 105 individual components of the overall system that it’s deemed unsustainable as those subsystems, many of them decades old, continue to age.

And so, Bedford said, the $5 billion in annual “modernization” funding Congress is considering as part of the standard appropriations process is more about operating and sustaining those legacy systems than modernizing them.

“As you read many of these audit reports, you learn the same thing that I have, which is 80% of our infrastructure is considered obsolete and/or unsustainable,” he said. “So the vast majority of that $5 billion doesn’t actually go to build new brick and mortar. 85 to 90% of those funds actually go to repairing, painting or replacing elevators and HVAC systems and plumbing and roofs. Frankly, we’re putting lipstick on a pig. So you may think you’re buying brand new infrastructure with the $5 billion but what you’re buying is sustainment of the old system.”

The overall modernization project is broken down into five categories: communications, surveillance, automation, facilities, and broad updates across the state of Alaska.

Workforce concerns

Sen. Tammy Duckworth (D-Ill.), the ranking Democrat on the transportation subcommittee on aviation, space, and innovation, argued there needs to be one more: workforce.

“We must remember that the recent aviation safety crisis was driven by decades of the FAA pouring billions into unproven technologies and costly service contracts as it pursued, in vain, modernization projects with overly ambitious goals and constantly changing requirements,” she said. “These shiny objects lured the FAA into neglecting the health, capabilities and capacity of our system’s most important assets, its people. Under Presidents of both parties and across multiple Congresses, ATC shed critical expertise and experience. And between 2013 and 2023, the FAA only hired two thirds of the controllers that the FAA’s own staffing model called for. So today, we find ourselves short 3,500 air traffic controllers, while air travel rises to record highs and controllers are forced to regularly work 60 hour weeks because well over 90% of airports are understaffed. Placing the lives of our constituents in the hands of civil servants who are overworked and utterly exhausted was and remains unfair, unacceptable and ultimately dangerous.”

Bedford said the agency is taking workforce issues seriously under a separate initiative, called Flight Plan 2026. He said the FAA has plans to hire 8,900 new controllers between now and 2028.

But the recent government shutdown didn’t help matters. An estimated 500 people withdrew from the FAA’s controller training programs while they were waiting for the government to resume normal operations. And for controllers already on the job, staffing shortages caused an unprecedented number of safety-related air traffic slowdowns.

“Staffing triggers reached unprecedented levels, rising from mere single digits prior to the lapse to more than 80 in a single day,” Bedford said. “Applying the hard lessons we’ve learned from the DCA accident, the FAA safety team identified controller workload and system demand as emerging risk factors. And as a response to this increased risk, we temporarily reduced operations at 40 high traffic airports. The connection between controller workload, system demand and operational risk was unmistakable, and it reinforced the need for the FAA to act decisively when the data demanded it, and underscored the importance of stable controller funding.”



From DOJ to VA, Kshemendra Paul’s journey exemplifies lasting public service

30 December 2025 at 15:34

Interview transcript:

Terry Gerton You’ve worked across several different executive branch agencies and done a lot of things. Tell us what first drew you into public service.

Kshemendra Paul I came into public service in 2005 into the Department of Justice, in large measure because I was presented with an opportunity to be part of the solution, to be a part of something bigger than myself post 9/11. So for many years, I was part of structural response in the government to the tragic events on 9/11: improving information sharing, more effective use of technology at a pretty interesting and challenging time for the nation.

Terry Gerton Sounds like that got you hooked. How did you find it moving across different federal agencies?

Kshemendra Paul It did get me hooked. I came from the private sector — took a little bit of a pay cut because of the attraction to the mission. Two years in, I was getting ready to start thinking about maybe what’s next for me. I thought that was going back to the private sector, but that was right around the time when I got outreach from Karen Evans and Dick Burke in the White House, in the Office of Management and Budget. And they asked me to come up and be the federal chief architect. I had to do a gut call; it was a big job. I decided I love working and doing the public service mission. I said yes, and I went up there and just continued in OMB and then later as the presidentially designated governmentwide lead for information-sharing under President Obama. At that point, I decided to just stick with it. And I have no regrets. I’m just full of gratitude for the opportunities I had, for the people that held me up, and the exciting and interesting work I was able to do.

Terry Gerton You just mentioned some pretty massive and impressive projects. As you look back, is there any one accomplishment or success or program that you go, yes, that’s what it was really all about, that’s where I’m especially proud?

Kshemendra Paul  I have many of those, but my first success, maybe my first love in the public sector, was leading something called the National Information Exchange Model. I was asked soon after coming into the Department of Justice by the then-CIO Van Hitch to take a look and help out and lead the project. I was really pleased to be able to do that with state and local partners, with DHS, other federal partners, and deliver the first version of the exchange model — really providing a core technical aspect of the government’s response post-9/11. Semantic interoperability, right? With computer systems, you have data stovepipes and what terms mean doesn’t necessarily translate across system boundaries, much less boundaries of endeavor like law enforcement, homeland security, intelligence and levels of government. And that was a problem that we solved successfully with the NIEM. Now it’s 20 years old and actually successfully transitioned to OASIS as an international standards body and a standard itself. That effort was a great introduction to the possibility of transformation using data and technology in the public sector.

Terry Gerton Well, data and technology have changed a lot in the last 20 years. Is your sense that the government is able to keep pace?

Kshemendra Paul I think the government does a lot, and there’s a lot of folks across the federal workforce that are quite capable and committed. But the government has challenges, large bureaucracies. Some of that is related to the political process. When I first came into government, the government budget process seemed to work, more or less. That’s dropped off over the years and that’s cascaded; the budget process really is the keystone management process, and I’m a management guy. That dropping off really caused some consternation and made it more difficult. The prevalence of shutdowns, we just went through the longest shutdown, that didn’t help. So there’s challenges, and that was actually a key theme of the conference. We focused at the NAPA conference, the National Academy of Public Administration, on the challenges that the public sector faces, but also the fact that these are long-standing challenges, the drop of trust in government, in some ways the ossification of public administration, and the opportunity for reinvention in this moment — never let a crisis go to waste.

Terry Gerton I’m speaking with Kshemendra Paul. He’s a former senior federal data and tech leader and a newly elected fellow of the National Academy of Public Administration. Kshemendra, you’re joining the Academy at a time when public trust in government is under pressure. How do you hope to participate with the Academy and jointly help the government address some of these issues associated with public trust?

Kshemendra Paul My lane is data, information sharing and technology applied to government to perform and improve government performance. As a part of the Academy, I’m hoping that what I bring to the table can be melded and remixed with the other 1,000 fellows that have different perspectives. They can help make me better and I can help make them better, and together the Academy can put forward positive, constructive and respectful prescriptions for what’s next. I think that’s a major role and a theme at the conference. I also am keen to carry forward ideas about open government. Government needs to be transparent. Government needs to be participatory. Government needs to be collaborative. And I really think that using data in smart and innovative ways to help with setting incentives and organizational design and organizational incentives offers new opportunities for public administration.

Terry Gerton You’re joining this group of a thousand people who’ve had long careers in public service. If you were speaking to someone from the next generation maybe, how would you encourage them to consider a career in public service?

Kshemendra Paul It’s so important not to get caught up and react to the moment, but to be reflective and smart and strategic and respond in the moment. And that response is informed by your values, informed by what makes public administration and public service so important. I think public administration — vigorous public administration that’s transparent and responsive, that works across levels of government — really keeps with the constitutional design that’s reflected in the Federalist Papers and in our constitution as written. So that’s the eye on the prize. A vigorous, effective government is so important to restoring trust in government, to underpinning our democracy and our federated republic. I think the next generation can be part of that solution and respond to the sound of cannon, so to speak, maybe in some way like I did after 9/11 and generations have done so in the past.


