
Federal CIOs want AI-improved CX; customers want assured security

20 January 2026 at 15:50


Interview transcript:

Terry Gerton Gartner’s just done a new survey that’s very interesting around how citizens perceive how they should share data with the government. Give us a little bit of background on why you did the survey.

Mike Shevlin We’re always looking at, and talking to people about, doing some “voice of the customer,” those kinds of things as [government agencies] do development. This was an opportunity for us to get a fairly large sample voice-of-the-customer response around some of the things we see driving digital services.

Terry Gerton There’s some pretty interesting data that comes out of this. It says 61% of citizens rank secure data handling as extremely important, but only 41% trust the government to protect their personal information. What’s driving that gap?

Mike Shevlin To some extent, we have to separate trust in government from the security pieces. You know, if we looked strictly at the, “do citizens expect us to secure their data?” You know, that’s up in the 90% range. So we’re really looking at something a little bit different with this. We’re looking at, and I think one of the big points that came out of the survey, is citizens’ trust in how government is using their data. To think of this, you have to think about kind of the big data. So big data is all about taking a particular dataset and then enriching it with data from other datasets. And as a result, you can form some pretty interesting pictures about people. One of the things that jumps to mind for me, and again, more on the state and local level, is automated license plate readers. What can government learn about citizens through the use of automated license plate readers? Well, you know, it depends on how we use them, right? So if we’re using it and we’re keeping that data in perpetuity, we can probably get a pretty good track on where you are, where you’ve been, the places that you visit. But that’s something that citizens are, of course, concerned about their privacy on. So I think that the drop is not between, are you doing the right things to secure my data while you’re using it, but more about, okay, are you using it for the right purposes? How do I know that? How do you explain it to me?

Terry Gerton It seems to me like the average person probably trusts their search engine more than they trust the government to keep that kind of data separate and secure. But this is really important as the government tries to deliver easier front-facing interfaces for folks, especially consumers of human services programs like SNAP and homeless assistance and those kinds of things. So how important is transparency in this government use of data? And how can the government meet that expectation while still perhaps being able to enrich this data to make the consumer experience even easier?

Mike Shevlin When I come into a service, I want you to know who I am. I want to know that you’re providing me a particular service, that it’s customized. You know, you mentioned the search engine. Does Google or Amazon know you very well? Yeah, I’d say they probably know you better than the government knows you. So my expectation is partly driven out of my experience with the private sector. But at the same time, particularly since all the craze around generative AI, citizens are now much more aware of what else data can do, and as a result, they’re looking for much more control around their own privacy. If you look at, for example in Europe with the GDPR, they’ve got some semblance of control. I can opt out. I can have my data removed. The U.S. has an awful lot of privacy legislation, but nothing as overarching as that. We’ve got HIPAA. We’ve got protections around personally identifiable information. But we don’t have something that overarching. In Spain, if I deal with the government, I can say yes, I only want this one agency to use my data and I don’t want it going anywhere else. We don’t have that in the U.S. I think it’s something that is an opportunity for government digital services to begin to make some promises to citizens and then fulfill those promises or prove that they’re fulfilling those promises.

Terry Gerton I’m speaking with Mike Shevlin. He’s senior director analyst at Gartner Research. Well, Mike, you introduced AI to the conversation, so I’m going to grab that and privacy. How does AI complicate trust and what role does explainable AI play here, in terms of building citizen trust that their privacy will be protected?

Mike Shevlin I think AI complicates trust in part from generative AI and in part from our kind of mistrust in computers as a whole, as entities, as we start to see these things become more human-like. And that’s really, I think, the big thing that generative AI did to us — now we can talk to a computer and get a result. Explainable AI is important because what we’ve seen is that these answers from generative AI aren’t always right. But that’s not what it’s built for. It’s built to make something that sounds like a human. I think the explainable AI part is particularly important for government because I want to know as a citizen, if you’re using my data, if you’re then running it through an AI model and coming back with a result that affects my life, my liberty, my prosperity, how do I know that that was the right answer? And that’s where the explainable AI pieces really come into play. Generative AI is not going to do that, at least not right now; they’re working on it. But it can’t yet, because it builds its decision tree as it evaluates the question, unlike some of the more traditional AI models, the machine learning or graph AI, where those decision trees are pre-built. So it’s much easier to follow back through and say, this is why we got the answer we did. You can’t really do that right now with gen AI.
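To make that traceability point concrete, here is a minimal sketch, assuming scikit-learn and an entirely made-up eligibility dataset (this is not Gartner's method or any agency's model): a pre-built decision tree can be walked back node by node to explain an individual result, the kind of audit trail a generative model does not currently expose.

```python
# A minimal sketch of "explainable by construction": the decision path of a
# pre-built tree can be read back step by step. Data and feature names are toy
# placeholders, not a real benefits model.
from sklearn.tree import DecisionTreeClassifier

# Toy features: [household_income_k, household_size]; label: eligible (1) or not (0).
X = [[20, 4], [35, 2], [60, 1], [28, 5], [75, 3], [18, 2]]
y = [1, 1, 0, 1, 0, 1]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

def explain(sample):
    """Walk the fitted tree and report each test the sample passed through."""
    tree = clf.tree_
    node, steps = 0, []
    while tree.children_left[node] != -1:  # -1 marks a leaf node
        feat, thr = tree.feature[node], tree.threshold[node]
        name = ["income_k", "household_size"][feat]
        if sample[feat] <= thr:
            steps.append(f"{name} = {sample[feat]} <= {thr:.1f}")
            node = tree.children_left[node]
        else:
            steps.append(f"{name} = {sample[feat]} > {thr:.1f}")
            node = tree.children_right[node]
    steps.append(f"decision: {'eligible' if clf.predict([sample])[0] else 'not eligible'}")
    return steps

for step in explain([30, 4]):
    print(step)
```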

Terry Gerton We’re talking to folks in federal agencies every day who are looking for ways to deploy AI, to streamline their backlogs, to integrate considerations, to flag applications where there may be actions that need to be taken, or pass through others that look like they’re clear. From the government’s perspective, how much of that needs to be explained or disclosed to citizens?

Mike Shevlin That’s one of the things I really like about the GDPR: It lays out some pretty simple rules around what’s the risk level associated with this. So for example, if the government is using AI to summarize a document, but then someone is reviewing that summary and making a decision on it, I have less concern than I have if that summary becomes the decision. So I think that’s the piece to really focus on as we look at this and some of the opportunities. Gartner recommends combining AI models, and this will become even more important as we move into the next era of agentic AI or AI agents, because now we’re really going to start having the machines do things for us. And I think that explainability becomes really appropriate.

Terry Gerton What does this mean for contractors who are building these digital services? How can they think about security certifications or transparency features as they’re putting these new tools together?

Mike Shevlin The transparency features are incumbent upon government to ask for. The security pieces, you know, we’ve got FedRAMP, we’ve got some of the other pieces. But if you look at the executive orders on AI, transparency and explainability are among the pillars in those executive orders. So, certainly, government entities should be asking for some of those things. I’m pulling from some law enforcement examples, because that’s usually my specific area of focus. But when I look at some of the Drone as a First Responder programs, and I think it was San Francisco that just released their “here’s all the drone flights that we did, here’s why we did them,” so that people can understand: Hey, yeah, this is some AI that’s involved in this, this is some remote gathering, but here’s what we did and why. And that kind of an audit into the system is huge for citizen confidence. I think those are the kinds of things that government should be thinking about and asking for in their solicitations. How do we prove to citizens that we’re really doing the right thing? How can we show them that if we say we’re going to delete this data after 30 days, we’re actually doing that?
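As an illustration of that last point, here is a minimal Python sketch of how a “delete after 30 days” promise could be made verifiable: a purge job removes expired records and emits a publishable audit entry. The record fields and audit format are hypothetical, not any agency's actual system.

```python
# A minimal sketch of an auditable retention purge: remove records older than
# the retention window and publish counts plus a hash, never the data itself.
import json, hashlib
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)

def purge_expired(records, now=None):
    """Split records into (kept, purged) based on their 'collected_at' timestamp."""
    now = now or datetime.now(timezone.utc)
    kept, purged = [], []
    for rec in records:
        age = now - datetime.fromisoformat(rec["collected_at"])
        (purged if age > RETENTION else kept).append(rec)
    return kept, purged

def audit_entry(purged, now=None):
    """Produce a publishable audit line: a count and a digest of purged record IDs."""
    now = now or datetime.now(timezone.utc)
    digest = hashlib.sha256(
        json.dumps(sorted(r["id"] for r in purged)).encode()
    ).hexdigest()
    return {"run_at": now.isoformat(), "purged_count": len(purged), "purged_ids_sha256": digest}

# Example: one record well past 30 days, one recent.
records = [
    {"id": "plate-001", "collected_at": "2025-11-01T00:00:00+00:00"},
    {"id": "plate-002", "collected_at": (datetime.now(timezone.utc) - timedelta(days=2)).isoformat()},
]
kept, purged = purge_expired(records)
print(audit_entry(purged))
```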

Terry Gerton So Mike, what’s your big takeaway from the survey results that you would want to make sure that federal agencies keep in mind as they go into 2026 and they’re really moving forward in these customer-facing services?

Mike Shevlin So my big takeaway is absolutely around transparency. There’s a lot to be said for efficiency, there’s a lot to be said for personalization. But I think the biggest thing that came from this survey for me was, we all know security is important. We’ve known that for a long time. Several administrations have talked about it as a big factor. And we have policies and standards around that. But the transparency pieces, I think, we’re starting to get into that. We need to get into that a little faster. I think that’s probably one of the quickest wins for government if we can do that.


U.S. health data is disappearing—with potentially serious consequences

13 January 2026 at 16:32

Interview transcript:

Joel Gurin The work that we’re doing now is part of an effort being led by the Robert Wood Johnson Foundation, which has become really concerned about the potential for some real disruptions to what you can think of as the public health data infrastructure. This is the data on all kinds of things, on disease rates, on social determinants of health, on demographic variables that’s really critical to understanding health in this country and working to improve it for all Americans. And we’ve seen a lot of changes over the last year that are very troubling. There are attempts to make some of this data unavailable to the public. Some major research studies have been discontinued. There’ve been deep cuts to the federal staff that are responsible for collecting some of this data. And just cuts to research funding, for example from the NIH overall. So it really adds up to a cross-cutting risk to the infrastructure of public health data that we’ve relied on for decades.

Terry Gerton Talk to us about why this data is so important, why it’s the government’s responsibility, maybe to keep it up to speed, and whether it’s a policy shift that’s driving this, or is it just individual actions?

Joel Gurin From what we can tell, it’s, I would say, a number of policy decisions that are all related to how the Trump administration sees the president’s priorities and how they want to implement those. So it’s not like we’ve seen a wholesale destruction of data, but we’ve seen a lot of targeted changes. Anything related to DEI, to diversity issues, to looking at health inequity. That’s at risk. Any kinds of data related to environmental justice or climate justice — that’s at risk. Data related to the health of LGBTQ people, particularly trans individuals, that’s at risk. So we’re seeing these kinds of policy priorities of the administration playing out in how they relate to the collection of public health data. And this data is critical because government data, number one, some of these data collections are expensive to do and only the government can afford it. And also federal data has a kind of credibility, as a kind of centralized source for information, that other studies don’t have. For example, the administration recently discontinued the USDA’s study of food insecurity, which is critical to tracking hunger in America. And it’s going to be especially important as SNAP benefits are cut back. There are other organizations and institutions that study hunger in America. The University of Michigan has a study, NORC has a study. But the federal study is the benchmark. And losing those benchmarks is what’s troubling.

Terry Gerton One of the recommendations, just to skip ahead, is that more states and localities and nonprofits collect this data if the federal government is not going to. But what does that mean for trust in the data? You mentioned that federal data is usually the gold standard. If we have to rely on a dispersed group of interested organizations to collect it, what happens both to the reliability of the data and the trust in the data?

Joel Gurin It’s a great question, and it’s one that we and a lot of other organizations are looking at now. One of the things that’s important to remember is that a lot of what we see as federal data actually begins with the states. It’s data that’s collected by the states and then fed up to federal agencies that then aggregate it, interpret it and so on. So one of the questions people have now is, could we take some of that state data that already exists and collect it and aggregate it and study it in different ways, if the federal government is going to abdicate that role? There was some very interesting work during COVID, for example, when the Johns Hopkins Center, Bloomberg Center for Government Excellence, pulled together data from all over the country around COVID rates, at a time when the CDC was not really doing that effectively, and their website really became the go-to source. So we have seen places where it’s possible to pull state data together in ways that have a lot of credibility and a lot of impact. Some of the issues are what do the states really need to make that data collection effective? So regardless of what the federal government does with their data, they need mandates from the federal government to collect it, or it won’t be collected. They need funding. About 80% of the CDC’s budget actually goes to state and local, and a lot of that is for data collection, so they need that funding stream to do the work. And they also need networks, which are starting to develop now, where they can sort of share expertise and share insights to make data work on a regional level.

Terry Gerton I’m speaking with Joel Gurin. He’s the president and founder of the Center for Open Data Enterprise. Well, Joel, then let’s back up a little bit and talk about the round table and the research that led into this paper. How did you do it and what were the key insights?

Joel Gurin So one of the things that our organization, the Center for Open Data Enterprise, or CODE, does is we hold roundtables with experts who have different kinds of perspectives on data. And that’s what we did here with Robert Wood Johnson Foundation support. We pulled together a group of almost 80 experts in Washington last summer, and we led them through a very highly facilitated, orchestrated set of breakout discussions. We also did a survey in advance. We did some individual interviews with people. We do a lot of our own desk research. The result is a paper that we’ve just recently published on ensuring the future of essential health data for all Americans. You can find it on our website, odenterprise.org. If you go to our publications page and choose the health section in the drop-down from publications, you’ll find it right there, along with a lot of other op-eds and things we publish related to it. Putting out this paper was really the result of pulling together a lot of information from literally hundreds of pages of notes from those breakout discussions, as well as our own research, as well as tracking everything that we could see in the news. But one of the things that I want to really emphasize, in addition to the analysis that we’ve done of what’s happening and what some of the solutions could be (that’s a fairly lengthy paper and hopefully useful), is that we’ve also put together an online resource hub of what we think are the 70 or so most important public health data sets. And I want to really stress this because we think it’s actually a model for how to look at some of the issues affecting federal data in a lot of areas. We found that by working with these 80 or so experts and doing additional research and surveying them and talking to them, there’s a lot of commonality and common agreement on what are the kinds of data that are really, really critical to public health and what are those sources. Once you know that, it becomes possible for advocates to argue for why we need to keep this data and how it needs to be applied. And it’s also possible to ask questions like, for this particular kind of data, could somebody other than the federal government collect it? And could we develop supplemental or even alternative sources? So we really feel that that kind of analysis, we hope, is a step forward in really figuring out how to address these issues in a practical way.

Terry Gerton That’s really helpful and also a great prototype for, as you say, data in other areas across the federal government that may or may not be getting the visibility that they used to get. What were the key recommendations that come out of the paper?

Joel Gurin Well, we had recommendations on a couple of different levels. We had recommendations to, as we talked about before, to really look at state and local governments as important sources of data. They are already, but could more be done with those? This includes, for example, not just government data collections the way it’s done now, but using community-based organizations to help collect data from the community in a way that ultimately serves communities. We’re also very interested in the potential for what are being called non-traditional data sources, like the analysis of social media data and other kinds of things that can give insights into health. But I think probably the single most important recommendations at the federal level are to continue funding for these critical data sources and to recognize how important they are and to really recognize the principle that there’s an obligation to understand health and improve health for all Americans, which means looking at data that you can disaggregate by demographic variables and so on. I want to say we have had some really positive signs, I think, from Congress, particularly on the overall issue of supporting health research. And when we talk about NIH research, remember some of that is really lab medical research, but a lot of it is research on public health, research on social factors, research on behavioral factors, all of this kind of critical work. And the president’s budget actually recommended a 40%  cut in NIH funding, which is draconian. The Senate Appropriations Committee over the summer said, we actually do not want to do that, and in fact, we want to increase the NIH budget by a small amount. So I think what we’re seeing is there’s a lot of support, bipartisan support in Congress, for protecting research funding that ultimately is the source of a lot of the data we need. Some of this is just because it’s a shared value, and some of it is because those research dollars go to research institutions in congressional districts that representatives and senators want to see continue to be funded. So I think that basic fear that a lot of us had a few months ago, that research was simply going to be defunded, I think, that may not happen. And I would hope that Congress continues both the funding and also support for not only some of this research funding, but agencies like the National Center for Health Statistics, or the Agency for Health Research and Quality, which have been under threat, to really recognize their importance and sustain them.

Terry Gerton One of the challenges we might face, even if Congress does appropriate back at the prior levels, is that much of the infrastructure has been reduced or eliminated, and that’s people and that’s ongoing projects. How long do you think it will take to kind of rebuild back up to the data collection level that we had before, if we do see appropriation levels back to what they were?

Joel Gurin I think that’s a really critical question. You know, early in the administration, 10,000 jobs at HHS were cut, about a quarter of those from the CDC. But there has been some pushback. There was an attempt during the shutdown to do massive layoffs in HHS and CDC. The courts ruled against that. So I’m hoping that we can prevent more of that kind of brain drain. It will take a while to restaff and really get everything up to speed, but we think it’s doable and we hope we can get on that path.


When the U.S. stops tracking global air quality, the world feels it

29 December 2025 at 17:57

Interview transcript:

Terry Gerton The State Department’s Global Air Monitoring Program gave diplomats and citizens abroad real-time data on air pollution and drove transparency worldwide. Its shutdown leaves a gap with serious health and economic consequences. Tahra, thank you so much for joining me. You’ve written recently about probably a little-known program at the U.S. Department of State, the Global Air Monitoring Program. Tell us about that and why it’s so important.

Tahra Vose The Global Air Monitoring Program actually started as a single monitor in Beijing, China, in the early 2000s. As you can imagine — or maybe you can’t, if you haven’t actually been there — some days the air pollution, in Beijing in particular but in multiple megacities of China, was so bad you could not see across the street. It was like living in a cartoon. You thought that you could take a knife and cut a circle out through that pollution. Unfortunately, at that time we only had the Chinese government data to go by for how polluted it really was. And what we were seeing was that the air was rated as a “blue-sky day.” That was the Chinese standard for a good air quality day. And we thought, how can this be possible? I can’t see across the street, but yet you’re telling me it’s only maybe mildly polluted or it is a blue-sky day. It was one of those situations where the facts on the ground just did not match what was being told. So we thought well, let’s see if this is right. One of my colleagues started analyzing the data that was being produced by the Chinese government and found that air monitors were being selectively turned off at times when their readings were getting too high. That’s how they were maintaining this “blue-sky day” average, which was not correct. So knowing that this data was incorrect, we had to take steps to find out what the air quality really was. We ordered a small, actually handheld monitor to begin with — that was the very first one. It was set up outside somebody’s window at the embassy. And its readings showed what we knew to be true, that the air was in fact hazardous or very unhealthy by U.S. EPA standards.

Terry Gerton How did the program evolve then, from that single incident to a worldwide program?

Tahra Vose We continued with that. We bought a larger single monitor, a Met One BAM, and placed that on the roof of the embassy and started to take official readings. We realized we cannot keep this information to ourselves. According to U.S. law, we have a no-double-standard policy, which means if the U.S. government knows of information that could be harmful to U.S. citizens, we need to share that information. So therefore we started putting that information out on a Twitter feed with the basic information of what the air quality was. Then the Chinese authorities started complaining, obviously, because it did not match their data. We called in the EPA to make sure that we were doing everything correctly. Turns out we were. And we honed our data to match exactly with EPA standards, and I don’t mean by manipulating the data, but by reporting it according to EPA standards. Then everybody just gobbled up this information — the Chinese public, everybody else. From there, other posts started calling us, other embassies saying, gosh — the folks in New Delhi called and they’re like, “we have terrible air pollution here too. How do we do this?” And we said, “OK, well, here’s what you need to do. You need to make sure you’re working with the EPA. Make sure that you have this and this and this criteria all set up.” And it just mushroomed from there. Everywhere that we ended up putting that monitor, everybody was happy with it.
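For readers curious what “reporting it according to EPA standards” involves, here is a worked sketch of the EPA Air Quality Index calculation: a piecewise linear interpolation between concentration breakpoints. The PM2.5 breakpoints shown are the pre-2024 table and are included only for illustration, since EPA has since revised them.

```python
# A worked sketch of the EPA AQI formula: AQI = (I_hi - I_lo)/(C_hi - C_lo) * (C - C_lo) + I_lo,
# applied within the breakpoint band that contains the measured concentration.
PM25_BREAKPOINTS = [
    # (C_low, C_high, I_low, I_high, category) -- pre-2024 PM2.5 table, illustrative only
    (0.0,    12.0,    0,   50, "Good"),
    (12.1,   35.4,   51,  100, "Moderate"),
    (35.5,   55.4,  101,  150, "Unhealthy for Sensitive Groups"),
    (55.5,  150.4,  151,  200, "Unhealthy"),
    (150.5, 250.4,  201,  300, "Very Unhealthy"),
    (250.5, 500.4,  301,  500, "Hazardous"),
]

def pm25_aqi(concentration_ug_m3: float):
    """Convert a 24-hour PM2.5 concentration (ug/m3) to an AQI value and category."""
    c = round(concentration_ug_m3, 1)
    for c_lo, c_hi, i_lo, i_hi, category in PM25_BREAKPOINTS:
        if c_lo <= c <= c_hi:
            aqi = (i_hi - i_lo) / (c_hi - c_lo) * (c - c_lo) + i_lo
            return round(aqi), category
    raise ValueError("concentration outside breakpoint table")

# A smog-event reading of 225 ug/m3 lands in "Very Unhealthy" territory (~275 AQI).
print(pm25_aqi(225.0))
```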

Terry Gerton So the program originally had a focus on protecting the health of U.S. citizens in foreign cities and took on a more global aspect. Tell us about really the impact of having U.S.-presented pollution numbers in these foreign cities.

Tahra Vose Well, it was fascinating, at least in China to start with, because when we started presenting the data, the Chinese authorities claimed that we were breaking international covenants and releasing insider data, essentially. And we realized this is not true. And we pushed back within the government itself. It turned out — now this is an interesting little bit of a Chinese insider play here — that the Chinese environmental authorities were actually on our side. They wanted us to present that data because they wanted stronger laws and they also, frankly, wanted more money so they could enforce their existing laws. But there was a break between where the federal environmental agency had authority and where the local provinces did. And local provinces, unfortunately, and their governors tended to have a little too much leeway and ability to manipulate data as needed. But by siding with the federal authority, we were actually able to make them more powerful and to result in more accurate, transparent information throughout China. So that is exactly the type of effect that this had throughout multiple countries. Now, sometimes we’re dealing with former communist, USSR-type countries like Kazakhstan. Other times we’re dealing with monarchies like Thailand. But it didn’t matter. They knew that our data was legitimate, that it could be trusted and they wanted to learn how to do it. So by us expanding this, not only were they interested in U.S. technologies and U.S. sciences on how to do it, but also, how do we build public trust within our own institutions? So it was pretty much warmly welcomed.

Terry Gerton I’m speaking with Tahra Vose. She’s a retired foreign service officer. Tahra, it sounds like a no-brainer and a pretty low-cost program, but it was terminated earlier this year. Can you tell us about the logic behind that?

Tahra Vose Unfortunately, I cannot tell you the logic behind turning off this program. I remember receiving the notice that this program was going to be turned off in the spring of this year, and it was devastating to me. What was said was that the program was too expensive to operate. However, anywhere that the program was already operating, you had the sunk costs of the monitor already installed. You had minimal maintenance fees for the monitor. Publishing the data on the internet is pennies, so I am not quite sure what or where the decision came from for this.

Terry Gerton What would it take to restart the program? Maybe it doesn’t matter in cities where they’ve taken on this responsibility, but there are lots of embassies and lots of places that may not have started their own monitoring program. What would it take to restart it?

Tahra Vose It all depends, I suppose, on exactly how you want to approach it. It’s true that there are places that have graduated off of our monitoring system. We could argue that China, they have adjusted their laws and they are accurately producing that information. But there are so many embassies out there, so many countries that do not have the resources for this, but yet still have bad air pollution. Some ideas that I can come up with off the top of my head are those monitors that are no longer being used at certain embassies could be shipped to others, so then you have no additional costs other than shipping. Turning on the system again to cooperate with EPA and feed in, that’s almost like flipping a switch. I don’t want to upset all of my IT friends on that, but it’s really quite simple.

Terry Gerton We do still have a responsibility to our own citizens in those cities to provide health-related pollution information, I would assume.

Tahra Vose We do, and it’s also an excellent heads-up type of information for us here in the U.S. As we know, air pollution has no borders. We’ve seen the smoke come over from wildfires in Canada. We need monitors within our own country and other countries to know what’s coming. And it’s not just air pollution as well; I mean, the Met One BAM is only for PM2.5 monitoring, but it’s so easy to monitor any other pollutant as needed, including mercury or other contaminants. About 30% of the mercury that is in U.S. waters comes from Asia. We really need to keep an eye on these things. It affects the homeland.


In this Dec. 30, 2016 photo, a man wearing a mask looks out from a bus in Beijing as the capital of China is blanketed by smog. China has long had some of the worst air in the world, blamed on its reliance on coal and a surplus of older, less efficient cars. It has set pollution reduction goals, but also has plans to increase coal mining capacity and eased caps on production when faced with rising energy prices. (AP Photo/Andy Wong, File)

Harmonizing compliance: How oversight modernization can strengthen America’s cyber resilience

24 December 2025 at 16:23

For decades, the federal government has relied on sector-specific regulations to safeguard critical infrastructure. The North American Electric Reliability Corporation (NERC), for example, sets Critical Infrastructure Protection (CIP) standards for the energy sector, while the Transportation Security Administration issues pipeline directives and the Environmental Protection Agency sets rules for water utilities.

While these frameworks were designed to protect individual sectors, the digital transformation of operational technology and information technology has made such compartmentalization increasingly risky.

Today, the boundaries between sectors are blurring – and the gaps between their governance frameworks are becoming attackers’ entry points.

The problem is the lack of harmony.

Agencies are enforcing strong but disconnected standards, and compliance often becomes an end in and of itself, rather than a pathway to resilience.

With the rollout of the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) and the release of the National Institute of Standards and Technology’s Cybersecurity Framework 2.0, the United States has an opportunity to modernize oversight, making it more adaptive, consistent and outcome based.

Doing so will require a cultural shift within federal governance: from measuring compliance to ensuring capability.

Overlapping mandates, uneven protection

Every critical infrastructure sector has its own set of cybersecurity expectations, but those rules vary widely in scope, maturity and enforcement. The Energy Department may enforce rigorous incident response requirements for electric utilities, while TSA might focus its directives on pipeline resilience. Meanwhile, small water utilities, overseen by the EPA, often lack the resources to fully comply with evolving standards.

This uneven terrain creates what I call “regulatory dissonance.” One facility may be hardened according to its regulator’s rulebook, while another, connected through shared vendors or data exchanges, operates under entirely different assumptions. The gaps between these systems can create cascading risk.

The 2021 Colonial Pipeline incident illustrated how oversight boundaries can become national vulnerabilities. While the energy sector had long operated under NERC CIP standards, pipelines fell under less mature guidance until TSA introduced emergency directives after the fact. CIRCIA was conceived to close such gaps by requiring consistent incident reporting across sectors. Yet compliance alone won’t suffice if agencies continue to interpret and implement these mandates in isolation.

Governance as the common language

Modernizing oversight requires more than new rules; it requires shared governance principles that transcend sectors. NIST’s Cybersecurity Framework 2.0 introduces a crucial element in this direction: the new “Govern” function, which emphasizes defining roles, responsibilities and decision-making authority within organizations. This framework encourages agencies and their partners to move from reactive enforcement toward continuous, risk-informed governance.

For federal regulators, this presents an opportunity to align oversight frameworks through a “federated accountability” model. In practice, that means developing consistent taxonomies for cyber risk, harmonized maturity scoring systems and interoperable reporting protocols.

Agencies could begin by mapping common controls across frameworks, aligning TSA directives, EPA requirements and DOE mandates to a shared baseline that mirrors NIST Cybersecurity Framework principles. This kind of crosswalk not only streamlines oversight, but also strengthens public-private collaboration by giving industry partners a clear, consistent compliance roadmap.
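A control crosswalk of that kind can be as simple as a shared data structure. The sketch below is illustrative only: the requirement names and groupings are placeholders, not an official mapping. It shows the idea that each shared-baseline outcome points to the sector-specific rules addressing it, so regulators and operators can see coverage and gaps at a glance.

```python
# A minimal, illustrative control crosswalk: shared baseline outcomes mapped to
# the sector-specific requirements that address them. Entries are placeholders.
CROSSWALK = {
    "detect / continuous monitoring": {
        "NERC CIP": "CIP-007 (system security management, monitoring)",
        "TSA pipeline directive": "continuous monitoring and detection requirement",
        "EPA water guidance": "network monitoring recommendation",
    },
    "respond / incident reporting": {
        "NERC CIP": "CIP-008 (incident reporting and response planning)",
        "TSA pipeline directive": "report incidents to CISA within a set timeframe",
        "EPA water guidance": "incident response plan expectation",
    },
}

def coverage(framework: str):
    """List which shared-baseline outcomes a given framework addresses."""
    return [outcome for outcome, sources in CROSSWALK.items() if framework in sources]

def gaps(framework: str):
    """List shared-baseline outcomes the framework does not yet address."""
    return [outcome for outcome, sources in CROSSWALK.items() if framework not in sources]

print(coverage("TSA pipeline directive"))
print(gaps("EPA water guidance"))
```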

Equally important is data transparency. If the Cybersecurity and Infrastructure Security Agency (CISA), DOE and EPA share a common reporting structure, insights from one sector can rapidly inform others. A pipeline incident revealing supply chain vulnerabilities could immediately prompt water or energy operators to review similar controls. Oversight becomes a feedback loop rather than a series of disconnected audits.

Engineering resilience into policy

One of the most promising lessons from the technology world comes from the “secure-by-design” movement: Resilience cannot be retrofitted. Security must be built into the design of both systems and the policies that govern them.

In recent years, agencies have encouraged vendors to adopt secure development lifecycles and prioritize vulnerability management. But that same thinking can, and should, be applied to regulation itself. “Secure-by-design oversight” means engineering resilience into the way standards are created, applied and measured.

That could include:

  • Outcome-based metrics: Shifting from binary compliance checks (“Is this control in place?”) to maturity indicators that measure recovery time, detection speed or incident containment capability.
  • Embedded feedback loops: Requiring agencies to test and refine directives through simulated exercises with industry before finalizing rules, mirroring how developers test software before release.
  • Adaptive updates: Implementing versioned regulatory frameworks that can be iteratively updated, similar to patch cycles, rather than rewritten every few years through lengthy rulemaking.

Such modernization would not only enhance accountability but also reduce the compliance burden on operators who currently navigate multiple, sometimes conflicting, reporting channels.
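To make the outcome-based metrics idea above concrete, here is a minimal sketch that computes mean time to detect and mean time to recover from a handful of incident records. The field names and timestamps are illustrative, not drawn from any mandated reporting schema.

```python
# A minimal sketch of outcome-based metrics: instead of a yes/no control check,
# compute detection and recovery times from (illustrative) incident records.
from datetime import datetime
from statistics import mean

incidents = [
    {"occurred": "2025-03-01T02:00", "detected": "2025-03-01T05:30", "recovered": "2025-03-02T01:00"},
    {"occurred": "2025-06-10T14:00", "detected": "2025-06-10T14:20", "recovered": "2025-06-10T20:00"},
]

def hours_between(start: str, end: str) -> float:
    """Elapsed hours between two ISO 8601 timestamps."""
    return (datetime.fromisoformat(end) - datetime.fromisoformat(start)).total_seconds() / 3600

mttd = mean(hours_between(i["occurred"], i["detected"]) for i in incidents)   # mean time to detect
mttr = mean(hours_between(i["detected"], i["recovered"]) for i in incidents)  # mean time to recover

print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h")
```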

Making oversight measurable

As CIRCIA implementation begins in earnest, agencies must ensure that reporting requirements generate actionable insights. That means designing systems that enable real-time analysis and trend detection across sectors, not just retrospective compliance reviews.

The federal government can further strengthen resilience by integrating incident reporting into national situational awareness frameworks, allowing agencies like CISA and DOE to correlate threat intelligence and issue rapid, unified advisories.

Crucially, oversight modernization must also address the human dimension of compliance. Federal contractors, third-party service providers and local operators often sit at the outer edge of regulatory reach but remain central to national resilience. Embedding training, resource-sharing and technical assistance into federal mandates can elevate the entire ecosystem, rather than penalizing those least equipped to comply.

The next step in federal cyber strategy

Effective harmonization hinges on trust and reciprocity between government and industry. The Joint Cyber Defense Collaborative (JCDC) has demonstrated how voluntary partnerships can accelerate threat information sharing, but most collaboration remains one-directional.

To achieve true synchronization, agencies must move toward reciprocal intelligence exchange, aggregating anonymized, cross-sector data into federal analysis centers and pushing synthesized insights back to operators. This not only democratizes access to threat intelligence, but also creates a feedback-driven regulatory ecosystem.

In the AI era, where both defenders and attackers are leveraging machine learning, shared visibility becomes the foundation of collective defense. Federal frameworks should incorporate AI governance principles, ensuring transparency in data usage, algorithmic accountability and protection against model exploitation, while enabling safe, responsible innovation across critical infrastructure.

A unified future for resilience governance 

CIRCIA and NIST Cybersecurity Framework 2.0 have laid the groundwork for a new era of harmonized oversight — one that treats resilience as a measurable capability rather than a compliance checkbox.

Achieving that vision will require a mindset shift at every level of governance. Federal regulators must coordinate across agencies, industry partners must participate in shaping standards, and both must view oversight as a dynamic, adaptive process.

When frameworks align, insights flow freely, and regulations evolve as quickly as the threats they are designed to mitigate, compliance transforms from a bureaucratic exercise into a national security asset. Oversight modernization is the blueprint for a more resilient nation.


Dr. Jerome Farquharson is managing director and senior executive advisor at MorganFranklin Cyber.


A Colonial Pipeline station is seen, Tuesday, May 11, 2021, in Smyrna, Ga., near Atlanta. Colonial Pipeline, which delivers about 45% of the fuel consumed on the East Coast, halted operations last week after revealing a cyberattack that it said had affected some of its systems. (AP Photo/Mike Stewart)

A new honor for a leader who’s shaped cybersecurity policy and talent across sectors

23 December 2025 at 17:41

Interview transcript:

Terry Gerton You have had an amazing career, really, multi-sector, across all kinds of dimensions and a focus on cyber security. What drew you into public administration and the public space?

Diana Burley I really wanted to make sure that technology and technology changes worked for all people. And so, you know, I often tell a story about my grandmother and me being excited and telling her about some new innovation that was going to happen on the World Wide Web back in the ’90s. And she just looked at me like, that’s not… exciting, and what about the people, you know, that I won’t get to speak with anymore and the stories that I won’t hear in the community. And that’s really what struck me: as we think about all the wonderful things that technology can do to make our lives more efficient and, in many ways, better, we cannot forget about the people and making sure that as we implement these new technologies, we are doing it in a responsible way.

Terry Gerton It’s kind of hard to even imagine, if you look back, the massive change in our lives as a result of technology. How have you kept yourself on the cutting edge of policy and talent?

Diana Burley I read a lot. I listen a lot to podcasts and radio stations and interviews, and I engage with the community. I think that in cybersecurity, especially when I would talk to my students, I made sure that they understood that this is not a career space where you can learn it and then go do it and forget about learning. It is truly an example of a space where you have to be continuously learning and engaging, and be excited about that. And so that’s what I do.

Terry Gerton As you think about your career, it’s full of recognitions and accomplishments and impact. Is there one thing maybe that stands out that you’re most proud of or a place where you think you really had an impact?

Diana Burley You know, every now and then, I will hear from a former student, or even someone that I didn’t actually teach but who saw me speak somewhere or heard me give advice to someone, and they’ll reach out just out of the blue and thank me or tell me something about their careers. And really that is the greatest feeling — to know that you have positively impacted someone and helped them to continue to grow.

Terry Gerton And in the technology space, there’s a lot of talk these days that technology has a real responsibility in terms of our lack of trust or our loss of trust in institutions. As you’re a new fellow of the National Academy of Public Administration, how do you see the role right now for public administration and public institutions in rebuilding that trust?

Diana Burley Well, public institutions have a significant responsibility. It is incumbent upon all of us to ensure that the work that we are doing is done in a transparent way and in a way that the communities and the citizens that we’re working to serve are able to not just hear the end result, not just understand the decision or see or deal with the decision, but actually understand the process and have an opportunity to engage in that process.

Terry Gerton Well, you’re certainly in a position where you have an impact on that, leading research at the Brookings Institution. Are there particular policies or approaches that you would recommend, especially in the cybersecurity space, to help build that trust back?

Diana Burley It’s really all about transparency. I mean, that really is not just the practice that I think is important, but it’s also what I believe should be at the core of the policy solutions: making sure that people understand the rules of the road, how data was incorporated into the systems, how their data is being used, really making sure that individuals have some sense of agency and ownership over their own personal selves. We used to just think about agency over our physical selves, but now we have to believe that it is also important for us to have agency over our digital selves. And that to me is the most important thing, regardless of who the people are.

Terry Gerton I’m speaking with Dr. Diana Burley. She’s the senior vice president of research at the Brookings Institution and a newly elected fellow of the National Academy of Public Administration. Diana, becoming a NAPA fellow is a big milestone. It’s sort of a culminating credit to your career. What does it mean to you personally to be inducted into that organization?

Diana Burley It feels good to know that my work is being recognized for its impact. There are so many just extraordinary NAPA fellows, and their work has made a difference and their work continues to make a difference. And that has always been my goal, to make sure that the work that I was doing had an impact, a positive impact, on someone’s life. And so this recognition helps me to just know that that is true, even in some small way.

Terry Gerton The Academy’s got its fingers in lots of different pies and it’s a cross-sector community. How are you hoping to be engaged with the work that NAPA has ongoing?

Diana Burley I’m going to continue doing what I do on digital transformation and thinking about how technology can help us serve the public better and help us be more efficient in the ways that we conduct our work. And so I am going to engage with the Academy in those spaces and just make sure that I’m bringing the best of what I know to the work.

Terry Gerton One of NAPA’s big focuses is on building the next generation of public servants. If you were chatting with a young person today who was still considering a future in public service, what advice would you have for them?

Diana Burley Come join us. You know, public service is so important. It really is the backbone of our democracy. The individuals who work in these public spaces, they don’t do it for accolades. They don’t do it for lots of money. They do it because they believe in making our society work and in helping each other. And so I do believe that it’s not just a mission, it’s a calling. And I would encourage every young person to take advantage of the opportunity that they have.

Terry Gerton Any particular guidance for folks who might think about a career in cybersecurity?

Diana Burley Be willing to keep learning. You have to read constantly, learn constantly, engage with people. And if you do that, you will be able to continue to move forward in cybersecurity.


Interoperability and standardization: Cornerstones of coalition readiness

23 December 2025 at 15:23

In an era increasingly defined by rapid technological change, the ability of the United States and its allies to communicate and operate as a unified force has never been more vital. Modern conflict now moves at the pace of data, and success depends on how quickly information can be shared, analyzed and acted upon across Defense Department and coalition networks. Today, interoperability is critical to maintaining a strategic advantage across all domains.

The DoD has made progress toward interoperability goals through initiatives such as Combined Joint All-Domain Command and Control (CJADC2), the Modular Open Systems Approach (MOSA) and the Sensor Open Systems Architecture (SOSA). Each underscores a clear recognition that victory in future conflicts will hinge on the ability to connect every sensor, platform and decision-maker in real time. Yet as adversaries work to jam communications and weaken alliances, continued collaboration between government and industry remains essential.

The strategic imperative

Interoperability allows the Army, Navy, Marine Corps, Air Force and Space Force to function as one integrated team. It ensures that data gathered by an Army sensor can inform a naval strike or that an Air Force feed can guide a Space Force operation, all in seconds. Among NATO and allied partners, this same connectivity ensures that an attack on one member can trigger a fast, coordinated, data-driven response by all. That unity of action forms the backbone of deterrence.

Without true interoperability, even the most advanced technology can end up isolated. The challenge is compounded by aging systems, proprietary platforms and differing national standards. Sustained commitment to open architectures and shared standards is the only way to guarantee compatibility while still encouraging innovation.

The role of open standards

Open standards make real interoperability possible. Common interfaces like Ethernet or IP networking allow systems built by different nations or vendors to talk to one another. When governments and companies collaborate on open frameworks instead of rigid specifications, innovation can thrive without sacrificing integration.

History has demonstrated that rigid design rules can slow progress and limit creativity, and it’s critical we now find the right balance. That means defining what interoperability requires while giving end users the freedom to achieve it in flexible ways. The DoD’s emphasis on modular, open architectures allows industry to innovate within shared boundaries, keeping future systems adaptable, affordable and compatible across domains and partners.

Security at the core

Interoperability depends on trust, and trust relies on security. Seamless data sharing among allies must be matched with strong protection for classified and mission-critical information, whether that data is moving across networks or stored locally.

Information stored on devices, vehicles or sensors, also known as data at rest, must be encrypted to prevent exploitation if it is captured or lost. Strong encryption ensures that even if adversaries access the hardware, the information remains unreadable. The loss of unprotected systems has repeatedly exposed vulnerabilities, reinforcing the need for consistent data at rest safeguards across all platforms.
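As a minimal illustration of the data-at-rest principle, the sketch below uses symmetric encryption via the third-party Python "cryptography" package. A real deployment would add hardware-backed key storage, key rotation and integrity monitoring; the point here is only that captured hardware yields ciphertext rather than readable mission data.

```python
# A minimal sketch of data-at-rest protection with symmetric encryption.
# The key would live in an HSM or key vault in practice, never beside the data.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice, stored in hardware-backed key storage
cipher = Fernet(key)

sensor_log = b"lat=36.17,lon=-115.14,track_id=42"   # illustrative payload
stored_blob = cipher.encrypt(sensor_log)             # what actually sits on the device
recovered = cipher.decrypt(stored_blob)              # possible only with the key

assert recovered == sensor_log
print("ciphertext prefix:", stored_blob[:16])
```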

The rise of quantum computing only heightens this concern. As processing power increases, current encryption methods will become outdated. Shifting to quantum-resistant encryption must be treated as a defense priority to secure joint and coalition data for decades to come.

Lessons from past operations

Past crises have highlighted how incompatible systems can cripple coordination. During Hurricane Katrina, for example, first responders struggled to communicate because their radios could not connect. The same issue has surfaced in combat, where differing waveforms or encryption standards limited coordination among U.S. services and allies.

The defense community has since made major strides, developing interoperable waveforms, software-defined radios and shared communications frameworks. But designing systems to be interoperable from the outset, rather than retrofitting them later, remains crucial. Building interoperability in from day one saves time, lowers cost and enhances readiness.

The rise of machine-to-machine communication

As the tempo of warfare increases, human decision-making alone cannot keep up with the speed of threats. Machine-to-machine communication, powered by artificial intelligence and machine learning, is becoming a decisive edge. AI-driven systems can identify, classify and respond to threats such as hypersonic missiles within milliseconds, long before a human could react.

These capabilities depend on smooth, standardized data flow across domains and nations. For AI systems to function effectively, they must exchange structured, machine-readable data through shared architectures. Distributed intelligence lets each platform make informed local decisions even if communications are jammed, preserving operational effectiveness in contested environments.
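The “structured, machine-readable data” requirement can be pictured as a shared message schema that every sensor and platform emits and parses. The sketch below is illustrative only; the field names and JSON wire format are placeholders, not an actual coalition standard.

```python
# A minimal sketch of a shared track-message schema: any producer can emit it,
# any consumer can parse it, independent of vendor or nation. Fields are placeholders.
from dataclasses import dataclass, asdict
import json

@dataclass
class TrackMessage:
    source: str          # e.g., "army-radar-07"
    track_id: str
    lat: float
    lon: float
    altitude_m: float
    classification: str  # e.g., "unknown-air"
    timestamp_utc: str   # ISO 8601

def to_wire(msg: TrackMessage) -> bytes:
    """Serialize to a common wire format every participant can read."""
    return json.dumps(asdict(msg)).encode()

def from_wire(payload: bytes) -> TrackMessage:
    """Rebuild the typed message from the shared wire format."""
    return TrackMessage(**json.loads(payload))

msg = TrackMessage("army-radar-07", "T-1138", 38.9, -77.0, 12000.0, "unknown-air", "2025-12-23T15:23:00Z")
assert from_wire(to_wire(msg)) == msg
```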

Cloud and hybrid architectures

Cloud and hybrid computing models are reshaping how militaries handle information. The Space Development Agency’s growing network of low Earth orbit satellites is enabling high bandwidth, global connectivity. Yet sending vast amounts of raw data from the field to distant cloud servers is not always practical or secure.

Processing data closer to its source, at the tactical edge, strikes the right balance. By combining local processing with cloud-based analytics, warfighters gain the agility, security and resilience required for modern operations. This approach also minimizes latency, ensuring decisions can be made in real time when every second matters.
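A rough sketch of that edge-plus-cloud split follows: raw samples are reduced locally, a local alert decision is made immediately, and only a compact summary travels over the long-haul link. The threshold and the upload stand-in are hypothetical, not part of any fielded system.

```python
# A minimal sketch of tactical-edge processing: summarize locally, act locally,
# and ship only kilobytes of summary to cloud analytics.
from statistics import mean

ALERT_THRESHOLD = 0.9  # illustrative detection-confidence cutoff

def process_at_edge(raw_samples):
    """Reduce raw samples to a small summary and decide locally whether to act."""
    confidences = [s["confidence"] for s in raw_samples]
    summary = {
        "count": len(raw_samples),
        "max_conf": max(confidences),
        "mean_conf": round(mean(confidences), 3),
    }
    summary["local_alert"] = summary["max_conf"] >= ALERT_THRESHOLD
    return summary

def upload_to_cloud(summary):
    """Stand-in for sending the compact summary over a satellite or long-haul link."""
    print("uploading:", summary)

samples = [{"confidence": c} for c in (0.42, 0.55, 0.97, 0.61)]
summary = process_at_edge(samples)
if summary["local_alert"]:
    print("act locally; do not wait for the cloud round trip")
upload_to_cloud(summary)
```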

A call to action

To maintain an edge over near-peer rivals, the United States and its allies must double down on open, secure and interoperable systems. Interoperability should be built into every new platform’s design, not treated as an afterthought. The DoD can further this goal by enforcing standards that require seamless communication across services and allied networks, including baseline requirements for data encryption at rest.

Adopting quantum-safe encryption should also remain a top priority to safeguard coalition systems against emerging threats. Ongoing collaboration between allies is equally critical, not only to harmonize technical standards, but to align operational procedures and shared security practices.

Government and industry must continue working side by side. The speed of technological change demands partnerships that can turn innovation into fielded capability quickly. Open, modular architectures will ensure defense systems evolve with advances in AI, networking and computing, while staying interoperable across generations of hardware and software.

Most importantly, interoperability should be viewed as a lasting strategic advantage, not just a technical goal. The nations that can connect, coordinate and act faster than their adversaries will maintain a strategic advantage. The continued leadership of the DoD and allied defense organizations in advancing secure, interoperable and adaptable systems will keep the United States and its partners ahead of near-peer competitors for decades to come.

 

Ray Munoz is the chief executive officer of Spectra Defense Technologies and a veteran of the United States Navy.

Cory Grosklags is the chief commercial officer of Spectra Defense Technologies.

