Workforce, supply chain factor into reauthorizing National Quantum Initiative

House lawmakers are discussing a reauthorization of the National Quantum Initiative, eyeing agency prize challenges, workforce issues and supply chain concerns, among other key updates.

During a hearing hosted by the House Committee on Science, Space and Technology on Thursday, lawmakers sought input from agencies leading quantum information science efforts. Chairman Brian Babin (R-Texas) said he is working with Ranking Member Zoe Lofgren (D-Calif.) on a reauthorization of the NQI.

“This effort seeks to reinforce U.S. leadership in quantum science, technology and engineering, address workforce challenges, and accelerate commercialization,” Babin said.

The National Quantum Initiative Act of 2018 created a national plan for quantum technologies spearheaded by agencies including the National Institute of Standards and Technology, the National Science Foundation and the Energy Department.

As the House committee works on its bill, Senate lawmakers earlier this month introduced a bipartisan National Quantum Initiative Reauthorization Act. The bill would extend the initiative for an additional five years through 2034 and reauthorize key agency programs.

The Senate bill would also expand the NQI to include the National Aeronautics and Space Administration’s (NASA) research initiatives, such as quantum satellite communications and quantum sensing.

Meanwhile, in September, the White House named quantum information sciences as one of six priority areas in governmentwide research and development budget guidance. “Agencies should deepen focused efforts, such as centers and core programs, to advance basic quantum information science, while also prioritizing R&D that expands the understanding of end user applications and supports the maturation of enabling technologies,” the guidance states.

During the House hearing on Thursday, lawmakers sought feedback on several proposals to include in the reauthorization bill. Rep. Valerie Foushee (D-N.C.) said the Energy Department had sent lawmakers technical assistance in December, including a proposal to provide quantum prize challenge authority to agencies that sit on the quantum information science subcommittee of the National Science and Technology Council.

Tanner Crowder, quantum information science lead at Energy’s Office of Science, said the prize challenges would help the government use “programmatic mechanisms” to drive the field forward.

“We’ve talked a little bit about our notices of funding opportunities, and the prize challenge would just be another, another mechanism to drive the field forward, both in potential algorithmic designs, hardware designs, and it just gives us more flexibility to push the forefront of the field,” Crowder said.

Crowder was also asked about how the reauthorization bill should direct resources for sensor development and quantum network infrastructure.

“We want to be able to connect systems together, and we need quantum networks to do that,” Crowder responded. “It is impractical to send quantum information over classical networks, and so we need to continue to push that forefront and look to interconnect heterogeneous systems at the data scale level, so that we can actually extract this information and compute upon it.”

Lawmakers also probed the witnesses on supply chain concerns related to quantum information sciences. James Kushmerick, director of the Physical Measurement Laboratory at the National Institute of Standards and Technology, was asked about U.S. reliance on Europe and China for components like lasers and cooling equipment.

“One of the things we are looking for within the reauthorization is to kind of refocus and kind of onshore or develop new supply chains, not even just kind of duplicate what’s there, but move past that,” Kushmerick said. “Through the Quantum Accelerator Program, we’re looking to focus on chip-scale lasers and modular, small cryo-systems that can be deployed in different ways, as a change agent to kind of move forward.”

Several lawmakers also expressed concerns about the quantum information science workforce, pointing out that cuts to the NSF and changes to U.S. immigration policy under the Trump administration could hamper research and development.

Kushmerick said the NIST-supported Quantum Economic Development Consortium polled members in the quantum industry to better understand workforce challenges.

“It’s not just in quantum physicists leading the efforts,” Kushmerick said. “It’s really all the way through to engineers and technicians and people at all levels. So I really think we need a whole government effort to increase the pipeline through certificates to degrees and other activities.”

This Feb. 27, 2018, photo shows electronics for use in a quantum computer in the quantum computing lab at the IBM Thomas J. Watson Research Center in Yorktown Heights, N.Y. Describing the inner workings of a quantum computer isn’t easy, even for top scholars. That’s because the machines process information at the scale of elementary particles such as electrons and photons, where different laws of physics apply. (AP Photo/Seth Wenig)

Lawmaker eyes bill to codify NIST AI center

A top House lawmaker is developing legislation to codify the National Institute of Standards and Technology’s Center for AI Standards and Innovation into law.

The move to codify CAISI comes as lawmakers and the Trump administration debate the federal government’s role in overseeing AI technology.

Rep. Jay Obernolte (R-Calif.), chairman of the House Science, Space and Technology Committee’s research and technology subcommittee, said he has a “forthcoming” bill dubbed the “Great American AI Act.”

During a Wednesday hearing, Obernolte said the bill will formalize CAISI’s role to “advance AI evaluation and standard setting.”

“The work it does in doing AI model evaluation is essential in creating a regulatory toolbox for our sectoral regulators, so everyone doesn’t have to reinvent the wheel,” Obernolte said.

The Biden administration had initially established an AI Safety Institute at NIST. But last summer, the Trump administration rebranded the center to focus on standards and innovation.

Last September, the center released an evaluation of the Chinese “DeepSeek” AI model that found it lagged behind U.S. models on cost, security and performance. More recently, CAISI released a request for information on securing AI agent systems.

Despite the Trump administration’s rebranding, however, Obernolte noted the NIST center’s functions have largely stayed consistent. He argued codifying the center would provide stability.

“I think everyone would agree, it’s unhealthy for us to have every successive administration spin up a brand new agency that, essentially, is doing something with a long-term mission that needs continuity,” he said.

Obernolte asked Michael Kratsios, the director of the White House Office of Science and Technology Policy, what he thought about codifying the center into law.

Kratsios said CAISI is a “very important part of the larger AI agenda.” He also said it was important for the administration to reframe the center’s work around innovation and standards, rather than safety.

“It’s absolutely important that the legacy work around standards relating to AI are undertaken by CAISI, and that’s what they’re challenged to do,” Kratsios said. “And that’s the focus that they should have, because the great standards that are put out by CAISI and by NIST are the ones that, ultimately, will empower the proliferation of this technology across many industries.”

Later on in the hearing, Kratsios said the NIST center would play a key role in setting standards for “advanced metrology of model evaluation.”

“That is something that can be used across all industries when they want to deploy these models,” he said. “You want to have trust in them so that when everyday Americans are using, whether it be medical models or anything else, they are comfortable with the fact that it has been tested and evaluated.”

Obernolte and Rep. Sarah McBride (D-Del.), meanwhile, have also introduced the “READ AI Act.” The bill would direct NIST to develop guidelines for how AI models should be evaluated, including standard documentation.

Asked about the bill, Kratsios said it was worthy of consideration, but added that any such efforts should avoid just focusing on frontier AI model evaluation.

“The reality is that the most implementation that’s going to happen across industry is going to happen through fine-tuned models for specific use cases, and it’s going to be trained on specific data that the large frontier models never had access to,” he added.

“In my opinion, the greatest work that NIST could do is to create the science behind how you measure models, such that any time that you have a specific model – for finance, for health, for agriculture – whoever’s attempting to implement it has a framework and a standard around how they can evaluate that model,” Kratsios continued. “At the end of the day, the massive proliferation is going to be through these smaller, fine-tuned models for specific use cases.”

Discussion around the role of the NIST center comes amid a larger debate over the role of the federal government in setting AI standards. In a December executive order, President Donald Trump called for legislative recommendations to create a national framework that would preempt state AI laws.

But during the hearing, Kratsios offered few specifics on what he and Special Adviser for AI and Crypto David Sacks have been considering.

“That’s something that I very much look forward to working with everyone on this committee on,” Kratsios said.  “What was clear in the executive order, specifically, was that, any proposed legislation should not preempt otherwise lawful state actions relating to child safety protections, AI compute and data infrastructure, and also state government procurement and use of AI.”

“But, we look forward over the next weeks and months to be working with Congress on a viable solution,” he added.

Lawmakers boost funding for NIST after proposed cuts

Congressional appropriators are looking to maintain, and in some cases increase, the National Institute of Standards and Technology’s work in areas like artificial intelligence, cybersecurity and quantum research.

The appropriations agreement released by House and Senate negotiators this week would provide $1.8 billion for NIST, rather than the funding cuts the Trump administration had proposed for the agency. The “minibus” appropriations package rejected many of the administration’s proposed budget cuts and limited agency reorganizations.

The agreement includes $1.25 billion for NIST’s research and services division, more than $542 million above the Trump administration’s request. The White House had proposed cutting NIST funding and positions in areas like cybersecurity and privacy; health and biological systems measurements; and physical infrastructure and resilience.

Meanwhile, industry and lawmakers had urged Commerce Secretary Howard Lutnick to protect NIST’s budget and workforce.

The agreement also includes $405 million for NIST’s “Community Project Funding,” more commonly referred to as earmarks. The White House had proposed phasing out that funding in fiscal 2026.

The appropriations agreement also includes $175 million to continue funding NIST’s Hollings Manufacturing Extension Partnership Program. The MEP program includes 97 positions and helps fund a national network of centers across all 50 states and Puerto Rico that provide services to small- and medium-sized U.S. manufacturers.

The Trump administration had proposed defunding the MEP program, arguing it was outdated and had struggled to address challenges facing the U.S. manufacturing sector.

The explanatory section of the appropriations agreement, however, includes strong language that forbids Commerce from revising the MEP program without gaining “explicit approval” from the committees as part of the appropriations process.

“The secretary is directed to continue the program under the same terms and conditions as were required in fiscal year 2024 and to issue awards at no less than the amounts in fiscal year 2024,” appropriators wrote. “Further, the agreement directs that no funds are provided to execute or plan for a program that reduces the number of active MEP Centers and that the secretary shall minimize, by rapidly executing funding competitions and renewing existing Centers in a timely manner, the periods of time when no MEP Center is active in any state or Puerto Rico.”

The funding in the agreement includes $55 million for NIST’s AI research and measurement efforts. Up to $10 million is intended to expand NIST’s Center for AI Standards and Innovation. NIST plays a key role in the Trump administration’s AI agenda. 

Lawmakers also want NIST to conduct various evaluations, including one comparing Chinese and U.S. AI capabilities and another evaluating foreign AI models.

The bill also includes $128 million in base construction funding to repair and upgrade major research facilities, including facilities at NIST’s main campus in Gaithersburg, Md.

Why AI agents won’t replace government workers anytime soon

The vendor demo looks flawless, the script even cleaner. A digital assistant breezes through forms, updates systems and drafts policy notes while leaders watch a progress bar. The pitch leans on the promised agentic AI advantage.

Then the same agents face real public-sector work and stall on basic steps. The newest empirical benchmark from researchers at the nonprofit Center for AI Safety and data annotation company Scale AI finds current AI agents completing only a tiny fraction of jobs at a professional standard. Agents struggled to deliver production-ready outcomes on practical projects, including an explorer for World Happiness data, a short 2D promo, a 3D product animation, a container-home concept, a simple Suika-style game, and an IEEE-formatted manuscript. This new study should help provide some grounding on what agents can do inside federal programs today, why they will not replace government workers soon, and how to harvest benefits without risking mission, compliance or trust.

Benchmarks, not buzzwords, tell the story

Bold marketing favors smooth narratives of autonomy. Public benchmarks favor reality. In the WebArena benchmark, an agent built on GPT-4 achieved low end-to-end task success compared with human performance on real websites that require navigation, form entry and retrieval. The OSWorld benchmark assembles hundreds of desktop tasks across common apps with file handling and multi-step workflows, and documents persistent brittleness when agents face inconsistent interfaces or long sequences. Software results echo the same pattern. The original SWE-bench evaluates real GitHub issues across live repositories and shows that models generate useful patches, but need scaffolding and review to land working changes.

Duration matters. The H-CAST report correlates agent performance with human task time and finds strong results on short, well-bounded steps and sharp drop-offs on long, multi-hour work. That split maps directly to government operations. Agents can draft a memo outline or a SQL snippet. They falter when the job spans multiple systems, requires policy nuance, or demands meticulous document hygiene.

Building a public dashboard, as in the study run by researchers at the Center for AI Safety and Scale AI, is not a single chart; it is a reliable pipeline with provenance, documentation and accessible visuals. A 2D promo is not a storyboard alone; it is consistent assets, rights-safe media, captions and export settings that pass accessibility checks. A container-home concept is not a render; it is geometry, constraints and safety considerations that survive a technical review.

Federal teams must also contend with rules that raise the bar for autonomy. The AI Risk Management Framework from the National Institute of Standards and Technology gives a shared vocabulary for mapping risks and controls. These guardrails do not block generative AI in government; they just make unsupervised autonomy a poor bet.

What this means for mission delivery, compliance and the workforce

The near-term value is clear. Treat agents as accelerators for specific tasks inside projects, not substitutes for the people who own outcomes. That approach matches field evidence. A large deployment in customer support showed double-digit gains in resolutions per hour when a generative assistant helped workers with suggested responses and knowledge retrieval, with the biggest lift for less-experienced staff. Translate that into federal work and you get faster first drafts, cleaner queries, more consistent formatting, and quicker starts on visuals, all checked by employees who understand policy, context and stakeholders.

Compliance reinforces the same division of labor. To run in production, systems must pass FedRAMP authorization, recordkeeping requirements and privacy controls. Content must meet Section 508 standards for accessibility. Security teams will lean on the joint secure AI development guidelines from the Cybersecurity and Infrastructure Security Agency and international partners to push model and system builders toward stronger practices. Auditors will use the Government Accountability Office’s accountability framework to probe governance, data quality and human oversight. Every one of those checkpoints increases the value of staff who can judge quality, interpret rules and stitch outputs into agency processes.

The fear that the large majority of federal work will be automated soon does not match the evidence. Agents still miss long sequences, stall at brittle interfaces, and struggle with multi-file deliverables. They produce assets that look plausible but fail validation or policy review. They need context from the people who understand stakeholders, statutes, and mission tradeoffs. That leaves plenty of room for productivity gains without mass replacement. It also shifts work toward specification, review and integration, roles that exist across headquarters and field offices.

A practical playbook federal leaders can use now

Plan for augmentation, not substitution. When I help government agencies adopt AI tools, we start by mapping projects into linked steps and flag the ones that benefit from an assistive agent. Drafting a response to a routine inquiry, summarizing a meeting transcript, extracting fields from a form, generating a chart scaffold, and proposing test cases are all candidates. Require a human owner for every deliverable, and publish acceptance criteria that catch the common failure modes seen in the benchmarks, including missing assets, inconsistent naming, broken links and unreadable exports. Maintain an audit trail that shows prompts, sources and edits so the work is FOIA-ready.
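
To make that audit trail concrete, here is a minimal sketch, in Python with invented field names, of the kind of record an agency might keep for each agent-assisted deliverable, logging prompts, sources and human edits so the work stays reviewable and FOIA-ready:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class AgentWorkRecord:
    """One audit-trail entry for an agent-assisted deliverable."""
    deliverable: str                # what was produced, e.g. a draft reply
    human_owner: str                # the employee accountable for the output
    prompt: str                     # the instruction given to the agent
    sources: list[str] = field(default_factory=list)      # documents cited
    human_edits: list[str] = field(default_factory=list)  # reviewer changes
    approved: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_record(record: AgentWorkRecord, path: str = "audit_log.jsonl") -> None:
    """Append the record as one JSON line, keeping the log searchable and exportable."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

append_record(AgentWorkRecord(
    deliverable="Routine inquiry response draft",
    human_owner="program.analyst@agency.example",
    prompt="Draft a plain-language reply to the attached constituent inquiry.",
    sources=["inquiry_2025-0142.pdf", "policy_handbook_v3.docx"],
    human_edits=["Corrected statutory citation", "Simplified second paragraph"],
    approved=True,
))
```

One JSON line per deliverable keeps the log simple to search and easy to export when a records request arrives.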

Ground the program in federal policy. Adopt the AI Risk Management Framework for risk mapping, and scope pilots to systems that can inherit or achieve FedRAMP authorization. Treat models and agents as components, not systems of record. Keep sensitive data inside authorized boundaries. Validate accessibility against Section 508 standards before anything goes public. For procurement, require vendors to demonstrate performance on public benchmarks like WebArena, OSWorld or SWE-bench using your agency’s constraints rather than glossy demos.

Staff and labor planning should reflect the new shape of work. Expect fewer hours on rote drafting and more time on specification, review and integration. Upskill employees to write good task definitions, evaluate model outputs, and enforce standards. Track acceptance rates, rework and defects by category so leaders can decide where to expand scope and where to hold the line. Publish internal guidance that explains when to use agents, how to attribute sources, and where human approval is mandatory. Share outcomes with the AI.gov community and look for common building blocks across agencies.

A brief scenario shows how this plays out without wishful thinking. A program office stands up a pilot for public-facing dashboards using open data. An agent produces first-pass code to ingest and visualize the dataset, similar to the World Happiness example. A data specialist verifies source URLs, adds documentation, and applies the agency’s color and accessibility standards. A policy analyst reviews labels and context language for accuracy and plain English. The team stores prompts, code and decisions with metadata for audit. In the same sprint, a communications specialist uses an agent to draft a 30-second script for a social clip and a designer converts it into a simple 2D animation. The outputs move faster, quality holds steady, and the people who understand mission and policy remain responsible for the results.

AI agents deliver lift on specific tasks and stumble on long, cross-tool projects. Public benchmarks on the web, desktop and code back that statement with reproducible evidence. Federal policy adds governance that rewards augmentation over autonomy. The smart move for agencies is to put agents to work inside projects while employees stay accountable for outcomes, compliance and trust. That plan banks real gains today and sets agencies up for more automation tomorrow, without betting programs and reputations on a hype cycle.

Dr. Gleb Tsipursky is CEO of the future-of-work consultancy Disaster Avoidance Experts.

Outgoing NIST cyber workforce director talks job roles, skills-based hiring, and AI

During his decade of service at the National Institute of Standards and Technology, Rodney Petersen has had a front-row seat to the evolving state of the cyber workforce across government, industry and academia.

In his role as director of education and workforce at NIST’s Applied Cybersecurity Division, Petersen led efforts to standardize cyber workforce job descriptions and better understand skills gaps that are now a recurring theme in cyber policy discussions.

He served as the second director of the National Initiative for Cybersecurity Education, now known simply by its acronym, “NICE.” NIST’s “NICE Framework” is now an internationally accepted taxonomy for describing professional cyber roles, as well as the knowledge and skills needed to work in the fast-evolving field.

Those efforts have been foundational as the national cyber workforce evolved into a pressing issue at the highest levels of government. 

“One of the biggest changes in my 11 years here has just been the proliferation and the growth and expansion of education and workforce efforts,” Petersen said. “And so that’s mostly a good thing, because it shows that we’re prioritizing and putting investments in place to both increase the supply and also find the demand. But at the same time, it makes NICE’s mission all the more important to make sure we’re creating a coordinated approach across the U.S.”

Petersen is set to retire from his post at the end of the year. He recently sat down with Federal News Network to discuss his career at NIST, the evolution of cyber workforce initiatives over the last 10 years, and the future of the cybersecurity career field amid the rise of artificial intelligence.

(This interview transcript has been lightly edited for length and clarity).

Justin Doubleday What led you to where you are, to NIST and to the NICE program, in the first place?

Rodney Petersen Since NICE works so much on cybersecurity careers, you have to remind people that it’s not always linear, or maybe you don’t end up where you expected to be, and that’s certainly been true of me.

In undergraduate and through law school, I certainly expected to be in a legal career. I got quickly introduced to higher education and education policy. So my first job out of law school was actually at Michigan State University and then subsequently University of Maryland. So that was maybe my first pivot to move into academia, but continuing to use the law and policy expertise. And then back in the mid ‘90s, there was something called the world wide web and the internet that started hitting college campuses, and I began to combine my legal policy expertise with the growing field of information technology and work for the first CIO and vice president of information technology at the University of Maryland.

And that eventually led me to cybersecurity, where, once again, it was an emerging field and topic. Not a lot of history, certainly within colleges and universities, of having personnel doing that work. The Association of Colleges and Universities that focused on it was EDUCAUSE, and they brought me in to establish their first program in cybersecurity and eventually the Higher Education Information Security Council. Then maybe my final pivot from there was to NIST, which was a position in the federal government, but not just focusing on cybersecurity from an operational or an IT perspective but from an education and workforce perspective. So again, I appreciated the opportunity to pivot and continue to work on another dimension of cybersecurity, which was: Now, how do we create the next generation of cybersecurity workers that the nation needs?

Justin Doubleday As you reflect on that last decade, what were some of the biggest challenges or successes, just things that immediately pop up into your mind as, ‘Wow, it’s 2025. I can’t believe we worked through that just five or 10 years ago?’

Rodney Petersen What I didn’t say earlier is that what really attracted me to the government was NIST, the National Institute of Standards and Technology, not only because it’s a standards organization, but because it’s widely respected among industry and, in my case, academia for providing common standards, guidelines and best practices for cybersecurity. I really didn’t know a lot about the NICE program, certainly not the NICE framework, which I’m sure we’ll talk more about in a moment, but that provided a similar kind of common taxonomy and lexicon.

Now, when I say I didn’t know much about the NICE framework, it’s a little misleading, because I was involved in some of the early days when DHS was trying to create a common body of knowledge for cybersecurity. It was a combination of that work and the work I was doing with EDUCAUSE across higher education, you know, 4,000-plus colleges and universities in the United States. We were trying to find some common ground and do things that could lead to shared services or shared approaches and the like. NIST was a great place to bring that all together.

The NICE framework specifically evolved over the years, starting from that common body of knowledge and the CIO Council recognizing the need, from an employer’s perspective, to have some commonality across the cybersecurity workforce. NIST began working with the Department of Defense and the Department of Homeland Security, culminating in the 2017 NIST special publication that established the NICE framework for the first time. And then fast forwarding to today, where we work increasingly with private sector employers as well as academia to really create some common vision, common strategy and a mission that really teaches us to integrate approaches across the various ecosystems.

Justin Doubleday How challenging has that been in terms of getting to this widespread adoption of the NICE framework? I’m sure you measure that in different ways at NIST. How far have we come in terms of that standardization, and how far do we still have to go?

Rodney Petersen If you’re an organization or a sector who’s starting from ground zero, and if you discover the NICE framework or the NIST cybersecurity framework or any other similar guidance document, you’re in a perfect situation to adopt it wholesale, because you haven’t started anything else, or you don’t have to retrofit something else. And there are certainly examples, in fact, internationally, where other countries start to get into the cybersecurity workforce space, and they discover the NICE framework. It really gives them a starting place, a jump start to building their own unique framework that meets their needs.

Where it’s more challenging is where there’s existing work and efforts that you either have to retrofit or try to modify or adjust. An example of that is that we work closely with the NSA, CISA and the National Centers of Academic Excellence in Cybersecurity. They provide designations to colleges and universities that meet their guidelines for what a cybersecurity education program should look like, and it’s based upon what they call knowledge units. And those knowledge units, which actually have some preceding standards and organizations that they were building upon, weren’t necessarily built on the NICE framework.

We use the word ‘aligned’ to make sure that we’re aligned, that they can learn from what we’re doing and apply it, and we can learn from what they’re doing and apply it as well. So I think the biggest challenge is to take those existing organizations or initiatives that already are making great progress and have a lot of momentum, and making sure they’re in step with what we’re doing and vice versa.

Justin Doubleday Part of your work at the NICE program has been launching the CyberSeek database as well, which I think is probably one of the most publicly visible and publicly cited databases that the NIST cybersecurity program puts out there. It publishes data and statistics on cybersecurity job openings across the public and private sectors and other cyber workforce stats. Back when you launched it in 2016, what was the initial goal, and how do you think it’s helped to define some of the cyber workforce challenges that the country has faced over the last decade?

Rodney Petersen At the time, there was a lot of speculation and a lot of survey data about what the cybersecurity workforce needed to be. If you asked any chief information security officer how many workers they need, they may say 10. When you ask the same question of how many they can afford and how many they plan to hire, the answer might be one. And so thankfully in 2014, when I came in, the Cybersecurity Enhancement Act that Congress passed asked us to forecast what the workforce needs were, starting with the federal government and then looking also at the private sector.

So CyberSeek really came on the scene as an analytics tool, of course, in partnership with CompTIA and now Lightcast, to look at what actual jobs are posted, to begin to quantify that, and then to do it in the context of the NICE framework. We’re looking more specifically at jobs to align to the NICE framework categories and work roles, and to do it not only nationally, but by state and major metro area. And so whether you’re a member of Congress, or you may be at a college or university, or you may be a local workforce board, and you really want to see what the demand is in your area, the CyberSeek tool not only gives you the number of open jobs in cybersecurity, but lets you dissect that number to look at the types of jobs, what requirements or qualifications are necessary to compete for those jobs, and what the compensation is for those jobs. I think bringing that all together really allows us to better forecast what the cybersecurity workforce needs are, both now and in the future.

Justin Doubleday One of the major points in this conversation around the cyber workforce was the 2023 national cyber workforce and education strategy. As you reflect on this cyber workforce and education issue becoming a national strategy led out of the White House, are there any really impactful outcomes from that strategy over the last couple of years, or are there still some things on the to-do list that you’re particularly keeping track of, even as you get ready for retirement?

Rodney Petersen NICE really was an outgrowth of the 2008 Comprehensive National Cybersecurity Initiative. And as that later evolved and established the NICE program office, one of the things we were asked to do was provide some unification across the different investments happening in the federal government, and then by extension things that are happening in academia, in the private sector. And again, back in 2014 when Congress passed the Cybersecurity Enhancement Act, they asked us to build upon successful existing programs. And then later, in 2018, the first Trump administration issued an executive order asking us to come up with findings and recommendations. One of the things they asked us to do was an environmental scan of, again, existing programs, and to assess and evaluate their effectiveness.

So I think as a starting point, any new strategy, any new administration, any new person to this field, needs to acknowledge and research what currently exists and what’s being successful. What should we continue to do, versus what should we stop, or what should we change, or what should we introduce as a new initiative or a new platform? So I think when that previous administration’s National Cyber Workforce and Education Strategy came out, there was a lot of effort, after some time, to take a step back and look at all the existing programs, not only in the federal government, but at the state and local level, in the private sector and academia, and then to build upon that.

And I think they did an excellent job of recognizing some of the good efforts that were already underway. And then fast forward to the present, I think the same is true. One of the biggest changes in my 11 years here has just been the proliferation and the growth and expansion of education and workforce efforts. And so that’s mostly a good thing, because it shows that we’re prioritizing and putting investments in place to both increase the supply and also find the demand. But at the same time, it makes NICE’s mission all the more important to make sure we’re creating a coordinated approach across the U.S.

Justin Doubleday One of the facets of that [2023] strategy was strengthening the federal cyber workforce, and that’s, of course, a big area of interest for our audience. Do you have any assessment of all these different initiatives across the federal workforce, civilian side, defense side? As you mentioned, a lot has sprung up over the last five or 10 years. How cohesive are those, and how successful have they been, as this new administration now looks at its own strategy?

Rodney Petersen In 2015, Congress passed the Federal Cybersecurity Workforce Assessment Act, and that was an early effort to try to essentially identify the number of cybersecurity workers we had in the federal government and that we needed in the federal government. And again, to do that, we had to have some kind of standard to measure against. So the NICE framework was the required tool to use to do that measurement, especially to answer, how many cybersecurity workers do we need? We need a recruitment and retention strategy.

And I would say again, there were a lot of positive efforts led by the national cyber director, but also in partnership with the Office of Personnel Management, Office of Management and Budget, and all the departments and agencies like Commerce, NIST and others who needed that workforce to try to really continue to build momentum and fine tune the federal practices. One of our community subgroups talks about modernizing talent management, and this isn’t meant explicitly for the federal government, but for the private sector as well.

But I would say the federal government is in need of a lot of modernization. Going back to how we currently classify federal jobs, often that OPM classification series, a lot of them are 2210, IT or information security workforce [roles]. And yet the jobs, as the NICE framework represents, the work roles are much more specific than that. So I think there is an ongoing need to evolve that process, but I think some good progress has been made over the years.

Justin Doubleday How much progress do you think we’ve made in the shift towards skills-based hiring?

Rodney Petersen At a minimum, there’s increased awareness of the value and the importance that it brings. And really it comes down to relying less on traditional credentials like academic degrees and maybe even certifications and experience, and looking more specifically at the skills, knowledge and capabilities that a job candidate would bring to the workforce. I would think that most organizations, most hiring managers, most cybersecurity professionals are on board with that.

On the other hand, I think the practices still continue to lag. We still have job announcements that require the degrees, the experiences and things that really disqualify a vast majority of individuals who are probably quite capable. In fact, not only capable today, but have the potential to be the future workforce that is needed. So we need to limit those job announcements or job descriptions that disqualify people due to the lack of those traditional credentials, and really double down on the skills, the competencies that are needed.

Justin Doubleday More generally, you’ve written about the need for cybersecurity awareness among the workforce for at least a decade now in your role at NIST. We now live in a world of annual cybersecurity trainings and PSAs. How would you grade cybersecurity awareness efforts over the past decade, and the level of acumen that we all generally have about cybersecurity?

Rodney Petersen My answer is probably pretty similar to the one I just gave about how we’re doing on skills-based education. The awareness programs, I would give an ‘A.’ The awareness efforts and the initiatives are very prolific. The outcomes, the behavioral change, are probably more a ‘C-minus.’ And I think what we’re all discovering with that gap is not that there aren’t good intentions, requirements or educational efforts in place. But it really comes down to changing behaviors, and we need to continue to look for more active ways to influence how employees or citizens or consumers make choices about what they do online, what they do with their computer and how they respond to phishing emails, or whatever the case may be. The training, the one-way directional information flow, is not going to be enough. We need to look for more opportunities to simulate, to provide multimedia, to use exercises, and to use performance-based assessments and exams that really reinforce the behavior change we’re striving to direct.

Justin Doubleday I have to bring it up: artificial intelligence. AI is on everyone’s mind. If you go on LinkedIn, there’s just so much speculation about how AI is going to completely change the future of the cybersecurity career field ... I’d love to get your thoughts on how you think about that, and how the NIST NICE program has started to incorporate some of the taxonomies and the skills that we’re seeing around AI come into play.

Rodney Petersen It’s not just that AI is going to impact the future. AI is impacting the present, and I think we see that all around us. One example is in education. How is AI being used by students? How is it or can it be used by teachers and faculty members? How can it be used by the organizations or the enterprises that run schools and universities? Just last week, we had our K-12 cybersecurity education conference where we had a student panel, and much of their discussion was around their use, their daily use, their hourly use of AI. And they encourage teachers and administrators to embrace it, because it’s not going to go away, and it’s going to be, in their opinion, a helpful part of their learning and educational experience.

A lot of NICE’s focus starts around the impact on the education or the learning enterprise. But from a cybersecurity perspective, I think NIST and NICE as well, and I would add the Centers of Academic Excellence in Cybersecurity, have been primarily focused on three impacts. One is, how do we make sure AI technology is secure? How do we make sure security is built in by design, which is fundamental to all software, all hardware, all kinds of technology considerations? And again, the NICE framework talks about design and development as a phase of the technology process life cycle where we need good cybersecurity practices.

We also think about how can AI be used for cybersecurity? How can those that are cybersecurity practitioners leverage AI for their benefit, all the way from writing code, monitoring against attacks and using it for defense, a variety of ways that we can leverage the benefits of AI for the cybersecurity of organizations. And then, thirdly, how do we defend against AI-generated attacks, which we are going to see increasingly. We’re seeing it presently. So it’s those three aspects: building it securely, how do we use it to our advantage, and how do we defend against that?

How agencies can ensure trust and transparency in digital identity

From the airport to the DMV, government employees are required to provide efficient and accurate services while also verifying identities at several touchpoints with the public. Despite this need, many agencies still rely on physical identification materials, such as driver’s licenses, passports and Social Security cards. As the demand for faster, modernized services grows, this outdated approach limits agencies’ ability to keep pace with mission demands.

However, with the ubiquity of smartphones and a federal push toward better customer experience, agencies are adopting digital identity and biometrics solutions through initiatives like the General Services Administration’s Login.gov, and the Department of Homeland Security and Transportation Security Administration’s REAL ID.

A digital ID is a collection of data that represents an individual or entity in the digital world, often including information like usernames, passwords and personal details. Used for authentication and access control in various online services and systems, a digital ID puts identity in the palm of a user’s hand. Through initiatives like Login.gov, users have a single account to log in to several federal websites, requiring them to remember fewer passwords, strengthening data integrity and improving mission efficiency.
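
Login.gov federates identity to agency sites using standard protocols such as OpenID Connect, which is what lets a single account work across many federal websites. As a rough sketch of that pattern only, not Login.gov’s actual configuration, here is how a relying-party application might build an OIDC authorization-code request; the endpoint, client ID and redirect URI below are placeholders:

```python
import secrets
from urllib.parse import urlencode

# Placeholder identity-provider endpoint and client values, invented for
# illustration; they are not Login.gov's production configuration.
AUTHORIZATION_ENDPOINT = "https://idp.example.gov/openid_connect/authorize"
CLIENT_ID = "urn:gov:agency:openidconnect:sp:example"
REDIRECT_URI = "https://app.example.gov/auth/callback"

def build_authorization_url() -> tuple[str, str, str]:
    """Build an OIDC authorization-code request for a shared identity provider.

    Returns the URL plus the state and nonce values, which the application
    must store (for example, in the session) and verify on the callback.
    """
    state = secrets.token_urlsafe(32)   # protects the callback against CSRF
    nonce = secrets.token_urlsafe(32)   # binds the ID token to this request
    params = {
        "response_type": "code",        # authorization-code flow
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid email",        # identity attributes being requested
        "state": state,
        "nonce": nonce,
    }
    return f"{AUTHORIZATION_ENDPOINT}?{urlencode(params)}", state, nonce

url, state, nonce = build_authorization_url()
print(url)  # the user is redirected here to authenticate once, centrally
```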

Biometrics and digital identity technology use unique biological traits, such as fingerprints, facial features and iris patterns, to verify that a person is who they claim to be. By linking a physical identifier to digital credentials, they add a level of assurance and security that traditional identity tools can’t match.

However, for biometric technology adoption to be successful, federal agencies must align standards for compliance and interoperability while also focusing on building and maintaining public trust.

Compliance and transparency 

As digital identity solutions are adopted, standards that govern the use and transparency of biometric data are not optional add-ons — they’re necessary at a foundational level.

Biometric security standards like the National Institute of Standards and Technology’s SP 800-63-4 Digital Identity Guidelines, the ISO/IEC 19795 series, GSA’s FICAM approach and FIDO Alliance specifications define how data should be collected, stored and secured to improve and maintain accuracy and privacy.

Integrating security and privacy into all identity tools and solutions is just as vital as standards compliance. Building security and privacy into systems from the beginning helps improve public trust and prevents costly redesigns or retrofits after vulnerabilities appear. By creating a network of solutions that prioritize security- and privacy-by-design principles, agencies ensure that protections are integrated into every stage of the biometric lifecycle.

Despite these opportunities for increased security, efficiency and progress, much of the public has doubts about data collection, bias and privacy, creating barriers to implementation and adoption. For this reason, transparency is critical to building and maintaining public trust.

Federal agencies must clearly communicate the parameters of biometric technology use, including how biometric data is collected, stored and accessed. Establishing offices like DHS’ Office of Biometric Identity Management provides a centralized point for disseminating pertinent information, like new policies and procedures, or addressing questions about biometric data use and where the public might encounter the technology.

Another opportunity to increase transparency is mandating third-party audits and compliance reporting that align with approved, existing frameworks, like NIST’s Digital Identity Guidelines.

The digital security landscape is constantly evolving, so federal agencies must prioritize transparency through continuous testing to gain public trust through demonstrated compliance, accuracy and responsible use.

Digital identity is transforming the way governments deliver public services. But technology can’t drive progress alone. Trust is critical for success.

Complying with recognized security standards and improving data transparency lay the groundwork for a thriving digital identity ecosystem in federal government, but its continued success relies on cross-agency and industry collaboration and third-party validation. When combined, these actions will transform operations, creating a unified and secure digital ID future.

Jesús Aragón is the CEO and cofounder of Identy.io.

DoD expands login options beyond CAC

The Defense Department is expanding secure methods of authentication beyond the traditional Common Access Card, giving users alternative options to log into its systems when CAC access is “impractical or infeasible.”

A new memo, titled “Multi-Factor Authentication (MFA) for Unclassified & Secret DoD Networks,” lays out when users can access DoD resources without CAC and public key infrastructure (PKI). The directive also updates the list of approved authentication tools for different system impact levels and applications.

In addition, the new policy provides guidance on where some newer technologies, such as FIDO passkeys, can be used and how they should be protected. 

“This memorandum establishes DoD non-PKI MFA policy and identifies DoD-approved non-PKI MFAs based on use cases,” the document reads.

While the new memo builds on previous DoD guidance on authentication, earlier policies often did not clearly authorize specific login methods for particular use cases, leading to inconsistent implementation across the department.

Individuals in the early stages of the recruiting process, for example, may access limited DoD resources without a Common Access Card using basic login methods such as one-time passcodes sent by phone, email or text. As recruits move further through the process, they must be transitioned to stronger, DoD-approved multi-factor authentication before getting broader access to DoD resources.

For training environments, the department allows DoD employees, contractors and other partners without CAC to access training systems only after undergoing identity verification. Those users may authenticate using DoD-approved non-PKI multi-factor authentication — options such as one-time passcodes are permitted when users don’t have a smartphone. Access is limited to low-risk, non-mission-critical training environments.
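
The one-time passcodes the memo permits in limited cases are commonly generated with the time-based one-time password (TOTP) algorithm standardized in RFC 6238. As an illustration of the mechanism only, not DoD’s implementation, here is a minimal sketch using the Python standard library; the shared secret is a well-known documentation value, not a real credential:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time passcode per RFC 6238 (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval      # 30-second time step
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Documentation-only secret; real secrets are provisioned per user and device.
print(totp("JBSWY3DPEHPK3PWP"))
```

Because the code is derived from a shared secret rather than a hardware-protected private key, a user can be tricked into typing it into a phishing page, which is exactly the weakness behind the push toward phishing-resistant authenticators discussed below.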

Although the memo identifies 23 use cases, the list is expected to be a living document and will be updated as new use cases emerge.

Jeremy Grant, managing director of technology business strategy at Venable, said the memo provides much-needed clarity for authorizing officials.

“There are a lot of new authentication technologies that are emerging, and I continue to hear from both colleagues in government and the vendor community that it has not been clear which products can and cannot be used, and in what circumstances. In some cases, I have seen vendors claim they are FIPS 140 validated but they aren’t — or claim that their supply chain is secure, despite having notable Chinese content in their device. But it’s not always easy for a program or procurement official to know what claims are accurate. Having a smaller list of approved products will help components across the department know what they can buy,” Grant told Federal News Network.

DoD’s primary credential

The memo also clarifies what the Defense Department considers its primary credential — prior policies would go back and forth between defining DoD’s primary credential as DoD PKI or as CAC. 

“From my perspective, this was a welcome — and somewhat overdue — clarification. Smart cards like the CAC remain a very secure means of hardware-based authentication, but the CAC is also more than 25 years old and we’ve seen a burst of innovation in the authentication industry where there are other equally secure tools that should also be used across the department. Whether a PKI certificate is carried on a CAC or on an approved alternative like a YubiKey shouldn’t really matter; what matters is that it’s a FIPS 140 validated hardware token that can protect that certificate,” Grant said.

Policy lags push for phishing-resistant authentication

While the memo expands approved authentication options, Grant said it’s surprising the guidance stops short of requiring phishing-resistant authenticators and continues to allow the use of legacy technologies such as one-time passwords that the National Institute of Standards and Technology, Cybersecurity and Infrastructure Security Agency and Office of Management and Budget have flagged as increasingly susceptible to phishing attacks.

Both the House and Senate have been pressing the Defense Department to accelerate its adoption of phishing-resistant authentication. Congress acknowledged that the department has established a process for approving new multi-factor authentication technologies, but noted that few approvals have successfully made it through. Now, the Defense Department is required to develop a strategy to “ensure that phishing-resistant authentication is used by all personnel of the DoD” and to provide a briefing to the House and Senate Armed Services committees by May 1, 2026.

The department is also required to ensure that legacy, phishable authenticators such as one-time passwords are retired by the end of fiscal 2027.

“I imagine this document will need an update in the next year to reflect that requirement,” Grant said.

Harmonizing compliance: How oversight modernization can strengthen America’s cyber resilience

For decades, the federal government has relied on sector-specific regulations to safeguard critical infrastructure. The North American Electric Reliability Corporation, for example, sets Critical Infrastructure Protection (NERC CIP) standards for the energy sector, while the Transportation Security Administration issues pipeline directives and the Environmental Protection Agency makes water utility rules.

While these frameworks were designed to protect individual sectors, the digital transformation of operational technology and information technology has made such compartmentalization increasingly risky.

Today, the boundaries between sectors are blurring – and the gaps between their governance frameworks are becoming attackers’ entry points.

The problem is the lack of harmony.

Agencies are enforcing strong but disconnected standards, and compliance often becomes an end in and of itself, rather than a pathway to resilience.

With the rollout of the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) and the release of the National Institute of Standards and Technology’s Cybersecurity Framework 2.0, the United States has an opportunity to modernize oversight, making it more adaptive, consistent and outcome based.

Doing so will require a cultural shift within federal governance: from measuring compliance to ensuring capability.

Overlapping mandates, uneven protection

Every critical infrastructure sector has its own set of cybersecurity expectations, but those rules vary widely in scope, maturity and enforcement. The Energy Department may enforce rigorous incident response requirements for electric utilities, while TSA might focus its directives on pipeline resilience. Meanwhile, small water utilities, overseen by the EPA, often lack the resources to fully comply with evolving standards.

This uneven terrain creates what I call “regulatory dissonance.” One facility may be hardened according to its regulator’s rulebook, while another connected through shared vendors or data exchanges operates under entirely different assumptions. The gaps between these systems can create cascading risk.

The 2021 Colonial Pipeline incident illustrated how oversight boundaries can become national vulnerabilities. While the energy sector had long operated under NERC CIP standards, pipelines fell under less mature guidance until TSA introduced emergency directives after the fact. CIRCIA was conceived to close such gaps by requiring consistent incident reporting across sectors. Yet compliance alone won’t suffice if agencies continue to interpret and implement these mandates in isolation.

Governance as the common language

Modernizing oversight requires more than new rules; it requires shared governance principles that transcend sectors. NIST’s Cybersecurity Framework 2.0 introduces a crucial element in this direction: the new “Govern” function, which emphasizes defining roles, responsibilities and decision-making authority within organizations. This framework encourages agencies and their partners to move from reactive enforcement toward continuous, risk-informed governance.

For federal regulators, this presents an opportunity to align oversight frameworks through a “federated accountability” model. In practice, that means developing consistent taxonomies for cyber risk, harmonized maturity scoring systems and interoperable reporting protocols.

Agencies could begin by mapping common controls across frameworks, aligning TSA directives, EPA requirements and DOE mandates to a shared baseline that mirrors NIST Cybersecurity Framework principles. This kind of crosswalk not only streamlines oversight, but also strengthens public-private collaboration by giving industry partners a clear, consistent compliance roadmap.
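
A crosswalk of that kind is, at bottom, a mapping exercise. Here is a toy sketch of the idea; the sector control identifiers are invented for illustration rather than drawn from actual TSA, EPA or DOE catalogs, while the six functions are NIST CSF 2.0’s:

```python
# Toy crosswalk: sector-specific requirements mapped to the six NIST CSF 2.0
# functions. The sector control IDs below are invented placeholders.
CSF_FUNCTIONS = {"GV": "Govern", "ID": "Identify", "PR": "Protect",
                 "DE": "Detect", "RS": "Respond", "RC": "Recover"}

CROSSWALK = {
    "TSA-PIPELINE-IR-01": ["RS", "DE"],   # hypothetical incident response directive
    "EPA-WATER-ACCESS-03": ["PR"],        # hypothetical access control rule
    "DOE-ELEC-GOV-02": ["GV", "ID"],      # hypothetical governance requirement
}

def coverage_by_function(crosswalk: dict[str, list[str]]) -> dict[str, list[str]]:
    """Invert the crosswalk: list which sector controls touch each CSF function."""
    coverage: dict[str, list[str]] = {f: [] for f in CSF_FUNCTIONS}
    for control, functions in crosswalk.items():
        for fn in functions:
            coverage[fn].append(control)
    return coverage

# Gaps surface immediately: any CSF function with no mapped sector control.
for fn, controls in coverage_by_function(CROSSWALK).items():
    print(f"{CSF_FUNCTIONS[fn]:<9} -> {', '.join(controls) or 'NO COVERAGE'}")
```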

Equally important is data transparency. If the Cybersecurity and Infrastructure Security Agency, DOE and EPA share a common reporting structure, insights from one sector can rapidly inform others. A pipeline incident revealing supply chain vulnerabilities could immediately prompt water or energy operators to review similar controls. Oversight becomes a feedback loop rather than a series of disconnected audits.

Engineering resilience into policy

One of the most promising lessons from the technology world comes from the “secure-by-design” movement: Resilience cannot be retrofitted. Security must be built into the design of both systems and the policies that govern them.

In recent years, agencies have encouraged vendors to adopt secure development lifecycles and prioritize vulnerability management. But that same thinking can, and should, be applied to regulation itself. “Secure-by-design oversight” means engineering resilience into the way standards are created, applied and measured.

That could include:

  • Outcome-based metrics: Shifting from binary compliance checks (“Is this control in place?”) to maturity indicators that measure recovery time, detection speed or incident containment capability (a brief sketch follows this list).
  • Embedded feedback loops: Requiring agencies to test and refine directives through simulated exercises with industry before finalizing rules, mirroring how developers test software before release.
  • Adaptive updates: Implementing versioned regulatory frameworks that can be iteratively updated, similar to patch cycles, rather than rewritten every few years through lengthy rulemaking.
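
To illustrate the first item in that list, here is a minimal sketch of outcome-based metrics computed from incident timestamps; the records and field names are invented for illustration:

```python
from datetime import datetime, timedelta

# Invented incident records: when each event started, was detected, was contained.
incidents = [
    {"start": datetime(2025, 3, 1, 8, 0),
     "detected": datetime(2025, 3, 1, 9, 30),
     "contained": datetime(2025, 3, 1, 14, 0)},
    {"start": datetime(2025, 6, 12, 22, 15),
     "detected": datetime(2025, 6, 13, 1, 0),
     "contained": datetime(2025, 6, 13, 6, 45)},
]

def mean_delta(pairs: list[tuple[datetime, datetime]]) -> timedelta:
    """Average elapsed time between paired event timestamps."""
    total = sum(((later - earlier) for earlier, later in pairs), timedelta())
    return total / len(pairs)

# Detection speed and containment capability as measurable maturity indicators.
mttd = mean_delta([(i["start"], i["detected"]) for i in incidents])
mttc = mean_delta([(i["detected"], i["contained"]) for i in incidents])
print(f"Mean time to detect:  {mttd}")
print(f"Mean time to contain: {mttc}")
```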

Such modernization would not only enhance accountability but also reduce the compliance burden on operators who currently navigate multiple, sometimes conflicting, reporting channels.

Making oversight measurable

As CIRCIA implementation begins in earnest, agencies must ensure that reporting requirements generate actionable insights. That means designing systems that enable real-time analysis and trend detection across sectors, not just retrospective compliance reviews.

The federal government can further strengthen resilience by integrating incident reporting into national situational awareness frameworks, allowing agencies like CISA and DOE to correlate threat intelligence and issue rapid, unified advisories.

Crucially, oversight modernization must also address the human dimension of compliance. Federal contractors, third-party service providers and local operators often sit at the outer edge of regulatory reach but remain central to national resilience. Embedding training, resource-sharing and technical assistance into federal mandates can elevate the entire ecosystem, rather than penalizing those least equipped to comply.

The next step in federal cyber strategy

Effective harmonization hinges on trust and reciprocity between government and industry. The Joint Cyber Defense Collaborative (JCDC) has demonstrated how voluntary partnerships can accelerate threat information sharing, but most collaboration remains one-directional.

To achieve true synchronization, agencies must move toward reciprocal intelligence exchange, aggregating anonymized, cross-sector data into federal analysis centers and pushing synthesized insights back to operators. This not only democratizes access to threat intelligence, but also creates a feedback-driven regulatory ecosystem.

In the AI era, where both defenders and attackers are leveraging machine learning, shared visibility becomes the foundation of collective defense. Federal frameworks should incorporate AI governance principles, ensuring transparency in data usage, algorithmic accountability and protection against model exploitation, while enabling safe, responsible innovation across critical infrastructure.

A unified future for resilience governance 

CIRCIA and NIST Cybersecurity Framework 2.0 have laid the groundwork for a new era of harmonized oversight — one that treats resilience as a measurable capability rather than a compliance checkbox.

Achieving that vision will require a mindset shift at every level of governance. Federal regulators must coordinate across agencies, industry partners must participate in shaping standards, and both must view oversight as a dynamic, adaptive process.

When frameworks align, insights flow freely, and regulations evolve as quickly as the threats they are designed to mitigate, compliance transforms from a bureaucratic exercise into a national security asset. Oversight modernization is the blueprint for a more resilient nation.

 

Dr. Jerome Farquharson is managing director and senior executive advisor at MorganFranklin Cyber.

A Colonial Pipeline station is seen, Tuesday, May 11, 2021, in Smyrna, Ga., near Atlanta. Colonial Pipeline, which delivers about 45% of the fuel consumed on the East Coast, halted operations last week after revealing a cyberattack that it said had affected some of its systems. (AP Photo/Mike Stewart)