
The Supreme Court’s dangerous double standard on independent agencies

The Supreme Court appears poised to deliver a contradictory message to the American people: Some independent agencies deserve protection from presidential whim, while others do not. The logic is troubling, the implications profound and the damage to our civil service system could be irreparable.

In December, during oral arguments in Trump v. Slaughter, the court’s conservative majority signaled it would likely overturn or severely weaken Humphrey’s Executor v. United States, the 90-year-old precedent protecting independent agencies like the Federal Trade Commission from at-will presidential removal. Chief Justice John Roberts dismissed Humphrey’s Executor as “just a dried husk,” suggesting the FTC’s powers justify unlimited presidential control. Yet just weeks later, during arguments in Trump v. Cook, those same justices expressed grave concerns about protecting the “independence” of the Federal Reserve, calling it “a uniquely structured, quasi-private entity” deserving special constitutional consideration.

The message is clear: Wall Street’s interests warrant protection, but the rights of federal workers do not.

The MSPB: Guardian of civil service protections

This double standard becomes even more glaring when we consider Harris v. Bessent, where the D.C. Circuit Court of Appeals ruled in December 2025 that President Donald Trump could lawfully remove Merit Systems Protection Board Chairwoman Cathy Harris without cause. The MSPB is not some obscure bureaucratic backwater — it is the cornerstone of our merit-based civil service system, the institution that stands between federal workers and a return to the spoils system that once plagued American government with cronyism, inefficiency and partisan pay-to-play services.

The MSPB hears appeals from federal employees facing adverse actions including terminations, demotions and suspensions. It adjudicates claims of whistleblower retaliation, prohibited personnel practices and discrimination. In my and Harris’ tenure alone, the MSPB resolved thousands of cases protecting federal workers from arbitrary and unlawful treatment. In fact, we eliminated a backlog of nearly 4,000 appeals that had accumulated during the prior Trump administration because of a five-year lack of quorum. These are not abstract policy debates — these are cases about whether career professionals can be fired for refusing to break the law, for reporting waste and fraud, or simply for holding the “wrong” political views.

The MSPB’s quasi-judicial function is precisely what Humphrey’s Executor was designed to protect. It is also what Congress had in mind in 1978 when it created the MSPB to shield the civil service workforce from the kind of political weaponization of government seen under the Nixon administration. The Supreme Court in 1935 recognized that certain agencies must be insulated from political pressure to function properly — agencies that adjudicate disputes, that apply law to fact, that require expertise and impartiality rather than ideological alignment with whoever currently occupies the White House. Why would today’s Supreme Court throw out that noble and constitutionally grounded mandate?

A specious distinction

The Supreme Court’s apparent willingness to treat the Federal Reserve as “special” while abandoning agencies like the MSPB rests on a distinction without a meaningful constitutional difference. Yes, the Federal Reserve sets monetary policy with profound economic consequences. But the MSPB’s work is no less vital to the functioning of our democracy.

Consider what happens when the MSPB loses its independence. Federal employees adjudicating veterans’ benefits claims, processing Social Security applications, inspecting food safety or enforcing environmental protections suddenly serve at the pleasure of the president. Career experts can be replaced by political loyalists. Decisions that should be based on law and evidence become subject to political calculation. The entire civil service — the apparatus that delivers services to millions of Americans — becomes a partisan weapon to be wielded by whichever party controls the White House.

This is not hypothetical. We have seen this movie before. The spoils system of the 19th century produced rampant corruption, incompetence and the wholesale replacement of experienced government workers after each election. The Pendleton Act of 1883 and subsequent civil service reforms were not partisan projects — they were recognition that effective governance requires a professional, merit-based workforce insulated from political pressure.

The real stakes

The Supreme Court’s willingness to carve out special protection for the Federal Reserve while abandoning the MSPB reveals a troubling hierarchy of values. Financial markets deserve stability and independence, the court suggests; but must the American public tolerate government services and protections delivered on a partisan basis?

Protecting the civil service is not some narrow special interest. It affects every American who depends on government services. It determines whether Occupational Safety and Health Administration (OSHA) inspectors can enforce workplace safety rules without fear of being fired for citing politically connected companies. Whether Environmental Protection Agency scientists can publish findings inconvenient to the administration. Whether veterans’ benefits claims are decided on merit rather than political favor. Whether independent federal oversight organizations can investigate law enforcement shootings in Minnesota without political interference.

Justice Brett Kavanaugh, during the Cook arguments, warned that allowing presidents to easily fire Federal Reserve governors based on “trivial or inconsequential or old allegations difficult to disprove” would “weaken if not shatter” the Fed’s independence. He’s right. But that logic applies with equal force to the MSPB. If presidents can fire MSPB members at will, they can install loyalists who will rubber-stamp politically motivated personnel actions, creating a chilling effect throughout the civil service.

What’s next

The Supreme Court has an opportunity to apply its principles consistently. If the Federal Reserve deserves independence to insulate monetary policy from short-term political pressure, then the MSPB deserves independence to insulate personnel decisions from political retaliation. If “for cause” removal protections serve an important constitutional function for financial regulators, they serve an equally important function for the guardians of civil service protections.

The court should reject the false distinction between agencies that protect Wall Street and agencies that protect workers. Both serve vital public functions. Both require independence to function properly. Both should be subject to the same constitutional analysis.

More fundamentally, the court must recognize that its removal cases are not merely abstract exercises in constitutional theory. They determine whether we will have a professional civil service or return to a patronage system. Whether government will be staffed by experts or political operatives. Whether the rule of law or the whim of the president will govern federal employment decisions.

A strong civil service is just as important to American democracy as an independent Federal Reserve. Both protect against the concentration of power. Both ensure that critical governmental functions are performed with expertise and integrity rather than political calculation. The Supreme Court’s jurisprudence should reflect that basic truth, not create an arbitrary hierarchy that privileges financial interests over the rights of workers and the integrity of government.

The court will issue its decisions over the next several months and when it does, it should remember that protecting democratic institutions is not a selective enterprise. The rule of law requires principles, not preferences. Because in the end, a government run on political loyalty instead of merit is far more dangerous than a fluctuating interest rate.

Raymond Limon retired after more than 30 years of federal service in 2025. He served in leadership roles at the Office of Personnel Management and the State Department and was the vice chairman of the Merit Systems Protection Board. He is now founder of Merit Services Advocates.


Navigating insurance, maintaining careers and making smart money moves as a Gen Z military family

For Gen Z military families, navigating life in their early-to-mid 20s means wading their way through unique challenges that can get overwhelming pretty quickly. Between frequent relocations, long deployments, unpredictable life schedules and limited early-career earnings, financial planning is more than a good idea — it’s essential for long-term stability.

According to the Congressional Research Service, 40% of active-duty military personnel are age 25 or younger, right within the Gen Z age group. Yet these same service members face the brunt of frequent moves, deployments and today’s rising cost of living.

This guide is designed specifically for Gen Z service members and their spouses, helping them understand their financial situations and insurance options, avoid common financial pitfalls and build stable careers, all while dealing with the real-world pressures of military life.

Financial pressures Gen Z military families face

While budgeting, insurance and retirement planning are critical, it’s also important to get a real sense of the actual financial stressors younger military families are grappling with:

  • Living paycheck to paycheck. Even with basic allowance for housing and basic allowance for subsistence, many junior enlisted families still find it hard to keep up with rising living costs. This becomes even more of a precarious situation when you add in dependents.
  • Delayed reimbursements during permanent change of station (PCS) moves, creating short-term cash crunches.
  • Limited emergency savings. The Military Family Advisory Network’s (MFAN) 2023 survey found 22.2% of military families had less than $500 in savings.
  • Predatory lending, with high-interest auto or payday lenders near bases disproportionately targeting young servicemembers.
  • Military spouse underemployment, leaving household income vulnerable when frequent moves disrupt career continuity.

MFAN also found that nearly 80% of respondents spend more on housing than they can comfortably afford, and 57% experienced a financial emergency in the past two years. These aren’t abstract concerns that most young servicemembers and their families can just ignore, hoping that they’ll never be impacted; these are everyday realities for Gen Z military families.

Insurance best practices

Adult life is just getting started in your 20s, and navigating insurance options can feel overwhelming. But taking the time to learn your choices will set your family up for a secure financial future.

  • Life insurance: Most servicemembers are automatically enrolled in Servicemembers’ Group Life Insurance (SGLI), with Family Servicemembers’ Group Life Insurance (FSGLI) extending coverage to spouses and children. Review coverage annually. Also, compare options across SGLI, FSGLI and trusted military nonprofits to find what fits your family best.
  • Disability insurance: Often overlooked, this protects your family if an injury prevents you from working, even off-duty. Supplemental private coverage can be wise if your lifestyle expenses exceed your military pay.
  • Renters insurance: Essential for families who move often; it protects your belongings through relocations.
  • Healthcare: TRICARE provides strong coverage, but learn the details on copays and referrals, especially when stationed overseas.

Common financial missteps and how to fix them

Mistake #1: Overbudgeting and lack of budgeting

BAH and BAS are designed to offset housing and food costs, not fund lifestyle inflation. Stick to a budget that keeps fixed expenses well below your income. Free tools from Military OneSource can help track spending.

Mistake #2: Not saving for retirement

Retirement may feel far away, but starting early has an outsized impact. Contribute at least 5% to your Thrift Savings Plan (TSP) — the government’s defined contribution retirement plan, similar to a 401(k) — to secure the full Defense Department match. Even small contributions now can grow into hundreds of thousands later.
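To see why starting early matters, here is a rough compound-growth sketch. The figures are hypothetical (a $2,500 monthly base pay, a simplified 5% match and a flat 7% average annual return are assumptions for illustration, not projections or advice), but they show how modest contributions can reach six figures over a career.

```python
# Rough illustration of long-term TSP growth. All numbers are hypothetical:
# monthly compounding, a flat 7% average annual return, and a simplified match
# equal to the servicemember's 5% contribution. Not financial advice.

def future_value(monthly_contribution: float, annual_return: float, years: int) -> float:
    """Future value of a fixed monthly contribution with monthly compounding."""
    r = annual_return / 12            # monthly rate
    n = years * 12                    # number of monthly contributions
    return monthly_contribution * (((1 + r) ** n - 1) / r)

base_pay = 2500.0                     # hypothetical monthly base pay
employee = base_pay * 0.05            # 5% employee contribution
match = base_pay * 0.05               # simplified: match assumed equal to 5%
total_monthly = employee + match      # $250/month in this example

for years in (10, 20, 30):
    print(f"{years} years: ${future_value(total_monthly, 0.07, years):,.0f}")
```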

Mistake #3: Misusing credit or loans

Predatory lenders near bases often target young servicemembers; steer clear of them. Instead, consider a secured credit card or an on-base credit union to build credit responsibly. Always be sure to pay your balance in full.

Mistake #4: Skipping an emergency fund

PCS moves, car repairs or medical costs can’t always be predicted. Start small: Even $10 to $20 per week automatically transferred to savings helps to build a safety net. According to MFAN’s 2023 survey, enlisted families with children who have undergone recent PCS moves are most likely to face financial hardship, making an emergency cushion critical.

In addition to avoiding pitfalls, here are realistic strategies to strengthen your finances:

  • Tap military relief organizations like Army Emergency Relief (AER) or Navy-Marine Corps Relief Society (NMCRS) for interest-free loans or grants during emergencies.
  • Plan for post-military life: Keep in mind that SGLI and other benefits change once you leave active duty. Compare nonprofit alternatives early to avoid gaps.
  • Leverage nonprofits you can trust: Some offer competitive life insurance, savings products or financial counseling designed for servicemembers’ long-term interests.
  • Budget with inflation in mind: Rising costs are hitting Gen Z hard. Nearly 48% say they don’t feel financially secure, and over 40% say they’re struggling to make ends meet. Prioritize life’s essentials and be realistic about what you can afford outside of them.

Maintaining a career as a military spouse

Frequent relocations are undoubtedly disruptive, but they don’t have to end career growth. Military spouses may want to focus on careers that can easily move around with them, like healthcare, education, IT or freelancing.

Take advantage of programs like MyCAA, which offers $4,000 in tuition assistance for career training; Military OneSource, which offers resume assistance, free career coaching and financial counseling; and Hiring Our Heroes, which offers networking opportunities and job placement assistance for military spouses. These programs can help reduce underemployment and strengthen household stability, especially during turbulent stretches such as the months during and after a PCS move.

Putting it all together

Starting adulthood, a military career and a family all at once is an incredibly challenging undertaking. The financial pressures are real, but with the right knowledge and proactive steps, Gen Z military families can turn instability and uncertainty into long-term security.

By understanding insurance options, making smart money moves, tapping into military-specific resources and planning ahead for life after service, families can not only weather the unpredictability of military life, but also build strong financial foundations for the future.

Alejandra Cortes-Camargo is a brand marketing coordinator at Armed Forces Mutual.


Governing the future: A strategic framework for federal HR IT modernization

The federal government is preparing to undertake one of the most ambitious IT transformations in decades: Modernizing and unifying human resources information technology across agencies. The technology itself is not the greatest challenge. Instead, success will hinge on the government’s ability to establish an effective, authoritative and disciplined governance structure capable of making informed, timely and sometimes difficult decisions.

The central tension is clear: Agencies legitimately need flexibility to execute mission-specific processes, yet the government must reduce fragmentation, redundancy and cost by standardizing and adopting commercial best practices. Historically, each agency has evolved idiosyncratic HR processes — even for identical functions — resulting in one of the most complex HR ecosystems in the world.

We need a governance framework that can break this cycle: a structured requirements-evaluation process, a systematic approach to modernizing outdated statutory constraints, and a rigorous mechanism to prevent “corner cases” from derailing modernization. The framework rests on a three-tiered governance structure that enables accountability, enforces standards, manages risk and accelerates decision making.

The governance imperative in HR IT modernization

Modernizing HR IT across the federal government requires rethinking more than just systems — it requires rethinking decision making. Technology will only succeed if governance promotes standardization, manages statutory and regulatory constraints intelligently, and prevents scope creep driven by individual agency preferences.

Absent strong governance, modernization will devolve into a high-cost, multi-point, agency-to-vendor negotiation where each agency advocates for its “unique” variations. Commercial vendors, who find arguing with or disappointing their customers to be fruitless and counterproductive, will ultimately optimize toward additional scope, higher complexity and extended timelines — that is, unless the government owns the decision framework.

Why governance is the central challenge

The root causes of this central challenge are structural. Agencies with different missions evolved different HR processes — even for identical tasks such as onboarding, payroll events or personnel actions. Many “requirements” cited today are actually legacy practices, outdated rules or agency preferences. And statutes and regulations are often more flexible than assumed, but agencies default to the most restrictive reading in order to avoid any risk of perceived noncompliance or litigation.

Without centralized authority, modernization will replicate fragmentation in a new system rather than reduce it. Governance must therefore act as the strategic filter that determines what is truly required, what can be standardized and what needs legislative or policy reform.

A two-dimensional requirements evaluation framework

However rigorous the requirements defined at the outset of the program, implementers will encounter seemingly unique or unaccounted-for “requirements” that appear critical to agencies once they begin seriously planning for implementation. Any federal HR modernization effort must implement a consistent, transparent and rigorous method for evaluating these new or additional requirements. The framework should classify every proposed “need” across two dimensions:

  • Applicability (breadth): Is this need specific to a single agency, a cluster of agencies, or the whole of government?
  • Codification (rigidity): Is the need explicitly required by law/regulation, or is it merely a policy preference or tradition?

This line of thinking leads to a decision matrix of sorts. For instance, identified needs that are found to be universal and well-codified are likely legitimate requirements and solid candidates for productization on the part of the HR IT vendor. For requirements that apply to a group of agencies or a single agency, or that are really based on practice or tradition, there may be a range of outcomes worth considering.
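As a concrete illustration, here is a minimal sketch of how a governance body might encode that triage logic. The category names and dispositions are hypothetical, not an official taxonomy.

```python
from dataclasses import dataclass

# Hypothetical triage logic for the two-dimensional framework described above.
# Applicability captures breadth; codification captures how firmly the need is required.

@dataclass
class Requirement:
    name: str
    applicability: str   # "single_agency", "agency_cluster" or "governmentwide"
    codification: str    # "law_or_regulation" or "policy_preference"

def triage(req: Requirement) -> str:
    """Suggest a disposition for a proposed requirement."""
    if req.codification == "law_or_regulation":
        if req.applicability == "governmentwide":
            return "legitimate requirement: candidate for productization"
        return "evaluate a business-case exception or limited configuration"
    # Practices and traditions default to the commercial best practice.
    return "adopt commercial best practice; do not customize"

print(triage(Requirement("governmentwide personnel action rule", "governmentwide", "law_or_regulation")))
print(triage(Requirement("legacy agency approval chain", "single_agency", "policy_preference")))
```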

Prior to an engineering discussion, the applicable governance body must ask of any new requirement: Can this objective be achieved by conforming to a recognized commercial best practice? If the answer is yes, the governance process should strongly favor moving in that direction.

This disciplined approach is crucial to keeping modernization aligned with cost savings, simplification and future scalability.

Breaking the statutory chains: A modern exception and reform model

A common pitfall in federal IT is the tendency to view outdated laws and regulations as immutable engineering constraints. There are in fact many government “requirements” — often at a very granular and prescriptive level — embedded in written laws and regulations that are either out of date or simply do not make sense when viewed in the larger context of how HR gets done. The tendency is to look at these cases and say, “This is in the rule books, so we must build the software this way.”

But this is the wrong answer, for several reasons. Reform typically lags years behind technology, and changing laws or regulations is an arduous, lengthy process. Yet the government cannot afford to encode obsolete statutes into modern software: Treating every rule as a software requirement guarantees technical debt before launch.

The proposed mechanism: The business case exception

The Office of Management and Budget and the Office of Personnel Management have demonstrated the ability to manage simple, business-case-driven exception processes. This capability should be operationalized as a core component of HR IT modernization governance:

  • Immediate flexibility: OMB and OPM should grant agencies waivers to bypass outdated procedural requirements if adopting the standard best practice reduces administrative burden and cost.
  • Batch legislative updates: Rather than waiting for laws to change before modernizing, OPM and OMB can “batch up” these approved exceptions. On a periodic basis, they can then submit these proven efficiencies through the standard processes for modifying laws and regulations so that the rules match the new, modernized reality.

This approach flips the traditional model. Instead of software lagging behind policy, the modernization effort drives policy evolution.

Avoiding the “corner case” trap: ROI-driven decision-making

In large-scale HR modernization, “corner cases” can become the silent destroyer of budgets and timelines. Every agency can cite dozens of rare events — special pay authorities, unusual personnel actions or unique workforce segments — that occur only infrequently.

The risk is that building system logic for rare events is extraordinarily expensive. These edge cases disproportionately consume design and engineering time. And any customization or productization can increase testing complexity and long-term maintenance cost.

Governance should enforce a strict return-on-investment rule: If a unique scenario occurs infrequently and costs more to automate than to handle manually, it should not be engineered into the system.

For instance, if a unique process occurs only 50 times a year across a 2-million-person workforce, it is cheaper to handle it manually outside the system than to spend millions customizing the software. If the government does not manage this evaluation itself, it will devolve into a “ping-pong” negotiation with vendors, leading to scope creep and vulnerability. The government must hold the reins, deciding what gets built based on value, not just request.
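A back-of-the-envelope version of that ROI test, with all figures hypothetical, might look like this:

```python
# Hypothetical break-even test for a "corner case": build it into the system
# only if automation costs less than handling it manually over the horizon.

def should_automate(build_cost: float, annual_maintenance: float,
                    occurrences_per_year: int, manual_cost_per_case: float,
                    horizon_years: int = 5) -> bool:
    automated_total = build_cost + annual_maintenance * horizon_years
    manual_total = occurrences_per_year * manual_cost_per_case * horizon_years
    return automated_total < manual_total

# 50 occurrences a year at $400 of staff time each vs. a $2M customization:
print(should_automate(build_cost=2_000_000, annual_maintenance=100_000,
                      occurrences_per_year=50, manual_cost_per_case=400))   # False: handle manually
```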

Recommended governance structure

To operationalize the ideas above, the government should implement a three-tiered governance structure designed to separate strategy from technical execution.

  1. The executive steering committee (ESC)
  • Composition: Senior leadership from OMB, OPM and select agency chief human capital officers and chief information officers (CHCOs/CIOs).
  • Role: Defines the “North Star.” They hold the authority to approve the “batch exceptions” for policy and regulation. They handle the highest-level escalations where an agency claims a mission-critical need to deviate from the standard.

The ESC establishes the foundation for policy, ensures accountability, and provides air cover for standardization decisions that may challenge entrenched agency preferences.

  2. The functional control board (FCB)
  • Composition: Functional experts (HR practitioners) and business analysts.
  • Role: The “gatekeepers.” They utilize the two-dimensional framework to triage requirements. Their primary mandate is to protect the standard commercial best practice. They determine if a request is a true “need” or just a preference.

The FCB prevents the “paving cow paths” phenomenon by rigorously protecting the standard process baseline.

  3. The architecture review board (ARB)
  • Composition: Technical architects and security experts.
  • Role: Ensures that even approved variations do not break the data model or introduce technical debt. They enforce the return on investment (ROI) rule on corner cases — if the technical cost of a request exceeds its business value, they reject it.

The ARB enforces discipline on engineering choices and protects the system from fragmentation.

Federal HR IT modernization presents a rare opportunity to reshape not just systems, but the business of human capital management across government. The technology exists. The challenge — and the opportunity — lies in governance.

The path to modernization will not be defined by the software implemented, but by the discipline, authority, and insight of the governance structure that guides it.

Steve Krauss is a principal with SLK Executive Advisory. He spent the last decade working for GSA and OPM, including as the Senior Executive Service (SES) director of the HR Quality Service Management Office (QSMO).


Securing AI in federal and defense missions: A multi-level approach

As the federal government accelerates artificial intelligence adoption under the national AI Action Plan, agencies are racing to bring AI into mission systems. The Defense Department, in particular, sees the potential of AI to help analysts manage overwhelming data volumes and maintain an advantage over adversaries.

Yet most AI projects never make it out of the lab — not because models are inadequate, but because the data foundations, traceability and governance around them are too weak. In mission environments, especially on-premises and air-gapped cloud regions, trustworthy AI is impossible without secure, transparent and well-governed data.

To deploy AI that reaches production and operates within classification, compliance and policy constraints, federal leaders must view AI security in layers.

Levels of security and governance

AI covers a wide variety of fields such as machine learning, robotics and computer vision. For this discussion, let’s focus on one of AI’s fastest-growing areas: natural language processing and generative AI used as decision-support tools.

Under the hood, these systems, based on large language models (LLMs), are complex “black boxes” trained on vast amounts of public data. On their own, they have no understanding of a specific mission, agency or theater of operations. To make them useful in government, teams typically combine a base model with proprietary mission data, often using retrieval-augmented generation (RAG), where relevant documents are retrieved and used as context for each answer.

That’s where the security and governance challenges begin.

Layer 1: Infrastructure — a familiar foundation

The good news is that the infrastructure layer for AI looks a lot like any other high-value system. Whether an agency is deploying a database, a web app or an AI service, the same authorization to operate (ATO) processes, network isolation, security controls and continuous monitoring apply.

Layer 2: The challenge of securing AI-augmented data

The data layer is where AI security diverges most sharply from commercial use. In RAG systems, mission documents are retrieved as context for model queries. If retrieval doesn’t enforce classification and access controls, the system can generate results that cause security incidents.

Imagine a single AI system indexing multiple levels of classified documents. Deep in the retrieval layer, the system pulls a highly relevant document to augment the query, but it’s beyond the analyst’s classification access levels. The analyst never sees the original document; only a neat, summarized answer that is also a data spill.

The next frontier for federal AI depends on granular, attribute-based access control.

Every document — and every vectorized chunk — must be tagged with classification, caveats, source system, compartments and existing access control lists. This is often addressed by building separate “bins” of classified data, but that approach leads to duplicated data, lost context and operational complexity. A safer and more scalable solution lies within a single semantic index with strong, attribute-based filtering.
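A minimal sketch of that filtering step follows. The attribute names and clearance model are simplified assumptions for illustration; a real system would use the authoritative markings and access control lists from the source systems.

```python
from dataclasses import dataclass, field

# Simplified, hypothetical attribute model for filtering retrieved chunks
# before they ever reach the language model.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

@dataclass
class Chunk:
    text: str
    classification: str
    compartments: frozenset = field(default_factory=frozenset)

@dataclass
class User:
    clearance: str
    compartments: frozenset = field(default_factory=frozenset)

def authorized(user: User, chunk: Chunk) -> bool:
    """Attribute-based check applied to every chunk the retriever returns."""
    if LEVELS[chunk.classification] > LEVELS[user.clearance]:
        return False
    return chunk.compartments <= user.compartments   # user must hold every compartment

def filter_context(user: User, retrieved: list[Chunk]) -> list[Chunk]:
    return [c for c in retrieved if authorized(user, c)]

analyst = User(clearance="SECRET", compartments=frozenset({"ALPHA"}))
hits = [Chunk("ok", "SECRET", frozenset({"ALPHA"})),
        Chunk("too high", "TOP SECRET"),
        Chunk("wrong compartment", "SECRET", frozenset({"BRAVO"}))]
print([c.text for c in filter_context(analyst, hits)])   # ['ok']
```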

Layer 3: Models and the AI supply chain

Agencies may use managed models, fine-tune their own, or import third-party or open-source models into air-gapped environments. In all cases, models should be treated as part of a software supply chain:

  • Keep models inside the enclave so prompts and outputs never cross uncontrolled boundaries.
  • Protect training pipelines from data poisoning, which can skew outputs or introduce hidden security risks.
  • Rigorously scan and test third-party models before use.

Without clear policy around how models are acquired, hosted, updated and retired, it’s easy for “one-off experiments” to become long-term risks.

The challenge at this level lies in the “parity gap” between commercial and government cloud regions. Commercial environments receive the latest AI services and their security enhancements much earlier. Until those capabilities are authorized and available in air-gapped regions, agencies may be forced to rely on older tools or build ad hoc workarounds.

Governance, logging and responsible AI

AI governance has to extend beyond the technical team. Policy, legal, compliance and mission leadership all have a stake in how AI is deployed.

Three themes matter most:

  1. Traceability and transparency. Analysts must be able to see which sources informed a result and verify the underlying documents.
  2. Deep logging and auditing. Each query should record who asked what, which model ran, what data was retrieved, and which filters were applied (a minimal record is sketched after this list).
  3. Alignment with emerging frameworks. DoD’s responsible AI principles and the National Institute of Standards and Technology’s AI risk guidance offer structure, but only if policy owners understand AI well enough to apply them — making education as critical as technology.
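A minimal sketch of such an audit record appears below; the field names are illustrative assumptions, not a standard schema.

```python
import json
from datetime import datetime, timezone

# Illustrative audit record written for every model query; field names are
# hypothetical, not drawn from any particular agency schema.

def audit_record(user_id: str, query: str, model_id: str,
                 retrieved_doc_ids: list[str], filters_applied: dict) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "query": query,
        "model_id": model_id,
        "retrieved_doc_ids": retrieved_doc_ids,   # supports traceability back to sources
        "filters_applied": filters_applied,       # e.g. classification ceiling, compartments
    }
    return json.dumps(record)

print(audit_record("analyst-17", "status of site X", "llm-enclave-v2",
                   ["doc-0042", "doc-0077"],
                   {"max_classification": "SECRET", "compartments": ["ALPHA"]}))
```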

Why so many pilots stall — and how to break through

Industry estimates suggest that up to 95% of AI projects never make it to full production. In federal environments, the stakes are higher, and the barriers are steeper. Common reasons include vague use cases, poor data curation, lack of evaluation to detect output drift, and assumptions that AI can simply be “dropped in.”

Data quality in air-gapped projects is also a factor. If your query is about “missiles” but your system is mostly indexed with documents about “tanks,” analysts can expect poor or fabricated results, the so-called “AI hallucinations.” They won’t trust the tool, and the project will quietly die. AI cannot invent high-quality mission data where none exists.

There are no “quick wins” for AI in classified missions, but there are smart starting points:

  • Start with a focused decision-support problem.
  • Inventory and tag mission data.
  • Bring security and policy teams in early.
  • Establish an evaluation loop to test outputs.
  • Design for traceability and explainability from day one.

Looking ahead

In the next three to five years, we can expect AI platforms, both commercial and government, to ship with stronger built-in security, richer monitoring, and more robust audit features. Agent-based AI pipelines with autonomous security controls that can pre-filter queries and post-process answers (for example, to enforce sentiment policies or redact PII) will become more common. Yet even as these security requirements and improvements accelerate, national security environments face a unique challenge: The consequences of failure are too high to rely on blind automation.

Agencies that treat AI as a secure system — grounded in strong data governance, layered protections and educated leadership — will be the ones that move beyond pilots to real mission capability.

Ron Wilcom is the director of innovation for Clarity Business Solutions.


China hacked our mobile carriers. So why is the Pentagon still buying from them?

A freshly belligerent China is flexing its muscles in ways not seen since the USSR during the Cold War, forging a new illiberal alliance with Russia and North Korea. But in the information age, the latent battlefield is farther-reaching and more dangerous.

As we now know, over “a years long, coordinated assault,” China has stolen personal data from nearly every single American. This data lets them read our text messages, listen to our phone calls, and track our movements anywhere in the United States and around the world — allowing China to build a nearly perfect intelligence picture of the American population, including our armed forces and elected officials.

This state of affairs leaves corporate leaders, democracy advocates and other private citizens vulnerable to blackmail, cyber attacks and other harassment. Even our national leaders are not immune.

Last year, China targeted the phones of President Donald Trump and Vice President JD Vance in the course of the presidential campaign, reminding us that vulnerabilities in the network can affect even those at the highest levels of government. The dangers were drawn into stark relief earlier this year when Secretary of Defense Pete Hegseth used his personal phone to pass sensitive war plans to colleagues in a group chat that also included a high-profile journalist. That incident underscored what we’ve seen in Russia’s invasion of Ukraine, Ukraine’s Operation Spiderweb drone attacks on Russia, and on front lines the world over: Modern wars are run on commercial cellular networks, despite their vulnerabilities.

Many Americans would be surprised to learn that there is no impenetrable, classified military cellular network guiding the top-flight soldiers and weapons we trust to keep us safe. The cellular networks that Lindsay Lohan and Billy Bob Thornton sell us during NFL games are the same networks our troops and national security professionals use to do their jobs. These carriers have a long, shockingly consistent history of losing our personal data via breaches and hacks — as well as selling it outright, including to foreign governments. So it’s no wonder that, when the Pentagon asked carriers to share their security audits, every single one of them refused.

This isn’t a new revelation. Twenty years ago, I served as a Special Forces communications sergeant in Iraq. There, U.S. soldiers regularly used commercial BlackBerries — not because the network was secure, but because they knew their calls would connect. It’s surreal that two decades later, our troops are still relying on commercial phones, even though the security posture has not meaningfully improved.

A big part of the reason why this challenge persists stems from an all-too-familiar issue in our government: a wall of red tape that keeps innovative answers from reaching public-sector problems.

In this case, a solution to the Pentagon’s cell network challenge already exists. The Army requested it, and our soldiers need it. But when they tried to acquire this technology, they were immediately thwarted. Not by China or Russia — but by the United States government’s own bureaucracy.

It turns out that the Defense Department is required to purchase cellular service through a blanket, ten-year contract called Spiral 4. The contract was last awarded in early 2024 to AT&T, Verizon, T-Mobile and a few others, about a year before a solution existed. Yet despite this, rigid procurement rules dictate that the Pentagon will have to wait … presumably another eight years until the contract re-opens for competition.

The FCC recently eliminated regulations calling on telecoms to meet minimum cybersecurity standards, noting that the focus should instead be collaboration with the private sector. I agree. But to harness the full ingenuity of our private sector, our government should not be locking out startups. From Palantir to Starlink to Oura, startups have proven that they can deliver critical national security technologies, out-innovating entrenched incumbents and offering people services they need.

The Pentagon has made real, top-level policy changes to encourage innovation. But it must do more to ensure that our soldiers are equipped with the very best of what they need and deserve, and find and root out these pockets of stalled bureaucratic inertia. Because America’s enemies are real enough; our own red tape should not be one of them.

John Doyle is the founder and CEO of Cape. He previously worked at Palantir and as a Staff Sergeant in the Special Forces.


Why Uncle Sam favors AI-forward government contractors — and how contractors can use that to their advantage

Read between the lines of recent federal policies and a clear message to government contractors begins to emerge: The U.S. government isn’t just prioritizing deployment of artificial intelligence in 2026. It wants the contractors to whom it entrusts its project work to do likewise.

That message, gleaned from memoranda issued by the White House Office of Management and Budget, announcements out of the Defense Department’s Chief Digital and Artificial Intelligence Office, statements from the General Services Administration and other recent actions, suggests that when it comes to evaluating government contractors for potential contract awards, the U.S. government in many instances will favor firms that are more mature in their use and governance of AI.

That’s because, in the big picture, firms that are more AI-mature — that employ it with strong governance and oversight — will tend to use and share data to make better decisions and communicate more effectively, so their projects and business run more efficiently and cost-effectively. That in turn translates into lower risk and better value for the procuring agency. Agencies apparently are recognizing the link between AI and contractor value. Based on recent contracting trends along with my own conversations with contracting executives, firms that can demonstrate they use AI-driven tools and processes in key areas like project management, resource utilization, cost modeling and compliance are winning best-value assessments even when they aren’t the cheapest.

To simply dabble in AI is no longer enough. Federal agencies and their contracting officers are putting increased weight on the maturity of a contractor’s AI program, and the added value that contractor can deliver back to the government in specific projects. How, then, can contractors generate extra value using AI in order to be a more attractive partner to federal contracting decision-makers?

Laying the groundwork

Let’s dig deeper into the “why” behind AI. For contractors, it’s not just about winning more government business. Big picture: It’s about running projects and the overall business more efficiently and profitably.

What’s more, being an AI-forward firm isn’t about automating swaths of a workforce out of a job. Rather, AI is an enabler and multiplier of human innovation. It frees people to focus on higher-value work by performing tasks on their behalf. It harnesses the power of data to surface risks, opportunities, trends and potential issues before they escalate into larger problems. Its predictive power promotes anticipatory actions rather than reactive management. The insights it yields, when combined with the collective human experience, institutional knowledge and business acumen inside a firm, lead to better-informed human decision making.

For AI to provide benefits and value both internally and to customers, it requires a solid data foundation underneath it. Clean, connected and governed data is the lifeblood that AI models must have to deliver reliable outputs. If the data used to train those models is incomplete, siloed, flawed or otherwise suspect, the output from AI models will tend to be suspect, too. So in building a solid foundation for AI, a firm would be wise to ensure it has an integrated digital environment in place (with key business systems like enterprise resource planning [ERP], customer relationship management [CRM] and project portfolio management [PPM] connected) to enable data to flow unimpeded. Nowadays, federal contracting officers and primes are evaluating contractors based on the maturity of their AI programs, as well as on the maturity of their data-management programs in terms of hygiene, security and governance.

They’re also looking closely at the guardrails contractors have around their AI program: appropriate oversight, human-in-the-loop practices and governance structures. Transparency, auditability and explainability are paramount, particularly in light of requirements such as the Federal Acquisition Regulation, the Defense Federal Acquisition Regulation Supplement and the Cybersecurity Maturity Model Certification. It’s worth considering developing (and keeping up-to-date) an AI capabilities and governance statement that details how and where your firm employs AI, and the structures it uses to oversee its AI capabilities. A firm then can include that statement in the proposals it submits.

AI use cases that create value

Having touched on the why and how behind AI, let’s explore some of the areas where contractors could be employing intelligent automation, predictive engines, autonomous agents, generative AI co-pilots and other capabilities to run their businesses and projects more efficiently. With these approaches in mind, contractors can deliver more value to their federal government customers.

  1. Project and program management: AI has a range of viable use cases that create value inside the project management office. On the process management front, for example, it can automate workflows and processes. Predictive scheduling, cost variance forecasting, automated estimate at completion (EAC) updates, and project triage alerts are also areas where AI is proving its value (a simple EAC calculation is sketched after this list). For example, AI capabilities within an ERP system can alert decision-makers to cost trends and potential overruns, and offer suggestions for how to address them. They also can provide project managers with actionable, up-to-the-minute information on project status, delays, milestones, cost trends, potential budget variances and resource utilization.

Speaking of resources, predictive tools (skills graphs, staffing models, et cetera) can help contractors forecast talent needs and justify salary structures. They also support resource allocation and surge requirements. Ultimately, these tools help optimize the composition of project teams by analyzing project needs across the firm, changing circumstances and peoples’ skills, certifications and performance. It all adds up to better project outcomes and better value back to the government agency customer.

  2. Finance and accounting: From indirect rate modeling to anomaly detection in timesheets and cost allowability, AI tools can minimize the financial and accounting risk inside a contract. They can alert teams to issues related to missing, inconsistent or inaccurate data, helping firms avoid compliance issues. Using AI, contractors also can expedite invoicing on the accounts receivable side as well as processes on the accounts payable side to provide clarity to both the customer and internal decision-makers.
  3. Compliance: Contractors carry a heavy reporting and compliance burden and live under the constant shadow of an audit. AI is proving valuable as a compliance support tool, with its ability to interpret regulatory language and identify compliance risks like mismatched data or unallowable costs. AI also can create, then confirm compliance with, policies and procedures by analyzing and applying rules, monitoring time and expense entries, gathering and formatting data for specific contractual reporting requirements, and detecting and alerting project managers to data disparities.
  4. Business development and capture: AI can help firms uncover and win new business by identifying relevant and winnable opportunities, and through proposal development, harnessing business data tailored to solicitation requirements. Using AI-driven predictive analytics, companies can develop a scoring system and decision matrix to apply to their go or no-go decisions. Firms can also use AI to handle much of the heavy lifting with proposal creation, significantly reducing time-to-draft and proposal-generation costs, while boosting a firm’s proposal capacity substantially. Intelligent modeling capabilities can recommend optimal pricing and rate strategies for a proposal.
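As referenced in the first item above, here is a minimal earned-value sketch using the standard formulas CPI = EV / AC and EAC = BAC / CPI; the project figures are hypothetical.

```python
# Standard earned value management formulas applied to hypothetical project data.
# BAC: budget at completion, EV: earned value, AC: actual cost.

def eac_and_variance(bac: float, earned_value: float, actual_cost: float):
    cpi = earned_value / actual_cost          # cost performance index
    eac = bac / cpi                           # estimate at completion
    variance = bac - eac                      # negative means a projected overrun
    return cpi, eac, variance

cpi, eac, variance = eac_and_variance(bac=5_000_000, earned_value=1_800_000,
                                      actual_cost=2_000_000)
print(f"CPI={cpi:.2f}  EAC=${eac:,.0f}  projected variance=${variance:,.0f}")
if variance < 0:
    print("Alert: trending over budget; flag for the project triage queue.")
```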

As much as the U.S. government is investing to become an AI-forward operation, logic suggests that it would prefer that its contractors be similarly AI-savvy in their use — and governance — of intelligent tools. In the world of government contracting, we’re approaching a point where winning business from the federal government could depend on how well a firm can leverage the AI tools at hand to demonstrate and deliver value.

 

Steve Karp is chief innovation officer for Unanet.


Securing the spotlight: Inside the investigations that protect America’s largest events

At large-scale events like World Cup matches, a Super Bowl or the LA 2028 Olympics, viewers around the world will turn their attention to the athletes. Behind these events, often categorized as National Special Security Events (NSSEs), a tightly choreographed collaboration of federal, state and local agencies is required to manage logistics, intelligence and operational response. The spotlight may be on the athletes and fans, but the unsung work happens behind the scenes, where security teams and support personnel operate before, during and after each event to ensure the safety of participants and spectators. While much of that focus is outward (securing perimeters, screening crowds and scanning for external threats), some of the most significant risks can come from inside the event itself.

In an era of multi-dimensional threats, the line between external adversaries and internal vulnerabilities has grown increasingly blurred. Contractors, vendors, temporary staff and employees are vital to the success of major events; however, they also introduce complex risk considerations. Managing those risks requires more than background checks and credentialing; it calls for investigative awareness rooted in federal risk management frameworks and duty-of-care principles. Agencies and partners must align with established standards such as the National Insider Threat Task Force (NITTF) guidelines and the Department of Homeland Security’s National Infrastructure Protection Plan, emphasizing collaboration, transparency and early intervention. By fostering information-sharing and cross-functional coordination, investigative teams can recognize behavioral and contextual warning signs in ways that strengthen both security and trust.

The inside threats that don’t make headlines

When we talk about insider threats in the context of NSSEs, many think of espionage or deliberate sabotage. But the reality is often more subtle, and therefore more dangerous.

Consider this real-world example: A former contracted employee of the San Jose Earthquakes’ home stadium admitted to logging into the concession vendor’s administrative system and deleting menus and payment selections. His unauthorized access, triggered from home after his termination, interrupted operations on opening day and resulted in more than $268,000 in losses.

These kinds of incidents highlight a fundamental truth: Insider risk isn’t just about malicious intent; it’s about exposure. And exposure multiplies with scale. When thousands of people have physical or digital access to a high-profile venue, especially when celebrities, politicians and global audiences are involved, the likelihood of insider-related incidents grows exponentially.

Investigations as the backbone of event security

At their core, investigations are about collecting and connecting the dots between people, data and threats. For NSSEs, this investigative function becomes the connective tissue that binds disparate security disciplines together.

Consider the investigation of a potential insider risk within a major international summit.

  • A social media monitoring team flags an insider — in this case, a contractor — expressing frustration about working conditions.
  • The venue security team reports a missing equipment case from the same contractor’s storage area.
  • A public records check reveals the individual was previously charged with theft.

Individually, none of these signals confirms a threat. But when unified under a connected investigative workflow, the risk becomes clearer and more actionable. This is the type of cross-functional insight that defines modern event protection. It’s not about reacting to threats; it’s about uncovering the threads before they unravel.

Large-scale events generate intelligence at an unprecedented scale: everything from credentialing data to behavioral reports, cybersecurity logs and social media feeds. Yet these systems rarely connect to and communicate with one another. The result is fragmented visibility and slow investigative response.

For example:

  • A fusion center monitoring social media identifies a user threatening to “disrupt the opening ceremony.”
  • A local police investigation logs a similar username associated with a harassment complaint.
  • A corporate security team managing sponsor operations notices suspicious activity during credential pickup involving the same surname and location.

If these datasets live in silos, that pattern may never be connected. But within a connected framework, analysts can correlate these intelligence signals in seconds, surfacing a person of concern who may have both motive and proximity to the event.
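A toy sketch of that correlation step is below. The records, identifiers and matching rule are hypothetical; real investigative platforms use far richer entity resolution and link analysis.

```python
from collections import defaultdict

# Toy correlation across siloed data sources; records and keys are hypothetical.
signals = [
    {"source": "fusion_center", "username": "gate_crash22", "note": "threat to disrupt opening ceremony"},
    {"source": "local_police", "username": "gate_crash22", "note": "harassment complaint"},
    {"source": "sponsor_security", "surname": "Doe", "location": "credential pickup", "note": "suspicious activity"},
]

by_username = defaultdict(list)
for s in signals:
    if "username" in s:
        by_username[s["username"]].append(s)

for username, linked in by_username.items():
    if len(linked) > 1:   # same identifier seen by more than one team
        print(f"Person of concern: {username}")
        for s in linked:
            print(f"  {s['source']}: {s['note']}")

# A fuller link-analysis pass would also match on surname, location and other
# attributes, so the sponsor_security record could join the same cluster.
```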

Building a connected investigations framework

Establishing an effective investigations framework for NSSEs and other high-profile events requires three key capabilities:

  1. Pre-event insider vetting and behavioral baselines

Agencies and private partners must move beyond one-time background checks toward continuous, risk-informed vetting that emphasizes awareness and accountability. For example, a Defense Department–affiliated recreation facility on Walt Disney World property uncovered that an accounting technician had exploited her system access over 18 months to issue unauthorized refunds totaling more than $183,000. In a large-scale event environment, similar credential misuse could go unnoticed without behavioral baselines and cross-functional coordination. Establishing clear patterns of access and communication among HR, security and operations helps detect anomalies early and address them before they evolve into costly or reputation-damaging breaches.

  2. Case linkage and pattern recognition

Event-related investigations should never exist in isolation. When analysts apply connected data analysis and link mapping, patterns begin to emerge: recurring individuals, behaviors or affiliations that might otherwise appear unrelated. Each isolated incident may sit on the margins of concern, but when viewed collectively, they can reveal a broader narrative: an insider demonstrating escalating behavior or progressing along the pathway to violence. By aggregating and analyzing these small signals, investigative teams can shift from reacting to incidents to identifying intent, uncovering risks long before they cross into active threats.

  3. Real-time collaboration and feedback loops

Investigative insight loses its power when it’s buried in inboxes or trapped in spreadsheets. The true value of intelligence emerges only when it reaches the right people at the right moment. Breaking down silos between intelligence analysts, investigators and operational teams ensures that findings translate into timely, informed action on the ground. Establishing an event-specific security operations center — one that unites state, local and federal agencies with venue security and event officials under a shared framework — creates a single hub for intelligence sharing and rapid coordination. This collaborative model transforms investigations from static reports into dynamic, real-time decision support, ensuring that every partner has the visibility and context needed to anticipate and neutralize risks before they escalate.

Even as artificial intelligence becomes more integrated into the investigative process, the human element remains indispensable. Technology can accelerate analysis and detection, but it’s human intuition, context and judgment that transform data into decisions — capabilities that AI has yet to replicate or replace.

Securing from the inside out

As the U.S. prepares for a decade of NSSEs, the success of each operation will depend on one foundational principle: Security starts from within. The most sophisticated perimeter protection and threat detection systems cannot compensate for insider risk that goes unexamined.

By operationalizing investigations within a connected framework that unites intelligence data, event security teams and tradecraft, federal agencies can transform insider threats from unknown liabilities into known risks, enabling the implementation of mitigation actions. In doing so, they not only safeguard events, but also set the new standard for how public and private sectors can work together to protect what matters most when the world is watching.

Tim Kirkham is vice president of the investigations practice at Ontic.


8 federal agency data trends for 2026

If 2025 was the year federal agencies began experimenting with AI at scale, then 2026 will be the year they rethink their entire data foundations to support it. What’s coming next is not another incremental upgrade. Instead, it’s a shift toward connected intelligence, where data is governed, discoverable and ready for mission-driven AI from the start.

Federal leaders increasingly recognize that data is no longer just an IT asset. It is the operational backbone for everything from citizen services to national security. And the trends emerging now will define how agencies modernize, secure and activate that data through 2026 and beyond.

Trend 1: Governance moves from manual to machine-assisted

Agencies will accelerate the move toward AI-driven governance. Expect automated metadata generation, AI-powered lineage tracking, and policy enforcement that adjusts dynamically as data moves, changes and scales. Governance will finally become continuous, not episodic, allowing agencies to maintain compliance without slowing innovation.

Trend 2: Data collaboration platforms replace tool sprawl

2026 will mark a turning point as agencies consolidate scattered data tools into unified data collaboration platforms. These platforms integrate cataloging, observability and pipeline management into a single environment, reducing friction between data engineers, analysts and emerging AI teams. This consolidation will be essential for agencies implementing enterprise-wide AI strategies.

Trend 3: Federated architectures become the federal standard

Centralized data architectures will continue to give way to federated models that balance autonomy and interoperability across large agencies. A hybrid data fabric — one that links but doesn’t force consolidation — will become the dominant design pattern. Agencies with diverse missions and legacy environments will increasingly rely on this approach to scale AI responsibly.

Trend 4: Integration becomes AI-first

Application programming interfaces (APIs), semantic layers and data products will increasingly be designed for machine consumption, not just human analysis. Integration will be about preparing data for real-time analytics, large language models (LLMs) and mission systems, not just moving it from point A to point B.

Trend 5: Data storage goes AI-native

Traditional data lakes will evolve into AI-native environments that blend object storage with vector databases, enabling embedding search and retrieval-augmented generation. Federal agencies advancing their AI capabilities will turn to these storage architectures to support multimodal data and generative AI securely.
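
A minimal sketch of the retrieval side of that pattern, assuming NumPy is available and embeddings have already been produced by some model: documents are ranked by cosine similarity to a query vector, and the top hits would feed a retrieval-augmented prompt. The vectors and document names here are toy stand-ins.

    import numpy as np

    # Toy "embeddings" standing in for vectors from a real embedding model.
    documents = {
        "records-retention-policy": np.array([0.9, 0.1, 0.0]),
        "incident-response-plan":   np.array([0.1, 0.8, 0.3]),
        "travel-reimbursement-faq": np.array([0.0, 0.2, 0.9]),
    }

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def retrieve(query_vector, k=2):
        """Rank documents by similarity; top hits would be fed to an LLM prompt."""
        ranked = sorted(documents.items(),
                        key=lambda kv: cosine(query_vector, kv[1]),
                        reverse=True)
        return [name for name, _ in ranked[:k]]

    # A query vector that "looks like" a security question in this toy space.
    print(retrieve(np.array([0.2, 0.9, 0.2])))
    # ['incident-response-plan', 'travel-reimbursement-faq']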

Trend 6: Real-time data quality becomes non-negotiable

Expect a major shift from reactive data cleansing to proactive, automated data quality monitoring. AI-based anomaly detection will become standard in data pipelines, ensuring the accuracy and reliability of data feeding AI systems and mission applications. The new rule: If it’s not high-quality in real time, it won’t support AI at scale.
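
One hedged example of what an automated quality gate might look like: a simple statistical check that flags a pipeline metric, such as a daily row count, when it drifts far outside its recent history. The thresholds and sample values are illustrative.

    from statistics import mean, stdev

    def flag_anomaly(history, new_value, threshold=3.0):
        """Flag a pipeline metric that drifts beyond `threshold` standard
        deviations of its recent history."""
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            return new_value != mu
        return abs(new_value - mu) / sigma > threshold

    daily_row_counts = [10_120, 9_980, 10_340, 10_055, 10_210]
    print(flag_anomaly(daily_row_counts, 10_180))  # False: within normal range
    print(flag_anomaly(daily_row_counts, 2_450))   # True: likely a broken feed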

Trend 7: Zero trust expands into data access and auditing

As agencies mature their zero trust programs, 2026 will bring deeper automation in data permissions, access patterns and continuous auditing. Policy-as-code approaches will replace static permission models, ensuring data is both secure and available for AI-driven workloads.
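
A small policy-as-code sketch, with rules expressed as data and evaluated on every request rather than baked into static group memberships. The resources, roles and attributes are invented for the example.

    # A toy policy-as-code rule set: policies are data, evaluated on every request.
    POLICIES = [
        {"resource": "hr-records", "role": "hr-analyst", "max_sensitivity": "high",
         "require_mfa": True},
        {"resource": "public-datasets", "role": "*", "max_sensitivity": "low",
         "require_mfa": False},
    ]

    SENSITIVITY = {"low": 0, "moderate": 1, "high": 2}

    def allow(request):
        for p in POLICIES:
            if p["resource"] != request["resource"]:
                continue
            if p["role"] not in ("*", request["role"]):
                continue
            if SENSITIVITY[request["sensitivity"]] > SENSITIVITY[p["max_sensitivity"]]:
                continue
            if p["require_mfa"] and not request["mfa_verified"]:
                continue
            return True  # first matching policy grants access
        return False     # default deny

    print(allow({"resource": "hr-records", "role": "hr-analyst",
                 "sensitivity": "high", "mfa_verified": True}))   # True
    print(allow({"resource": "hr-records", "role": "hr-analyst",
                 "sensitivity": "high", "mfa_verified": False}))  # False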

Trend 8: Workforce roles evolve toward human-AI collaboration

The rise of generative AI will reshape federal data roles. The most in-demand professionals won’t necessarily be deep coders. They will be connectors who understand prompt engineering, data ethics, semantic modeling and AI-optimized workflows. Agencies will need talent that can design systems where humans and machines jointly manage data assets.

The bottom line: 2026 is the year of AI-ready data

In the year ahead, the agencies that win will build data ecosystems designed for adaptability, interoperability and human–AI collaboration. The outdated mindset of “collect and store” will be replaced by “integrate and activate.”

For federal leaders, the mission imperative is clear: Make data trustworthy by default, usable by design, and ready for AI from the start. Agencies that embrace this shift will move faster, innovate safely, and deliver more resilient mission outcomes in 2026 and beyond.

Seth Eaton is vice president of technology & innovation at Amentum.

The post 8 federal agency data trends for 2026 first appeared on Federal News Network.

A data mesh approach: Helping DoD meet 2027 zero trust needs

As the Defense Department moves to meet its 2027 deadline for implementing its zero trust strategy, it’s critical that the military can ingest data from disparate sources while also being able to observe and secure systems that span all layers of data operations.

Gone are the days of secure moats. Interconnected cloud, edge, hybrid and services-based architectures have created new levels of complexity — and more avenues for bad actors to introduce threats.

The ultimate vision of zero trust can’t be accomplished through one-off integrations between systems or layers. For critical cybersecurity operations to succeed, zero trust must be based on fast, well-informed risk scoring and decision making that consider a myriad of indicators that are continually flowing from all pillars.

Short of rewriting every application, protocol and API schema to support new zero trust communication specifications, agencies must look to the one commonality across the pillars: They all produce data in the form of logs, metrics, traces and alerts. When brought together into an actionable speed layer, the data flowing from and between each pillar can become the basis for making better-informed zero trust decisions.
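
To illustrate the idea of a speed layer, the sketch below normalizes telemetry from different pillars into a common record and computes a naive risk score that could inform an access decision. The pillar names, weights and signals are assumptions made for illustration, not the DoD’s scoring model.

    # Illustrative only: normalize telemetry from different zero trust pillars into
    # a common record, then compute a naive access-risk score.
    def normalize(source, raw):
        return {
            "pillar": source,                    # e.g., identity, device, network
            "subject": raw.get("user") or raw.get("host"),
            "signal": raw["signal"],
            "severity": raw.get("severity", 1),  # 1 (info) .. 5 (critical)
        }

    WEIGHTS = {"identity": 1.5, "device": 1.2, "network": 1.0}

    def risk_score(events):
        return sum(WEIGHTS.get(e["pillar"], 1.0) * e["severity"] for e in events)

    events = [
        normalize("identity", {"user": "jdoe", "signal": "impossible_travel", "severity": 4}),
        normalize("device",   {"host": "lt-4421", "signal": "missing_patch", "severity": 2}),
        normalize("network",  {"host": "lt-4421", "signal": "new_destination", "severity": 1}),
    ]

    score = risk_score(events)
    print("deny" if score >= 7 else "step-up auth" if score >= 4 else "allow", score)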

The data challenge

According to the DoD, achieving its zero trust strategy results in several benefits, including “the ability of a user to access required data from anywhere, from any authorized and authenticated user and device, fully secured.”

Every day, defense agencies are generating enormous quantities of data. Things get even trickier when the data is spread across cloud platforms, on-prem systems, or specialized environments like satellites and emergency response centers.

It’s hard to find information, let alone use it efficiently. And with different teams working with many different apps and data formats, the interoperability challenge increases. The mountain of data is growing. While it’s impossible to calculate the amount of data the DoD generates per day, a single Air Force unmanned aerial vehicle can generate up to 70 terabytes of data within a span of 14 hours, according to a Deloitte report. That’s about seven times more data output than the Hubble Space Telescope generates over an entire year.

Access to that information is becoming a bottleneck.

Data mesh is the foundation for modern DoD zero trust strategies

Data mesh offers an alternative answer to organizing data effectively. Put simply, a data mesh overcomes silos, providing a unified and distributed layer that simplifies and standardizes data operations. Data collected from across the entire network can be retrieved and analyzed at any or all points of the ecosystem — so long as the user has permission to access it.

Instead of relying on a central IT team to manage all data, data ownership is distributed across government agencies and departments. The Cybersecurity and Infrastructure Security Agency uses a data mesh approach to gain visibility into security data from hundreds of federal agencies, while allowing each agency to retain control of its data.

Data mesh is a natural fit for government and defense sectors, where vast, distributed datasets have to be securely accessed and analyzed in real time.

Utilizing a scalable, flexible data platform for zero trust networking decisions

One of the biggest hurdles with current approaches to zero trust is that most zero trust implementations attempt to glue together existing systems through point-to-point integrations. While it might seem like the most straightforward way to step into the zero trust world, those direct connections can quickly become bottlenecks and even single points of failure.

Each system speaks its own language for querying, security and data format; the systems were also likely not designed to support the additional scale and loads that a zero trust security architecture brings. Collecting all data into a common platform where it can be correlated and analyzed together, using the same operations, is a key solution to this challenge.

When implementing a platform that fits these needs, agencies should look for a few capabilities, including the ability to monitor and analyze all of the infrastructure, applications and networks involved.

In addition, agencies must have the ability to ingest all events, alerts, logs, metrics, traces, hosts, devices and network data into a common search platform that includes built-in solutions for observability and security on the same data without needing to duplicate it to support multiple use cases.

This latter capability allows the monitoring of performance and security not only for the pillar systems and data, but also for the infrastructure and applications performing zero trust operations.

The zero trust security paradigm is necessary; we can no longer rely on simplistic, perimeter-based security. But the requirements demanded by the zero trust principles are too complex to accomplish with point-to-point integrations between systems or layers.

Zero trust requires integration across all pillars at the data level. In short, the government needs a data mesh platform to orchestrate these implementations. By following the guidance outlined above, organizations will not just meet requirements, but truly get the most out of zero trust.

Chris Townsend is global vice president of public sector at Elastic.

The post A data mesh approach: Helping DoD meet 2027 zero trust needs first appeared on Federal News Network.

Operational Readiness and Resiliency Index: A new model to assess talent, performance

You just left a high-level meeting with agency leadership. You and your colleagues have been informed that Congress passed new legislation, and your agency is expected to implement the new law with your existing budget and staff. The lead program office replied, “We can make this work.” The agency head is pleased to hear this, but has reservations: How do you know?

Another situation: The president just announced a new priority and has assigned it to your agency. Again, there is no new funding for the effort. Your agency head assigns the priority to your program with the expectation for success. How do you proceed?

Today, given the recent reductions in force (RIFs), people voluntarily leaving government, and structural reorganizations that have taken place and will likely continue, answering the question “How to proceed?” is even more difficult. There is a real need to “know” with a level of certainty whether there are sufficient resources to effectively deliver and sustain new programs or in some cases even the larger agency mission.

Members of the Management Advisory Group — a voluntary group of former appointees under Presidents George W. Bush and Donald Trump — and I believe the answer to these and other questions around an organization’s capabilities and capacity to perform can be found by employing the Operational Readiness and Resiliency Index (ORRI). ORRI is a domestic equivalent of the military readiness model. It is structured into four categories:

  • Workforce
  • Performance
  • Culture
  • Health

Composed of approximately 50 data elements and populated by existing systems of record, including payroll, learning management systems, finance, budget and programmatic/functional work systems, ORRI links capabilities/capacity with performance, informed by culture and health to provide agency heads and executives with an objective assessment of their organization’s current and future performance.
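
For illustration only, the kind of roll-up ORRI implies might look like the sketch below, where four category subscores combine into a composite index. The weights, subscores and 0-100 scale are invented for the example and are not the actual ORRI methodology.

    # Illustrative composite readiness score from four category subscores.
    WEIGHTS = {"workforce": 0.35, "performance": 0.35, "culture": 0.15, "health": 0.15}

    def readiness_index(scores):
        """scores: {category: 0-100 subscore rolled up from underlying data elements}."""
        return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

    agency = {"workforce": 72, "performance": 81, "culture": 64, "health": 58}
    print(readiness_index(agency))  # composite score on the same 0-100 scale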

In the past, dynamic budgeting and incrementalism meant that risk was low and performance at some levels predictable. We have all managed some increases or cuts to budgets. Those days are gone. Government is changing now at a speed and degree of transformation that has not been witnessed before. Relying on traditional budgeting methods and employee surveys cannot provide insights needed to assess whether current resources provide the capabilities or capacity for future performance of an agency — at any level.

So how does it work?

ORRI pulls mainly from existing systems of record. Many of these systems sit outside traditional human resources (HR) departments, including budget, finance and work systems. Traditionally, HR departments relied on personnel data alone. Those systems told you what staff were paid to do, not what they could do; they focus on classification and pay, not skills, capacity or performance.

Over the years, we have made many efforts to better measure performance. The Government Performance and Results Act (GPRA) as amended, the Program Assessment Rating Tool (PART), category management and other efforts have attempted to better account for performance. These tools — along with improvements in budgeting to include zero-based budgeting; planning, programming and budgeting systems; and enterprise risk management — have continued to advance our thinking along systems lines. These past efforts, however, failed to produce an integrated model that runs in near real-time or sets objective performance targets using best-in-class benchmarks. Linking capabilities/capacity to performance provides the ability to ask new questions and conduct comparative performance assessments. ORRI can answer such questions as:

  • Are our staffing plans ready for the next mission priority? Can we adapt? Are we resilient?
  • Do we have the right numbers with the right skills assigned to our top priorities?
  • Are we over-staffed in uncritical areas?
  • Given related functions, where are the performance outliers — good and bad?
  • Given our skill shortages, where do I have those skills that are at the right level available now? Should we recruit, train or reassign to make sure we have the right skills? What is in the best interest of the agency/taxpayer?
  • Is our performance comparable — in named activity, to the best — regardless of sector?
  • What does our data/evidence tell us about our culture? Do we represent excellence in whatever we do? Compared to whom?
  • Where are we excelling and why?
  • Where can we invest to demonstrate impact faster?

Focusing on workforce and performance is critical. However, if you believe that culture eats strategy every time, workforce and performance need to be informed by culture. Hence ORRI includes culture as a category. Culture in this model concentrates on having a team of executives who drive and sustain the culture, evidenced by cycles of learning, change management success and employee engagement. Health is also a key driver of sustained higher performance; ORRI tracks both positive and negative indicators of health. Again, targets are set and measured to drive performance and increase organizational health. Targets are based on industry best-in-class standards and the strategic performance targets necessary for mission achievement.

Governmentwide, ORRI can provide the Office of Management and Budget with real-time comparative performance around key legislative and presidential priorities and cross-agency thematic initiatives. For the Office of Personnel Management, it can provide strategic intelligence on talent, such as enterprise risk management based on an objective assessment: data driven, on critical skills, numbers, competitive environment and performance.

ORRI represents the first phase of an expanded notion of talent assessment. It concentrates on human talent: federal employees.

Phase two of this model will expand the notion of operating capabilities to include AI agents and robotics. As the AI revolution gains speed and acceptance, we can see that agencies will move toward increased use of these tools to increase productivity and reduce the transactional cost of government services. Government customer service and adjudication processes will be assigned to AI agents. As at Amazon, more and more warehouse functions will be assigned to physical robots. Talent will need to include machine capabilities, and the total capabilities/capacity will reflect a new performance curve — optimizing talent from various sources. This new reality will require a reset in the way government plans, budgets, deploys talent and assesses overall performance.

Phase three will encompass the government’s formalized external supply chains, which represent the non-governmental delivery systems — essentially government by other means. For example, the rise of public/private partnerships is fundamentally changing the nature of federated government; think of NASA and its dependence on SpaceX, Boeing, Lockheed Martin and others. ORRI will need to expand to accurately capture the contribution of these alternative delivery systems to overall government performance. As the role of the federal government continues to evolve, so too must our models for planning, managing talent and assessing performance. ORRI provides that framework.

John Mullins served on the Trump 45 Transition Team and later as the senior advisor to the director at OPM. Most recently Mullins served as strategy and business development executive for IBM supporting NASA, the General Services Administration and OPM.

Mark Forman was the first administrator for E-Government and Information Technology (Federal CIO). He most recently served as chief strategy officer at Amida Technology Solutions.

The post Operational Readiness and Resiliency Index: A new model to assess talent, performance first appeared on Federal News Network.

CMMC DFARS clause explained: The KO’s checklist contractors never see

If you only read the contract clause, you’re missing the playbook.

As of Nov. 10, the Defense Federal Acquisition Regulation Supplement (DFARS) 252.204-7021, also known as the “Cybersecurity Maturity Model Certification [CMMC] clause,” is now in effect. With implementation officially underway, contractors are under pressure to understand not only what 7021 demands of them, but also what contracting officers (KOs) are required to do behind the scenes. Those instructions, which are buried in DFARS subpart 204.75, tell KOs when to include 7021, when they cannot award, and what they must verify before exercising options or extending a period of performance.

Contractors often treat 7021 as a black box dropped into their contracts. Now that the clause is active across new awards, KOs are following explicit procedures you never see. Understanding those procedures gives you visibility into how requirements are determined, enforced and sustained over the life of your award.

Where 7021 really comes from — and what KOs must do

The CMMC clause doesn’t appear in your contracts out of nowhere. It’s part of a stack. At the top is 32 CFR Part 170, the Defense Department’s CMMC program policy (effective Dec. 2024). DFARS 204.75 translates that policy into concrete guidance for contracting officers: policy, procedures and instructions on when to use the clause. You see it in practice as DFARS 252.204-7021, paired with 252.204-7025. DFARS 204.7500-7501 set the scope and definitions. The point is that DFARS isn’t inventing anything new; it’s carrying out CMMC program policy and telling KOs how to enforce it.

The KO instructions are unambiguous. Under DFARS 204.7502, a KO shall insert the required CMMC level when the program office or requiring activity tells them to. The KO doesn’t decide the level, as that comes from the program office based on the data and mission, but they are responsible for putting it into your contract language. Just as clearly, KOs shall not award a contract, task order or delivery order to an offeror without a current CMMC status at the required level.

Two qualifiers matter. First, “CMMC status” doesn’t mean “in progress.” It means you’ve achieved the minimum required score for the assessment, and your status is recognized (self or third-party; final or — at Levels 2 and 3 — conditional). Second, “current” matters. Status is generally valid for three years, and you must maintain it for the life of the award.

To make sense of this, it helps to decode what “status” really means at each level:

  • Level 1: Only a final self-assessment counts and no plans of action and milestones (POA&Ms) are allowed.
  • Level 2: Can be self- or certified third-party assessor organization (C3PAO)- assessed, in either final or conditional status.
  • Level 3: Always a government assessment — Defense Industrial Base Cybersecurity Assessment Center (DIBCAC) — which can be final or conditional.

KOs may award if your status is final or conditional at Level 2 or 3, provided it meets the required level in the solicitation and any open items are limited to those allowed by 32 CFR §170.21. But conditional status is time bound: 180 days from the status date. If you achieved conditional four months ago and bid today, you’ve only got about 60 days left to close those POA&Ms. There is no conditional path for Level 1.
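
A small helper along these lines can keep that clock visible during capture and proposal planning. The sketch below computes the remaining conditional closeout window and an approximate three-year status expiration; the dates are illustrative, and the exact validity rules should be confirmed against 32 CFR Part 170 for your assessment type.

    from datetime import date, timedelta

    CONDITIONAL_WINDOW_DAYS = 180   # conditional Level 2/3 closeout window
    STATUS_VALIDITY_YEARS = 3       # general currency period discussed above

    def days_left_in_conditional(status_date, today=None):
        today = today or date.today()
        deadline = status_date + timedelta(days=CONDITIONAL_WINDOW_DAYS)
        return (deadline - today).days

    def status_expiration(status_date):
        # Approximate three-year validity; confirm exact rules for your assessment type.
        return status_date.replace(year=status_date.year + STATUS_VALIDITY_YEARS)

    # Example: conditional status achieved four months before a bid.
    status_date = date(2025, 8, 1)
    print(days_left_in_conditional(status_date, today=date(2025, 12, 1)))  # 58 days left
    print(status_expiration(status_date))                                  # 2028-08-01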

The message is clear: While conditional paths exist, they are narrow and tightly limited.

The SPRS/UID reality check

Before a KO awards, extends or exercises an option, they verify your status in the Supplier Performance Risk System (SPRS) using your 10-character alphanumeric CMMC Unique Identifier (UID), which is tied to the specific system or enclave that was assessed. This binding matters. The government wants traceability from the contract to the exact enclave processing its data. If your UID points to System A, but CUI ends up in System B, you’ve created a mismatch with contractual — and potentially False Claims Act — implications. Keep your boundary, documentation and operational reality aligned to the UID you present.

This KO check isn’t one-and-done. KOs verify at initial award, again at option exercise or performance extensions, and again if you introduce a new UID mid-performance (for example, after a significant scope change requiring a new assessment). If your status isn’t current at any of those points, the instruction is simple: no award, or no option for extension.
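
Contractors can mirror those checks internally before the KO ever looks. The sketch below is a local readiness self-check against a hypothetical record of your own CMMC data, not an integration with SPRS itself; the field names and values are invented for illustration.

    from datetime import date

    # Hypothetical internal record mirroring what the KO will look for in SPRS.
    cmmc_record = {
        "uid": "A1B2C3D4E5",            # 10-character CMMC UID for the assessed enclave
        "level": 2,
        "status": "conditional",         # "final" or "conditional"
        "status_date": date(2025, 8, 1),
        "expires": date(2028, 8, 1),
        "enclave": "CUI-enclave-prod",   # the system boundary the UID is bound to
    }

    def ready_for_award(record, required_level, data_system, today=None):
        today = today or date.today()
        checks = {
            "level meets requirement": record["level"] >= required_level,
            "status is current": today <= record["expires"],
            "UID matches the system handling the data": record["enclave"] == data_system,
        }
        return all(checks.values()), checks

    ok, detail = ready_for_award(cmmc_record, required_level=2,
                                 data_system="CUI-enclave-prod",
                                 today=date(2025, 12, 1))
    print(ok, detail)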

When 7021 must be used — and when it isn’t

The rule is now active, placing us in the phased rollout period that runs through Nov. 9, 2028. During this stage, DFARS 204.7504 requires KOs to insert 7021 whenever the program office identifies a CMMC level and no waiver applies. Waivers remain rare and are issued only at the contract level, not as carve-outs for individual contractors.

When the rollout ends on Nov. 10, 2028, the requirement broadens: 7021 must appear in any contract involving the processing, storage or transmission of federal contract information (FCI) or CUI, unless formally waived. Wherever 7021 is used, 7025 follows to ensure all offerors see the requirement before bidding.

What this means for the contractor

Contractors should assume that KOs are already verifying CMMC status in SPRS today, not at some future point. Here’s how the KO’s world translates into your action list:

  • Don’t “strategy-bet” on KO discretion: The KO isn’t picking your level. The program office is. The KO’s job is execution and verification under “shall” language.
  • Know your status category and the timeline: If you’re planning to bid with conditional Level 2, track the 180-day closeout window from your status date. Build that into proposal schedules and risk plans.
  • Engineer your scope and keep it stable: Your CMMC UID binds the assessment to the specific system that will handle DoD data. Avoid unnecessary “significant change” events mid-performance that would force a new assessment/UID, unless you’ve planned for it.
  • Keep status current through the entire period of performance (PoP): Treat the three-year validity like a maintenance interval. If your status expires during performance, you’ve put option exercises and extensions at risk.
  • Map data flows to the assessed system: Ensure your CUI boundary and your assessed enclave are the same in reality, not just on paper. Align your system security plan (SSP), network diagrams, asset inventory and boundary controls to the UID’s scope.
  • Bid packages should include UID clarity: Make it easy for the KO to verify SPRS entries. Label the UID, level, status (final or conditional), status date and expiration in your cover letter or compliance matrix.
  • Have a POA&M closure plan you can execute: If conditional, your plan should show who/what/when, procurement lead times and validation steps. Assume the government will ask for evidence of progress.
  • Prepare for options early: Six months before option exercise, review your status currency, any scope drift, and whether new UIDs have appeared. Give your KO a smooth verification path.

The KO’s lens

Now that 7021 is in effect and being applied to new awards, KOs are already following the same mandatory procedures across solicitations, evaluations and option exercises. From the KO’s perspective, 7021 is not subjective. It’s a procedure backed by “shall” language: Include the required level, verify status in SPRS by UID, and do not award or extend if the status isn’t current at the required level. Conditional Level 2/3 can win you work, but only within the 180-day window and only with allowable POA&M items per policy.

By understanding the KO’s checklist, contractors can predict how requirements will appear in your contracts, anticipate when status checks will occur, and avoid surprises that might otherwise cost you awards or option years.

Jacob Horne is the chief cybersecurity evangelist at Summit 7.

The post CMMC DFARS clause explained: The KO’s checklist contractors never see first appeared on Federal News Network.

The veteran advantage: How military service shapes cybersecurity leadership

In an era where cyber threats evolve faster than most organizations can respond, effective cybersecurity leadership demands more than technical expertise. It requires discipline, grace under pressure, and the ability to make mission-critical decisions with limited information, which are precisely the qualities forged through military service. 

Veterans often enter federal cyber roles already trained to operate in high-pressure environments where the stakes are real and failure isn’t an option. Their service fundamentally shapes how they lead, build resilient teams, and anticipate threats before they emerge. 

As a veteran of the U.S. Air Force who now works as a cybersecurity executive, I’ve seen firsthand how the military prepares its service members for the workforce. Here are some of my observations of how service shapes cybersecurity leadership. 

Navigating cultural complexity

Military service immediately immerses its personnel in diverse environments, requiring them to understand how colleagues from different backgrounds operate and think. As service members advance, this cultural fluency deepens. Leaders must motivate people with varying needs, perspectives and communication styles. 

International deployments further build this perspective. For instance, understanding that Middle Eastern business partners may prioritize relationship-building before negotiations, or that other cultures approach hierarchy differently than Americans, becomes second nature. In an increasingly globalized federal workforce and threat landscape, this cross-cultural competency is invaluable.  

Executing the mission without exception

When federal agencies hire veterans, they gain professionals who see tasks through to completion. If an assignment falls outside normal responsibilities, veterans rise to the occasion without questioning whether it’s “their job.” This mission-first mindset, where every action contributes to a larger objective, stands in stark contrast to siloed thinking that can plague organizations. And this flexibility proves essential during cyber incidents when traditional roles blur and every team member must contribute wherever needed most. 

Decisions under pressure

Military training deliberately forces officers to make decisions without comprehensive information. The lesson: A leader who makes an imperfect decision is better than one who makes no decision at all. Veterans learn to work with the available information, commit to action, and move forward. 

The concept of “fog of war,” or confusion caused by chaos when engaged in military operations or exercises, teaches service members that everyone is doing their best with the information they have. This builds both decisiveness and humility, recognizing that perfection is aspirational, not achievable. 

For federal cybersecurity leaders managing events where every minute counts, this ability to analyze quickly, decide confidently, and remain composed under pressure is indispensable.

Thinking like the adversary

Operational security training teaches service members to identify mission-critical assets, anticipate adversary tactics, and spot subtle anomalies. These skills translate directly to cybersecurity threat modeling and risk assessment. 

Veterans naturally prioritize what matters most, evaluate threats methodically, and continuously reassess risk. Those who conducted offensive cyber operations in military roles have encountered nation-state activity at levels rarely accessible in the civilian world. They understand adversary tactics, techniques and procedures with a depth that strengthens any security team. 

Working across massive government enterprises, whether Air Force-wide or across the entire Defense Department, provides veterans with crucial experience managing cybersecurity at scales few civilian organizations can match.  

Communicating mission and intent

In the military, commanders often state their intent upfront, ensuring everyone understands not just what to do, but why. This clarity drives success. 

Federal agencies must adopt similar practices. When team members understand the larger mission, vision and goals, they make better decisions at every level. Leaders should regularly communicate organizational priorities and check that employees grasp the underlying purpose of their work. Transparency about intent helps prevent disengagement and ensures alignment across complex organizations.  

For instance, at my company (which employs many former service members), business leaders align every 90 days to identify top priorities, then cascade this information throughout the organization. Team leads verify that employees understand both the “what” and the “why.” This well-defined approach keeps everyone moving in the same direction.  

Leading through influence, not authority

The transition from military to civilian leadership requires adaptation. Military rank structure provides clear authority; civilian workplaces demand influence built through relationships. 

Veterans must develop power beyond their rank or title and not let this solely determine their workplace interactions. This means building genuine rapport with team members, establishing mutual respect, and creating working relationships strong enough that people want to deliver excellent work. When employees trust and respect their leader, motivation becomes intrinsic rather than imposed.  

This transition can be challenging for veterans accustomed to rule-driven environments with explicit policies for every situation. Success requires learning to lead through persuasion, collaboration and relationship-building. 

Translating military experience for civilian impact

Veterans transitioning into federal cybersecurity roles should focus on demonstrating technical depth over credentials. While certifications matter and are required by many federal jobs, hiring managers also want prospective talent to demonstrate problem-solving ability and lessons learned from actual experience.

Veterans must learn to translate classified military work into language civilian employers will understand, avoiding military jargon while communicating substantive experience. This skill — articulating complex technical work without revealing protected information — is valuable in government roles requiring discretion. 

Additionally, veterans should ease the formality that military boards require and may need to adjust how they carry themselves. Serving in the military can mean following strict rules and maintaining a formal demeanor that civilian workplaces don’t expect. Being personable and building personal connections accelerates how veterans assimilate into civilian teams.

Offering lessons for all federal leaders

Veterans bring valuable perspective to federal cybersecurity leadership, but their lessons apply broadly. Any employee can second-guess decisions without visibility into the full decision-making process. While explaining reasoning requires effort, sharing the intent behind decisions keeps teams motivated and prevents disengagement. 

The military principle of operating in the “fog of war,” where decisions are made with imperfect information, applies equally to civilian leadership. Federal executives should embrace this reality: transparent communication about constraints and trade-offs builds understanding, even when decisions are unpopular. 

Building resilient cyber teams

The intersection of military experience and federal cybersecurity creates powerful synergies. Veterans bring operational discipline, threat awareness and mission focus, qualities that strengthen any security program. Their training in high-stakes decision-making, cultural navigation and adversarial thinking addresses critical gaps in federal cyber defense.  

As threats grow more sophisticated and federal networks more complex, the leadership qualities established through military service become increasingly vital. Organizations that effectively employ veteran talent and adopt their mission-first mindset position themselves to meet emerging cybersecurity challenges with confidence and resilience. 

Russel Van Tuyl is vice president of services at SpecterOps. 

The post The veteran advantage: How military service shapes cybersecurity leadership first appeared on Federal News Network.

The mineral deficiency in America’s economic health

Many Americans remain unaware of a troubling reality that is deeply consequential to our daily lives. Buried within the obscure corners of the periodic table are elements that are essential to our modern economy and national security. These elements, known as critical minerals, are indispensable.

They’re found in everything: health and beauty products, semiconductors in our smartphones, and in the fiber optics that enable internet connectivity. Yet, disconcertingly, we are exceedingly reliant on foreign suppliers for these minerals; this dependency puts America on its back foot.

Consider gallium, used in LEDs and solar panels. Despite possessing considerable gallium resources, the U.S. has no current domestic gallium production and is 100% reliant on imports (most of which come from China). In fact, the U.S. is entirely dependent on foreign sources for at least 12 of the 54 critical minerals identified as essential by the U.S. Geological Survey. For another 29, we are over 50% reliant on non-domestic sources. Our industries — from advanced manufacturing to defense — remain susceptible to supply chain disruptions because of this, as USGS’ latest draft assessment lays out.

This was not always the case. From the 1950s to the 1980s, the United States led the world in the production and refining of rare earth elements (REEs), a subset of critical minerals. However, increased global interconnectedness, high domestic production costs, and environmental challenges contributed to a decline in domestic production. As the United States shifted focus, the People’s Republic of China aggressively invested in its REE mining technology and infrastructure, transforming itself into the world’s dominant player by the mid-1990s. Today, it’s clear that ceding dominance in the critical minerals sector is a path America can no longer afford.

China has recently ramped up export controls on minerals like gallium, germanium, antimony and several others. This action underscores a precarious position for the United States and its allies. In December 2024, China banned the export of these minerals to the U.S., Japan, and the Netherlands and subsequently expanded controls to include tungsten, indium, bismuth, tellurium and molybdenum. This move has severely impacted the availability of minerals for which the U.S. is significantly import-reliant.

It is within this context that the administration has made critical minerals security a key component of its energy strategy. Yet, despite these efforts, progress has remained slow because of overlapping initiatives. Intentional coordination among the more than 15 federal agencies involved in mineral security could accelerate progress.

Cohesion, coordination and a comprehensive approach could help to overcome this. A U.S. Critical Minerals Action Plan could focus on fostering a domestic renaissance in mining and processing, strengthening international cooperation, and mitigating risks while fostering a more transparent market.

First, domestic mining and processing capabilities would need to be enhanced. This could mean accelerating permitting processes, streamlining regulations, and investing in exploration and workforce development. It’s not just about digging up minerals; it’s about revitalizing the entire industry value chain, from education to innovation, to support sustainable and efficient production.

Second, international cooperation could be paramount. The U.S. may expand its role in existing multilateral arrangements and invest strategically in the critical mineral projects of allied nations. This could help diversify supply chains and reduce dependency on any single source — specifically, reducing reliance on China.

Third, the U.S. can mitigate risks by creating a well-functioning critical minerals market. This includes implementing targeted incentives to stabilize the market and encourage investment.

The considerations laid out in a U.S. Critical Minerals Action Plan are not a luxury; they are a necessity. Such a plan could serve as a blueprint for reducing American vulnerabilities, securing supply chains, and maintaining national security and economic stability. By taking concerted action now, leaders across the country can ensure that America’s future remains bright, innovative and secure.

Richard Longstaff is a managing director for Deloitte Consulting LLP.

The post The mineral deficiency in America’s economic health first appeared on Federal News Network.

The hidden vulnerability: Why legacy government web forms demand urgent attention

Government agencies face a security challenge hiding in plain sight: outdated web forms that collect citizen data through systems built years — sometimes decades — ago. While agencies invest heavily in perimeter security and advanced threat detection, many continue using legacy forms lacking modern encryption, authentication capabilities and compliance features. These aging systems process Social Security numbers, financial records, health information and security clearance data through technology that falls short of current federal security standards.

The scale of this challenge is substantial. Government organizations allocate 80% of IT budgets to maintaining legacy systems, leaving modernization efforts chronically underfunded. Critical legacy systems cost hundreds of millions annually to maintain, with projected spending reaching billions by 2030. Meanwhile, government data breaches cost an average of $10 million per incident in the United States — the highest globally.

The encryption gap that persists

Despite the 2015 federal mandate establishing HTTPS as the baseline for all government websites, implementation gaps continue. The unencrypted HTTP protocol exposes data to interception, manipulation and impersonation attacks. Attackers positioned on the network can read Social Security numbers, driver’s license numbers, financial account numbers and login credentials transmitted in plain text.

Legacy government web forms that do implement encryption often use outdated protocols no longer meeting regulatory requirements. Older systems rely on deprecated hashing algorithms like SHA-1 and outdated TLS versions vulnerable to known exploits. Without proper security header enforcement, browsers don’t automatically use secure connections, allowing users to inadvertently access unencrypted form pages.
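
As one hedged example of closing that gap, assuming a Python web application built on Flask, the sketch below redirects plain-HTTP requests and attaches HSTS and related security headers to every response. (Behind a load balancer or proxy, HTTPS detection would also require the proxy headers to be configured correctly.)

    from flask import Flask, redirect, request

    app = Flask(__name__)

    @app.before_request
    def force_https():
        # Redirect any plain-HTTP request to its HTTPS equivalent.
        if not request.is_secure:
            return redirect(request.url.replace("http://", "https://", 1), code=301)

    @app.after_request
    def security_headers(response):
        # HSTS tells browsers to use HTTPS automatically on future visits.
        response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
        response.headers["X-Content-Type-Options"] = "nosniff"
        response.headers["Content-Security-Policy"] = "default-src 'self'"
        return response

    @app.route("/form")
    def form():
        return "secure form placeholder"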

Application-layer vulnerabilities

Beyond transmission security, legacy web forms suffer from fundamental application vulnerabilities. Testing reveals that over 80% of government web applications remain prone to SQL injection attacks. Unlike private sector organizations that remediate 73% of identified vulnerabilities, government departments remediate only 27% — the lowest among all industry sectors.

SQL injection remains one of the most dangerous attacks against government web forms. Legacy forms constructing database queries using string concatenation rather than parameterized queries introduce serious vulnerabilities. This insecure practice allows attackers to inject malicious SQL code, potentially gaining unauthorized access to national identity information, license details and Social Security numbers. Attackers exploit these vulnerabilities to alter or delete identity records, manipulate data to forge official documents, and exfiltrate entire databases containing citizen information.
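
The fix is well understood: let the database driver treat user input strictly as data. A minimal sketch using Python’s built-in sqlite3 module, with an invented table and a classic injection payload:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE citizens (ssn TEXT, name TEXT)")
    conn.execute("INSERT INTO citizens VALUES ('123-45-6789', 'A. Smith')")

    user_input = "' OR '1'='1"  # a classic injection payload

    # Vulnerable pattern: string concatenation lets the payload rewrite the query.
    # query = "SELECT * FROM citizens WHERE name = '" + user_input + "'"

    # Safe pattern: the driver treats user input strictly as data, never as SQL.
    rows = conn.execute("SELECT * FROM citizens WHERE name = ?", (user_input,)).fetchall()
    print(rows)  # [] -- the payload matches nothing instead of dumping the table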

Cross-site scripting (XSS) affects 75% of government applications. XSS attacks enable attackers to manipulate users’ browsers directly, capture keystrokes to steal credentials, obtain session cookies to hijack authenticated sessions, and redirect users to malicious websites. Legacy forms also lack protection against CSRF attacks, which trick authenticated users into performing unwanted actions without their knowledge.
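
One common anti-CSRF approach is a per-session token the server issues and later verifies. A minimal sketch using Python’s standard library, with the session identifier and key storage simplified for illustration:

    import hmac
    import secrets

    # Server-side secret; in practice, store and rotate this outside source code.
    SERVER_KEY = secrets.token_bytes(32)

    def issue_csrf_token(session_id: str) -> str:
        # Bind the token to the user's session so it can't be replayed elsewhere.
        return hmac.new(SERVER_KEY, session_id.encode(), "sha256").hexdigest()

    def verify_csrf_token(session_id: str, submitted: str) -> bool:
        expected = issue_csrf_token(session_id)
        return hmac.compare_digest(expected, submitted)

    session = "session-abc123"
    token = issue_csrf_token(session)        # embed in a hidden form field
    print(verify_csrf_token(session, token))           # True: legitimate submission
    print(verify_csrf_token(session, "forged-token"))  # False: reject the request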

Compliance imperative

Federal agencies must comply with the Federal Information Security Management Act (FISMA), which requires implementation of National Institute of Standards and Technology SP 800-53 security controls including access control, configuration management, identification and authentication, and system and communications protection. Legacy web forms fail FISMA compliance when they cannot implement modern encryption for data in transit and at rest, lack multi-factor authentication capabilities, don’t maintain comprehensive audit logs, use unsupported software without security patches, and operate with known exploitable vulnerabilities.

Federal agencies using third-party web form platforms must ensure vendors have appropriate FedRAMP authorization. FedRAMP requires security controls compliance incorporating NIST SP 800-53 Revision 5 controls, impact level authorization based on data sensitivity, and continuous monitoring of encryption methods and security posture. Legacy government web forms implemented through non-FedRAMP-authorized platforms represent unauthorized use of non-compliant systems.

Real-world transmission failures

The gap between policy and practice is stark. Federal agencies commonly require contractors to submit forms containing Social Security numbers, dates of birth, driver’s license numbers, criminal histories and credit information via standard non-encrypted email as plain PDF attachments. When contractors offer encrypted alternatives, badge offices often respond with resistance to change established procedures.

Most federal agencies lack basic secure portals for PII submission, forcing reliance on email despite policies requiring encryption. Standard Form 86 for national security clearances and other government forms are distributed as fillable PDFs that can be completed offline, saved unencrypted, and transmitted through insecure channels — despite containing complete background investigation data for millions of federal employees and contractors.

Recent breaches highlight ongoing vulnerabilities. Federal departments have suffered breaches where hackers accessed networks through compromised credentials. Congressional offices have been targeted by suspected foreign actors. Private contractors providing employee screening services have confirmed massive data breaches affecting millions, with unauthorized access lasting months before detection.

What agencies must do now

Government agencies must immediately enforce HTTPS encryption for all web form pages using HTTP strict transport security, deploy server-side input validation to prevent SQL injection and XSS attacks, implement anti-CSRF tokens for each form session, add bot protection, enable comprehensive access logging, and conduct regular vulnerability scanning for Open Worldwide Application Security Project Top 10 vulnerabilities.

Long-term security requires replacing legacy forms with FedRAMP-authorized platforms that provide end-to-end encryption using AES-256 for data at rest and TLS 1.3 for data in transit, multi-factor authentication for both citizens and government staff, role-based access control with granular permissions, comprehensive audit trails capturing all data access events, and automated security updates addressing emerging vulnerabilities.
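
As a sketch of the at-rest piece, assuming the third-party Python cryptography package is available: AES-256-GCM encrypts a form submission with a per-record nonce. Key management (KMS or HSM storage, rotation) is deliberately out of scope here, and the field names are illustrative.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # in practice, fetched from a KMS
    aesgcm = AESGCM(key)

    submission = b'{"ssn": "123-45-6789", "name": "A. Smith"}'
    nonce = os.urandom(12)                      # unique per record, stored alongside it
    ciphertext = aesgcm.encrypt(nonce, submission, b"form-context")

    # Later, an authorized service decrypts with the same key, nonce and context.
    assert aesgcm.decrypt(nonce, ciphertext, b"form-context") == submission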

Secure data collection

The real question is not whether government agencies can afford to modernize outdated web forms, but whether they can afford the consequences of failing to do so. Every unencrypted submission, each SQL injection vulnerability, and each missing audit trail represents citizen data at risk and regulatory violations accumulating. Federal mandates established the security standards years ago. Implementation can no longer wait.

The technology to solve these problems exists today. Modern secure form platforms offer FedRAMP authorization, end-to-end encryption, multi-factor authentication, comprehensive audit logging, and automated compliance monitoring. These platforms can replace legacy systems while improving user experience, reducing operational costs, and meeting evolving security requirements.

Success requires more than technology adoption — it demands organizational commitment. Agency leadership must prioritize web form security, allocate adequate budgets for modernization, and establish clear timelines for legacy system replacement. Security and IT teams need the resources and authority to implement proper controls.

Government web forms represent the primary interface between citizens and their government for countless critical services. When these forms are secure, they enable efficient, trustworthy digital government services. When they’re vulnerable, they undermine public confidence in government’s ability to protect sensitive information. The path forward is clear: Acknowledge the severity of legacy web form vulnerabilities, commit resources to address them systematically, and implement modern secure solutions. The cost of action is significant, but the cost of inaction — measured in breached data, compromised systems, regulatory penalties and lost public trust — is far higher.

Frank Balonis is chief information security officer and senior vice president of operations and support at Kiteworks.

The post The hidden vulnerability: Why legacy government web forms demand urgent attention first appeared on Federal News Network.

Recruiting cleared talent in 2026

Federal contractors face a critical challenge in 2026: the growing gap between what cleared professionals demand and what traditional government contracts can deliver. After a difficult 2025 marked by budget uncertainty, a government shutdown and stalled hiring, the year ahead promises continued pressure on talent acquisition.

The post-pandemic workforce has shifted expectations. Cleared professionals now expect remote flexibility, tech sector compensation and modern workplace cultures. Many government contracts still operate under rigid models constrained by bid ceilings, onsite mandates and fixed pricing.

But 2026 also presents opportunities. A stronger pipeline of junior technical talent and transitioning veterans could stabilize workforces if contractors move fast enough. Success requires faster decision-making, modernized employee value propositions, and year-round community-based recruiting beyond proposal-driven cycles.

The widening expectations gap

Cleared technical professionals now benchmark against commercial tech companies, not other contractors. They demand rapid salary progression, permanent remote work (where possible), continuous upskilling and transparent compensation structures.

Government contracting remains constrained by customer requirements, fixed labor categories, and proposal-locked pricing. Contractors who creatively address these expectations within structural constraints, through enhanced benefits, clearer promotion pathways, or robust training programs, will gain significant competitive advantages.

The tale of two talent markets

Nearly impossible to fill:

  • Cloud security architects with active clearances
  • AI/ML engineers in the cleared space
  • Senior program managers with technical expertise and clearances
  • Specialized cybersecurity roles (penetration testers, threat hunters, SOC analysts)

The cybersecurity shortage remains acute with a 4.8 million global gap. Professionals with these skills command premium compensation, often choosing private sector opportunities. Clearance requirements add friction: Candidates won’t wait through 12+ months of adjudications without guaranteed compensation.

Increasingly accessible:

  • Traditional IT support roles (help desk, desktop support, systems administration)
  • Junior technical positions where clearances can be sponsored
  • Entry level cybersecurity roles

Expanding pipelines from apprenticeship programs, technical schools and military transitions provides opportunities to “grow your own” cleared workforce through clearance sponsorship and training.

Speed as competitive advantage

Clearance processing delays have always been problematic, but contractor response time may be equally damaging in 2026. Organizations that move candidates from interest to offer in days, not weeks, will win both talent and contracts.

Fast hiring offers multiple advantages: enhanced proposal credibility with pre-identified talent, more accurate pricing, higher win rates and improved candidate experience. To achieve this speed, maintain prequalified talent pools, streamline internal approvals, and prioritize relationship building over reactive posting.

Beyond job boards: Where cleared talent lives

While ClearanceJobs.com remains a standard resource, contractors will find greater success by combining it with other targeted recruitment channels. Successful contractors will diversify their strategies through sophisticated employee referral programs, cleared talent communities, technical forums, veteran transition programs, security conferences and thought leadership initiatives.

The common thread? Relationship-based recruiting that prioritizes ongoing engagement over transactional postings. Cleared professionals trust peer recommendations and community reputation over job board advertisements.

Maintain talent relationships between contracts, invest in technical content, and participate authentically in cleared communities, not just when positions open.

Strategic imperatives for 2026

Organizations that succeed will take these actions:

Build always-on talent communities: Replace reactive hiring with year-round relationship management. Maintain engagement even without immediate openings.

Modernize the employee value proposition: Where compensation flexibility is limited, compete on career development, technical training, meaningful work and work-life balance.

Accelerate internal processes: Reduce time to offer from weeks to days. Empower hiring managers with clearer authority.

Invest in junior talent development: Build clearance sponsorship programs and structured training. Growing your own cleared workforce is more sustainable than competing for experienced professionals.

Leverage data: Track time to fill, offer acceptance rates, and source effectiveness to continuously refine your approach.

The cleared talent market won’t ease in 2026, but contractors who take a strategic, relationship-driven approach can turn challenge into a competitive advantage. The market is distinguishing between those who merely adapt and those who innovate. 2026 will reveal which category your organization occupies.

Scott Ryan is chief revenue officer for HireClix.

The post Recruiting cleared talent in 2026 first appeared on Federal News Network.

America can’t afford to hollow out its cyber defenses

In recent months, the United States has entered a dangerous phase of digital vulnerability just as adversaries accelerate their use of artificial intelligence. Anthropic recently disclosed that a nation-state-linked threat actor attempted to use its commercial AI models to enhance cyber espionage operations, one of the first publicly documented attempts to operationalize AI for real-world intelligence gathering and offensive cyber activity. The company ultimately blocked the activity, but it demonstrated how quickly hostile actors are adapting and how easily these tools can be repurposed for malicious use.

At the same time, the U.S. is grappling with a significant loss of cyber expertise across agencies, including nearly 1,000 seasoned experts from the Cybersecurity and Infrastructure Security Agency. Attrition and budget reductions over recent years have hollowed out capabilities the nation relies on for critical infrastructure protection and threat coordination. Key intelligence units that once monitored Russian and other foreign cyber operations have been disbanded. CISA is now planning a major hiring surge to rebuild its workforce, which has vacancy rates hovering around 40%, but the gap between where the agency stands and what the threat environment demands remains significant.

Combined, these developments paint a troubling picture. AI is enabling threat actors to become more aggressive, efficient and effective, yet the U.S. appears to be weakening the very cyber defenses necessary to counter them. Make no mistake: A one-third loss of our top cyber forces since the start of the current administration, combined with a proposed 17% CISA budget cut, equates to strategic self-sabotage.

The AI-powered digital arms race

Cyber policy experts warn that the U.S. is entering a digital arms race just as it’s hollowing out its defensive ranks. We’re facing the battle with fewer soldiers and less ammunition. Many are speaking out, including security experts such as Bruce Schneier, a Harvard fellow and renowned cryptographer; Heather Adkins, Google’s founding director of information security; and Gadi Evron, a cyber intelligence leader and early pioneer in botnet defense. They have all warned that AI is becoming an asymmetric weapon empowering adversaries far faster than it equips defenders. The tools that once required months of expert development can now be generated by large language models in minutes. Malware creation, vulnerability discovery and exploitation are being automated at an unprecedented scale.

Meanwhile, defenders are being asked to do more with less. CISA’s work, from protecting critical infrastructure and federal networks to supporting state and local election systems, is foundational to national security. Reducing the agency’s budget or its workforce doesn’t just create gaps; it signals to adversaries that the U.S. is willing to accept greater risk in the digital domain.

Critical infrastructure’s expanding attack surface

This risk extends far beyond government networks. Our power grids, water treatment plants, financial systems, hospitals and communications infrastructure are all connected to and dependent on the same digital backbone. And while it’s true that most critical infrastructure in the U.S. is privately owned and regulated and that the federal government and industry have spent more than a decade trying to harden these systems, those efforts have not eliminated the underlying vulnerabilities and the cascade effect compromise can have.

Many of the improvements have focused on legacy perimeter defenses, voluntary standards or incremental upgrades to aging operational technology. But the attack surface has expanded faster than regulations or investments can keep pace. Water systems, in particular, carry a disproportionate risk. Utilities operated at the local level often lack dedicated security staff, rely on remote access software and operate equipment that was never designed for an environment of persistent, AI-assisted cyber threats. According to CISA, hospitals lose their ability to provide basic patient care, sanitation and medical procedures within just two hours of losing water service. Unlike electricity, where backup generators commonly provide redundancy to ensure continuous operations, there is no equivalent resilience for water treatment or distribution.

Researchers like Joshua Corman, who leads the UnDisruptable27 initiative at the Institute for Security and Technology, have warned about the cascading consequences when cyber or physical incidents compromise critical functions. Corman said U.S. critical infrastructure was never built to withstand deliberate, persistent attacks and the nation continues to underestimate how quickly a disruption in one lifeline sector cascades into others. Water and wastewater systems, emergency medical care, food supply chains and power are tightly interdependent; losing even one can trigger rapid, compounding failures.

So while critical infrastructure is more secure in some respects, it is also more interconnected, more digitized and more exposed than ever. Hardening alone cannot offset the impact of weakened federal cyber capacity. The systems that sustain our world are online, remotely managed and increasingly targeted by adversaries who now have faster, cheaper, AI-driven tools at their disposal.

The global impact of weakened U.S. defenses

Today, nation-state actors can weaponize code at superhuman speed, but the erosion of federal cyber capacity is not merely a domestic concern. The impact is felt throughout the global fabric of the internet and its interconnected systems, which depend on American digital resilience. Water infrastructure, power grids, telecommunications, financial networks and even the integrity of democratic elections hinge on having a properly resourced, expert-led cyber defense.

Allies rely on American intelligence and coordination, and multiple federal agencies contribute to that ecosystem. The Office of the Director of National Intelligence leads the classified intelligence-sharing mission across the “Five Eyes” and other international partners. But CISA plays a critical role in global cyber defense.

CISA is the primary U.S. agency responsible for sharing unclassified, actionable threat information with foreign computer emergency readiness teams (CERTs), multinational companies, critical infrastructure operators and technology vendors who sit outside the intelligence community. Its Joint Cyber Defense Collaborative routinely coordinates with international partners to issue joint advisories, publish analytic reports and align defensive playbooks across borders. These are often the first public warnings about nation-state activity. When CISA’s capacity shrinks, these real-time channels of global coordination weaken.

That’s why the disbanding of specialized units focused on Russian operations has strained relationships and emboldened our adversaries. The loss is not only in classified analysis, but in the day-to-day operational coordination, warnings and technical guidance that CISA provides to governments and private-sector operators worldwide. In an era of growing geopolitical instability, the shadow cast by U.S. cyber policy reaches far beyond our borders, and shared defense efforts are essential. Cyber risks and threat actors will continue to evolve with the weaponization of AI, and we simply cannot afford to let any part of the ecosystem erode.

The future of U.S. cybersecurity

Although we are under tremendous pressure to reinforce our digital infrastructure, we cannot address this challenge by pointing fingers. This is not a partisan issue; it is a universal one.

Fortunately, we can still reverse course, but only if we act decisively. Every day we delay, we trade preparedness for fragility. Appealing to Washington alone won’t be enough. The private sector operates and secures most of the systems that keep the U.S. running. Corporate leaders, from utilities to finance to technology, have as much at stake as the intelligence community. They have a voice, and it’s time to use it. Everyone who values security and stability must take part in reversing this decline.

Cybersecurity and corporate leaders must stand together and make it clear that weakening the nation’s digital defenses weakens the entire global economy. That means demanding Congress restore cyber funding, publicly supporting stronger baseline security requirements for critical infrastructure, participating in joint advisories with CISA and international CERTs, and committing to shared defense initiatives through industry coalitions, such as the Cyber Threat Alliance or one of the industry-focused Information Sharing and Analysis Centers (ISACs). The prosperity we enjoy depends on peace and stability in cyberspace, and that stability depends on a united front that encompasses both public and private as well as domestic and international interests.

The U.S. once led the world in building the secure foundations of the internet. We can lead again, but only if we treat cybersecurity as an essential part of our national security.

Jaya Baloo is the co-founder, chief operating officer and chief information security officer of AISLE.

The post America can’t afford to hollow out its cyber defenses first appeared on Federal News Network.

USPTO proposes dramatic restrictions on patent challenges through inter partes review

The United States Patent and Trademark Office has published proposed regulations that would fundamentally transform the inter partes review (IPR) landscape, potentially eliminating IPR as a viable option for many patent challengers. Euphemistically referred to as “Revision to Rules of Practice before the Patent Trial and Appeal Board,” these changes would create a “one-and-done” system where patents that survive any initial validity challenge — regardless of its quality or completeness — become virtually immune from subsequent IPR proceedings. Both patent owners and companies facing patent assertion must understand these changes and adjust their litigation strategies accordingly.

The new regulatory framework

  1. Mandatory stipulations upon institution

The proposed amendments would require petitioners to stipulate that they will not pursue any invalidity challenges in other venues if the Patent Trial and Appeal Board (PTAB) institutes IPR. This goes significantly beyond current estoppel provisions, which apply only after a final written decision and are limited to grounds that were raised or reasonably could have been raised.

Practical impact: Companies facing patent assertion would be forced to make an all-or-nothing decision at the IPR filing stage. If you file an IPR petition and it is instituted, you forfeit all anticipation and obviousness defenses in district court — even those not included in your petition. This dramatically raises the stakes for IPR filings and may discourage challenges to questionable patents.

  2. Absolute bar following prior challenges

Perhaps most significantly, the proposed amendments would prohibit IPR institution against claims that have survived any prior validity challenge in district court, the International Trade Commission (ITC) or previous USPTO proceedings. Unlike the current discretionary framework, this creates a mandatory bar with no exceptions for new prior art or arguments.

Practical impact: If you are the second or subsequent defendant sued on a patent, you may have no IPR option if an earlier defendant mounted any validity challenge — even if that challenge was poorly executed, settled early or based on inferior prior art. This “first mover” problem particularly benefits non-practicing entities (NPEs) who can strategically select weak first defendants.

  3. Parallel proceedings prohibition

The proposed amendments would categorically bar IPR institution when a district court trial or ITC determination “will more likely than not” occur before the PTAB’s deadline for a final written decision. This replaces the current discretionary Fintiv analysis with a bright-line rule.

Practical impact: Patent owner plaintiffs can effectively block IPR access by filing in rocket dockets like the Western District of Texas, where trials routinely occur within 18 months. Even if circumstances change — such as trial delays or potential stays — the initial “more likely than not” determination would be binding.

  4. Limited “extraordinary circumstances” exception

The proposed amendments permit subsequent IPR challenges only under “extraordinary circumstances,” which are significantly narrowed to situations such as prior bad faith conduct or intervening changes in the law. Discovery of new prior art, inadequate prior representation or clear errors in earlier proceedings would not qualify. Frivolous or abusive petitions would be subject to sanctions and attorneys’ fees.

Practical impact: The exception is so narrow as to be virtually meaningless for most patent defendants. Even breakthrough prior art discovered after an initial challenge would not justify a second IPR attempt against the patent.

Strategic implications for your business

  1. For companies facing patent assertions: Immediate actions required
  • Coordinate with co-defendants: If you face assertions alongside other defendants, immediately establish coordination agreements regarding IPR strategy. The first filer may determine everyone’s fate.
  • Accelerate prior art searches: Conduct comprehensive prior art searches immediately upon receiving notice of potential assertions. Waiting until after litigation commences may be too late.
  • Evaluate forum options: Consider proactive declaratory judgment actions in favorable venues to avoid rocket dockets that trigger the parallel proceedings bar.
  2. For patent owners: New enforcement advantages
  • Strategic first suits: Consider initially suing smaller, less sophisticated defendants who may mount inadequate validity challenges, thereby immunizing your patents from subsequent IPR attacks.
  • Rocket docket filings: Filing in fast-moving courts can effectively eliminate IPR risk through the parallel proceedings bar.
  • Settlement leverage: The proposed rules significantly increase settlement leverage, as defendants cannot rely on subsequent IPR opportunities if initial challenges fail.
  3. For technology companies with patent portfolios: Portfolio management considerations
  • Defensive publication strategies: With reduced IPR availability, preventing competitor patents through defensive publications becomes more critical.
  • Prosecution strategy: Expect more aggressive examination challenges as IPR becomes less available for post-grant correction.
  • Freedom-to-operate (FTO) analyses: Conduct more thorough FTO analyses earlier, as post-assertion challenge options will be limited.
  4. Industry-specific impacts:

Pharmaceutical and biotechnology
The proposed amendments particularly benefit branded pharmaceutical companies by making it harder for generic manufacturers to challenge patents through IPR. Subsequent generic filers may be bound by inadequate challenges from first filers, potentially extending market exclusivity.

High technology and software
Technology companies facing assertions from NPEs will be most negatively impacted. NPEs can leverage the rules to protect weak patents by ensuring initial challenges come from resource-constrained defendants, then asserting those “validated” patents against major industry players.

Manufacturing and consumer products
Companies in competitive industries where cross-assertions are common must carefully consider whether IPR remains viable given the mandatory stipulation requirements. District court litigation may become the only option for comprehensive validity challenges.

Recommended actions

  1. Review existing litigation: Companies that may face patent assertions should immediately assess any pending patent disputes to determine whether IPR petitions should be filed before the new rules may take effect.
  2. Update IP policies: Companies that may face patent assertions should revise internal procedures for responding to patent assertions, emphasizing early coordination with counsel and comprehensive prior art searches.
  3. Budget adjustments: Anticipate higher patent defense costs as district court litigation becomes the primary venue for validity challenges.
  4. Insurance reviews: Evaluate whether current IP insurance coverage adequately addresses the increased litigation risks under the new framework.

The USPTO’s proposed rules represent the most significant restriction on IPR accessibility since the America Invents Act created the procedure in 2012. By essentially creating a “one-and-done” system for validity challenges, the rules would restore many of the inefficiencies and inequities that IPR was designed to address. While the USPTO frames these changes as promoting efficiency and quiet title, the practical effect would likely be to shield questionable patents from review while dramatically increasing litigation costs for innovative companies. Immediate action is required to address these proposed changes through the comment process and to adjust litigation strategies in anticipation of their potential implementation.

Edward “Ed” Lanquist is a shareholder in the Nashville office of Baker Donelson.

Lea Speed is a patent attorney and patent litigator in the Memphis office of Baker Donelson.

The post USPTO proposes dramatic restrictions on patent challenges through inter partes review first appeared on Federal News Network.

The AI data fabric: Your blueprint for mission-ready intelligence

Artificial intelligence is only as powerful as the data it’s built on. Today’s foundation models thrive on vast, diverse and interconnected datasets, mostly drawn from public domains. The real differentiator for organizations lies with their private, mission-specific data.

This is where a data fabric comes in. A modern data fabric stores and integrates information and serves as the connective tissue between raw enterprise data and AI systems that can reason, recommend and respond in real time.

Know your data

Before you can connect your data, you need to understand it. Knowing what data you have, where it resides, and how it flows is the foundation of every AI initiative. This understanding is built on four core pillars:

  • Discoverability: Can your teams find the data they need without a manual search?
  • Lineage: Do you know where your data comes from, and how it has been altered?
  • Context: Can you interpret what the data means within your mission or business environment?
  • Governance: Is it secure, compliant and trusted enough for decision-making?

At the center of an effective data fabric is a data catalog that takes an inventory of sources and continuously learns from them. It captures relationships, context and organizational knowledge across every corner of a digital ecosystem.

Anything can be a data source, including databases, spreadsheets, code repositories, sensor streams, ETL pipelines, documents and even collaborative discussions. When you start treating all of it as valuable data, the picture of your enterprise becomes complete.
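
To make this concrete, the sketch below shows what a single catalog entry might record, using Python. The CatalogEntry class and its field names are illustrative assumptions rather than any product’s actual schema; each field maps to one of the four pillars above.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CatalogEntry:
    # Hypothetical catalog record; one field per pillar described above.
    name: str                    # discoverability: searchable identifier
    source_system: str           # discoverability: where the data lives
    lineage: list[str] = field(default_factory=list)  # lineage: upstream sources and transforms
    description: str = ""        # context: what the data means to the mission
    steward: str = ""            # governance: accountable owner
    classification: str = "unclassified"               # governance: handling level
    last_profiled: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: registering a sensor stream that feeds a readiness rollup.
readiness = CatalogEntry(
    name="unit_readiness_daily",
    source_system="logistics_warehouse",
    lineage=["sensor_feed_raw", "etl_job_readiness"],
    description="Daily readiness rollup used in quarterly reviews",
    steward="ops-data-team",
    classification="cui",
)
print(readiness.name, readiness.lineage)

In a real fabric these records would be generated and refreshed automatically rather than written by hand, which is the continuous-learning behavior described above.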

Knowing your data is the foundation. Connecting your data is transformation.

Hidden value of knowledge

The true power of a data fabric lies in its ability to link seemingly unrelated data. When disparate systems — including operations, logistics, HR, supply chain and customer support — are connected through shared metadata and inferred relationships, new knowledge emerges.

Imagine correlating operational readiness data with HR analytics to anticipate staffing shortages or connecting structured metrics with unstructured logs to reveal previously invisible patterns. This derived knowledge is your competitive advantage.

A connected intelligence framework feeds directly into machine learning and generative AI systems, enriching them with contextually grounded, multi-source insights. The result: models that are powerful, explainable and mission aware.

Minimal requirements for an AI-ready data fabric

Building an AI-ready data fabric requires more than simple integration. It demands systems designed for adaptability, compliance and continuous intelligence.

A next-generation catalog goes far beyond simple source indexing. It must integrate lineage, governance and compliance requirements at every level, particularly in regulated or public-sector environments. This type of catalog evolves as data and processes evolve, maintaining a living view of the organization’s digital footprint. It serves as both a map and a memory of enterprise knowledge, ensuring AI models operate on trusted, current information.

Data is not enough. Domain expertise, often expressed in documents, code and daily workflows, represents a critical layer of organizational intelligence. Capturing that expertise transforms the data fabric from a technical repository into a true knowledge ecosystem.

An AI-ready data fabric should continuously integrate new data, metadata and derived knowledge. Automation and feedback loops ensure that as systems operate, they learn. Relationships between data objects become richer over time, while metadata grows more contextual. This self-updating nature allows organizations to maintain a real-time, adaptive understanding of their operations and knowledge base.

The role of generative AI in the data fabric

Generative AI plays a pivotal role. Through conversational interfaces, an AI system can interactively capture rationales, design decisions and lessons learned directly from subject matter experts. These natural exchanges become structured insights that enrich the broader data landscape.

Generative AI fundamentally changes what is possible within a data ecosystem. It enables organizations to extract and encode knowledge that has traditionally remained locked in minds or informal workflows.

Imagine a conversational agent embedded within the data fabric that interacts with data scientists, engineers or analysts as they develop new models or pipelines. By asking context-aware questions, such as why a specific dataset was chosen or what assumptions guided a model’s design, the AI captures valuable reasoning in real time.
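
A minimal sketch of that capture loop, in Python, might look like the following. The ask_expert and store_insight helpers are hypothetical stand-ins for whatever chat interface and metadata store an organization actually uses; nothing here reflects a specific product’s API.

# Sketch of a conversational knowledge-capture loop. The helpers are
# placeholders for a real chat interface and catalog back end.

CONTEXT_QUESTIONS = [
    "Why was this dataset chosen over the alternatives?",
    "What assumptions guided the model's design?",
    "What would cause you to revisit this approach?",
]

def ask_expert(question: str) -> str:
    # Placeholder: in practice this would be routed through a chat UI or agent.
    return input(f"{question}\n> ")

def store_insight(artifact_id: str, question: str, answer: str) -> None:
    # Placeholder: persist the rationale alongside the artifact's catalog entry.
    print(f"[catalog] {artifact_id} | {question} -> {answer}")

def capture_rationale(artifact_id: str) -> None:
    # Turn an informal conversation into structured, searchable metadata.
    for question in CONTEXT_QUESTIONS:
        answer = ask_expert(question)
        if answer.strip():
            store_insight(artifact_id, question, answer)

if __name__ == "__main__":
    capture_rationale("model-pipeline-007")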

This process transforms static documentation into living knowledge that grows alongside the enterprise. It creates an ecosystem where data, context and human expertise continually reinforce one another, driving deeper intelligence and stronger decision-making.

The way forward

The next generation of enterprise intelligence depends on data systems built for agility and scale. Legacy architectures were never designed for the speed of today’s generative AI landscape. To move forward, organizations must embrace data fabrics that integrate governance, automation and knowledge capture at their core.

John Mark Suhy is the chief technology officer of Greystones Group.

The post The AI data fabric: Your blueprint for mission-ready intelligence first appeared on Federal News Network.

Why AI agents won’t replace government workers anytime soon

The vendor demo looks flawless, the script even cleaner. A digital assistant breezes through forms, updates systems and drafts policy notes while leaders watch a progress bar. The pitch leans on the promised agentic AI advantage.

Then the same agents face real public-sector work and stall on basic steps. The newest empirical benchmark from researchers at the nonprofit Center for AI Safety and data annotation company Scale AI finds current AI agents completing only a tiny fraction of jobs at a professional standard. Agents struggled to deliver production-ready outcomes on practical projects, including an explorer for World Happiness data, a short 2D promo, a 3D product animation, a container-home concept, a simple Suika-style game, and an IEEE-formatted manuscript. This new study should help provide some grounding on what agents can do inside federal programs today, why they will not replace government workers soon, and how to harvest benefits without risking mission, compliance or trust.

Benchmarks, not buzzwords, tell the story

Bold marketing favors smooth narratives of autonomy. Public benchmarks favor reality. In the WebArena benchmark, an agent built on GPT-4 achieved low end-to-end task success compared with human performance on real websites that require navigation, form entry and retrieval. The OSWorld benchmark assembles hundreds of desktop tasks across common apps with file handling and multi-step workflows, and documents persistent brittleness when agents face inconsistent interfaces or long sequences. Software results echo the same pattern. The original SWE-bench evaluates real GitHub issues across live repositories and shows that models generate useful patches, but need scaffolding and review to land working changes.

Duration matters. The H-CAST report correlates agent performance with human task time and finds strong results on short, well-bounded steps and sharp drop-offs on long, multi-hour work. That split maps directly to government operations. Agents can draft a memo outline or a SQL snippet. They falter when the job spans multiple systems, requires policy nuance, or demands meticulous document hygiene.

Building a public dashboard, as in the study run by researchers at the Center for AI Safety and Scale AI, is not a single chart; it is a reliable pipeline with provenance, documentation and accessible visuals. A 2D promo is not a storyboard alone; it is consistent assets, rights-safe media, captions and export settings that pass accessibility checks. A container-home concept is not a render; it is geometry, constraints and safety considerations that survive a technical review.

Federal teams must also contend with rules that raise the bar for autonomy. The AI Risk Management Framework from the National Institute of Standards and Technology gives a shared vocabulary for mapping risks and controls. These guardrails do not block generative AI in government; they just make unsupervised autonomy a poor bet.

What this means for mission delivery, compliance and the workforce

The near-term value is clear. Treat agents as accelerators for specific tasks inside projects, not substitutes for the people who own outcomes. That approach matches field evidence. A large deployment in customer support showed double-digit gains in resolutions per hour when a generative assistant helped workers with suggested responses and knowledge retrieval, with the biggest lift for less-experienced staff. Translate that into federal work and you get faster first drafts, cleaner queries, more consistent formatting, and quicker starts on visuals, all checked by employees who understand policy, context and stakeholders.

Compliance reinforces the same division of labor. To run in production, systems must pass FedRAMP authorization, recordkeeping requirements and privacy controls. Content must meet Section 508 standards for accessibility. Security teams will lean on the joint secure AI development guidelines from the Cybersecurity and Infrastructure Security Agency and international partners to push model and system builders toward stronger practices. Auditors will use the Government Accountability Office’s accountability framework to probe governance, data quality and human oversight. Every one of those checkpoints increases the value of staff who can judge quality, interpret rules and stitch outputs into agency processes.

The fear that the large majority of federal work will be automated soon does not match the evidence. Agents still miss long sequences, stall at brittle interfaces, and struggle with multi-file deliverables. They produce assets that look plausible but fail validation or policy review. They need context from the people who understand stakeholders, statutes, and mission tradeoffs. That leaves plenty of room for productivity gains without mass replacement. It also shifts work toward specification, review and integration, roles that exist across headquarters and field offices.

A practical playbook federal leaders can use now

Plan for augmentation, not substitution. When I help government agencies adopt AI tools, we start by mapping projects into linked steps and flag the ones that benefit from an assistive agent. Drafting a response to a routine inquiry, summarizing a meeting transcript, extracting fields from a form, generating a chart scaffold, and proposing test cases are all candidates. Require a human owner for every deliverable, and publish acceptance criteria that catch the common failure modes seen in the benchmarks, including missing assets, inconsistent naming, broken links and unreadable exports. Maintain an audit trail that shows prompts, sources and edits so the work is FOIA-ready.
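
One lightweight way to keep that trail FOIA-ready is to log every agent-assisted deliverable as a structured record. The Python sketch below is illustrative only; the field names are assumptions, not a mandated schema.

import json
from datetime import datetime, timezone

def log_agent_assist(deliverable: str, prompt: str, sources: list[str],
                     human_owner: str, edits_summary: str,
                     path: str = "agent_audit_log.jsonl") -> None:
    # Append one JSON line per agent-assisted deliverable so prompts,
    # sources and human edits can be produced on request.
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "deliverable": deliverable,
        "prompt": prompt,
        "sources": sources,
        "human_owner": human_owner,
        "edits_summary": edits_summary,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example: recording a drafted response to a routine inquiry.
log_agent_assist(
    deliverable="routine-inquiry-response-2025-113",
    prompt="Draft a response to a records-status inquiry using template B",
    sources=["case_db record 4481", "policy manual section 3.2"],
    human_owner="j.smith",
    edits_summary="Corrected statute citation; simplified second paragraph",
)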

Ground the program in federal policy. Adopt the AI Risk Management Framework for risk mapping, and scope pilots to systems that can inherit or achieve FedRAMP authorization. Treat models and agents as components, not systems of record. Keep sensitive data inside authorized boundaries. Validate accessibility against Section 508 standards before anything goes public. For procurement, require vendors to demonstrate performance on public benchmarks like WebArena, OSWorld or SWE-bench using your agency’s constraints rather than glossy demos.

Staff and labor planning should reflect the new shape of work. Expect fewer hours on rote drafting and more time on specification, review and integration. Upskill employees to write good task definitions, evaluate model outputs, and enforce standards. Track acceptance rates, rework and defects by category so leaders can decide where to expand scope and where to hold the line. Publish internal guidance that explains when to use agents, how to attribute sources, and where human approval is mandatory. Share outcomes with the AI.gov community and look for common building blocks across agencies.

A brief scenario shows how this plays out without wishful thinking. A program office stands up a pilot for public-facing dashboards using open data. An agent produces first-pass code to ingest and visualize the dataset, similar to the World Happiness example. A data specialist verifies source URLs, adds documentation, and applies the agency’s color and accessibility standards. A policy analyst reviews labels and context language for accuracy and plain English. The team stores prompts, code and decisions with metadata for audit. In the same sprint, a communications specialist uses an agent to draft a 30-second script for a social clip and a designer converts it into a simple 2D animation. The outputs move faster, quality holds steady, and the people who understand mission and policy remain responsible for the results.
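
For the dashboard step in that scenario, the agent’s first-pass code might be little more than the sketch below, which assumes a CSV with country and score columns; the URL and column names are hypothetical, and the specialist’s review and provenance notes are what turn it into a production asset.

# Minimal sketch of the agent's first-pass ingest-and-chart step.
# The source URL and column names are illustrative assumptions.
import pandas as pd
import matplotlib.pyplot as plt

SOURCE_URL = "https://example.gov/open-data/happiness.csv"

def build_chart(source: str, output: str = "happiness_top10.png") -> None:
    df = pd.read_csv(source)
    top10 = df.nlargest(10, "score").sort_values("score")
    ax = top10.plot.barh(x="country", y="score", legend=False)
    ax.set_xlabel("Happiness score")
    ax.set_title("Top 10 countries by happiness score")
    plt.tight_layout()
    plt.savefig(output, dpi=150)
    # Record provenance alongside the asset for documentation and audit.
    with open(output + ".provenance.txt", "w", encoding="utf-8") as f:
        f.write(f"source: {source}\nrows: {len(df)}\n")

if __name__ == "__main__":
    build_chart(SOURCE_URL)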

AI agents deliver lift on specific tasks and stumble on long, cross-tool projects. Public benchmarks on the web, desktop and code back that statement with reproducible evidence. Federal policy adds governance that rewards augmentation over autonomy. The smart move for agencies is to put agents to work inside projects while employees stay accountable for outcomes, compliance and trust. That plan banks real gains today and sets agencies up for more automation tomorrow, without betting programs and reputations on a hype cycle.

Dr. Gleb Tsipursky is CEO of the future-of-work consultancy Disaster Avoidance Experts.

The post Why AI agents won’t replace government workers anytime soon first appeared on Federal News Network.

Agentic experience’s promise to transform federal service delivery

How many times have you logged into a system to complete a task that should have taken minutes but ended up taking hours? Whether renewing a credential, accessing benefits or updating a record, these experiences often feel tedious and time-consuming. As you search for the right page, click through drop-down menus, input data and submit forms, you are essentially doing the system’s work for it.

Technology has digitized countless services, but it hasn’t yet humanized them. The next leap in human-computer interaction isn’t about building better buttons; it’s about building agents that act on our behalf. Agentic experience (AX) represents that next evolution: a model where humans direct intelligent systems to deliver desired outcomes.

In the context of federal government, where complex workflows and interactions with multiple systems are common, the shift to AX will be transformational. It will free up time for employees and contractors to focus on mission-critical work while reducing errors, accelerating service delivery, and improving public trust.

The next frontier: Federal leadership in AX

User experience (UX) is not going away; it is rapidly evolving to include agentic experience.  For decades, UX has focused on how people interact with digital systems to accomplish tasks: A human navigates interfaces, completes steps and makes decisions. The system is a passive tool, and the user does all the work.

AX flips that model: Intelligent agents act on the user’s behalf, executing tasks based on intent. Humans guide outcomes rather than performing every step.

Instead of logging into multiple systems to file a report, an AI agent could authenticate and retrieve necessary data, complete the form, and deliver the final output for review. The person remains in control but no longer must perform every procedural step.
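
In software terms, that shift looks roughly like the sketch below: the person states an outcome, a planner decomposes it into steps across systems, and the finished draft comes back for human review. The step names and the toy planner are assumptions for illustration, not a reference architecture.

# Minimal sketch of intent-driven orchestration: the human states an outcome,
# the agent plans and executes the steps, and the result returns for review.
from dataclasses import dataclass

@dataclass
class Step:
    system: str
    action: str
    status: str = "pending"

def plan(intent: str) -> list[Step]:
    # Toy planner: in practice an LLM or workflow engine maps intent to steps.
    if "file a report" in intent:
        return [
            Step("identity_provider", "authenticate on the user's behalf"),
            Step("records_system", "retrieve prior submissions"),
            Step("forms_service", "populate the quarterly report form"),
            Step("review_queue", "deliver the draft for human approval"),
        ]
    return []

def execute(steps: list[Step]) -> list[Step]:
    for step in steps:
        # Each system connector would be called here; the human still approves the output.
        step.status = "completed"
    return steps

if __name__ == "__main__":
    for s in execute(plan("file a report for the third quarter")):
        print(f"{s.system}: {s.action} [{s.status}]")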

The federal government is uniquely positioned to lead in this space. Agencies already manage vast, mission-critical systems that serve millions of citizens. By applying human-centered design principles to agentic technologies, they can create experiences that are equitable and effective.

Federal agencies have already begun exploring frameworks for responsible AI with an emphasis on transparency, reliability and ethical use. These same principles must anchor the development of agentic systems.

A helpful parallel comes from the self-driving car industry. Companies like Waymo have learned that passengers trust autonomy more when they can see what the car “sees.” Similarly, agentic systems should make their reasoning visible, showing users what the agent perceives, decides and does on their behalf.

What AX could look like in federal missions

The power of AX lies in its ability to streamline complex, multi-system workflows that currently slow down mission delivery. Here are a few hypothetical examples of how AX could enhance efficiency at federal agencies.

At the Centers for Medicare & Medicaid Services, a program analyst needs to compile a quarterly compliance report that pulls data from several systems. Today, that process might require manual queries, spreadsheet consolidation and error checks. With AX, an intelligent agent could aggregate and validate data automatically, flag anomalies, and generate a polished report, saving days of manual labor and ensuring consistency across datasets.

For veterans seeking benefits, requests at the Department of Veterans Affairs typically involve multiple forms and offices. AX could change that. A veteran could state an intent, such as “I’d like to update my address and apply for housing assistance,” and the system would handle the rest: confirming eligibility, updating records across systems, and alerting the user once completed. The result is faster service and a more human experience that builds trust between veterans and their government.

Clinicians and administrators spend significant time navigating electronic health record systems to manage patient data. An AX-enabled system within the Military Health System could interpret a clinician’s request to “schedule post-operative appointment in two weeks, notify the patient, and update the care plan,” execute those steps across systems, and confirm completion. This allows healthcare professionals to spend more time on patient care rather than administrative tasks.

Principles for designing trustworthy AX

To foster confidence and accountability in agentic systems, here are six guiding principles that echo human-centered design’s emphasis on empathy, understanding and outcomes. They extend familiar human experience (HX) practices into a new era where AI partners act alongside humans.

  • Transparent: Users should understand what the agent did, along with the why and how. Decision visibility builds confidence.
  • Controllable: Humans remain in charge. Users can intervene, adjust autonomy levels, or revoke actions.
  • Contextual: Agents must understand user goals and operational context to act appropriately.
  • Reliable: Predictability, safety and consistency are essential to maintaining trust.
  • Adaptive: Agents should learn and improve within clear, testable boundaries.
  • Ethical: Every action and decision must align with human, organizational and societal values.
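
As an illustration of how the first two principles might be encoded in software, the sketch below records every agent action with its visible rationale and an explicit approval flag. The AgentAction class is a hypothetical schema, not a standard.

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AgentAction:
    # Illustrative record supporting transparency (what and why) and control (approval).
    what: str                          # what the agent did
    why: str                           # the reasoning shown to the user
    inputs_seen: list[str] = field(default_factory=list)  # what the agent perceived
    requires_approval: bool = True     # controllable: a human can approve or revoke
    approved_by: Optional[str] = None
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def approve(self, user: str) -> None:
        self.approved_by = user

# Example: an address update performed on a user's stated intent.
action = AgentAction(
    what="Updated mailing address in the benefits system",
    why="User intent: 'update my address and apply for housing assistance'",
    inputs_seen=["profile record", "address change request"],
)
action.approve("self_service_user")
print(action.what, "approved by", action.approved_by)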

Preparing for the next design evolution

For federal agencies, adopting AX requires designing ecosystems where AI agents can navigate systems safely and effectively. That means modernizing data infrastructure, enforcing interoperability standards, and implementing strong governance to monitor and audit agentic behavior.

As agentic systems emerge, the roles of designers, product strategists and technologists will evolve. Designers will become orchestrators of ecosystems, ensuring that human oversight and system transparency remain central. To measure AX success, we must prioritize trust, reliability and ethics — for example, assessing how ethically an agent interacts with humans and how accurately it presents its work to users.

Designers will also need new tools, such as a formalized AX blueprint, to map user intent, system interactions and agent pathways. This blueprint will help agencies visualize how human intent translates into autonomous action, ensuring accountability and ethical safeguards.

Building the bridge between AI and human intent

The transition to AX isn’t about replacing workers with automation; it’s about amplifying human capability. Intelligent AI agents can handle repetitive, transactional work, freeing the federal workforce to focus on strategic, mission-driven decisions.

To achieve this responsibly, agencies should start small: piloting agentic workflows in low-risk environments, establishing transparency protocols, and engaging users early through testing and iteration. By validating early and often — an approach rooted in human-centered design — agencies can build systems that adapt, learn and earn trust over time.

AX represents a paradigm shift in how government interacts with technology. It reframes the relationship from “user and tool” to “human and collaborator.” When designed with transparency, control and ethics at the core, AX can transform how federal agencies deliver on their missions — reducing friction, increasing efficiency and restoring trust in public systems.

As we move into the age of agentic systems, one truth remains constant: Great solutions start with great experiences for the people who use them. Agentic AI will inevitably change how we work; government and industry must design these systems to serve people, purpose and the public good.

Chris Capuano is the human experience (HX) practice lead at Tria Federal. 

The post Agentic experience’s promise to transform federal service delivery first appeared on Federal News Network.
