US Crypto Market Structure Bill Further Delayed Until Late February or March – Report

22 January 2026 at 00:10

The US Senate Banking Committee has again postponed the work on the long-awaited landmark crypto market structure bill that could create a regulatory framework for digital assets.

Unnamed sources told Bloomberg that the crypto market structure legislation may be delayed by several weeks. The panel is likely to consider it in late February or March, they added.

Instead of focusing on the digital asset bill, the committee will pivot to housing legislation, following President Donald Trump’s recent push for affordability.

President Trump wrote that he is taking “immediate steps” on the housing bill, which remains a priority and part of the “American Dream.”

Crypto Community Isn’t Happy With It

The committee’s decision to put the crypto bill on the back burner has left the community in uncertainty, even as backers push for urgent passage of the legislation.

Patrick Witt, White House Executive Director of the President’s Crypto Council, called for immediate implementation of the bill. He said that it is unrealistic to expect a multi-trillion-dollar industry to operate without a comprehensive regulatory framework.

Work on the crypto bill – called the CLARITY Act – stalled ahead of its planned markup after Coinbase CEO Brian Armstrong publicly withdrew support for the draft bill. Armstrong flagged several issues with the draft, including a de facto ban on tokenized equities.

However, the Bloomberg report noted that the Banking panel’s delay might not affect the Senate Agriculture Committee’s efforts on crypto.

The Agriculture Committee released its own version of the market structure bill, which industry insiders fear might be a partisan measure lacking Democratic support.

“While differences remain on fundamental policy issues, this bill builds on our bipartisan discussion draft while incorporating input from stakeholders and represents months of work,” Committee Chairman John Boozman said. Boozman has postponed the markup of the legislation to late January.

The Agriculture Committee’s crypto bill will need support from both Democrats and its Banking Committee counterpart before it can advance.

“I Hope to Sign Very Soon”: Donald Trump

President Trump confirmed that the crypto market structure bill will be signed “very soon.”

Speaking at the World Economic Forum at Davos 2026, he said that his administration is working to ensure that America remains the crypto capital of the world.

DAVOS📍2026: 🇺🇸 President Trump says he hopes to sign the crypto market structure legislation soon, “unlocking new pathways for Americans to reach financial freedom,” including #Bitcoin. pic.twitter.com/l1ZkTGX7xl

— Bitcoin.com News (@BitcoinNews) January 21, 2026

“Last year, I signed a landmark GENIUS Act into law, now Congress is working very hard on crypto market structure legislation… Bitcoin, all of them,” he said at Davos.

“I hope to sign very soon, unlocking new pathways for Americans to reach financial freedom.”


Hong Kong Professionals Association Urges Regulators To Ease Crypto Reporting Rules

20 January 2026 at 02:00

A Hong Kong industry group has urged the city’s regulators to ease aspects of the Organisation for Economic Co-operation and Development’s (OECD) crypto reporting rules ahead of their implementation.

Association Pushes To Soften CARF Requirements

On Monday, the Hong Kong Securities & Futures Professionals Association (HKSFPA) released a response to the implementation of the OECD’s Crypto Asset Reporting Framework (CARF) and the related amendments made to Hong Kong’s Common Reporting Standard (CRS).

In its official response, the association shared concerns about certain elements of the CARF and CRS amendments, warning that they could create operational and liability risks for market participants.

Notably, the HKSFPA affirmed that it mostly supports the proposals, but urged regulators to ease the record-keeping requirements for dissolved entities. “We generally agree with the six-year retention period to align with existing inland revenue and CRS standards,” they explained, “but we have concerns regarding the obligations placed on individuals post-dissolution.”

The industry group argued that holding directors or principal officers personally liable for record-keeping after dissolution poses significant practical challenges, noting that former officers of dissolved companies may lack the resources, infrastructure, and legal standing to maintain sensitive personal data of former clients.

As a result, they suggested the government “allow for the appointment of a designated third-party custodian (such as a liquidator or a licensed corporate service provider) to fulfill this obligation, rather than placing indefinite personal liability and logistical burden on former individual officers.”

Moreover, the association also cautioned against the proposed uncapped per-account penalties for minor technical errors, asserting that these could lead to “disproportionately astronomical fines for systemic software errors affecting thousands of accounts where there was no intent to defraud.”

To solve this, they proposed a “reasonable cap” on total penalties for unintentional administrative errors or first-time offenses to ensure that the per-account calculation “is reserved for cases of willful negligence or intentional evasion.”

Additionally, the group suggested a “lite” registration or a simplified annual declaration process for Reporting Crypto-Asset Service Providers (RCASPs) that anticipate filing Nil Returns, to reduce administrative costs while still satisfying the Inland Revenue Department’s oversight requirements.

Hong Kong’s Crypto Hub Efforts

Notably, Hong Kong is among the 76 markets committed to implementing the upcoming crypto reporting framework, which is the OECD’s new global standard for exchanging tax information on crypto assets.

The CARF is designed to prevent tax evasion by bringing crypto users across borders under global tax transparency rules, similar to the OECD’s existing CRS for traditional finance. Hong Kong will be among the 27 jurisdictions that will begin their first cross-border exchanges of crypto reporting data in 2028.

Over the past few years, Hong Kong’s financial authorities have been actively working to develop a comprehensive framework that supports the expansion of the digital assets industry, as part of the city’s strategy to become a leading global crypto hub.

As reported by Bitcoinist, the city is exploring rules to allow insurance companies to invest in cryptocurrencies and the infrastructure sector. The Hong Kong Insurance Authority recently proposed a framework that could channel insurance capital into cryptocurrencies and stablecoins.

Moreover, the Hong Kong Monetary Authority (HKMA) is expected to grant the first batch of stablecoin issuer licenses in the first few months of the year. The HKMA enacted the Stablecoins Ordinance in August, which directs any individual or entity seeking to issue a stablecoin in Hong Kong, or any Hong Kong Dollar-pegged token, to obtain a license from the regulator.

Multiple companies have applied for the license, with over 30 applications filed in 2025, including logistics technology firm Reitar Logtech and the overseas arm of Chinese mainland financial technology giant Ant Group.


How the OWASP Application Security Verification Standard Helps Improve Software Security

15 January 2026 at 00:52

A short time ago, we announced our integration of OWASP ASVS into our cyber risk management platform. At a high level, this allows organizations to run more structured, repeatable security assessments for web applications and cloud-based services, while also giving security and procurement teams a consistent way to evaluate internally developed and third-party software. This […]


Senate bill will require DoD to review cyber workforce gaps

A new bill will require the Pentagon to assess whether its current efforts to recruit, train and retain cyber talent are working — and to produce a new department-wide plan aimed at addressing persistent cyber workforce gaps.

The legislation, titled the Department of Defense Comprehensive Cyber Workforce Strategy Act of 2025, tasks the Pentagon with developing a cyber workforce strategy and delivering a detailed report to Congress by Jan. 31, 2027. 

Sens. Gary Peters (D-Mich.) and Mike Rounds (R-S.D.), the bill’s sponsors, want the Pentagon to assess progress made and remaining gaps in implementing the DoD’s 2023–2027 Cyber Workforce Strategy, and identify which elements of the current strategy should be continued or dropped. 

The lawmakers are also requesting detailed workforce data, including the size of the cyber workforce, vacancy rates, specific work roles and other data related to personnel system metrics.

In addition, the legislation calls for a detailed analysis of the Defense Cyber Workforce Framework itself, including its goals, implementation efforts, the milestones used to track progress and the performance metrics used to determine whether the cyber workforce strategy is actually effective. The Defense Department issued the framework in 2023 to establish an “authoritative lexicon based on the work an individual is performing, not their position titles, occupational series, or designator.” The goal of the framework is to give the Pentagon a clearer picture of its cyber and IT workforce, which has been difficult since cyber-related work often falls under traditional military job titles that do not clearly reflect those responsibilities.

The Pentagon would also be required to identify “any issues, problems or roadblocks” that have slowed implementation of the framework — and outline steps taken to overcome those barriers.

The legislation encourages the Defense Department to explore alternative personnel models, including cyber civilian reserve or auxiliary forces, and to leverage talent management authorities used by other federal agencies. The Pentagon would also be required to examine the use of commercial tools for tracking workforce qualifications and certifications, and for identifying talent and skills in existing personnel management systems.

The bill further calls for partnerships with universities and academic centers of excellence to improve workforce development and talent acquisition.

The Pentagon would be required to provide Congress with a timeline and estimated costs for implementing the new cyber workforce strategy.

The bill comes amid personnel reductions across the Defense Department over the past year, including at key cyber organizations such as U.S. Cyber Command and the Defense Information Systems Agency. The Pentagon faces a shortage of approximately 25,000 cyber professionals. 

In May, the Defense Information Systems Agency, for instance, said it expected to lose nearly 10% of its civilian workforce due to the deferred resignation program and early retirements. The organization said it was experimenting with automation and artificial intelligence to offset the impact of workforce reductions.

Meanwhile, Cyber Command lost 5 to 8 percent of its personnel amid the department’s efforts to shrink its civilian workforce.

The Pentagon has lost approximately 60,000 civilian employees since President Donald Trump took office.

If you would like to contact this reporter about recent changes in the federal government, please email anastasia.obis@federalnewsnetwork.com or reach out on Signal at (301) 830-2747


Why AI agents won’t replace government workers anytime soon

30 December 2025 at 14:59

The vendor demo looks flawless, the script even cleaner. A digital assistant breezes through forms, updates systems and drafts policy notes while leaders watch a progress bar. The pitch leans on the promised agentic AI advantage.

Then the same agents face real public-sector work and stall on basic steps. The newest empirical benchmark from researchers at the nonprofit Center for AI Safety and data annotation company Scale AI finds current AI agents completing only a tiny fraction of jobs at a professional standard. Agents struggled to deliver production-ready outcomes on practical projects, including an explorer for World Happiness data, a short 2D promo, a 3D product animation, a container-home concept, a simple Suika-style game, and an IEEE-formatted manuscript. This new study should help provide some grounding on what agents can do inside federal programs today, why they will not replace government workers soon, and how to harvest benefits without risking mission, compliance or trust.

Benchmarks, not buzzwords, tell the story

Bold marketing favors smooth narratives of autonomy. Public benchmarks favor reality. In the WebArena benchmark, an agent built on GPT-4 achieved low end-to-end task success compared with human performance on real websites that require navigation, form entry and retrieval. The OSWorld benchmark assembles hundreds of desktop tasks across common apps with file handling and multi-step workflows, and documents persistent brittleness when agents face inconsistent interfaces or long sequences. Software results echo the same pattern. The original SWE-bench evaluates real GitHub issues across live repositories and shows that models generate useful patches, but need scaffolding and review to land working changes.

Duration matters. The H-CAST report correlates agent performance with human task time and finds strong results on short, well-bounded steps and sharp drop-offs on long, multi-hour work. That split maps directly to government operations. Agents can draft a memo outline or a SQL snippet. They falter when the job spans multiple systems, requires policy nuance, or demands meticulous document hygiene.

Building a public dashboard, as in the study run by researchers at the Center for AI Safety and Scale AI, is not a single chart; it is a reliable pipeline with provenance, documentation and accessible visuals. A 2D promo is not a storyboard alone; it is consistent assets, rights-safe media, captions and export settings that pass accessibility checks. A container-home concept is not a render; it is geometry, constraints and safety considerations that survive a technical review.

Federal teams must also contend with rules that raise the bar for autonomy. The AI Risk Management Framework from the National Institute of Standards and Technology gives a shared vocabulary for mapping risks and controls. These guardrails do not block generative AI in government; they just make unsupervised autonomy a poor bet.

What this means for mission delivery, compliance and the workforce

The near-term value is clear. Treat agents as accelerators for specific tasks inside projects, not substitutes for the people who own outcomes. That approach matches field evidence. A large deployment in customer support showed double-digit gains in resolutions per hour when a generative assistant helped workers with suggested responses and knowledge retrieval, with the biggest lift for less-experienced staff. Translate that into federal work and you get faster first drafts, cleaner queries, more consistent formatting, and quicker starts on visuals, all checked by employees who understand policy, context and stakeholders.

Compliance reinforces the same division of labor. To run in production, systems must pass FedRAMP authorization, recordkeeping requirements and privacy controls. Content must meet Section 508 standards for accessibility. Security teams will lean on the joint secure AI development guidelines from the Cybersecurity and Infrastructure Security Agency and international partners to push model and system builders toward stronger practices. Auditors will use the Government Accountability Office’s accountability framework to probe governance, data quality and human oversight. Every one of those checkpoints increases the value of staff who can judge quality, interpret rules and stitch outputs into agency processes.

The fear that the large majority of federal work will be automated soon does not match the evidence. Agents still miss long sequences, stall at brittle interfaces, and struggle with multi-file deliverables. They produce assets that look plausible but fail validation or policy review. They need context from the people who understand stakeholders, statutes, and mission tradeoffs. That leaves plenty of room for productivity gains without mass replacement. It also shifts work toward specification, review and integration, roles that exist across headquarters and field offices.

A practical playbook federal leaders can use now

Plan for augmentation, not substitution. When I help government agencies adopt AI tools, we start by mapping projects into linked steps and flag the ones that benefit from an assistive agent. Drafting a response to a routine inquiry, summarizing a meeting transcript, extracting fields from a form, generating a chart scaffold, and proposing test cases are all candidates. Require a human owner for every deliverable, and publish acceptance criteria that catch the common failure modes seen in the benchmarks, including missing assets, inconsistent naming, broken links and unreadable exports. Maintain an audit trail that shows prompts, sources and edits so the work is FOIA-ready.

Ground the program in federal policy. Adopt the AI Risk Management Framework for risk mapping, and scope pilots to systems that can inherit or achieve FedRAMP authorization. Treat models and agents as components, not systems of record. Keep sensitive data inside authorized boundaries. Validate accessibility against Section 508 standards before anything goes public. For procurement, require vendors to demonstrate performance on public benchmarks like WebArena, OSWorld or SWE-bench using your agency’s constraints rather than glossy demos.

Staff and labor planning should reflect the new shape of work. Expect fewer hours on rote drafting and more time on specification, review and integration. Upskill employees to write good task definitions, evaluate model outputs, and enforce standards. Track acceptance rates, rework and defects by category so leaders can decide where to expand scope and where to hold the line. Publish internal guidance that explains when to use agents, how to attribute sources, and where human approval is mandatory. Share outcomes with the AI.gov community and look for common building blocks across agencies.

A brief scenario shows how this plays out without wishful thinking. A program office stands up a pilot for public-facing dashboards using open data. An agent produces first-pass code to ingest and visualize the dataset, similar to the World Happiness example. A data specialist verifies source URLs, adds documentation, and applies the agency’s color and accessibility standards. A policy analyst reviews labels and context language for accuracy and plain English. The team stores prompts, code and decisions with metadata for audit. In the same sprint, a communications specialist uses an agent to draft a 30-second script for a social clip and a designer converts it into a simple 2D animation. The outputs move faster, quality holds steady, and the people who understand mission and policy remain responsible for the results.

AI agents deliver lift on specific tasks and stumble on long, cross-tool projects. Public benchmarks on the web, desktop and code back that statement with reproducible evidence. Federal policy adds governance that rewards augmentation over autonomy. The smart move for agencies is to put agents to work inside projects while employees stay accountable for outcomes, compliance and trust. That plan banks real gains today and sets agencies up for more automation tomorrow, without betting programs and reputations on a hype cycle.

Dr. Gleb Tsipursky is CEO of the future-of-work consultancy Disaster Avoidance Experts.


Outgoing NIST cyber workforce director talks job roles, skills-based hiring, and AI

During his decade of service at the National Institute of Standards and Technology, Rodney Petersen has had a front-row seat to the evolving state of the cyber workforce across government, industry and academia.

In his role as director of education and workforce at NIST’s Applied Cybersecurity Division, Petersen led efforts to standardize cyber workforce job descriptions and better understand skills gaps that are now a recurring theme in cyber policy discussions.

He served as the second director of the National Initiative for Cybersecurity Education, now known simply by its acronym, “NICE.” NIST’s “NICE Framework” is now an internationally accepted taxonomy for describing professional cyber roles, as well as the knowledge and skills needed to work in the fast-evolving field.

Those efforts have been foundational as the national cyber workforce evolved into a pressing issue at the highest levels of government. 

“One of the biggest changes in my 11 years here has just been the proliferation and the growth and expansion of education and workforce efforts,” Petersen said. “And so that’s mostly a good thing, because it shows that we’re prioritizing and putting investments in place to both increase the supply and also find the demand. But at the same time, it makes NICE’s mission all the more important to make sure we’re creating a coordinated approach across the U.S.”

Petersen is set to retire from his post at the end of the year. He recently sat down with Federal News Network to discuss his career at NIST, the evolution of cyber workforce initiatives over the last 10 years, and the future of the cybersecurity career field amid the rise of artificial intelligence.

(This interview transcript has been lightly edited for length and clarity).

Justin Doubleday What led you to where you are, to NIST and to the NICE program, in the first place?

Rodney Petersen Since NICE works so much on cybersecurity careers, you have to remind people that it’s not always linear, or maybe you don’t end up where you expected to be, and that’s certainly been true of me.

In undergraduate and through law school, I certainly expected to be in a legal career. I got quickly introduced to higher education and education policy. So my first job out of law school was actually at Michigan State University and then subsequently University of Maryland. So that was maybe my first pivot to move into academia, but continuing to use the law and policy expertise. And then back in the mid ‘90s, there was something called the world wide web and the internet that started hitting college campuses, and I began to combine my legal policy expertise with the growing field of information technology and work for the first CIO and vice president of information technology at the University of Maryland.

And that eventually led me to cybersecurity, where, once again, it was an emerging field and topic. Not a lot of history, certainly within colleges and universities, of having personnel doing that work. The Association of Colleges and Universities that focused on it was EDUCAUSE, and they brought me in to establish their first program in cybersecurity and eventually the Higher Education Information Security Council. Then maybe my final pivot from there was to NIST, which was a position in the federal government, but not just focusing on cybersecurity from an operational or an IT perspective but from an education and workforce perspective. So again, I appreciated the opportunity to pivot and continue to work on another dimension of cybersecurity, which was: Now, how do we create the next generation of cybersecurity workers that the nation needs?

Justin Doubleday As you reflect on that last decade, what were some of the biggest challenges or successes, just things that immediately pop up into your mind as, ‘Wow, it’s 2025. I can’t believe we worked through that just five or 10 years ago?’

Rodney Petersen I should say that what really attracted me to the government was NIST, the National Institute of Standards and Technology, not only because it’s a standards organization, but because it’s widely respected among industry and, in my case, academia for providing common standards, guidelines and best practices for cybersecurity. I really didn’t know a lot about the NICE program, certainly not the NICE framework, which I’m sure we’ll talk more about in a moment, but that provided a similar kind of common taxonomy and lexicon.

Now, when I say I didn’t know much about the NICE framework, it’s a little misleading, because I was involved in some of the early days when DHS was trying to create a common body of knowledge for cybersecurity. It was a combination of that work and the work I was doing with EDUCAUSE across higher education – you know, 4,000-plus colleges and universities in the United States. We were trying to find some common ground and do things that could lead to shared services or shared approaches and the like. NIST was a great place to bring that all together.

The NICE framework specifically evolved over the years, starting from the common body of knowledge and the CIO Council recognizing the need, from an employer’s perspective, for some commonality across the cybersecurity workforce. NIST began working with the Department of Defense and the Department of Homeland Security, culminating in the 2017 NIST special publication that formalized the NICE framework for the first time. And then fast forwarding to today, we work increasingly with private sector employers as well as academia to really create a common vision, a common strategy and a mission that really teaches us to integrate approaches across the various ecosystems.

Justin Doubleday How challenging has that been in terms of getting to this widespread adoption of the NICE framework? I’m sure you measure that in different ways at NIST. How far have we come in terms of that standardization, and how far do we still have to go?

Rodney Petersen If you’re an organization or a sector that’s starting from ground zero, and you discover the NICE framework or the NIST cybersecurity framework or any other similar guidance document, you’re in a perfect situation to adopt it wholesale, because you haven’t started anything else and don’t have to retrofit something else. And there are certainly examples, in fact, internationally, where other countries start to get into the cybersecurity workforce space, and they discover the NICE framework. It really gives them a starting place, a jump start to building their own unique framework that meets their needs.

Where it’s more challenging is where there’s existing work and efforts that you either have to retrofit or try to modify or adjust. An example of that is that we work closely with the NSA, CISA and the National Centers of Academic Excellence in Cybersecurity. They provide designations to colleges and universities that meet their guidelines for what a cybersecurity education program should look like, and it’s based upon what they call knowledge units. And those knowledge units, which actually have some preceding standards and organizations that they were building upon, weren’t necessarily built on the NICE framework.

We use the word ‘aligned’ to make sure that we’re aligned, that they can learn from what we’re doing and apply it, and we can learn from what they’re doing and apply it as well. So I think the biggest challenge is to take those existing organizations or initiatives that already are making great progress and have a lot of momentum, and making sure they’re in step with what we’re doing and vice versa.

Justin Doubleday Part of your work at the NICE program has been launching the CyberSeek database as well, which I think is probably one of the most publicly visible and publicly cited databases that the NIST cybersecurity program puts out there. It publishes data and statistics on cybersecurity job openings across the public and private sectors and other cyber workforce stats. Back when you launched it in 2016, what was the initial goal, and how do you think it’s helped to define some of the cyber workforce challenges that the country has faced over the last decade?

Rodney Petersen At the time, there was a lot of speculation and a lot of survey data about what the cybersecurity workforce needed to be. If you asked any chief information security officer, how many workers do they need? They may say 10. When you ask the same question of, how many can you afford and how many do you plan to hire? The answer might be one. And so thankfully in 2014 when I came in, the Cybersecurity Enhancement Act that Congress passed asked us to forecast what the workforce needs were, starting with the federal government and then looking also at the private sector.

So CyberSeek really came on the scene as an analytics tool, of course, in partnership with CompTIA and now Lightcast, to look at the actual jobs that are posted, to begin to quantify that, and then to do it in the context of the NICE framework. We’re looking more specifically at jobs to align to the NICE framework categories and work roles, and doing it not only nationally, but by state and major metro area. And so whether you’re a member of Congress, or you may be at a college or university, or you may be a local workforce board, and you really want to see what the demand is in your area, the CyberSeek tool not only gives you the number of open jobs in cybersecurity, but lets you dissect that number to look at the types of jobs, what requirements or qualifications are necessary to compete for those jobs, and what the compensation is for those jobs. I think bringing that all together really allows us to better forecast what the cybersecurity workforce needs are, both now and in the future.

Justin Doubleday One of the major points in this conversation around cyber workforce was the 2023 national cyber workforce and education strategy. As you reflect on this cyber workforce and education issue becoming a national strategy led out of the White House, whether there are any really impactful outcomes from that strategy over the last couple of years, or whether there’s still some things on the to-do list that you’re particularly keeping track of even as you get ready for retirement?

Rodney Petersen NICE really was an outgrowth of the 2008 Comprehensive National Cybersecurity Initiative. And as that later evolved and established the NICE program office, one of the things we were asked to do was provide some unification across the different investments happening in the federal government, and then by extension things that are happening in academia, in the private sector. And again, back in 2014 when Congress passed the Cybersecurity Enhancement Act, they asked us to build upon successful existing programs. And then later, in 2018, the first Trump administration issued an executive order asking us to come up with findings and recommendations. One of the things they asked us to do was an environmental scan of, again, existing programs, and to assess and evaluate their effectiveness.

So I think as a starting point, any new strategy, any new administration, any new person to this field, needs to acknowledge and research what currently exists and what’s being successful. What should we continue to do, versus what should we stop, or what should we change, or what should we introduce as a new initiative or a new platform? So I think when that previous administration’s National Cyber Workforce and Education Strategy came out, there was a lot of effort, after some time, to take a step back and look at all the existing programs, not only in the federal government, but at the state and local level, in the private sector and academia, and then to build upon that.

And I think they did an excellent job of recognizing some of the good efforts that were already underway. And then fast forward to the present, I think the same is true. One of the biggest changes in my 11 years here has just been the proliferation and the growth and expansion of education and workforce efforts. And so that’s mostly a good thing, because it shows that we’re prioritizing and putting investments in place to both increase the supply and also find the demand. But at the same time, it makes NICE’s mission all the more important to make sure we’re creating a coordinated approach across the U.S.

Justin Doubleday One of the facets of that [2023] strategy was strengthening the federal cyber workforce, and that’s, of course, a big area of interest for our audience. Do you have any assessment of all these different initiatives across the federal workforce, civilian side, defense side? As you mentioned, a lot has sprung up over the last five or 10 years. How cohesive are those, and how successful have those been, as we know this new administration is now looking at its own strategy?

Rodney Petersen In 2015, Congress passed the Federal Cybersecurity Workforce Assessment Act, and that was an early effort to try to essentially identify the number of cybersecurity workers we had in the federal government and that we needed in the federal government. And again, to do that, we had to have some kind of standard to measure against. So the NICE framework was the required tool to use to do that measurement, especially to answer, how many cybersecurity workers do we need? We need a recruitment and retention strategy.

And I would say again, there were a lot of positive efforts led by the national cyber director, but also in partnership with the Office of Personnel Management, Office of Management and Budget, and all the departments and agencies like Commerce, NIST and others who needed that workforce to try to really continue to build momentum and fine tune the federal practices. One of our community subgroups talks about modernizing talent management, and this isn’t meant explicitly for the federal government, but for the private sector as well.

But I would say the federal government is in need of a lot of modernization. Going back to how we currently classify federal jobs, often that OPM classification series, a lot of them are 2210, IT or information security workforce [roles]. And yet the jobs, as the NICE framework represents, the work roles are much more specific than that. So I think there is an ongoing need to evolve that process, but I think some good progress has been made over the years.

Justin Doubleday How much progress do you think we’ve made in the shift towards skills-based hiring?

Rodney Petersen At a minimum, there’s increased awareness and the value and the importance that it brings. And really it comes down to relying less on traditional credentials like academic degrees and maybe even certifications and experience, and looking more specifically at the skills, knowledge, capabilities that a job candidate would bring to the workforce. I would think that most organizations, most hiring managers, most cybersecurity professionals, are on board with that.

On the other hand, I think the practices still continue to lag. We still have job announcements that require the degrees, the experiences and things that really disqualify a vast majority of individuals who are probably quite capable. In fact, not only capable today, but have the potential to be the future workforce that is needed. So we need to limit those job announcements or job descriptions that disqualify people due to the lack of those traditional credentials, and really double down on the skills, the competencies that are needed.

Justin Doubleday More generally, you’ve written about the need for cybersecurity awareness among the workforce dating back to at least a decade now in your role at NIST. We now live in a world of annual cybersecurity trainings and PSAs. How would you grade cybersecurity awareness efforts over the past decade and just the level of acumen that we all generally have about cybersecurity?

Rodney Petersen My answer is probably pretty similar to the one I just gave about how we’re doing on skills-based education. The awareness programs, I would give an ‘A.’ The awareness efforts and the initiatives are very prolific. The outcomes, the behavioral change, is probably more a ‘C-minus.’ And I think what we’re all discovering with that gap is not that there aren’t good intentions, requirements or educational efforts in place. But it really comes down to changing behaviors, and we need to continue to look for more active ways to influence how employees or citizens or consumers make choices about what they do online and what they do with their computer and how they respond to phishing emails or whatever the case may be. The training, the one-way directional information flow, is not going to be enough. We need to look for more opportunities to simulate, to provide multimedia, to use exercises, to use performance-based assessments and exams that really reinforce the behavior change we’re striving to direct.

Justin Doubleday I have to bring it up: artificial intelligence. AI is on everyone’s mind. If you go on LinkedIn, there’s just so much speculation about how AI is going to completely change the future of the cybersecurity career field… I’d love to get your thoughts just on how you think about that and how the NIST NICE program has started to perhaps incorporate just some of the taxonomies and the skills that we’re seeing around AI come into play.

Rodney Petersen It’s not just that AI is going to impact the future. AI is impacting the present, and I think we see that all around us. One example is in education. How is AI being used by students? How is it or can it be used by teachers and faculty members? How can it be used by the organizations or the enterprises that run schools and universities? Just last week, we had our K-12 cybersecurity education conference where we had a student panel, and much of their discussion was around their use, their daily use, their hourly use of AI. And they encourage teachers and administrators to embrace it, because it’s not going to go away, and it’s going to be, in their opinion, a helpful part of their learning and educational experience.

A lot of NICE’s focus starts around the impact on the education or the learning enterprise. But from a cybersecurity perspective, I think NIST and NICE as well, and I would add the Centers of Academic Excellence in Cybersecurity, have been primarily focused on three impacts. One is, how do we make sure AI technology is secure? How do we make sure security is built in by design, which is fundamental to all software, all hardware, all kinds of technology considerations? And again, the NICE framework talks about design and development as a phase of the technology process life cycle where we need good cybersecurity practices.

We also think about how can AI be used for cybersecurity? How can those that are cybersecurity practitioners leverage AI for their benefit, all the way from writing code, monitoring against attacks and using it for defense, a variety of ways that we can leverage the benefits of AI for the cybersecurity of organizations. And then, thirdly, how do we defend against AI-generated attacks, which we are going to see increasingly. We’re seeing it presently. So it’s those three aspects: building it securely, how do we use it to our advantage, and how do we defend against that?


Harmonizing compliance: How oversight modernization can strengthen America’s cyber resilience

24 December 2025 at 16:23

For decades, the federal government has relied on sector-specific regulations to safeguard critical infrastructure. For example, the North American Electric Reliability Corporation’s Critical Infrastructure Protection (NERC CIP) standards govern the energy sector, while the Transportation Security Administration issues pipeline directives and the Environmental Protection Agency makes water utility rules.

While these frameworks were designed to protect individual sectors, the digital transformation of operational technology and information technology has made such compartmentalization increasingly risky.

Today, the boundaries between sectors are blurring – and the gaps between their governance frameworks are becoming attackers’ entry points.

The problem is the lack of harmony.

Agencies are enforcing strong but disconnected standards, and compliance often becomes an end in and of itself, rather than a pathway to resilience.

With the rollout of the Cyber Incident Reporting for Critical Infrastructure Act (CIRCIA) and the release of the National Institute of Standards and Technology’s Cybersecurity Framework 2.0, the United States has an opportunity to modernize oversight, making it more adaptive, consistent and outcome based.

Doing so will require a cultural shift within federal governance: from measuring compliance to ensuring capability.

Overlapping mandates, uneven protection

Every critical infrastructure sector has its own set of cybersecurity expectations, but those rules vary widely in scope, maturity and enforcement. The Energy Department may enforce rigorous incident response requirements for electric utilities, while TSA might focus its directives on pipeline resilience. Meanwhile, small water utilities, overseen by the EPA, often lack the resources to fully comply with evolving standards.

This uneven terrain creates what I call “regulatory dissonance.” One facility may be hardened according to its regulator’s rulebook, while another, connected through shared vendors or data exchanges, operates under entirely different assumptions. The gaps between these systems can create cascading risk.

The 2021 Colonial Pipeline incident illustrated how oversight boundaries can become national vulnerabilities. While the energy sector had long operated under NERC CIP standards, pipelines fell under less mature guidance until TSA introduced emergency directives after the fact. CIRCIA was conceived to close such gaps by requiring consistent incident reporting across sectors. Yet compliance alone won’t suffice if agencies continue to interpret and implement these mandates in isolation.

Governance as the common language

Modernizing oversight requires more than new rules; it requires shared governance principles that transcend sectors. NIST’s Cybersecurity Framework 2.0 introduces a crucial element in this direction: the new “Govern” function, which emphasizes defining roles, responsibilities and decision-making authority within organizations. This framework encourages agencies and their partners to move from reactive enforcement toward continuous, risk-informed governance.

For federal regulators, this presents an opportunity to align oversight frameworks through a “federated accountability” model. In practice, that means developing consistent taxonomies for cyber risk, harmonized maturity scoring systems and interoperable reporting protocols.

Agencies could begin by mapping common controls across frameworks, aligning TSA directives, EPA requirements and DOE mandates to a shared baseline that mirrors NIST Cybersecurity Framework principles. This kind of crosswalk not only streamlines oversight, but also strengthens public-private collaboration by giving industry partners a clear, consistent compliance roadmap.

Equally important is data transparency. If the Cybersecurity and Infrastructure Security Agency, DOE and EPA share a common reporting structure, insights from one sector can rapidly inform others. A pipeline incident revealing supply chain vulnerabilities could immediately prompt water or energy operators to review similar controls. Oversight becomes a feedback loop rather than a series of disconnected audits.

Engineering resilience into policy

One of the most promising lessons from the technology world comes from the “secure-by-design” movement: Resilience cannot be retrofitted. Security must be built into the design of both systems and the policies that govern them.

In recent years, agencies have encouraged vendors to adopt secure development lifecycles and prioritize vulnerability management. But that same thinking can, and should, be applied to regulation itself. “Secure-by-design oversight” means engineering resilience into the way standards are created, applied and measured.

That could include:

  • Outcome-based metrics: Shifting from binary compliance checks (“Is this control in place?”) to maturity indicators that measure recovery time, detection speed or incident containment capability.
  • Embedded feedback loops: Requiring agencies to test and refine directives through simulated exercises with industry before finalizing rules, mirroring how developers test software before release.
  • Adaptive updates: Implementing versioned regulatory frameworks that can be iteratively updated, similar to patch cycles, rather than rewritten every few years through lengthy rulemaking.

Such modernization would not only enhance accountability but also reduce the compliance burden on operators who currently navigate multiple, sometimes conflicting, reporting channels.

Making oversight measurable

As CIRCIA implementation begins in earnest, agencies must ensure that reporting requirements generate actionable insights. That means designing systems that enable real-time analysis and trend detection across sectors, not just retrospective compliance reviews.

The federal government can further strengthen resilience by integrating incident reporting into national situational awareness frameworks, allowing agencies like CISA and DOE to correlate threat intelligence and issue rapid, unified advisories.

Crucially, oversight modernization must also address the human dimension of compliance. Federal contractors, third-party service providers and local operators often sit at the outer edge of regulatory reach but remain central to national resilience. Embedding training, resource-sharing and technical assistance into federal mandates can elevate the entire ecosystem, rather than penalizing those least equipped to comply.

The next step in federal cyber strategy

Effective harmonization hinges on trust and reciprocity between government and industry. The Joint Cyber Defense Collaborative (JCDC) has demonstrated how voluntary partnerships can accelerate threat information sharing, but most collaboration remains one-directional.

To achieve true synchronization, agencies must move toward reciprocal intelligence exchange, aggregating anonymized, cross-sector data into federal analysis centers and pushing synthesized insights back to operators. This not only democratizes access to threat intelligence, but also creates a feedback-driven regulatory ecosystem.

In the AI era, where both defenders and attackers are leveraging machine learning, shared visibility becomes the foundation of collective defense. Federal frameworks should incorporate AI governance principles, ensuring transparency in data usage, algorithmic accountability and protection against model exploitation, while enabling safe, responsible innovation across critical infrastructure.

A unified future for resilience governance 

CIRCIA and NIST Cybersecurity Framework 2.0 have laid the groundwork for a new era of harmonized oversight — one that treats resilience as a measurable capability rather than a compliance checkbox.

Achieving that vision will require a mindset shift at every level of governance. Federal regulators must coordinate across agencies, industry partners must participate in shaping standards, and both must view oversight as a dynamic, adaptive process.

When frameworks align, insights flow freely, and regulations evolve as quickly as the threats they are designed to mitigate, compliance transforms from a bureaucratic exercise into a national security asset. Oversight modernization is the blueprint for a more resilient nation.

 

Dr. Jerome Farquharson is managing director and senior executive advisor at MorganFranklin Cyber.



Hunting for Mythic in network traffic

11 December 2025 at 07:00

Post-exploitation frameworks

Threat actors frequently employ post-exploitation frameworks in cyberattacks to maintain control over compromised hosts and move laterally within the organization’s network. While they once favored closed-source frameworks, such as Cobalt Strike and Brute Ratel C4, open-source projects like Mythic, Sliver, and Havoc have surged in popularity in recent years. Malicious actors are also quick to adopt relatively new frameworks, such as Adaptix C2.

Analysis of popular frameworks revealed that their development focuses heavily on evading detection by antivirus and EDR solutions, often at the expense of stealth against systems that analyze network traffic. While obfuscating an agent’s network activity is inherently challenging, agents must inevitably communicate with their command-and-control servers. Consequently, an agent’s presence in the system and its malicious actions can be detected with the help of various network-based intrusion detection systems (IDS) and, of course, Network Detection and Response (NDR) solutions.

This article examines methods for detecting the Mythic framework within an infrastructure by analyzing network traffic. This framework has gained significant traction among various threat actors, including Mythic Likho (Arcane Wolf) and GOFFEE (Paper Werewolf), and continues to be used in APT and other attacks.

The Mythic framework

Mythic C2 is a multi-user command and control (C&C, or C2) platform designed for managing malicious agents during complex cyberattacks. Mythic is built on a Docker container architecture, with its core components – the server, agents, and transport modules – written in Python. This architecture allows operators to add new agents, communication channels, and custom modifications on the fly.

Since Mythic is a versatile tool for the attacker, from the defender’s perspective, its use can align with multiple stages of the Unified Kill Chain, as well as a large number of tactics, techniques, and procedures in the MITRE ATT&CK® framework.

  • Pivoting is a tactic where the attacker uses an already compromised system as a pivot point to gain access to other systems within the network. In this way, they gradually expand their presence within the organization’s infrastructure, bypassing firewalls, network segmentation, and other security controls.
  • Collection (TA0009) is a tactic focused on gathering and aggregating information of value to the attacker: files, credentials, screenshots, and system logs. In the context of network operations, collection is often performed locally on compromised hosts, with data then packaged for transfer. Tools like Mythic automate the discovery and selection of data sought by the adversary.
  • Exfiltration (TA0010) is the process of moving collected information out of the secured network via legitimate or covert channels, such as HTTP(S), DNS, or SMB. Attackers may use resident agents or intermediate relays (pivot hosts) to conceal the exfiltration source and route.
  • Command and Control (TA0011) encompasses the mechanisms for establishing and maintaining a communication channel between the operator and compromised hosts to transmit commands and receive status updates. This includes direct connections, relaying through pivot hosts, and the use of covert protocols. Frameworks like Mythic provide advanced C2 capabilities, such as scheduled command execution, tunneling, and multi-channel communication, which complicate the detection and blocking of their activity.

This article focuses exclusively on the Command and Control (TA0011) tactic, whose techniques can be effectively detected within the network traffic of Mythic agents.

Detecting Mythic agent activity in network traffic

At the time of writing, Mythic supports data transfer over HTTP/S, WebSocket, TCP, SMB, DNS, and MQTT. The platform also boasts over a dozen different agents, written in Go, Python, and C#, designed for Windows, macOS, and Linux.

Mythic employs two primary architectures for its command network:

  • P2P (peer-to-peer): agents communicate with adjacent agents, forming a chain of connections that eventually leads to a node communicating directly with the Mythic C2 server. For this purpose, agents utilize TCP and SMB.
  • Egress: agents communicate directly with the C2 server via HTTP/S, WebSocket, MQTT, or DNS.

P2P communication

Mythic provides pivoting capabilities via named SMB pipes and TCP sockets. To detect Mythic agent activity in P2P mode, we will examine their network traffic and create corresponding Suricata detection rules (signatures).

P2P communication via SMB

When managing agents via the SMB protocol, a named pipe is used by default for communication, with its name matching the agent’s UUID.

Although this parameter can be changed, it serves as a reliable indicator and can be easily described with a regular expression. Example:
[a-z0-9]{8}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{12}
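
As a quick illustration, the same pattern can also be applied outside of an IDS, for example in a triage script that checks pipe names extracted from event logs or traffic metadata. A minimal Python sketch (the helper name and sample values are ours):

import re

# Default Mythic SMB pipe names are the agent's UUID in lowercase.
UUID_RE = re.compile(r"[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}")

def looks_like_mythic_pipe(pipe_name: str) -> bool:
    # Flag pipe names that consist solely of a UUID-formatted string.
    return UUID_RE.fullmatch(pipe_name.lower()) is not None

print(looks_like_mythic_pipe("0f63ba6b-7d96-46f6-9f7e-2e1e36c4a4a2"))  # True
print(looks_like_mythic_pipe("srvsvc"))                                # False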

For SMB communication, agents encode and encrypt data according to the pattern: base64(UUID+AES256(JSON)). This data is then split into blocks and transmitted over the network. The screenshot below illustrates what a network session for establishing a connection between agents looks like in Wireshark.

Commands and their responses are packaged within the MythicMessage data structure. This structure contains three header fields, as well as the commands themselves or the corresponding responses (a short framing sketch follows the list below):

  • Total size (4 bytes)
  • Number of data blocks (4 bytes)
  • Current block number (4 bytes)
  • Base64-encoded data
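
To make the framing concrete, here is a minimal Python sketch of how such a message could be assembled into the four frames listed above. The helper name is ours, and the big-endian byte order is an assumption carried over from the length field documented for the TCP profile later in this article:

import base64
import struct

def frame_mythic_message(encoded_data: bytes, total_blocks: int, block_no: int) -> list[bytes]:
    # Each returned frame is transmitted separately, mirroring the sequence
    # of SMB Write Requests described below.
    return [
        struct.pack(">I", len(encoded_data)),  # total message size (assumed here: size of the encoded data)
        struct.pack(">I", total_blocks),       # number of data blocks
        struct.pack(">I", block_no),           # current block number
        encoded_data,                          # Base64-encoded data
    ]

# Example: frame a single-block message.
frames = frame_mythic_message(base64.b64encode(b"demo"), total_blocks=1, block_no=1)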

The screenshot below shows an example of SMB communication between agents.

The agent (10.63.101.164) sends a command to another agent in the MythicMessage format. The first three Write Requests transmit the total message size, total number of blocks, and current block number. The fourth request transmits the Base64-encoded data. This is followed by a sequence of Read Requests, which are also transmitted in the MythicMessage format.

Below is the data transmitted in the fourth field of the MythicMessage structure.

The content is encoded in Base64. Upon decoding, the structure of the transmitted information becomes visible: it begins with the UUID of the infected host, followed by a data block encrypted using AES-256.
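
To make this layout concrete, here is a small decoding sketch in Python. The helper is hypothetical; it assumes the base64(UUID+AES256(JSON)) scheme described above and makes no attempt to decrypt the AES-256 payload:

import base64
import re

UUID_PREFIX_RE = re.compile(rb"^[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}")

def extract_agent_uuid(payload: bytes) -> bytes | None:
    # Base64-decode a captured payload and return the leading agent UUID,
    # or None if the data does not match the expected layout.
    try:
        raw = base64.b64decode(payload, validate=True)
    except ValueError:
        return None
    m = UUID_PREFIX_RE.match(raw)
    return m.group(0) if m else None  # everything after the UUID is AES-256 ciphertext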

The fact that the data starts with a UUID string can be leveraged to create a signature-based detection rule that searches network packets for the identifier pattern.

To search for packets containing a UUID, the following signature can be applied. It uses specific request types and protocol flags as filters (Command: Ioctl (11), Function: FSCTL_PIPE_WAIT (0x00110018)), followed by a check to see if the pipe name matches the UUID pattern.

alert tcp any any -> any [139, 445] (msg: "Trojan.Mythic.SMB.C&C"; flow: to_server, established; content: "|fe|SMB"; offset: 4; depth: 4; content: "|0b 00|"; distance: 8; within: 2; content: "|18 00 11 00|"; distance: 48; within: 12; pcre: "/\x48\x00\x00\x00[\x00-\xFF]{2}([a-z0-9]\x00){8}\-\x00([a-z0-9]\x00){4}\-\x00([a-z0-9]\x00){4}\-\x00([a-z0-9]\x00){4}\-\x00([a-z0-9]\x00){12}$/R"; threshold: type both, track by_src, count 1, seconds 60; reference: url, https://github.com/MythicC2Profiles/smb; classtype: ndr1; sid: 9000101; rev: 1;)

Agent activity can also be detected by analyzing data transmitted in SMB WriteRequest packets with the protocol flag Command: Write (9) and a distinct packet structure where the BlobOffset and BlobLen fields are set to zero. If the Data field is Base64-encoded and, after decoding, begins with a UUID-formatted string, this indicates a command-and-control channel.

alert tcp any any -> any [139, 445] (msg: "Trojan.Mythic.SMB.C&C"; flow: to_server, established; dsize: > 360; content: "|fe|SMB"; offset: 4; depth: 4; content: "|09 00|"; distance: 8; within: 2; content: "|00 00 00 00 00 00 00 00 00 00 00 00|"; distance: 86; within: 12; base64_decode: bytes 64, offset 0, relative; base64_data; pcre: "/^[a-z0-9]{8}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{12}/"; threshold: type both, track by_src, count 1, seconds 60; reference: url, https://github.com/MythicC2Profiles/smb; classtype: ndr1; sid: 9000102; rev: 1;)

Below is the KATA NDR user interface displaying an alert about detecting a Mythic agent operating in P2P mode over SMB. In this instance, the first rule – which checks the request type, protocol flags, and the UUID pattern – was triggered.

It should be noted that these signatures have a limitation. If the SMBv3 protocol with encryption enabled is used, Mythic agent activity cannot be detected with signature-based methods. A possible alternative is behavioral analysis. However, in this context, it suffers from low accuracy and a high false-positive rate. The SMB protocol is widely used by organizations for various legitimate purposes, making it difficult to isolate behavioral patterns that definitively indicate malicious activity.

P2P communication via TCP

Mythic also supports P2P communications via TCP. The connection initialization process appears in network traffic as follows:

As with SMB, the MythicMessage structure is used for transmitting and receiving data. First, the data length (4 bytes) is sent as a big-endian DWORD in a separate packet. Subsequent packets transmit the number of data blocks, the current block number, and the data itself. However, unlike SMB packets, the value of the current block number field is always 0x00000000, due to TCP’s built-in packet fragmentation support.
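
A rough Python sketch of reading this framing from a reassembled TCP stream is shown below. The assumption that the length field covers only the Base64 payload is ours, based on the description above; the sample data is hypothetical.

import io
import struct

def read_mythic_tcp_message(stream) -> bytes:
    # The data length arrives first as a big-endian DWORD
    (length,) = struct.unpack(">I", stream.read(4))
    block_count, block_num = struct.unpack(">II", stream.read(8))  # block_num is always 0 over TCP
    return stream.read(length)  # Base64-encoded UUID+AES256(JSON)

payload = b"MGVjNGQyMTctMmQ5NS00YWE1LThmZTQtYTIzNWQwN2JkNTRm"
buf = io.BytesIO(struct.pack(">III", len(payload), 1, 0) + payload)
print(read_mythic_tcp_message(buf))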

The data encoding scheme is also analogous to what we observed with SMB and appears as follows: base64(UUID+AES256(JSON)). Below is an example of a network packet containing Mythic data.

The decoded data appears as follows:

Similar to communication via SMB, signature-based detection rules can be created for TCP traffic to identify Mythic agent activity by searching for packets containing UUID-formatted strings. Below are two Suricata detection rules. The first rule is a utility rule. It does not generate security alerts but instead tags the TCP session with an internal flag, which is then checked by another rule. The second rule verifies the flag and applies filters to confirm that the current packet is being analyzed at the beginning of a network session. It then decodes the Base64 data and searches the resulting content for a UUID-formatted string.

alert tcp any any -> any any (msg: "Trojan.Mythic.TCP.C&C"; flow: from_server, established; dsize: 4; stream_size: server, <, 6; stream_size: client, <, 3; content: "|00 00|"; depth: 2; pcre: "/^\x00\x00[\x00-\x5C]{1}[\x00-\xFF]{1}$/"; flowbits: set, mythic_tcp_p2p_msg_len; flowbits: noalert; threshold: type both, track by_src, count 1, seconds 60; reference: url, https://github.com/MythicC2Profiles/tcp; classtype: ndr1; sid: 9000103; rev: 1;)

alert tcp any any -> any any (msg: "Trojan.Mythic.TCP.C&C"; flow: from_server, established; dsize: > 300; stream_size: server, <, 6000; stream_size: client, <, 6000; flowbits: isset, mythic_tcp_p2p_msg_len; content: "|00 00 00|"; depth: 3; content: "|00 00 00 00|"; distance: 1; within: 4; base64_decode: bytes 64, offset 0, relative; base64_data; pcre: "/^[a-z0-9]{8}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{12}/"; threshold: type both, track by_src, count 1, seconds 60; reference: url, https://github.com/MythicC2Profiles/tcp; classtype: ndr1; sid: 9000104; rev: 1;)

Below is the NDR interface displaying an example of the two rules detecting a Mythic agent operating in P2P mode over TCP.

Egress transport modules

Covert Egress communication

For stealthy operations, Mythic allows agents to be managed through popular services. This makes its activity less conspicuous within network traffic. Mythic includes transport modules based on the following services:

  • Discord
  • GitHub
  • Slack

Of these, only the first two remain relevant at the time of writing. Communication via Slack (the Slack C2 Profile transport module) is no longer supported by the developers and is considered deprecated, so we will not examine it further.

The Discord C2 Profile transport module

The use of the Discord service as a mediator for C2 communication within the Mythic framework has been gaining popularity recently. In this scenario, agent traffic is indistinguishable from normal Discord activity, with commands and their execution results masquerading as messages and file attachments. Communication with the server occurs over HTTPS and is encrypted with TLS. Therefore, detecting Mythic traffic requires TLS decryption.

Analyzing decrypted TLS traffic

Let’s assume we are using an NDR platform in conjunction with a network traffic decryption (TLS inspection) system to detect suspicious network activity. In this case, we operate under the assumption that we can decrypt all TLS traffic. Let’s examine possible detection rules for that scenario.

Agent and server communication occurs via Discord API calls to send messages to a specific channel. Communication between the agent and Mythic uses the MythicMessageWrapper structure, which contains the following fields:

  • message: the transmitted data
  • sender_id: a GUID generated by the agent, included in every message
  • to_server: a direction flag – a message intended for the server or the agent
  • id: not used
  • final: not used

Of particular interest to us is the message field, which contains the transmitted data encoded in Base64. The MythicMessageWrapper message is transmitted in plaintext, making it accessible to anyone with read permissions for messages on the Discord server.
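
For illustration, a wrapper might look roughly like the JSON below. The field values are hypothetical, and the check mirrors the sender_id heuristic used in the Suricata rule further down:

import json
import re

UUID_RE = re.compile(r"^[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}$")

# Hypothetical MythicMessageWrapper carried in a Discord message
wrapper = {
    "message": "MGVjNGQyMTctLi4u",  # Base64(UUID+AES256(JSON)), truncated placeholder
    "sender_id": "0ec4d217-2d95-4aa5-8fe4-a235d07bd54f",
    "to_server": True,
    "id": "",
    "final": False,
}

def suspicious_wrapper(raw: str) -> bool:
    # Flag messages whose sender_id matches the agent GUID pattern
    msg = json.loads(raw)
    return bool(UUID_RE.match(msg.get("sender_id", "")))

print(suspicious_wrapper(json.dumps(wrapper)))  # True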

Below is an example of data transmission via messages in a Discord channel.

To establish a connection, the agent authenticates to the Discord server via the API call /api/v10/gateway/bot. We observe the following data in the network traffic:

After successful initialization, the agent gains the ability to receive and respond to commands. To create a message in the channel, the agent makes a POST request to the API endpoint /channels/<channel.id>/messages. The network traffic for this call is shown in the screenshot below.

After decoding the Base64, the content of the message field appears as follows:

A structure characteristic of a UUID is visible at the beginning of the packet.

After processing the message, the agent deletes it from the channel via a DELETE request to the API endpoint /channels/{channel.id}/messages/{message.id}.

Below is a Suricata rule that detects the agent’s Discord-based communication activity. It checks message-creation API requests for a JSON body containing Base64-encoded content and a sender_id field matching the UUID pattern.

alert tcp any any -> any any (msg: "Trojan.Mythic.HTTP.C&C"; flow: to_server, established; content: "POST"; http_method; content: "/api/"; http_uri; content: "/channels/"; distance: 0; http_uri; pcre: "/\/messages$/U"; content: "|7b 22|content|22|"; depth: 20; http_client_body; content: "|22|sender_id"; depth: 1500; http_client_body; pcre: "/\x22sender_id\x5c\x22\x3a\x5c\x22[a-z0-9]{8}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{12}/"; threshold: type both, track by_src, count 1, seconds 60; reference: url, https://github.com/MythicC2Profiles/discord; classtype: ndr1; sid: 9000105; rev: 1;)

Below is the NDR user interface displaying an example of detecting the activity of the Discord C2 Profile transport module for a Mythic agent within decrypted HTTP traffic.

Analyzing encrypted TLS traffic

If Discord usage is permitted on the network and there is no capability to decrypt traffic, it becomes nearly impossible to detect agent activity. In this scenario, behavioral analysis of requests to the Discord server may prove useful. Below is network traffic showing frequent TLS connections to the Discord server, which could indicate commands being sent to an agent.

In this case, we can use a Suricata rule to detect the frequent TLS sessions with Discord servers:

alert tcp any any -> any any (msg: "NetTool.PossibleMythicDiscordEgress.TLS.C&C"; flow: to_server, established; tls_sni; content: "discord.com"; nocase; threshold: type both, track by_src, count 4, seconds 420; reference: url, https://github.com/MythicC2Profiles/discord; classtype: ndr3; sid: 9000106; rev: 1;)

Another method for detecting these communications involves tracking multiple DNS queries to the discord.com domain.

The following rule can be applied to detect these:

alert udp any any -> any 53 (msg: "NetTool.PossibleMythicDiscordEgress.DNS.C&C"; content: "|01 00 00 01 00 00 00 00 00 00|"; depth: 10; offset: 2; content: "|07|discord|03|com|00|"; nocase; distance: 0; threshold: type both, track by_src, count 4, seconds 60; reference: url, https://github.com/MythicC2Profiles/discord; classtype: ndr3; sid: 9000107; rev: 1;)
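
The |07|discord|03|com|00| content match corresponds to the DNS wire format, in which each label of the queried name is prefixed with its length. A short Python sketch reproduces the byte pattern:

def dns_qname(domain: str) -> bytes:
    # Encode a domain as length-prefixed labels terminated by a zero byte
    out = b""
    for label in domain.split("."):
        out += bytes([len(label)]) + label.encode("ascii")
    return out + b"\x00"

print(dns_qname("discord.com"))  # b'\x07discord\x03com\x00'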

Below is the NDR user interface showing an example of a custom rule in operation, detecting the activity of the Discord C2 Profile transport module for a Mythic agent within encrypted traffic based on characteristic DNS queries.

The proposed rule options have low accuracy and can generate a high number of false positives. Therefore, they must be adapted to the specific characteristics of the infrastructure in which they will run. The count and seconds parameters of the threshold keyword, which control the triggering frequency and the time window, require tuning.

GitHub C2 Profile transport module

GitHub’s popularity has made it an attractive choice as a mediator for managing Mythic agents. The core concept is the same as in other covert Egress communication transport modules. Communication with GitHub utilizes HTTPS. Successful operation requires an account on the target platform and the ability to communicate via API calls. The transport module utilizes the GitHub API to send comments to pre-created Issues and to commit files to a branch within a repository controlled by the attackers. In this model, the agent interacts only with GitHub: it creates and reads comments, uploads files, and manages branches. It does not communicate with any other servers. The communication algorithm via GitHub is as follows (a detection-oriented sketch follows the list):

  1. The agent posts a comment (check-in) to a designated Issue on GitHub, intended for agents to report their results.
  2. The Mythic server validates the comment, deletes it, and posts a reply in an issue designated for server use.
  3. The agent creates a branch with a name matching its UUID and writes a get_tasking file to it (performs a push request).
  4. The Mythic server reads the file and writes a response file to the same branch.
  5. The agent reads the response file, deletes the branch, pauses, and repeats the cycle.
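
Assuming decrypted HTTP traffic, the API calls above could be flagged with a check along the following lines. This is only a sketch: the request fields are simplified assumptions, and the URI patterns mirror those used in the Suricata rules later in this section.

import re

UUID = r"[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}"

# URI patterns derived from the agent's GitHub API usage
SUSPICIOUS = [
    ("POST", re.compile(r"^/repos/.+/comments$")),   # check-in comments
    ("POST", re.compile(r"^/repos/.+/git/refs$")),   # branch creation
    ("PUT",  re.compile(r"^/repos/.+/contents/.+")), # file publication
]

def flag_github_request(method: str, uri: str, body: str) -> bool:
    # Flag matching API calls whose body references a UUID-formatted string
    for m, pattern in SUSPICIOUS:
        if method == m and pattern.search(uri) and re.search(UUID, body):
            return True
    return False

print(flag_github_request("POST", "/repos/acme/repo/git/refs",
                          '{"ref":"refs/heads/0ec4d217-2d95-4aa5-8fe4-a235d07bd54f"}'))  # True
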
Analyzing decrypted TLS traffic

Let’s consider an approach to detecting agent activity when traffic decryption is possible.

Agent communication with the server utilizes API calls to GitHub. The payload is encoded in Base64 and published in plaintext; therefore, anyone who can view the repository or analyze the traffic contents can decode it.

Analysis of agent communication revealed that the most useful traffic for creating detection rules is associated with publishing check-in comments, creating a branch, and publishing a file.

During the check-in phase, the agent posts a comment to register a new agent and establish communication.

The transmitted data is encoded in Base64 and contains the agent’s UUID and the portion of the message encrypted using AES-256.

This allows for a signature that detects UUID-formatted substrings within GitHub comment creation requests.

alert tcp any any -> any any (msg: "Trojan.Mythic.HTTP.C&C"; flow: to_server, established; content: "POST"; http_method; content: "api.github.com"; http_host; content: "/repos/"; depth: 8; http_uri; pcre: "/\/comments$/U"; content: "|22|body|22|"; depth: 8; http_client_body; base64_decode: bytes 300, offset 2, relative; base64_data; pcre: "/^[a-z0-9]{8}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{12}/"; threshold: type both, track by_src, count 1, seconds 60; reference: url, https://github.com/MythicC2Profiles/github; classtype: ndr1; sid: 9000108; rev: 1;)

Another stage suitable for detection is when the agent creates a separate branch with its UUID as the name. All subsequent relevant communication with the server will occur within this branch. Here is an example of a branch creation request:

Therefore, we can create a detection rule to identify UUID-formatted strings within branch creation requests.

alert tcp any any -> any any (msg: "Trojan.Mythic.HTTP.C&C"; flow: to_server, established; content: "POST"; http_method; content: "api.github.com"; http_host; content: "/repos/"; depth: 100; http_uri; content: "/git/refs"; distance: 0; http_uri; content: "|22|ref|22 3a|"; depth: 10; http_client_body; content: "refs/heads/"; distance: 0; within: 50; http_client_body; pcre: "/refs\/heads\/[a-z0-9]{8}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{12}\x22/"; threshold: type both, track by_src, count 1, seconds 60; reference: url, https://github.com/MythicC2Profiles/github; classtype: ndr1; sid: 9000109; rev: 1;)

After creating the branch, the agent writes a file to it (sends a push request), which contains Base64-encoded data.

Therefore, we can create a rule to trigger on file publication requests to a branch whose name matches the UUID pattern.

alert tcp any any -> any any (msg: "Trojan.Mythic.HTTP.C&C"; flow: to_server, established; content: "PUT"; http_method; content: "api.github.com"; http_host; content: "/repos/"; depth:8; http_uri; content: "/contents/"; distance: 0; http_uri; content: "|22|content|22|"; depth: 100; http_client_body; pcre: "/\x22message\x22\x3a\x22[a-z0-9]{8}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{12}\x22/"; threshold: type both, track by_src, count 1, seconds 60; reference: url, https://github.com/MythicC2Profiles/github; classtype: ndr1; sid: 9000110; rev: 1;)

The screenshot below shows how the NDR solution logs all suspicious communications using the GitHub API and subsequently identifies the Mythic agent’s activity. The result is an alert with the verdict Trojan.Mythic.HTTP.C&C.

Analyzing encrypted TLS traffic

Communication with GitHub occurs over HTTPS; therefore, in the absence of traffic decryption capability, signature-based methods for detecting agent activity cannot be applied. Let’s consider a behavioral agent activity detection approach.

For instance, it is possible to detect connections to GitHub servers that are atypical in frequency and purpose, originating from network segments where this activity is not expected. The screenshot below shows an example of an agent’s multiple TLS sessions. The traffic reflects the execution of several commands, as well as idle time, manifested as constant polling of the server while awaiting new tasks.

Multiple TLS sessions with the GitHub service from uncharacteristic network segments can be detected using the rule presented below:

alert tcp any any -> any any (msg:"NetTool.PossibleMythicGitHubEgress.TLS.C&C"; flow: to_server, established; tls_sni; content: "api.github.com"; nocase; threshold: type both, track by_src, count 4, seconds 60; reference: url, https://github.com/MythicC2Profiles/github; classtype: ndr3; sid: 9000111; rev: 1;)

Additionally, multiple DNS queries to the service can be logged in the traffic.

This activity is detected with the help of the following rule:

alert udp any any -> any 53 (msg: "NetTool.PossibleMythicGitHubEgress.DNS.C&C"; content: "|01 00 00 01 00 00 00 00 00 00|"; depth: 10; offset: 2; content: "|03|api|06|github|03|com|00|"; nocase; distance: 0; threshold: type both, track by_src, count 12, seconds 180; reference: url, https://github.com/MythicC2Profiles/github; classtype: ndr3; sid: 9000112; rev: 1;)

The screenshot below shows the NDR interface with an example of the first rule in action, detecting traces of the GitHub profile activity for a Mythic agent within encrypted TLS traffic.

The suggested rule options can produce false positives, so to improve their effectiveness, they must be adapted to the specific characteristics of the infrastructure in which they will run. The parameters of the threshold keyword – specifically the count and seconds values, which control the number of events required to generate an alert and the time window for their occurrence in NDR – must be configured.

Direct Egress communication

The Egress communication model allows agents to interact directly with the C2 server via the following protocols:

  • HTTP(S)
  • WebSocket
  • MQTT
  • DNS

The first two protocols are the most prevalent. The DNS-based transport module is still under development, and the module based on MQTT sees little use among operators. We will not examine them within the scope of this article.

Communication via HTTP

HTTP is the most common protocol for building a Mythic agent control network. The HTTP transport container acts as a proxy between the agents and the Mythic server. It allows data to be transmitted in both plaintext and encrypted form. Crucially, the metadata is not encrypted, which enables the creation of signature-based detection rules.

Below is an example of unencrypted Mythic network traffic over HTTP. During a GET request, Base64-encoded data is passed as the value of a query parameter.

After decoding, the agent’s UUID – generated according to a specific pattern – becomes visible. This identifier is followed by a JSON object containing the key parameters of the host, collected by the agent.
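
For plaintext or decrypted HTTP, the same UUID-prefix heuristic can be prototyped against the query string; the URL and parameter name below are hypothetical:

import base64
import re
from urllib.parse import parse_qsl, urlparse

UUID_RE = re.compile(rb"^[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}")

def url_carries_mythic_checkin(url: str) -> bool:
    # Decode each query parameter value as Base64 and test for a leading UUID
    for _name, value in parse_qsl(urlparse(url).query):
        try:
            decoded = base64.b64decode(value, validate=True)
        except ValueError:
            continue
        if UUID_RE.match(decoded):
            return True
    return False

print(url_carries_mythic_checkin(
    "http://198.51.100.7/index?q=MGVjNGQyMTctMmQ5NS00YWE1LThmZTQtYTIzNWQwN2JkNTRm"))  # True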

If data encryption is applied, the network traffic for agent communication appears as shown in the screenshot below.

After decrypting the traffic and decoding from Base64, the communication data reveals the familiar structure: UUID+AES256(JSON).

Therefore, to create a detection signature for this case, we can also rely on the presence of a UUID within the Base64-encoded data in POST requests.

alert tcp any any -> any any (msg: "Trojan.Mythic.HTTP.C&C"; flow: to_server, established; content: "POST"; http_method; content: "|0D 0A 0D 0A|"; base64_decode: bytes 80, offset 0, relative; base64_data; content: "-"; offset: 8; depth: 1; content: "-"; distance: 4; within: 1; content: "-"; distance: 4; within: 1; content: "-"; distance: 4; within: 1; pcre: "/[0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}/"; threshold: type both, track by_src, count 1, seconds 180; reference: md5, 6ef89ccee639b4df42eaf273af8b5ffd; classtype: trojan1; sid: 9000113; rev: 2;)

The screenshot below shows how the NDR platform detects agent communication with the server over HTTP, generating an alert with the name Trojan.Mythic.HTTP.C&C.

Communication via HTTPS

Mythic agents can communicate with the server via HTTPS using the corresponding transport module. In this case, data is encrypted with TLS and is not amenable to signature-based analysis. However, the activity of Mythic agents can be detected if they use the default SSL certificate. Below is an example of network traffic from a Mythic agent with such a certificate.

For this purpose, the following signature is applied:

alert tcp any any -> any any (msg:"Trojan.Mythic.TLS.C&C"; flow:established, from_server; content:"|16 03|"; depth:2; content:"|0B|"; distance:3; within:1; content:"|55 04|"; distance:0; content:"|09|Mythic C2"; nocase; distance:0; threshold:type both,track by_src,count 1,seconds 60; reference:url,github.com/its-a-feature/Mythic; classtype:ndr1; sid:9000114; rev:1;)

WebSocket

The WebSocket protocol enables full-duplex communication between a client and a remote host. Mythic can utilize it for agent management.

The process of agent communication with the server via WebSocket is as follows:

  1. The agent sends a request to the WebSocket container to change the protocol for the HTTP(S) connection.
  2. The agent and the WebSocket container switch to WebSocket to send and receive messages.
  3. The agent sends a message to the WebSocket container requesting tasks from the Mythic container.
  4. The WebSocket container forwards the request to the Mythic container.
  5. The Mythic container returns the tasks to the WebSocket container.
  6. The WebSocket container forwards these tasks to the agent.

It is worth mentioning that in this communication model, both the WebSocket container and the Mythic container reside on the Mythic server. Below is a screenshot of the initial agent connection to the server.

An analysis of the TCP session shows that the actual data is transmitted in the data field in Base64 encoding.

Decoding reveals the familiar data structure: UUID+AES256(JSON).
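
The check can be prototyped against the JSON carried in the WebSocket frames; the frame below is a hypothetical example:

import base64
import json
import re

UUID_RE = re.compile(rb"^[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}")

def ws_frame_is_mythic(frame: bytes) -> bool:
    # Expect {"data": "<Base64>"} and test the decoded value for a leading UUID
    try:
        decoded = base64.b64decode(json.loads(frame)["data"], validate=True)
    except (ValueError, KeyError):
        return False
    return UUID_RE.match(decoded) is not None

frame = json.dumps({"data": "MGVjNGQyMTctMmQ5NS00YWE1LThmZTQtYTIzNWQwN2JkNTRm"}).encode()
print(ws_frame_is_mythic(frame))  # True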

Therefore, we can use an approach similar to those discussed above to detect agent activity. The signature should rely on the UUID string at the beginning of the data field. The rule first verifies that the session data matches the data:base64 format, then decodes the data field and searches for a string matching the UUID pattern.

alert tcp any any -> any any (msg: "Trojan.Mythic.WebSocket.C&C"; flow: established, from_server; content: "|7B 22|data|22 3a 22|"; depth: 14; pcre: "/^[0-9a-zA-Z\/\+]+[=]{0,2}\x22\x7D\x0A$/R"; content: "|7B 22|data|22 3a 22|"; depth: 14; base64_decode: bytes 48, offset 0, relative; base64_data; pcre: "/^[a-z0-9]{8}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{4}\-[a-z0-9]{12}/i"; threshold: type both, track by_src, count 1, seconds 30; reference: url, https://github.com/MythicAgents/; classtype: ndr1; sid: 9000115; rev: 2;)

Below is the Trojan.Mythic.WebSocket.C&C signature triggering on Mythic agent communication over WebSocket.

Takeaways

The Mythic post-exploitation framework continues to gain popularity and evolve rapidly. New agents are emerging, designed for covert persistence within target infrastructures. Despite this evolution, the various implementations of network communication in Mythic share many common characteristics that remain largely consistent over time. This consistency enables IDS/NDR solutions to effectively detect the framework’s agent activity through network traffic analysis.

Mythic supports a wide array of agent management options utilizing several network protocols. Our analysis of agent communications across these protocols revealed that agent activity can be detected by searching for specific data patterns within network traffic. The primary detection criterion involves tracking UUID strings in specific positions within Base64-encoded transmitted data. However, while the general approach to detecting agent activity is similar across protocols, each requires protocol-specific filters. Consequently, creating a single, universal signature for detecting Mythic agents in network traffic is challenging; individual detection rules must be crafted for each protocol. This article has provided signatures that are included in Kaspersky NDR.

Kaspersky NDR is designed to identify current threats within network infrastructures. It enables the detection of all popular post-exploitation frameworks based on their characteristic traffic patterns. Since the network components of these frameworks change infrequently, employing an NDR solution ensures high effectiveness in agent discovery.

Kaspersky verdicts in Kaspersky solutions (Kaspersky Anti-Targeted Attack with NDR module and Kaspersky NGFW)

Trojan.Mythic.SMB.C&C
Trojan.Mythic.TCP.C&C
Trojan.Mythic.HTTP.C&C
Trojan.Mythic.TLS.C&C
Trojan.Mythic.WebSocket.C&C

Innovator Spotlight: Singulr AI

By: Gary
3 October 2025 at 12:41

The AI Governance Tightrope: Enabling Innovation Without Compromising Security  Cybersecurity leaders are facing a critical inflection point. The rapid emergence of artificial intelligence technologies presents both unprecedented opportunities and significant...

The post Innovator Spotlight: Singulr AI appeared first on Cyber Defense Magazine.

Instahacking: Understanding Methods, Risks, and Protection

20 August 2025 at 02:13


In today’s hyper-connected era, social media isn’t just a pastime; it’s how we network, shape our identities, and sometimes even boost sales. Out of all the platforms, Instagram consistently ranks at the top. Every minute, its members flood the feed with selfies, stories, and DMs, providing a treasure trove of digital identities for anyone with the right set of skills. What often goes unnoticed is that this wealth of information is an invitation for cyber attackers. The label “instahacking” has emerged to describe the unauthorized effort to crack an Instagram account. While the term usually conjures images of basement-dwelling hackers, the truth is that anyone can benefit from uncovering the tactics behind these break-ins to shore up their account security.

In this article, we break down the main tactics behind instahacking, touch upon some Linux-based tools that security pros utilize, and furnish everyday users with actionable, straightforward ways to tighten their defenses.

Defining Instahacking

In simple terms, instahacking means accessing an Instagram account in violation of the owner’s consent. The motivations of the perpetrator vary: some want to harvest private DMs, others swap out profile pictures to impersonate the victim, and a few may even resort to ransom demands for account restoration. Regardless of the motive, the behavior is both illegal and unethical and has attracted increasing media attention as reported cases multiply.

Yet the same knowledge can be turned to good. Cybersecurity experts and ethical hackers investigate Instagram hacking techniques not to misuse them, but to fortify the platform and help users stay safe.

Common Instahacking Techniques

Familiarity with the tactics hackers use helps us build defenses:  

Phishing  

Phishing seeks to steer victims to a counterfeit Instagram login page. The fake site, almost identical to the real one, captures the username and password as the user logs in. Links to such pages often arrive through deceptive emails, direct messages, or dubious ads.  

Brute Force  

Brute-force attacks rely on automated tools that try countless password combinations until one succeeds. Weak or commonly used passwords risk a successful guess in minutes.  

Keylogging  

A keylogger records every keystroke on an infected device. These covert programs can steal login information, screenshots, and sensitive text without alerting the user.  

Credential Leaks  

Mass theft of password databases from third-party platforms puts users at risk if they re-use passwords. When a site suffers a leak, any identical Instagram password lets an attacker in seamlessly.  

Social Engineering  

Social engineering sidesteps technical vulnerabilities, manipulating users into revealing information with convincing impersonations, urgency, or other psychological pressures.

Threat actors increasingly impersonate Instagram support or leverage leaked personal data to manipulate users into surrendering credentials.  

Linux Tools for Security Testing

Cybersecurity practitioners value Linux for its transparency and breadth of free, robust security-testing tools. Although adversaries sometimes weaponize them, the white-hat community uses the same arsenal to spot vulnerabilities and reinforce defenses. Here are core resources routinely deployed in engagements.

Hydra  

A parallelized brute-force tool that tests password candidates against a wide range of network services in one operation. Legitimate testers use Hydra to gauge the strength of user credentials across their applications and harden them accordingly.

John the Ripper  

A high-performance password cracker used to audit password hashes from shadow files or equivalent databases. Audit teams review the results to verify compliance with internal password policies and to raise organizational awareness.

Wireshark  

A real-time, cross-platform packet analyzer that captures traffic to PCAP files, revealing live traffic patterns and flagging anomalous data flowing into or out of the network. Security analysts routinely use it to spot indicators of phishing, credential harvesting, or unauthorized exfiltration.

Metasploit Framework  

A modular, open-source exploitation framework that lets testers confirm and prioritize vulnerabilities by running exploit modules against systems that are explicitly in scope. Guided, real-time audits give development teams a consolidated view of severity along with actionable mitigation steps.

Deployed within a validated scope and in line with corporate policy, these Linux tools provide crucial visibility and resilience against evolving credential- and impersonation-based attacks.

How to Keep Your Instagram Account Safe from Unwanted Access  

As cybercriminals constantly refine their methods, there are practical ways you can tighten security on your Instagram profile:  

  • Choose Passwords Wisely: Combine uppercase, lowercase, numbers, and symbols, and aim for at least 12 random characters, steering clear of obvious phrases (see the sketch after this list).
  • Turn on Two-Factor Authentication: This forces you to verify your identity with a temporary code each time you log on from a new device.  
  • Stay Away from Fishy Links: Links promising free followers, giveaways, or viewer stats are bait—clicking them can hand over your credentials.  
  • Refresh Your Password Periodically: Change it every 60 to 90 days, even if you suspect no breach—but even a new password doesn’t substitute for good habits.  
  • Check Recent Logins: Head to the security settings to review the devices that have accessed your account. Remove anything that looks strange.  
  • Don’t Log In over Public Wi-Fi: Hackers can eavesdrop on poorly secured networks and grab your login info as you type it in.  
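
As a minimal illustration of the first tip, here is a rough Python self-check for password strength; the rules are a simplification, not a formal policy:

import re

def password_is_strong(pw: str) -> bool:
    # At least 12 characters with upper, lower, digit, and symbol classes present
    checks = [
        len(pw) >= 12,
        re.search(r"[A-Z]", pw),
        re.search(r"[a-z]", pw),
        re.search(r"[0-9]", pw),
        re.search(r"[^A-Za-z0-9]", pw),
    ]
    return all(checks)

print(password_is_strong("correct-Horse7-battery"))  # True
print(password_is_strong("instagram123"))            # False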

Legal Balance on Instagram Security  

Any form of unauthorized access to another person’s Instagram, with the intent to harm, exploit, or defraud, is a crime with severe penalties. Conversely, ethical hackers—experts who are granted permission to probe a company’s defenses—play a constructive role in keeping our accounts safe. By exposing and fixing vulnerabilities in a controlled environment, they help Instagram, and similar platforms, block the very methods that malicious actors would eventually apply. Their work is part of the defense that keeps billions of accounts secure, every day.

Security experts comb through instahacking tactics using legitimate software so they can patch slip-ups before bad actors can pounce. Ordinary Insta users, however, still get the strongest protection just by staying alert and thinking before they tap.

Conclusion

Talk around instahacking has surged lately, sparked by the flood of account swaps and worries about Instagram’s safety. From crafty phishing schemes and brute-force guesses to sneaky keyloggers and social-engineering tricks, the methods vary—but the same intel can teach users to outsmart the criminals. Command-line tools like Hydra, John, Wireshark, and Metasploit, when wielded with a duty of care, help lock down the very holes they probe.

The mission isn’t to expose weaknesses for a cheap thrill; it’s to help everyone cultivate a kind of immune system for their devices. Keep security settings tight, practice smart scrolling, and the odds of landing in a hacker’s crosshairs drop sharply.

Hunger for more grounded tips on what’s happening in the tech and threat landscape? Keep scrolling with Hackersking.


SWIFT Security Controls: Best Practices for Financial Institutions

4 June 2025 at 15:15

Last Updated on September 2, 2025 by Narendra Sahoo

SWIFT, the global backbone for secure financial messaging, plays a critical role in enabling fast and reliable cross-border transactions. But as cyber threats grow more advanced, financial institutions must implement robust SWIFT security controls to safeguard their systems and prevent fraud.

The SWIFT Customer Security Programme (CSP) was established to enhance cybersecurity hygiene across its network, helping institutions protect against fraud and cyberattacks. This article explores key security controls within the SWIFT CSP compliance framework and outlines best practices for financial institutions to strengthen their SWIFT security posture.

What is SWIFT CSP?

The SWIFT CSP, launched in 2016, is designed to mitigate cybersecurity risks and enhance the overall security of financial institutions. The program includes the Customer Security Controls Framework (CSCF), which defines both mandatory and advisory security controls based on industry standards such as NIST, ISO 27001/2, and PCI DSS 4.0. These controls aim to secure financial institutions’ environments, restrict unauthorized access, and ensure timely detection and response to potential threats.

To learn more about SWIFT CSP, you may also check out our informative video on – What is the SWIFT Customer Security Programme (CSP)?

Key Security Controls in the SWIFT Framework

The SWIFT CSCF comprises 32 security controls, of which 25 are mandatory and 7 are advisory. Mandatory controls are considered extremely important because they set the security baseline that all users must adhere to, while advisory controls are recommended by SWIFT as best practices but are not strictly enforced.

Here are the three core objectives of SWIFT CSCF:

Secure Your Environment – Implementing controls to protect SWIFT-related systems from external and internal threats.

Know and Limit Access – Ensuring that only authorized personnel have access to critical systems.

Detect and Respond – Monitoring and responding to security incidents in a timely manner.

Below is the list of the 32 security controls with their principles.

1. Restrict Internet Access and Protect Critical Systems from General IT Environment

  • 1.1 SWIFT Environment Protection
  • 1.2 Operating System Privileged Account Control
  • 1.3 Virtualisation or Cloud Platform Protection
  • 1.4 Restriction of Internet Access
  • 1.5 Customer Environment Protection

2. Reduce Attack Surface and Vulnerabilities

  • 2.1 Internal Data Flow Security
  • 2.2 Security Updates
  • 2.3 System Hardening
  • 2.4A Back Office Data Flow Security
  • 2.5A External Transmission Data Protection
  • 2.6 Operator Session Confidentiality and Integrity
  • 2.7 Vulnerability Scanning
  • 2.8 Outsourced Critical Activity Protection
  • 2.9 Transaction Business Controls
  • 2.10 Application Hardening
  • 2.11A RMA Business Controls

3. Physically Secure the Environment

  • 3.1 Physical Security

4. Prevent Compromise of Credentials

  • 4.1 Password Policy
  • 4.2 Multi-Factor Authentication

5. Manage Identities and Separate Privileges

  • 5.1 Logical Access Control
  • 5.2 Token Management
  • 5.3A Staff Screening Process
  • 5.4 Password Repository Protection

6. Detect Anomalous Activity to Systems or Transaction Records

  • 6.1 Malware Protection
  • 6.2 Software Integrity
  • 6.3 Database Integrity
  • 6.4 Logging and Monitoring
  • 6.5A Intrusion Detection

7. Plan for Incident Response and Information Sharing

  • 7.1 Cyber Incident Response Planning
  • 7.2 Security Training and Awareness
  • 7.3A Penetration Testing
  • 7.4A Scenario-based Risk Assessment

Best Practices for Financial Institutions to Enhance SWIFT Security

Being SWIFT CSP compliant can bring many advantages to your organization along with enhanced security controls. To align with SWIFT CSP requirements, you should consider the following best practices:

1. Adopt a Risk-Based Approach

  • Conduct regular risk assessments to identify vulnerabilities and address them proactively.
  • Prioritize security measures based on potential impact and threat landscape.

2. Strengthen Access Controls

  • Enforce the principle of least privilege by restricting access based on roles and responsibilities.
  • Implement robust authentication mechanisms such as MFA.
  • Regularly review and update access permissions.

3. Enhance Network Segmentation

  • Isolate SWIFT-related infrastructure from general IT environments.
  • Use firewalls and secure VPNs to control and monitor network traffic.

4. Implement Continuous Monitoring and Threat Detection

  • Deploy Security Information and Event Management (SIEM) solutions for real-time monitoring.
  • Regularly analyze logs to detect and respond to suspicious activities.

5. Regularly Update and Patch Systems

  • Apply security updates to all SWIFT-related components to mitigate known vulnerabilities.
  • Conduct periodic penetration testing to identify and remediate security gaps.

6. Enhance Security Awareness and Training

  • Train employees on phishing, social engineering, and cybersecurity best practices.
  • Conduct regular security drills to test incident response readiness.

Importance of Engaging Independent Assessors

To ensure compliance with SWIFT CSP requirements and improve security maturity, financial institutions should engage independent assessors. These experts:

  • Provide an unbiased evaluation of SWIFT security implementation.
  • Identify gaps in security controls and recommend improvements.
  • Assist in compliance reporting and attestation processes.

By working with independent assessors, financial institutions can enhance their security resilience, meet regulatory expectations, and mitigate risks effectively.

Conclusion

SWIFT security is a critical component of financial institutions’ cybersecurity strategy. By implementing the best practices outlined in this article and adhering to SWIFT CSP security controls, you can protect your organization’s infrastructure, prevent fraudulent activities, and build a secure financial ecosystem.

Want to assess your SWIFT compliance or need expert guidance on securing your infrastructure? Fill out our inquiry form today and let our experts assist you in achieving a strong and compliant SWIFT security framework.

The post SWIFT Security Controls: Best Practices for Financial Institutions appeared first on Information Security Consulting Company - VISTA InfoSec.

SWIFT Customer Security Programme: What You Need to Know to Stay Compliant?

5 May 2025 at 08:01

The SWIFT Customer Security Programme (CSP) is a security framework developed by SWIFT to improve the cyber security posture of financial institutions connected to its network. It aims to fight growing cyber threats by providing a structured set of 32 SWIFT security controls that institutions must implement to safeguard their SWIFT-related infrastructure.

These controls are grouped under three key objectives: Secure Your Environment, Know and Limit Access, and Detect and Respond. To learn more about the key objectives and principles of the CSP check out this quick guide to SWIFT CSP.

In this article, we will explore the key steps to ensure compliance with SWIFT CSP, common compliance challenges and their solutions, and the consequences of SWIFT CSP non-compliance. So, let’s get started!

Steps for achieving SWIFT CSP compliance

1. Understand the SWIFT CSP framework

Review the SWIFT Customer Security Controls Framework (CSCF) through the SWIFT CSP portal to understand all the security requirements there related to secure communication, operations, and cybersecurity.

2. Conduct a self-assessment

  • Perform gap analysis to assess your current security posture.
  • Complete the SWIFT CSP compliance questionnaire to check the current alignment with the required controls.

3. Implement security controls

  • Deploy required cybersecurity measures like multi-factor authentication (MFA), data encryption, and segregation of duties.
  • Update internal security policies to meet SWIFT CSP standards and set up continuous security monitoring.

4. Engage in SWIFT’s assurance process

  • If needed, hire a third-party auditor for a formal review and assurance report. Alternatively, complete self-certification to declare compliance.

5. Address gaps and remediate

  • Implement corrective actions for any identified non-compliance areas.
  • Test the security controls to ensure they meet SWIFT’s standards.

6. Regular reviews and updates

  • Continuously monitor and update security measures to stay compliant.
  • Conduct annual reviews to ensure all security controls are current with SWIFT’s evolving requirements.

7. Document and report compliance

  • Maintain detailed records of assessments, audits, and actions taken.
  • Submit required reports to SWIFT, ensuring all documentation is accurate and up to date.

8. Training and Awareness

  • Provide ongoing training for employees on SWIFT CSP requirements and security best practices.
  • Develop a culture of security awareness to reduce risks and ensure compliance.

Common challenges and solutions to maintain compliance

1. Adapting to Evolving Security Standards

The Challenge:

SWIFT frequently updates its CSP requirements to keep up with new threats and vulnerabilities in the financial system. For institutions with limited resources or complex IT environments, staying ahead of these changes can feel like an uphill battle.

The Solution:

Assign a dedicated compliance officer or team to monitor SWIFT updates and ensure they’re reflected in your security controls. Registering with the SWIFT Council gives you access to restricted SWIFT materials and immediate updates on any changes or challenges. Make it a routine to review new SWIFT CSP guidelines, adapt your processes, and document every change. Most importantly, communicate these updates across the organization so everyone is on the same page.

2. Resource Constraints

The Challenge:

Meeting SWIFT CSP’s security requirements is no small feat. For smaller institutions or those with tight budgets, implementing and maintaining these measures can be a significant strain.

The Solution:

Focus on what matters most, and prioritize critical controls that address the biggest risks. Take advantage of cost-effective solutions like cloud-based security tools or automation to streamline processes. When resources are stretched thin, consider outsourcing non-core compliance tasks to specialized third-party providers. Ensure you are audited regularly, even internally or by a third party, to confirm that no gaps remain despite the lean resources.

3. Complexity in Security Infrastructure

The Challenge:

Financial institutions often manage sprawling IT systems with diverse technologies and platforms. This complexity can make it challenging to apply SWIFT CSP controls consistently across the board.

The Solution:

Tackle the challenge step by step. Start with a phased approach, prioritizing high-risk areas first. Focus on core security measures like multi-factor authentication (MFA), encryption, and access management. Regularly test your infrastructure to catch integration issues early and ensure everything is working together smoothly. Since both the penalties and the risks are high, it is worth engaging your auditors or consultants to confirm that you are on the right track.

4. Employee Awareness and Training

The Challenge:
Security isn’t just IT’s job; every employee has a role to play. But getting everyone, from technical staff to end users, to understand their part in SWIFT CSP compliance can be a daunting task, especially in large organizations.

The Solution:
Invest in tailored, role-based training programs that emphasize SWIFT CSP requirements and security best practices. Reinforce this knowledge with periodic security awareness campaigns, like phishing simulations, to keep employees on their toes. Develop a culture of security where compliance isn’t just a checkbox but a shared organizational value. Tailor the content to each department and its responsibilities rather than delivering generalized training that covers something as mundane as "What is information security".

5. Continuous Monitoring and Incident Response

The Challenge:
Monitoring security controls around the clock and responding swiftly to incidents can be overwhelming without the right tools and processes in place.

The Solution:
Adopt automated tools for real-time monitoring and incident detection. These systems can flag suspicious activity immediately, allowing your team to act fast. Streamline your response with automated workflows designed to contain threats quickly. Configure alerts so that relevant personnel are notified of critical, time-sensitive events. Don’t forget to regularly review and update your incident response plans to align with SWIFT’s evolving requirements.

6. Third-Party Risk Management

The Challenge:
Your security is only as strong as your weakest link, which often includes third-party vendors. Managing the security posture of external partners can be tricky, especially when their standards don’t match yours.

The Solution:
Set clear expectations for vendors by requiring them to comply with SWIFT CSP controls. Conduct regular audits to ensure they’re meeting these standards and include robust security clauses in your contracts. Make security assessments a non-negotiable part of your vendor onboarding process, and keep these strict checks running on an ongoing basis rather than limiting them to onboarding. Also make sure you have the right to audit in all your agreements.

The consequences of non-compliance

  1. Financial Losses: Exposure to losses from breaches and cyberattacks.
  2. Reputational Damage: Loss of client trust and business opportunities.
  3. Exclusion from SWIFT: Disconnection from SWIFT, halting transactions.
  4. Regulatory Penalties: Fines for failing to meet compliance requirements.
  5. Increased Cyberattack Risk: Greater vulnerability to data breaches and ransomware.
  6. Loss of Client Confidence: Erosion of client trust in data protection.
  7. Legal Liabilities: Risk of legal action from non-compliance.
  8. Operational Disruption: Delays, errors, and compromised systems.
  9. Remediation Costs: High expenses for fixing compliance gaps.

Wrapping Up

Maintaining SWIFT CSP compliance is important for financial institutions to protect against cyber threats, ensure operational resilience, and uphold trust within the global financial system. By following SWIFT’s security guidelines and taking proactive measures to resolve compliance issues, organizations can steer clear of serious repercussions like financial losses, reputational damage, and exclusion from the SWIFT network.

Why trust VISTA InfoSec for SWIFT CSP compliance?

VISTA InfoSec brings decades of expertise in cybersecurity and compliance, offering end-to-end support for cybersecurity and SWIFT CSP Certification. Our team of seasoned professionals and SWIFT CSP assessors understands the complexities of the SWIFT CSP framework and provides tailored solutions to address your unique business needs. Partnering with VISTA InfoSec means leveraging our deep industry knowledge, commitment to excellence, and unwavering focus on securing your organization against evolving cyber threats.

Learn more about the SWIFT Customer Security Programme and the reigning cybersecurity regulations and standards at our official YouTube channel. You may also fill out the ‘Enquire Now’ form for a FREE one-time consultation or contact us at the registered number listed on our website to get started with SWIFT CSP compliance.

The post SWIFT Customer Security Programme: What You Need to Know to Stay Compliant? appeared first on Information Security Consulting Company - VISTA InfoSec.

SWIFT CSP: A Quick Guide for Financial Institutions

20 December 2024 at 01:42

The Society for Worldwide Interbank Financial Telecommunication (SWIFT) provides secure and reliable communication networks for over 11,500 connected financial institutions to facilitate cross-border payments and securities transactions.

But as digital thieves and cyberattacks grew more sophisticated in targeting the financial sector, security incidents rose, which is why SWIFT introduced the SWIFT Customer Security Programme (CSP), a set of cybersecurity requirements designed to protect the global financial ecosystem.

In today’s article, we will explore what SWIFT CSP is, its key objectives, and the compliance checklist, and show how VISTA InfoSec can help you meet the requirements in this exclusive all-in-one SWIFT CSP guide.

What is SWIFT CSP, and why it was introduced?

SWIFT CSP is a cybersecurity initiative established to ensure that financial institutions adopt strong data control measures to protect their environment against cyberattacks. It outlines 32 security controls (25 mandatory and 7 advisory) that financial institutions connected to the SWIFT network must implement to prevent cyber fraud and maintain the integrity of global financial transactions.

SWIFT took the initiative to introduce the Customer Security Programme (CSP) following a series of high-profile cyberattacks in 2016, particularly the Bangladesh Bank heist, which revealed significant vulnerabilities within the local security measures of individual institutions.

Attackers exploited weak local security measures at individual institutions to send fraudulent SWIFT messages, resulting in substantial financial losses. These incidents highlighted the need for a unified security standard across all SWIFT users, and so in 2017 it launched the CSP with the following key objectives:

  1. Strengthening Security: Establishing a consistent baseline of security controls to secure SWIFT-related infrastructure.
  2. Detecting and Responding to Threats: Enhancing the ability of institutions to detect anomalies and respond swiftly to cyber incidents.
  3. Promoting Accountability: Encouraging financial institutions to take responsibility for securing their local environments and ensuring compliance through independent SWIFT CSP assessments.

SWIFT Customer Security Controls Framework | Key Objectives and Principles

Below are the 3 key objectives and 7 principles, as defined in the updated SWIFT CSP framework.

1. Secure Your Environment

  • Restrict Internet access & segregate critical systems from the general IT environment
  • Reduce attack surface and vulnerabilities
  • Physically secure the environment

2. Know and Limit Access

  • Prevent compromise of credentials
  • Manage identities and segregate privileges

3. Detect and Respond

  • Detect anomalous activity in system or transaction records
  • Plan for incident response and information sharing


SWIFT CSP compliance checklist

1. Governance and Oversight

  • Establish a cybersecurity governance framework for SWIFT-related environments.
  • Assign clear accountability for implementing and maintaining SWIFT security controls.
  • Conduct periodic reviews of security policies and compliance measures.

2. Securing the Local Environment

a) Endpoint Protection:

  • Ensure all SWIFT-related applications, systems, and interfaces are secured.
  • Implement strong firewall configurations to prevent unauthorized access.
  • Regularly patch and update software to address known vulnerabilities.

b) Physical Security:

  • Restrict physical access to SWIFT-connected infrastructure.
  • Use surveillance and access controls for server rooms and data centers.

3. Access Control

  • Implement role-based access controls (RBAC) to limit access to critical systems.
  • Use multi-factor authentication (MFA) for SWIFT interfaces and applications.
  • Regularly review and update user access privileges.
  • Disable unused or unnecessary accounts promptly.

4. Secure Messaging Practices

  • Encrypt all financial messages transmitted over the SWIFT network.
  • Monitor messaging flows to detect any anomalies or unauthorized activities.

5. Monitoring and Threat Detection

  • Deploy tools for continuous monitoring of SWIFT-related environments.
  • Implement anomaly detection systems to identify unusual patterns in transactions or system behavior.
  • Conduct regular vulnerability scans and penetration tests.

6. Incident Management

  • Develop and maintain an Incident Response Plan (IRP) specific to SWIFT environments.
  • Test the IRP periodically to ensure its effectiveness in mitigating cyber incidents.
  • Report security incidents to SWIFT promptly, as per the CSP guidelines.

7. Training and Awareness

  • Conduct regular cybersecurity training for employees and stakeholders.
  • Focus on phishing awareness, secure usage of SWIFT systems, and compliance with CSP requirements.

8. Annual Attestation

  • Complete and submit the annual compliance attestation between July and December of each year through the SWIFT KYC Security Attestation application.
  • Include evidence of control implementation and details of any compensatory measures.
  • Share attestation results with counterparties as required.

How VISTA InfoSec can assist with SWIFT CSP Compliance?

VISTA InfoSec is recognized by SWIFT as an authorised auditing organisation. As a CREST-certified organization, VISTA InfoSec’s SWIFT CSP assessors bring extensive expertise in cybersecurity and compliance frameworks. Our team provides end-to-end support, starting with a comprehensive gap assessment to evaluate your current security posture against the requirements of the SWIFT Customer Security Controls Framework (CSCF).

Based on this analysis, we deliver actionable insights to address compliance gaps, implement mandatory and advisory controls, and strengthen your overall cybersecurity infrastructure. Our services are designed to ensure a seamless compliance journey, including policy reviews, risk-based control implementation, and ongoing guidance for annual attestations.

We also offer ‘AuditFusion360’, a one-time audit service for all your compliance needs, including SWIFT CSP, PCI DSS, SOC 2, GDPR, ISO 27001, and more. This unique approach streamlines the compliance process, reduces redundancies, and saves time and resources by addressing multiple frameworks in a single engagement. So, partner with VISTA InfoSec to simplify your compliance efforts and fortify your cybersecurity posture while ensuring adherence to SWIFT CSP requirements.

The post SWIFT CSP: A Quick Guide for Financial Institutions appeared first on Information Security Consulting Company - VISTA InfoSec.

Synergy between cyber security Mesh & the CISO role: Adaptability, visibility & control

By: slandau
22 July 2024 at 09:00

With over two decades of experience in the cyber security industry, I specialize in advising organizations on how to optimize their financial investments through the design of effective and cost-efficient cyber security strategies. Since the year 2000, I’ve had the privilege of collaborating with various channels and enterprises across the Latin American region, serving in multiple roles ranging from Support Engineer to Country Manager. This extensive background has afforded me a unique perspective on the evolving threat landscape and the shifting needs of businesses in the digital world.

The dynamism of technological advancements has transformed cyber security demands, necessitating more proactive approaches to anticipate and prevent threats before they can impact an organization. Understanding this ever-changing landscape is crucial for adapting to emerging security challenges.

In my current role as the Channel Engineering Manager for LATAM at Check Point, I also serve as part of the Cybersecurity Evangelist team under the office of our CTO. I am focused on merging technical skills with strategic decision-making, encouraging organizations to concentrate on growing their business while we ensure security.

The Cyber Security Mesh framework can safeguard businesses from unwieldy, next-generation cyber threats. In this interview, Check Point Security Engineering Manager Angel Salazar Velasquez discusses exactly how that works. Get incredible insights you didn’t even realize you were missing. Read through this power-house interview and add another dimension to your organization’s security strategy!

Would you like to provide an overview of the Cyber Security Mesh framework and its significance?

The Cyber Security Mesh framework represents a revolutionary approach to addressing cyber security challenges in increasingly complex and decentralized network environments. Unlike traditional security models that focus on establishing a fixed ‘perimeter’ around an organization’s resources, the Mesh framework places security controls closer to the data, devices, and users requiring protection. This allows for greater flexibility and customization, more effectively adapting to specific security and risk management needs.

For CISOs, adopting the Cyber Security Mesh framework means a substantial improvement in risk management capabilities. It enables more precise allocation of security resources and offers a level of resilience that is difficult to achieve with more traditional approaches. In summary, the Mesh framework provides an agile and scalable structure for addressing emerging threats and adapting to rapid changes in the business and technology environment.

How does the Cyber Security Mesh framework differ from traditional cyber security approaches?

Traditionally, organizations have adopted multiple security solutions from various providers in the hope of building a comprehensive defense. The result, however, is a highly fragmented security environment that can lead to a lack of visibility and complex risk management. For CISOs, this situation presents a massive challenge because emerging threats often exploit the gaps between these disparate solutions.

The Cyber Security Mesh framework directly addresses this issue. It is an architecture that allows for better interoperability and visibility by orchestrating different security solutions into a single framework. This not only improves the effectiveness in mitigating threats but also enables more coherent, data-driven risk management. For CISOs, this represents a radical shift, allowing for a more proactive and adaptive approach to cyber security strategy.

Could you talk about the key principles that underlie Cyber Security Mesh frameworks and architecture?

Understanding the underlying principles of Cyber Security Mesh is crucial for evaluating its impact on risk management. First, we have the principle of ‘Controlled Decentralization,’ which allows organizations to maintain control over their security policies while distributing implementation and enforcement across multiple security nodes. This facilitates agility without compromising security integrity.

Secondly, there’s the concept of ‘Unified Visibility.’ In an environment where each security solution provides its own set of data and alerts, unifying this information into a single coherent ‘truth’ is invaluable. The Mesh framework allows for this consolidation, ensuring that risk-related decision-making is based on complete and contextual information. These principles, among others, combine to provide a security posture that is much more resilient and adaptable to the changing needs of the threat landscape.
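
As a purely illustrative sketch of ‘Unified Visibility’ (all names and field mappings here are hypothetical, not drawn from any vendor product), alerts from heterogeneous tools can be normalized into one shared schema before any risk decision is made:

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class UnifiedAlert:
    """One normalized record for alerts from any security tool."""
    source: str         # which product raised the alert
    severity: int       # normalized scale: 1 (info) .. 5 (critical)
    asset: str          # affected device, user, or resource
    timestamp: datetime
    raw: dict           # original payload, kept for forensics

def normalize_email_alert(payload: dict) -> UnifiedAlert:
    # Hypothetical mapping from an email-security tool's fields.
    return UnifiedAlert(
        source="email_gateway",
        severity={"low": 2, "high": 4, "critical": 5}.get(payload["risk"], 1),
        asset=payload["recipient"],
        timestamp=datetime.now(timezone.utc),
        raw=payload,
    )

def normalize_endpoint_alert(payload: dict) -> UnifiedAlert:
    # Hypothetical mapping from an endpoint agent's 0-100 risk score.
    return UnifiedAlert(
        source="endpoint_agent",
        severity=max(1, min(5, payload["score"] // 20)),
        asset=payload["hostname"],
        timestamp=datetime.now(timezone.utc),
        raw=payload,
    )

# Downstream risk logic consumes a single stream of UnifiedAlert
# objects: one coherent 'truth' instead of one format per vendor.

However the mapping is done, the point is that decision-making sees one schema rather than a separate format per product.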

How does the Cyber Security Mesh framework align with or complement Zero Trust?

The convergence of Cyber Security Mesh and the Zero Trust model is a synergy worth exploring. Zero Trust is based on the principle of ‘never trust, always verify,’ meaning that no user or device is granted default access to the network, regardless of its location. Cyber Security Mesh complements this by decentralizing security controls. Instead of having a monolithic security perimeter, controls are applied closer to the resource or user, allowing for more granular and adaptive policies.

This combination enables a much more dynamic approach to mitigating risks. Imagine a scenario where a device is deemed compromised. In an environment that employs both Mesh and Zero Trust, this device would lose its access not only at a global network level but also to specific resources, thereby minimizing the impact of a potential security incident. These additional layers of control and visibility strengthen the organization’s overall security posture, enabling more informed and proactive risk management.
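
A minimal sketch of that per-resource decision, with hypothetical device names and policies, might look like this: every request is checked against both device posture and resource-level policy, so flagging one device revokes its access everywhere at once:

# Fed in real time by threat intelligence across the mesh.
COMPROMISED_DEVICES: set[str] = set()

# Resource-level policy: which roles may reach which resource.
POLICY = {
    "payroll-db": {"finance"},
    "source-repo": {"engineering"},
}

def is_allowed(device_id: str, user_role: str, resource: str) -> bool:
    """Zero Trust check: never trust by default, verify every request."""
    if device_id in COMPROMISED_DEVICES:
        return False                                  # global revocation
    return user_role in POLICY.get(resource, set())  # per-resource control

# When the mesh flags one device, a single update revokes it everywhere:
COMPROMISED_DEVICES.add("laptop-0042")
assert not is_allowed("laptop-0042", "finance", "payroll-db")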

How does the Cyber Security Mesh framework address the need for seamless integration across diverse technologies and platforms?

The Cyber Security Mesh framework is especially relevant today, as it addresses a critical need for seamless integration across various technologies and platforms. In doing so, it achieves Comprehensive security coverage across all potential attack vectors, from endpoints to the cloud. This approach also aims for Consolidation, integrating multiple security solutions into a single operational framework, simplifying management and improving operational efficiency.

Furthermore, the mesh architecture promotes Collaboration among different security solutions and products. This enables a quick and effective response to any threat, facilitated by real-time threat intelligence that can be rapidly shared among multiple systems. At the end of the day, it’s about optimizing security investment while facing key business challenges, such as breach prevention and secure digital transformation.

Can you discuss the role of AI and Machine Learning within the Cyber Security Mesh framework/architecture?

Artificial Intelligence (AI) and Machine Learning play a crucial role in the Cyber Security Mesh ecosystem. These technologies enable more effective and adaptive monitoring, while providing rapid responses to emerging threats. By leveraging AI, more effective prevention can be achieved, elevating the framework’s capabilities to detect and counter vulnerabilities in real-time.

From an operational standpoint, AI and machine learning add a level of automation that not only improves efficiency but also minimizes the need for manual intervention in routine security tasks. In an environment where risks are constantly evolving, this agility and ability to quickly adapt to new threats are invaluable. These technologies enable coordinated and swift action, enhancing the effectiveness of the Cyber Security Mesh.
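
As a toy example of that automation (illustrative only, not any product's detection logic), a simple streaming anomaly check can flag behavior that deviates sharply from a learned baseline and trigger a response with no manual step:

import statistics

def is_anomalous(history: list[float], new_value: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag an observation that deviates sharply from the baseline."""
    if len(history) < 10:                        # too little data to judge
        return False
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1e-9   # avoid division by zero
    return abs(new_value - mean) / stdev > z_threshold

# e.g. logins per hour for one account: a sudden spike triggers an
# automated containment action instead of waiting for manual review.
logins = [4.0, 5.0, 3.0, 6.0, 4.0, 5.0, 5.0, 4.0, 6.0, 5.0]
if is_anomalous(logins, 60.0):
    print("quarantine account")                  # illustrative response hook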

What are some of the challenges or difficulties that organizations may see when trying to implement Mesh?

The implementation of a Cyber Security Mesh framework is not without challenges. One of the most notable obstacles is the inherent complexity of this mesh architecture, which can hinder effective security management. Another significant challenge is the technological and knowledge gap that often arises in fragmented security environments. Added to these is the operational cost of integrating and maintaining multiple security solutions in an increasingly diverse and dynamic ecosystem.

However, many of these challenges can be mitigated if a robust technology offering centralized management is in place. This approach reduces complexity and closes the gaps, allowing for more efficient and automated operation. Additionally, a centralized system can offer continuous learning as it integrates intelligence from various points into a single platform. In summary, centralized security management and intelligence can be the answer to many of the challenges that CISOs face when implementing the Cyber Security Mesh.

How does the Cyber Security Mesh Framework/Architecture impact the role of traditional security measures, like firewalls and IPS?

Cyber Security Mesh has a significant impact on traditional security measures like firewalls and IPS. In the traditional paradigm, these technologies act as gatekeepers at the entry and exit points of the network. However, with the mesh approach, security is distributed and more closely aligned with the fluid nature of today’s digital environment, where perimeters have ceased to be fixed.

Far from making them obsolete, the Cyber Security Mesh framework allows firewalls and IPS to transform and become more effective. They become components of a broader and more dynamic security strategy, where their intelligence and capabilities are enhanced within the context of a more flexible architecture. This translates into improved visibility, responsiveness, and adaptability to new types of threats. In other words, traditional security measures are not eliminated, but integrated and optimized in a more versatile and robust security ecosystem.

Can you describe real-world examples that show the use/success of the Cyber Security Mesh Architecture?

Absolutely! In a company that had adopted a Cyber Security Mesh architecture, a sophisticated multi-vector attack was detected targeting its employees through various channels: corporate email, Teams, and WhatsApp. The attack included a malicious file that exploited a zero-day vulnerability. The first line of defense, ‘Harmony Email and Collaboration,’ intercepted the file in corporate email, identified it as dangerous by leveraging its sandboxing technology, and updated its real-time threat intelligence cloud.

When an attempt was made to deliver the same malicious file through Microsoft Teams, the company was already one step ahead. The security architecture also extends to collaboration platforms, so the file was immediately blocked before it could cause harm. Almost simultaneously, another employee received an attack attempt through WhatsApp, which was neutralized by the mobile device security solution, aligned with the same threat intelligence cloud.

This comprehensive and coordinated security strategy demonstrates the strength and effectiveness of the Cyber Security Mesh approach, which allows companies to always be one step ahead, even when facing complex and sophisticated multi-vector attacks. The architecture allows different security solutions to collaborate in real-time, offering effective defense against emerging and constantly evolving threats.

The result is solid security that blocks multiple potential entry points before they can be exploited, thus minimizing risk and allowing the company to continue its operations without interruption. This case exemplifies the potential of a well-implemented and consolidated security strategy, capable of addressing the most modern and complex threats.
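
The coordination in this example can be pictured with a short, hypothetical sketch (not Check Point's actual implementation): each channel consults one shared intelligence store keyed by file hash, so a verdict learned on any channel instantly protects all the others:

import hashlib

# Shared real-time threat intelligence: file hash -> verdict.
THREAT_CLOUD: dict[str, str] = {}

def sandbox_analyze(data: bytes) -> str:
    # Stand-in for sandbox detonation; assume it labels this sample bad.
    return "malicious"

def inspect_attachment(channel: str, data: bytes) -> str:
    """Every channel (email, Teams, WhatsApp) consults the same store."""
    digest = hashlib.sha256(data).hexdigest()
    verdict = THREAT_CLOUD.get(digest)
    if verdict is None:
        verdict = sandbox_analyze(data)   # slow path: detonate once
        THREAT_CLOUD[digest] = verdict    # verdict now shared everywhere
    if verdict == "malicious":
        print(f"{channel}: blocked known-bad file")
    return verdict

# Email sees the file first and pays the sandbox cost once...
inspect_attachment("email", b"exploit-payload")
# ...Teams and WhatsApp then block it instantly from the shared cache.
inspect_attachment("teams", b"exploit-payload")
inspect_attachment("whatsapp", b"exploit-payload")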

Is there anything else that you would like to share with the CyberTalk.org audience?

To conclude, the Cyber Security Mesh approach aligns well with the three key business challenges that every CISO faces:

Breach and Data Leak Prevention: The Cyber Security Mesh framework is particularly strong in offering an additional layer of protection, enabling effective prevention against emerging threats and data breaches. This aligns perfectly with our first ‘C’ of being Comprehensive, ensuring security across all attack vectors.

Secure Digital and Cloud Transformation: The flexibility and scalability of the Mesh framework make it ideal for organizations in the process of digital transformation and cloud migration. Here comes our second ‘C’, which is Consolidation. We offer a consolidated architecture that unifies multiple products and technologies, from the network to the cloud, thereby optimizing operational efficiency and making digital transformation more secure.

Security Investment Optimization: Finally, the operational efficiency achieved through a Mesh architecture helps to optimize the security investment. This brings us to our third ‘C’ of Collaboration. The intelligence shared among control points, powered by our ThreatCloud intelligence cloud, enables quick and effective preventive action, maximizing the return on security investment.

In summary, Cyber Security Mesh is not just a technological solution, but a strategic framework that strengthens any CISO’s stance against current business challenges. It ideally complements our vision and the three C’s of Check Point, offering an unbeatable value proposition for truly effective security.

The post Synergy between cyber security Mesh & the CISO role: Adaptability, visibility & control appeared first on CyberTalk.

REST-Attacker - Designed As A Proof-Of-Concept For The Feasibility Of Testing Generic Real-World REST Implementations

By: Unknown
7 January 2023 at 06:30


REST-Attacker is an automated penetration testing framework for APIs following the REST architecture style. The tool's focus is on streamlining the analysis of generic REST API implementations by completely automating the testing process - including test generation, access control handling, and report generation - with minimal configuration effort. Additionally, REST-Attacker is designed to be flexible and extensible with support for both large-scale testing and fine-grained analysis.

REST-Attacker is maintained by the Chair of Network & Data Security at Ruhr University Bochum.


Features

REST-Attacker currently provides these features:

  • Automated generation of tests
    • Utilize an OpenAPI description to automatically generate test runs
    • 32 integrated security tests based on OWASP and other scientific contributions
    • Built-in creation of security reports
  • Streamlined API communication
    • Custom request interface for the REST security use case (based on the Python3 requests module)
    • Communicate with any generic REST API
  • Handling of access control
    • Background authentication/authorization with the API
    • Support for the most popular access control mechanisms: OAuth2, HTTP Basic Auth, API keys, and more (see the generic sketch after this list)
  • Easy to use & extend
    • Usable as standalone (CLI) tool or as a module
    • Adapt test runs to specific APIs with extensive configuration options
    • Create custom test cases or access control schemes with the tool's interfaces
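
To make the supported access control mechanisms concrete, here is a generic sketch using the Python requests module (which REST-Attacker's request interface is based on). The URL, key, and token values are placeholders, and this is not REST-Attacker's internal interface:

import requests

BASE = "https://api.example.com"  # hypothetical REST API

# HTTP Basic Auth: username/password sent with each request.
requests.get(f"{BASE}/items", auth=("user", "secret"))

# API key: typically a static header (or query parameter).
requests.get(f"{BASE}/items", headers={"X-API-Key": "my-key"})

# OAuth2: a bearer access token, obtained beforehand from the
# authorization server, attached to every request.
token = "<access-token>"  # placeholder
requests.get(f"{BASE}/items", headers={"Authorization": f"Bearer {token}"})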

Install

Get the tool by downloading or cloning the repository:

git clone https://github.com/RUB-NDS/REST-Attacker.git

You need Python 3.10 or later to run the tool.

You also need to install the required packages with pip:

python3 -m pip install -r requirements.txt

Quickstart

Here you can find a quick rundown of the most common and useful commands. You can find more information on each command, and on other available configuration options, in our usage guides.

Get the list of supported test cases:

python3 -m rest_attacker --list

Basic test run (with load-time test case generation):

python3 -m rest_attacker <cfg-dir-or-openapi-file> --generate

Full test run (with load-time and runtime test case generation + rate limit handling):

python3 -m rest_attacker <cfg-dir-or-openapi-file> --generate --propose --handle-limits

Test run with only selected test cases (here, only scopes.TestTokenRequestScopeOmit and resources.FindSecurityParameters are generated):

python3 -m rest_attacker <cfg-dir-or-openapi-file> --generate --test-cases scopes.TestTokenRequestScopeOmit resources.FindSecurityParameters

Rerun a test run from a report:

python3 -m rest_attacker <cfg-dir-or-openapi-file> --run /path/to/report.json

Documentation

Usage guides and configuration format documentation can be found in the documentation subfolders.

Troubleshooting

For fixes/mitigations for known problems with the tool, see the troubleshooting docs or the Issues section.

Contributing

Contributions of all kinds are appreciated! If you found a bug or want to make a suggestion or feature request, feel free to create a new issue in the issue tracker. You can also submit fixes or code amendments via a pull request.

Unfortunately, we can be very busy sometimes, so it may take a while before we respond to comments in this repository.

License

This project is licensed under GNU LGPLv3 or later (LGPL3+). See COPYING for the full license text and CONTRIBUTORS.md for the list of authors.


