PCI DSS 4.0.1 Compliance made simple with latest updates
Last Updated on September 26, 2025 by Narendra Sahoo
The world of payment security never stands still, and neither does PCI DSS. PCI DSS 4.0.1 is the latest update and the current talk of the town. Don't worry: it isn't heavy on changes, but it does make a noticeable difference in clarity and usability.
The Payment Card Industry Data Security Standard (PCI DSS v4.0) is a data security framework that helps businesses keep their customers' sensitive data safe. Every organization, regardless of size and location, that handles customers' payment card data has to be PCI DSS compliant. PCI DSS v4.0 consists of 12 main requirements, categorized under 6 core principles, that every organization must adhere to in order to maintain compliance.
Since 2008, four years after it was first introduced, PCI DSS has undergone multiple revisions to keep pace with emerging cyber threats and evolving payment technologies. With each update, organizations are expected to refine their security practices to meet stricter compliance expectations.
Now, with PCI DSS 4.0.1, organizations must once again adapt to the latest regulatory changes. But what does this latest version bring to the table, and how can your organization ensure a smooth transition? Let’s take a closer look.
Introduction to PCI DSS v4.0.1
PCI DSS 4.0.1 is a revised version of PCI DSS v4.0, published by the PCI Security Standards Council (PCI SSC) on June 11, 2024. The latest version focuses on minor adjustments, such as formatting corrections and clarifications, rather than introducing new requirements. Importantly, PCI DSS 4.0.1 does not add, delete, or modify any existing requirements. So organizations that have already started transitioning to PCI DSS 4.0 won't face any drastic changes, but it is crucial to understand the key updates to ensure full compliance.
PCI DSS 4.0.1 changes
We know PCI DSS 4.0.1 does not introduce any brand-new requirements, so what kind of refinements does it bring, and are they worth noting?
The answer is yes: they are worth noting, and you should address them to stay compliant. The new updates aim to enhance clarity, consistency, and usability rather than overhaul existing security controls in PCI DSS.
Below are some of the significant updates in PCI DSS 4.0.1:
- Improved Requirement Clarifications: The PCI Security Standards Council (PCI SSC) has fine-tuned the wording of several requirements to remove ambiguity. This ensures businesses have a clearer understanding of what’s expected.
- Formatting Enhancements: To ensure uniformity across the framework, some sections have been reformatted. This may not impact your technical security controls but will help streamline audits and documentation.
- Additional Implementation Guidance: Organizations now have more explanatory notes to assist them in correctly implementing security controls and compliance measures.
- No Change in Compliance Deadlines: The transition deadline to PCI DSS 4.0 remains firm—March 31, 2025—so organizations need to stay on track with their compliance efforts.
- Alignment with Supporting Documents: Updates ensure consistency across various PCI DSS-related materials like Self-Assessment Questionnaires (SAQs) and Reports on Compliance (ROCs), making assessments more straightforward.
Steps to comply with the new version of PCI DSS 4.0.1
1) Familiarize Yourself with PCI DSS 4.0.1 Updates
- Review the official documentation from the PCI Security Standards Council.
- Understand the refinements and how they apply to your current compliance efforts.
- If you’re already transitioning to PCI DSS 4.0, confirm that 4.0.1 does not require any drastic modifications.
2) Conduct a Compliance Gap Analysis
- Compare your existing security controls against PCI DSS 4.0.1 to identify areas needing adjustment.
- Engage with internal stakeholders to assess any potential compliance gaps.
3) Update Policies and Documentation
- Revise internal policies, security documentation, and operational procedures to align with clarified requirements.
- Ensure that SAQs, ROCs, and Attestations of Compliance (AOCs) reflect the latest version.
4) Validate Security Controls
- Perform security assessments, penetration testing, and vulnerability scans to confirm compliance.
- Make necessary adjustments based on the refined guidance provided in PCI DSS 4.0.1.
5) Train Your Team on Key Updates
- Conduct training sessions to educate staff and stakeholders on clarified expectations.
- Ensure that compliance teams understand how the changes affect security protocols.
6) Consult a Qualified Security Assessor (QSA)
- If your organization requires external validation, work closely with an experienced QSA (like the experts from VISTA InfoSec) to confirm that your compliance strategy meets PCI DSS 4.0.1 expectations.
- Address any concerns raised by the assessor to avoid compliance delays.
7) Maintain Continuous Compliance and Monitoring
- Implement robust logging, monitoring, and threat detection mechanisms.
- Regularly test and update security controls to stay ahead of evolving cyber threats.
8) Prepare for the March 2025 Compliance Deadline
- Keep track of your progress to ensure you meet the transition deadline.
- If you’re already compliant with PCI DSS 4.0, verify that all adjustments from v4.0.1 are incorporated into your security framework.
FAQs
- What are the main changes in PCI DSS 4.0.1 compared to 4.0?
PCI DSS 4.0.1 introduces clarifications, minor corrections, and additional guidance to make existing requirements in PCI DSS 4.0 easier to understand and implement.
- Why was PCI DSS 4.0.1 released so soon after PCI DSS 4.0?
PCI DSS 4.0.1 was released to address feedback from organizations and assessors, ensuring requirements are clear, consistent, and practical without changing the core security goals of version 4.0.
- How should organizations prepare for PCI DSS 4.0.1?
Organizations should review the updated documentation, perform a gap analysis, update policies and procedures if needed, and confirm alignment with the clarified requirements.
- Are there new technical requirements in PCI DSS 4.0.1?
No new technical requirements were added. PCI DSS 4.0.1 focuses on clarifications and corrections to help organizations implement PCI DSS 4.0 more effectively.
- What happens if my business does not comply with PCI DSS 4.0.1?
Failure to comply with PCI DSS 4.0.1 can lead to fines, loss of the ability to process card payments, and increased risk of data breaches due to weak security practices.
Conclusion
PCI DSS compliance isn't just a checkbox exercise; it is a fundamental commitment to safeguarding your customers' data and strengthening cybersecurity. While PCI DSS 4.0.1 may not introduce major changes, its refinements are a crucial reminder that security is an ongoing journey, not a one-time effort. With the March 2025 compliance deadline fast approaching, now is the time to assess, adapt, and act.
Need expert guidance to navigate PCI DSS 4.0.1 seamlessly? Partner with us at VISTA InfoSec for a smooth, hassle-free transition to the latest version of PCI DSS. Because in payment security, compliance is just the beginning; true protection is the real goal.
Innovator Spotlight: 360 Privacy
The Future of Cyber Resilience The algorithms are hunting us. Not with malicious code, but with something far more insidious. During a recent Black Hat Conference roundtable hosted by Chuck...
Innovator Spotlight: Harness
Securing the Digital Frontier: How AI is Reshaping Application Security The software development landscape is transforming at breakneck speed. Developers now generate code faster than ever, but this acceleration comes...
Innovator Spotlight: NetBrain
Network Visibility: The Silent Guardian of Cybersecurity Network complexity is killing enterprise security teams. Buried under mountains of configuration data, manual processes, and endless troubleshooting, cybersecurity professionals are drowning in...
PCI DSS 4.0 Readiness Roadmap: A Complete Audit Strategy for 2025
Last Updated on December 2, 2025 by Narendra Sahoo
Getting PCI DSS compliant is like preparing for a big exam. You cannot walk into it blind: you first need to prepare, identify your weak areas, fix them, and only then face the audit. If you are here for the roadmap, I assume you are preparing for an audit now or sometime in the future, and I hope this PCI DSS 4.0 Readiness Roadmap serves as your preparation guide. So, let's get started!
Step 1: List down everything in scope
The first mistake many companies make is not knowing what is really in PCI scope. So, start with an inventory.
This is one area where many organizations rely on PCI DSS compliance consultants to help them correctly identify what truly falls under cardholder data scope.
- Applications: Your payment gateway (Stripe, Razorpay, PayPal, Adyen), POS software, billing apps like Zoho Billing, CRMs like Salesforce that store customer details, in-house payment apps.
- Databases: MySQL, Oracle, SQL Server, MongoDB that store PAN or related card data.
- Servers: Web servers (Apache, Nginx, IIS), application servers (Tomcat, Node.js), DB servers.
- Hardware: POS terminals, card readers, firewalls (Fortinet, Palo Alto, Checkpoint), routers, load balancers (F5).
- Cloud platforms: AWS (S3 buckets, RDS, EC2), Azure, GCP, SaaS apps that store or process card data.
- Third parties: Payment processors, outsourced call centers handling cards, hosting providers.
Write all this down in a spreadsheet. Mark which ones store, process, or transmit card data. This becomes your “scope map.”
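If you prefer to keep the scope map machine-readable rather than maintaining it by hand, a minimal Python sketch can generate the same spreadsheet. The file name, column names, and example rows below are purely illustrative assumptions; adapt them to your own inventory.

```python
# A minimal, illustrative scope-map writer. Columns and example rows are
# placeholders for demonstration only.
import csv

scope_map = [
    # asset,               type,          stores, processes, transmits
    ("Magento storefront", "application", "no",  "yes", "yes"),
    ("MySQL orders DB",    "database",    "yes", "no",  "no"),
    ("Fortinet firewall",  "hardware",    "no",  "no",  "yes"),
]

with open("pci_scope_map.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["asset", "type", "stores_CHD", "processes_CHD", "transmits_CHD"])
    writer.writerows(scope_map)
```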
Step 2: Do a gap check (compare with PCI DSS 4.0 requirements)
Now take the PCI DSS 4.0 standard and see what applies to you. Some basics:
- Firewalls – Do you have them configured properly or are they still at default rules?
- Passwords – Are your systems still using “welcome123” or weak defaults? PCI needs strong auth.
- Encryption – Is card data encrypted at rest (DB, disk) and in transit (TLS 1.2+)? If not, you may fail your PCI DSS compliance audit.
- Logging – Are you logging access to sensitive systems, and storing logs securely (like in Splunk, ELK, AWS CloudTrail)?
- Access control – Who has access to DB with card data? Is it limited on a need-to-know basis?
Example: If you’re running an e-commerce store on Magento and it connects to MySQL, check if your DB is encrypted and whether DB access logs are kept.
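For the encryption-in-transit item above, a quick way to confirm what a server actually negotiates is a short Python check; the hostname below is a placeholder, and PCI DSS expects TLS 1.2 or higher.

```python
# Quick TLS spot check: reports the protocol version negotiated with a server.
# The hostname is a placeholder; replace it with your own endpoint.
import socket
import ssl

def negotiated_tls_version(host: str, port: int = 443) -> str:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.2" or "TLSv1.3"

print(negotiated_tls_version("www.example.com"))
```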
Step 3: Fix the weak spots (prioritize risks)
- If your POS terminals are outdated (like old Verifone models), replace or upgrade.
- If your AWS S3 buckets storing logs are public, fix them immediately.
- If employees are using personal laptops to process payments, enforce company-managed devices with endpoint security (like CrowdStrike, Microsoft Defender ATP).
- If your database with card data is open to all developers, restrict it to just DB admins.
Real story: A retailer I advised still had their POS terminals running Windows XP. They were shocked when I said PCI won't even allow XP, as it's unsupported.
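For the public S3 bucket example above, one way to close the gap programmatically is to enable the bucket's public access block. A rough boto3 sketch, assuming AWS credentials are already configured and using a placeholder bucket name:

```python
# Blocks all public access on a bucket flagged during the gap check.
# Assumes boto3 is installed and AWS credentials are configured; the bucket
# name is a placeholder.
import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="my-log-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```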
Step 4: Train your people
PCI DSS is not just about tech. If your staff doesn't know the rules, they'll break controls.
- Train call center staff not to write card numbers on paper.
- Train IT admins to never copy card DBs to their laptops for “testing.”
- Train developers to follow secure coding (OWASP Top 10, no hard-coded keys). This not only helps with PCI but also complements SOC 2 compliance.
Example: A company using Zendesk for support had to train agents not to ask customers for card details over chat or email.
Step 5: Set up continuous monitoring
Auditors don't just look for controls; they look for evidence.
- Centralize your logs in SIEM (Splunk, QRadar, ELK, Azure Sentinel).
- Set up alerts for failed logins, privilege escalations, or DB exports.
- Schedule vulnerability scans (Nessus, Qualys) monthly.
- Do penetration testing on your payment apps (internal and external).
Example: If you are using AWS, enable CloudTrail + GuardDuty to continuously monitor activity.
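If you would rather script the CloudTrail + GuardDuty example above than click through the console, a rough boto3 sketch might look like the following; the trail and bucket names are placeholders, and the bucket must already have a policy allowing CloudTrail to write to it.

```python
# Enables a multi-region CloudTrail trail and a GuardDuty detector.
# Names are placeholders; the S3 bucket must already permit CloudTrail writes.
import boto3

cloudtrail = boto3.client("cloudtrail")
cloudtrail.create_trail(
    Name="pci-audit-trail",
    S3BucketName="my-cloudtrail-logs",
    IsMultiRegionTrail=True,
)
cloudtrail.start_logging(Name="pci-audit-trail")

guardduty = boto3.client("guardduty")
guardduty.create_detector(Enable=True)
```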
Step 6: Do a mock audit (internal readiness check)
Before the official audit, test yourself.
- Pick a PCI DSS requirement (like Requirement 8: Identify users and authenticate access). Check if you can prove strong passwords, MFA, and unique IDs.
- Review if your network diagrams, data flow diagrams, and inventories are up to date.
- Run a mock interview: ask your DB admin how they control access to the DB. If they can’t answer, it means you are not ready.
Example: I’ve seen companies that have everything in place but fail because their staff can’t explain what’s implemented.
Step 7: Engage your QSA (when you’re confident)
Finally, once you have covered all major gaps, bring in a QSA (like us at VISTA InfoSec). A QSA will validate and certify your compliance. But if you follow the above steps, the audit becomes smooth and you can avoid surprises.
We recently helped Vodafone Idea achieve PCI DSS 4.0 certification for their retail stores and payment channels. This was a large-scale environment, yet with the right PCI DSS 4.0 Readiness Roadmap (like the one above), compliance was achieved smoothly.
Remember, even the largest organizations can achieve PCI DSS 4.0 compliance if they start early, follow the roadmap step by step, and keep it practical.
Final Words for PCI DSS 4.0 Readiness Roadmap
Most businesses panic only when the audit date gets close. But PCI DSS doesn’t work that way. If you wait till then, it’s already too late.
So, start now. Even small steps today (like training your staff or fixing one gap) move you closer to compliance.
Having trouble choosing a QSA? VISTA InfoSec is here for you!
For more than 20 years, we at VISTA InfoSec have been helping businesses across fintech, telecom, cloud service providers, retail, and payment gateways achieve and maintain PCI DSS compliance. Our team of Qualified Security Assessors (QSAs) and technical experts works with companies of every size, whether it’s a start-up launching its first payment app or a large enterprise.
So, don’t wait! Book a free PCI DSS strategy call today to discuss your roadmap. You may also book a free one-time consultation with our qualified QSA.
Approach to mainframe penetration testing on z/OS. Deep dive into RACF
In our previous article we dissected penetration testing techniques for IBM z/OS mainframes protected by the Resource Access Control Facility (RACF) security package. In this second part of our research, we delve deeper into RACF by examining its decision-making logic, database structure, and the interactions between the various entities in this subsystem. To facilitate offline analysis of the RACF database, we have developed our own utility, racfudit, which we will use to perform possible checks and evaluate RACF configuration security. As part of this research, we also outline the relationships between RACF entities (users, resources, and data sets) to identify potential privilege escalation paths for z/OS users.
This material is provided solely for educational purposes and is intended to assist professionals conducting authorized penetration tests.
RACF internal architecture
Overall role
To thoroughly analyze RACF, let’s recall its role and the functions of its components within the overall z/OS architecture. As illustrated in the diagram above, RACF can generally be divided into a service component and a database. Other components exist too, such as utilities for RACF administration and management, or the RACF Auditing and Reporting solution responsible for event logging and reporting. However, for a general understanding of the process, we believe these components are not strictly necessary. The RACF database stores information about z/OS users and the resources for which access control is configured. Based on this data, the RACF service component performs all necessary security checks when requested by other z/OS components and subsystems. RACF typically interacts with other subsystems through the System Authorization Facility (SAF) interface. Various z/OS components use SAF to authorize a user’s access to resources or to execute a user-requested operation. It is worth noting that while this paper focuses on the operating principle of RACF as the standard security package, other security packages like ACF2 or Top Secret can also be used in z/OS.
Let’s consider an example of user authorization within the Time Sharing Option (TSO) subsystem, the z/OS equivalent of a command line interface. We use an x3270 terminal emulator to connect to the mainframe. After successful user authentication in z/OS, the TSO subsystem uses SAF to query the RACF security package, checking that the user has permission to access the TSO resource manager. The RACF service queries the database for user information, which is stored in a user profile. If the database contains a record of the required access permissions, the user is authorized, and information from the user profile is placed into the address space of the new TSO session within the ACEE (Accessor Environment Element) control block. For subsequent attempts to access other z/OS resources within that TSO session, RACF uses the information in ACEE to make the decision on granting user access. SAF reads data from ACEE and transmits it to the RACF service. RACF makes the decision to grant or deny access, based on information in the relevant profile of the requested resource stored in the database. This decision is then sent back to SAF, which processes the user request accordingly. The process of querying RACF repeats for any further attempts by the user to access other resources or execute commands within the TSO session.
Thus, RACF handles identification, authentication, and authorization of users, as well as granting privileges within z/OS.
RACF database components
As discussed above, access decisions for resources within z/OS are made based on information stored in the RACF database. This data is kept in the form of records, or as RACF terminology puts it, profiles. These contain details about specific z/OS objects. While the RACF database can hold various profile types, four main types are especially important for security analysis:
- User profile holds user-specific information such as logins, password hashes, special attributes, and the groups the user belongs to.
- Group profile contains information about a group, including its members, owner, special attributes, list of subgroups, and the access permissions of group members for that group.
- Data set profile stores details about a data set, including access permissions, attributes, and auditing policy.
- General resource profile provides information about a resource or resource class, such as resource holders, their permissions regarding the resource, audit policy, and the resource owner.
The RACF database contains numerous instances of these profiles. Together, they form a complex structure of relationships between objects and subjects within z/OS, which serves as the basis for access decisions.
Logical structure of RACF database profiles
Each profile is composed of one or more segments. Different profile types utilize different segment types.
For example, a user profile instance may contain the following segments:
- BASE: core user information in RACF (mandatory segment);
- TSO: user TSO-session parameters;
- OMVS: user session parameters within the z/OS UNIX subsystem;
- KERB: data related to the z/OS Network Authentication Service, essential for Kerberos protocol operations;
- and others.
Different segment types are distinguished by the set of fields they store. For instance, the BASE segment of a user profile contains the following fields:
- PASSWORD: the user’s password hash;
- PHRASE: the user’s password phrase hash;
- LOGIN: the user’s login;
- OWNER: the owner of the user profile;
- AUTHDATE: the date of the user profile creation in the RACF database;
- and others.
The PASSWORD and PHRASE fields are particularly interesting for security analysis, and we will dive deeper into these later.
RACF database structure
It is worth noting that the RACF database is stored as a specialized data set with a specific format. Grasping this format is very helpful when analyzing the DB and mapping the relationships between z/OS objects and subjects.
As discussed in our previous article, a data set is the mainframe equivalent of a file, composed of a series of blocks.
The image above illustrates the RACF database structure, detailing the data blocks and their offsets. From the RACF DB analysis perspective, and when subsequently determining the relationships between z/OS objects and subjects, the most critical blocks include:
- The header block, or inventory control block (ICB), which contains various metadata and pointers to all other data blocks within the RACF database. By reading the ICB, you gain access to the rest of the data blocks.
- Index blocks, which form a singly linked list that contains pointers to all profiles and their segments in the RACF database – that is, to the information about all users, groups, data sets, and resources.
- Templates: a crucial data block containing templates for all profile types (user, group, data set, and general resource profiles). The templates list fields and specify their format for every possible segment type within the corresponding profile type.
Upon dissecting the RACF database structure, we identified the need for a utility capable of extracting all relevant profile information from the DB, regardless of its version. This utility would also need to save the extracted data in a convenient format for offline analysis. Performing this type of analysis provides a comprehensive picture of the relationships between all objects and subjects for a specific z/OS installation, helping uncover potential security vulnerabilities that could lead to privilege escalation or lateral movement.
Utilities for RACF DB analysis
At the previous stage, we defined the following functional requirements for an RACF DB analysis utility:
- The ability to analyze RACF profiles offline without needing to run commands on the mainframe
- The ability to extract exhaustive information about RACF profiles stored in the DB
- Compatibility with various RACF DB versions
- Intuitive navigation of the extracted data and the option to present it in various formats: plaintext, JSON, SQL, etc.
Overview of existing RACF DB analysis solutions
We started by analyzing off-the-shelf tools and evaluating their potential for our specific needs:
- Racf2john extracts user password hashes (from the PASSWORD field) encrypted with the DES and KDFAES algorithms from the RACF database. While this was a decent starting point, we needed more than just the PASSWORD field; specifically, we also needed to retrieve content from other profile fields like PHRASE.
- Racf2sql takes an RACF DB dump as input and converts it into an SQLite database, which can then be queried with SQL. This is convenient, but the conversion process risks losing data critical for z/OS security assessment and identifying misconfigurations. Furthermore, the tool requires a database dump generated by the z/OS IRRDBU00 utility (part of the RACF security package) rather than the raw database itself.
- IRRXUTIL allows querying the RACF DB to extract information. It is also part of the RACF security package. It can be conveniently used with a set of scripts written in REXX (an interpreted language used in z/OS). However, these scripts demand elevated privileges (access to one or more IRR.RADMIN.** resources in the FACILITY resource class) and must be executed directly on the mainframe, which is unsuitable for the task at hand.
- Racf_debug_cleanup.c directly analyzes a RACF DB from a data set copy. A significant drawback is that it only parses BASE segments and outputs results in plaintext.
As you can see, existing tools don’t satisfy our needs. Some utilities require direct execution on the mainframe. Others operate on a data set copy and extract incomplete information from the DB. Moreover, they rely on hardcoded offsets and signatures within profile segments, which can vary across RACF versions. Therefore, we decided to develop our own utility for RACF database analysis.
Introducing racfudit
We have written our own platform-independent utility racfudit in Golang and tested it across various z/OS versions (1.13, 2.02, and 3.1). Below, we delve into the operating principles, capabilities and advantages of our new tool.
Extracting data from the RACF DB
To analyze RACF DB information offline, we first needed a way to extract structured data. We developed a two-stage approach for this:
- The first stage involves analyzing the templates stored within the RACF DB. Each template describes a specific profile type, its constituent segments, and the fields within those segments, including their type and size. This allows us to obtain an up-to-date list of profile types, their segments, and associated fields, regardless of the RACF version.
- In the second stage, we traverse all index blocks to extract every profile with its content from the RACF DB. These collected profiles are then processed and parsed using the templates obtained in the first stage.
The first stage is crucial because RACF DB profiles are stored as unstructured byte arrays. The templates are what define how each specific profile (byte array) is processed based on its type.
Thus, we defined the following algorithm to extract structured data.
- We offload the RACF DB from the mainframe and read its header block (ICB) to determine the location of the templates.
- Based on the template for each profile type, we define an algorithm for structuring specific profile instances according to their type.
- We use the content of the header block to locate the index blocks, which store pointers to all profile instances.
- We read all profile instances and their segments sequentially from the list of index blocks.
- For each profile instance and its segments we read, we apply the processing algorithm based on the corresponding template.
- All processed profile instances are saved in an intermediate state, allowing for future storage in various formats, such as plaintext or SQLite.
The advantage of this approach is its version independence. Even if templates and index blocks change their structure across RACF versions, our utility will not lose data because it dynamically determines the structure of each profile type based on the relevant template.
Analyzing extracted RACF DB information
Our racfudit utility can present collected RACF DB information as an SQLite database or a plaintext file.
Using SQLite, you can execute SQL queries to identify misconfigurations in RACF that could be exploited for privilege escalation, lateral movement, bypassing access controls, or other pentesting tactics. It is worth noting that the set of SQL queries used for processing information in SQLite can be adapted to validate current RACF settings against security standards and best practices. Let’s look at some specific examples of how to use the racfudit utility to uncover security issues.
Collecting password hashes
One of the primary goals in penetration testing is to get a list of administrators and a way to authorize using their credentials. This can be useful for maintaining persistence on the mainframe, moving laterally to other mainframes, or even pivoting to servers running different operating systems. Administrators are typically found in the SYS1 group and its subgroups. The example below shows a query to retrieve hashes of passwords (PASSWORD) and password phrases (PHRASE) for privileged users in the SYS1 group.
select ProfileName,PHRASE,PASSWORD,CONGRPNM from USER_BASE where CONGRPNM LIKE "%SYS1%";
Of course, to log in to the system, you need to crack these hashes to recover the actual passwords. We cover that in more detail below.
Searching for inadequate UACC control in data sets
The universal access authority (UACC) defines the default access permissions to the data set. This parameter specifies the level of access for all users who do not have specific access permissions configured. Insufficient control over UACC values can pose a significant risk if elevated access permissions (UPDATE or higher) are set for data sets containing sensitive data or for APF libraries, which could allow privilege escalation. The query below helps identify data sets with default ALTER access permissions, which allow users to read, delete and modify the data set.
select ProfileName, UNIVACS from DATASET_BASE where UNIVACS LIKE "1%";
The UACC field is not present only in data set profiles; it is also found in other profile types. Weak control in the configuration of this field can give a penetration tester access to resources.
RACF profile relationships
As mentioned earlier, various RACF entities have relationships. Some are explicitly defined; for example, a username might be listed in a group profile within its member field (USERID field). However, there are also implicit relationships. For instance, if a user group has UPDATE access to a specific data set, every member of that group implicitly has write access to that data set. This is a simple example of implicit relationships. Next, we delve into more complex and specific relationships within the RACF database that a penetration tester can exploit.
RACF profile fields
A deep dive into RACF internal architecture reveals that misconfigurations of access permissions and other attributes for various RACF entities can be difficult to detect and remediate in some scenarios. These seemingly minor errors can be critical, potentially leading to mainframe compromise. The explicit and implicit relationships within the RACF database collectively define the mainframe’s current security posture. As mentioned, each profile type in the RACF database has a unique set of fields and attributes that describe how profiles relate to one another. Based on these fields and attributes, we have compiled lists of key fields that help build and analyze relationship chains.
- SPECIAL: indicates that the user has privileges to execute any RACF command and grants them full control over all profiles in the RACF database.
- OPERATIONS: indicates whether the user has authorized access to all RACF-protected resources of the DATASET, DASDVOL, GDASDVOL, PSFMPL, TAPEVOL, VMBATCH, VMCMD, VMMDISK, VMNODE, and VMRDR classes. While actions for users with this field specified are subject to certain restrictions, in a penetration testing context the OPERATIONS field often indicates full data set access.
- AUDITOR: indicates whether the user has permission to access audit information.
- AUTHOR: the creator of the user. It has certain privileges over the user, such as the ability to change their password.
- REVOKE: indicates whether the user can log in to the system.
- Password TYPE: specifies the hash type (DES or KDFAES) for passwords and password phrases. This field is not natively present in the user profile, but it can be created based on how different passwords and password phrases are stored.
- Group-SPECIAL: indicates whether the user has full control over all profiles within the scope defined by the group or groups field. This is a particularly interesting field that we explore in more detail below.
- Group-OPERATIONS: indicates whether the user has authorized access to all RACF-protected resources of the DATASET, DASDVOL, GDASDVOL, PSFMPL, TAPEVOL, VMBATCH, VMCMD, VMMDISK, VMNODE and VMRDR classes within the scope defined by the group or groups field.
- Group-AUDITOR: indicates whether the user has permission to access audit information within the scope defined by the group or groups field.
- CLAUTH (class authority): allows the user to create profiles within the specified class or classes. This field enables delegation of management privileges for individual classes.
- GROUPIDS: contains a list of groups the user belongs to.
- UACC (universal access authority): defines the UACC value for new profiles created by the user.
Group profile fields
- UACC (universal access authority): defines the UACC value for new profiles that the user creates when connected to the group.
- OWNER: the creator of the group. The owner has specific privileges in relation to the current group and its subgroups.
- USERIDS: the list of users within the group. The order is essential.
- USERACS: the list of group members with their respective permissions for access to the group. The order is essential.
- SUPGROUP: the name of the superior group.
General resource and data set profile fields
- UACC (universal access authority): defines the default access permissions to the resource or data set.
- OWNER: the creator of the resource or data set, who holds certain privileges over it.
- WARNING: indicates whether the resource or data set is in WARNING mode.
- USERIDS: the list of user IDs associated with the resource or data set. The order is essential.
- USERACS: the list of users with access permissions to the resource or data set. The order is essential.
RACF profile relationship chains
The fields listed above demonstrate the presence of relationships between RACF profiles. We have decided to name these relationships similarly to those used in BloodHound, a popular tool for analyzing Active Directory misconfigurations. Below are some examples of these relationships – the list is not exhaustive.
- Owner: the subject owns the object.
- MemberOf: the subject is part of the object.
- AllowJoin: the subject has permission to add itself to the object.
- AllowConnect: the subject has permission to add another object to the specified object.
- AllowCreate: the subject has permission to create an instance of the object.
- AllowAlter: the subject has the ALTER privilege for the object.
- AllowUpdate: the subject has the UPDATE privilege for the object.
- AllowRead: the subject has the READ privilege for the object.
- CLAuthTo: the subject has permission to create instances of the object as defined in the CLAUTH field.
- GroupSpecial: the subject has full control over all profiles within the object’s scope of influence as defined in the group-SPECIAL field.
- GroupOperations: the subject has permissions to perform certain operations with the object as defined in the group-OPERATIONS field.
- ImpersonateTo: the subject grants the object the privilege to perform certain operations on the subject’s behalf.
- ResetPassword: the subject grants another object the privilege to reset the password or password phrase of the specified object.
- UnixAdmin: the subject grants superuser privileges to the object in z/OS UNIX.
- SetAPF: the subject grants another object the privilege to set the APF flag on the specified object.
These relationships serve as edges when constructing a graph of subject–object interconnections. Below are examples of potential relationships between specific profile types.
Visualizing and analyzing these relationships helped us identify specific chains that describe potential RACF security issues, such as a path from a low-privileged user to a highly-privileged one. Before we delve into examples of these chains, let’s consider another interesting and peculiar feature of the relationships between RACF database entities.
Implicit RACF profile relationships
We have observed a fascinating characteristic of the group-SPECIAL, group-OPERATIONS, and group-AUDITOR fields within a user profile. If the user has any group specified in one of these fields, that group’s scope of influence extends the user’s own scope.
For instance, consider USER1 with GROUP1 specified in the group-SPECIAL field. If GROUP1 owns GROUP2, and GROUP2 subsequently owns USER5, then USER1 gains privileges over USER5. This is not just about data access; USER1 essentially becomes the owner of USER5. A unique aspect of z/OS is that this level of access allows USER1 to, for example, change USER5’s password, even if USER5 holds privileged attributes like SPECIAL, OPERATIONS, ROAUDIT, AUDITOR, or PROTECTED.
Below is an SQL query, generated using the racfudit utility, that identifies all users and groups where the specified user possesses special attributes:
select ProfileName, CGGRPNM, CGUACC, CGFLAG2 from USER_BASE WHERE (CGFLAG2 LIKE '%10000000%');
Here is a query to find users whose owners (AUTHOR) are not the standard default administrators:
select ProfileName,AUTHOR from USER_BASE WHERE (AUTHOR NOT LIKE '%IBMUSER%' AND AUTHOR NOT LIKE 'SYS1%');
Let’s illustrate how user privileges can be escalated through these implicit profile relationships.
In this scenario, the user TESTUSR has the group-SPECIAL field set to PASSADM. This group, PASSADM, owns the OPERATOR user. This means TESTUSR’s scope of influence expands to include PASSADM’s scope, thereby granting TESTUSR control over OPERATOR. Consequently, if TESTUSR’s credentials are compromised, the attacker gains access to the OPERATOR user. The OPERATOR user, in turn, has READ access to the IRR.PASSWORD.RESET resource, which allows them to assign a password to any user who does not possess privileged permissions.
Having elevated privileges in z/OS UNIX is often sufficient for compromising the mainframe. These can be acquired through several methods:
- Grant the user READ access to the BPX.SUPERUSER resource of the FACILITY class.
- Grant the user READ access to UNIXPRIV.SUPERUSER.* resources of the UNIXPRIV class.
- Set the UID field to 0 in the OMVS segment of the user profile.
For example, the DFSOPER user has READ access to the BPX.SUPERUSER resource, making them privileged in z/OS UNIX and, by extension, across the entire mainframe. However, DFSOPER does not have the explicit privileged fields SPECIAL, OPERATIONS, AUDITOR, ROAUDIT and PROTECTED set, meaning the OPERATOR user can change DFSOPER’s password. This allows us to define the following sequence of actions to achieve high privileges on the mainframe:
- Obtain and use TESTUSR’s credentials to log in.
- Change OPERATOR’s password and log in with those credentials.
- Change DFSOPER’s password and log in with those credentials.
- Access the z/OS UNIX Shell with elevated privileges.
We uncovered another implicit RACF profile relationship that enables user privilege escalation.
In another example, the TESTUSR user has READ access to the OPERSMS.SUBMIT resource of the SURROGAT class. This implies that TESTUSR can create a task under the identity of OPERSMS using the ImpersonateTo relationship. OPERSMS is a member of the HFSADMIN group, which has READ access to the TESTAUTH resource of the TSOAUTH class. This resource indicates whether the user can run an application or library as APF-authorized – this requires only READ access. Therefore, if APF access is misconfigured, the OPERSMS user can escalate their current privileges to the highest possible level. This outlines a path from the low-privileged TESTUSR to obtaining maximum privileges on the mainframe.
At this stage, the racfudit utility allows identifying these connections only manually through a series of SQLite database queries. However, we are planning to add support for another output format, including Neo4j DBMS integration, to automatically visualize the interconnected chains described above.
Password hashes in RACF
To escalate privileges and gain mainframe access, we need the credentials of privileged users. We previously used our utility to extract their password hashes. Now, let’s dive into the password policy principles in z/OS and outline methods for recovering passwords from these collected hashes.
The primary password authentication methods in z/OS, based on RACF, are PASSWORD and PASSPHRASE. PASSWORD is a password composed by default of ASCII characters: uppercase English letters, numbers, and special characters (@#$). Its length is limited to 8 characters. PASSPHRASE, or a password phrase, has a more complex policy, allowing 14 to 100 ASCII characters, including lowercase or uppercase English letters, numbers, and an extended set of special characters (@#$&*{}[]()=,.;’+/). Hashes for both PASSWORD and PASSPHRASE are stored in the user profile within the BASE segment, in the PASSWORD and PHRASE fields, respectively. Two algorithms are used to derive their values: DES and KDFAES.
It is worth noting that we use the terms “password hash” and “password phrase hash” for clarity. When using the DES and KDFAES algorithms, user credentials are stored in the RACF database as encrypted text, not as a hash sum in its classical sense. Nevertheless, we will continue to use “password hash” and “password phrase hash” as is customary in IBM documentation.
Let’s discuss the operating principles and characteristics of the DES and KDFAES algorithms in more detail.
DES
When the DES algorithm is used, the computation of PASSWORD and PHRASE values stored in the RACF database involves classic DES encryption. Here, the plaintext data block is the username (padded to 8 characters if shorter), and the key is the password (also padded to 8 characters if shorter).
PASSWORD
The username is encrypted with the password as the key via the DES algorithm, and the 8-byte result is placed in the user profile’s PASSWORD field.
Keep in mind that both the username and password are encoded with EBCDIC. For instance, the username USR1 would look like this in EBCDIC: e4e2d9f140404040. The byte 0x40 serves as padding for the plaintext to reach 8 bytes.
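To make the scheme concrete, here is a small Python sketch (using the pycryptodome package) that follows the simplified description above: EBCDIC-encode and space-pad both values, then DES-encrypt the username block with the password as the key. Real RACF applies additional key-preparation details not covered here, so treat this as an illustration of the idea rather than a drop-in hash generator.

```python
# Simplified illustration of the DES-based PASSWORD scheme described above.
# Assumes pycryptodome is installed (pip install pycryptodome). RACF's exact
# key preparation is not reproduced, so output may not match a real RACF
# database byte-for-byte.
from Crypto.Cipher import DES

def racf_des_sketch(username: str, password: str) -> bytes:
    # Both values are EBCDIC-encoded (code page 037) and padded with 0x40
    # (EBCDIC space) to 8 bytes, e.g. "USR1" -> e4e2d9f140404040.
    user_block = username.upper().ljust(8)[:8].encode("cp037")
    key_block = password.upper().ljust(8)[:8].encode("cp037")
    return DES.new(key_block, DES.MODE_ECB).encrypt(user_block)

print(racf_des_sketch("USR1", "PASSWORD").hex())
```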
This password can be recovered quite fast, given the small keyspace and low computational complexity of DES. For example, a brute-force attack powered by a cluster of NVIDIA 4090 GPUs takes less than five minutes.
The hashcat tool includes a module (Hash-type 8500) for cracking RACF passwords with the DES algorithm.
PASSPHRASE
PASSPHRASE encryption is a bit more complex, and a detailed description of its algorithm is not readily available. However, our research uncovered certain interesting characteristics.
First, the final hash length in the PHRASE field matches the original password phrase length. Essentially, the encrypted data output from DES gets truncated to the input plaintext length without padding. This design can clearly lead to collisions and incorrect authentication under certain conditions. For instance, if the original password phrase is 17 bytes long, it will be encrypted in three blocks, with the last block padded with seven bytes. These padded bytes are then truncated after encryption. In this scenario, any password whose first 17 encrypted bytes match the encrypted PASSPHRASE would be considered valid.
The second interesting feature is that the PHRASE field value is also computed using the DES algorithm, but it employs a proprietary block chaining mode. We will informally refer to this as IBM-custom mode.
Given these limitations, we can use the hashcat module for RACF DES to recover the first 8 characters of a password phrase from the first block of encrypted data in the PHRASE field. In some practical scenarios, recovering the beginning of a password phrase allowed us to guess the remainder, especially when weak dictionary passwords were used. For example, if we recovered Admin123 (8 characters) while cracking a 15-byte PASSPHRASE hash, then it is plausible the full password phrase was Admin1234567890.
KDFAES
Computing passwords and password phrases generated with the KDFAES algorithm is significantly more challenging than with DES. KDFAES is a proprietary IBM algorithm that leverages AES encryption. The encryption key is generated from the password using the PBKDF2 function with a specific number of hashing iterations.
PASSWORD
The diagram below outlines the multi-stage KDFAES PASSWORD encryption algorithm.
The first stage mirrors the DES-based PASSWORD computation algorithm. Here, the plaintext username is encrypted using the DES algorithm with the password as the key. The username is also encoded in EBCDIC and padded if it’s shorter than 8 bytes. The resulting 8-byte output serves as the key for the second stage: hashing. This stage employs a proprietary IBM algorithm built upon PBKDF2-SHA256-HMAC. A randomly generated 16-byte string (salt) is fed into this algorithm along with the 8-byte key from the first stage. This data is then iteratively hashed using PBKDF2-SHA256-HMAC. The number of iterations is determined by two parameters set in RACF: the memory factor and the repetition factor. The output of the second stage is a 32-byte hash, which is then used as the key for AES encryption of the username in the third stage.
The final output is 16 bytes of encrypted data. The first 8 bytes are appended to the end of the PWDX field in the user profile BASE segment, while the other 8 bytes are placed in the PASSWORD field within the same segment.
The PWDX field in the BASE segment has the following structure:
| Offset | Size | Field | Comment |
|---|---|---|---|
| 0–3 | 4 bytes | Magic number | In the profiles we analyzed, we observed only the value E7D7E66D |
| 4–7 | 4 bytes | Hash type | In the profiles we analyzed, we observed only two values: 00180000 for PASSWORD hashes and 00140000 for PASSPHRASE hashes |
| 8–9 | 2 bytes | Memory factor | A value that determines the number of iterations in the hashing stage |
| 10–11 | 2 bytes | Repetition factor | A value that determines the number of iterations in the hashing stage |
| 12–15 | 4 bytes | Unknown value | In the profiles we analyzed, we observed only the value 00100010 |
| 16–31 | 16 bytes | Salt | A randomly generated 16-byte string used in the hashing stage |
| 32–39 | 8 bytes | The first half of the password hash | The first 8 bytes of the final encrypted data |
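Under the layout in the table above, a small Python helper (assuming the field is at least 40 bytes and laid out exactly as listed) can split a raw PWDX value into its components for inspection:

```python
# Splits a raw PWDX field into its documented components (layout per the table
# above). Assumes the field is at least 40 bytes; values are returned as hex
# strings for readability.
import struct

def parse_pwdx(pwdx: bytes) -> dict:
    magic, hash_type, mem, rep, _unknown, salt, hash_half = struct.unpack(
        ">4s4sHH4s16s8s", pwdx[:40]
    )
    return {
        "magic": magic.hex(),              # expected e7d7e66d
        "hash_type": hash_type.hex(),      # 00180000 (PASSWORD) or 00140000 (PASSPHRASE)
        "memory_factor": mem,
        "repetition_factor": rep,
        "salt": salt.hex(),
        "hash_first_half": hash_half.hex(),  # the other 8 bytes sit in the PASSWORD field
    }
```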
You can use the dedicated module in the John the Ripper utility for offline password cracking. While an IBM KDFAES module for an older version of hashcat exists publicly, it was never integrated into the main branch. Therefore, we developed our own RACF KDFAES module compatible with the current hashcat version.
The time required to crack an RACF KDFAES hash has significantly increased compared to RACF DES, largely due to the integration of PBKDF2. For instance, if the memory factor and repetition factor are set to 0x08 and 0x32 respectively, the hashing stage can reach 40,000 iterations. This can extend the password cracking time to several months or even years.
PASSPHRASE
Encrypting a password phrase hash with KDFAES shares many similarities with encrypting a password hash. According to public sources, the primary difference lies in the key used during the second stage. For passwords, data derived from DES-encrypting the username was used, while for a password phrase, its SHA256 hash is used. During our analysis, we could not determine the exact password phrase hashing process – specifically, whether padding is involved, if a secret key is used, and so on.
Additionally, when a password phrase is used, the final hash is stored in the PHRASE and PHRASEX fields instead of PASSWORD and PWDX, respectively, with the PHRASEX value having a similar structure.
Conclusion
In this article, we have explored the internal workings of the RACF security package, developed an approach to extracting information, and presented our own tool developed for the purpose. We also outlined several potential misconfigurations that could lead to mainframe compromise and described methods for detecting them. Furthermore, we examined the algorithms used for storing user credentials (passwords and password phrases) and highlighted their strengths and weaknesses.
We hope that the information presented in this article helps mainframe owners better understand and assess the potential risks associated with incorrect RACF security suite configurations and take appropriate mitigation steps. Transitioning to the KDFAES algorithm and password phrases, controlling UACC values, verifying access to APF libraries, regularly tracking user relationship chains, and other steps mentioned in the article can significantly enhance your infrastructure security posture with minimal effort.
In conclusion, it is worth noting that only a small percentage of the RACF database structure has been thoroughly studied. Comprehensive research would involve uncovering additional relationships between database entities, further investigating privileges and their capabilities, and developing tools to exploit excessive privileges. The topic of password recovery is also not fully covered because the encryption algorithms have not been fully studied. IBM z/OS mainframe researchers have immense opportunities for analysis. As for us, we will continue to shed light on the obscure, unexplored aspects of these devices, to help prevent potential vulnerabilities in mainframe infrastructure and associated security incidents.
KitPloit
YATAS - A Simple Tool To Audit Your AWS Infrastructure For Misconfiguration Or Potential Security Issues With Plugins Integration
Yet Another Testing & Auditing Solution
The goal of YATAS is to help you create a secure AWS environment without too much hassle. It won't check for all best practices but only for the ones that are important for you based on my experience. Please feel free to tell me if you find something that is not covered.
Features
YATAS is a simple and easy to use tool to audit your infrastructure for misconfiguration or potential security issues.
Installation
brew tap padok-team/tap
brew install yatas
yatas --init
Modify .yatas.yml to your needs.
yatas --install
Installs the plugins you need.
Usage
yatas -h
Flags:
- --details: Show details of the issues found.
- --compare: Compare the results of the previous run with the current run and show the differences.
- --ci: Exit code 1 if there are issues found, 0 otherwise.
- --resume: Only shows the number of tests passing and failing.
- --time: Shows the time each test took to run in order to help you find bottlenecks.
- --init: Creates a .yatas.yml file in the current directory.
- --install: Installs the plugins you need.
- --only-failure: Only show the tests that failed.
Plugins
| Plugins | Description | Checks |
|---|---|---|
| AWS Audit | AWS checks | Good practices and security checks |
| Markdown Reports | Reporting | Generates a markdown report |
Checks
Ignore results for known issues
You can ignore results of checks by adding the following to your .yatas.yml file:
ignore:
  - id: "AWS_VPC_004"
    regex: true
    values:
      - "VPC Flow Logs are not enabled on vpc-.*"
  - id: "AWS_VPC_003"
    regex: false
    values:
      - "VPC has only one gateway on vpc-08ffec87e034a8953"
Exclude a test
You can exclude a test by adding the following to your .yatas.yml file:
plugins:
  - name: "aws"
    enabled: true
    description: "Check for AWS good practices"
    exclude:
      - AWS_S3_001
Specify which tests to run
To only run a specific test, add the following to your .yatas.yml file:
plugins:
  - name: "aws"
    enabled: true
    description: "Check for AWS good practices"
    include:
      - "AWS_VPC_003"
      - "AWS_VPC_004"
Get error logs
You can get the error logs by adding the following to your env variables:
export YATAS_LOG_LEVEL=debug
The available log levels are: debug, info, warn, error, fatal, and panic; logging is off by default.
AWS - 63 Checks
AWS Certificate Manager
- AWS_ACM_001 ACM certificates are valid
- AWS_ACM_002 ACM certificate expires in more than 90 days
- AWS_ACM_003 ACM certificates are used
APIGateway
- AWS_APG_001 ApiGateways logs are sent to Cloudwatch
- AWS_APG_002 ApiGateways are protected by an ACL
- AWS_APG_003 ApiGateways have tracing enabled
AutoScaling
- AWS_ASG_001 Autoscaling maximum capacity is below 80%
- AWS_ASG_002 Autoscaling group are in two availability zones
Backup
- AWS_BAK_001 EC2's Snapshots are encrypted
- AWS_BAK_002 EC2's snapshots are younger than a day old
Cloudfront
- AWS_CFT_001 Cloudfronts enforce TLS 1.2 at least
- AWS_CFT_002 Cloudfronts only allow HTTPS or redirect to HTTPS
- AWS_CFT_003 Cloudfronts queries are logged
- AWS_CFT_004 Cloudfronts are logging Cookies
- AWS_CFT_005 Cloudfronts are protected by an ACL
CloudTrail
- AWS_CLD_001 Cloudtrails are encrypted
- AWS_CLD_002 Cloudtrails have Global Service Events Activated
- AWS_CLD_003 Cloudtrails are in multiple regions
COG
- AWS_COG_001 Cognito allows unauthenticated users
DynamoDB
- AWS_DYN_001 Dynamodbs are encrypted
- AWS_DYN_002 Dynamodb have continuous backup enabled with PITR
EC2
- AWS_EC2_001 EC2s don't have a public IP
- AWS_EC2_002 EC2s have the monitoring option enabled
ECR
- AWS_ECR_001 ECRs image are scanned on push
- AWS_ECR_002 ECRs are encrypted
- AWS_ECR_003 ECRs tags are immutable
EKS
- AWS_EKS_001 EKS clusters have logging enabled
- AWS_EKS_002 EKS clusters have private endpoint or strict public access
LoadBalancer
- AWS_ELB_001 ELB have access logs enabled
GuardDuty
- AWS_GDT_001 GuardDuty is enabled in the account
IAM
- AWS_IAM_001 IAM Users have 2FA activated
- AWS_IAM_002 IAM access key younger than 90 days
- AWS_IAM_003 IAM User can't elevate rights
- AWS_IAM_004 IAM Users have not used their password for 120 days
Lambda
- AWS_LMD_001 Lambdas are private
- AWS_LMD_002 Lambdas are in a security group
- AWS_LMD_003 Lambdas are not with errors
RDS
- AWS_RDS_001 RDS are encrypted
- AWS_RDS_002 RDS are backedup automatically with PITR
- AWS_RDS_003 RDS have minor versions automatically updated
- AWS_RDS_004 RDS aren't publicly accessible
- AWS_RDS_005 RDS logs are exported to cloudwatch
- AWS_RDS_006 RDS have the deletion protection enabled
- AWS_RDS_007 Aurora Clusters have minor versions automatically updated
- AWS_RDS_008 Aurora RDS are backedup automatically with PITR
- AWS_RDS_009 Aurora RDS have the deletion protection enabled
- AWS_RDS_010 Aurora RDS are encrypted
- AWS_RDS_011 Aurora RDS logs are exported to cloudwatch
- AWS_RDS_012 Aurora RDS aren't publicly accessible
S3 Bucket
- AWS_S3_001 S3 are encrypted
- AWS_S3_002 S3 buckets are not global but in one zone
- AWS_S3_003 S3 buckets are versioned
- AWS_S3_004 S3 buckets have a retention policy
- AWS_S3_005 S3 bucket have public access block enabled
Volume
- AWS_VOL_001 EC2's volumes are encrypted
- AWS_VOL_002 EC2 are using GP3
- AWS_VOL_003 EC2 have snapshots
- AWS_VOL_004 EC2's volumes are unused
VPC
- AWS_VPC_001 VPC CIDRs are bigger than /20
- AWS_VPC_002 VPC can't be in the same account
- AWS_VPC_003 VPC only have one Gateway
- AWS_VPC_004 VPC Flow Logs are activated
- AWS_VPC_005 VPC have at least 2 subnets
How to create a new plugin?
You'd like to add a new plugin? Then simply visit yatas-plugin and follow the instructions.
Open Source Intelligence
What is Open Source Intelligence?
The term "open source" refers specifically to information that is publicly available. A huge part of the internet cannot be found using major search engines; this is known as the "Deep Web", a mass of websites, databases, files, and more that cannot be indexed by Google, Bing, Yahoo, or any other search engine. Despite this, much of the deep web's content can be considered open source because it is readily available to the public.
There is plenty of information available online that can be found using online tools other than regular search engines. Tools like Shodan can be used to find IP addresses, open ports, CCTV, printers, and everything else that is connected to the internet.
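As a small illustration of the kind of query Shodan supports, here is a sketch using the official shodan Python library; the API key and search filter are placeholders, and an account with API access is assumed.

```python
# Minimal Shodan search sketch: lists hosts exposing a given service.
# Requires the shodan package and a valid API key (both assumptions here).
import shodan

api = shodan.Shodan("YOUR_API_KEY")
results = api.search("port:9100")  # example filter: network printers
print("Total results:", results["total"])
for match in results["matches"][:10]:
    print(match["ip_str"], match.get("org", "n/a"))
```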
Information can be considered open source if it is:
- Published or broadcasted for a public audience like news
- Available to the public by request, e.g. census data
- Available to the public by subscription or purchase
- Could be seen or heard by any casual observer
- Made available at a meeting open to the public
- Obtained by visiting any place or attending any event that is open to the public
How is the Open Source Intelligence Used?
OSINT is widely used in:
1. Ethical Hacking & Penetration Testing
Security professionals use open-source intelligence to identify weaknesses in networks so that they can be remediated before they are exploited by hackers. Commonly found weaknesses include:
- Accidental leaks of sensitive information, like through social media
- Open ports or unsecured internet-connected devices
- Unpatched software, such as websites running old versions of CMS
- Leaked or exposed assets.
2. Identifying External Threats
From identifying which new vulnerabilities are being actively exploited to intercepting threat actor chatter about an upcoming attack, open-source intelligence enables security professionals to prioritize their time and address the most significant current threats.
One of the most important things to understand about open-source intelligence is that it is frequently combined with other intelligence categories. Information from closed sources, such as external intelligence-sharing forums and private dark web communities, is often used to filter and verify open-source findings, and analysts have a variety of tools to help them do so.
The Dark Side of Open Source Intelligence
If security analysts can access everything, threat actors can do the same with ease. Threat actors use open-source intelligence tools and tactics to identify potential targets and exploit weaknesses in target networks. Attackers attempt to exploit a weakness once it has been identified to breach the target.
This procedure is the main cause of the high number of attacks on small and medium-sized businesses. It is not because threat actors target particular businesses; rather, it is because open-source intelligence tools can spot design flaws in a company’s network or website. Additionally, threat actors look for data about people and organizations that can be used to support complex social engineering campaigns using phishing (email), vishing (phone or voicemail), and smishing (SMS). Sensitive information often published on social networks and blogs can be used to craft highly persuasive social engineering campaigns that convince individuals to compromise their company’s network or assets.
This is why it is crucial to use open-source intelligence for security objectives. It gives you a chance to identify and address network vulnerabilities in your company, and to remove exposed sensitive data, before threat actors use the same tools and strategies to take advantage of them.
Open Source Intelligence Techniques
The methodology to perform OSINT falls under two categories: Passive OSINT & Active OSINT.
Passive OSINT relies heavily on Threat Intelligence Platforms (TIPs), which aggregate several threat feeds into one convenient location. The volume of alerts these feeds generate can be overwhelming; more sophisticated threat intelligence solutions resolve this by using artificial intelligence, machine learning, and natural language processing to automatically prioritize or dismiss alerts according to the unique requirements of a company. Similarly, organized threat groups frequently employ botnets to gather crucial data passively, using methods such as traffic sniffing and keylogging.
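As a highly simplified illustration of what aggregating threat feeds into one place looks like, the sketch below merges indicator lists from two hypothetical feed files and removes duplicates; the file names and the one-indicator-per-line format are assumptions made purely for the example:

```python
from pathlib import Path

# Hypothetical feed exports, one indicator (IP, domain, hash, ...) per line.
FEED_FILES = ["feed_vendor_a.txt", "feed_vendor_b.txt"]

indicators = set()
for feed in FEED_FILES:
    for line in Path(feed).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):  # skip blank lines and comments
            indicators.add(line)

print(f"{len(indicators)} unique indicators aggregated from {len(FEED_FILES)} feeds")
```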
Active OSINT is the use of a variety of techniques to search for specific insights or information. For security professionals, this type of collection work is usually done for one of two reasons:
- A passively collected alert has highlighted a potential threat and further insight is required.
- It is part of a penetration testing exercise.
Open Source Intelligence Tools
While there are numerous free and practical tools available to security experts and threat actors alike, search engines like Google are among the most widely used open-source intelligence tools.
The frequency with which common, well-intentioned people unintentionally leave important assets and information exposed to the internet is one of the largest problems encountered by security experts. The data and assets they reveal can be found using a set of sophisticated search techniques known as “Google Dork” queries.
The Public Intelligence website offers a more thorough rundown of Google dork queries; below is an example of what a Google dork query looks like:
“sensitive but unclassified” filetype:txt site:publicintelligence.net
If you type this search term into a search engine, it returns only TXT documents from the Public Intelligence website that contain the words “sensitive but unclassified” somewhere in the document text. As you can imagine, with hundreds of commands at their disposal, security professionals and threat actors can use similar techniques to search for almost anything.
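A few more widely documented operators illustrate the idea; these example queries are illustrative only and do not come from any particular playbook:

site:example.com filetype:pdf "confidential"
intitle:"index of" "backup"
inurl:admin login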
Beyond search engines, hundreds of tools are available for locating network vulnerabilities or exposed assets.
There are plenty of free and paid tools that can be used to search and analyze open-source data, with common features including the following (a small example follows the list):
- Metadata search
- Code search
- People and identity investigation
- Phone number research
- Email search and verification
- Linking social media accounts
- Image analysis
- Geospatial research and mapping
- Wireless network detection and packet analysis
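As one small illustration of the metadata-search capability above, image files often carry EXIF metadata (camera model, timestamps, sometimes GPS coordinates) that can be read with a few lines of Python; the file name is a hypothetical placeholder and Pillow is just one of several libraries that can do this:

```python
from PIL import Image  # Pillow: pip install Pillow
from PIL.ExifTags import TAGS

# Hypothetical file collected during an investigation.
image = Image.open("photo_from_public_post.jpg")

exif = image.getexif()
for tag_id, value in exif.items():
    # Translate numeric EXIF tag IDs into readable names where possible.
    name = TAGS.get(tag_id, tag_id)
    print(f"{name}: {value}")
```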
WRAP UP!
All security disciplines can benefit greatly from open-source intelligence. It will take some time and some trial and error to find the best set of tools and methods for your particular needs. The tools and methods required to locate unsecured assets differ from those that will enable you to act on a threat alert. The presence of a clear strategy is the most crucial element in the success of any open-source intelligence endeavor. Once goals have been stated and objectives are clear, it will be much easier to find the best tools and methodologies.
Reasons to Hire a Red Team Assessor for the IT Infrastructure
Red team assessors are professional hackers who are hired to assess the IT infrastructure of an organization. They evaluate and attack systems the way a malicious hacker would, attempting to break into them. In essence, they simulate an attack to exploit gaps in the organization’s IT infrastructure.
This is precisely how a red team assessor evaluates the effectiveness of an organization’s security controls. Compared to a penetration test, a red team assessment is broader in scope, involving a full-scale attack on the IT infrastructure that can last for hours, days, or even weeks.
This type of exercise provides insightful data on how, and for how long, a hacker managed to maintain access within the systems and network. Such assessments help organizations improve and strengthen their cybersecurity posture. Below are the top five reasons why we believe organizations should hire red team assessors.
Top 5 reasons to hire a red team assessor
1) Identify Gaps in the IT Infrastructure
Red team assessors are often hired by IT firms and businesses to help them identify potential gaps in their systems. More often than not, the internal team fails to spot gaps, vulnerabilities, or weaknesses that a hacker would find. Such loopholes need to be identified and fixed immediately to prevent breaches and hacks.
The exercise looks for gaps arising from operational disruptions, coding errors, misconfigured patches, insider threats, and weaknesses in processes, workflows, and technology, as well as negligence by the people involved, such as employees, suppliers, and business vendors. For these reasons, it is recommended that organizations perform a thorough red team assessment annually to identify such gaps and remediate vulnerabilities in their systems. After all, even the best defense can fall prey to attacks, given the dynamics of the evolving cybersecurity industry.
2) Evaluate the Effectiveness of Security Controls
Evaluating the effectiveness of security controls is crucial for a business looking to strengthen its cybersecurity posture, and a red team assessment is one of the best ways to do so. Although an internal assessment of security controls and systems may suggest that strong security is in place, a third-party assessment may suggest otherwise.
This is because internal teams tend to overlook things that a third party may detect. In that sense, a red team assessment gives the organization a third-party perspective on its cybersecurity posture, and the resulting reports carry more credibility with the organization's stakeholders.
Moreover, the assessment exposes vulnerabilities and weaknesses in the infrastructure and verifies the effectiveness of the security controls implemented in the organization. This helps the organization fix gaps, improve its security controls, and strengthen its overall cybersecurity posture.
3) Risk Exposure & Impact
Performing a red team assessment involves simulating a real attack on systems and infrastructure. This helps the organization understand its risk exposure and the potential impact of a security breach or compromise on the business.
The assessment demonstrates different ways and means by which a hacker could stage an attack on systems and IT infrastructure. It also demonstrates the amount of damage an attack could cause and the extent of data leakage in the event of a compromise.
Not just that, the assessment also helps the organization prioritize its resources toward the assets and processes that need immediate attention, especially those that are highly exposed to risk. Overall, the assessment conducted by the red team assessor highlights the vulnerabilities and their implications for the IT infrastructure and operations.
4) Effectiveness of the Security Team
Simulating real attacks allows the organization to test the effectiveness not just of its security controls but also of its security team. The assessment helps organizations evaluate how well the security team is equipped to deal with a data breach and how quickly it can address the issue. Incidents of data leakage and compromise need to be neutralized as early as possible to prevent further damage, which requires the security team to be well equipped and regularly trained. In this way, a red team assessment verifies the effectiveness of both the controls in place and the security team.
5) Effectiveness of Incident Response Plans
A red team assessment also provides an opportunity for the organization to test the effectiveness of its incident response plans. The exercise evaluates the security controls and the organization's real-time response in the event of an incident, demonstrating how prepared the organization is to respond and to mitigate the risk. The process also works as a guide for organizations to improve their incident response plans and establish a strong cybersecurity program.
Final Thought
Red team assessors are professional hackers with the skills, experience, and expertise to find gaps and security flaws the way a real-world attacker would in a given scenario. Hiring a red team assessor is therefore a sound decision: it helps uncover vulnerabilities and tests the effectiveness of the controls in place.
Their dynamic approach and multi-layered, thorough assessment process bring accuracy to the evaluation and test the effectiveness of the organization’s security controls. Organizations looking to strengthen their cybersecurity programs and verify the effectiveness of their security controls should definitely consider hiring a red team assessor to perform the exercise.
KitPloit
YATAS - A Simple Tool To Audit Your AWS Infrastructure For Misconfiguration Or Potential Security Issues With Plugins Integration
Yet Another Testing & Auditing Solution
The goal of YATAS is to help you create a secure AWS environment without too much hassle. It won't check for all best practices but only for the ones that are important for you based on my experience. Please feel free to tell me if you find something that is not covered.
Features
YATAS is a simple and easy to use tool to audit your infrastructure for misconfiguration or potential security issues.
(Screenshots: example output without details and with details.)
Installation
brew tap padok-team/tap
brew install yatas

yatas --init
Modify .yatas.yml to your needs.

yatas --install
Installs the plugins you need.
Usage
yatas -h

Flags:
- --details: Show details of the issues found.
- --compare: Compare the results of the previous run with the current run and show the differences.
- --ci: Exit code 1 if there are issues found, 0 otherwise.
- --resume: Only shows the number of tests passing and failing.
- --time: Shows the time each test took to run in order to help you find bottlenecks.
- --init: Creates a .yatas.yml file in the current directory.
- --install: Installs the plugins you need.
- --only-failure: Only show the tests that failed.
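For instance, assuming these flags can be combined as in most command-line tools (the README lists them individually, so treat the combination as an assumption), a CI-friendly run that prints only the failing checks with their details might look like:

yatas --ci --only-failure --details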
Plugins
| Plugins | Description | Checks |
|---|---|---|
| AWS Audit | AWS checks | Good practices and security checks |
| Markdown Reports | Reporting | Generates a markdown report |
Checks
Ignore results for known issues
You can ignore results of checks by adding the following to your .yatas.yml file:
ignore:
  - id: "AWS_VPC_004"
    regex: true
    values:
      - "VPC Flow Logs are not enabled on vpc-.*"
  - id: "AWS_VPC_003"
    regex: false
    values:
      - "VPC has only one gateway on vpc-08ffec87e034a8953"

Exclude a test
You can exclude a test by adding the following to your .yatas.yml file:
plugins:
  - name: "aws"
    enabled: true
    description: "Check for AWS good practices"
    exclude:
      - AWS_S3_001

Specify which tests to run
To only run a specific test, add the following to your .yatas.yml file:
plugins:
  - name: "aws"
    enabled: true
    description: "Check for AWS good practices"
    include:
      - "AWS_VPC_003"
      - "AWS_VPC_004"

Get error logs
You can get the error logs by adding the following to your env variables:
export YATAS_LOG_LEVEL=debug

The available log levels are: debug, info, warn, error, fatal, panic, and off (the default).
AWS - 63 Checks
AWS Certificate Manager
- AWS_ACM_001 ACM certificates are valid
- AWS_ACM_002 ACM certificates expire in more than 90 days
- AWS_ACM_003 ACM certificates are used
APIGateway
- AWS_APG_001 ApiGateway logs are sent to CloudWatch
- AWS_APG_002 ApiGateways are protected by an ACL
- AWS_APG_003 ApiGateways have tracing enabled
AutoScaling
- AWS_ASG_001 Autoscaling maximum capacity is below 80%
- AWS_ASG_002 Autoscaling groups are in two availability zones
Backup
- AWS_BAK_001 EC2's Snapshots are encrypted
- AWS_BAK_002 EC2's snapshots are younger than a day old
Cloudfront
- AWS_CFT_001 Cloudfronts enforce at least TLS 1.2
- AWS_CFT_002 Cloudfronts only allow HTTPS or redirect to HTTPS
- AWS_CFT_003 Cloudfronts queries are logged
- AWS_CFT_004 Cloudfronts are logging Cookies
- AWS_CFT_005 Cloudfronts are protected by an ACL
CloudTrail
- AWS_CLD_001 Cloudtrails are encrypted
- AWS_CLD_002 Cloudtrails have Global Service Events Activated
- AWS_CLD_003 Cloudtrails are in multiple regions
COG
- AWS_COG_001 Cognito allows unauthenticated users
DynamoDB
- AWS_DYN_001 DynamoDB tables are encrypted
- AWS_DYN_002 DynamoDB tables have continuous backup enabled with PITR
EC2
- AWS_EC2_001 EC2s don't have a public IP
- AWS_EC2_002 EC2s have the monitoring option enabled
ECR
- AWS_ECR_001 ECR images are scanned on push
- AWS_ECR_002 ECRs are encrypted
- AWS_ECR_003 ECR tags are immutable
EKS
- AWS_EKS_001 EKS clusters have logging enabled
- AWS_EKS_002 EKS clusters have private endpoint or strict public access
LoadBalancer
- AWS_ELB_001 ELBs have access logs enabled
GuardDuty
- AWS_GDT_001 GuardDuty is enabled in the account
IAM
- AWS_IAM_001 IAM Users have 2FA activated
- AWS_IAM_002 IAM access keys are younger than 90 days
- AWS_IAM_003 IAM Users can't elevate rights
- AWS_IAM_004 IAM Users have not used their password for 120 days
Lambda
- AWS_LMD_001 Lambdas are private
- AWS_LMD_002 Lambdas are in a security group
- AWS_LMD_003 Lambdas do not have errors
RDS
- AWS_RDS_001 RDS are encrypted
- AWS_RDS_002 RDS are backed up automatically with PITR
- AWS_RDS_003 RDS have minor versions automatically updated
- AWS_RDS_004 RDS aren't publicly accessible
- AWS_RDS_005 RDS logs are exported to CloudWatch
- AWS_RDS_006 RDS have the deletion protection enabled
- AWS_RDS_007 Aurora Clusters have minor versions automatically updated
- AWS_RDS_008 Aurora RDS are backed up automatically with PITR
- AWS_RDS_009 Aurora RDS have the deletion protection enabled
- AWS_RDS_010 Aurora RDS are encrypted
- AWS_RDS_011 Aurora RDS logs are exported to CloudWatch
- AWS_RDS_012 Aurora RDS aren't publicly accessible
S3 Bucket
- AWS_S3_001 S3 buckets are encrypted
- AWS_S3_002 S3 buckets are not global but in one zone
- AWS_S3_003 S3 buckets are versioned
- AWS_S3_004 S3 buckets have a retention policy
- AWS_S3_005 S3 buckets have public access block enabled
Volume
- AWS_VOL_001 EC2's volumes are encrypted
- AWS_VOL_002 EC2s are using GP3
- AWS_VOL_003 EC2s have snapshots
- AWS_VOL_004 EC2's volumes are unused
VPC
- AWS_VPC_001 VPC CIDRs are bigger than /20
- AWS_VPC_002 VPC can't be in the same account
- AWS_VPC_003 VPCs only have one Gateway
- AWS_VPC_004 VPC Flow Logs are activated
- AWS_VPC_005 VPCs have at least 2 subnets
How to create a new plugin?
Would you like to add a new plugin? Simply visit yatas-plugin and follow the instructions.