โŒ

Normal view

There are new articles available, click to refresh the page.
Before yesterdayMain stream

DoDโ€™s plan to track contractor-held property is failing, putting 2028 audit goal at risk

The Pentagon's plan to fix one of its decades-old material weaknesses — its inability to reliably track government property in the possession of contractors — is failing, a new inspector general evaluation finds.

The Pentagon IG concluded that the departmentโ€™s corrective action plan โ€” which calls on DoD components to use a software application called the Government Furnished Property Module within the Procurement Integrated Enterprise Environment โ€” has stalled due to a lack of enforcement from the Office of the Secretary of Defense and slow adoption by the military services.

Auditors warn that if DoD components donโ€™t implement the GFP module, the department risks missing its goal of achieving a clean audit opinion by 2028.

โ€œThe implementation of that GFP module is the key to getting this to work,โ€ Mark Thomas, DoD IGโ€™s supervisory auditor, told Federal News Network.

One of the technical challenges, Thomas said, is that each military service uses its own accountable property system of record, or APSR, to track government assets in the hands of contractors. The Office of the Secretary of Defense, however, wants the services to connect their systems to the GFP module.

โ€œThat is something that the components have not been able to do yet. Theyโ€™re still working to implement that. Each of the components has corrective action dates for that that are still into the future,โ€ Thomas said.ย 

โ€œThe goal would be to complete everything by 2028, preferably before 2028 so that the auditors, as they come in to do the work, that control environment has been established and been working before the auditors come in and start to do some of the work. That would be the best way to do it,โ€ he added.

But some of the timelines to remediate this weakness stretch beyond the 2028 deadline.ย 

โ€œUnless thereโ€™s a change in those dates, then theyโ€™ll be at risk for missing the deadline,โ€ Thomas said.ย 

Each military service has its own reasons for lagging in implementing the department-wide solution, but most of those reasons center around the same issue โ€” every component is grappling with its own longstanding material weakness in accounting for government property in the possession of contractors.ย 

โ€œThey have their own systems which differ from component to component. So they have their own technical challenges and how their particular system in the Air Force functions and how it accounts for property versus how the Navy does it. Each group is kind of working on their own technical challenges and how theyโ€™re going to report this into their own APSR โ€” they are busy doing that and theyโ€™re actively trying to clean that up so that they can all get opinions on their financial statements,โ€ Thomas said.ย 

But the IG found that this component-level focus has come at the expense of the broader, department-wide effort.ย 

Thomas said the services have been receptive to adopting the department-wide solution, but each faces a number of technical challenges connecting its systems to the GFP module.

โ€œThey understand the importance of it, and they understand what this really would give us if there is a functioning GFP module across the department. This would really give the department a larger birdโ€™s eye view of all of the property that they have in the possession of contractors. And it would provide that enterprise level look and ability to tell we have so much property at contractor x,โ€ Thomas said.ย 

Meanwhile, DoD leaders have not mandated the use of the GFP module, which is stalling the departmentโ€™s efforts to remediate this material weakness. The audit found that the OSD could be โ€œmore forcefulโ€ in recommending and implementing the department-wide solution.

โ€œThey need to be more direct in saying that we will use this module, all the components will use this module. That was one of the areas that we thought was weak, that the department could improve their messaging, and they could improve to be more direct and require the use of this module,โ€ Thomas said.

The post DoDโ€™s plan to track contractor-held property is failing, putting 2028 audit goal at risk first appeared on Federal News Network.

ยฉ The Associated Press

FILE - The Department of Defense logo is seen on the wall in the Press Briefing room at the Pentagon, Oct. 29, 2024, in Washington. (AP Photo/Kevin Wolf, File)

Brink Funds First Third Party Security Audit of Bitcoin Core By Quarkslab

By: Shinobi
19 November 2025 at 13:04

Brink, the Bitcoin development organization, recently funded the first ever independent security audit of Bitcoin Core conducted by a third party (the full report is available here). The audit was conducted by Quarkslab, a software security firm, with the help of the Open Source Technology Improvement Fund (OSTIF) and collaboration with Bitcoin Core developers Niklas Gรถgge, from Brink, and Antoine Poinsot, from Chaincode Labs.ย 

This security audit marks a milestone in the development history of Bitcoin Core, the most widely adopted client and reference implementation of the Bitcoin network and protocol.

While Bitcoin Core security policies and practices have been steadily hardened and revised to be more thorough and comprehensive over the last few years, an external audit by a third party specialized in security review is a new bar to meet. It was met.ย 

The audit involved manual code review, static and dynamic analysis with automated tools, and advanced fuzz testing, which takes automatically generated input and runs it through different code paths attempting to reveal unexpected or detrimental behavior.ย 
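The idea can be sketched in a few lines. The toy below is not Bitcoin Core's actual C++ harness; it is a minimal Python illustration in which `parse_varint` stands in for a real consensus decoder, and the fuzz loop treats anything other than a clean rejection of bad input as a bug:

```python
import random

def parse_varint(data: bytes) -> int:
    """Toy decoder for Bitcoin-style variable-length integers.
    (Illustrative stand-in for a real consensus parser.)"""
    if not data:
        raise ValueError("empty input")
    prefix = data[0]
    if prefix < 0xFD:
        return prefix
    widths = {0xFD: 2, 0xFE: 4, 0xFF: 8}
    width = widths[prefix]
    if len(data) < 1 + width:
        raise ValueError("truncated varint")
    return int.from_bytes(data[1:1 + width], "little")

def fuzz(iterations: int = 10_000, seed: int = 0) -> None:
    """Feed random byte strings to the parser; a ValueError is an
    acceptable rejection -- any other exception is a bug."""
    rng = random.Random(seed)
    for _ in range(iterations):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(12)))
        try:
            value = parse_varint(blob)
            assert value >= 0
        except ValueError:
            pass  # rejecting malformed input is expected behavior

fuzz()
```

Production fuzzers (libFuzzer, AFL) add coverage feedback so inputs evolve toward unexplored code paths, but the invariant-checking loop is the same shape.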

No critical, high, or medium-severity bugs were discovered in the audit. The auditors identified two low-severity issues, along with thirteen other findings that are not classified as vulnerabilities under Bitcoin Core's vulnerability classification criteria.

The entire process also resulted in improvements to Bitcoin Core's testing infrastructure, including new fuzz testing infrastructure for block connection and chain reorganization scenarios (a new area of test coverage), file system improvements that speed up fuzz testing in general, new utilities for detecting regressions in code performance, and suggestions for improving code readability for reviewers and new developers.

Some of these improvements are already being worked on for eventual review and merging into the Bitcoin Core repository.ย 

The results of this independent security audit have reinforced that Bitcoin Coreโ€™s improvements over recent years in security policy, testing, and overall quality review have had a meaningful impact on the project.ย 

This post Brink Funds First Third Party Security Audit of Bitcoin Core By Quarkslab first appeared on Bitcoin Magazine and is written by Shinobi.

How to Conduct a Smart Contract Audit Efficiently Without Missing Critical Flaws?

12 November 2025 at 03:18

In the rapidly evolving blockchain ecosystem, smart contracts act as the backbone of decentralized applications, enabling automated, trustless transactions without intermediaries. While their potential is immense, their security vulnerabilities can result in devastating consequences. A single flaw in a smart contract can lead to financial losses amounting to millions, irreversible errors, and significant reputational damage. As blockchain adoption grows, ensuring the integrity and security of smart contracts is no longer optionalโ€Šโ€”โ€Šit is critical for safeguarding both assets and trust within theย network.

Table ofย Contents

โˆ˜ Understanding Smart Contract Audits
โˆ˜ Preparing for a Successful Audit
โˆ˜ Identifying Common Smart Contract Vulnerabilities
โˆ˜ Step-by-Step Smart Contract Audit Process
โˆ˜ Reporting and Remediation
โˆ˜ Best Practices for Continuous Security
โˆ˜ Choosing the Right Tools and Platforms
โˆ˜ Case Studies and Lessonsย Learned

The Financial and Reputational Risks of Vulnerable Contracts

The stakes of deploying insecure smart contracts are high. Exploits and bugs have historically led to high-profile losses in DeFi, NFT platforms, and crypto exchanges. Beyond immediate financial damage, organizations face long-term reputational harm, eroding investor confidence and user trust. Moreover, regulatory scrutiny is intensifying, and deploying vulnerable contracts without thorough audits could expose developers to legal liabilities. Protecting smart contracts is, therefore, a fundamental aspect of maintaining credibility and ensuring sustainable growth in the blockchain space.

Understanding Smart Contractย Audits

What Is a Smart Contract Audit and Why Itย Matters

A smart contract audit is a detailed review of the code and design of a smart contract to identify vulnerabilities, inefficiencies, or unintended behaviors before deployment. Unlike traditional software, smart contracts operate in immutable environmentsโ€Šโ€”โ€Šonce deployed, their code cannot be altered without significant consequences. Audits are crucial to prevent exploits, ensure the contract functions as intended, and instill confidence among users and investors.

Key Objectives of an Audit: Security, Compliance, and Reliability

The primary goal of a smart contract audit is to ensure security. Auditors scrutinize the code for common vulnerabilities such as reentrancy attacks, integer overflows, and access control issues. Beyond security, audits also verify compliance with industry standards and regulatory requirements, ensuring that contracts operate within legal and ethical boundaries. Finally, reliability is assessed to guarantee that contracts perform as expected under various conditions, maintaining smooth operations and userย trust.

Common Misconceptions About Smart Contractย Auditing

Many assume that a smart contract audit guarantees absolute security; however, audits can only minimize riskโ€Šโ€”โ€Šthey cannot eliminate it entirely. Another misconception is that audits are only necessary for large or high-value projects, when, in reality, even smaller contracts can be targets for attackers. Finally, some developers believe automated tools alone are sufficient, but human expertise remains critical for identifying subtle logic flaws and ensuring comprehensive evaluation.

Preparing for a Successful Audit

Defining Audit Goals: Security, Functionality, and Optimization
Before starting an audit, itโ€™s essential to define clear objectives. Security is always the top priority, but functionality and performance must also be assessed. A well-prepared audit ensures that your smart contract not only resists attacks but also performs its intended functions flawlessly. Setting goals early allows auditors to focus on critical components, reducing the likelihood of missed vulnerabilities and unnecessary delays.

Gathering Necessary Resources and Documentation
A successful audit relies on having the right documentation and resources available. This includes the complete codebase, system architecture diagrams, technical specifications, and any previous audit reports. Clear documentation helps auditors understand how the contract is intended to function, which significantly improves the efficiency and accuracy of the auditย process.

Choosing the Right Audit Team: Internal vs External Experts
Selecting a capable audit team is crucial. Internal teams may have deeper knowledge of the project but could overlook blind spots due to familiarity. External experts bring objectivity, specialized expertise, and exposure to a variety of vulnerabilities across projects. Many organizations adopt a hybrid approach, combining internal familiarity with external auditing rigor to maximize security coverage.

Establishing Audit Timelines and Milestones
Time management is key in auditing. Establishing clear timelines and milestones ensures that the audit process remains structured and comprehensive. Dividing the audit into phasesโ€Šโ€”โ€Šsuch as preliminary review, in-depth testing, and remediationโ€Šโ€”โ€Šallows teams to monitor progress and address critical issues promptly without overwhelming developers or delaying deployment.

Identifying Common Smart Contract Vulnerabilities

Reentrancy Attacks and How to Prevent Them
Reentrancy occurs when a contract allows external calls before completing its internal operations, enabling attackers to exploit this flow to drain funds. Preventing reentrancy requires careful ordering of operations, the use of mutexes, and avoiding external calls in critical functions. Auditors must simulate multiple attack scenarios to detect potential risks.
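Outside of Solidity, the pattern can be sketched in Python (all names here are illustrative): a simple mutex plus checks-effects-interactions ordering means a hostile callback that tries to re-enter `withdraw` mid-flight is blocked before it can drain anything.

```python
class Vault:
    """Toy ledger showing a reentrancy guard plus the
    checks-effects-interactions ordering."""

    def __init__(self):
        self.balances = {}
        self._locked = False  # simple mutex (reentrancy guard)

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount

    def withdraw(self, who, amount, send):
        if self._locked:                                  # guard
            raise RuntimeError("reentrant call blocked")
        self._locked = True
        try:
            if self.balances.get(who, 0) < amount:        # 1. checks
                raise ValueError("insufficient funds")
            self.balances[who] -= amount                  # 2. effects
            send(who, amount)                             # 3. interaction last
        finally:
            self._locked = False


vault = Vault()
vault.deposit("attacker", 100)

def malicious_send(who, amount):
    # Hostile callback that tries to re-enter withdraw mid-flight.
    vault.withdraw(who, amount, malicious_send)

try:
    vault.withdraw("attacker", 100, malicious_send)
except RuntimeError as blocked:
    print("attack stopped:", blocked)
```

Because the balance is debited before the external call, even a guard failure would leave nothing extra to withdraw on re-entry; the two defenses are complementary.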

Integer Overflow and Underflow Errors
Arithmetic operations in smart contracts can be vulnerable to overflow or underflow, which can manipulate balances or execute unauthorized transactions. Using safe arithmetic libraries or built-in safeguards in modern blockchain platforms ensures these errors are caught before deployment.
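A few lines of Python can simulate the difference between wrapping arithmetic (the default before Solidity 0.8 made overflow checks standard) and a SafeMath-style checked operation; this is a sketch of the semantics, not any particular library:

```python
UINT256_MAX = 2**256 - 1

def unchecked_add(a: int, b: int) -> int:
    """EVM-style wrapping addition: the sum is truncated to 256 bits."""
    return (a + b) & UINT256_MAX

def safe_add(a: int, b: int) -> int:
    """Checked addition in the style of SafeMath-like libraries."""
    result = a + b
    if result > UINT256_MAX:
        raise OverflowError("uint256 addition overflow")
    return result

# Wrapping silently turns a huge balance into zero...
assert unchecked_add(UINT256_MAX, 1) == 0
# ...while the checked version refuses to continue.
try:
    safe_add(UINT256_MAX, 1)
except OverflowError:
    pass
```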

Access Control Misconfigurations
Contracts often include privileged functions that should only be accessible to certain addresses or roles. Misconfigured access control can allow unauthorized users to execute sensitive operations. Auditors verify that roles, permissions, and ownership structures are properly implemented and cannot be bypassed.
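As a rough Python analogue of a Solidity `onlyOwner`/`onlyRole` modifier (names hypothetical), a decorator can centralize the permission check so privileged functions cannot be reached by unauthorized callers:

```python
import functools

class Unauthorized(Exception):
    pass

def only_role(role):
    """Decorator mimicking a Solidity onlyRole modifier: reject the
    call unless the caller holds the required role."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(self, caller, *args, **kwargs):
            if role not in self.roles.get(caller, set()):
                raise Unauthorized(f"{caller} lacks role {role!r}")
            return fn(self, caller, *args, **kwargs)
        return inner
    return wrap

class Token:
    def __init__(self, owner):
        self.roles = {owner: {"admin"}}  # ownership/role structure
        self.paused = False

    @only_role("admin")
    def pause(self, caller):
        """Privileged operation: only admin-role callers may pause."""
        self.paused = True
```

Auditors check exactly this kind of gate: that every privileged entry point is covered, and that role assignment itself cannot be hijacked.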

Logic Flaws and Unexpected Contract Behavior
Even if a contract is free from common exploits, poor logic design can lead to unintended behavior. This could include incorrect calculations, conditional failures, or state inconsistencies. Auditors carefully analyze the logic flow and test for edge cases to ensure that all conditions perform as intended.

Gas Limit and Optimization Issues
Inefficient code can cause transactions to fail due to gas limits, even if no security vulnerability exists. Auditors evaluate the contractโ€™s computational complexity and suggest optimizations to reduce gas usage, ensuring reliable and cost-effective execution on the blockchain.

External Dependencies and Third-Party Risks
Many contracts interact with external libraries, oracles, or other contracts. These dependencies can introduce hidden vulnerabilities if not properly vetted. Auditors review these integrations, check for known issues, and ensure that external components do not compromise security or functionality.

Step-by-Step Smart Contract Auditย Process

Step 1: Manual Code Review
Manual code review is the foundation of any effective smart contract audit. Experienced auditors analyze the code line by line, checking for logical inconsistencies, security weaknesses, and unintended behavior. Unlike automated tools, manual review identifies nuanced vulnerabilities, subtle logic errors, and edge cases that machines may overlook. Auditors also verify that the contract adheres to best practices and coding standards, ensuring readability, maintainability, and long-term robustness for futureย updates.

Step 2: Automated Testing Tools
Automated testing tools complement manual review by scanning the code for known vulnerabilities, syntax errors, and performance issues. Tools like static analyzers, formal verification software, and dynamic testing frameworks can quickly flag potential security gaps. However, results must be interpreted carefully, as not all flagged issues are critical. Combining automated detection with human expertise ensures speed, precision, and comprehensive coverage of potential attack vectors.
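As a toy illustration of how the pattern-matching class of these tools works (real analyzers such as Slither operate on the parsed AST and are far more thorough; the rules here are hypothetical), a naive scanner might flag suspicious Solidity constructs by line:

```python
import re

# Hypothetical pattern rules for illustration only.
RULES = [
    ("tx-origin-auth", re.compile(r"tx\.origin")),
    ("low-level-call", re.compile(r"\.call\{value:")),
]

def scan(source: str):
    """Return (line number, rule id) pairs for naive pattern matches."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for rule_id, pattern in RULES:
            if pattern.search(line):
                hits.append((lineno, rule_id))
    return hits

contract = """\
function auth() public view returns (bool) {
    return tx.origin == owner;
}"""
print(scan(contract))  # flags line 2 for tx.origin-based authentication
```

The limitation is visible immediately: text patterns cannot see data flow or call context, which is why flagged results need human interpretation.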

Step 3: Security Simulation and Penetration Testing
This step involves simulating real-world attacks to evaluate the contractโ€™s resilience under pressure. Security simulations include testing for reentrancy, overflow, flash loan exploits, and edge-case scenarios that could compromise the contract. Penetration testing allows auditors to probe weaknesses, validate assumptions, and confirm whether existing safeguards are effective. By replicating potential hacker strategies, projects can proactively address vulnerabilities before they become exploitable, reducing financial and reputational risks.

Step 4: Functional and Performance Testing
A smart contract audit is incomplete without functional and performance testing. Auditors verify that the contract executes all intended operations correctly under varying conditions. This includes testing transaction flows, conditional logic, and integration with other contracts or external systems. Performance evaluation focuses on gas efficiency, scalability, and system stability, ensuring the contract runs reliably in production environments while minimizing unnecessary costs.

Step 5: Iterative Review and Validation
Auditing is not a one-time activity. After identifying and addressing vulnerabilities, auditors conduct iterative reviews to validate fixes and confirm that no new issues have been introduced. This iterative process ensures the contract is robust, secure, and fully operational before deployment. Continuous validation also helps teams prepare for future updates, optimizations, and scaling requirements, creating a secure long-term foundation.

Reporting and Remediation

How to Document Findings Clearly for Developers and Stakeholders
An audit report serves as the bridge between technical auditors and development teams. Clear documentation includes a summary of vulnerabilities, detailed explanations of their impact, and step-by-step guidance for remediation. Reports should be structured to highlight critical issues first while providing context for less severe findings, making it actionable for both technical and non-technical stakeholders. Well-documented findings improve transparency and accelerate resolution.

Prioritizing Vulnerabilities by Severity
Not all vulnerabilities are equal. Effective audits categorize issues based on severityโ€Šโ€”โ€Šcritical, high, medium, or low. This prioritization helps developers address the most urgent threats first, ensuring that high-risk vulnerabilities are mitigated before deployment. Clear classification also enables project managers to allocate resources efficiently, balancing security, cost, and development timelines.
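That prioritization is straightforward to mechanize; a minimal sketch (field names hypothetical) sorts findings so critical items surface first:

```python
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

def triage(findings):
    """Order audit findings so the most urgent are remediated first."""
    return sorted(findings, key=lambda f: SEVERITY_ORDER[f["severity"]])

report = [
    {"id": "F-3", "severity": "low",      "title": "floating pragma"},
    {"id": "F-1", "severity": "critical", "title": "reentrancy in withdraw()"},
    {"id": "F-2", "severity": "medium",   "title": "missing event emission"},
]

for finding in triage(report):
    print(finding["severity"].upper(), finding["id"], "-", finding["title"])
```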

Providing Actionable Recommendations
Simply identifying vulnerabilities is not enough. Auditors must provide actionable recommendations that developers can implement directly. This includes code changes, design improvements, best practices for future development, and security enhancements. Actionable guidance reduces remediation time, ensures lasting contract security, and strengthens the overall quality and resilience of theย project.

Collaborating With Developers for Timely Fixes
Audit success depends on collaboration. Auditors should work closely with developers to discuss findings, clarify misunderstandings, and provide support during the remediation process. Open communication ensures that fixes are implemented correctly, reducing the risk of recurring issues and strengthening the projectโ€™s overall security posture. This collaboration also fosters a security-first culture within the development team.

Best Practices for Continuous Security

Implementing Version Control and Secure Deployment
Smart contract security extends beyond auditing. Implementing robust version control ensures that every change is tracked, reviewed, and auditable. Using secure deployment practices, such as multi-signature wallets for contract deployment and verified release pipelines, reduces the risk of unauthorized modifications. Continuous monitoring of deployed contracts ensures that any anomalies or suspicious activity are identified early, allowing for rapid mitigation.

Integrating Security Checks Into Development Workflow
Security should not be a one-time consideration; it must be part of the development lifecycle. Integrating automated testing, static code analysis, and continuous vulnerability scanning into daily development workflows minimizes the risk of introducing errors during iterative updates. Regular code reviews and pair programming further strengthen oversight and create a culture of proactive security awareness.

Using Audits as a Learning Opportunity for Teams
Every audit provides valuable insights. Teams should review audit findings to understand root causes of vulnerabilities, identify recurring patterns, and incorporate lessons learned into future development. This iterative learning improves coding practices, reduces future errors, and equips developers with the knowledge to preemptively address securityย risks.

Preparing for Post-Deployment Monitoring and Updates
Even after deployment, smart contracts require ongoing attention. Monitoring transaction behavior, detecting suspicious activities, and applying timely updates or patches are critical for long-term security. Establishing protocols for post-deployment risk management ensures that contracts remain secure and functional, even as blockchain environments evolve.

Choosing the Right Tools and Platforms

Overview of Leading Smart Contract Audit Platforms
Numerous tools and platforms exist to aid in smart contract auditing. Static analysis platforms detect vulnerabilities automatically, formal verification tools mathematically validate contract logic, and dynamic testing frameworks simulate real-world attacks. Familiarity with these platforms allows teams to select solutions that align with their contract complexity, project goals, and security requirements.

Comparing Manual Audits, Automated Tools, and Hybrid Approaches
Each auditing method has advantages and limitations. Manual audits excel at detecting subtle logic flaws, automated tools offer speed and scalability, and hybrid approaches combine the strengths of both. Depending on the projectโ€™s budget, timeline, and criticality of assets, a hybrid approach often provides the most thorough protection, balancing efficiency with depth of analysis.

Tips for Selecting Tools That Fit Your Project Needs
Selecting the right tools requires evaluating compatibility with your smart contract language, integration capabilities with your development environment, and coverage of potential vulnerabilities. Prioritize platforms that provide actionable insights, detailed reporting, and reliable support. Combining multiple complementary tools can enhance coverage and increase confidence in the contractโ€™s securityย posture.

Case Studies and Lessonsย Learned

Real-World Examples of Smart Contract Failures
History has shown that even minor coding errors in smart contracts can lead to catastrophic outcomes. For example, the infamous DAO hack in 2016 exploited a reentrancy vulnerability, allowing attackers to siphon millions of dollars in Ether. Similarly, several DeFi projects have suffered losses due to unchecked integer overflows, poorly configured access controls, or faulty logic in yield farming protocols. These cases highlight that vulnerabilities are not hypotheticalโ€Šโ€”โ€Šthey are real threats capable of eroding investor confidence, destroying funds, and harming project reputation.

Conclusion

Conducting a thorough smart contract audit is essential for safeguarding assets, ensuring reliability, and building trust in the blockchain ecosystem. By combining manual code review, automated testing, penetration simulations, and continuous monitoring, teams can identify and remediate vulnerabilities before deployment. Following best practices, leveraging the right tools, and fostering a security-first culture not only prevents costly failures but also strengthens investor confidence and project credibility. Learning from past incidents, prioritizing critical issues, and integrating audits into the development lifecycle ensures that smart contracts remain secure, functional, and scalable, providing a robust foundation for long-term success in the rapidly evolving decentralized landscape.


How to Conduct a Smart Contract Audit Efficiently Without Missing Critical Flaws? was originally published in Coinmonks on Medium, where people are continuing the conversation by highlighting and responding to this story.

SOX Compliance and Its Importance in Blockchain & Fintech

26 September 2025 at 07:55

Last Updated on October 8, 2025 by Narendra Sahoo

In an era where technology plays a core part in everything, fintech and blockchain have emerged as transformative forces for businesses. They not only reshape the financial landscape but also promise unparalleled transparency, efficiency, and security as the world moves toward digital currency. That's why staying up to date on SOX compliance in blockchain and fintech is more important than ever.

As per the latest statistics by DemandSage, there are around 29,955 fintech startups in the world, of which over 13,100 are based in the United States. This shows how businesses are increasingly embracing technology to innovate and address evolving financial needs. It also highlights the global shift toward digital-first solutions, driven by demand for greater accessibility and efficiency in financial services.

On the other hand, blockchain technology, also known as Distributed Ledger Technology (DLT), is currently valued at approximately USD 8.70 billion in the U.S. and is estimated to grow to an impressive USD 619.28 billion by 2034, according to data from Precedence Research.

However, as this digital revolution continues, businesses embracing these technologies must also prioritize compliance, security, and accountability. This is where SOX (Sarbanes-Oxley) compliance plays an important role. In today's article, we explore why SOX compliance is crucial for the fintech and blockchain industries. So, let's get started!


Understanding SOX compliance

The Sarbanes-Oxley Act (SOX), passed in 2002, aims to enhance corporate accountability and transparency in financial reporting. It applies to all publicly traded companies in the U.S. and mandates strict adherence to internal controls, accurate financial reporting, and executive accountability to prevent corporate fraud.

To read more about SOX, you may check the introductory guide to SOX compliance.

The Intersection of SOX and Emerging Technologies

Blockchain technology and fintech solutions disrupt traditional financial systems by offering decentralized and automated alternatives. While these innovations bring significant benefits, they can also obscure transparency and accountability, two principles that SOX aims to uphold. SOX compliance focuses on accurate financial reporting, strong internal controls, and prevention of fraud, aligning with both the potential and risks of emerging technologies.

Key reasons why SOX compliance matters

1. Ensuring accurate financial reporting

Blockchain technology is often touted for its transparency and immutability. However, errors in smart contracts, incorrect data inputs, or cyberattacks can lead to inaccurate financial records. SOX compliance mandates stringent controls over financial reporting, ensuring that organizations maintain reliable records even when leveraging blockchain.

2. Mitigating risks in decentralized systems

Fintech platforms and blockchain ecosystems often operate without centralized oversight, making it challenging to identify and address fraud or anomalies. SOXโ€™s requirement for managementโ€™s assessment of internal controls and independent audits provides a critical layer of oversight, helping organizations address vulnerabilities in decentralized environments.

3. Building stakeholder trust

The trust of investors, customers, and regulators is paramount for fintech and blockchain companies. Adhering to SOX requirements demonstrates a commitment to transparency and accountability, promoting confidence among stakeholders and distinguishing compliant organizations from their competitors.

4. Addressing regulatory scrutiny

As blockchain and fintech solutions gain adoption, regulatory scrutiny is intensifying. SOX compliance ensures that organizations are prepared to meet these demands by maintaining rigorous financial practices and demonstrating accountability in their operations.

5. Adapting to hybrid financial models

Many organizations are integrating traditional financial systems with blockchain-based solutions. This hybrid approach can create gaps in controls and reporting mechanisms. Leveraging blockchain in compliance with SOX helps bridge these gaps by enforcing comprehensive internal controls that adapt to both traditional and innovative systems.

6. Promoting operational efficiency

By enforcing stringent controls and systematic processes, SOX compliance encourages better business practices and operational efficiency. This results in more accurate financial reporting, reduced manual interventions, and streamlined processes, which ultimately support better decision-making and resource allocation.

7. Future proofing against emerging technologies

Blockchain and fintech are continuously evolving, and organizations must adapt to new technologies. SOX compliance offers a flexible framework that can scale and evolve with these changes, ensuring that financial reporting and internal controls remain relevant and effective in the face of new technological challenges and opportunities.

Tips to get SOX compliant for fintech and blockchain companies


1. Understand SOX Requirements

  • Familiarize yourself with the key SOX sections, especially Section 302 (corporate responsibility for financial reports) and Section 404 (internal control over financial reporting).
  • Identify the specific areas that apply to your companyโ€™s financial reporting, internal controls, and auditing processes.

2. Form a Compliance Team

  • Assemble an internal team including executives, compliance officers, and IT staff.
  • Consider hiring external experts like auditors to guide the process.

3. Assess Current Financial Processes

  • Review existing financial systems, processes, and internal controls to identify gaps.
  • Document and ensure that these processes are auditable and compliant with SOX.

4. Implement Financial Reporting Systems

  • Automate financial reporting to ensure timely, accurate results.
  • Regularly conduct internal audits to confirm financial controls are working effectively.

5. Strengthen Data Security

  • Implement strong encryption, multi-factor authentication, and role-based access control (RBAC) to secure financial data.
  • Ensure regular backups and disaster recovery plans are in place.

6. Create and Document Policies

  • Develop formal policies for internal controls, financial reporting, and data handling.
  • Train employees on SOX compliance and ensure clear communication about financial responsibilities.

7. Establish Internal Control Framework

  • Build a solid internal control framework, focusing on accuracy, completeness, and fraud prevention in financial reporting.
  • Regularly test and validate controls, and consider third-party validation for independent assurance.

8. Disclose Material Changes in Real-Time

  • Develop a process for promptly disclosing any material changes to financial data, ensuring transparency with stakeholders.

9. Prepare for External Audits

  • Engage an independent auditor to review your financial processes and internal controls.
  • Organize records and ensure a clear audit trail to make the audit process smoother.

10. Monitor and Maintain Compliance

  • Continuously monitor financial systems and internal controls to detect errors or fraud.
  • Review and update systems regularly to ensure ongoing SOX compliance.

11. Develop a Compliance Culture

  • Encourage a company-wide focus on SOX compliance, transparency, and accountability.
  • Provide regular training and leadership to instill a culture of compliance.

Conclusion

In the fast-paced era of blockchain and fintech, SOX compliance has evolved from a regulatory necessity to a strategic cornerstone. By driving accurate financial reporting, minimizing risks, and cultivating trust, it sets the stage for lasting growth and innovation. Companies that prioritize compliance and auditing standards don't just safeguard their operations; they also position themselves as forward-thinking leaders in the rapidly transforming financial landscape.

The post SOX Compliance and Its Importance in Blockchain & Fintech appeared first on Information Security Consulting Company - VISTA InfoSec.

PowerShell for Hackers: Survival Edition, Part 1

25 September 2025 at 15:27

Welcome back, cyberwarriors.

We're continuing our look at how PowerShell can be used in offensive operations, but this time with survival in mind. When you're operating in hostile territory, creativity and flexibility keep you alive. PowerShell is a powerful tool, and how well it serves you depends on how cleverly you use it. The more tricks you know, the better you'll be at adapting when things get tense. In today's chapter we're focusing on a core part of offensive work: surviving while you're inside the target environment. These approaches have proven themselves in real operations. The longer you blend in and avoid attention, the more you can accomplish.

We'll split this series into several parts. This first piece is about reconnaissance and learning the environment you've entered. If you map the perimeter and understand the scope of your target up front, you'll be far better placed to move into exploitation without triggering traps defenders have set up. It takes patience. As OTW says, true compromises usually require time and persistence. Defenders often rely on predictable detection patterns, and that predictability is where many attackers get caught. Neglecting the basics is a common and costly mistake.

When the stakes are high, careless mistakes can ruin everything. You can lose access to a target full of valuable information and damage your reputation among other hackers. That's why we made this guide to help you use PowerShell in ways that emphasize staying undetected and keeping access. Every move should be calculated. Risk is part of the job, but it should never be reckless. That's also why getting comfortable with PowerShell matters, as it gives you the control and flexibility you need to act professionally.

If you read our earlier article PowerShell for Hackers: Basics, then some of the commands in Part 1 will look familiar. In this article we build on those fundamentals and show how to apply them with survival and stealth as the priority.

Basic Reconnaissance

Hostname

Once you have access to a host, perhaps after a compromise or phishing attack, the first step is to find out exactly which system you have landed on. That knowledge is the starting point for planning lateral movement and possible domain compromise:

PS > hostname

running hostname command in powershell

Sometimes the hostname is not very revealing, especially in networks that are poorly organized or where the domain setup is weak. On the other hand, when you break into a large company's network, you'll often see machines labeled with codes instead of plain names. That's because IT staff need a way to keep track of thousands of systems without getting lost. Those codes aren't random; they follow a logic. If you spend some time figuring out the pattern, you might uncover hints about how the company structures its network.
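As an illustration of decoding such a convention, here is a small sketch. The SITE-ROLE-NUMBER pattern is entirely hypothetical; real naming schemes differ per organization, so treat this as a template for whatever pattern you actually observe:

```python
import re

# Hypothetical naming convention: SITE-ROLE-NNNN, e.g. "NYC-WS-0012"
# (site code, machine role, sequence number). Adjust the regex to match
# the convention you discover in the target network.
PATTERN = re.compile(r"^(?P<site>[A-Z]{2,4})-(?P<role>[A-Z]{2,3})-(?P<num>\d{2,4})$")

def decode_hostname(name: str):
    """Split a hostname into its convention fields, or None if it doesn't fit."""
    m = PATTERN.match(name.upper())
    if not m:
        return None
    return {"site": m["site"], "role": m["role"], "number": int(m["num"])}

print(decode_hostname("nyc-ws-0012"))
# {'site': 'NYC', 'role': 'WS', 'number': 12}
```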

System Information

To go further, you can get detailed information about the machine itself. This includes whether it is domain-joined, its hardware resources, installed hotfixes, and other key attributes.

PS > systeminfo

running systeminfo in powershell

This command is especially useful for discovering the domain name, identifying whether the machine is virtual, and assessing how powerful it is. A heavily provisioned machine is often important. Just as valuable is the operating system type. For instance, compromising a Windows server is a significant opportunity. Servers typically permit multiple RDP connections and are less likely to be personal workstations. This makes them more attractive for techniques such as LSASS and SAM harvesting. Servers also commonly host information that is valuable for reconnaissance, as well as shares that can be poisoned with malicious LNK files pointing back to your Responder.

Once poisoned, any user accessing those shares automatically leaks their NTLMv2 hashes to you, which you can capture and later crack using tools like Hashcat.

OS Version

If your shell is unstable or noninteractive and you cannot risk breaking it with systeminfo, here is your alternative:

PS > Get-CimInstance -ClassName Win32_OperatingSystem | Select-Object Caption

finding out os version in powershell

Different versions of Windows expose different opportunities for abuse, so knowing the precise version is always beneficial.

Patches and Hotfixes

Determining patch levels is important. It tells you which vulnerabilities might still be available for exploitation. End-user systems tend to be updated more regularly, but servers and domain controllers often lag behind. Frequently they lack antivirus protection, still run legacy operating systems like Windows Server 2012 R2, and hold valuable data. This makes them highly attractive targets.

Many administrators mistakenly believe isolating domain controllers from the internet is sufficient security. The consequence is often unpatched systems. We once compromised an organization in under 15 minutes with the NoPac exploit, starting from a low-privileged account, purely because their DC was outdated.

To review installed hotfixes:

PS > wmic qfe get Caption,Description,HotFixID,InstalledOn

finding hotfixes with powershell

Remember, even if a system is unpatched, modern antivirus tools may still detect exploitation attempts. Most maintain current signature databases.
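To gauge patch lag, the install dates from the hotfix output can be compared against a reference date. A minimal sketch with hypothetical data; the KB numbers and threshold are illustrative:

```python
from datetime import date

# Hypothetical (KB, install date) pairs parsed from "wmic qfe" output.
# Flag anything older than 180 days relative to a fixed reference date.
hotfixes = [("KB5005565", date(2021, 9, 15)),
            ("KB5012170", date(2022, 8, 9))]
reference = date(2023, 1, 1)

stale = [kb for kb, installed in hotfixes if (reference - installed).days > 180]
print(stale)  # ['KB5005565']
```

A long gap since the newest hotfix is a strong hint that the host, and often the whole segment it sits on, is poorly maintained.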

Defenses

Before proceeding with exploitation or lateral movement, always understand the defensive posture of the host.

Firewall Rules

Firewall configurations can reveal why certain connections succeed or fail and may contain clues about the broader network. You can find this out through passive reconnaissance:

PS > netsh advfirewall show allprofiles

finding firewall rules with powershell

The output may seem overwhelming, but the more time you spend analyzing rules, the more valuable the information becomes. As you can see above, firewalls can generate logs that are later collected by SIEM tools, so be careful before you initiate any connection.

Antivirus

Antivirus software is common on most systems. Since our objective here is to survive using PowerShell only, we won't discuss techniques for abusing AV products or bypassing AMSI, which are routinely detected by those defenses. That said, if you have sufficient privileges you can query installed security products directly to learn what's present and how they're configured. You might get lucky and find a server with no antivirus at all, but you should treat that as the exception rather than the rule:

PS > Get-CimInstance -Namespace root/SecurityCenter2 -ClassName AntivirusProduct

finding the antivirus product on windows with powershell

This method reliably identifies the product in use, not just Microsoft Defender. For more details, such as signature freshness and scan history, run this:

PS > Get-MpComputerStatus

getting a detailed report about the antivirus on windows with powershell

To maximize survivability, avoid using malware on these machines. Even if logging is not actively collected, you must treat survival mode as if every move is observed. The absence of endpoint protection does not mean you can do whatever you want. We have seen people install Gsocket on Linux boxes thinking it would secure their access, but network monitoring quickly spotted those sockets and defenders shut them down. The same applies to Windows.

Script Logging

Perhaps the most important check is determining whether script logging is enabled. This feature records every executed PowerShell command.

PS > Get-ItemProperty "HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging"

checking script logging in powershell

If EnableScriptBlockLogging is set to 1, all your activity is being stored in the PowerShell Operational log. Later we will show you strategies for operating under such conditions.

Users

Identifying who else is present on the system is another critical step.

The quser command is user-focused, showing logged-in users, idle times, and session details:

PS > quser

running quser command in powershell

Meanwhile, qwinsta is session-focused, showing both active and inactive sessions. This is particularly useful when preparing to dump LSASS, as credentials from past sessions often remain in memory. It also shows the connection type, whether console or RDP.

PS > qwinsta

running qwinsta command in powershell

Network Enumeration

Finding your way through a hostile network can be challenging. Sometimes you stay low and watch, sometimes you poke around to test the ground. Here are the essential commands to keep you alive.

ARP Cache

The ARP table records known hosts with which the machine has communicated. It is both a reconnaissance resource and an attack surface:

PS > arp -a

running arp to find known hosts

ARP entries can reveal subnets and active hosts. If you just landed on a host, this could be valuable.

Note: a common informal convention is that smaller organizations use the 192.168.x.x address space, mid-sized organizations use 172.16.x.xโ€“172.31.x.x, and larger enterprises operate within 10.0.0.0/8. This is not a rule, but it is often true in practice.
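That informal convention can be checked with Python's standard ipaddress module. The labels mirror the note above and are heuristics, not rules:

```python
import ipaddress

# RFC 1918 private ranges, labeled per the informal sizing convention above.
PRIVATE_RANGES = {
    "192.168.0.0/16": "small office / home",
    "172.16.0.0/12":  "mid-sized organization",
    "10.0.0.0/8":     "large enterprise",
}

def classify(ip: str) -> str:
    """Return the informal label for the private range an IP falls in."""
    addr = ipaddress.ip_address(ip)
    for cidr, label in PRIVATE_RANGES.items():
        if addr in ipaddress.ip_network(cidr):
            return label
    return "public or other"

print(classify("10.23.4.8"))   # large enterprise
print(classify("172.20.1.1"))  # mid-sized organization
```

Feeding the IPs harvested from the ARP cache through a check like this gives a quick first impression of which subnets exist and how the network might be sized.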

Known Hosts

SSH is natively supported on modern Windows but less frequently used, since tools like PuTTY are more common. Still, it is worth checking for known hosts, as they might give you insights about the network segmentation and subnets:

PS > cat $env:USERPROFILE\.ssh\known_hosts

Routes

The route table exposes which networks the host is aware of, including VLANs, VPNs, and static routes. This is invaluable for mapping internal topology and planning pivots:

PS > route print

finding routes with route print

Learning how to read the output can take some time, but it's definitely worth it. We know many professional hackers who use this command as part of their recon toolbox.

Interfaces

Knowing the network interfaces installed on compromised machines helps you understand connectivity and plan next steps. Always record each host and its interfaces in your notes:

PS > ipconfig /all

showing interfaces with ipconfig all

Maintaining a record of interfaces across compromised hosts prevents redundant authentication attempts and gives a clearer mindmap of the environment.

Net Commands

The net family of commands remains highly useful, though they are often monitored. Later we will discuss bypass methods. For now, let's review their reconnaissance value.

Password Policy

Knowing the password policy helps you see if brute force or spraying is possible. But keep in mind, these techniques are too noisy for survival mode:

PS > net accounts /domain

Groups and Memberships

Local groups, while rarely customized in domain environments, can still be useful:

PS > net localgroup

listing local groups with powershell

Domain groups are far more significant:

PS > net group /domain

Checking local Administrators can show privilege escalation opportunities:

PS > net localgroup Administrators

listing members of a local group with powershell

Investigating domain group memberships often reveals misconfigured privileges:

PS > net group <group_name> /domain

With sufficient rights, groups can be manipulated:

PS > net localgroup Administrators hacker /add

PS > net group "Marketing" user /add /domain

interacting with localgroups with powershell

However, directly adding accounts to highly privileged groups like Domain Admins is reckless. These groups are closely monitored. Experienced hackers instead look for overlooked accounts, such as users with the "password not required" attribute or exposed credentials in LDAP fields.

Domain Computers and Controllers

Domain computer lists reveal scope, while controllers are critical to identify and study:

PS > net group "Domain Computers" /domain

PS > net group "Domain Controllers" /domain

Controllers in particular hold the keys to Active Directory. LDAP queries against them can return huge amounts of intelligence.

Domain Users

Enumerating users can give you useful account names. Administrators might include purpose-based prefixes such as "adm" for admin accounts or "svc" for service accounts, and descriptive fields sometimes contain role notes or credential hints.

PS > net user /domain
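The resulting account list can then be grouped by those prefixes to pick out likely admin and service accounts. A minimal sketch with hypothetical usernames:

```python
# Hypothetical account names as dumped from "net user /domain";
# group them by the common "adm"/"svc" prefixes mentioned above.
users = ["adm.jsmith", "svc-sql01", "kbrown", "svc-backup", "adm.rlee"]

by_prefix = {
    "admin":   [u for u in users if u.startswith("adm")],
    "service": [u for u in users if u.startswith("svc")],
}
print(by_prefix["service"])  # ['svc-sql01', 'svc-backup']
```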

Shares

Shares are often overlooked by beginners, and that's a common mistake. A share is simply a place where valuable items can be stored. At first glance it may look like a pile of junk full of unnecessary files, and that is often true, since shares are usually filled with paperwork and bureaucratic documents. But among the clutter we often find useful IT data: passwords, VPN configurations, network maps and other items. Documents owned by assistants are just as important. Assistants usually manage things for their directors, so you'll often find directors' private information, passwords, emails, and similar items. Here is how you find the local shares hosted on your computer:

PS > net share

listing local shares with net share with powershell

Remote shares can also be listed:

PS > net view \\computer /ALL

Enumerating all domain shares creates a lot of noise, but it can be done if you don't have a clear understanding of the hosts. We do not recommend doing this. If the host names already give you enough information about their purpose, for example "DB" or "BACKUP", then further enumeration isn't necessary. Going deeper can get you caught, even on a small or poorly managed network. If you decide to do it, here is how you can enumerate all shares in the domain:

PS > net view /all /domain[:domainname]

Interesting shares can be mounted for detailed searching:

PS > net use x: \\computer\share

You can search through documents in a share using specific keywords:

PS > Get-ChildItem -Recurse | Select-String -Pattern "keyword" -SimpleMatch -CaseSensitive:$false

Summary

That's it for Part 1 of the Survival Series. We're excited to keep this going, showing you different ways to work with systems even when you're limited in what you can do. Sure, the commands you have are restricted, but survival sometimes means taking risks. If you play it too safe, you might get stuck and have no way forward. Time can work against you, and making bold moves at the right moment can pay off.

The goal of this series is to help you get comfortable with the Windows tools you have at your disposal for recon and pentesting. There will be times when you don't have much, and you'll need to make the most of what's available.

In Part 2, we'll go deeper, looking at host inspections, DC queries, and the Active Directory modules that can give you even more insight. Relying on these native tools makes it easier to stay under the radar, even when things are going smoothly. As you get more experience, you'll find that built-in tools are often the simplest, most reliable way to get the job done.

The post PowerShell for Hackers: Survival Edition, Part 1 first appeared on Hackers Arise.

SOC 2 Compliance for SaaS: How to Win and Keep Client Trust

23 April 2025 at 03:16

The Software as a Service (SaaS) industry has seen both great expansion and notable downturns in recent years, with key market shifts redefining the landscape. As companies adapt to the shifting SaaS landscape, SOC 2 compliance for SaaS has emerged as a key priority: not just a checkbox for security, but a signal of trustworthiness and a commitment to protecting customer data in an increasingly cautious market. After reaching record highs in 2021, the SaaS industry faced a major downturn in 2022, with company valuations dropping by almost 50%, according to Meritech Capital.

This downturn shook the market, creating pressure around profitability and customer retention. In 2024, however, it is a different story: despite the challenges, the SaaS industry is stabilizing, with B2B SaaS companies projected to grow at an 11% compound annual growth rate (CAGR) and B2C SaaS at 8% for the remainder of the year, according to a recent report from Paddle.

This period of cautious optimism underscores an undeniable priority for SaaS companies: client trust, particularly as clients increasingly scrutinize data security and compliance practices. Achieving SOC 2 (System and Organization Controls 2) compliance has become a critical step in building this trust, as it ensures that a company's data handling and security protocols meet the appropriate standards.

In this guide, we will learn why SOC 2 for SaaS companies is essential and offer practical steps to achieve SOC 2 compliance for SaaS in 2024.

Why do SaaS companies need SOC 2?

As a SaaS company, you handle a vast amount of customer data, from personal information to financial records. Data breaches and mishandling of that information can not only hurt your reputation but also lead to the loss of your clients' trust. As noted in the introduction, SOC 2 is an important step toward the trust and transparency you need to assure clients that their data is protected at every level.

By being SOC 2 compliant, you can stand out in a competitive market, demonstrating a serious, structured approach to data security and a willingness to go the extra mile to safeguard your clients' trust.

Plus, many companies often need to comply with various regulations to operate securely on a global scale which often includes frameworks like ISO 27001, a widely recognized security standard. When comparing SOC 2 vs ISO 27001, the key difference lies in their specific scope and focus.

While SOC 2 emphasizes trust principles for data security, ISO 27001 provides a broader framework for information security management. This is also true for other regulations like GDPR or HIPAA, which may apply depending on your industry or location.

Once your SaaS company becomes SOC 2 compliant, you'll not only be able to demonstrate a proactive approach to data security but also align with broader regulatory standards. This will build trust, strengthen your reputation, and position your company as a security-focused partner in an increasingly competitive marketplace.

soc2 compliance checklist

Core Trust Principles: Building blocks of SOC 2 for SaaS

SOC 2 compliance is built around five core trust principles that serve as the framework's foundation. Each principle addresses a crucial aspect of data protection, making SOC 2 comprehensive and adaptable to SaaS environments:

  1. Security: Measures to protect against unauthorized access, such as firewalls, encryption, and intrusion detection.
  2. Availability: Ensuring systems are accessible to users, with safeguards against downtime and disruptions.
  3. Processing integrity: Assuring that systems process data accurately, reliably, and free from errors.
  4. Confidentiality: Protecting sensitive data from unauthorized disclosure, particularly in shared environments.
  5. Privacy: Ensuring that personal data is collected, used, retained, and disposed of in compliance with privacy regulations.

By adhering to the above principles, your SaaS organization can build a strong security foundation that meets client expectations and supports compliance.

Which type of SOC 2 report is suitable for SaaS?

  • SOC 2 Type 1: This report assesses the design of your company's controls at a specific point in time and verifies whether the necessary controls are in place. If your SaaS company is just starting out with SOC 2 compliance, a Type 1 report is an ideal starting point.
  • SOC 2 Type 2: This report is more comprehensive and goes a step further, evaluating the effectiveness of those controls over a defined period (typically six months to a year). A Type 2 report is ideal if your SaaS company is looking to demonstrate sustained adherence to security practices, a requirement often favored by enterprise-level clients and partners who prioritize reliability and consistency in security measures.

Considering both options, you should first evaluate your company's current stage in the SOC 2 compliance journey and the needs of your clients. If you're just starting out, a SOC 2 Type 1 report is a good first step; if you're working with enterprise clients who require proof of ongoing security practices, a SOC 2 Type 2 report is more appropriate.

Key steps to achieve SOC 2 compliance for SaaS companies

1. Identify the relevant SOC 2 trust principles

Determine which SOC 2 trust principles apply to your business. While SaaS providers prioritize the Security principle, client requirements may require identifying and addressing other principles such as Availability or Confidentiality.

2. Conduct a readiness assessment

Perform a SOC 2 readiness assessment or gap analysis to identify gaps in your current security practices compared to SOC 2 requirements. This helps in understanding what controls need to be added or improved.

3. Establish and document security policies and procedures

Develop detailed, documented policies and procedures addressing each selected SOC 2 principle. These should cover areas like data encryption, access control, incident response, and more, and will serve as the foundation for your compliance efforts.

4. Implement required security controls

Based on the readiness assessment, implement or strengthen controls to meet SOC 2 standards. This can include access management protocols, network monitoring, secure software development practices, and continuous vulnerability assessments.

5. Train employees on SOC 2 requirements

Conduct regular training sessions to ensure employees understand their role in achieving and maintaining SOC 2 compliance. This step is crucial to prevent insider threats and maintain a high standard of security awareness.

6. Engage in ongoing monitoring and logging

Set up logging and monitoring systems to track access, detect security incidents, and provide evidence of control operation. For SOC 2 Type 2 compliance, monitoring must demonstrate consistent control effectiveness over a period (usually 3 to 12 months).
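One way to make such logs useful as audit evidence is to chain entries with hashes, so that any later tampering is detectable. This is a simplified sketch, not a full logging pipeline; the event strings and field names are illustrative:

```python
import hashlib
import json

def append_entry(log, event):
    """Append an event whose hash covers the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    log.append({"event": event, "prev": prev,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log):
    """Recompute the chain; any edited or reordered entry breaks it."""
    prev = "0" * 64
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "user alice granted read access")
append_entry(log, "quarterly access review completed")
print(verify(log))  # True
log[0]["event"] = "user alice granted admin access"  # simulated tampering
print(verify(log))  # False
```

The point for an auditor is that the log is tamper-evident: a clean chain is evidence the recorded history has not been silently rewritten.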

7. Conduct a readiness review with an auditor

Engage a SOC 2 auditor for a readiness review, which provides an informal evaluation of your current controls and identifies areas needing improvement. This step prepares you for the official audit by allowing time to address any remaining gaps.

8. Schedule and complete the SOC 2 audit

Once ready, schedule the SOC 2 audit with a certified public accounting (CPA) firm. For a Type 1 report, the audit will assess controls at a specific point in time, while a Type 2 audit will assess controls over an extended period.

9. Address findings and achieve continuous compliance

If the audit identifies areas for improvement, address them promptly. Once compliant, continue regular monitoring, updating policies, and conducting internal audits to maintain SOC 2 standards over time.

Check out this YouTube video to learn in detail about the SOC 2 requirements and practical tips to ensure a smooth audit process.

SOC2 Audit and Attestation

The best way to get SOC 2 ready

While securing SOC 2 compliance is definitely beneficial, the process can feel overwhelming. This is especially true for SaaS companies that are just starting out, as complex regulations and security standards can make it challenging to know where to start and what to prioritize.

Plus, SOC 2 compliance requires not only the implementation of strong security measures but also an ongoing commitment to maintaining them, which can be time-consuming and resource-intensive. This is where VISTA InfoSec comes in. At VISTA InfoSec, we provide SOC 2 audit and attestation services, helping SaaS providers confidently achieve and sustain SOC 2 compliance.

Our approach to SOC 2 compliance is designed to take the stress out of the process. With us you will not only meet compliance standards but also build a solid foundation of trust with your clients, proving your dedication to protecting their data. Contact us today to start your journey to SOC 2 compliance. You can also book a free one-time consultation with our experts by filling in the 'Enquire Now' form.

The post SOC 2 Compliance for SaaS: How to Win and Keep Client Trust appeared first on Information Security Consulting Company - VISTA InfoSec.

DC-Sonar - Analyzing AD Domains For Security Risks Related To User Accounts

By: Unknown
25 January 2023 at 06:30

DC Sonar Community

Repositories

The project consists of repositories:

Disclaimer

It is intended for educational purposes only.

Avoid using it on a production Active Directory (AD) domain.

No contributor assumes any responsibility for its use.

Social media

Check out our Red Team community Telegram channel

Description

Architecture

For the visual descriptions, open the diagram files using the diagrams.net tool.

The app consists of:


Functionality

The DC Sonar Community provides functionality for analyzing AD domains for security risks related to accounts:

  • Register an AD domain for analysis in the app

  • See the status of domain analysis processes

  • Dump and brute-force NTLM hashes from the configured AD domains to list accounts with weak and vulnerable passwords

  • Analyze AD domain accounts to list those whose passwords never expire

  • Analyze AD domain accounts by their NTLM password hashes to find accounts and domains where passwords are reused

Installation

Docker

In progress ...

Manually using dpkg

It is assumed that you have a clean Ubuntu Server 22.04 and an account with the username "user".

The app will install to /home/user/dc-sonar.

Future releases may offer a more flexible installation.

Download dc_sonar_NNNN.N.NN-N_amd64.tar.gz from the latest release to the server.

Create a folder for extracting files:

mkdir dc_sonar_NNNN.N.NN-N_amd64

Extract the downloaded archive:

tar -xvf dc_sonar_NNNN.N.NN-N_amd64.tar.gz -C dc_sonar_NNNN.N.NN-N_amd64

Go to the folder with the extracted files:

cd dc_sonar_NNNN.N.NN-N_amd64/

Install PostgreSQL:

sudo bash install_postgresql.sh

Install RabbitMQ:

sudo bash install_rabbitmq.sh

Install dependencies:

sudo bash install_dependencies.sh

It will ask for confirmation of adding the ppa:deadsnakes/ppa repository. Press Enter.

Install dc-sonar itself:

sudo dpkg -i dc_sonar_NNNN.N.NN-N_amd64.deb

It will ask for information for creating a Django admin user. Provide a username, email, and password.

It will ask twice for information for creating a self-signed SSL certificate. Provide the required information.

Open: https://localhost

Enter the Django admin user credentials set during the installation process.

Style guide

See the information in STYLE_GUIDE.md

Deployment for development

Docker

In progress ...

Manually using Windows host and Ubuntu Server guest

In this case, we will set up the environment for editing code on the Windows host while running Python code on the Ubuntu guest.

Set up the virtual machine

Create a virtual machine with 2 CPU, 2048 MB RAM, 10GB SSD using Ubuntu Server 22.04 iso in VirtualBox.

If the Ubuntu installer asks to update itself before the VM installation, agree.

Choose to install OpenSSH Server.

VirtualBox Port Forwarding Rules:

Name                         Protocol  Host IP    Host Port  Guest IP   Guest Port
SSH                          TCP       127.0.0.1  2222       10.0.2.15  22
RabbitMQ management console  TCP       127.0.0.1  15672      10.0.2.15  15672
Django Server                TCP       127.0.0.1  8000       10.0.2.15  8000
NTLM Scrutinizer             TCP       127.0.0.1  5000       10.0.2.15  5000
PostgreSQL                   TCP       127.0.0.1  25432      10.0.2.15  5432

Config Windows

Download and install Python 3.10.5.

Create a folder for the DC Sonar project.

Go to the project folder using Git for Windows:

cd '{PATH_TO_FOLDER}'

Make Windows installation steps for dc-sonar-user-layer.

Make Windows installation steps for dc-sonar-workers-layer.

Make Windows installation steps for ntlm-scrutinizer.

Make Windows installation steps for dc-sonar-frontend.

Set shared folders

Make steps from "Open VirtualBox" to "Reboot VM", but add shared folders to VM VirtualBox with "Auto-mount", like in the picture below:

After reboot, run command:

sudo adduser $USER vboxsf

Log out and log back in with the user account.

In the /home/user directory, you can now use the mounted folders:

ls -l
Output:
total 12
drwxrwx--- 1 root vboxsf 4096 Jul 19 13:53 dc-sonar-user-layer
drwxrwx--- 1 root vboxsf 4096 Jul 19 10:11 dc-sonar-workers-layer
drwxrwx--- 1 root vboxsf 4096 Jul 19 14:25 ntlm-scrutinizer

Config Ubuntu Server

Config PostgreSQL

Install PostgreSQL on Ubuntu 20.04:

sudo apt update
sudo apt install postgresql postgresql-contrib
sudo systemctl start postgresql.service

Create the admin database account:

sudo -u postgres createuser --interactive
Output:
Enter name of role to add: admin
Shall the new role be a superuser? (y/n) y

Create the dc_sonar_workers_layer database account:

sudo -u postgres createuser --interactive
Output:
Enter name of role to add: dc_sonar_workers_layer
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n

Create the dc_sonar_user_layer database account:

sudo -u postgres createuser --interactive
Output:
Enter name of role to add: dc_sonar_user_layer
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n

Create the back_workers_db database:

sudo -u postgres createdb back_workers_db

Create the web_app_db database:

sudo -u postgres createdb web_app_db

Run the psql:

sudo -u postgres psql

DC-Sonar - Analyzing AD Domains For Security Risks Related To User Accounts

By: Unknown
25 January 2023 at 06:30

DC Sonar Community

Repositories

The project consists of repositories:

Disclaimer

It is intended for educational purposes only.

Avoid using it on a production Active Directory (AD) domain.

The contributors assume no responsibility for any use of this tool.

Social media

Check out our Red Team community Telegram channel

Description

Architecture

For the visual descriptions, open the diagram files using the diagrams.net tool.

The app consists of:


Functionality

The DC Sonar Community provides functionality for analyzing AD domains for security risks related to accounts:

  • Register an AD domain for analysis in the app

  • See the statuses of domain analysis processes

  • Dump and brute-force NTLM hashes from registered AD domains to list accounts with weak and vulnerable passwords

  • Analyze AD domain accounts to list ones whose passwords never expire

  • Analyze AD domain accounts by their NTLM password hashes to identify accounts and domains where passwords are reused
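The password-reuse check above can be illustrated with a small sketch: group accounts by NTLM hash and report any hash shared by more than one account. This is a simplified illustration with made-up hashes, not the project's actual code:

```python
from collections import defaultdict

# (username, domain, NTLM hash) tuples; the hash values below are made up
accounts = [
    ("alice", "corp.local", "aad3b435b51404ee"),
    ("bob", "corp.local", "5f4dcc3b5aa765d6"),
    ("svc_backup", "dev.local", "5f4dcc3b5aa765d6"),
]

# Group accounts by hash value
by_hash = defaultdict(list)
for user, domain, ntlm in accounts:
    by_hash[ntlm].append((user, domain))

# A hash seen on more than one account indicates password reuse,
# possibly across domains
reused = {h: users for h, users in by_hash.items() if len(users) > 1}
for h, users in reused.items():
    print(f"{h} is shared by {users}")
```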

Installation

Docker

In progress ...

Manually using dpkg

It is assumed that you have a clean Ubuntu Server 22.04 installation and an account with the username "user".

The app will be installed to /home/user/dc-sonar.

Future releases may provide a more flexible installation.

Download dc_sonar_NNNN.N.NN-N_amd64.tar.gz from the latest release to the server.

Create a folder for extracting files:

mkdir dc_sonar_NNNN.N.NN-N_amd64

Extract the downloaded archive:

tar -xvf dc_sonar_NNNN.N.NN-N_amd64.tar.gz -C dc_sonar_NNNN.N.NN-N_amd64

Go to the folder with the extracted files:

cd dc_sonar_NNNN.N.NN-N_amd64/

Install PostgreSQL:

sudo bash install_postgresql.sh

Install RabbitMQ:

sudo bash install_rabbitmq.sh

Install dependencies:

sudo bash install_dependencies.sh

It will ask for confirmation before adding the ppa:deadsnakes/ppa repository. Press Enter.

Install dc-sonar itself:

sudo dpkg -i dc_sonar_NNNN.N.NN-N_amd64.deb

It will ask for information for creating a Django admin user. Provide a username, email address and password.

It will ask twice for information for creating a self-signed SSL certificate. Provide the required information.

Open: https://localhost

Enter the Django admin user credentials that you set during the installation.

Style guide

See the information in STYLE_GUIDE.md

Deployment for development

Docker

In progress ...

Manually using Windows host and Ubuntu Server guest

In this case, we will set up the environment for editing code on the Windows host while running Python code on the Ubuntu guest.

Set up the virtual machine

Create a virtual machine with 2 CPUs, 2048 MB RAM and a 10 GB SSD in VirtualBox, using the Ubuntu Server 22.04 ISO.

If the Ubuntu installer asks to update itself before the installation, agree.

Choose to install OpenSSH Server.

VirtualBox Port Forwarding Rules:

Name                         Protocol  Host IP    Host Port  Guest IP   Guest Port
SSH                          TCP       127.0.0.1  2222       10.0.2.15  22
RabbitMQ management console  TCP       127.0.0.1  15672      10.0.2.15  15672
Django Server                TCP       127.0.0.1  8000       10.0.2.15  8000
NTLM Scrutinizer             TCP       127.0.0.1  5000       10.0.2.15  5000
PostgreSQL                   TCP       127.0.0.1  25432      10.0.2.15  5432

Config Windows

Download and install Python 3.10.5.

Create a folder for the DC Sonar project.

Go to the project folder using Git for Windows:

cd '{PATH_TO_FOLDER}'

Make Windows installation steps for dc-sonar-user-layer.

Make Windows installation steps for dc-sonar-workers-layer.

Make Windows installation steps for ntlm-scrutinizer.

Make Windows installation steps for dc-sonar-frontend.

Set shared folders

Make the steps from "Open VirtualBox" to "Reboot VM", but also add the shared folders to the VM in VirtualBox with "Auto-mount" enabled.

After reboot, run command:

sudo adduser $USER vboxsf

Log out and log back in with that user account.

In the /home/user directory, you can now use the mounted folders:

ls -l
Output:
total 12
drwxrwx--- 1 root vboxsf 4096 Jul 19 13:53 dc-sonar-user-layer
drwxrwx--- 1 root vboxsf 4096 Jul 19 10:11 dc-sonar-workers-layer
drwxrwx--- 1 root vboxsf 4096 Jul 19 14:25 ntlm-scrutinizer

Config Ubuntu Server

Config PostgreSQL

Install PostgreSQL (note: the configuration file paths below assume PostgreSQL 12, the default on Ubuntu 20.04; Ubuntu Server 22.04 ships PostgreSQL 14, so adjust the version number in the paths accordingly):

sudo apt update
sudo apt install postgresql postgresql-contrib
sudo systemctl start postgresql.service

Create the admin database account:

sudo -u postgres createuser --interactive
Output:
Enter name of role to add: admin
Shall the new role be a superuser? (y/n) y

Create the dc_sonar_workers_layer database account:

sudo -u postgres createuser --interactive
Output:
Enter name of role to add: dc_sonar_workers_layer
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n

Create the dc_sonar_user_layer database account:

sudo -u postgres createuser --interactive
Output:
Enter name of role to add: dc_sonar_user_layer
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) n
Shall the new role be allowed to create more new roles? (y/n) n

Create the back_workers_db database:

sudo -u postgres createdb back_workers_db

Create the web_app_db database:

sudo -u postgres createdb web_app_db

Run the psql:

sudo -u postgres psql

Set a password for the admin account:

ALTER USER admin WITH PASSWORD '{YOUR_PASSWORD}';

Set a password for the dc_sonar_workers_layer account:

ALTER USER dc_sonar_workers_layer WITH PASSWORD '{YOUR_PASSWORD}';

Set a password for the dc_sonar_user_layer account:

ALTER USER dc_sonar_user_layer WITH PASSWORD '{YOUR_PASSWORD}';

Grant CRUD permissions for the dc_sonar_workers_layer account on the back_workers_db database:

\c back_workers_db
GRANT CONNECT ON DATABASE back_workers_db to dc_sonar_workers_layer;
GRANT USAGE ON SCHEMA public to dc_sonar_workers_layer;
GRANT ALL ON ALL TABLES IN SCHEMA public TO dc_sonar_workers_layer;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO dc_sonar_workers_layer;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA public TO dc_sonar_workers_layer;

Grant CRUD permissions for the dc_sonar_user_layer account on the web_app_db database:

\c web_app_db
GRANT CONNECT ON DATABASE web_app_db to dc_sonar_user_layer;
GRANT USAGE ON SCHEMA public to dc_sonar_user_layer;
GRANT ALL ON ALL TABLES IN SCHEMA public TO dc_sonar_user_layer;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO dc_sonar_user_layer;
GRANT ALL ON ALL FUNCTIONS IN SCHEMA public TO dc_sonar_user_layer;

Exit of the psql:

\q

Open the pg_hba.conf file:

sudo nano /etc/postgresql/12/main/pg_hba.conf

Add the following lines to allow connections from the host machine to PostgreSQL, then save and close the file (the broad admin rule is acceptable for this NATed lab VM, but do not use it in production):

# IPv4 local connections:
host all all 127.0.0.1/32 md5
host all admin 0.0.0.0/0 md5

Open the postgresql.conf file:

sudo nano /etc/postgresql/12/main/postgresql.conf

Change the parameters listed below, save the changes and close the file:

listen_addresses = 'localhost,10.0.2.15'
shared_buffers = 512MB
work_mem = 5MB
maintenance_work_mem = 100MB
effective_cache_size = 1GB

Restart the PostgreSQL service:

sudo service postgresql restart

Check the PostgreSQL service status:

service postgresql status

Check the log file if needed:

tail -f /var/log/postgresql/postgresql-12-main.log

Now you can connect to the created databases from Windows using the admin account and a client such as DBeaver.
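Given the VirtualBox port-forwarding rule above (host port 25432 mapped to the guest's 5432), a client on the Windows host would use connection settings along these lines (the password is whatever you set for the admin role earlier):

```
Host:     127.0.0.1
Port:     25432
Database: web_app_db
User:     admin
```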

Config RabbitMQ

Install RabbitMQ using the install_rabbitmq.sh script.

Enable the management plugin:

sudo rabbitmq-plugins enable rabbitmq_management

Create the RabbitMQ admin account:

sudo rabbitmqctl add_user admin {YOUR_PASSWORD}

Tag the created user for full management UI and HTTP API access:

sudo rabbitmqctl set_user_tags admin administrator

Open the management UI at http://localhost:15672/.

Install Python 3.10

Ensure that your system is up to date and the required packages are installed:

sudo apt update && sudo apt upgrade -y

Install the required dependency for adding custom PPAs:

sudo apt install software-properties-common -y

Then add the deadsnakes PPA to the APT package manager's sources list:

sudo add-apt-repository ppa:deadsnakes/ppa

Install Python 3.10:

sudo apt install python3.10=3.10.5-1+focal1

Install the dependencies:

sudo apt install python3.10-dev=3.10.5-1+focal1 libpq-dev=12.11-0ubuntu0.20.04.1 libsasl2-dev libldap2-dev libssl-dev

Install the venv module:

sudo apt-get install python3.10-venv

Check the version of the installed Python:

python3.10 --version

Output:
Python 3.10.5
Hosts

Add the IP addresses of the Domain Controllers to /etc/hosts:

sudo nano /etc/hosts
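Each entry maps a domain controller's IP address to its fully qualified and short hostnames; for example (hypothetical IP and names):

```
10.0.0.10  dc01.corp.example.com  dc01
```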

Layers

Set venv

We have to create the venv one level above, since VirtualBox does not allow creating it inside shared folders.

Go to the home directory where the shared folders are located:

cd /home/user
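The venv creation itself might look like this (the venvs/ path is illustrative, and the snippet falls back to the default python3 if python3.10 is not on PATH):

```shell
# Prefer python3.10 (installed above), fall back to the default python3
PY=python3.10
command -v "$PY" >/dev/null 2>&1 || PY=python3

# Create the venv under the home directory, outside the shared folders
"$PY" -m venv "$HOME/venvs/dc-sonar-user-layer"

# Activate it and show the interpreter version
. "$HOME/venvs/dc-sonar-user-layer/bin/activate"
python -V
```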

Make deploy steps for dc-sonar-user-layer on Ubuntu.

Make deploy steps for dc-sonar-workers-layer on Ubuntu.

Make deploy steps for ntlm-scrutinizer on Ubuntu.

Config modules

Make config steps for dc-sonar-user-layer on Ubuntu.

Make config steps for dc-sonar-workers-layer on Ubuntu.

Make config steps for ntlm-scrutinizer on Ubuntu.

Run

Make run steps for ntlm-scrutinizer on Ubuntu.

Make run steps for dc-sonar-user-layer on Ubuntu.

Make run steps for dc-sonar-workers-layer on Ubuntu.

Make run steps for dc-sonar-frontend on Windows.

Open https://localhost:8000/admin/ in a browser on the Windows host and accept the self-signed certificate.

Open https://localhost:4200/ in the browser on the Windows host and log in as the Django user created earlier.



โŒ
โŒ