ImageSat International, a provider of space-based intelligence solutions, has introduced a new satellite platform named RUNNER, capable of real-time movement tracking from orbit—a capability the company says marks a new era for operational intelligence. According to ISI, RUNNER is the first satellite of its kind to provide dynamic, in-orbit surveillance enhanced by embedded artificial intelligence. […]
The Portuguese Air Force has formally signed a contract with ICEYE for the direct acquisition of a Synthetic Aperture Radar (SAR) satellite, marking the first time the service will fully own and control a space-based intelligence asset. The announcement, made jointly by ICEYE and the Portuguese Air Force, represents a major step in expanding Portugal’s […]
[Josh] aka [Ham Radio Crash Course] is demonstrating this build on his channel and showing every step needed to get something like this working. The first part is finding the correct LoRa module, which will be the bulk of the cost of this project. Unlike those used for most Meshtastic nodes, this one needs to be built for the 433 MHz band. The software running on this module is from TinyGS, which we have featured here before, and which allows a quick and easy setup to listen in to these types of satellites. This build goes much further into detail on building the antenna, though, and also covers some other ancillary tasks like mounting it somewhere outdoors.
With all of that out of the way, though, the setup is able to track hundreds of satellites on very little hardware, as well as display information about each of them. We’d always favor a build that lets us gather data like this directly over using something like a satellite tracking app, although those do have their place. And of course, with slightly more compute and a more directed antenna there is all kinds of other data beaming down that we can listen in on as well, although that’s not always the intent.
Portal Space Systems CEO Jeff Thornburg checks out the vacuum chamber where space hardware is tested. (GeekWire Photo / Alan Boyle)
Editor’s note: This series profiles six of the Seattle region’s “Uncommon Thinkers”: inventors, scientists, technologists and entrepreneurs transforming industries and driving positive change in the world. They will be recognized Dec. 11 at the GeekWire Gala. Uncommon Thinkers is presented in partnership with Greater Seattle Partners.
BOTHELL, Wash. — Before he became the CEO of Portal Space Systems, Jeff Thornburg worked for two of the world’s most innovative space-minded billionaires. Now he’s working on an idea those billionaires never thought to pursue: building a spacecraft powered by the heat of focused sunlight.
Thornburg and his teammates are aiming to make Bothell-based Portal the first commercial venture to capitalize on solar thermal propulsion, a technology studied decades ago by NASA and the U.S. Air Force. The concept involves sending a propellant through a heat exchanger, where the heat gathered up from sunlight causes it to expand and produce thrust, like steam whistling out of a teakettle.
The technology is more fuel-efficient than traditional chemical propulsion — and faster-acting than solar electric propulsion, which uses solar arrays to turn sunlight into electricity to power an ion drive. Solar thermal propulsion nicely fills a niche between those two methods to move a spacecraft between orbits. But neither NASA nor the Air Force followed up on the concept.
“They didn’t abandon it for technical reasons,” Thornburg said. At the time, it just didn’t make economic or strategic sense to take the concept any further.
What’s changed?
“Lower launch costs, coupled with additive manufacturing, are the major unlocks to bring the tech to life, and make it affordable and in line with commercial development,” Thornburg said.
Thornburg argues that it’s the right time for Portal’s spacecraft to fill a gap in America’s national security posture on the high frontier. “There was no imperative for rapid movement on orbit in the 1990s,” he said. “Only recently have the threats from our adversaries highlighted the weaknesses in current electric propulsion systems, in that they have so little thrust and can’t enable rapid mobility.”
So, how did Thornburg hit upon the idea of turning a decades-old idea into reality?
The path to propulsion
Thornburg, who’s now 52 years old, has focused on making things fly for most of his career. It all started when he was a college student in Missouri in the early 1990s, earning his aerospace engineering degree with an ROTC scholarship from the Air Force. He recalled a conversation he had with an instructor who was an old F-4 fighter pilot.
“With my nearsightedness, I was out of the game from a pilot standpoint,” Thornburg said. “But he said, ‘Thornburg, if you can’t fly the planes, go be as close to them as you can.'”
Thornburg signed up for a program that fast-tracked him into an aircraft maintenance role. He traveled around the world with KC-135 cargo planes, supporting missions that included the NATO-led air campaign against Yugoslavia in 1999. During his time as a flight commander and aircraft maintenance officer at MacDill Air Force Base in Florida, “I had a couple of hundred enlisted people who worked hard to keep me out of trouble,” he said.
The Air Force is where he earned his master’s degree in aerospace engineering. “My adviser had a friend that worked at the Air Force Research Lab,” Thornburg recalled. “He called him and said, ‘The Air Force is about to send this guy to do something with airplanes, but I’m pretty sure he’s going to be disappointed if he can’t come out and work on rocket engines.'”
Sure enough, Thornburg was soon working on rocket propulsion development, including a project to create what’s known as a full-flow staged combustion cycle engine. “We made what people thought was not possible possible with that program,” Thornburg said.
In 2004, Thornburg left the Air Force to work on rocket propulsion systems at Exquadrum, Aerojet and NASA. Then, in 2011, he took a phone call from SpaceX’s billionaire founder, Elon Musk. “We talked for about an hour, hour and a half on the phone — and he said, ‘I’ve got a project I want to talk to you about,'” Thornburg said.
That project led to the development of SpaceX’s methane-fueled Raptor rocket engine, which leveraged the technology that Thornburg helped pioneer at the Air Force. “That was a wild ride, because that felt like about 15 or 20 years of experience in a five-year time period,” he recalled.
Jeff Thornburg strikes a pose in front of a test stand at NASA’s Stennis Space Center during his time as vice president of propulsion engineering at Stratolaunch. (Stratolaunch Systems Photo / 2018)
After five years at SpaceX, Thornburg needed to wind down. He decided to do some consulting at his home base in Huntsville, Alabama, also known as Rocket City. “About six months in, I’m like, I need a real job again,” he said. “And some friends of mine introduced me to, ultimately, Paul Allen. Paul called me and said, ‘Can you come out to my Seattle office?'”
The Microsoft co-founder and software billionaire enlisted Thornburg to become the head of rocket propulsion development for Stratolaunch, Allen’s space venture. Thornburg led the effort to create a liquid rocket engine known as the PGA — which stood for “Paul G. Allen.”
Unfortunately, Allen passed away in 2018, just one month after the engine was unveiled. Under new ownership, Stratolaunch pivoted to hypersonic testing, and the PGA project fell by the wayside. Once again, Thornburg and his family hunkered down in Huntsville.
Building a business
“I decided to start my first space company after Paul died,” Thornburg said. “I focused on hydrogen propulsion technology and solutions, kind of like what we were working on for Paul.”
That first company, Interstellar Technologies, started working on projects for NASA, Northrop Grumman and a couple of other customers. Then the pandemic hit. “The investors that were about to provide funding disappeared,” Thornburg said. “NASA went home, Northrop Grumman went home. And so I had to find my small team other jobs.”
Just as Thornburg was about to resign himself to riding out the pandemic in Alabama, Amazon’s recruiters called. They asked him to move to Seattle to run engineering and manufacturing for Project Kuiper, the satellite internet project that’s now known as Amazon Leo. “That’s ultimately what got us moved to Seattle,” Thornburg said.
His yearlong stint at Amazon was long enough to establish the process for building Project Kuiper’s two prototypes and the production-grade satellites that came after them. Then he took on engineering management roles at Agility Robotics and Commonwealth Fusion Systems.
That’s when Portal Space Systems took shape.
VIPs cut the ribbon at Portal Space Systems’ HQ in Bothell, Wash., in March 2025. From left: U.S. Rep. Suzan DelBene; Portal co-founders Prashaanth Ravindran, Jeff Thornburg and Ian Vorbach; and Bothell Mayor Mason Thompson. (GeekWire Photo / Alan Boyle)
To be fair, the seeds for Portal were planted back in 2016, just weeks after Thornburg left SpaceX. “Lawrence Livermore Lab had called and said, ‘We’re doing a seminar on the future of propulsion. Would you like to come be a speaker?'” he recalled. “I said, ‘Yes, what do you want me to talk about?’ They said, ‘We want you to tell us what the future of propulsion looks like.’ Oh my gosh, no pressure on that!”
As he did the research for his talk, he came across the idea of putting a nuclear reactor on a spacecraft, and using the concentrated heat from that reactor to blast a propellant through a thruster. The concept, known as nuclear thermal propulsion, seemed like a stretch — but then Thornburg had an uncommon thought.
“Can you concentrate solar energy to heat a thrust chamber and do the same thing?” Thornburg said. “You can. It’s not quite as effective as a nuclear reactor, for obvious reasons, but it’s all the same pieces. … Now I don’t have to wait on a low-cost, low-weight, space-rated nuclear reactor that doesn’t exist yet.”
Thornburg mulled over the idea for years. “I was thinking about Portal, and I was starting the beginnings of Portal in 2021, but I still had to pay the bills,” he said. For a couple of years, he worked during the day at Agility Robotics and Commonwealth Fusion — and spent nights and weekends laying the groundwork for the startup.
“When Portal could really start to stand on its own, as we started to win over the Defense Department, that’s when I made the switch with all of my time focused on what was going on in Portal,” Thornburg said. In April 2024, the startup emerged from stealth and announced it had received more than $3 million in funding from the Defense Department and the Space Force.
The road ahead
Portal’s flagship vehicle is called Supernova. It’s a rapid-transorbital, multi-mission vehicle that should be capable of moving itself and its payloads from one orbit to another — even from low Earth orbit to geostationary Earth orbit, more than 20,000 miles higher up. And it should be able to do that within hours or a day, rather than the weeks or months that are typically required.
The spacecraft itself will be about the size of a restaurant refrigerator. To concentrate sunlight on its heat exchanger and thruster system, Supernova will use sheets of reflective material that can unfold to a width of roughly 55 feet. Ammonia will serve as the propellant. The 3D-printed heat exchanger thruster, dubbed Flare, was successfully tested earlier this year.
Next year’s orbital demonstration will involve putting an instrument package known as Mini-Nova, which is about the size of a tissue box, on a satellite platform that’s due for launch on a SpaceX rideshare mission. The demonstration is meant to validate Supernova’s system design.
Portal CEO Jeff Thornburg holds a Mini-Nova model that carries the signatures of Thornburg and teammates who worked on the project. (GeekWire Photo / Alan Boyle)
In late 2026, Portal plans to send up a free-flying spacecraft called Starburst, which will be equipped with thrusters powered by an electrothermal heating system. Starburst won’t be as powerful as Supernova, but it will provide Portal’s customers with an early option for rapid maneuverability in orbit. If next year’s test goes well, Starburst is expected to start taking on customer missions in 2027.
Throughout Portal’s formative years, Thornburg has worked with fellow members of the “small team” he assembled at Interstellar Technologies. Both of Portal’s other co-founders — chief operating officer Ian Vorbach and engineering vice president Prashaanth Ravindran — crossed paths with Thornburg at Interstellar, and at Stratolaunch before that.
Vorbach, whose background includes startup experience as well as engineering experience, said Portal’s business model has been fine-tuned to make sure it addresses the needs of its target market. He and Thornburg identified the U.S. military’s need for tactical responsiveness in space as the top priority.
Portal Space Systems is working on two types of orbital transfer vehicles: Supernova, which uses large mirrors to concentrate sunlight on a heat exchanger / thruster system (at left); and Starburst (at right), a smaller spacecraft that leverages many of the technologies developed for Supernova. (Portal Space Systems Illustrations)
“What happens a lot in the space industry is that you have incredibly technical, talented people who have a technology that provides some very unique performance, and then they build it, and it turns out that performance isn’t needed,” Vorbach said. “There’s got to be a reason to bring that innovation to market.”
Vorbach is grateful for Thornburg’s leadership. “We work very long hours, but I think Jeff does a great job of making sure people know that they’re valued,” he said. “I appreciate that, and I think it’s why we, fortunately, are able to hire great talent from the places he’s come from, whether it’s SpaceX or Kuiper.”
Ravindran, who worked at Jeff Bezos’ Blue Origin space venture before taking a founder’s role at Portal, agreed with that assessment. “It’s always amazing to have someone like Jeff out there, because he’s come up the engineering road to realize our pain points as well, and he doesn’t try to hold us to unfair standards,” he said. “That way, we are not set up for failure.”
Stan Shull, a space industry analyst at Bellevue, Wash.-based Alliance Velocity, gives Portal high marks. “In space terms, a highly maneuverable satellite is said to have high delta-V,” he told GeekWire in an email. “Portal, as a company, feels high delta-V too.”
Thornburg’s experience and expertise are big factors behind Portal’s rapid progress, Shull said. “He’s very knowledgeable about national security issues and is a straight shooter about the growing threat environment in orbit,” he said. “It’s no surprise the Space Force is among the many customers interested in what the company is up to.”
What will Portal be up to next? Looking long-term, Thornburg is intrigued by the quantum frontier. “I think there are some very interesting things happening in our understanding of quantum physics that will have propulsion applications, that won’t look like propulsion as we know it right now,” he said. “If we could fold spacetime in clever ways … there’s been plenty of writing about that.”
But when he takes a more realistic look at what could happen in his lifetime, Thornburg can’t stop thinking about nuclear propulsion. “Our Supernova spacecraft will have a version that will leverage a nuclear reactor at some point. That was always the going-in position,” he said.
The way Thornburg sees it, the nuclear option will revolutionize spacecraft — and expand humanity’s reach on the final frontier while we figure out how to fold spacetime.
“Nuclear thermal will get us further into the solar system, and this Earth-moon-Mars becomes our backyard,” he said. “But, you know, for my 12-year-old version of myself, that’s not enough.”
A worker installs an Amazon Leo antenna at a Hunt Energy facility. (Amazon Photo)
Amazon Leo — the satellite internet service provider formerly known as Project Kuiper — says it has started shipping its top-of-the-line terminals to select customers for testing.
Today’s announcement serves as further evidence that, after years of preparation, Amazon is closing in on providing space-based, high-speed internet access to customers around the world. Amazon Leo is still far behind SpaceX’s Starlink satellite network, but the Seattle-based tech giant has lined up a wide array of partners to help get its network off the ground.
The top tier of Amazon Leo’s global broadband service, known as Leo Ultra, will offer download speeds of up to 1 gigabit per second and upload speeds of up to 400 megabits per second, Amazon said today in a blog post. That’s the first time Amazon has shared details about uplink performance.
During an enterprise preview, some of Amazon’s business customers will begin testing the network using production-grade hardware and software. Amazon said the preview will give its Leo teams “an opportunity to collect more customer feedback and tailor solutions for specific industries ahead of a broader rollout.”
“Amazon Leo represents a massive opportunity for businesses operating in challenging environments,” said Chris Weber, vice president of consumer and enterprise business for Amazon Leo. “From our satellite and network design to our portfolio of high-performance phased array antennas, we’ve designed Amazon Leo to meet the needs of some of the most complex business and government customers out there, and we’re excited to provide them with the tools they need to transform their operations, no matter where they are in the world.”
The 20-by-30-inch antennas for the Leo Ultra terminals are powered by a custom silicon chip that’s been optimized for applications including videoconferencing, real-time monitoring and cloud computing. The service can connect directly to Amazon Web Services as well as other cloud and on-premise networks, allowing customers to move data securely from remote assets to private networks without touching the public internet, Amazon said.
In addition to Leo Ultra, Amazon will offer two lower tiers of service: Leo Nano, which will use a compact 7-inch antenna to provide download speeds of up to 100 Mbps; and Leo Pro, which will use a standard 11-inch antenna supporting download speeds of up to 400 Mbps.
Amazon said it’s shipping Leo Ultra and Leo Pro units to select companies for the preview program. “We’ll expand the program to more customers as we add coverage and capacity to the network,” the company said. Pricing details have not yet been disclosed.
Photos released today by Amazon show installations of Leo hardware at Hunt Energy facilities, where the network will provide high-speed connectivity for Hunt’s infrastructure assets.
“Hunt Energy Company operates a wide range of energy assets across the globe, and this requires exceptional connectivity to be able to operate, maintain and deliver our products,” said Hunter Hunt, CEO of Hunt Energy Holdings and board chairman of Hunt Energy’s Skyward division. “The combination of Amazon Leo bandwidth capabilities and the secure private link is exactly what we needed.”
JetBlue intends to use Amazon Leo to boost the low-cost airline’s in-flight Wi-Fi service. “Having collaborated with Amazon before, we knew Amazon Leo would share our passion for customer-first innovation,” JetBlue President Marty St. George said. “Choosing Amazon Leo reflects our commitment to staying ahead of what customers want most when traveling, such as fast, reliable performance and flexibility in our free in-flight Wi-Fi.”
Amazon Leo plans to offer high-speed satellite internet service to millions of people around the world, as well as to commercial ventures and government entities. But it still has a long way to go to follow through on that plan.
Over the past year, 153 of Amazon’s production-grade satellites have been launched into low Earth orbit (also known as LEO, an acronym that inspired the newly announced name of the service). Amazon plans to fill out its first-generation constellation with more than 3,000 additional satellites. Under the terms of its license from the Federal Communications Commission, half of those satellites are supposed to be launched by mid-2026. It seems likely that Amazon will seek an extension of that deadline.
The United States Army has reopened its effort to identify companies capable of developing ground-based counter surveillance and reconnaissance systems designed to address space-based threats, issuing a new Sources Sought Notice under number W58SFN-25-R-0001-1. The notice was reposted on November 21 and remains open until December 5. According to the Army Contracting Command at Redstone […]
BlackSky, a U.S.-based space intelligence company headquartered in Herndon, Virginia, released a detailed satellite image on November 19 showing the United Arab Emirates’ Al Fursan aerobatic team flying their newly acquired Chinese-built L-15 aircraft during the 2025 Dubai Airshow. The image, captured by one of BlackSky’s real-time Earth observation satellites, shows multiple L-15 trainers in […]
In this artist’s concept, the ocean-observing satellite Sentinel-6B orbits Earth with its deployable solar panels extended.
Credit: NASA/JPL-Caltech
NASA will provide live coverage of prelaunch and launch activities for Sentinel-6B, an international mission delivering critical sea level and ocean data to protect coastal infrastructure, improve weather forecasting, and support commercial activities at sea.
Launch is targeted at 12:21 a.m. EST, Monday, Nov. 17 (9:21 p.m. PST, Sunday, Nov. 16) aboard a SpaceX Falcon 9 rocket from Space Launch Complex 4 East at Vandenberg Space Force Base in California.
Watch coverage beginning at 11:30 p.m. EST (8:30 p.m. PST) on NASA+, Amazon Prime, and more. Learn how to watch NASA content through a variety of platforms, including social media.
The Sentinel-6B mission continues a decades-long effort to monitor global sea level and ocean conditions using precise radar measurements from space. Since the early 1990s, satellites launched by NASA and domestic and international partners have collected precise sea level data. The launch of Sentinel-6B will extend this dataset out to nearly four decades.
NASA’s mission coverage is as follows (all times Eastern and subject to change based on real-time operations):
Saturday, Nov. 15
4 p.m. – NASA Prelaunch Teleconference on International Ocean Tracking Mission
Karen St. Germain, director, Earth Science Division, NASA Headquarters in Washington
Pierrik Veuilleumier, Sentinel-6B project manager, ESA (European Space Agency)
Parag Vaze, Sentinel-6B project manager, NASA’s Jet Propulsion Laboratory in Pasadena, California
Tim Dunn, senior launch director, Launch Services Program, NASA’s Kennedy Space Center in Florida
Julianna Scheiman, director, NASA Science Missions, SpaceX
1st Lt. William Harbin, launch weather officer, U.S. Air Force
11:30 p.m. – Launch coverage begins on NASA+, Amazon Prime, and more.
Audio-only coverage
Audio-only of the launch coverage will be carried on the NASA “V” circuits, which may be accessed by dialing 321-867-1220 or -1240. On launch day, “mission audio” countdown activities without NASA+ launch commentary will be carried at 321-867-7135.
NASA website launch coverage
Launch day coverage of the mission will be available on the agency’s website. Coverage will include links to live streaming and blog updates beginning no earlier than 11 p.m. EST, Nov. 16, as the countdown milestones occur. Streaming video and photos of the launch will be accessible on demand shortly after liftoff. Follow countdown coverage on NASA’s Sentinel-6/Jason-CS blog.
For questions about countdown coverage, contact the NASA Kennedy newsroom at: 321-867-2468.
Attend launch virtually
Members of the public can register to attend this launch virtually. NASA’s virtual guest program for this mission includes curated launch resources, notifications about related opportunities or changes, and a stamp for the NASA virtual guest passport following launch.
Watch, engage on social media
Let people know you’re watching the mission on X, Facebook, and Instagram by following and tagging these accounts:
Sentinel-6B is the second of twin satellites in the Copernicus Sentinel-6/Jason-CS (Continuity of Service) mission, a collaboration among NASA, ESA, EUMETSAT (European Organisation for the Exploitation of Meteorological Satellites), and the National Oceanic and Atmospheric Administration (NOAA). The first satellite in the mission, Sentinel-6 Michael Freilich, launched in November 2020. The European Commission contributed funding support, while France’s space agency CNES (Centre National d’Études Spatiales) provided technical expertise. The mission also marks the first international involvement in Copernicus, the European Union’s Earth Observation Programme.
Canary’s fund is set to be the sixth single-crypto ETF if it launches.
The fund’s official website has gone live ahead of the anticipated debut.
Past ETFs launched during the government shutdown used automatic effectiveness rules.
The cryptocurrency market is poised for a new addition with the likely debut of the first spot XRP exchange-traded fund, issued by Canary Capital.
On Wednesday, Nasdaq confirmed it had accepted the Form 8-A filing for the Canary XRP ETF, under the ticker XRPC, signalling formal readiness to list the asset.
While the announcement stirred excitement among ETF watchers, the fund still lacks the US Securities and Exchange Commission’s final approval to begin trading.
This has left its launch in limbo, even as industry observers anticipate a possible debut on Thursday.
Canary’s ETF becomes the sixth single-asset crypto fund to reach this milestone following earlier approvals for Bitcoin, Ether, Solana, Litecoin and Hedera.
However, this fund’s progression highlights a more complex regulatory backdrop, influenced by recent shifts in SEC processes during the US government shutdown.
Certification clears Nasdaq listing, but trading awaits
Nasdaq formally notified the SEC that it had received and filed the Form 8-A for Canary’s XRP ETF.
Bloomberg’s ETF analyst Eric Balchunas shared the update on X, stating that “The official listing notice for XRPC has arrived from Nasdaq.”
Despite this progress, the ETF has not yet received the green light to commence trading. The letter issued by Nasdaq confirmed approval of the listing but did not equate to SEC authorisation.
Observers have clarified that the letter is a procedural step, part of the process by which the exchange joins the registrant’s request for the fund to become effective.
Some in the crypto community highlighted the difference, noting that the Nasdaq letter does not declare the fund effective but only acknowledges the listing certification.
The SEC has not issued an effectiveness order, which means trading cannot begin until that step is completed.
Canary’s XRP fund joins crypto ETF roster
Following the Nasdaq filing, Canary Capital launched its official website for the ETF.
Nate Geraci, president of NovaDius Wealth Management, posted about the development, signalling that Canary was likely to be the first to market with an XRP-backed ETF.
If approved, the XRPC ETF will join the growing roster of single-asset crypto ETFs now available to investors. These include Bitcoin, Ether, Solana, Litecoin and Hedera.
Eleanor Terrett of Crypto America also indicated on X that Nasdaq had cleared XRPC for a market open launch, which further raised expectations for an imminent debut. However, the fund cannot proceed to trading without confirmation from the SEC.
Canary’s ETF launch coincides with the recent end of the longest US government shutdown in history.
On Wednesday, President Donald Trump signed legislation that officially reopened government operations.
During the shutdown, ETFs for Solana, Litecoin and Hedera began trading under automatic effectiveness provisions.
These mechanisms allowed trading to begin without active SEC approval during periods when regulatory processes were delayed.
This approach was not used in earlier launches of Bitcoin and Ether ETFs, which both started trading only after formal authorisation from the regulator.
It remains unclear which approach the XRPC fund will follow.
Without a current effectiveness order, Canary’s ETF may be subject to additional delays, unless it qualifies under the same automatic provisions used during the shutdown period.
Launch window narrows as market watches SEC decision
Although Nasdaq has certified the listing and Canary’s infrastructure appears ready, the fate of the XRPC ETF ultimately depends on the SEC.
Canary’s website launch and market interest reflect growing anticipation, but trading cannot begin until regulators give their final approval.
Although Nasdaq certified the listing and Canary Capital launched its website, the fund did not begin trading immediately after 28 October, the initially anticipated date.
Without a final effectiveness order from the SEC, the ETF remains in limbo. Until that regulatory step is completed, XRPC cannot begin trading, and the market continues to await confirmation.
Apple introduced the ability to send and receive text messages via satellite last year, but now it wants to expand these features with 5G support and more.
Manny Mora, CEO and president of Kymeta. (Kymeta Photo)
Redmond, Wash.-based Kymeta, a mobile satellite communications company, announced Manny Mora as its new president and CEO, effective immediately.
The company, founded in 2012 with backing from Microsoft co-founder Bill Gates, is ramping up efforts to provide services across the U.S. Department of Defense and allied militaries.
Mora spent nearly 40 years with General Dynamics Mission Systems, leading the Virginia-based company’s Space and Intelligence Systems business. In this role he supported the company’s partnerships with DOD, the intelligence community, the U.S. Department of Homeland Security and others.
“As the defense community modernizes its command-and-control infrastructure, Kymeta is uniquely positioned to deliver mobile SATCOM solutions that perform in the most demanding environments,” said Nicole Piasecki, the executive chair of Kymeta’s board of directors, in a statement.
“Manny Mora brings the operational depth and strategic clarity to scale our impact and strengthen our role as a trusted partner to national security customers,” she added.
Kymeta is riding tailwinds from an aerospace and defense sector being reshaped by advances in software systems, autonomous platforms, satellite communications, and AI.
Kymeta was recently chosen by the U.S. Army as the multi-orbit satellite communications provider for its Next Generation Command and Control pilot. The initiative will use the company’s Osprey u8 terminal technology to provide connectivity for military operators.
“Our breakthrough technology is already transforming how defense and government customers communicate across domains,” Mora said in a statement.
In taking the role, Mora replaces Rick Bergman, a former executive vice president at semiconductor giant AMD, who took the helm in April 2024.
Kymeta makes use of an innovative type of technology called metamaterials to build antennas that can be steered by software, without moving parts. Its hybrid cellular-satellite terminals enable communications in hard-to-reach areas — an application that’s been of particular interest to defense customers.
The company also provides technology for emergency services, maritime operations, wildfire-fighting and other applications.
Kymeta raised $84 million in 2022. Total funding to date is nearly $400 million.
An artist’s conception shows Portal’s Starburst spacecraft in the foreground with its Supernova space vehicle (and Earth) in the background. (Portal Space Systems Illustration)
Bothell, Wash.-based Portal Space Systems has added another spacecraft to its product line: a rapid-maneuverability vehicle called Starburst, which takes advantage of technologies that are being developed for its more powerful Supernova satellite platform.
Starburst-1 is due to star in Portal’s first free-flying space mission with live payloads a year from now, starting with a launch on SpaceX’s Transporter-18 satellite rideshare mission. Portal says the mission will demonstrate rendezvous and proximity operations, rapid retasking and rapid orbital change for national security and commercial applications.
Portal says Starburst and the larger Supernova platform will share many manufacturing processes and core systems, including the thrusters being developed for Supernova’s reaction control system. Like Supernova, Starburst will use heated ammonia as a propellant.
“Our strategy is to deliver what customers need now and accelerate what they’ll need next,” Portal CEO Jeff Thornburg said today in a news release. “Starburst gives operators a maneuverable bus that supports proliferated architectures in the orbit that matters to them. Supernova brings the trans-orbital reach. Flying Starburst-1 in 2026 lets us field capability quickly and advance the shared systems that raise confidence for Supernova’s 2027 debut.”
Starburst-1 is to be deployed into a sun-synchronous orbit for a one-year primary mission. Portal’s target for on-orbit maneuverability is 1 kilometer per second of total delta-v, which translates to a change in velocity amounting to more than 2,200 mph.
The ESPA-class spacecraft will carry two hosted payloads: a stereo video monitoring system provided by California-based TRL11; and a superconducting magnetic actuator provided by New Zealand-based Zenno Astronautics. Zenno plans to demonstrate the magnet technology that it has developed for satellite positioning and precision interactions between satellites.
In an email, Thornburg told GeekWire that “the Starburst-1 mission is completely funded by Portal to reduce risk and prove capability for our customers ahead of future contracted missions.” Portal plans to offer Starburst for customer missions starting in 2027.
In our previous article we dissected penetration testing techniques for IBM z/OS mainframes protected by the Resource Access Control Facility (RACF) security package. In this second part of our research, we delve deeper into RACF by examining its decision-making logic, database structure, and the interactions between the various entities in this subsystem. To facilitate offline analysis of the RACF database, we have developed our own utility, racfudit, which we will use to perform possible checks and evaluate RACF configuration security. As part of this research, we also outline the relationships between RACF entities (users, resources, and data sets) to identify potential privilege escalation paths for z/OS users.
This material is provided solely for educational purposes and is intended to assist professionals conducting authorized penetration tests.
RACF internal architecture
Overall role
z/OS access control diagram
To thoroughly analyze RACF, let’s recall its role and the functions of its components within the overall z/OS architecture. As illustrated in the diagram above, RACF can generally be divided into a service component and a database. Other components exist too, such as utilities for RACF administration and management, or the RACF Auditing and Reporting solution responsible for event logging and reporting. However, for a general understanding of the process, we believe these components are not strictly necessary. The RACF database stores information about z/OS users and the resources for which access control is configured. Based on this data, the RACF service component performs all necessary security checks when requested by other z/OS components and subsystems. RACF typically interacts with other subsystems through the System Authorization Facility (SAF) interface. Various z/OS components use SAF to authorize a user’s access to resources or to execute a user-requested operation. It is worth noting that while this paper focuses on the operating principle of RACF as the standard security package, other security packages like ACF2 or Top Secret can also be used in z/OS.
Let’s consider an example of user authorization within the Time Sharing Option (TSO) subsystem, the z/OS equivalent of a command line interface. We use an x3270 terminal emulator to connect to the mainframe. After successful user authentication in z/OS, the TSO subsystem uses SAF to query the RACF security package, checking that the user has permission to access the TSO resource manager. The RACF service queries the database for user information, which is stored in a user profile. If the database contains a record of the required access permissions, the user is authorized, and information from the user profile is placed into the address space of the new TSO session within the ACEE (Accessor Environment Element) control block. For subsequent attempts to access other z/OS resources within that TSO session, RACF uses the information in ACEE to make the decision on granting user access. SAF reads data from ACEE and transmits it to the RACF service. RACF makes the decision to grant or deny access, based on information in the relevant profile of the requested resource stored in the database. This decision is then sent back to SAF, which processes the user request accordingly. The process of querying RACF repeats for any further attempts by the user to access other resources or execute commands within the TSO session.
Thus, RACF handles identification, authentication, and authorization of users, as well as granting privileges within z/OS.
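To make this flow more concrete, the minimal Go sketch below mimics the access decision just described. The types and example data are invented for illustration (they are not z/OS, SAF, or RACF structures), and a real check involves more inputs, such as conditional access entries, global settings, and attributes like OPERATIONS.

package main

import "fmt"

// ACEE mirrors the cached user context that RACF builds at logon.
type ACEE struct {
	UserID string
	Groups []string
}

// Profile is a greatly simplified resource profile as read from the RACF DB.
type Profile struct {
	Name   string
	UACC   string            // universal access authority: the profile's default level
	Access map[string]string // explicit access list: user or group -> level
}

// atLeast compares access levels in RACF's usual ordering.
func atLeast(have, want string) bool {
	rank := map[string]int{"NONE": 0, "READ": 1, "UPDATE": 2, "CONTROL": 3, "ALTER": 4}
	return rank[have] >= rank[want]
}

// checkAccess mimics the decision described above: consult the profile's
// access list for the user, then for the user's groups, then fall back to UACC.
func checkAccess(acee ACEE, p Profile, wanted string) bool {
	if lvl, ok := p.Access[acee.UserID]; ok {
		return atLeast(lvl, wanted)
	}
	for _, g := range acee.Groups {
		if lvl, ok := p.Access[g]; ok {
			return atLeast(lvl, wanted)
		}
	}
	return atLeast(p.UACC, wanted)
}

func main() {
	acee := ACEE{UserID: "TESTUSR", Groups: []string{"TSOUSERS"}}
	tso := Profile{Name: "TSOAUTH", UACC: "NONE", Access: map[string]string{"TSOUSERS": "READ"}}
	fmt.Println(checkAccess(acee, tso, "READ")) // true: granted via group membership
}

The same pattern repeats for every subsequent resource access within the TSO session, with the ACEE supplying the user side of the check so that only the resource profile has to be consulted.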
RACF database components
As discussed above, access decisions for resources within z/OS are made based on information stored in the RACF database. This data is kept in the form of records, or as RACF terminology puts it, profiles. These contain details about specific z/OS objects. While the RACF database can hold various profile types, four main types are especially important for security analysis:
User profile holds user-specific information such as logins, password hashes, special attributes, and the groups the user belongs to.
Group profile contains information about a group, including its members, owner, special attributes, list of subgroups, and the access permissions of group members for that group.
Data set profile stores details about a data set, including access permissions, attributes, and auditing policy.
General resource profile provides information about a resource or resource class, such as resource holders, their permissions regarding the resource, audit policy, and the resource owner.
The RACF database contains numerous instances of these profiles. Together, they form a complex structure of relationships between objects and subjects within z/OS, which serves as the basis for access decisions.
Logical structure of RACF database profiles
Each profile is composed of one or more segments. Different profile types utilize different segment types.
For example, a user profile instance may contain the following segments:
BASE: core user information in RACF (mandatory segment);
TSO: user TSO-session parameters;
OMVS: user session parameters within the z/OS UNIX subsystem;
KERB: data related to the z/OS Network Authentication Service, essential for Kerberos protocol operations;
and others.
User profile segments
Different segment types are distinguished by the set of fields they store. For instance, the BASE segment of a user profile contains the following fields:
PASSWORD: the user’s password hash;
PHRASE: the user’s password phrase hash;
LOGIN: the user’s login;
OWNER: the owner of the user profile;
AUTHDATE: the date of the user profile creation in the RACF database;
and others.
The PASSWORD and PHRASE fields are particularly interesting for security analysis, and we will dive deeper into these later.
RACF database structure
It is worth noting that the RACF database is stored as a specialized data set with a specific format. Grasping this format is very helpful when analyzing the DB and mapping the relationships between z/OS objects and subjects.
As discussed in our previous article, a data set is the mainframe equivalent of a file, composed of a series of blocks.
RACF DB structure
The image above illustrates the RACF database structure, detailing the data blocks and their offsets. From the RACF DB analysis perspective, and when subsequently determining the relationships between z/OS objects and subjects, the most critical blocks include:
The header block, or inventory control block (ICB), which contains various metadata and pointers to all other data blocks within the RACF database. By reading the ICB, you gain access to the rest of the data blocks.
Index blocks, which form a singly linked list that contains pointers to all profiles and their segments in the RACF database – that is, to the information about all users, groups, data sets, and resources.
Templates: a crucial data block containing templates for all profile types (user, group, data set, and general resource profiles). The templates list fields and specify their format for every possible segment type within the corresponding profile type.
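The Go declarations below are a rough model of how these blocks relate to one another. The names, shapes, and fields are illustrative only; they are neither IBM definitions nor racfudit's actual types, and the real on-disk layouts and offsets vary between RACF versions.

package racfdb

// ICB models the header block: the entry point into the database.
type ICB struct {
	TemplateOffset  int64 // where the profile templates live
	FirstIndexBlock int64 // head of the index block chain
}

// IndexEntry points at one profile segment somewhere in the data set.
type IndexEntry struct {
	ProfileName   string // e.g. a user ID or a data set name
	ProfileType   string // USER, GROUP, DATASET or GENERAL
	SegmentOffset int64  // where the raw segment bytes start
}

// IndexBlock is one element of the singly linked list of index blocks.
type IndexBlock struct {
	Entries []IndexEntry
	Next    int64 // offset of the next index block, 0 for the last one
}

// FieldDef describes one field of a segment, as listed in a template.
type FieldDef struct {
	Name string // e.g. PASSWORD, PHRASE, OWNER, UNIVACS
	Type byte   // field type as encoded in the template
	Len  int    // fixed length, or 0 for variable-length fields
}

// Template describes how to parse every segment of one profile type.
type Template struct {
	ProfileType string
	Segments    map[string][]FieldDef // segment name (BASE, TSO, ...) -> field layout
}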
Upon dissecting the RACF database structure, we identified the need for a utility capable of extracting all relevant profile information from the DB, regardless of its version. This utility would also need to save the extracted data in a convenient format for offline analysis. Performing this type of analysis provides a comprehensive picture of the relationships between all objects and subjects for a specific z/OS installation, helping uncover potential security vulnerabilities that could lead to privilege escalation or lateral movement.
Utilities for RACF DB analysis
At the previous stage, we defined the following functional requirements for an RACF DB analysis utility:
The ability to analyze RACF profiles offline without needing to run commands on the mainframe
The ability to extract exhaustive information about RACF profiles stored in the DB
Compatibility with various RACF DB versions
Intuitive navigation of the extracted data and the option to present it in various formats: plaintext, JSON, SQL, etc.
Overview of existing RACF DB analysis solutions
We started by analyzing off-the-shelf tools and evaluating their potential for our specific needs:
Racf2john extracts user password hashes (from the PASSWORD field) encrypted with the DES and KDFAES algorithms from the RACF database. While this was a decent starting point, we needed more than just the PASSWORD field; specifically, we also needed to retrieve content from other profile fields like PHRASE.
Racf2sql takes an RACF DB dump as input and converts it into an SQLite database, which can then be queried with SQL. This is convenient, but the conversion process risks losing data critical for z/OS security assessment and identifying misconfigurations. Furthermore, the tool requires a database dump generated by the z/OS IRRDBU00 utility (part of the RACF security package) rather than the raw database itself.
IRRXUTIL allows querying the RACF DB to extract information. It is also part of the RACF security package. It can be conveniently used with a set of scripts written in REXX (an interpreted language used in z/OS). However, these scripts demand elevated privileges (access to one or more IRR.RADMIN.** resources in the FACILITY resource class) and must be executed directly on the mainframe, which is unsuitable for the task at hand.
Racf_debug_cleanup.c directly analyzes a RACF DB from a data set copy. A significant drawback is that it only parses BASE segments and outputs results in plaintext.
As you can see, existing tools don’t satisfy our needs. Some utilities require direct execution on the mainframe. Others operate on a data set copy and extract incomplete information from the DB. Moreover, they rely on hardcoded offsets and signatures within profile segments, which can vary across RACF versions. Therefore, we decided to develop our own utility for RACF database analysis.
Introducing racfudit
We have written our own platform-independent utility racfudit in Golang and tested it across various z/OS versions (1.13, 2.02, and 3.1). Below, we delve into the operating principles, capabilities and advantages of our new tool.
Extracting data from the RACF DB
To analyze RACF DB information offline, we first needed a way to extract structured data. We developed a two-stage approach for this:
The first stage involves analyzing the templates stored within the RACF DB. Each template describes a specific profile type, its constituent segments, and the fields within those segments, including their type and size. This allows us to obtain an up-to-date list of profile types, their segments, and associated fields, regardless of the RACF version.
In the second stage, we traverse all index blocks to extract every profile with its content from the RACF DB. These collected profiles are then processed and parsed using the templates obtained in the first stage.
The first stage is crucial because RACF DB profiles are stored as unstructured byte arrays. The templates are what define how each specific profile (byte array) is processed based on its type.
Thus, we defined the following algorithm to extract structured data.
Extracting data from the RACF DB using templates
We offload the RACF DB from the mainframe and read its header block (ICB) to determine the location of the templates.
Based on the template for each profile type, we define an algorithm for structuring specific profile instances according to their type.
We use the content of the header block to locate the index blocks, which store pointers to all profile instances.
We read all profile instances and their segments sequentially from the list of index blocks.
For each profile instance and its segments we read, we apply the processing algorithm based on the corresponding template.
All processed profile instances are saved in an intermediate state, allowing for future storage in various formats, such as plaintext or SQLite.
The advantage of this approach is its version independence. Even if templates and index blocks change their structure across RACF versions, our utility will not lose data because it dynamically determines the structure of each profile type based on the relevant template.
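The walk itself can be sketched as a second file in the same hypothetical package from the earlier model. The low-level readers are left as stubs because their bodies depend on the on-disk format and RACF version, which racfudit resolves dynamically from the templates.

package racfdb

// Stubs standing in for the version-dependent, low-level readers.
func readICB(db []byte) ICB                                           { return ICB{} }
func readTemplates(db []byte, icb ICB) map[string]Template            { return nil }
func readIndexBlock(db []byte, off int64) IndexBlock                  { return IndexBlock{} }
func parseSegment(db []byte, off int64, t Template) map[string]string { return nil }

// extractProfiles returns profile name -> field name -> value.
func extractProfiles(db []byte) map[string]map[string]string {
	icb := readICB(db)                  // stage 1: locate the templates via the header block
	templates := readTemplates(db, icb) // one template per profile type

	out := make(map[string]map[string]string)
	for off := icb.FirstIndexBlock; off != 0; { // stage 2: walk the index block chain
		blk := readIndexBlock(db, off)
		for _, e := range blk.Entries {
			// the template for this profile type says how to cut the raw
			// segment bytes into named fields (PASSWORD, OWNER, UNIVACS, ...)
			out[e.ProfileName] = parseSegment(db, e.SegmentOffset, templates[e.ProfileType])
		}
		off = blk.Next
	}
	return out
}

Everything version-specific sits behind readTemplates and parseSegment, which is what keeps the approach working across RACF versions.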
Analyzing extracted RACF DB information
Our racfudit utility can present collected RACF DB information as an SQLite database or a plaintext file.
RACF DB information as an SQLite DB (top) and text data (bottom)
Using SQLite, you can execute SQL queries to identify misconfigurations in RACF that could be exploited for privilege escalation, lateral movement, bypassing access controls, or other pentesting tactics. It is worth noting that the set of SQL queries used for processing information in SQLite can be adapted to validate current RACF settings against security standards and best practices. Let’s look at some specific examples of how to use the racfudit utility to uncover security issues.
Collecting password hashes
One of the primary goals in penetration testing is to obtain a list of administrators and a way to log in with their credentials. This can be useful for maintaining persistence on the mainframe, moving laterally to other mainframes, or even pivoting to servers running different operating systems. Administrators are typically found in the SYS1 group and its subgroups. The example below shows a query to retrieve hashes of passwords (PASSWORD) and password phrases (PHRASE) for privileged users in the SYS1 group.
select ProfileName,PHRASE,PASSWORD,CONGRPNM from USER_BASE where CONGRPNM LIKE "%SYS1%";
Of course, to log in to the system, you need to crack these hashes to recover the actual passwords. We cover that in more detail below.
Searching for inadequate UACC control in data sets
The universal access authority (UACC) defines the default access permissions to the data set. This parameter specifies the level of access for all users who do not have specific access permissions configured. Insufficient control over UACC values can pose a significant risk if elevated access permissions (UPDATE or higher) are set for data sets containing sensitive data or for APF libraries, which could allow privilege escalation. The query below helps identify data sets with default ALTER access permissions, which allow users to read, delete and modify the data set.
select ProfileName, UNIVACS from DATASET_BASE where UNIVACS LIKE "1%";
The UACC field is not limited to data set profiles; it is also found in other profile types. Weak control over this field’s configuration can give a penetration tester access to resources.
RACF profile relationships
As mentioned earlier, various RACF entities have relationships. Some are explicitly defined; for example, a username might be listed in a group profile within its member field (USERID field). However, there are also implicit relationships. For instance, if a user group has UPDATE access to a specific data set, every member of that group implicitly has write access to that data set. This is a simple example of implicit relationships. Next, we delve into more complex and specific relationships within the RACF database that a penetration tester can exploit.
RACF profile fields
A deep dive into RACF internal architecture reveals that misconfigurations of access permissions and other attributes for various RACF entities can be difficult to detect and remediate in some scenarios. These seemingly minor errors can be critical, potentially leading to mainframe compromise. The explicit and implicit relationships within the RACF database collectively define the mainframe’s current security posture. As mentioned, each profile type in the RACF database has a unique set of fields and attributes that describe how profiles relate to one another. Based on these fields and attributes, we have compiled lists of key fields that help build and analyze relationship chains.
User profile fields
SPECIAL: indicates that the user has privileges to execute any RACF command and grants them full control over all profiles in the RACF database.
OPERATIONS: indicates whether the user has authorized access to all RACF-protected resources of the DATASET, DASDVOL, GDASDVOL, PSFMPL, TAPEVOL, VMBATCH, VMCMD, VMMDISK, VMNODE, and VMRDR classes. While actions for users with this field specified are subject to certain restrictions, in a penetration testing context the OPERATIONS field often indicates full data set access.
AUDITOR: indicates whether the user has permission to access audit information.
AUTHOR: the creator of the user, who has certain privileges over the user, such as the ability to change their password.
REVOKE: indicates whether the user can log in to the system.
Password TYPE: specifies the hash type (DES or KDFAES) for passwords and password phrases. This field is not natively present in the user profile; it is derived from how the password and password phrase values are stored.
Group-SPECIAL: indicates whether the user has full control over all profiles within the scope defined by the group or groups field. This is a particularly interesting field that we explore in more detail below.
Group-OPERATIONS: indicates whether the user has authorized access to all RACF-protected resources of the DATASET, DASDVOL, GDASDVOL, PSFMPL, TAPEVOL, VMBATCH, VMCMD, VMMDISK, VMNODE and VMRDR classes within the scope defined by the group or groups field.
Group-AUDITOR: indicates whether the user has permission to access audit information within the scope defined by the group or groups field.
CLAUTH (class authority): allows the user to create profiles within the specified class or classes. This field enables delegation of management privileges for individual classes.
GROUPIDS: contains a list of groups the user belongs to.
UACC (universal access authority): defines the UACC value for new profiles created by the user.
Group profile fields
UACC (universal access authority): defines the UACC value for new profiles that the user creates when connected to the group.
OWNER: the creator of the group. The owner has specific privileges in relation to the current group and its subgroups.
USERIDS: the list of users within the group. The order is essential.
USERACS: the list of group members with their respective permissions for access to the group. The order is essential.
SUPGROUP: the name of the superior group.
General resource and data set profile fields
UACC (universal access authority): defines the default access permissions to the resource or data set.
OWNER: the creator of the resource or data set, who holds certain privileges over it.
WARNING: indicates whether the resource or data set is in WARNING mode.
USERIDS: the list of user IDs associated with the resource or data set. The order is essential.
USERACS: the list of users with access permissions to the resource or data set. The order is essential.
RACF profile relationship chains
The fields listed above demonstrate the presence of relationships between RACF profiles. We have decided to name these relationships similarly to those used in BloodHound, a popular tool for analyzing Active Directory misconfigurations. Below are some examples of these relationships – the list is not exhaustive.
Owner: the subject owns the object.
MemberOf: the subject is part of the object.
AllowJoin: the subject has permission to add itself to the object.
AllowConnect: the subject has permission to add another object to the specified object.
AllowCreate: the subject has permission to create an instance of the object.
AllowAlter: the subject has the ALTER privilege for the object.
AllowUpdate: the subject has the UPDATE privilege for the object.
AllowRead: the subject has the READ privilege for the object.
CLAuthTo: the subject has permission to create instances of the object as defined in the CLAUTH field.
GroupSpecial: the subject has full control over all profiles within the object’s scope of influence as defined in the group-SPECIAL field.
GroupOperations: the subject has permissions to perform certain operations with the object as defined in the group-OPERATIONS field.
ImpersonateTo: the subject grants the object the privilege to perform certain operations on the subject’s behalf.
ResetPassword: the subject grants another object the privilege to reset the password or password phrase of the specified object.
UnixAdmin: the subject grants superuser privileges to the object in z/OS UNIX.
SetAPF: the subject grants another object the privilege to set the APF flag on the specified object.
These relationships serve as edges when constructing a graph of subject–object interconnections. Below are examples of potential relationships between specific profile types.
Examples of relationships between RACF profiles
Visualizing and analyzing these relationships helped us identify specific chains that describe potential RACF security issues, such as a path from a low-privileged user to a highly privileged one. Before we delve into examples of these chains, let’s consider another peculiar feature of the relationships between RACF database entities.
Implicit RACF profile relationships
We have observed a fascinating characteristic of the group-SPECIAL, group-OPERATIONS, and group-AUDITOR fields within a user profile. If the user has any group specified in one of these fields, that group’s scope of influence extends the user’s own scope.
Scope of influence of a user with a group-SPECIAL field
For instance, consider USER1 with GROUP1 specified in the group-SPECIAL field. If GROUP1 owns GROUP2, and GROUP2 subsequently owns USER5, then USER1 gains privileges over USER5. This is not just about data access; USER1 essentially becomes the owner of USER5. A unique aspect of z/OS is that this level of access allows USER1 to, for example, change USER5’s password, even if USER5 holds privileged attributes like SPECIAL, OPERATIONS, ROAUDIT, AUDITOR, or PROTECTED.
Below is an SQL query, generated using the racfudit utility, that identifies all users and groups where the specified user possesses special attributes:
select ProfileName, CGGRPNM, CGUACC, CGFLAG2 from USER_BASE WHERE (CGFLAG2 LIKE '%10000000%');
Here is a query to find users whose owners (AUTHOR) are not the standard default administrators:
SELECT ProfileName, AUTHOR FROM USER_BASE WHERE (AUTHOR NOT LIKE '%IBMUSER%' AND AUTHOR NOT LIKE 'SYS1%');
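Such queries can also be scripted. The snippet below runs the first query with Python’s built-in sqlite3 module; the database filename is a placeholder for whatever file racfudit produced in your environment.

```python
# Run a query against the SQLite database produced by racfudit.
# "racf.db" is a placeholder filename.
import sqlite3

con = sqlite3.connect("racf.db")
query = """
SELECT ProfileName, CGGRPNM, CGUACC, CGFLAG2
FROM USER_BASE
WHERE CGFLAG2 LIKE '%10000000%';
"""
for row in con.execute(query):
    print(row)
con.close()
```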
Let’s illustrate how user privileges can be escalated through these implicit profile relationships.
Privilege escalation via the group-SPECIAL field
In this scenario, the user TESTUSR has the group-SPECIAL field set to PASSADM. This group, PASSADM, owns the OPERATOR user. This means TESTUSR’s scope of influence expands to include PASSADM’s scope, thereby granting TESTUSR control over OPERATOR. Consequently, if TESTUSR’s credentials are compromised, the attacker gains access to the OPERATOR user. The OPERATOR user, in turn, has READ access to the IRR.PASSWORD.RESET resource, which allows them to assign a password to any user who does not possess privileged permissions.
Having elevated privileges in z/OS UNIX is often sufficient for compromising the mainframe. These can be acquired through several methods:
Grant the user READ access to the BPX.SUPERUSER resource of the FACILITY class.
Grant the user READ access to SUPERUSER.* resources of the UNIXPRIV class.
Set the UID field to 0 in the OMVS segment of the user profile.
For example, the DFSOPER user has READ access to the BPX.SUPERUSER resource, making them privileged in z/OS UNIX and, by extension, across the entire mainframe. However, DFSOPER does not have the explicit privileged fields SPECIAL, OPERATIONS, AUDITOR, ROAUDIT and PROTECTED set, meaning the OPERATOR user can change DFSOPER’s password. This allows us to define the following sequence of actions to achieve high privileges on the mainframe:
Obtain and use TESTUSR’s credentials to log in.
Change OPERATOR’s password and log in with those credentials.
Change DFSOPER’s password and log in with those credentials.
Access the z/OS UNIX Shell with elevated privileges.
We uncovered another implicit RACF profile relationship that enables user privilege escalation.
Privilege escalation from a chain of misconfigurations
In another example, the TESTUSR user has READ access to the OPERSMS.SUBMIT resource of the SURROGAT class. This implies that TESTUSR can create a task under the identity of OPERSMS using the ImpersonateTo relationship. OPERSMS is a member of the HFSADMIN group, which has READ access to the TESTAUTH resource of the TSOAUTH class. This resource indicates whether the user can run an application or library as APF-authorized – this requires only READ access. Therefore, if APF access is misconfigured, the OPERSMS user can escalate their current privileges to the highest possible level. This outlines a path from the low-privileged TESTUSR to obtaining maximum privileges on the mainframe.
At this stage, the racfudit utility only allows these connections to be identified manually, through a series of SQLite database queries. However, we plan to add support for additional output formats, including Neo4j DBMS integration, to automatically visualize the interconnected chains described above.
Password hashes in RACF
To escalate privileges and gain mainframe access, we need the credentials of privileged users. We previously used our utility to extract their password hashes. Now, let’s dive into the password policy principles in z/OS and outline methods for recovering passwords from these collected hashes.
The primary password authentication methods in z/OS, based on RACF, are PASSWORD and PASSPHRASE. A PASSWORD is, by default, composed of uppercase English letters, digits, and the special characters @, #, and $; its length is limited to 8 characters. A PASSPHRASE, or password phrase, has a more complex policy, allowing 14 to 100 characters, including lowercase and uppercase English letters, digits, and an extended set of special characters (@#$&*{}[]()=,.;'+/). Hashes for both PASSWORD and PASSPHRASE are stored in the user profile within the BASE segment, in the PASSWORD and PHRASE fields, respectively. Two algorithms are used to derive their values: DES and KDFAES.
It is worth noting that we use the terms “password hash” and “password phrase hash” for clarity. When using the DES and KDFAES algorithms, user credentials are stored in the RACF database as encrypted text, not as a hash sum in its classical sense. Nevertheless, we will continue to use “password hash” and “password phrase hash” as is customary in IBM documentation.
Let’s discuss the operating principles and characteristics of the DES and KDFAES algorithms in more detail.
DES
When the DES algorithm is used, the computation of PASSWORD and PHRASE values stored in the RACF database involves classic DES encryption. Here, the plaintext data block is the username (padded to 8 characters if shorter), and the key is the password (also padded to 8 characters if shorter).
PASSWORD
The username is encrypted with the password as the key via the DES algorithm, and the 8-byte result is placed in the user profile’s PASSWORD field.
DES encryption of a password
Keep in mind that both the username and password are encoded with EBCDIC. For instance, the username USR1 would look like this in EBCDIC: e4e2d9f140404040. The byte 0x40 serves as padding for the plaintext to reach 8 bytes.
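A rough sketch of this scheme using pycryptodome is shown below. It follows the simplified description given here (EBCDIC username as plaintext, space-padded password as key); real implementations may apply an additional transformation to the password bytes before using them as the DES key, so treat this as an illustration of the scheme rather than an exact reimplementation.

```python
# Sketch of the DES-based PASSWORD scheme described above: the EBCDIC-encoded
# username, padded with EBCDIC spaces (0x40), is encrypted with the password
# as the DES key. Requires pycryptodome.
from Crypto.Cipher import DES

def ebcdic8(text: str) -> bytes:
    """EBCDIC-encode and pad with spaces (0x40) to exactly 8 bytes."""
    return text.ljust(8)[:8].encode("cp1047")

def racf_des_sketch(username: str, password: str) -> bytes:
    plaintext = ebcdic8(username)   # e.g. USR1 -> e4e2d9f140404040
    key = ebcdic8(password)         # password padded the same way
    return DES.new(key, DES.MODE_ECB).encrypt(plaintext)

print(racf_des_sketch("USR1", "SECRET").hex())
```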
Such a password can be recovered quite quickly, given DES’s small keyspace and low computational cost. For example, a brute-force attack powered by a cluster of NVIDIA RTX 4090 GPUs takes less than five minutes.
The hashcat tool includes a module (Hash-type 8500) for cracking RACF passwords with the DES algorithm.
PASSPHRASE
PASSPHRASE encryption is a bit more complex, and a detailed description of its algorithm is not readily available. However, our research uncovered certain interesting characteristics.
First, the final hash length in the PHRASE field matches the original password phrase length. Essentially, the encrypted data output from DES gets truncated to the input plaintext length without padding. This design can clearly lead to collisions and incorrect authentication under certain conditions. For instance, if the original password phrase is 17 bytes long, it will be encrypted in three blocks, with the last block padded with seven bytes. These padded bytes are then truncated after encryption. In this scenario, any password phrase whose first 17 encrypted bytes match the encrypted PASSPHRASE would be considered valid.
The second interesting feature is that the PHRASE field value is also computed using the DES algorithm, but it employs a proprietary block chaining mode. We will informally refer to this as IBM-custom mode.
DES encryption of a password phrase
Given these limitations, we can use the hashcat module for RACF DES to recover the first 8 characters of a password phrase from the first block of encrypted data in the PHRASE field. In some practical scenarios, recovering the beginning of a password phrase allowed us to guess the remainder, especially when weak dictionary passwords were used. For example, if we recovered Admin123 (8 characters) while cracking a 15-byte PASSPHRASE hash, then it is plausible the full password phrase was Admin1234567890.
KDFAES
Recovering passwords and password phrases protected with the KDFAES algorithm is significantly more challenging than recovering those protected with DES. KDFAES is a proprietary IBM algorithm that leverages AES encryption. The encryption key is generated from the password using the PBKDF2 function with a specific number of hashing iterations.
PASSWORD
The diagram below outlines the multi-stage KDFAES PASSWORD encryption algorithm.
KDFAES encryption of a password
The first stage mirrors the DES-based PASSWORD computation algorithm. Here, the plaintext username is encrypted using the DES algorithm with the password as the key. The username is also encoded in EBCDIC and padded if it’s shorter than 8 bytes. The resulting 8-byte output serves as the key for the second stage: hashing. This stage employs a proprietary IBM algorithm built upon PBKDF2-SHA256-HMAC. A randomly generated 16-byte string (salt) is fed into this algorithm along with the 8-byte key from the first stage. This data is then iteratively hashed using PBKDF2-SHA256-HMAC. The number of iterations is determined by two parameters set in RACF: the memory factor and the repetition factor. The output of the second stage is a 32-byte hash, which is then used as the key for AES encryption of the username in the third stage.
The final output is 16 bytes of encrypted data. The first 8 bytes are appended to the end of the PWDX field in the user profile BASE segment, while the other 8 bytes are placed in the PASSWORD field within the same segment.
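Since KDFAES is proprietary and several details are not public, the sketch below only strings the three stages together under explicit assumptions: the real second stage is a proprietary variant built on PBKDF2-SHA256-HMAC (plain PBKDF2 is used here as a stand-in), the mapping from the memory and repetition factors to an iteration count is unknown (here it is simply a parameter), and the AES mode and padding of the final stage are guesses. It illustrates the structure of the computation, not a reproduction of real PWDX/PASSWORD values.

```python
# Rough sketch of the three KDFAES PASSWORD stages described above.
# Assumptions: plain PBKDF2-HMAC-SHA256 stands in for IBM's proprietary variant,
# the iteration count is a free parameter, and the final AES step is assumed to
# be ECB over a single 16-byte block. Requires pycryptodome.
import hashlib
from Crypto.Cipher import AES, DES

def ebcdic8(text: str) -> bytes:
    """EBCDIC-encode and pad with spaces (0x40) to 8 bytes."""
    return text.ljust(8)[:8].encode("cp1047")

def kdfaes_password_sketch(username: str, password: str, salt: bytes, iterations: int) -> bytes:
    # Stage 1: as in the DES scheme, encrypt the EBCDIC username with the password.
    stage1 = DES.new(ebcdic8(password), DES.MODE_ECB).encrypt(ebcdic8(username))

    # Stage 2: key stretching over the stage-1 output with the 16-byte salt.
    # In RACF the iteration count is derived from the memory and repetition factors.
    stage2_key = hashlib.pbkdf2_hmac("sha256", stage1, salt, iterations, dklen=32)

    # Stage 3: AES-encrypt the username with the 32-byte key; the 16 bytes of output
    # are split between the PWDX and PASSWORD fields in the real database.
    block = ebcdic8(username).ljust(16, b"\x40")   # assumption: padded to one AES block
    return AES.new(stage2_key, AES.MODE_ECB).encrypt(block)

print(kdfaes_password_sketch("USR1", "SECRET", salt=b"\x00" * 16, iterations=40000).hex())
```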
The PWDX field in the BASE segment has the following structure:
Bytes 0–3 (4 bytes). In the profiles we analyzed, we observed only the value E7D7E66D.
Bytes 4–7 (4 bytes) – hash type. In the profiles we analyzed, we observed only two values: 00180000 for PASSWORD hashes and 00140000 for PASSPHRASE hashes.
Bytes 8–9 (2 bytes) – memory factor. A value that determines the number of iterations in the hashing stage.
Bytes 10–11 (2 bytes) – repetition factor. A value that determines the number of iterations in the hashing stage.
Bytes 12–15 (4 bytes) – unknown value. In the profiles we analyzed, we observed only the value 00100010.
Bytes 16–31 (16 bytes) – salt. A randomly generated 16-byte string used in the hashing stage.
Bytes 32–39 (8 bytes) – the first half of the password hash. The first 8 bytes of the final encrypted data.
You can use the dedicated module in the John the Ripper utility for offline password cracking. While an IBM KDFAES module for an older version of hashcat exists publicly, it was never integrated into the main branch. Therefore, we developed our own RACF KDFAES module compatible with the current hashcat version.
The time required to crack a RACF KDFAES hash is significantly longer than for RACF DES, largely due to the integration of PBKDF2. For instance, if the memory factor and repetition factor are set to 0x08 and 0x32 respectively, the hashing stage can reach 40,000 iterations. This can extend the password cracking time to several months or even years.
PASSPHRASE
KDFAES encryption of a password phrase
Computing a password phrase hash with KDFAES is very similar to computing a password hash. According to public sources, the primary difference lies in the key material used during the second stage: for a password, it is the data derived from DES-encrypting the username, while for a password phrase, it is the SHA256 hash of the phrase. During our analysis, we could not determine the exact password phrase hashing process – specifically, whether padding is involved, whether a secret key is used, and so on.
Additionally, for a password phrase, the final hash is stored in the PHRASE and PHRASEX fields instead of PASSWORD and PWDX, respectively, with the PHRASEX value having a structure similar to that of PWDX.
Conclusion
In this article, we have explored the internal workings of the RACF security package, developed an approach to extracting information from it, and presented the tool we built for this purpose. We also outlined several potential misconfigurations that could lead to mainframe compromise and described methods for detecting them. Furthermore, we examined the algorithms used for storing user credentials (passwords and password phrases) and highlighted their strengths and weaknesses.
We hope that the information presented in this article helps mainframe owners better understand and assess the potential risks associated with incorrect RACF security suite configurations and take appropriate mitigation steps. Transitioning to the KDFAES algorithm and password phrases, controlling UACC values, verifying access to APF libraries, regularly tracking user relationship chains, and other steps mentioned in the article can significantly enhance your infrastructure security posture with minimal effort.
In conclusion, it is worth noting that only a small part of the RACF database structure has been thoroughly studied. Comprehensive research would involve uncovering additional relationships between database entities, further investigating privileges and their capabilities, and developing tools to exploit excessive privileges. The topic of password recovery is also not fully covered, because the encryption algorithms have not been fully studied. IBM z/OS mainframes offer immense opportunities for further research. As for us, we will continue to shed light on the obscure, unexplored aspects of these systems, to help prevent potential vulnerabilities in mainframe infrastructure and associated security incidents.