SEC Chair Paul Atkins Advocates For Modernizing Crypto Regulations – Here’s How

In remarks made on December 4, US Securities and Exchange Commission (SEC) Chair Paul Atkins expressed an optimistic outlook for the cryptocurrency industry. Atkins emphasized the SEC’s intent to modernize its rules to facilitate an on-chain market environment, leveraging distributed ledger technology and the tokenization of financial assets.

SEC Chair Advocates For Crypto Tokenization

Atkins highlighted the transformative potential of these technologies for the capital markets. He stressed that enhancing these markets is essential for US firms and investors to maintain their leadership on a global scale. 

The chair underscored that the advancements in blockchain technology could streamline not only trading processes but also the entire issuer-investor relationship, which would enable a more efficient and transparent financial ecosystem.

Tokenization, according to Atkins, goes beyond merely changing the mechanics of trading. He pointed out that it can foster direct connections for various important functions such as proxy voting, dividend payments, and shareholder communications, all while reducing the reliance on multiple intermediaries. 

In his address, Atkins acknowledged several innovative models that deserve consideration. He noted that some companies are directly issuing equity on public distributed ledgers in the form of programmable assets. 

These assets can integrate compliance features, voting rights, and governance capabilities, allowing investors to hold securities in a digital format that promotes transparency and reduces the number of intermediaries involved.

Additionally, he mentioned that third parties are engaging in the tokenization of equities by generating on-chain security entitlements that represent ownership stakes in traditional equities. 

The emergence of synthetic exposures—tokenized products designed to reflect the performance of public equities—was also highlighted. While many of these offerings are currently being developed offshore, they showcase the international interest in US market exposure supported by distributed ledger technology.

Atkins Critiques Past SEC Strategies

However, Atkins cautioned that transitioning to on-chain capital markets entails more than just issuance. He stated that it is essential to address various stages of the securities transaction lifecycle effectively. 

For instance, if tokenized shares cannot be traded competitively in liquid on-chain environments, they risk becoming little more than conceptual assets without practical utility. 

The chair also criticized the previous SEC’s approach toward the crypto industry under the agency’s former chair Gary Gensler, which attempted to adapt to on-chain markets through an expansive redefinition of “exchange.” 

This earlier strategy enforced a broad regulatory framework that ultimately created uncertainty and stifled innovation, Atkins stated. He said that it is vital to avoid repeating such mistakes in order to stimulate innovation, investment, and job creation in the United States.

To foster a conducive environment for growth, Atkins called for compliant pathways that can enable market participants to capitalize on the unique benefits of new technologies like crypto. 

In light of this conviction, he has instructed SEC staff to explore recommendations for utilizing the agency’s exemptive authorities, permitting on-chain innovations while the Commission works toward developing long-term, effective crypto regulatory frameworks.

Three steps to build a data foundation for federal AI innovation

America’s AI Action Plan outlines a comprehensive strategy for the country’s leadership in AI. The plan seeks, in part, to accelerate AI adoption in the federal government. However, there is a gap in that vision: agencies have been slow to adopt AI tools to better serve the public. The biggest barrier to adopting and scaling trustworthy AI isn’t policy or compute power — it’s the foundation beneath the surface. How agencies store, access and govern their records will determine whether AI succeeds or stalls. Those records aren’t just for retention purposes; they are the fuel AI models need to power operational efficiencies through streamlined workflows and uncover mission insights that enable timely, accurate decisions. Without robust digitalization and data governance, federal records cannot serve as the reliable fuel AI models need to drive innovation.

Before AI adoption can take hold, agencies must do something far less glamorous but absolutely essential: modernize their records. Many still need to automate records management, beginning with opening archival boxes, assessing what is inside, and deciding what is worth keeping. This essential process transforms inaccessible, unstructured records into structured, connected datasets that AI models can actually use. Without it, agencies are not just delaying AI adoption, they’re building on a poor foundation that will collapse under the weight of daily mission demands.

If you do not know the contents of the box, how confident can you be that the records aren’t crucial to automating a process with AI? In AI terms, if you enlist the help of a model from a provider like OpenAI, the results will only be as good as the digitized data behind it. The greater the knowledge base, the faster AI can be adopted and scaled to positively impact public service. Here is how agencies can start preparing their records — their knowledge base — to lay a defensible foundation for AI adoption.

Step 1: Inventory and prioritize what you already have

Many agencies are sitting on decades’ worth of records, housed in a mix of storage boxes, shared drives, aging databases, and under-governed digital repositories. These records often lack consistent metadata, classification tags or digital traceability, making them difficult to find, harder to govern, and nearly impossible to automate.

This fragmentation is not new. According to NARA’s 2023 FEREM report, only 61% of agencies were rated as low-risk in their management of electronic records — indicating that many still face gaps in easily accessible records, digitalization and data governance. This leaves thousands of unstructured repositories vulnerable to security risks and unable to be fed into an AI model. A comprehensive inventory allows agencies to see what they have, determine what is mission-critical, and prioritize records cleanup. Not everything needs to be digitalized. But everything needs to be accounted for. This early triage is what ensures digitalization, automation and analytics are focused on the right things, maximizing return while minimizing risk.

Without this step, agencies risk building powerful AI models on unreliable data, a setup that undermines outcomes and invites compliance pitfalls.
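
As a rough illustration of what this early triage can look like in practice (not a prescribed NARA or agency process), the Python sketch below walks a shared drive, records basic metadata for each file, and flags anything missing a classification tag for early review. The directory path, the filename convention, and the priority rule are hypothetical assumptions for the example.

```python
import csv
import os
from datetime import datetime, timezone
from typing import Optional

# Hypothetical filename convention: "<id>__<classification>__<title>.<ext>"
# e.g. "2023-0412__mission-critical__benefits-ruling.pdf"
def parse_classification(filename: str) -> Optional[str]:
    parts = filename.split("__")
    return parts[1] if len(parts) >= 3 else None

def inventory(root: str, out_csv: str = "records_inventory.csv") -> None:
    """Walk a repository, capture basic metadata, and flag untagged records for triage."""
    rows = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            stat = os.stat(path)
            classification = parse_classification(name)
            rows.append({
                "path": path,
                "size_bytes": stat.st_size,
                "modified_utc": datetime.fromtimestamp(stat.st_mtime, tz=timezone.utc).isoformat(),
                "classification": classification or "TAG-MISSING",
                # Simple triage rule: untagged or mission-critical records get reviewed first.
                "priority": "review-first" if classification in (None, "mission-critical") else "defer",
            })
    with open(out_csv, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["path", "size_bytes", "modified_utc",
                                               "classification", "priority"])
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    inventory("/data/agency-shared-drive")  # hypothetical mount point
```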

Step 2: Make digitalization the bedrock of modernization

One of the biggest misconceptions around modernization is that digitalization is a tactical compliance task with limited strategic value. In reality, digitalization is what turns idle content into usable data. It’s the on-ramp to AI-driven automation across the agency, including one-click records management and data-driven policymaking.

By focusing on high-impact records — those that intersect with mission-critical workflows, Freedom of Information Act requests, cybersecurity or policy enforcement — agencies can start to build a foundation that’s not just compliant, but future-ready. These records form the connective tissue between systems, workforce, data and decisions.

The Government Accountability Office estimates that up to 80% of federal IT budgets are still spent maintaining legacy systems, resources that, if reallocated, could help fund strategic digitalization and unlock real efficiency gains. The opportunity cost of delay is increasing every day.

Step 3: Align records governance with AI strategy

Modern AI adoption isn’t just about models and computation; it’s about trust, traceability, and compliance. That’s why strong information governance is essential.

Agencies moving fastest on AI are pairing records management modernization with evolving governance frameworks, synchronizing classification structures, retention schedules and access controls with broader digital strategies. The Office of Management and Budget’s 2025 AI Risk Management guidance is clear: explainability, reliability and auditability must be built in from the start.

When AI deployment evolves in step with a diligent records management program centered on data governance, agencies are better positioned to accelerate innovation, build public trust, and avoid costly rework. For example, labeling records with standardized metadata from the outset enables rapid, digital retrieval during audits or investigations, a need that’s only increasing as AI use expands. This alignment is critical as agencies adopt FedRAMP Moderate-certified platforms to run sensitive workloads and meet compliance requirements. These platforms raise the baseline for performance and security, but they only matter if the data moving through them is usable, well-governed and reliable.
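
To make the audit-retrieval point concrete, here is a minimal, hedged sketch of how standardized metadata might be queried during a review. The field names and the `audit_pull` helper are illustrative assumptions, not an OMB, NARA, or FedRAMP schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Record:
    record_id: str        # illustrative fields only, not an official schema
    classification: str   # e.g. "FOIA", "cyber", "policy"
    retention_until: date
    location: str

def audit_pull(records: list, classification: str, as_of: date) -> list:
    """Return records of a given classification still under retention as of a date."""
    return [r for r in records
            if r.classification == classification and r.retention_until >= as_of]

# Example: pull everything tagged "FOIA" that must still be retained today.
catalog = [
    Record("A-001", "FOIA", date(2030, 1, 1), "s3://agency-archive/A-001"),
    Record("A-002", "cyber", date(2026, 6, 30), "s3://agency-archive/A-002"),
]
print(audit_pull(catalog, "FOIA", date.today()))
```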

Infrastructure integrity: The hidden foundation of AI

Strengthening the digital backbone is only half of the modernization equation. Agencies must also ensure the physical infrastructure supporting their systems can withstand growing operational, environmental, and cybersecurity demands.

Colocation data centers play a critical role in this continuity — offering secure, federally compliant environments that safeguard sensitive data and maintain uptime for mission-critical systems. These facilities provide the stability, scalability and redundancy needed to sustain AI-driven workloads, bridging the gap between digital transformation and operational resilience.

By pairing strong information governance with resilient colocation infrastructure, agencies can create a true foundation for AI, one that ensures innovation isn’t just possible, but sustainable in even the most complex mission environments.

Melissa Carson is general manager for Iron Mountain Government Solutions.

The post Three steps to build a data foundation for federal AI innovation first appeared on Federal News Network.

The VA’s size and complexity may be keeping top tech minds away, and veterans pay the price

Interview transcript

Terry Gerton You have spent a lot of time on the Hill lately talking to lawmakers about ways the VA could modernize access to care. Tell us both what your message is and what you’re hearing from the lawmakers.

Sean O’Connor Yeah. And maybe before that, Terry, just to touch on why we think this is so important or why personally it’s so important to me. And then thank you again for having us, and [I’m] looking forward to having this conversation today. So just at the start, I’m a third-generation veteran. Both my grandfathers fought and served in World War II, one in the Pacific, one in Europe. My father and my uncles all served during the Vietnam era. And I’m a 9/11 vet. So since the 1940s, my family has been, you know, leaning on and relying on the VA for all kinds of support and care. So, it’s a mission and it’s an institution that’s very important to me personally and very important to the fabric of our country. So, I think it’s no surprise the VA has struggled, you know, being in the early forefront of EHR … adoption to kind of being a laggard now in kind of EHR modernization. And there’s 9 million vets that really struggle to get access to timely care for some of the services they need as the VA works to modernize. So we’ve been spending a lot of time just talking to some of the leadership on the Hill around the momentum that seems to be building to try to modernize finally and kind of make access to care easier for veterans, and trying to make sure that as community care grows and the VA and veterans have more options to seek care both inside and outside the VA, that we really move the needle on reducing time to care and improving efficiency of care delivery for veterans. So that’s where we’re trying to, you know, spend time talking to the folks in SVAC and the Hill about, and learn about some of the strategies people are trying to implement when it comes to the Dole Act and some of the other things that people are trying to advance when it comes to improving access to care for veterans. And really, we’re a small technology company that focuses on healthcare access. And we’re just, you know, trying to support improving access to care for veterans wherever and whenever we can because it’s a really important institution. It’s the largest health system in our country. And it’s probably one of the most outdated when it comes to the complexity of modernizing care for scheduling and finding appointments for veterans. And there’s a lot of things that I think we can do to help the VA as they work to improve some of those services.

Terry Gerton You’ve said that the VA was built for the last century and you’ve just mentioned the Electronic Health Record that the VA spent billions of dollars on and still doesn’t have an operational system. What would you recommend in terms of practice for modernizing some of those administrative functions of the VA?

Sean O’Connor Yeah, it’s complicated. So I’m not suggesting this isn’t complicated. The VA has gone through four different attempts to try to modernize and it’s still not successful yet in trying to get to the end goal of improving access to care for veterans and having a global view of care. So I think the first thing we’ve been talking to folks about is, today everything works in silos. And it’s tough to leverage the size and sophistication of the VA caregivers when everything’s in silos. And there’s close to 130 different VistA instances, a growing number of Oracle instances. And one of the leaders we talked to at the VA last time we were in D.C. said that the complexity of VA care delivery is beyond human comprehension. That’s how customized each of those VistA instances is. They’re each a unique snowflake. They don’t talk to each other, they don’t share inventory. One of the VISNs we’re talking to now about a project, there’s roughly 10,000 appointments that go unutilized every month in his hospital because these different EHR instances don’t talk to each other. So one of the first things we’re talking about is, you know, trying to break down those data silos to bring all the supply and all the demand into one queue. And this is what we do for some of the other largest health systems in the country, Kaiser and other folks, where we take this global view of inventory and then you can use, you know, AI and some of these sophisticated navigation tools that have been built in the digital age of healthcare since the pandemic, to start to look at how you load balance that network a little more efficiently, how you share resources, how you improve internal utilization, improve efficiency, and reduce care gaps across the board. So I think until the VA finds a way, through either a massive conversion to a centralized EHR or finding ways to work with technology entrepreneurs and vendors that can break down some of these data silos, they’ll continue to have the problem of trying to transition to a large EMR system in Oracle and through that process still have these 130 other systems and up to 24 different scheduling solutions that have been customized across the various VISNs, none of them working together, none of them sharing information across each other. So you have the largest health system in the country, 9 million veterans and their family members that we’re supposed to provide and care for, and none of this stuff talks to each other to share capacity, to share utilization, to share best practices. It’s a very fragmented, siloed and complicated environment. So until we find ways to break down those silos and share, leverage the power of tech and data to kind of level that playing field, it’s going to be very difficult to move anything in a substantial manner, we think.
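
As a purely illustrative sketch of the “one queue” idea O’Connor describes, the Python below merges open appointment slots from two hypothetical siloed scheduling feeds into a single earliest-first view. The system names, fields, and data are invented and do not represent VistA, Oracle Health, or DexCare’s actual products.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Iterable, List

@dataclass
class Slot:
    source_system: str   # e.g. "vista-site-17", "oracle-visn-9" (hypothetical labels)
    specialty: str
    start: datetime
    facility: str

def merged_queue(feeds: Iterable[Iterable[Slot]], specialty: str) -> List[Slot]:
    """Combine open slots from every siloed feed into one earliest-first queue."""
    combined = [slot for feed in feeds for slot in feed if slot.specialty == specialty]
    return sorted(combined, key=lambda s: s.start)

# Two scheduling silos that never see each other's capacity.
vista_feed = [Slot("vista-site-17", "ortho", datetime(2025, 7, 1, 9, 0), "Facility A")]
oracle_feed = [Slot("oracle-visn-9", "ortho", datetime(2025, 6, 24, 14, 0), "Facility B")]

for slot in merged_queue([vista_feed, oracle_feed], "ortho"):
    print(slot.start, slot.facility, slot.source_system)
```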

Terry Gerton I’m speaking with Sean O’Connor. He’s a Navy veteran and co-founder and chief strategy officer at DexCare. The VA is not the only federal agency that’s bad at a big bang tech deployment. So when you talk about an agency-wide solution that breaks down silos, anybody who’s been around for a while probably rolls their eyes at us. What intermediate sorts of technology could provide some solution while an agency-wide solution is underway?

Sean O’Connor Yeah, we’ve been a big proponent of working with other really large healthcare systems in the country and doing, you know, scalable, strategically thought-out proof of concepts and smaller fragments first and then learning and scaling and iterating and adopting quickly. So I think one of the things the VA has going for it is it does have the VISN network and the ability to kind of do proof of concepts in some of these smaller regional health systems, learn, iterate and adopt and then look to scale from there. We think that’s the best way to do this stuff. That’s how we’ve done it with Kaiser and some of these other really large healthcare systems. You do smaller proof of concepts, you learn the integration points that are important to move the needle. You begin with the end in mind and understanding the success metrics that are going to be important to drive this. And then you learn, iterate and scale quickly from there; bottom-up and top-down support is the only way to kind of move these things. And at the same time, being very conscious of the providers as well. So all of the technology companies we’ve built, we built inside of large healthcare systems. And oftentimes, technology is only 50% of the problem. Understanding the provider and the change management and the amount of pressure that those folks are under to provide care, and not being disruptive to their workflows and making their lives less efficient. You have to be very thoughtful about that, or none of the stuff is going to go anywhere. You can’t just have tech for tech’s sake. It has to understand the provider world and how the provider interacts. And you have to be very purposeful in how you build these things out to scale from the bottom up over time.

Terry Gerton One of the big points that you’ve emphasized is real-time access to care, especially for mental health services and especially in rural communities. Those are two big complicating aspects of the VA’s network. How can the VA think about addressing those kinds of issues? Is it a technology solution? Is it a culture solution? How do they get to real-time care, especially in mental health?

Sean O’Connor I think it’s both. And I think the hard part is it’s probably more culture than technology. But it’s a — I don’t know of a bigger issue for us to kind of rally around as a community to try to improve access to care for veterans than this. So when I transitioned from the service in 2004, the VA received roughly $21 billion to support its mission, and 17 men and women took their lives every day to suicide: friends, brothers, sisters, husbands, wives. Fast forward to 2024, the VA received $121 billion to support its mission, and that number is still the same. Roughly 17 men and women, brothers, sisters, mothers, daughters took their lives to suicide. We’ve lost more people to suicide in the last 20 years than we did, you know, supporting the post-9/11 ground combat. So it’s a crisis that’s not talked about. We haven’t really moved the needle on it despite spending over $100 billion more to support the healthcare delivery mission of the VA. So it’s clearly not just a technology issue, but not having — going back to your first question, Terry — not having the ability to share resources across the network and reduce time to care and make it easier for vets to find and get into the services initially is a problem. I won’t say that’s the biggest problem, but it certainly doesn’t help. So … mental health services in the veteran community is a really complicated issue … It’s not just about having access to care. You know, a big portion of people that need the care aren’t even enrolled in the VA, and then there’s a homeless population that’s not enrolled in the VA. And how do you outreach and bring those folks in that need the help the most? So it’s a complicated issue, but not being able to have one 24/7-365 on-demand network that shares capacity across mental health services for the VA is an issue as well. And the technology issues are easier to address. We just got to have people that are willing to address them. The cultural issues and the stigma around, you know, raising your hand for help is a harder issue to address, but it’s just something we gotta continue to talk about because it’s a travesty that in over 20 years, that number really hasn’t moved, despite putting, you know, literally over $100 billion more toward the overall global healthcare issue.

Terry Gerton Well, you talked about capacity there, and certainly building out the community network of care is a big issue and a big initiative for VA. Are there issues on the community participant side of this too, that community care providers don’t understand the VA as much as the VA doesn’t understand community care providers?

Sean O’Connor We’re going to run out of time on your podcast. Yes, so that’s to me like, you know, obviously selfishly, like, we want to help the VA as a technology company, but the importance of improving access to care for veterans is at the heart of everything that we’re trying to do here. So the beauty of the VA to me — I mentioned I’m a third-generation veteran — it is a unique community. So when I first got out of the military, I moved to Seattle, like, it was a tough transition going from the military to the corporate world. I didn’t know anybody up here. My family and I grew up in Jersey, all my family was on the East Coast. I would literally just go to the Seattle VA and hang out in the lobby and just talk to people that, you know, had their Vietnam hat on. It’s a community and a culture that, you know, should be protected in this institution, in this country. And some of the caregivers, you know, we’re talking about the technology piece here. These are some of the most mission-driven caregivers in the world. Like, they can make more money outside the VA. They choose to work with this community and this provider network for a reason. So there is an understanding there that I think we need to protect, because there is an understanding of someone that’s come back from deployment and has been through some serious high-optempo stuff, and you just get a different conversation with your primary care provider in the VA than with somebody outside the VA. So I think there’s that element that we have to protect. But there’s also the element, frankly, that, you know, as a veteran, I like having the choice to go outside the VA for services that they may not be expert in. So certainly, you know, wound care, PTSD, that stuff, I think should stay in the VA. But maybe, you know, I’m a former athlete and tore my knee up and can get into an ortho appointment outside the VA. I want to have that optionality. And for some stuff like that, the history isn’t as important to the veteran for some of those conditions. So, to have the optionality to go out there and do that is important. But what we’re seeing, at least for some of the areas that we work with, is the community providers, one, they don’t have a lot of excess capacity to share with the VA. Every health system is stretched to the gills. Like there’s not a ton of health systems raising hands saying, hey, we have providers sitting on their hands. It’s six to eight months to get into an ortho appointment in some of these large health systems as it is. So to have that capacity to share with the VA, one, is difficult. Some of those things I think are bigger deals than others to your point of, you know, should there be a continuum of care in the VA? I’d argue for some services, just do it in the VA, and some are easily, you know, sourced out. And then there’s the whole issue of, when they’re sourced out, how do you manage the care gaps for the veteran? How do we close some of those care gaps as those services continue to rise and the disparate records continue to grow across the network?

The post The VA’s size and complexity may be keeping top tech minds away, and veterans pay the price first appeared on Federal News Network.

At VA, cyber dominance is in, cyber compliance is out

The Department of Veterans Affairs is moving toward a more operational approach to cybersecurity.

This means VA is applying a deeper focus on protecting the attack surfaces and closing off threat vectors that put veterans’ data at risk.

Eddie Pool, the acting principal assistant secretary for information and technology and acting principal deputy chief information officer at VA, said the agency is changing its cybersecurity posture to reflect a cyber dominance approach.

“That’s a move away from the traditional and exclusively compliance-based approach to cybersecurity, where we put a lot of our time, resources and investments in compliance-based activities,” Pool said on Ask the CIO. “For example, did someone check the box on a form? Did someone file something in the right place? We’re really moving a lot of our focus over to the risk-based approach to security, pushing things like zero trust architecture, microsegmentation of our networks and really doing things that are more focused on the operational landscape. We are more focused on protecting those attack surfaces and closing off those threat vectors in cyberspace.”

A big part of this move to cyber dominance is applying the concepts that make up a zero trust architecture, such as microsegmentation and identity and access management.

Pool said as VA modernizes its underlying technology infrastructure, it will “bake in” these zero trust capabilities.

“Over the next several years, you’re going to see that naturally evolve in terms of where we are in the maturity model path. Our approach here is not necessarily to try to map to a model. It’s really to rationalize what are the highest value opportunities that those models bring, and then we prioritize those activities first,” he said. “We’re not pursuing it in a linear fashion. We are taking parts and pieces and what makes the most sense for the biggest bang for our buck right now, that’s where we’re putting our energy and effort.”
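
As a rough sketch of the risk-based, zero-trust style of decision described above (as opposed to a checkbox audit), the Python below denies by default and allows access only when identity, device posture, and segment signals all check out. The attributes, thresholds, and segment names are invented for illustration and do not reflect VA’s actual architecture.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_verified_mfa: bool   # identity signal
    device_compliant: bool    # endpoint posture signal
    segment: str              # microsegment the request originates from
    resource_segment: str     # microsegment the resource lives in
    risk_score: float         # 0.0 (low) to 1.0 (high), e.g. from analytics

def decide(req: AccessRequest, max_risk: float = 0.3) -> str:
    """Deny by default; allow only when every signal checks out."""
    if not req.user_verified_mfa or not req.device_compliant:
        return "deny"
    if req.segment != req.resource_segment:  # cross-segment traffic gets extra scrutiny
        return "step-up-authentication"
    return "allow" if req.risk_score <= max_risk else "deny"

print(decide(AccessRequest(True, True, "clinical-apps", "clinical-apps", 0.1)))   # allow
print(decide(AccessRequest(True, False, "clinical-apps", "clinical-apps", 0.1)))  # deny
```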

One of those areas that VA is focused on is rationalizing the number of tools and technologies it’s using across the department. Pool said the goal is to get down to a specific set instead of having the “31 flavors” approach.

“We’re going to try to make it where you can have any flavor you want so long as it’s chocolate. We are trying to get that standardized across the department,” he said. “That gives us the opportunity from a sustainment perspective that we can focus the majority of our resources on those enterprise standardized capabilities. From a security perspective, it’s a far smaller threat landscape to have to worry about, having 100 things versus having two or three things.”

The business process reengineering priority

Pool added that redundancy remains a key factor in the security and tool rationalization effort. He said VA will continue to have a diversity of products in its IT investment portfolios.

“Where we are at is we are looking at how we build that future state architecture, as elegantly and simplistically as possible, so that we can manage it more effectively and protect it more securely,” he said.

In addition to standardizing cyber tools and technologies, Pool said VA is bringing the same approach to business processes for enterprisewide services.

He said over the years, VA has built up a laundry list of legacy technologies, all with different versions and requirements to maintain.

“We’ve done a lot over the years in the Office of Information and Technology to really standardize on our technology platforms. Now it’s time to leverage that, to really bring standard processes to the business,” he said. “What that does is that really does help us continue to put the veteran at the center of everything that we do, and it gives a very predictable, very repeatable process and expectation for veterans across the country, so that you don’t have different experiences based on where you live or where you’re getting your health care and from what part of the organization.”

Part of the standardization effort is that VA will expand its use of automation, particularly in the processing of veterans’ claims.

Pool said the goal is to take more advantage of the agency’s data and use artificial intelligence to accelerate claims processing.

“The richness of the data and the standardization of our data that we’re looking at and how we can eliminate as many steps in these processes as we can, where we have data to make decisions, or we can automate a lot of things that would completely eliminate what would be a paper process that is our focus,” Pool said. “We’re trying to streamline IT to the point that it’s as fast and as efficient, secure and accurate as possible from a VA processing perspective, and in turn, it’s going to bring a decision back to the veteran a lot faster, and a decision that’s ready to go on to the next step in the process.”
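
A minimal sketch of the pattern Pool describes: automate the decision only where the supporting data is already complete, and route everything else to a person. The claim fields and rules below are hypothetical and are not VA’s actual claims logic.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Claim:
    claim_id: str
    condition: str
    service_records_on_file: bool
    exam_results_on_file: bool
    model_suggested_rating: Optional[int]  # e.g. from an analytics step; None if unsupported

def route(claim: Claim) -> str:
    """Auto-decide only when the supporting data is complete; otherwise send to a rater."""
    evidence_complete = claim.service_records_on_file and claim.exam_results_on_file
    if evidence_complete and claim.model_suggested_rating is not None:
        return f"auto-decision: proposed rating {claim.model_suggested_rating}%"
    return "manual review: missing evidence or no supported rating"

print(route(Claim("C-100", "tinnitus", True, True, 10)))
print(route(Claim("C-101", "knee strain", True, False, None)))
```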

Many of these updates already are having an impact on VA’s business processes. The agency said that it set a new record for the number of disability and pension claims processed in a single year, more than 3 million. That beat its record set in 2024 by more than 500,000.

“We’re driving benefit outcomes. We’re driving technology outcomes. From my perspective, everything that we do here, every product, service capability that the department provides the veteran community, it’s all enabled through technology. So technology is the underpinning infrastructure, backbone to make all things happen, or where all things can fail,” Pool said. “First, on the internal side, it’s about making sure that those infrastructure components are modernized. Everything’s hardened. We have a reliable, highly available infrastructure to deliver those services. Then at the application level, at the actual point of delivery, IT is involved in every aspect of every challenge in the department, to again, bring the best technology experts to the table and look at how can we leverage the best technologies to simplify the business processes, whether that’s claims automation, getting veterans their mileage reimbursement earlier or by automating processes to increase the efficacy of the outcomes that we deliver, and just simplify how the veterans consume the services of VA. That’s the only reason why we exist here, is to be that enabling partner to the business to make these things happen.”

The post At VA, cyber dominance is in, cyber compliance is out first appeared on Federal News Network.

Case Closed: Bitcoin’s Underlying Value, Explained

A combined obituary for TradFi’s (mis)understanding of bitcoin’s underlying value.

This article was written in response to a statement made by European Central Bank President Christine Lagarde in an October 7, 2025, interview, where she claimed that bitcoin has “no intrinsic” or “underlying value.”

When Christine Lagarde says Bitcoin has no “intrinsic” or “underlying value,” she’s (likely) referring to the fact that it — unlike an equity — doesn’t produce a cash flow. The classic critique that follows is that it’s “purely speculative”, meaning it’s only worth what someone else is willing to buy it for in the future.

She further dismisses Bitcoin as a form of “digital gold” and seems to suggest that physical gold is somehow different — presumably because she assigns it value for its use cases beyond its function as money (if I had to guess).

To say that Bitcoin doesn’t have a cash flow is factually correct — but as nonsensical as saying “language” or “mathematics” have no cash flow.

One could, of course, counter Lagarde’s statement by appealing to the subjective value proposition — arguing that there’s no such thing as intrinsic value, since all value is subjective, and that anything can only ever be worth what someone else is willing to pay for it in the future.

But instead of taking that route, I’ll go the roundabout (and more entertaining) way of showing why she’s not only wrong, but also inconsistent by her own logic.

Let’s start with gold and the idea that something supposedly has “intrinsic value” because it has a use case beyond its function as money — to get that out of the way.

Gold

We’ll start with a forum excerpt from Satoshi themselves:

The entire point of money is to be one step removed from bartering — to serve as a neutral medium that communicates the underlying economic reality between supply and demand in an economy, allowing participants to make maximally informed decisions.

For this reason, throughout history, the evolution of money has consistently trended toward what cannot be easily recreated at will. The reason is simple: it’s within everyone’s self-interest, and the economy as a whole (as we will see), that the money being used and accepted cannot be diluted.

If gold were assumed for a moment to be absolutely scarce and used solely as money, the price of an apple becomes a pure function of supply and demand. The price, expressed in gold, could only change if the real supply or demand for apples changed. In this setup, all market participants are maximally informed and economic reality is upheld.

Apple price = f(Apple supply, Apple demand)

If, however, gold all of a sudden gained demand for some other purpose, such as being used for jewellery, the dynamics change. The price of an apple now becomes a function not only of the supply and demand for apples, but also the jewellery demand, as it’s causing a change in the denominator (money) itself. The result is a less-than-ideal form of money, where economic reality is distorted and market participants are presented with compromised information.

Apple price = f(Apple supply, Apple demand, Jewellery demand)

Note that this is materially different from a setup where, as in the real economy, billions of participants want billions of different things while still using the same money.

Money is merely the measuring stick, which means that the demand for bananas isn’t going to affect the price of apples just because both prices are expressed in the same unit of account. What is going to distort prices is if people start demanding the good being used as money for something other than its monetary function.
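
A toy numerical sketch of this point, under deliberately simple assumptions (a fixed gold stock, all monetary gold chasing a fixed crop of apples): pulling part of the gold stock into jewellery moves the gold price of apples even though nothing about apples has changed.

```python
# Toy model: apple price in gold = monetary gold available / apples traded.
# All numbers are invented purely for illustration.
GOLD_STOCK = 1_000.0   # ounces in existence
APPLES = 10_000.0      # apples traded per period

def apple_price(jewellery_share: float) -> float:
    """Gold price per apple when a share of the gold stock is withdrawn for jewellery."""
    monetary_gold = GOLD_STOCK * (1.0 - jewellery_share)
    return monetary_gold / APPLES

print(apple_price(0.0))  # 0.10 oz per apple: price reflects only apple supply and demand
print(apple_price(0.3))  # 0.07 oz per apple: price moved with zero change in apples
```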

The irony here, of course, is that gold’s supposed “usefulness” beyond money — its role in jewellery or industry, the very thing that supposedly gives it underlying value — actually makes it less perfect as money. By having a non-monetary use, gold introduces an additional demand parameter into what’s meant to be a neutral measuring stick.

The ideal money, as Satoshi pointed out, would be a kind of “grey metal” — something with no other purpose than being perfect money itself. That “grey metal” is, of course, Bitcoin.

Let’s now move on to cash flows — the main topic of discussion whenever TradFi talks about “underlying” or “intrinsic” value.

After all, many of the same people who point out that Bitcoin doesn’t have any cash flows aren’t as internally conflicted as Lagarde, and extend the same judgment to gold (that it doesn’t have intrinsic value) — which, at the very least, is a more consistent position.

Cash flows

Last year, Meta (Facebook), Google, and Amazon reported combined cash flows of roughly $160 billion. If someone asked Lagarde whether these equities had an underlying value, she would of course say yes. Each company sits on billion-dollar assets and billion-dollar expected future cash flows that can be discounted to generate an equity valuation.

Bitcoin, on the other hand, has no comparable cash flows to speak of — no disagreement there.

But before we go further, let’s ask: Where do those cash flows actually come from? In other words, what is the driver of those cash flows from Meta, Google, and Amazon?

We’ve all used Facebook. It offers a global platform for people to connect, message, and share. Its revenue comes from selling ads on top of user attention. Why do people use Facebook? Because everyone else does. Because it offers the best experience. It’s a social network, meaning every new user adds value to everyone else.

What about Google? Same logic. It’s the world’s leading search engine — the front door of the internet. It also monetises through targeted advertising. Why do you use Google instead of Yahoo or Bing? Because everyone else does. The more data it gathers, the better it gets for everyone. Another network effect (often leading to winner-take-all outcomes).

Amazon? Same principle, different domain. It’s the default marketplace of the world, connecting buyers and sellers on a global scale. Amazon profits from subscriptions and logistics fees. Consumers use it because every supplier is there; suppliers use it because every consumer is there. Every new participant makes the network more useful. It started with books — now it sells everything.

Now, imagine each of these companies woke up one morning after a collective bump to the head, decided profit was overrated, and poured their fortunes into an endowment run entirely by an AI workforce — keeping the networks running exactly as before, just without the monetisation.

Shareholder value would immediately drop to zero.

But what about the network?
Would people still use Facebook, Google and Amazon? Of course!

Because the underlying value to the users was never the company itself — it was the network it monetised (which they had no other way of accessing without going through that monetisation). The fact that the network now costs nothing or very little to use wouldn’t make it less valuable for them, now would it?

The equity value and the network value are two different things.

The Bitcoin Company

Now, imagine another startup with a single vision: “We’re going to build the best money in the world.”

Its service is to launch a global network for value transfer and storage, promising a monetary asset with a fixed supply of 21 million units forever — no dilution, no exceptions (pinky promise). The monetisation model: small transaction fees, 10x lower than competitors.

We call it “The Bitcoin Company”.

Imagine it miraculously gained some early traction. Why would people continue or grow interested in using it? Well, because more and more people do. And as they do, both the equity value of those owning the company (as they collect fees) and the network value to the users would grow.

There you’d have your cash flows.

Ironically, this is the same “business model” that underpins the central banking system, only they defaulted on their original promise. By positioning themselves as issuers atop the fiat monetary network, central banks and megabanks monetise it through two layers.

At the base lies the fiat monetary network, consisting of state-backed money. Central banks monetise this layer by issuing the very units the network runs on and indirectly financing government deficits. Above them, megabanks monetise the same network through credit creation, earning profits from interest on loans, and now increasingly from stablecoins (which is like credit on top of credit).

Lagarde insists stablecoins are “different” because she views them as network expanders that amplify the monetary network she controls. Just as Facebook’s advertising revenue grows with its user base, the spread of stablecoins enlarges the euro monetary network, giving central banks greater room for monetary expansion.

From her perspective, this expansion of units as the network grows functions like “cash flow” in the business model of central banking — and, in her eyes, that’s what constitutes its underlying value.

The fiat monetary network stack. Stablecoins have the potential to expand the fiat monetary network.

Now imagine the same twist: the Bitcoin Company dissolves. No CEO. No board. No office anymore. The equity value and the cash flows immediately go to zero, but the Bitcoin Network remains — operations henceforth run without rulers (according to some “decentralised consensus protocol” dreamt up one night by some mysterious entity called Satoshi).

Ask yourself: would that make the network more or less valuable?

To be clear — we’ve just removed all counterparty risk.
No late-night CEO tweets.
No offices to raid.
No conflict of interest.
No Coldplay scandals.

The network just became (1) even cheaper to use, and (2) free of even the tiniest worry about that pinky promise (which, to be fair, you probably should have been pretty worried about).

So yes, from the user’s perspective, the network just became more valuable.

Equity value vs Network value

Christine Lagarde simply hasn’t done the intellectual groundwork needed to understand what she’s critiquing. Like so many others before her, she’s mistaking equity value (which generates cash flows) for the network value — without recognising the path dependency between them: there would be no cash flows without the network in the first place (!)

The wrong question: What is the equity value of the company monetising the network?
The right question: What is the network’s value to the users?

In other words:

  • What is the value of being able to speak with anyone in the world, for free, instantly, across borders and cultures? (Facebook)
  • What is the value of instantly accessing the world’s knowledge? (Google)
  • What is the value of finding, comparing, and receiving any product from anywhere on Earth, delivered in a day? (Amazon)
  • What is the value of moving your money — across borders and across time? Perhaps even more refined, what is the value of undistorted price signals in an economy? (Bitcoin)

The Bitcoin network isn’t valuable despite not being a company — it’s more valuable because it isn’t.

Unlike Meta, Google, or Amazon — whose networks power applications and commerce —the Bitcoin network provides the monetary foundation beneath them all. Its total addressable market is every transaction on Earth.

Now, you could try to build a straw man argument by claiming that the Bitcoin network isn’t truly a monetary network, since it isn’t “widely accepted” by your standards. The problem with that line of reasoning is (1) it implies that nothing new could ever emerge under the sun unless the entire world agreed on it in advance (pretty unreasonable), and (2) it would, by your own logic, require you to dismiss over 90% of the world’s sovereign currencies as not being money — including the Canadian dollar, the Swedish krona, and the Swiss franc — since Bitcoin’s market capitalisation already surpasses theirs many times over and it would likely be accepted as payment by far more people globally.

The Bitcoin Network ranks 8th out of 108 fiat currencies.

Returning to the initial claim, to say that Bitcoin doesn’t have a cash flow is factually correct — but as nonsensical as saying “language” or “mathematics” have no cash flow. True enough, not in themselves — but they’re indispensable tools for creating everything that does.

In fact, if the money you’re using did offer cash flows (an interest rate yield), that would be a sign you were dealing with defective money.

Let me explain why in the simplest terms:

Suppose the total money supply is $100,000, and ten depositors each place $10,000 into a bank. The bank offers them 4% interest and lends out the full amount to borrowers at 5%. After a year, the borrowers owe $105,000 in total (principal plus interest).

Do you see the problem?

The borrowers owe more money than exists in the entire system. Where does the extra $5,000 come from?

No amount of productivity or hard work can solve this mathematical impossibility. The only thing that can is the creation of new money to fill the gap. For the system to keep running, the money supply would have to grow at par with, or faster than, the interest rate being offered to depositors. It’s the only way the math can work out. That means the supposed “cash flow” being offered in the form of an interest rate is being paid for by diluting the very money it’s denominated in, which is the very definition of a Ponzi scheme (!)
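
To make the arithmetic explicit, here is a small Python check of the numbers above. It simply shows that, after one year, the borrowers owe more than the entire money stock, and that the supply must grow by roughly the loan rate for the system to clear in this toy setup.

```python
MONEY_SUPPLY = 100_000
DEPOSITORS = 10
DEPOSIT_EACH = 10_000
DEPOSIT_RATE = 0.04  # paid to depositors
LOAN_RATE = 0.05     # charged to borrowers

deposits = DEPOSITORS * DEPOSIT_EACH                 # 100,000: the entire money supply
owed_by_borrowers = deposits * (1 + LOAN_RATE)       # 105,000 after one year
owed_to_depositors = deposits * (1 + DEPOSIT_RATE)   # 104,000 after one year

shortfall = owed_by_borrowers - MONEY_SUPPLY
print(f"Borrowers owe {owed_by_borrowers:,.0f} but only {MONEY_SUPPLY:,} exists.")
print(f"Gap that must be filled with newly created money: {shortfall:,.0f}")

# For the loans to be repayable at all, the money supply must grow by at least
# the shortfall, i.e. roughly the loan rate per year in this toy setup.
print(f"Minimum money-supply growth implied: {shortfall / MONEY_SUPPLY:.1%}")
```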

The result is a lesser form of money — one that must constantly lose value for the math to work out.

It would now appear we’re at a paradoxical intersection: on one hand, Lagarde and others dismiss Bitcoin’s underlying value on the grounds that it has no cash flow; on the other, we can now see that if it did have a cash flow, it would by definition be flawed money.

It therefore seems that the very trait that makes Bitcoin perfect money — its inability to conjure fake cash flows out of thin air — is precisely what’s being used to dismiss it by those defending a system that only functions by doing exactly that. So how do we work this out?

Here lies the crucial insight that Lagarde, and many others, fail to grasp: something can possess underlying or intrinsic value in a roundabout way.

The roundabout way

Take car insurance (or any other insurance policy, for that matter). Judged in isolation, it has a negative expected value — you pay premiums every month, and it’s structurally priced so that you’ll never get rich buying infinite insurance policies (if that were possible, everyone would).

But when you combine the policy with the car you own and depend on — the picture changes. You’ve now removed the risk of potential ruin. Evaluated together, you now have a situation where the insurance policy explodes in value (generating a positive cash flow) precisely when you need it most — when the car breaks down. Viewed as a whole, you end up with a positive geometric return (that is, underlying value through the avoidance of ruin) when the accident eventually occurs, which, odds are, it eventually will.

Cash flow/usefulness of an insurance policy.

To illustrate this more practically, consider a scenario where a person depends on their car to get to work. Without insurance, a breakdown might mean they can’t afford the repair, resulting in the loss of both the car and their income. With insurance, however, the repair is covered, allowing them to maintain their income stream. In this way, the insurance policy has value far beyond its direct payoff, as it preserves the ability to keep generating cash flow.

Y axis = Cash flow from income.
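
The hedged simulation below sketches that “roundabout” payoff numerically: a driver facing a small yearly chance of a ruinous breakdown ends up with a higher geometric (compounded) outcome by paying a premium than by going uninsured. Every number is invented purely for illustration.

```python
import math
import random

random.seed(42)

YEARS = 10_000          # long horizon to expose the compounding effect
GROWTH = 1.05           # normal yearly wealth multiplier
BREAKDOWN_PROB = 0.05   # chance per year of a ruinous event
BREAKDOWN_LOSS = 0.60   # an uninsured event wipes out 60% of wealth
PREMIUM = 0.02          # insurance costs 2% of wealth per year

def geometric_growth(insured: bool) -> float:
    """Average compounded yearly growth over the simulated horizon."""
    log_sum = 0.0
    for _ in range(YEARS):
        multiplier = GROWTH * (1 - PREMIUM) if insured else GROWTH
        if random.random() < BREAKDOWN_PROB and not insured:
            multiplier *= (1 - BREAKDOWN_LOSS)  # uninsured breakdown destroys wealth; the insured driver is made whole
        log_sum += math.log(multiplier)
    return math.exp(log_sum / YEARS) - 1

print(f"Uninsured: {geometric_growth(False):+.2%} per year")
print(f"Insured:   {geometric_growth(True):+.2%} per year")
```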

This, as we shall now understand, is the entire logic behind money in the first place — and we could just as easily swap the insurance policy for a stack of cash (which is really just a more universal, unspecific form of insurance). You save money not because it generates a cash flow, but because it gives you future optionality and explodes in usefulness when you need it most, allowing you to quickly recover and adapt when the unexpected occurs.

This is not speculative behavior. The reason you hold money is not because you’re engaging in what critics accuse you of — the “greater fool” prediction business, but precisely because you want to avoid it! You hold money not because you’re making a prediction of the future, but because you know you can’t, and therefore want to be ready for whatever it brings. After all, why would you pay for car insurance if you knew you would never need it?

The “greater fool” argument collapses under closer scrutiny because it assumes every individual faces the same circumstances, preferences, and time horizons. It treats the economy as a zero-sum game in which one person’s prudence must come at another’s expense. But reality is the opposite: what’s rational for each participant depends on their unique position in time and space.

Someone sitting on a vast reserve of cash might rationally choose to exchange part of it for a new car with a better A/C that improves their comfort and quality of life. Someone else, with less savings or living in a colder climate, might rationally do the precise opposite — defer a new car purchase and strengthen their savings buffer. Both are acting rationally within their own context. The latter isn’t a “greater fool” for buying the money the former is selling for a car. They’re both winners! Otherwise they wouldn’t agree to the trade in the first place!

Markets exist precisely because we don’t share the same circumstances or needs. The value of money, then, isn’t born from finding a “greater fool”, but from coordinating billions of rational actors, each seeking to balance their own lives in their own way.

We can extend this observation to all the networks and protocols mentioned earlier. Whether it’s a monetary network, a social network, mathematics, or language — each derives its value in a roundabout way that continues to fly over the heads of people like Lagarde, who, ironically, is supposed to be an expert on these things.


Case Closed: Bitcoin’s Underlying Value, Explained was originally published in Coinmonks on Medium, where people are continuing the conversation by highlighting and responding to this story.

Cross-Border Trade Made Simple with Blockchain Supply Chain Solutions

By: Duredev

Cross-border trade is one of the most powerful drivers of global business — but it’s also one of the most complicated. Importers and exporters face endless paperwork, customs clearances, freight forwarding processes, and multiple intermediaries. Every delay increases costs, adds port storage fees, and leads to dissatisfied clients.

This is why blockchain in supply chain is becoming a game-changer. By digitizing documents, automating approvals, and ensuring tamper-proof records, blockchain technology in supply chain management delivers faster, more reliable, and secure global trade.

At Duredev, we design blockchain-powered workflows that simplify trade for enterprises worldwide.

🧾 Challenges in Cross-Border Logistics

Despite globalization, international trade is still full of challenges:

  • Manual paperwork slows operations
  • Lack of trust between countries causes repeated checks
  • Customs clearance is slow and unpredictable
  • High fraud risks increase costs

These problems make blockchain technology for supply chain management an essential solution.

🔑 How Blockchain Solves These Issues

Blockchain improves international trade by building trust and automating workflows. Here’s how:

  • Smart Contracts: Automate customs clearance once requirements are met
  • Immutable Records: Store shipping docs, invoices, and certificates securely
  • Instant Verification: Regulators can verify authenticity in seconds
  • Trust Across Borders: Blockchain acts as a neutral source of truth

With this, blockchain-based supply chain networks become more transparent, efficient, and fraud-resistant.
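
As a rough illustration (not Duredev’s actual product or any specific chain), the Python sketch below mimics a smart-contract-style clearance rule: clearance is released only once every required document hash has been anchored on the ledger. The document names, hashes, and ledger structure are assumptions for the example.

```python
import hashlib

REQUIRED_DOCS = {"commercial_invoice", "bill_of_lading", "certificate_of_origin"}

def doc_hash(content: bytes) -> str:
    """Fingerprint a document so the ledger can prove it hasn't been tampered with."""
    return hashlib.sha256(content).hexdigest()

# Hypothetical on-chain record: document name -> hash anchored by the exporter.
ledger = {
    "commercial_invoice": doc_hash(b"invoice #4471, 500 units apparel"),
    "bill_of_lading": doc_hash(b"BOL-2025-0098, origin port to destination port"),
}

def clearance_status(anchored: dict) -> str:
    """Release clearance automatically only when all required document hashes are present."""
    missing = REQUIRED_DOCS - anchored.keys()
    if missing:
        return "held: missing " + ", ".join(sorted(missing))
    return "cleared: all required documents anchored on the ledger"

print(clearance_status(ledger))  # held: missing certificate_of_origin
ledger["certificate_of_origin"] = doc_hash(b"COO-2025-1187")
print(clearance_status(ledger))  # cleared
```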

👕 Real-World Example

Take the case of an apparel exporter shipping goods overseas:

  • Shipping documents are digitized on a blockchain system
  • Customs officials access compliance records instantly
  • Smart contracts trigger clearance when rules are met
  • Faster clearance reduces storage costs at ports

This shows how blockchain in logistics and supply chain management streamlines cross-border workflows.

🌐 Blockchain and the Future of International SCM

Blockchain and logistics go hand-in-hand with modern trade. In global supply chains:

  • Customs checks are automated
  • Payments are released automatically after delivery confirmation
  • Trade documents are secure and tamper-proof

For companies, this means fewer delays and lower costs. For regulators, it ensures stronger compliance. For customers, it means faster deliveries.

This is why blockchain and supply chain management is quickly becoming the foundation of international commerce.

💡 The Role of Blockchain in SCM

In today’s world, blockchain and the supply chain are inseparable. Here’s why:

  • It improves visibility at every stage of the supply chain
  • It reduces fraud by tracking goods in real time
  • It ensures global trust across borders
  • It creates efficiency in customs and payments
  • It improves collaboration between importers, exporters, and regulators

When paired with modern logistics, blockchain makes trade smarter and safer.

🔍 Transparency Through Blockchain

For governments, regulators, and businesses, blockchain-enabled supply chain transparency is crucial. With it, stakeholders gain:

  • Real-time visibility into shipments
  • Verified documents with no tampering
  • Smooth customs checks
  • Greater trust between trading nations

This transparency helps eliminate disputes and creates a secure, neutral record of global trade.

🏆 Why Choose Duredev

At Duredev, we bring real-world blockchain solutions to enterprises across the globe. Our focus is on solving supply chain management pain points with blockchain-based systems that:

  • Reduce paperwork
  • Increase visibility
  • Accelerate customs clearance
  • Lower risks of fraud

With our expertise, businesses can leverage blockchain-based supply chain management to stay ahead in a fast-changing global economy.

📌 Conclusion

International trade no longer needs to be slow, costly, or full of risks. By putting their supply chains on blockchain, businesses can digitize documents, automate customs, and build stronger trust worldwide.

Blockchain-powered supply chain management is not the future — it’s the present. Companies that move early gain faster shipments, lower costs, and improved customer satisfaction.

At Duredev, we empower businesses globally with blockchain for supply chain management, creating workflows that transform cross-border trade into a faster, smarter, and safer process.

👉 Talk to us today

❓ Frequently Asked Questions (FAQ)

1. How does blockchain help supply chain management?

Blockchain technology in supply chain management helps businesses reduce paperwork and fraud. Duredev provides solutions that record transactions securely and streamline global trade workflows.

2. What is the role of blockchain in logistics?

Blockchain and logistics improve customs clearance, automate payments, and reduce delays. Duredev blockchain solutions give freight forwarders and import-export businesses real-time visibility into shipments and compliance records.

3. Why is blockchain supply chain transparency important?

Blockchain supply chain transparency allows regulators, customs, and businesses to track shipments instantly. Duredev solutions reduce fraud, ensure secure records, and build trust across borders.

4. What is supply chain on blockchain?

Running a supply chain on blockchain means managing invoices, automated customs approvals, and tamper-proof records on a shared ledger. Duredev blockchain services help enterprises achieve faster clearances and lower port costs globally.

5. Is blockchain the future of SCM?

Blockchain in SCM improves efficiency, reduces costs, and builds trust. Many businesses are adopting blockchain-based supply chain management solutions, and Duredev helps enterprises implement these workflows to stay ahead.


🌍 Cross-Border Trade Made Simple with Blockchain Supply Chain Solutions was originally published in Coinmonks on Medium, where people are continuing the conversation by highlighting and responding to this story.

Chinese hackers reportedly targeting government entities using 'Brickstorm' malware

By: Matt Tate

Hackers with links to China have reportedly infiltrated a number of unnamed government and tech entities using advanced malware. As reported by Reuters, cybersecurity agencies from the US and Canada confirmed the attack, which used a backdoor known as “Brickstorm” to target organizations using the VMware vSphere cloud computing platform.

As detailed in a report published by the Canadian Centre for Cyber Security on December 4, PRC state-sponsored hackers maintained "long-term persistent access" to an unnamed victim’s internal network. After compromising the affected platform, the cybercriminals were able to steal credentials, manipulate sensitive files and create "rogue, hidden VMs" (virtual machines), effectively seizing control unnoticed. The attack could have begun as far back as April 2024 and lasted until at least September of this year.

The malware analysis report published by the Canadian Cyber Centre, with assistance from the Cybersecurity and Infrastructure Security Agency (CISA) and the National Security Agency (NSA), cites eight different Brickstorm malware samples. It is not clear exactly how many organizations in total were either targeted or successfully penetrated.

In an email to Reuters, a spokesperson for VMware vSphere owner Broadcom said it was aware of the alleged hack, and encouraged its customers to download up-to-date security patches whenever possible. In September, the Google Threat Intelligence Group published its own report on Brickstorm, in which it urged organizations to "reevaluate their threat model for appliances and conduct hunt exercises" against specified threat actors.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/chinese-hackers-reportedly-targeting-government-entities-using-brickstorm-malware-133501894.html?src=rss

© Greggory DiSalvo via Getty Images

A hacker infiltrates a remote network on a laptop

Agents-as-a-service are poised to rewire the software industry and corporate structures

This was the year of AI agents. Chatbots that simply answered questions are evolving into autonomous agents that can carry out tasks on a user’s behalf, and enterprises continue to invest in agentic platforms as that transformation unfolds. Software vendors are investing just as fast.

According to a National Research Group survey of more than 3,000 senior leaders, more than half of executives say their organization is already using AI agents. Of the companies that spend no less than half their AI budget on AI agents, 88% say they’re already seeing ROI on at least one use case, with top areas being customer service and experience, marketing, cybersecurity, and software development.

On the software provider side, Gartner predicts 40% of enterprise software applications in 2026 will include agentic AI, up from less than 5% today. And agentic AI could drive approximately 30% of enterprise application software revenue by 2035, surpassing $450 billion, up from 2% in 2025. In fact, business users might not have to interact directly with the business applications at all since AI agent ecosystems will carry out user instructions across multiple applications and business functions. At that point, a third of user experiences will shift from native applications to agentic front ends, Gartner predicts.

It’s already starting. Most enterprise applications will have embedded assistants, a precursor to agentic AI, by the end of this year, adds Gartner.

IDC has similar predictions. By 2028, 45% of IT product and service interactions will use agents as the primary interface, the firm says. That’ll change not just how companies work, but how CIOs work as well.

Agents as employees

At financial services provider OneDigital, chief product officer Vinay Gidwaney is already working with AI agents, almost as if they were people.

“We decided to call them AI coworkers, and we set up an AI staffing team co-owned between my technology team and our chief people officer and her HR team,” he says. “That team is responsible for hiring AI coworkers and bringing them into the organization.” You heard that right: “hiring.”

The first step is to sit down with the business leader and write a job description, which is fed to the AI agent, and then it becomes known as an intern.

“We have a lot of interns we’re testing at the company,” says Gidwaney. “If they pass, they get promoted to apprentices and we give them our best practices, guardrails, a personality, and human supervisors responsible for training them, auditing what they do, and writing improvement plans.”

The next promotion is to a full-time coworker, and it becomes available to be used by anyone at the company.

“Anyone at our company can go on the corporate intranet, read the skill sets, and get ice breakers if they don’t know how to start,” he says. “You can pick a coworker off the shelf and start chatting with them.”

For example, there’s Ben, a benefits expert who’s trained on everything having to do with employee benefits.

“We have our employee benefits consultants sitting with clients every day,” Gidwaney says. “Ben will take all the information and help the consultants strategize how to lower costs, and how to negotiate with carriers. He’s the consultants’ thought partner.”

There are similar AI coworkers working on retirement planning, and on property and casualty as well. These were built in-house because they’re core to the company’s business. But there are also external AI agents who can provide additional functionality in specialized yet less core areas, like legal or marketing content creation. In software development, OneDigital uses third-party AI agents as coding assistants.

When choosing whether to sign up for these agents, Gidwaney says he doesn’t think of it the way he thinks about licensing software, but more like hiring a human consultant or contractor. For example, will the agent be a good cultural fit?

But in some cases, it’s worse than hiring humans since a bad human hire who turns out to be toxic will only interact with a small number of other employees. But an AI agent might interact with thousands of them.

“You have to apply the same level of scrutiny as how you hire real humans,” he says.

A vendor who looks like a technology company might also, in effect, be a staffing firm. “They look and feel like humans, and you have to treat them like that,” he adds.

Another way that AI agents are similar to human consultants is when they leave the company, they take their expertise with them, including what they gained along the way. Data can be downloaded, Gidwaney says, but not necessarily the fine-tuning or other improvements the agent received. Realistically, there might not be any practical way to extract that from a third-party agent, and that could lead to AI vendor lock-in.

Edward Tull, VP of technology and operations at JBGoodwin Realtors, says he, too, sees AI agents as something akin to people. “I see it more as a teammate,” he says. “As we implement more across departments, I can see these teammates talking to each other. It becomes almost like a person.”

Today, JBGoodwin uses two main platforms for its AI agents: Zapier lets the company build its own, while HubSpot offers its own agents-as-a-service, which come pre-built. “There are lead enrichment agents and workflow agents,” says Tull.

And the company is open to using more. “In accounting, if someone builds an agent to work with this particular type of accounting software, we might hire that agent,” he says. “Or a marketing coordinator that we could hire that’s built and ready to go and connected to systems we already use.”

With agents, his job is becoming less about technology and more about management, he adds. “It’s less day-to-day building and more governance, and trying to position the company to be competitive in the world of AI,” he says.

He’s not the only one thinking of AI agents as more akin to human workers than to software.

“With agents, because the technology is evolving so fast, it’s almost like you’re hiring employees,” says Sheldon Monteiro, chief product officer at Publicis Sapient. “You have to determine whom to hire, how to train them, make sure all the business units are getting value out of them, and figure out when to fire them. It’s a continuous process, and this is very different from the past, where I make a commitment to a platform and stick with it because the solution works for the business.”

This changes how the technology solutions are managed, he adds. What companies will need now is a CHRO, but for agentic employees.

Managing outcomes, not persons

Vituity is one of the largest national, privately-held medical groups, with 600 hospitals, 13,800 employees, and nearly 14 million patients. The company is building its own AI agents, but is also using off-the-shelf ones, as AaaS. And AI agents aren’t people, says CIO Amith Nair. “The agent has no feelings,” he says. “AGI isn’t here yet.”

Instead, it all comes down to outcomes, he says. “If you define an outcome for a task, that’s the outcome you’re holding that agent to.” And that part isn’t different from holding employees accountable to an outcome. “But you don’t need to manage the agent,” he adds. “They’re not people.”

Instead, the agent is orchestrated and you can plug and play them. “It needs to understand our business model and our business context, so you ground the agent to get the job done,” he says.

For mission-critical functions, especially ones related to sensitive healthcare data, Vituity is building its own agents inside a HIPAA-certified LLM environment using the Workato agent development platform and the Microsoft agentic platform.

For other functions, especially ones having to do with public data, Vituity uses off-the-shelf agents, such as ones from Salesforce and Snowflake. The company is also using Claude with GitHub Copilot for coding. Nair can already see that agentic systems will change the way enterprise software works.

“Most of the enterprise applications should get up to speed with MCP, the integration layer for standardization,” he says. “If they don’t get to it, it’s going to become a challenge for them to keep selling their product.”

A company needs to be able to access its own data via an MCP connector, he says. “AI needs data, and if they don’t give you an MCP, you just start moving it all to a data warehouse,” he adds.
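
As a rough illustration of what such a connector can look like, here is a minimal sketch that assumes the open-source MCP Python SDK and its FastMCP helper; the server name, tool, and data are hypothetical stand-ins for a company’s own systems, not any particular vendor’s implementation.

```python
# Minimal sketch of an MCP server exposing internal data to AI agents.
# Assumes the open-source `mcp` Python SDK (pip install mcp); the tool and
# sample data are hypothetical stand-ins for a company's own systems.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("company-data")

@mcp.tool()
def get_open_opportunities(region: str) -> list[dict]:
    """Return open sales opportunities for a region from an internal store."""
    # In a real deployment this would query a warehouse or CRM API.
    sample = [
        {"account": "Acme Corp", "region": "EMEA", "stage": "proposal"},
        {"account": "Globex", "region": "AMER", "stage": "negotiation"},
    ]
    return [row for row in sample if row["region"] == region]

if __name__ == "__main__":
    # Serves the tool so an agent runtime can discover and call it.
    mcp.run()
```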

Sharp learning curve

In addition to providing a way to store and organize your data, enterprise software vendors also offer logic and functionality, and AI will soon be able to handle that as well.

“All you need is a good workflow engine where you can develop new business processes on the fly, so it can orchestrate with other agents,” Nair says. “I don’t think we’re too far away, but we’re not there yet. Until then, SaaS vendors are still relevant. The question is, can they charge that much money anymore.”

The costs of SaaS will eventually have to come down to the cost of inference, storage, and other infrastructure, but vendors can’t survive the way they’re charging now, he says. So SaaS vendors are building agents to augment or replace their current interfaces. But that approach itself has its limits. Say, for example, instead of using Salesforce’s agent, a company can use its own agents to interact with the Salesforce environment.

“It’s already happening,” Nair adds. “My SOC agent is pulling in all the log files from Salesforce. They’re not providing me anything other than the security layer they need to protect the data that exists there.”

AI agents are set to change the dynamic between enterprises and software vendors in other ways, too. One major difference between software and agents is software is well-defined, operates in a particular way, and changes slowly, says Jinsook Han, chief of strategy, corporate development, and global agentic AI at Genpact.

“But we expect when the agent comes in, it’s going to get smarter every day,” she says. “The world will change dramatically because agents are continuously changing. And the expectations from the enterprises are also being reshaped.”

Another difference is agents can more easily work with data and systems where they are. Take for example a sales agent meeting with customers, says Anand Rao, AI professor at Carnegie Mellon University. Each salesperson has a calendar where all their meetings are scheduled, and they have emails, messages, and meeting recordings. An agent can simply access those emails when needed.

“Why put them all into Salesforce?” Rao asks. “If the idea is to do and monitor the sale, it doesn’t have to go into Salesforce, and the agents can go grab it.”

When Rao was a consultant having a conversation with a client, he’d log it into Salesforce with a note, for instance, saying the client needs a white paper from the partner in charge of quantum.

With an agent taking notes during the meeting, it can immediately identify the action items and follow up to get the white paper.

“Right now we’re blindly automating the existing workflow,” Rao says. “But why do we need to do that? There’ll be a fundamental shift of how we see value chains and systems. We’ll get rid of all the intermediate steps. That’s the biggest worry for the SAPs, Salesforces, and Workdays of the world.”

Another aspect of the agentic economy is instead of a human employee talking to a vendor’s AI agent, a company agent can handle the conversation on the employee’s behalf. And if a company wants to switch vendors, the experience will be seamless for employees, since they never had to deal directly with the vendor anyway.

“I think that’s something that’ll happen,” says Ricardo Baeza-Yates, co-chair of the US technology policy committee at the Association for Computing Machinery. “And it makes the market more competitive, and makes integrating things much easier.”

In the short term, however, it might make more sense for companies to use the vendors’ agents instead of creating their own.

“I recommend people don’t overbuild because everything is moving,” says Bret Greenstein, CAIO at West Monroe Partners, a management consulting firm. “If you build a highly complicated system, you’re going to be building yourself some tech debt. If an agent exists in your application and it’s localized to the data in that application, use it.”

But over time, an agent that’s independent of the application can be more effective, he says, and there’s a lot of lock-in that goes into applications. “It’s going to be easier every day to build the agent you want without having to buy a giant license. The effort to get effective agents is dropping rapidly, and the justification for getting expensive agents from your enterprise software vendors is getting less,” he says.

The future of software

According to IDC, pure seat-based pricing will be obsolete by 2028, forcing 70% of vendors to figure out new business models.

With technology evolving as quickly as it is, JBGoodwin Realtors has already started to change its approach to buying tech, says Tull. It used to prefer long-term contracts, for example, but that’s not the case anymore. “You save more if you go longer, but I’ll ask for an option to re-sign with a cap,” he says.

That doesn’t mean SaaS will die overnight. Companies have made significant investments in their current technology infrastructure, says Patrycja Sobera, SVP of digital workplace solutions at Unisys.

“They’re not scrapping their strategies around cloud and SaaS,” she says. “They’re not saying, ‘Let’s abandon this and go straight to agentic.’ I’m not seeing that at all.”

Ultimately, people are slow to change, and institutions are even slower. Many organizations are still running legacy systems. For example, the FAA has just come out with a bold plan to update its systems by getting rid of floppy disks and upgrading from Windows 95. They expect this to take four years.

But the center of gravity will move toward agents and, as it does, so will funding, innovation, green-field deployments, and the economics of the software industry.

“There are so many organizations and leaders who need to cross the chasm,” says Sobera. “You’re going to have organizations at different levels of maturity, and some will be stuck in SaaS mentality, but feeling more in control while some of our progressive clients will embrace the move. We’re also seeing those clients outperform their peers in revenue, innovation, and satisfaction.”

Italy orders non-compliant VASPs to exit as MiCAR rules kick in

  • Consob has urged VASPs to secure CASP approval or shut down by December 30, 2025.
  • This comes as the deadline for transitioning to new MiCAR policies approaches.
  • Unauthorised operators will halt their services and return user assets.

Italy’s financial regulator Consob has issued an urgent call to digital assets investors and operators as the nation moves closer to adopting MiCAR policies.

According to a press release issued late yesterday, Consob emphasised that December 30, 2025, is the last day VASPs (Virtual Asset Service Providers) operating under the existing regime will be able to serve clients without full approval.

Consob has warned that operators who fail to follow this transition risk a ban.

Thus, any VASP operating in Italy should adhere to the EU’s Markets in Crypto-Assets Regulation or exit the marketplace.

The press release highlighted:

30 December 2025 is the last day on which Virtual Asset Service Providers (VASPs, operators currently offering virtual asset services, such as cryptocurrency exchanges) registered with the OAM (the Organismo Agenti e Mediatori, or Agents and Brokers Organisation) can continue to operate.

MiCAR resets Italy’s regulatory rulebook

For years, Italian regulators only required VASPs to secure the OAM certificate to operate.

MiCAR, by contrast, brings tougher rules, with only fully licensed Crypto-Asset Service Providers (CASPs) permitted to serve clients in the European Union.

The authorisation procedure involves operational checks, client protection requirements, supervisory controls, and ongoing monitoring. That’s far stricter than the previous model.

Consob stressed that VASPs can only continue operating if they apply for CASP authorisation in Italy or any other European Union Member State by December 30.

Operators who submit applications by this deadline can keep offering services until the final decision, but all entities should adhere to MiCAR by June 30, 2026.

What’s next for investors?

Consob has warned both operators and day-to-day cryptocurrency users.

Investors should promptly confirm whether their desired service provider plans to adhere to the new policies and requirements.

Here, they can monitor two crucial things.

First and foremost, investors should check whether the operator has published its MiCAR transition plans.

Secondly, investors should verify the provider’s regulatory status after the deadline.

VASPs that don’t apply or fail to secure approval will not operate in Italy after December 30, and customers can request a return of their assets upon such developments.

Consob confirmed that it warned operators multiple times during the transition phase, pointing to updates in September last year, July 2025, and the October 31 notice to companies still holding only the OAM certificate.

While some operators view MiCAR as the pathway for regulated, international operations, others consider the new regulation as the end of the road.

Meanwhile, digital assets investors should stay alert, check the provider’s regulatory status, and act before the new MiCAR regulations lock them out or pressure them with last-minute withdrawals.

The post Italy orders non-compliant VASPs to exit as MiCAR rules kick in appeared first on CoinJournal.

AWS CEO Matt Garman thought Amazon needed a million developers — until AI changed his mind

AWS CEO Matt Garman, left, with Acquired hosts Ben Gilbert and David Rosenthal. (GeekWire Photo / Todd Bishop)

LAS VEGAS — Matt Garman remembers sitting in an Amazon leadership meeting six or seven years ago, thinking about the future, when he identified what he considered a looming crisis.

Garman, who has since become the Amazon Web Services CEO, calculated that the company would eventually need to hire a million developers to deliver on its product roadmap. The demand was so great that he considered the shortage of software development engineers (SDEs) the company’s biggest constraint.

With the rise of AI, he no longer thinks that’s the case.

Speaking with Acquired podcast hosts Ben Gilbert and David Rosenthal at the AWS re:Invent conference Thursday afternoon, Garman told the story in response to Gilbert’s closing question about what belief he held firmly in the past that he has since completely reversed.

“Before, we had way more ideas than we could possibly get to,” he said. Now, “because you can deliver things so fast, your constraint is going to be great ideas and great things that you want to go after. And I would never have guessed that 10 years ago.”

He was careful to point out that Amazon still needs great software engineers. But earlier in the conversation, he noted that massive technical projects that once required “dozens, if not hundreds” of people might now be delivered by teams of five or 10, thanks to AI and agents.

Garman was the closing speaker at the two-hour event with the hosts of the hit podcast, following conversations with Netflix Co-CEO Greg Peters, J.P. Morgan Payments Global Co-Head Max Neukirchen, and Perplexity Co-founder and CEO Aravind Srinivas.

A few more highlights from Garman’s comments:

Generative AI, including Bedrock, represents a multi-billion dollar business for Amazon. Asked to quantify how much of AWS is now AI-related, Garman said it’s getting harder to say, as AI becomes embedded in everything. 

Speaking off-the-cuff, he told the Acquired hosts that Bedrock is a multi-billion dollar business. Amazon clarified later that he was referring to the revenue run rate for generative AI overall. That includes Bedrock, which is Amazon’s managed service that offers access to AI models for building apps and services. [This has been updated since publication.]

How AWS thinks about its product strategy. Garman described a multi-layered approach to explain where AWS builds and where it leaves room for partners. At the bottom are core building blocks like compute and storage. AWS will always be there, he said.

In the middle are databases, analytics engines, and AI models, where AWS offers its own products and services alongside partners. At the top are millions of applications, where AWS builds selectively and only when it believes it has differentiated expertise.

Amazon is “particularly bad” at copying competitors. Garman was surprisingly blunt about what Amazon doesn’t do well. “One of the things that Amazon is particularly bad at is being a fast follower,” he said. “When we try to copy someone, we’re just bad at it.” 

The better formula, he said, is to think from first principles about solving a customer problem, not simply to copy existing products.

Agencies, IT companies impacted by latest malware from China

Hackers sponsored by China are targeting federal agencies, technology companies and critical infrastructure sector organizations with a new type of malware affecting Linux, VMware kernel and Windows environments that may be difficult to detect and eradicate.

The Cybersecurity and Infrastructure Security Agency, the National Security Agency and the Canadian Centre for Cyber Security are strongly advising organizations take steps to scan systems for BRICKSTORM using detection signatures and rules; inventory all network edge devices; monitor edge devices for suspicious network connectivity and ensure proper network segmentation. The organizations released a malware analysis report to help organizations combat the threat.
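
In principle, the first of those steps, scanning for known malware artifacts, amounts to comparing file hashes and other signatures against published indicators of compromise. The sketch below is a minimal Python illustration only; the hash value and scan path are placeholders, and real IOCs and detection rules should be taken from the joint malware analysis report rather than from this example.

```python
# Minimal sketch: sweep a directory tree and flag files whose SHA-256 hash
# matches a known indicator of compromise (IOC). The hash below is a
# placeholder; real IOCs come from the CISA/Cyber Centre malware analysis report.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan(root: str) -> list[Path]:
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            try:
                if sha256_of(path) in KNOWN_BAD_SHA256:
                    hits.append(path)
            except OSError:
                continue  # unreadable files are skipped, not treated as clean
    return hits

if __name__ == "__main__":
    for hit in scan("/opt/vmware"):  # example root; adjust to the environment
        print(f"IOC match: {hit}")
```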

Nick Andersen is CISA’s executive assistant director for cybersecurity.

“BRICKSTORM underscores the grave threats that are posed by the People’s Republic of China to our nation’s critical infrastructure. State sponsored actors are not just infiltrating networks, they are embedding themselves to enable long term access, disruption and potential sabotage. That’s why we’re urging every organization to treat this threat with the seriousness that it demands,” said Nick Andersen, CISA’s executive assistant director for cybersecurity, during a call with reporters today. “The advisory we issued today provides indicators of compromise (IOCs) and detection signatures to assist critical infrastructure owners and operators in determining whether they have been compromised. It also gives recommended mitigation actions to protect against what is truly pervasive PRC activity.”

CISA says BRICKSTORM features advanced functionality to conceal communications, move laterally and tunnel into victim networks and automatically reinstall or restart the malware if disrupted. Andersen said CISA became aware of the threat in mid-August and it’s part of a “persistent, long-term campaigns of nation state threat actors, in particular those that are sponsored by the People’s Republic of China, to hold at risk our nation’s critical infrastructure through cyber means.”

The malware has impacted at least eight organizations, including one to which CISA provided incident response services. Andersen wouldn’t say how many of those eight were federal agencies or which ones have been impacted.

“This is a terribly sophisticated piece of malware that’s being used, and that’s why we’re encouraging all organizations to take action to protect themselves, and if they do become victims of it or other malicious activity, to report it to CISA, so we can have a better understanding of the full picture of not just where this malware is being employed, but the more robust picture of the wider cyber threat landscape,” Andersen said.

New way to interact with industry

Since January, CISA has issued 20 joint cybersecurity advisories and threat intelligence guidance documents with U.S. allies, including the United Kingdom, Canada, Australia and New Zealand, as well as with other international partners.

“Together, we’ve exposed nation-state sponsored intrusions, AI enabled ransomware operations and the ever evolving threats to critical infrastructure,” Andersen said.

Along with the warnings and analysis about BRICKSTORM, CISA also launched a new Industry Engagement Platform (IEP). CISA says it’s designed to let the agency and companies share information and develop innovative security technologies.

“The IEP enables CISA to better understand emerging solutions across the technology ecosystem while giving industry a clear, transparent pathway to engage with the agency,” CISA said in a release. “The IEP allows organizations – including industry, non-profits, academia, government partners … and the research community – with a structured process to request conversations with CISA subject matter experts to describe new technologies and capabilities. These engagements give innovators the opportunity to present solutions that may strengthen our nation’s cyber and infrastructure security.”

CISA says while participation in the IEP does not provide preferential consideration for future federal contracts, it serves as a channel for the government to gain insight into new capabilities and market trends.

Current areas of interest include:

  • Information technology and security controls
  • Data, analytics, storage, and data management
  • Communications technologies
  • Any emerging technologies that advance CISA’s mission, including post-quantum cryptography and other next-generation capabilities

Andersen said while the IEP and related work is separate from the BRICKSTORM analysis, it’s all part of how CISA is trying to ensure all organizations protect themselves from the ever-changing cyber threat.

“The threat here is not theoretical, and BRICKSTORM underscores the grave threats that are posed by the People’s Republic of China to our nation’s critical infrastructure,” he said. “We know that state sponsored actors are not just infiltrating networks. They’re embedding themselves to enable the long-term access, disruption and potential sabotage that enables their strategic objectives, and that’s why we continue to urge every organization to treat this threat with the seriousness it demands.”

The post Agencies, IT companies impacted by latest malware from China first appeared on Federal News Network.

© The Associated Press

FILE - This Feb 23, 2019, file photo shows the inside of a computer. Three former U.S. intelligence and military operatives have agreed to pay nearly $1.7 million to resolve criminal charges that they provided sophisticated hacking technology to the United Arab Emirates. A charging document in federal court in Washington accuses them of helping develop “advanced covert hacking systems for U.A.E. government agencies.” (AP Photo/Jenny Kane, File)

Space-routed internet moves to the mainstream

By: Tom Temin

Amazon might be most known for how it has mastered the logistics of moving millions of items on the ground. But it’s also active in space, in a race to build out the next generation of enterprise communications capabilities.

Amazon Leo, formerly known as Project Kuiper, has already put some 150 satellites into low earth orbit (LEO), according to its principal business development lead, Rich Pang. Leo’s goal, Pang said, is to “enable connecting folks who don’t have connectivity or who have poor connectivity.”

Operating at a height of about 600 kilometers, the satellites’ RF links “are easily done with small terminals and, because of that closeness to earth, [with] high throughput and low latency,” he said.

That includes enterprise customers such as the Defense Department and federal national security agencies.

“We know that the defense and national security apparatus is not a fixed force, it’s a mobile force,” Pang said. “It requires multi domain connectivity to ensure that airplanes, ships, trucks, command vehicles are always connected, not only in receiving information, but getting commands out to the field as well.”

He said Leo augments communications capabilities the military and national security components already have with “more resilient and secure connectivity to ensure they have that ability to connect all those operations regardless of which domain they operate in.”

Remote regions of the oceans where the Navy operates come to mind, but land areas also have connectivity gaps, or ground-based comms get knocked out.

“You can’t have guaranteed fiber connectivity or usual connectivity that you’re used to having back at home station,” Pang said. “It’s important to have very flexible types of comms that can respond rapidly to wherever they need to deploy forces.”

“I often think about our first responders, or disaster response customers that have multiple systems at any given time to ensure they have connectivity,” he added.

They already have their radios, microwave and cellular connections. Now, Pang said, “in the event any of those are taken down, they have to have satellite as a backup.”

Resilient, redundant                                 

The addition of LEO satellites, with their low latency relative to geosynchronous satellites, contributes to what Pang called next generation connectivity. It’s marked by resiliency because of the alternate pathways for data movement the satellites bring.
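
The latency difference is mostly a matter of propagation distance. As a rough back-of-the-envelope check, ignoring processing, queuing, and terrestrial routing, the radio round trip to a satellite at about 600 kilometers takes on the order of 4 milliseconds, versus roughly 240 milliseconds to geosynchronous orbit:

```python
# Back-of-the-envelope propagation delay: up to the satellite and back down.
# Real latency adds processing, queuing, and terrestrial routing on top.
SPEED_OF_LIGHT_KM_S = 299_792

def round_trip_ms(altitude_km: float) -> float:
    return 2 * altitude_km / SPEED_OF_LIGHT_KM_S * 1000

print(f"LEO (~600 km):    {round_trip_ms(600):.1f} ms")     # ~4 ms
print(f"GEO (~35,786 km): {round_trip_ms(35_786):.1f} ms")  # ~239 ms
```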

Optical links among the satellites themselves contribute to the resiliency, Pang said. Inter-satellite pathways “remove congestion from certain ground points [and] allow us to have multiple paths to move information … not only on the ground but in space as well.”

Rather than operate as a separate entity, the satellite comms integrate with terrestrial capabilities and, for that matter, with commercial computing clouds, Pang said.

To ensure compliance with customers’ security requirements, Pang said, Leo operates within “this private connectivity directly into the cloud services … for our customers who are seeking secure solutions.” He noted that some industries have security needs at least as rigorous as the FIPS (Federal Information Processing Standards) requirement of the government.

As a managed service, Pang said, Leo constantly optimizes itself to maintain maximum use of its available bandwidth.

“It’s got varying geometries. It’s got varying frequencies,” he said. “And so inherently, these types of capabilities also make it more secure in that it helps reduce interference, whether meaningful or unintended.”

Beyond that, the Leo satellites fit in with a general trend of internet protocol (IP) as the basis for all communications, whether voice or data. That is, multiprotocol label switching gives way to IP and software-defined wide area networks.

“I think this opens up the aperture to incorporate a lot of different capabilities throughout the many domains [the DoD] operates and also shorten the timeline in which they get that information from sensors to processing centers to engagement vehicles,” Pang said.

Grand orchestration

Therein lies the importance of redundancy and resiliency, especially in austere or contested environments. Pang described those qualities as “not being locked into a single architecture, but rather having many choices, having alternative to getting your information where it needs to go.”

“Resiliency, in my mind, is creating a dynamic system that allows you to choose the best path to take when you’re moving information around,” he added.

Pang said the government has been working continuously on how to integrate disparate networks and applications at the terminal level, where they operate single apertures that work on multiple networks. This requires “an orchestration of all those capabilities to build that resiliency into the broader architecture that the Defense Department is trying to deploy now.”

Signal interruptions, whether caused by weather or intentional interference from adversaries, occur regularly in Defense and national security situations.

“The system is designed to always sense for interference, whether it’s intentional or not,” Pang said. “It’s sensing for weather interference. It’s sensing for intentional interference, so it always knows that it needs an alternate path.”

Sensing and rerouting happen automatically, he said. The system “always knows that if I have interference in a particular path, it knows to look for the alternative or the tertiary path. The system is designed to constantly be optimizing itself very rapidly to ensure that that interference is dealt with.”

Pang said Amazon’s LEO satellites strengthen an important link in the information-to-decision chain. Once data from various sources arrives where it’s needed, “there are a lot of fusion engines, whether they sit on premises, in the cloud or even at the tactical edge.”

Leo is concerned with the movement of the data to those fusion sites.

“Our play is getting information to where it needs to be, whether it’s at the tactical edge or back to a data center to be fused, processed and then redistributed,” Pang said. “As the transport layer, not only can we get all that information back, we can help redistribute that information very quickly to the tactical user, so that commanders can make decisions in a much shortened timeline.”

The post Space-routed internet moves to the mainstream first appeared on Federal News Network.

© Federal News Network


Cybersecurity in focus: DOJ aggressively investigating contractors’ cybersecurity practices

The Justice Department recently resolved several investigations into federal contractors’ cybersecurity requirements as part of the federal government’s Civil Cyber-Fraud Initiative. The initiative, first announced in 2021, ushered in the DOJ’s efforts to pursue cybersecurity-related fraud by government contractors and grant recipients pursuant to the False Claims Act. Since then, the DOJ has publicly announced approximately 15 settlements against federal contractors, with the DOJ undoubtedly conducting even more investigations outside of the public’s view.

As an initial matter, these latest settlements signal that the new administration has every intention of continuing to prioritize government contractors’ cybersecurity practices and combating new and emerging cyber threats to the security of sensitive government information and critical systems. These settlements also coincide with the lead up to the Nov. 10 effective date of the Defense Department’s final rule amending the Defense Federal Acquisition Regulation Supplement, which incorporates the standards of the Cybersecurity Maturity Model Certification.

Key DOJ cyber-fraud decisions

The first of these four recent DOJ settlements was announced in July 2025, and resulted in Hill Associates agreeing to pay the United States a minimum of $14.75 million. In this case, Hill Associates provided certain IT services to the General Services Administration. According to the DOJ’s allegations, Hill Associates had not passed the technical evaluations required by GSA for a contractor to offer certain highly adaptive cybersecurity services to government customers. Nevertheless, the contractor submitted claims charging the government for such cybersecurity services, which the DOJ alleged violated the FCA.

The second settlement, United States ex. rel. Lenore v. Illumina Inc., was announced later in July 2025, and resulted in Illumina agreeing to pay $9.8 million — albeit with Illumina denying the DOJ’s allegations. According to the DOJ, Illumina violated the FCA by selling federal agencies, including the departments of Health and Human Services, Homeland Security and Agriculture, certain genomic sequencing systems that contained cybersecurity vulnerabilities. Specifically, the DOJ alleged that with respect to the cybersecurity of its product, Illumina: (1) falsely represented that its software and systems adhered to cybersecurity standards, including standards of the International Organization for Standardization and National Institute of Standards and Technology; (2) knowingly failed to incorporate product cybersecurity in its software design, development, installation and on-market monitoring; (3) failed to properly support and resource personnel, systems and processes tasked with product security; and (4) failed to adequately correct design features that introduced cybersecurity vulnerabilities.

That same day, the DOJ announced its third settlement, which was with Aero Turbine Inc., and Gallant Capital Partners, LLC (collectively, “Aero”), and resulted in a $1.75 million settlement. This settlement resolved the DOJ’s allegations that Aero violated the FCA by knowingly failing to comply with the cybersecurity requirements of its contract with the Department of the Air Force. Pursuant to the contract, Aero was required to implement the security requirements outlined by NIST Special Publication 800-171, “Protecting Controlled Unclassified Information in Nonfederal Information Systems and Organizations,” but failed to fully do so. This included failing to control the flow of and limit unauthorized access to sensitive defense information when it provided an unauthorized Egypt-based software company and its personnel with files containing sensitive Defense information.

The fourth and latest DOJ settlement was announced in September 2025, and resolved the DOJ’s FCA lawsuit against the Georgia Tech Research Corporation (GTRC). As part of the settlement, GTRC agreed to pay $875,000 to resolve allegations resulting from a whistleblower complaint that it failed to meet the cybersecurity requirements in its DoD contracts. Specifically, the DOJ alleged that until December 2021, the contractor failed to install, update or run anti-virus or anti-malware tools on desktops, laptops, servers and networks while conducting sensitive cyber-defense research for the DoD. The DOJ further alleged that the contractor did not have a system security plan setting out cybersecurity controls, as required by the government contract. Lastly, the DOJ alleged that the contractor submitted a false summary level cybersecurity assessment score of 98 to the DoD, with the score premised on a “fictitious” environment that did not apply to any system being used to process, store or transmit sensitive Defense information.

Takeaways for federal contractors

These recent enforcement actions provide valuable guidance for federal contractors.

  • DOJ has explicitly stated that cyber fraud can exist regardless of whether a federal contractor experienced a cyber breach.
  • DOJ is focused on several practices to support allegations of cyber fraud, including a federal contractor’s cybersecurity practices during product development and deployment, as well as contractors’ statements regarding assessment scores and underlying representations.
  • DOJ takes whistleblower complaints seriously, with several of these actions stemming from complaints by federal contractors’ former employees.
  • To mitigate these risks, federal contractors should ensure that they understand and operationalize their contractual obligations, particularly with respect to the new DFARS obligations.
  • Federal contractors would be well advised to:
    • (1) review and understand their cybersecurity contractual obligations;
    • (2) develop processes to work with the appropriate internal teams (information security, information technology, etc.) to ensure that contractual obligations have been appropriately implemented; and
    • (3) develop processes to monitor compliance with the contractual obligations on an ongoing basis.

Joshua Mullen, Luke Cass, Christopher Lockwood and Tyler Bridegan are partners at Womble Bond Dickinson (US) LLP.

The post Cybersecurity in focus: DOJ aggressively investigating contractors’ cybersecurity practices first appeared on Federal News Network.

© Getty Images/iStockphoto/maxkabakov

Data security and privacy concept. Visualization of personal or business information safety.

Outdated SEC communications rules are putting compliance and competitiveness at risk

Interview transcript

Terry Gerton The Securities Industry and Financial Markets Association has recently written to the SEC asking to modernize its communication and record keeping rules. Help us understand what the big issue is here.

Robert Cruz Well, I think the fundamental issue that SIFMA is calling out is a mismatch between the technology that firms use today and the rules, which were written a long time ago — and in some cases, you know, the Securities and Exchange Act from 1940. So essentially we’re all struggling trying to find a way to fit the way that we interact today into rules that are very old, written when we were doing things with typewriters and, you know, over written communication. So it’s trying to minimize the gap between those two things, between the technology and what the rule requires firms to do.

Terry Gerton So instead of all of those hard copy letters that we get from investment firms and those sorts of things, we also get emails, text messages. That’s where the disconnect is happening?

Robert Cruz Yes. It’s the fact that individuals can collaborate and communicate with their customers over a variety of mechanisms. And some of these may be casual. They may not be related to business. And that’s the fundamental problem: SIFMA is looking for the rules to be clarified so they pertain only to the things that matter to the firm, that create value or risk to their business or to the investor.

Terry Gerton And what would those kinds of communications look like?

Robert Cruz I think what they’ll look like is external communication. So, right now the rule doesn’t distinguish between internal — you and I as colleagues talking versus things that pertain to, you know, communications with the public or with a potential investor. So it’s trying to carve out those things that really do relate to the business’s products or services and exclude some of the things that may be more just conversational, as you and I might pass each other in the hallway, we can chat on a chat board someplace. It’s trying to remove those kind of just transitory communications from the record keeping obligations.

Terry Gerton Right. The letter even mentions things like emojis and messages like “I’m running late.”

Robert Cruz Exactly. And you know, it’s a fundamental problem that firms have: the fact that if you say you’re going to be able to use a tool, even if it’s as simple as email, that means that our firm has an obligation to capture it. And when it captures it, it captures everything, everything that is delivered through that communication channel. So that creates some of that problem of, like, somebody left their lunch in the refrigerator, we need to clean it up. It’s trying to remove all of that noise from the things that really do matter to the business.

Terry Gerton Not only does that kind of record keeping impose a cost on the organization, the reporting organization, but it also would create quite a burden on the regulators trying to sort out the meaningful communication in that electronic file cabinet, so to speak.

Robert Cruz Absolutely. Well, the firm clearly has the obligation to sift through all of this data to find the things that matter. If you have a regulatory inquiry, you’ve got to find everything that relates to it. Even if it’s, you know, I talked to an investor and there was an emoji in that conversation. I still need to account for that. So the burden is both on the firm as well as on the regulator to try to parse through these very large sets of data that are very, you know, heterogeneous with a lot of different activities that are captured in today’s tools.

Terry Gerton Relative to the question about the tools, you’ve said that SEC rules should be agnostic to technology. Unpack that for me. What exactly does that mean?

Robert Cruz Sure. This kind of goes back a few years where there was a revision to Rule 17a-4 from the SEC, which is the fundamental record keeping obligation. It says you need to have complete and accurate records. What they tried to do at that time was remove references to old technologies and spinning disks and things we used to do long ago. And so the objective was to be more independent of technology. Your obligation is your obligation. If it matters to the business, that’s the principle that should govern, not the particular tool that you use. So rules being agnostic to technology means it doesn’t matter whether it’s delivered via email, via text, via emojis, carrier pigeons or anything else. If it matters to the business, it matters to the business.

Terry Gerton How do today’s variety of technologies complicate a business’ compliance requirements?

Robert Cruz The challenge is very complex, period. It’s always going to be with us because there’s always going to be a new way that your client wants to engage. There may be a new tool that you’re not familiar with that they want to interact on. Or you may get pull from your employees internally because they’re familiar with tools from their personal lives. So that encroachment of new tools, it doesn’t go away. It’s always been with us. And so it’s things that we have to anticipate. Again, be agnostic because there’s going to be something that comes right along behind it that potentially makes you know an explicit regulation irrelevant from the outset.

Terry Gerton I’m speaking with Robert Cruz. He’s the Vice President for Regulatory and Information Governance at SMARSH. All right, let’s follow along with that because you’ve got a proposal that includes a compliance safe harbor. So along with these compliance questions, what would that change for firms and how does it address the challenges of enforcement?

Robert Cruz Well, it’s an interesting concept because the rules today are meant to be principles-based. They’re not prescriptive. In other words, they don’t tell you, you must do the following. And that’s one of the challenges the industry has is that, what is good enough? What is the SEC specifically looking for? So this is like trying to give people a safe spot to which then you can say, well, SEC, if you really care about, you know, particular areas of these communications, they can tune their programs to do that. So it feels like it’s just giving some latitude so that we can define best practices. We can get a clearer sense of what the regulators are looking for. It’ll guide our governance processes by just having a clearer picture of where enforcement’s going to be focused.

Terry Gerton The regulatory process that would apply here is notoriously slow and complicated. What’s at stake for firms and investors if we don’t get this modernized?

Robert Cruz Well, I think you’re going to continue to see just a lot of individual practices that will vary. Some firms will interpret things differently and we’ll need to wait for enforcement to determine which is the best way. So, case in point, generative AI — if you’re using these technologies inside of the tools that you currently support, are these going to be considered issues for the SEC or not? We have to wait until we get some interpretation from the regulators to say, yes, we need to have stronger controls around this, or yes, we need to block these tools. You know, you need to make that adjustment based upon the way that the SEC responds to it.

Terry Gerton And what is your sense of how the SEC might respond to this?

Robert Cruz My gut tells me that just given where we are right now, you know, the SEC has a reduction in headcount it’s dealing with. It’s stating its mission very clearly and its focus is on crypto, is on capital formation, is on reducing regulatory burden. I just don’t know if this makes the list. So it clearly is being advocated strongly by SIFMA, but whether this makes page one of the SEC priorities list with the 20% reduction in headcount, it really seems like an outside chance that it gets onto their agenda.

Terry Gerton Could it inform some of the other regulation issues that they’re addressing, such as crypto and capital formation?

Robert Cruz Absolutely. And that’s a great comment — the notion of using an unapproved communication tool, it didn’t go away. We may not see the big fines anymore, but I think the regulators are going to be saying if there’s an issue related to crypto, related to investor harm or what have you, if you’re using a tool that is not approved for use, you don’t have the artifact, you don’t have the historical record. They’re not going to view that you know favorably if you’re not able to defend your business. And so it’ll come up in context of other examinations that they’re carrying out. So maybe not a means to an end as it’s been for the last two years, but it will impact their ability to do their jobs ultimately.

The post Outdated SEC communications rules are putting compliance and competitiveness at risk first appeared on Federal News Network.

© Getty Images/iStockphoto/Maxxa_Satori

Business woman hand using smartphone with digital marketing via multi-channel communication network on mobile application technology.

Risk and Compliance 2025 Exchange: Diligent’s Jason Venner on moving beyond manual cyber compliance

The Pentagon is taking a major step forward in modernizing how it addresses cybersecurity risks.

Defense Department officials have emphasized the need to move beyond “legacy shortcomings” to deliver technology to warfighters more rapidly. In September, DoD announced a new cybersecurity risk management construct to address those challenges.

“The previous Risk Management Framework was overly reliant on static checklists and manual processes that failed to account for operational needs and cyber survivability requirements,” DoD wrote at the time. “These limitations left defense systems vulnerable to sophisticated adversaries and slowed the delivery of secure capabilities to the field.”

Weeding through legacy manual processes

The legacy of manual processes has built up over decades. Jason Venner, a solutions sales director at Diligent, said agencies have traditionally relied on people and paperwork to ensure compliance.

“It’s no one’s fault,” Venner said during Federal News Network’s Risk & Compliance Exchange 2025. “It just sort of evolved that way, and now it’s time to stop and reassess where we’re at. I think the administration is doing a pretty good job in looking at all the different regs that they’re promulgating and revising them.”

Venner said IT leaders are interested in ways to help streamline the governance, risk and compliance process while ensuring security.

“Software should help make my life easier,” he said. “If I’m a CIO or a CISO, it should help make my life easier, and not just for doing security scans or vulnerability scans, but actually doing IT governance, risk and compliance.”

Katie Arrington, who is performing the duties of the DoD chief information officer, has talked about the need to “blow up” the current RMF. The department moved to the framework in 2018 when it transitioned away from the DoD Information Assurance Certification and Accreditation Process (DIACAP).

“I remember when we were going from DIACAP to RMF, I wanted to pull my hair out,” Arrington said earlier this year. “It’s still paper. Who reads it? What we do is a program protection plan. We write it, we put it inside the program. We say, ‘This is what we’ll be looking to protect the program.’ We put it in a file, and we don’t look at it for three years. We have to get away from paperwork. We have to get away from the way we’ve done business to the way we need to do business, and it’s going to be painful, and there are going to be a lot of things that we do, and mistakes will be made. I really hope that industry doesn’t do what industry tends to do, [which] is want to sue the federal government instead of working with us to fix the problems. I would really love that.”

Arrington launched the Software Fast Track initiative to once again tackle the challenge of quickly adopting secure software.

Evolving risk management through better automation, analytics

DoD’s new risk management construct includes a five-phase lifecycle along with core principles, including automation, continuous monitoring and DevSecOps.

Arrington talked about the future vision for cyber risk management within DoD earlier this year.

“I’m going to ask you, if you’re a software provider, to provide me your software bill of materials in both your sandbox and production, along with a third-party SBOM. You’re going to populate those artifacts into our Enterprise Mission Assurance Support Service,” she said. “I will have AI tools on the back end to review the data instead of waiting for a human and if all of it passes the right requirements, provisional authority to operate.”
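
As a rough sketch of what automated SBOM review can look like, the snippet below loads a CycloneDX-style JSON SBOM and flags components that appear on a deny list. The file name, component entry, and deny list are hypothetical; they stand in for whatever vulnerability data sources and policy checks the department’s tooling would actually run.

```python
# Minimal sketch: load a CycloneDX-style JSON SBOM and flag components that
# appear on a (hypothetical) deny list. A real pipeline would query actual
# vulnerability data sources rather than a hard-coded dictionary.
import json

FLAGGED_COMPONENTS = {"log4j-core": "review against known CVEs"}  # hypothetical entry

def review_sbom(path: str) -> list[str]:
    with open(path) as fh:
        sbom = json.load(fh)
    findings = []
    for component in sbom.get("components", []):
        name = component.get("name", "")
        version = component.get("version", "unknown")
        if name in FLAGGED_COMPONENTS:
            findings.append(f"{name} {version}: {FLAGGED_COMPONENTS[name]}")
    return findings

if __name__ == "__main__":
    for finding in review_sbom("sandbox-sbom.json"):  # hypothetical artifact name
        print("needs review:", finding)
```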

Venner said the use of automation and AI rests on a foundation of data analytics. He argued the successful use of AI for risk management will require purpose-built models.

“Can you identify, suggest, benchmark things for me and then identify controls to mitigate these risks, and then let me know what data I need to monitor to ensure those controls are working. That’s where AI can really accelerate the conversation,” Venner said.

Discover more articles and videos now on our Risk & Compliance Exchange 2025 event page.

The post Risk and Compliance 2025 Exchange: Diligent’s Jason Venner on moving beyond manual cyber compliance first appeared on Federal News Network.

© Federal News Network


Electromagnetic Warfare: NATO's Blind Spot Could Decide the Next Conflict

12/4/25
MILITARY TECHNOLOGY

The war in Ukraine has exposed a critical front long neglected by Western militaries: electromagnetic warfare (EW). Control over this invisible battlespace, where communications are jammed, drones blinded, and precision weapons thrown off course, can decide the outcome of a conflict. Russia has understood this sooner than NATO, using EW to isolate Ukrainian units, disrupt command networks, and neutralize Western systems. Ukraine has adapted with ingenuity, but it is learning in combat what NATO should have learned in training.


Amazon reportedly considering ending ties with the US Postal Service

Amazon is reportedly considering discontinuing use of the US Postal Service and building out its own shipping network to rival it, according to The Washington Post. The e-commerce behemoth spends more than $6 billion a year on the public mail carrier, representing just shy of 8 percent of the service's total revenues. That's up from just shy of $4 billion in 2019, and Amazon continues to grow.

However, it sounds like that split might be due to a breakdown in negotiations between Amazon and the USPS rather than Amazon proactively pulling its business. Amazon provided Engadget with the following statement regarding the Post's reporting and its negotiations with the USPS: 

"The USPS is a longstanding and trusted partner and we remain committed to working together. We’ve continued to discuss ways to extend our partnership that would increase our spend with them, and we look forward to hearing more from them soon — with the goal of extending our relationship that started more than 30 years ago. We were surprised to hear they want to run an auction after nearly a year of negotiations, so we still have a lot to work through. Given the change of direction and the uncertainty it adds to our delivery network, we're evaluating all of our options that would ensure we can continue to deliver for our customers."

The auction Amazon is referring to would be a "reverse auction," according to the Post. The USPS would be offering its mailing capabilities to the highest bidder, essentially making Amazon and other high-volume shippers compete for USPS resources. This move would reportedly be a result of the breakdown in talks between Amazon and the USPS. 

Over the past decade, Amazon has invested heavily in shipping logistics, buying its own Boeing planes, debuting electric delivery vans and slowly building out a drone delivery network. Last year, Amazon handled over 6.3 billion parcels, a 7 percent increase over the previous year, according to the Pitney Bowes parcel shipping index. USPS, for its part, handled roughly 6.9 billion, just a 3 percent increase over 2023. That is to say that Amazon's shipping network can already handle over 90 percent of the volume of the US Postal Service (at least by sheer numbers).

The USPS has been in dire financial condition for some time, losing billions of dollars a year. Negotiations between Amazon and the public carrier have reportedly stalled, which, together with the agency's need to keep raising its prices, may create more urgency for the company to eliminate its reliance on the service altogether.

The Postal Service has struggled to modernize and adapt (its attempt to electrify the truck fleet was a bust) in a market where the likes of Amazon and Walmart are investing billions in delivering packages around the country at lightning speed. The ever-accelerating digitization of communication and heavy investment in privately owned shipping operations threatens the very existence of one of the country's greatest public goods.

Update, December 4, 2025, 2:24PM ET: This story has been updated with a statement from Amazon and more details about the "reverse auction" the USPS reportedly wants to conduct if it no longer works with Amazon.

This article originally appeared on Engadget at https://www.engadget.com/big-tech/amazon-reportedly-considering-ending-ties-with-the-us-postal-service-144555021.html?src=rss

© FinkAvenue via Getty Images

Munich, Bavaria Germany - December 11 2022: Amazon Deutschland Services GmbH e-commerce german headquarters office building with glass green trademark logo. Ultra HD