IPv6 Monitoring: Why IT Admins Can't Ignore This Transition

Okay, real talk. When's the last time you actually thought about IPv6? Like, really thought about it?

I'm guessing it's been sitting on your to-do list collecting dust. Maybe you glance at it occasionally, tell yourself dual-stack is working fine, that IPv4 will stick around for another few years at least. I mean, it's been working this long, right?

Pix, SEPA, BRICS and BIS: Four Paths Toward the Same Missing Layer

Why global instant payments fail not because of technology, but because settlement, governance and programmability remain structurally misaligned

Layered global payment architecture: Fast execution and orchestration layers sit above a stable settlement foundation, highlighting why global instant payments depend less on new technology and more on aligning settlement, governance and programmability.

Abstract

Despite significant advances in instant payment systems, tokenisation and digital asset infrastructures, global payments remain structurally fragmented. While execution speeds have increased markedly, settlement finality, governance and programmability continue to be addressed in isolation rather than as integrated components of a coherent financial architecture. This fragmentation becomes particularly visible as payment processes move towards real-time, event-driven and increasingly automated models in global trade and treasury operations.

This article examines four prominent approaches to modern payment infrastructure: Brazil’s Pix, Europe’s SEPA, emerging BRICS cross-border initiatives and the Bank for International Settlements’ experimental projects, not as competing systems, but as partial solutions optimising different layers of the payment stack. Each addresses specific challenges, ranging from domestic execution efficiency to regional standardisation and wholesale settlement in central bank money. None, however, provides an end-to-end framework that aligns execution, settlement finality, governance and programmability across jurisdictions.

By analysing these initiatives through a layered architectural lens, the article argues that the central challenge of global instant payments is no longer technological capability, but institutional coordination and settlement design. It proposes that sustainable innovation in global payments requires the integration of programmable processes with interoperable settlement layers anchored in central bank money, supported by open governance and legally robust finality. In this context, the debate shifts from the optimisation of individual rails to the design of shared infrastructure capable of supporting real-time global trade.

Introduction

Over the past decade, the global payments landscape has undergone a remarkable acceleration. Instant payment systems, real-time treasury operations, tokenised assets and digital settlement experiments have moved from conceptual pilots to operational reality in multiple regions. Yet this apparent progress masks a deeper structural tension. While payments are increasingly executed in real time, the underlying settlement, governance and legal finality mechanisms remain fragmented, jurisdiction-bound and inconsistently integrated. The result is a global payments environment that is faster, but not fundamentally more coherent.

This tension is becoming increasingly visible as global trade and corporate finance adopt event-driven and programmable operating models. Execution speed alone is no longer sufficient. As payment processes automate and scale across borders, the question shifts from how quickly money moves to when, where and under which authority value is finally settled. It is at this architectural level, rather than at the level of individual products or political narratives, that today’s debates around instant payments, central bank digital currencies and alternative settlement networks must be examined.

This article approaches that examination by comparing four influential payment infrastructures, Pix, SEPA, emerging BRICS initiatives and BIS-led experiments, not as rivals, but as complementary attempts to address different layers of a shared problem.

Speed Is No Longer the Binding Constraint

For much of the past three decades, the evolution of payment systems has been framed primarily as a problem of speed and efficiency. Batch-based processing, limited operating hours and fragmented correspondent banking arrangements were widely identified as the principal bottlenecks in cross-border and domestic payments. Considerable institutional and technological effort was therefore directed towards accelerating execution, reducing cut-off times and improving straight-through processing.

These efforts have largely succeeded. A growing number of jurisdictions now operate domestic instant payment systems that provide near-real-time execution and immediate availability of funds to end users. Brazil’s Pix, Europe’s SEPA Instant Credit Transfer (SCT Inst), India’s UPI and similar systems demonstrate that real-time execution at scale is technically feasible, economically viable and socially impactful. From a purely operational perspective, the question of “how fast payments can move” has largely been answered.

However, the increasing prevalence of real-time execution has exposed a more fundamental limitation. Speed optimises only one layer of the payment process: execution. It does not, by itself, resolve questions of settlement finality, legal certainty, balance sheet exposure or cross-system interoperability. In fact, as execution accelerates, the weaknesses of underlying settlement arrangements become more pronounced rather than less relevant.

This distinction is particularly important in cross-border and multi-currency contexts. While instant payment systems can deliver rapid crediting of accounts, the ultimate settlement of obligations often continues to rely on deferred processes, commercial bank money and jurisdiction-specific legal frameworks. As a result, faster execution may coexist with persistent settlement risk, intraday liquidity pressures and fragmented governance structures.

From an architectural perspective, this implies that further gains in payment system performance cannot be achieved by execution-layer optimisation alone. Once speed ceases to be the binding constraint, attention must shift to the design and coordination of settlement mechanisms, governance models and the legal foundations of finality. It is at this level that contemporary initiatives increasingly diverge, and where meaningful comparison between systems such as Pix, SEPA, BRICS-linked proposals and BIS-led experiments becomes analytically productive.

Execution, Clearing and Settlement as Distinct Architectural Layers

A persistent source of confusion in contemporary payment debates arises from the tendency to treat execution, clearing and settlement as a single, homogeneous process. While closely related in operational terms, these functions represent analytically distinct layers within the payment architecture, each governed by different technical, legal and institutional logics.

Execution refers to the initiation and routing of a payment instruction and the conditional crediting of accounts. Modern instant payment systems have significantly optimised this layer, enabling near-immediate user-facing outcomes. Clearing, by contrast, concerns the calculation and netting of obligations between participating institutions. Settlement represents the final discharge of those obligations through the transfer of a settlement asset, thereby extinguishing counterparty claims and producing legal finality.

In traditional banking infrastructures, these layers are tightly coupled but not temporally aligned. Execution may occur within seconds, while clearing and settlement may follow hours or even days later, often across different systems and balance sheets. This decoupling has historically been managed through credit risk, liquidity buffers and legal constructs designed for batch-based environments.
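
To make this layering concrete, the following minimal sketch models execution, clearing and settlement as distinct events with different timing, parties and settlement assets. It is an illustrative data model only; every type and field name is invented and does not describe any particular scheme.

```typescript
// Illustrative sketch only: hypothetical types showing why execution,
// clearing and settlement are distinct events. Not a model of any real system.

type SettlementAsset = "commercial-bank-money" | "central-bank-money";

interface ExecutionEvent {
  paymentId: string;
  initiatedAt: Date;       // seconds after initiation
  payerBank: string;
  payeeBank: string;
  amount: number;
  currency: string;
  // The payee sees a credit here, but no obligation has been discharged yet.
}

interface ClearingEvent {
  cycleId: string;
  computedAt: Date;                      // often hours later, per clearing cycle
  nettedPositions: Map<string, number>;  // bank -> net obligation after netting
}

interface SettlementEvent {
  cycleId: string;
  settledAt: Date;         // possibly a day later, on a different system
  asset: SettlementAsset;  // determines counterparty risk and resilience
  final: boolean;          // legal finality attaches only at this layer
}
```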

As payment systems move towards real-time and continuous operation, this architectural separation becomes increasingly consequential. Accelerated execution compresses the time available to manage settlement risk, while automated processes reduce the scope for discretionary intervention. Under such conditions, the nature of the settlement asset and the legal framework governing finality assume heightened importance.

Commercial bank money, which dominates settlement in most existing systems, represents a private liability and therefore embeds counterparty risk by design. Central bank money, by contrast, constitutes a public settlement asset with unique properties of legal certainty, risk insulation and systemic trust. The distinction between these assets is not merely technical but foundational, as it determines how risk is distributed across participants and how resilient a system remains under stress.

An architectural analysis must therefore distinguish clearly between improvements at the execution layer and transformations at the settlement layer. While the former can deliver efficiency gains and enhanced user experience, only the latter can fundamentally alter the risk, governance and interoperability characteristics of a payment system. This distinction provides the analytical basis for assessing whether current initiatives represent incremental optimisation or structural innovation.

It is against this layered framework that domestic instant payment systems, regional standards and emerging cross-border settlement proposals must be evaluated. Their differences lie less in technological sophistication than in how, and whether, they address the settlement layer explicitly.

Domestic Instant Payment Systems as Execution-Layer Optimisations

Over the past decade, a growing number of jurisdictions have introduced domestic instant payment systems designed to modernise retail and business payments. These systems typically prioritise speed, availability and user experience, offering continuous operation, immediate confirmation and increasingly rich data exchange. From an execution perspective, they represent a significant departure from batch-oriented legacy infrastructures.

Architecturally, however, these systems are best understood as execution-layer optimisations rather than as comprehensive transformations of the payment stack. They focus on the rapid transmission and processing of payment instructions between participant institutions, often supported by enhanced messaging standards and real-time liquidity management. The user-facing outcome is near-immediate crediting, which materially improves cash-flow visibility and operational efficiency for end users.

Crucially, these improvements do not, in themselves, alter the underlying settlement logic. In most implementations, final settlement continues to rely on commercial bank money, with positions ultimately reconciled through deferred or periodic settlement processes. Even where prefunding or intraday liquidity mechanisms are employed, the settlement asset remains a private liability rather than a public one.

This distinction matters because execution speed and settlement finality are not interchangeable. Instant execution reduces operational friction but does not eliminate counterparty exposure. As long as settlement occurs outside central bank balance sheets, the system’s resilience depends on the creditworthiness and liquidity management of participating institutions, as well as on legal arrangements designed to manage failure scenarios.

From a systemic perspective, domestic instant payment systems therefore deliver substantial efficiency gains without fundamentally reconfiguring risk allocation. They improve how quickly value appears to move, but not how or where value is ultimately settled. This makes them highly effective within stable domestic environments, yet structurally limited when extended across borders, currencies or regulatory regimes.

The growing success of these systems can paradoxically obscure this limitation. High adoption rates and positive user experience create the impression of infrastructural completeness, even though the settlement layer remains unchanged. As long as payments remain predominantly domestic and low-risk, this architectural gap may appear tolerable. However, as real-time execution becomes the norm and payment flows increasingly span jurisdictions, the absence of an equally real-time, risk-free settlement layer becomes more pronounced.

Understanding domestic instant payment systems as optimisation layers rather than as full-stack solutions is therefore essential. It clarifies why further gains in speed and availability, while valuable, cannot by themselves address challenges of cross-border interoperability, systemic risk reduction and global scalability. These challenges reside not at the execution layer, but at the level of settlement architecture.

Why Cross-Border Extension Exposes the Limits of Execution-Only Models

The extension of real-time payment capabilities across borders introduces a set of structural challenges that execution-layer optimisation alone cannot resolve. While domestic instant payment systems benefit from legal harmonisation, shared currency frameworks and aligned supervisory regimes, these conditions rarely persist beyond national or regional boundaries.

Cross-border payments operate at the intersection of multiple currencies, legal systems, regulatory frameworks and liquidity regimes. Execution speed in such environments does not simply amplify efficiency; it amplifies coordination problems. Each additional jurisdiction introduces new settlement calendars, risk thresholds, compliance requirements and failure modes, which cannot be neutralised by faster messaging or improved user interfaces alone.

In execution-only architectures, cross-border connectivity is typically achieved through bilateral or hub-based linkages between domestic systems. These arrangements focus on routing payment instructions and managing prefunding or liquidity bridges between participants. While such approaches can reduce friction at the margins, they leave the fundamental settlement logic unchanged. Obligations continue to be settled in commercial bank money, often across multiple balance sheets and time zones.

This creates a structural asymmetry: execution becomes real-time, while settlement remains fragmented, deferred and risk-bearing. As a result, credit and liquidity risk are not eliminated but redistributed, often in opaque ways. The faster the execution layer operates, the more sensitive the system becomes to settlement disruptions, liquidity bottlenecks and legal uncertainty.

Furthermore, execution-only cross-border models tend to rely on conditional guarantees, bilateral credit lines or collateralisation schemes to manage risk. These mechanisms introduce complexity and cost, and they scale poorly as the number of participants and corridors increases. What appears manageable in limited pilot corridors becomes increasingly brittle when extended to global networks.

From an institutional perspective, this exposes a deeper limitation. Cross-border payments are not merely technical exchanges between systems; they are legal and economic events that require universally recognised finality. Without a shared settlement asset that is trusted across jurisdictions, execution-layer connectivity cannot produce true interoperability. It can only simulate immediacy while deferring risk resolution.

The consequence is a proliferation of partially connected networks rather than a coherent global infrastructure. Each linkage optimises locally, but the system as a whole remains fragmented. This fragmentation is not accidental; it reflects the absence of a settlement layer capable of operating across borders with uniform legal certainty and risk neutrality.

Cross-border extension thus acts as a stress test for execution-only models. It reveals that speed, availability and user experience, while necessary, are insufficient conditions for global scalability. The binding constraint is not how fast instructions move, but how and where value is ultimately settled.

The Settlement Asset as the Missing Variable in Global Scalability

The structural limitations identified in cross-border execution-only models ultimately converge on a single, often underexplored variable: the nature of the settlement asset itself. While execution systems determine how payment instructions are transmitted and processed, it is the settlement asset that determines whether obligations are discharged with legal certainty, risk neutrality and systemic trust.

In most existing payment architectures, settlement relies on commercial bank money. This asset represents a private liability, issued by individual institutions and embedded within their balance sheets. While commercial bank money functions efficiently within established domestic frameworks, its suitability diminishes as payment processes become continuous, automated and cross-border by design. The resulting exposure to credit, liquidity and legal risk does not disappear with faster execution; it becomes more immediate and more tightly coupled to system stability.

Global scalability requires a settlement asset that is universally recognised, legally final and institutionally neutral. Central bank money uniquely fulfils these criteria. It constitutes the ultimate settlement asset within a currency area, free from private credit risk and anchored in public law. Historically, access to central bank settlement has been restricted to regulated financial institutions, reflecting the batch-oriented nature of legacy infrastructures and the need to manage systemic risk through controlled participation.

As payment processes evolve towards real-time operation, this historical separation between execution innovation and settlement architecture becomes increasingly untenable. Automated, condition-based transactions require deterministic settlement outcomes. Without a settlement asset that can support continuous finality, programmability at the execution layer merely accelerates the accumulation of contingent claims.

This observation reframes the debate around payment modernisation. The central challenge is not how to connect execution systems more efficiently, but how to anchor those systems to a settlement layer capable of operating at the same temporal and legal resolution. Without such anchoring, global interoperability remains fragile, dependent on bilateral arrangements and risk mitigation techniques that do not scale.

The settlement asset therefore functions as the gravitational centre of the payment architecture. It determines not only risk distribution, but also governance, access and trust. Any attempt to construct globally interoperable payment infrastructures without addressing this layer will inevitably reproduce fragmentation at higher speeds.

Recognising the settlement asset as a design variable rather than a given marks a conceptual shift. It opens the analytical space for institutional innovation at the infrastructure level, rather than continued optimisation within inherited constraints. This shift provides the foundation for understanding why recent initiatives increasingly focus on settlement itself, rather than solely on execution efficiency.

Institutional Responses to the Settlement Constraint

Once the settlement asset is recognised as the limiting factor in global payment scalability, recent institutional initiatives can be reinterpreted not as isolated experiments, but as convergent responses to a shared architectural problem. Across jurisdictions and governance models, a growing number of actors are exploring ways to reintroduce central bank money as an active settlement layer capable of supporting real-time, cross-border processes.

At the multilateral level, initiatives coordinated by international institutions have focused explicitly on settlement interoperability rather than on execution efficiency. Experimental platforms exploring multi-currency settlement, shared ledgers and synchronised settlement mechanisms reflect a recognition that global payments cannot be stabilised through bilateral optimisation alone. These projects treat settlement finality as a public good, requiring coordination across central banks rather than competition between private intermediaries.

Parallel to these efforts, several emerging market economies and regional blocs have begun to articulate settlement infrastructures aimed at reducing dependency on correspondent banking chains and dominant reserve currencies. While often framed in geopolitical terms, these initiatives are more coherently understood as attempts to regain control over settlement finality and liquidity management in cross-border trade. Their common feature is not political alignment, but the explicit use of central bank money as the settlement anchor.

Within advanced economies, central bank digital currency initiatives represent a complementary response. While many early discussions have focused on retail use cases, the underlying architectural implication is broader. By making central bank money natively compatible with digital infrastructures, CBDCs reopen the question of how settlement access, programmability and interoperability can be designed for continuous operation. Importantly, this does not imply a displacement of private sector innovation, but a reconfiguration of its foundation.

These institutional responses share a critical characteristic: they shift the locus of innovation from the execution layer to the settlement layer. Rather than attempting to optimise existing commercial bank-based arrangements indefinitely, they seek to redesign the conditions under which settlement occurs. This marks a departure from incrementalism towards structural intervention at the infrastructure level.

At the same time, these approaches remain incomplete. Most initiatives address either wholesale or retail settlement in isolation, and few are yet integrated into corporate payment and treasury workflows. Nevertheless, they signal an emerging consensus: global payment systems cannot achieve real-time, programmable and interoperable operation without revisiting the role of central bank money in settlement.

Understanding these developments as architectural responses rather than ideological positions allows for a more constructive evaluation. It highlights convergence where public discourse often emphasises divergence, and it clarifies that the underlying objective is not control over execution, but stability, finality and trust at the settlement layer.

Synthesising Pix, SEPA, BRICS and BIS Within a Layered Framework

When examined through a layered architectural lens, the apparent diversity of contemporary payment initiatives becomes analytically coherent. Systems such as Pix, SEPA Instant, emerging BRICS-linked settlement efforts and BIS-coordinated projects do not represent competing visions of the future, but rather address different layers of the same structural problem.

Domestic instant payment systems, exemplified by Pix and SEPA Instant, operate primarily at the execution layer. They demonstrate that real-time payment initiation, continuous availability and high-volume processing can be achieved reliably within harmonised legal and currency environments. Their success lies in operational efficiency, user adoption and economic inclusion. However, their settlement logic remains anchored in commercial bank money, rendering them optimised yet incomplete from a global scalability perspective.

Cross-border initiatives associated with BRICS economies focus on a different constraint. Their primary objective is not execution speed for end users, but sovereignty over settlement and liquidity management in international trade. By emphasising settlement in central bank money and reducing reliance on correspondent banking chains and dominant reserve currencies, these efforts explicitly target the settlement layer. While often interpreted through geopolitical narratives, architecturally they address the same deficiency identified in execution-only models: the absence of a universally trusted settlement anchor.

BIS-led initiatives occupy a distinct but complementary position. They do not aim to create production payment systems, but to explore interoperable settlement architectures across currencies and jurisdictions. By experimenting with shared settlement platforms, synchronised settlement mechanisms and multi-currency coordination, these projects explicitly treat settlement as a design problem rather than an inherited constraint. Their value lies less in immediate deployment and more in defining architectural primitives for future systems.

Viewed together, these initiatives illustrate a fragmented but converging trajectory. Execution-layer optimisation, regional standardisation and settlement-layer innovation are progressing in parallel, but largely without integration. Each addresses a necessary condition for global instant payments, yet none alone provides a sufficient solution. The absence of a unifying architectural framework explains why debates often oscillate between domestic efficiency, monetary sovereignty and technological experimentation without resolving their interdependence.

This synthesis suggests that the current landscape should not be interpreted as a competition between models, but as an incomplete assembly of layers. The challenge lies not in selecting a single approach, but in designing interfaces between them that preserve their respective strengths while addressing their limitations.

Toward a Coherent Architecture for Global Instant Payments

A coherent global instant payment architecture cannot be achieved through further optimisation at individual layers alone. Nor can it emerge from isolated institutional initiatives, however sophisticated. What is required is an explicit architectural alignment between execution, clearing and settlement, supported by governance structures capable of operating across jurisdictions.

At the execution layer, systems must support real-time, programmable and data-rich payment initiation. This capability is already largely in place. At the settlement layer, value transfer must occur in assets that provide legal finality, systemic trust and neutrality across borders. Central bank money, whether accessed through existing infrastructures or digitally native representations, remains uniquely positioned to fulfil this role.

Crucially, programmability should be understood as a property of processes, not of money itself. Payment logic, compliance rules and conditional execution belong in systems and applications. Finality belongs in the settlement asset. Conflating these functions risks either undermining trust or constraining innovation. A coherent architecture must therefore separate concerns while ensuring deterministic interaction between layers.
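
As a rough illustration of this separation of concerns, the sketch below keeps conditional logic and compliance checks in the process layer and emits a deliberately logic-free settlement instruction. All names are hypothetical; the example asserts nothing about how any real scheme implements programmability.

```typescript
// Sketch under stated assumptions: programmability lives in the process layer,
// while the settlement instruction it emits carries no embedded logic.

interface TradeEvent {
  invoiceId: string;
  goodsReceived: boolean;
  amount: number;
  payee: string;
}

interface SettlementInstruction {
  amount: number;
  payee: string;
  // No conditions here: finality is a property of the settlement layer,
  // not of the instruction or the money itself.
}

function paymentProcess(
  event: TradeEvent,
  complianceCheck: (e: TradeEvent) => boolean
): SettlementInstruction | null {
  // Conditional execution and compliance rules belong in the process layer.
  if (!event.goodsReceived) return null;
  if (!complianceCheck(event)) return null;
  return { amount: event.amount, payee: event.payee };
}

// Example: the process decides; the settlement layer only finalises.
const instruction = paymentProcess(
  { invoiceId: "INV-7", goodsReceived: true, amount: 120_000, payee: "Supplier GmbH" },
  () => true
);
```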

From an institutional perspective, this implies a redefinition of roles. Central banks act as infrastructure providers and guarantors of settlement integrity, not as competitors in product markets. Private institutions innovate at the execution and service layers, building differentiated offerings on top of shared settlement foundations. International coordination bodies provide the standards and interfaces necessary for interoperability.

The transition to such an architecture will be incremental rather than revolutionary. It will require bridging domestic instant payment systems to interoperable settlement layers, integrating wholesale and retail perspectives, and aligning regulatory frameworks with architectural realities. Yet the direction is clear. As payment processes become real-time and programmable, settlement finality can no longer remain deferred, fragmented or opaque.

The future of global instant payments will not be defined by the fastest execution rail, the most sophisticated token or the loudest political narrative. It will be defined by the ability to align speed with finality, innovation with trust, and domestic efficiency with global interoperability. Designing that alignment is no longer a theoretical exercise. It is the next structural challenge for the global financial system.

Conclusion

Viewed through the lens of Global Instant Payments (GIP), the developments discussed in this article point to a common architectural requirement rather than divergent institutional agendas. The RELEVANT framework (Regulatory, Economic and Legal Enablement through Value Alignment and Networked Trust) provides a way to integrate execution efficiency, settlement finality and governance coherence into a single analytical construct. It emphasises that sustainable innovation in payments does not arise from isolated optimisation, but from aligning technological capability with legal certainty and institutional trust at scale. In this sense, GIP is not a call for faster payments alone, but for an infrastructure in which real-time execution and central bank settlement operate as complementary layers, enabling programmable, resilient and globally interoperable financial processes.



Amazon fixes Alexa ordering bug, Microsoft rethinks AI data centers, and cameras capture every fan

Someone listening to last week’s GeekWire Podcast caught something we missed: a misleading comment by Alexa during our voice ordering demo — illustrating the challenges of ordering by voice vs. screen. We followed up with Amazon, which says it has fixed the underlying bug.

On this week’s show, we play the audio of the order again. Can you catch it? 

Plus, Microsoft announces a “community first” approach to AI data centers after backlash over power and water usage — and President Trump scooped us on the story. We discuss the larger issues and play a highlight from our interview with Microsoft President Brad Smith.

Also: the technology capturing images of every fan at Lumen Field, UK police blame Copilot for a hallucinated soccer match, and Redfin CEO Glenn Kelman departs six months after the company’s acquisition by Rocket.

Subscribe to GeekWire in Apple Podcasts, Spotify, or wherever you listen.

Audio editing by Curt Milton.

TSMC says AI demand is “endless” after record Q4 earnings

On Thursday, Taiwan Semiconductor Manufacturing Company (TSMC) reported record fourth-quarter earnings and said it expects AI chip demand to continue for years. During an earnings call, CEO C.C. Wei told investors that while he cannot predict the semiconductor industry's long-term trajectory, he remains bullish on AI.

TSMC manufactures chips for companies including Apple, Nvidia, AMD, and Qualcomm, making it a linchpin of the global electronics supply chain. The company produces the vast majority of the world's most advanced semiconductors, and its factories in Taiwan have become a focal point of US-China tensions over technology and trade. When TSMC reports strong demand and ramps up spending, it signals that the companies designing AI chips expect years of continued growth.

"All in all, I believe in my point of view, the AI is real—not only real, it's starting to grow into our daily life. And we believe that is kind of—we call it AI megatrend, we certainly would believe that," Wei said during the call. "So another question is 'can the semiconductor industry be good for three, four, five years in a row?' I'll tell you the truth, I don't know. But I look at the AI, it looks like it's going to be like an endless—I mean, that for many years to come."


Wikipedia signs major AI firms to new priority data access deals

On Thursday, the Wikimedia Foundation announced API access deals with Microsoft, Meta, Amazon, Perplexity, and Mistral AI, expanding its effort to get major tech companies to pay for high-volume API access to Wikipedia content, which these companies use to train AI models like Microsoft Copilot and ChatGPT.

The deals mean that most major AI developers have now signed on to the foundation's Wikimedia Enterprise program, a commercial subsidiary that sells API access to Wikipedia's 65 million articles at higher speeds and volumes than the free public APIs provide. Wikipedia's content remains freely available under a Creative Commons license, but the Enterprise program charges for this faster, higher-volume access to the data. The foundation did not disclose the financial terms of the deals.

The new partners join Google, which signed a deal with Wikimedia Enterprise in 2022, as well as smaller companies like Ecosia, Nomic, Pleias, ProRata, and Reef Media. The revenue helps offset infrastructure costs for the nonprofit, which otherwise relies on small public donations while watching its content become a staple of training data for AI models.


We’re Building Payment Infrastructure for Nigerian Businesses. Here’s Why We Need Your Help

The Problem We Keep Hearing

Over the past three weeks, I’ve sent 50+ emails to Nigerian businesses — fashion designers in Lagos, freelance developers, gadget retailers, drone wholesalers, and crowdfunding platforms.

The feedback has been surprisingly consistent.

These aren’t isolated complaints. They’re systemic infrastructure gaps.

The pattern is clear:

  • Businesses are losing 10–15% to international payment fees
  • Freelancers are watching Naira devaluation eat their savings in real-time
  • E-commerce businesses are dealing with failed cross-border transactions
  • Operations teams face manual reconciliation nightmares
  • There’s constant fear of frozen accounts when accepting international payments

Traditional payment rails weren’t built for this moment. Banks can’t solve it. Payment processors won’t solve it. Someone has to create the alternative.

That’s why we’re building BillingBase.

BillingBase | Non-Custodial Billing Layer for Stablecoins

What We’re Building (And Why It’s Different)

BillingBase is non-custodial crypto billing infrastructure for global businesses.

Let me break down what that actually means:

  1. Non-custodial means payments go directly to YOUR wallet. We never hold your funds. We don’t control your money. We don’t have the ability to freeze your account. You maintain complete custody while we handle the infrastructure.
  2. Crypto billing infrastructure means we provide the payment primitives you’re already familiar with — checkout links, subscriptions, refunds, webhooks, invoicing — except they work with stablecoins instead of traditional payment rails.
  3. For Nigerian businesses, this means we understand the specific problems you face: Naira devaluation, international payment friction, high platform fees, and the need for dollar-denominated earnings that hold their value.

Here’s what you can do with BillingBase:

  • Accept stablecoin payments: USDT, USDC, DAI, and CNGN (Naira-pegged)
  • Use familiar tools: Payment links, recurring subscriptions, one-time payments, refunds, webhooks, and a dashboard to track everything (see the hypothetical webhook sketch after this list)
  • Get built-in protection: Chainalysis wallet screening, transaction-level risk checks, KYB verification, and audit trails for compliance
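
To make those "familiar tools" a bit more concrete, here is a minimal sketch of how a merchant server might consume a payment webhook. BillingBase's actual API is not public, so every endpoint, header, and field name below is invented for illustration only; treat it as a shape, not documentation.

```typescript
// Hypothetical sketch only: all endpoint, header, and field names are invented.
import express from "express";
import crypto from "crypto";

const app = express();
app.use(express.json());

// Webhook receiver that confirms a stablecoin payment and reconciles it
// against an invoice in the merchant's own system.
app.post("/webhooks/payments", (req, res) => {
  // Verify a (hypothetical) HMAC signature so only real events are trusted.
  // In production you would sign the raw request body, not a re-serialisation.
  const signature = req.headers["x-billing-signature"] as string;
  const expected = crypto
    .createHmac("sha256", process.env.WEBHOOK_SECRET ?? "")
    .update(JSON.stringify(req.body))
    .digest("hex");
  if (signature !== expected) return res.status(401).end();

  const { invoiceId, token, amount, txHash } = req.body; // hypothetical fields
  console.log(`Invoice ${invoiceId} paid: ${amount} ${token} (tx ${txHash})`);
  // Mark the invoice as paid in your own database here.
  res.status(200).end();
});

app.listen(3000);
```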

Why We’re Starting with a Beta

We don’t have all the answers yet.

What we know:

Nigerian businesses need better cross-border payments. Naira devaluation makes dollar earnings critical. Platform fees (Upwork’s 15%, Stripe’s 3.9%, PayPal’s conversion markups) are too high. International customers increasingly pay with stablecoins. Operations teams need reconciliation tools that traditional banks don’t provide.

What we need to validate:

  1. Core features: Which payment primitives matter most? Do fashion designers prioritize deposit handling or subscription billing? Do freelancers want simple payment links or full API integration? Where’s the acceptable onboarding friction threshold?
  2. Compliance: What documentation feels reasonable versus invasive? KYB is necessary, but where’s the line between thorough and annoying?
  3. Infrastructure: Which blockchains — Base (lowest fees), Polygon (widely supported), Arbitrum (fast settlement)? Do businesses care, or just want “cheapest and fastest”? USDC (dollar-pegged) or CNGN (Naira-pegged)?
  4. Integrations: QuickBooks? Xero? Google Sheets? Slack notifications when payments arrive?

Questions only real usage can answer:

  • How do designers handle deposits versus final payments?
  • Do freelancers prefer shareable links or automated invoicing? What reporting do finance teams need?
  • How often do merchants convert stablecoins to Naira?
  • What’s the right balance between automated risk controls and merchant control?

We can’t answer these in a vacuum. We need real businesses using BillingBase with real customers.

That’s why we’re opening a beta.

Sign up here!

Who We’re Looking For

We’re looking for specific businesses where stablecoin payments solve real problems.

1. SME Businesses with Global Clients: Fashion designers, bridal boutiques, custom clothiers, bespoke service providers with international clients who ship worldwide or provide remote services.

Your pain: 3–7 day payment delays, high cross-border fees, difficult deposit handling, currency conversion losses.

What you get: Instant stablecoin settlement, payment links for deposits/finals, dollar-denominated earnings, free integrated website during beta.

2. Freelancers & Service Providers: Developers, designers, consultants, educators working directly with clients or through Upwork/Fiverr/Toptal.

Your pain: 10–15% platform fees, poor PayPal conversion rates, dollar invoices settled in devalued Naira, no professional low-cost direct payment option.

What you get: 0.5% transaction fees, payment link dashboard, dollar earnings protected from devaluation, automatic receipts and audit trails.

3. E-commerce & Retail: Gadget stores, tech retailers, drone wholesalers, online merchants (B2B/B2C).

Your pain: Failed international transactions, 3–5% gateway fees, chargebacks, fraud, reconciliation headaches.

What you get: API integration, payment links, 0.5% fees, no chargebacks (crypto is final), and clean transaction records.

4. Content Creators & Educators: Web3 educators, course creators, online tutors, digital content producers.

Your pain: Platform fees (Gumroad, Teachable, Patreon), platform dependency, limited international payment options, and expensive recurring billing.

What you get: Subscription billing, one-time payment links, direct payments (no middleman), and automatic invoicing.

5. Fundraising Platforms: Crowdfunding platforms, donation platforms, NGOs raising money locally and internationally.

Your pain: Slow, expensive wire transfers, high cross-border fees, poor reporting, and difficulty tracking recurring contributions.

What you get: Donation links (one-time/recurring), lower fees, instant international contributions via stablecoins, and clean reporting for finance teams and donors.

What we’re NOT looking for (yet):

High-volume enterprises: If you’re processing 10,000 transactions per day, we’re not ready for you. We’re optimizing for businesses with 10–500 transactions per month during beta.

Businesses requiring instant Naira conversion: We don’t provide off-ramp services yet. You’ll need your own method to convert stablecoins to Naira if needed (P2P platforms like Binance, Bybit, or local exchanges work well).

Companies needing white-label solutions: If you want to rebrand BillingBase as your own product, that’s not our focus right now.

Anyone expecting zero bugs: This is a beta. There will be rough edges. If you need production-perfect software on day one, wait for our public launch later this year.

What Beta Participants Get

1. Free Integration & Setup

  • No setup fees: Most payment platforms charge $500-$2,000 for integration. We don’t.
  • No monthly subscription during beta: Use BillingBase for free while we’re testing.
  • Transaction fees waived for first 90 days (or first 100 transactions): Whichever comes first. After that, standard fees apply (0.5% or lower depending on volume).
  • One-on-one onboarding support: We’ll walk you through wallet setup, dashboard usage, and first transactions. No “figure it out yourself” documentation dumps.

2. Custom Solutions Based on Your Business Type

For fashion designers: We’ll build you a one-pager website with integrated payment links. Free during beta. Professional design. Consultation booking system if needed.

For e-commerce businesses: We’ll integrate BillingBase API directly with your existing website. Custom checkout flow. Webhook setup. Testing support.

For freelancers: Payment link dashboard optimized for invoicing. Easy sharing via email, WhatsApp, or social media. Automatic receipt generation.

For educators: Subscription billing for courses or memberships. Payment links for one-time consultations or content. Recurring payment automation.

3. Direct Access to the Team

  • Weekly feedback calls (optional): Tell us what’s working and what’s broken.
  • Slack/WhatsApp channel with founders: Direct line to Ngozi and the team. No support ticket black holes.
  • Priority bug fixes: If something breaks, we fix it fast. Beta participants get priority.
  • Your input shapes the product roadmap: We’re not building in isolation. If you need a feature, we’ll consider adding it based on beta feedback.

4. Early Mover Advantage

  • Lifetime discounted pricing after beta: When we launch publicly, you’ll pay less than new customers — forever.
  • Featured case studies (with permission): If you’re willing, we’ll showcase how you’re using BillingBase. Good for your brand, good for ours.
  • First access to new features: New integrations, reporting tools, or blockchain support? Beta participants see it first.
  • Referral program with revenue share: Refer other businesses to BillingBase and earn a percentage of their transaction fees.

What We’re Asking From You

This isn’t passive testing. If you just want to “try it out and see,” that’s not what we need.

Time Commitment:

  • 30–60 minutes: Initial setup (wallet, KYB verification, dashboard walkthrough, first test transaction)
  • 15–30 minutes periodically: Feedback calls (every 2 weeks to monthly, depending on usage)
  • 24–48 hour response time: When we ask “Can you try this again and tell us what happened?”
  • 5–10 minutes monthly: Survey on feature usage and pain points

Honesty Requirement:

We need brutal feedback, not polite validation.

Tell us what’s confusing, broken, or doesn’t fit your workflow. Share what your customers say — positive and negative. If onboarding felt complicated, the dashboard doesn’t make sense, or a feature exists but you don’t understand why you’d use it, tell us. Screenshots help. Screen recordings are better.

We’re not looking for cheerleaders. We’re looking for honest partners who’ll tell us when we’re wrong.

Real Usage:

Use this with real customers. Test transactions catch bugs, but real revenue matters more. We can’t improve what we don’t see in production.

Send a payment link to a real client. Use BillingBase for a real deposit. Integrate it and see if customers choose the crypto option. We’re not asking you to bet your entire business on this, but we need real scenarios to see what happens when money is on the line.

Patience:

There will be bugs — we’ll fix them fast. Some features won’t exist yet — we’re prioritizing based on feedback. Documentation might be incomplete — we’re improving it weekly.

Beta means “we’re still learning.” If you need production-perfect software on day one, wait for our public launch.

How to Apply to BillingBase Beta

Step 1: Visit the Waitlist Page and Sign Up

Go to billingbase.io/waitlist and fill out the form:

  • Input your email

Step 2: Quick Call (15 Minutes)

If your business is a good fit, we’ll schedule a short call:

  • We learn about your workflow and payment challenges
  • You ask questions about BillingBase
  • We determine if it’s a mutual fit

This isn’t a sales call. It’s a conversation. If we don’t think BillingBase solves your specific problems, we’ll tell you honestly.

Step 3: Onboarding

Once approved:

KYB verification (1–2 business days):

  • Business registration documents
  • Owner/founder identification
  • Basic compliance screening

We know this feels bureaucratic, but it’s necessary. Compliance protects everyone in the ecosystem — including you.

Wallet setup assistance:

  • We’ll help you set up a non-custodial wallet if you don’t have one
  • Connect your wallet to BillingBase
  • Fund it with a small amount for testing

Dashboard walkthrough:

  • How to create payment links
  • How to track transactions
  • How to generate invoices and receipts
  • How to set up webhooks (if needed)

First test transaction:

  • You’ll make a test payment to yourself
  • We’ll verify everything works
  • Then you’re ready for real customers

And that’s all.



Microsoft vows to cover full power costs for energy-hungry AI data centers

On Tuesday, Microsoft announced a new initiative called "Community-First AI Infrastructure" that commits the company to paying full electricity costs for its data centers and refusing to seek local property tax reductions.

As demand for generative AI services has increased over the past year, Big Tech companies have been racing to spin up massive new data centers for serving chatbots and image generators, facilities that can have profound economic effects on the areas where they are located. Among other issues, communities across the country have grown concerned that data centers are driving up residential electricity rates through heavy power consumption and straining water supplies through server cooling needs.

The International Energy Agency (IEA) projects that global data center electricity demand will more than double by 2030, reaching around 945 TWh, with the United States responsible for nearly half of total electricity demand growth over that period. This growth is happening while much of the country's electricity transmission infrastructure is more than 40 years old and under strain.


Microsoft responds to AI data center revolt, vowing to cover full power costs and reject local tax breaks

Microsoft’s Fairwater data center near Atlanta is part of the company’s broader AI expansion. (Microsoft Photo)

President Trump was right about Microsoft — but he only leaked part of the story.

Microsoft is changing its approach to building massive data centers for artificial intelligence, unveiling what it calls a “community first” initiative in response to growing opposition from people across the country facing higher electricity bills and dwindling water supplies.

The new plan, announced Tuesday morning in Washington, D.C., includes pledges to pay the company’s full power costs, reject local property tax breaks, replenish more water than it uses, train local workers, and invest in AI education and community programs.

“This sector worked one way in the past, and needs to work in some different ways going forward,” said Brad Smith, Microsoft president and vice chair, in an interview with GeekWire. He later described the shift as “both the right thing to do and the smart thing to do.”

Trump made headlines Monday night with a Truth Social post in advance of the news, saying his administration has been working with tech companies “to secure their commitment to the American People.” He called Microsoft “first up” and said it would “make major changes … to ensure that Americans don’t ‘pick up the tab’ for their POWER consumption.”

Backlash against AI expansion

Microsoft’s rollout comes at a critical juncture for tech. 

Amazon, Google, OpenAI, Microsoft and others are betting hundreds of billions of dollars on AI, but those ambitions hinge on their ability to build out the infrastructure to support them — a prospect that depends increasingly on the cooperation of local communities that have grown skeptical of the costs and tradeoffs.

Smith said Microsoft has been developing its initiative since September. He described it as a response to shifting public sentiment — which he witnessed firsthand during visits to his home state of Wisconsin for Microsoft’s data center expansion. Back in 2024, local residents wanted to talk about jobs. By last October, the big topics were electricity prices and water use.

Microsoft’s Brad Smith announces the “Community-First AI Infrastructure Plan” in Washington, D.C., Tuesday. (Screenshot via webcast)

“We saw this catch fire, to a degree, for many other companies in many other places around the country as each month unfolded,” he said. 

In data‑center hubs such as Virginia, Illinois and Ohio, residential power prices jumped 12–16% over the past year — noticeably faster than the U.S. average, according to U.S. government data — as grid operators scrambled to add capacity for large new facilities.

The issue has drawn scrutiny on Capitol Hill. Last month, three Democratic senators launched an investigation into whether tech giants are raising residential power bills, sending letters to Amazon, Microsoft, Google and Meta. An Amazon-funded study found that the company more than covers the utility costs associated with its electricity use in some regions.

Microsoft’s change of course

Microsoft’s new approach, as outlined in a post by Smith, is a clear departure from its own past practices. The company has accepted tax abatements for data centers in states including Ohio and Iowa, and its identity was kept under wraps in a Michigan township until recently.

In the interview, Smith promised new levels of transparency. 

He acknowledged that the traditional approach in the industry was for companies to buy land under nondisclosure agreements to avoid driving up prices — giving them a competitive edge but leaving communities in the dark about who was moving in and how they would operate.

“That is clearly not the path that’s going to take us forward,” he said. The companies that succeed with data centers in the long run, he added, “will be the companies that have a strong and healthy relationship with local communities.”

Asked if Microsoft hopes to inspire or compel others to follow suit, Smith stopped short of positioning Microsoft as the sole leader, crediting Amazon for “really good and well-executed work in this space” while adding that “the industry is going to need to set a higher bar for itself.”

Microsoft’s plan starts by addressing the electricity issue, pledging to work with utilities and regulators to ensure its electricity costs aren’t passed on to residential customers. Smith cited a new “Very Large Customers” rate structure in Wisconsin as a model, where data centers pay the full cost of the power they use, including grid upgrades required to support them.

The company’s other commitments include:

  • A 40% improvement in water efficiency by 2030, plus a pledge to replenish more water than it uses in each district where it operates. (Microsoft cited a recent $25 million investment in water and sewer upgrades in Leesburg, Va., as an example.)
  • A new partnership with North America’s Building Trades Unions for apprenticeship programs, and expansion of its Datacenter Academy for operations training.
  • Full payment of local property taxes, with no requests for municipal tax breaks.
  • AI training through schools, libraries, and chambers of commerce, plus new Community Advisory Boards at major data center sites.

Record spending on AI infrastructure

Microsoft did not say how much it plans to spend on these new initiatives, separate from its broader capital expenditures, which approached $35 billion in its first fiscal quarter.

Asked if the company would truly be able to follow through on all of these commitments, Smith said, “We have to follow through.” Internally, he said, Microsoft is “bringing some groups together” and “adding resources” to execute the plan, describing it as essential to the company’s long-term business strategy.

As for how Microsoft’s position squares with OpenAI’s push for federal incentives to support large-scale AI infrastructure projects, Smith drew a distinction. He said he supports federal help with permitting and land access, but not electricity subsidies.

“When it comes to things like electricity prices, when it comes to the water system, when it comes to training for local jobs, these are local issues,” he said.

Smith’s post references the Trump administration’s AI Action Plan and pledges to work with the Department of Labor on workforce programs. Microsoft says it will announce specific community partnerships during the first week of July, timed to America’s 250th anniversary.

NASA Marshall Removes 2 Historic Test Stands

By: Lee Mohon

NASA’s Marshall Space Flight Center in Huntsville, Alabama, removed two of its historic test stands – the Propulsion and Structural Test Facility and the Dynamic Test Facility – with carefully coordinated implosions on Jan. 10, 2026. The demolition of these historic structures is part of a larger project at Marshall that began in spring 2022, targeting several inactive structures and building a dynamic, interconnected campus ready for the next era of space exploration. Crews began demolition in December 2025 at the Neutral Buoyancy Simulator. Learn more about these iconic facilities.

Credits: NASA

Signals for 2026

We’re three years into a post-ChatGPT world, and AI remains the focal point of the tech industry. In 2025, several ongoing trends intensified: AI investment accelerated; enterprises integrated agents and workflow automation at a faster pace; and the toolscape for professionals seeking a career edge is now overwhelmingly expansive. But the jury’s still out on the ROI from the vast sums that have saturated the industry. 

We anticipate that 2026 will be a year of increased accountability. Expect enterprises to shift focus from experimentation to measurable business outcomes and sustainable AI costs. There are promising productivity and efficiency gains to be had in software engineering and development, operations, security, and product design, but significant challenges also persist.  

Bigger picture, the industry is still grappling with what AI is and where we’re headed. Is AI a worker that will take all our jobs? Is AGI imminent? Is the bubble about to burst? Economic uncertainty, layoffs, and shifting AI hiring expectations have undeniably created stark career anxiety throughout the industry. But as Tim O’Reilly pointedly argues, “AI is not taking jobs: The decisions of people deploying it are.” No one has quite figured out how to make money yet, but the organizations that succeed will do so by creating solutions that “genuinely improve … customers’ lives.” That won’t happen by shoehorning AI into existing workflows but by first determining where AI can actually improve upon them, then taking an “AI first” approach to developing products around these insights.

As Tim O’Reilly and Mike Loukides recently explained, “At O’Reilly, we don’t believe in predicting the future. But we do believe you can see signs of the future in the present.” We’re watching a number of “possible futures taking shape.” AI will undoubtedly be integrated more deeply into industries, products, and the wider workforce in 2026 as use cases continue to be discovered and shared. Topics we’re keeping tabs on include context engineering for building more reliable, performant AI systems; LLM posttraining techniques, in particular fine-tuning as a means to build more specialized, domain-specific models; the growth of agents, as well as the protocols, like MCP, to support them; and computer vision and multimodal AI more generally to enable the development of physical/embodied AI and the creation of world models. 

Here are some of the other trends that are pointing the way forward.

Software Development

In 2025, AI was embedded in software developers’ everyday work, transforming their roles—in some cases dramatically. A multitude of AI tools are now available to create code, and workflows are undergoing a transformation shaped by new concepts including vibe coding, agentic development, context engineering, eval- and spec-driven development, and more.

In 2026, we’ll see an increased focus on agents and the protocols, like MCP, that support them; new coding workflows; and how AI can assist with legacy code. But even as software development practices evolve, fundamental skills such as code review, design patterns, debugging, testing, and documentation are as vital as ever.

And despite major disruption from GenAI, programming languages aren’t going anywhere. Type-safe languages like TypeScript, Java, and C# provide compile-time validation that catches AI errors before production, helping mitigate the risks of AI-generated code. Memory safety mandates will drive interest in Rust and Zig for systems programming: Major players such as Google, Microsoft, Amazon, and Meta have adopted Rust for critical systems, and Zig is behind Anthropic’s most recent acquisition, Bun. And Python is central to creating powerful AI and machine learning frameworks, driving complex intelligent automation that extends far beyond simple scripting. It’s also ideal for edge computing and robotics, two areas where AI is likely to make inroads in the coming year.
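
As a concrete, if language-agnostic, illustration of that safety net, here is a minimal Python sketch: with type hints, a static checker such as mypy can flag an AI-suggested call that mixes up argument types before the code ever runs, analogous to the compile-time validation the article describes for TypeScript, Java, and C#. The `Invoice` and `apply_discount` names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    customer_id: str
    amount_cents: int  # store money as integer cents to avoid float drift

def apply_discount(invoice: Invoice, percent: float) -> Invoice:
    """Return a new invoice with a percentage discount applied."""
    discounted = round(invoice.amount_cents * (1 - percent / 100))
    return Invoice(invoice.customer_id, discounted)

# An AI assistant could plausibly suggest this call with the arguments swapped.
# A static checker such as mypy rejects it as an incompatible argument type,
# long before the code reaches production:
#
#     apply_discount("cust-42", Invoice("cust-42", 10_000))

if __name__ == "__main__":
    print(apply_discount(Invoice("cust-42", 10_000), 15.0))
```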

Takeaways

Which AI tools programmers use matters less than how they use them. With a wide choice of tools now available in the IDE and on the command line, and new options being introduced all the time, it’s useful to focus on the skills needed to produce good code rather than on the tool itself. After all, whatever tool they use, developers are ultimately responsible for the code it produces.

Effectively communicating with AI models is the key to doing good work. The more background AI tools are given about a project, the better the code they generate will be. Developers have to understand both how to manage what the AI knows about their project (context engineering) and how to communicate it (prompt engineering) to get useful outputs.
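
As a rough sketch of that division of labor, the snippet below gathers project background (context engineering) and pairs it with a task description (prompt engineering). The file names and the `call_model` placeholder are illustrative assumptions, not any specific tool’s API.

```python
from pathlib import Path

def build_prompt(project_root: str, task: str, max_chars: int = 8_000) -> str:
    """Assemble project background (context) and a task description (prompt)."""
    pieces = []
    for name in ("README.md", "ARCHITECTURE.md"):  # project background, if present
        path = Path(project_root) / name
        if path.exists():
            pieces.append(f"## {name}\n{path.read_text()[:max_chars]}")
    pieces.append(f"## Task\n{task}\n\nFollow the project's existing conventions.")
    return "\n\n".join(pieces)

prompt = build_prompt(".", "Add input validation to the signup handler.")
# `call_model` stands in for whichever LLM client the team actually uses:
# response = call_model(prompt)
```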

AI isn’t just a pair programmer; it’s an entire team of developers. Software engineers have moved beyond single coding assistants. They’re building and deploying custom agents, often within complex setups involving multi-agent scenarios, teams of coding agents, and agent swarms. But as the engineering workflow shifts from conducting AI to orchestrating AI, the fundamentals of building and maintaining good software—code review, design patterns, debugging, testing, and documentation—stay the same and will be what elevates purposeful AI-assisted code above the crowd.
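
A minimal sketch of that “team of developers” idea, with each agent reduced to a placeholder function: an orchestrator routes a task through planner, coder, and reviewer roles. Real setups would back each role with its own model calls, tools, and memory; the names here are hypothetical.

```python
# Each "agent" below stands in for an LLM-backed worker with its own prompt,
# tools, and memory; the orchestrator simply sequences the roles.

def planner(task: str) -> list[str]:
    return [f"write a function for: {task}", "add unit tests", "update docs"]

def coder(step: str) -> str:
    return f"# code produced for step: {step}"

def reviewer(artifact: str) -> bool:
    return "code produced" in artifact  # stand-in for a real review pass

def orchestrate(task: str) -> list[str]:
    approved = []
    for step in planner(task):
        artifact = coder(step)
        if reviewer(artifact):  # only keep work that passes review
            approved.append(artifact)
    return approved

for item in orchestrate("validate email addresses at signup"):
    print(item)
```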

Software Architecture

AI has progressed from being something architects might have to consider to something that is now essential to their work. They can use LLMs to accelerate or optimize architecture tasks; they can add AI to existing software systems or use it to modernize those systems; and they can design AI-native architectures, an approach that requires new considerations and patterns for system design. And even if they aren’t working with AI (yet), architects still need to understand how AI relates to other parts of their system and be able to communicate their decisions to stakeholders at all levels.

Takeaways

AI-enhanced and AI-native architectures bring new considerations and patterns for system design. Event-driven models can enable AI agents to act on incoming triggers rather than fixed prompts. In 2026, evolving architectures will become more important as architects look for ways to modernize existing systems for AI. And the rise of agentic AI means architects need to stay up-to-date on emerging protocols like MCP.
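
To make the event-driven pattern concrete, here is a small in-process sketch: an agent handler subscribes to an event type and runs when a matching event arrives, rather than waiting for a fixed prompt. In production the bus would be Kafka, NATS, or a managed event service, and the handler would call a real agent; everything here is illustrative.

```python
from collections import defaultdict
from typing import Callable

# Minimal in-process event bus; production systems would use a durable broker.
_handlers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

def subscribe(event_type: str, handler: Callable[[dict], None]) -> None:
    _handlers[event_type].append(handler)

def publish(event_type: str, payload: dict) -> None:
    for handler in _handlers[event_type]:
        handler(payload)

def triage_agent(event: dict) -> None:
    # Placeholder for an agent call: classify the event and decide next steps.
    print(f"agent triaging order {event['order_id']} flagged for {event['reason']}")

subscribe("order.flagged", triage_agent)
publish("order.flagged", {"order_id": "A-1017", "reason": "payment mismatch"})
```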

Many of the concerns from 2025 will carry over into the new year. Considerations such as incorporating LLMs and RAG into existing architectures, emerging architecture patterns and antipatterns specifically for AI systems, and the focus on API and data integrations elevated by MCP are critical.

The fundamentals still matter. Tools and frameworks are making it possible to automate more tasks. However, to successfully leverage these capabilities to design sustainable architecture, enterprise architects must have a full command of the principles behind them: when to add an agent or a microservice, how to consider cost, how to define boundaries, and how to act on the knowledge they already have.

Infrastructure and Operations

The InfraOps space is undergoing its most significant transformation since cloud computing, as AI evolves from a workload to be managed to an active participant in managing infrastructure itself. With infrastructure sprawling across multicloud environments, edge deployments, and specialized AI accelerators, manual management is becoming nearly impossible. In 2026, the industry will keep moving toward self-healing systems and predictive observability—infrastructure that continuously optimizes itself, shifting the human role from manual maintenance to system oversight, architecture, and long-term strategy.

Platform engineering makes this transformation operational, abstracting infrastructure complexity behind self-service interfaces, which lets developers deploy AI workloads, implement observability, and maintain security without deep infrastructure expertise. The best platforms will evolve into orchestration layers for autonomous systems. While fully autonomous systems remain on the horizon, the trajectory is clear.

Takeaways

AI is becoming a primary driver of infrastructure architecture. AI-native workloads demand GPU orchestration at scale, specialized networking protocols optimized for model training and inference, and frameworks like Ray on Kubernetes that can distribute compute intelligently. Organizations are redesigning infrastructure stacks to accommodate these demands and are increasingly considering hybrid environments and alternatives to hyperscalers to power their AI workloads—“neocloud” platforms like CoreWeave, Lambda, and Vultr.
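
As a minimal illustration of distributing work with Ray (assuming the `ray` package is installed; inside a Ray cluster on Kubernetes the same code targets the cluster via `ray.init(address=...)`), the sketch below fans a preprocessing task out across workers. The workload itself is a stand-in.

```python
import ray  # pip install ray

ray.init()  # local mode here; inside a cluster, ray.init(address="auto")

@ray.remote
def preprocess(shard: list[int]) -> int:
    # Stand-in for a real preprocessing or inference step.
    return sum(x * x for x in shard)

shards = [list(range(i, i + 1_000)) for i in range(0, 10_000, 1_000)]
futures = [preprocess.remote(s) for s in shards]  # scheduled across available workers
print(sum(ray.get(futures)))
```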

AI is augmenting the work of operations teams with real-time intelligence. Organizations are turning to AIOps platforms to predict failures before they cascade, identify anomalies humans would miss, and surface optimization opportunities in telemetry data. These systems aim to amplify human judgment, giving operators superhuman pattern recognition across complex environments.
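
A deliberately tiny stand-in for what AIOps platforms do at much larger scale: flag telemetry samples that deviate sharply from the rest. Real systems use far more robust, seasonality-aware and multivariate methods, but the shape of the problem is the same.

```python
import statistics

def flag_anomalies(samples_ms: list[float], threshold: float = 2.0) -> list[int]:
    """Return indices of latency samples more than `threshold` standard
    deviations from the mean. A toy detector, not a production AIOps model."""
    mean = statistics.fmean(samples_ms)
    stdev = statistics.pstdev(samples_ms) or 1.0
    return [i for i, v in enumerate(samples_ms) if abs(v - mean) / stdev > threshold]

latencies = [42.0, 45.1, 43.7, 44.2, 390.0, 41.9, 43.3]  # one obvious spike
print(flag_anomalies(latencies))  # -> [4]
```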

AI is evolving into an autonomous operator that makes its own infrastructure decisions. Companies will implement emerging “agentic SRE” practices: systems that reason about infrastructure problems, form hypotheses about root causes, and take independent corrective action, replicating the cognitive workload that SREs perform, not just following predetermined scripts.
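
A schematic of that loop, with the reasoning step reduced to a runbook lookup: observe an alert, form hypotheses, and propose (or, with approval, execute) remediation steps. The runbook and helper names are hypothetical; an actual agentic-SRE system would replace the lookup with model-driven diagnosis against live telemetry.

```python
# Illustrative agentic-SRE loop, not a real incident-response framework.

RUNBOOK = {
    "high_error_rate": ["check recent deploys", "roll back if a deploy correlates"],
    "disk_pressure":   ["identify largest log volumes", "rotate and compress logs"],
}

def form_hypotheses(alert: dict) -> list[str]:
    """Stand-in for an LLM call that reasons about likely root causes."""
    return RUNBOOK.get(alert["type"], ["escalate to on-call engineer"])

def handle_alert(alert: dict, auto_approve: bool = False) -> None:
    for step in form_hypotheses(alert):
        if auto_approve:
            print(f"executing: {step}")
        else:
            print(f"proposed (awaiting human approval): {step}")

handle_alert({"type": "high_error_rate", "service": "checkout"})
```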

Data

The big story of the back half of 2025 was agents. While the groundwork has been laid, in 2026 we expect focus on the development of agentic systems to persist—and this will necessitate new tools and techniques, particularly on the data side. AI and data platforms continue to converge, with vendors like Snowflake, Databricks, and Salesforce releasing products to help customers build and deploy agents. 

Beyond agents, AI is making its influence felt across the entire data stack, as data professionals target their workflows to support enterprise AI. Significant trends include real-time analytics, enhanced data privacy and security, and the increasing use of low-code/no-code tools to democratize data access. Sustainability also remains a concern, and data professionals need to consider ESG compliance, carbon-aware tooling, and resource-optimized architectures when designing for AI workloads.

Takeaways

Data infrastructure continues to consolidate. The consolidation trend has not only affected the modern data stack but also more traditional areas like the database space. In response, organizations are being more intentional about what kind of databases they deploy. At the same time, modern data stacks have fragmented across cloud platforms and open ecosystems, so engineers must increasingly design for interoperability. 

A multiple-database approach is more important than ever. Vector databases like Pinecone, Milvus, Qdrant, and Weaviate help power agentic AI—while they’re a newer technology, companies are beginning to adopt them more widely. DuckDB’s popularity is growing for running analytical queries. And even though it’s been around for a while, ClickHouse, an open source distributed OLAP database used for real-time analytics, has finally broken through with data professionals.
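
As a small taste of why DuckDB has caught on for ad hoc analytics, the sketch below runs an in-process SQL query with no server to stand up (assuming the `duckdb` package is installed). The table here is synthetic; in practice the same style of query reads Parquet or CSV files directly.

```python
import duckdb  # pip install duckdb; runs in-process, no server required

con = duckdb.connect()  # in-memory database
con.execute("CREATE TABLE events AS SELECT id FROM range(1, 1001) AS t(id)")
n, mean_id = con.execute(
    "SELECT count(*), avg(id) FROM events WHERE id % 7 = 0"
).fetchone()
print(n, mean_id)  # 142 500.5
```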

The infrastructure to support autonomous agents is coming together. GitOps, observability, identity management, and zero-trust orchestration will all play key roles. And we’re following a number of new initiatives that facilitate agentic development, including AgentDB, a database designed specifically to work effectively with AI agents; Databricks’ recently announced Lakebase, a Postgres database/OLTP engine integrated within the data lakehouse; and Tiger Data’s Agentic Postgres, a database “designed from the ground up” to support agents.

Security

AI is a threat multiplier—59% of tech professionals cited AI-driven cyberthreats as their biggest concern in a recent survey. In response, the cybersecurity analyst role is shifting from low-level human-in-the-loop tasks to complex threat hunting, AI governance, advanced data analysis and coding, and human-AI teaming oversight. But addressing AI-generated threats will also require a fundamental transformation in defensive strategy and skill acquisition—and the sooner it happens, the better.

Takeaways

Security professionals now have to defend a broader attack surface. The proliferation of AI agents expands that surface, and security tools must evolve to protect it. Implementing zero trust for machine identities is a smart opening move to mitigate sprawl and nonhuman traffic. Security professionals must also harden their AI systems against common threats such as prompt injection and model manipulation.
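
A toy illustration of the “zero trust for machine identities” opening move: verify which agent is calling and check the requested tool against a per-agent allowlist before anything executes. Real deployments would lean on workload identity (mTLS, SPIFFE, cloud IAM) rather than a shared secret; the names here are illustrative.

```python
import hashlib
import hmac

SECRET = b"rotate-me-regularly"  # placeholder; real systems avoid shared secrets

def sign(agent_id: str) -> str:
    return hmac.new(SECRET, agent_id.encode(), hashlib.sha256).hexdigest()

def authorize_tool_call(agent_id: str, token: str, tool: str,
                        allowlist: dict[str, set[str]]) -> bool:
    # Verify the machine identity, then enforce least privilege per agent.
    identity_ok = hmac.compare_digest(token, sign(agent_id))
    return identity_ok and tool in allowlist.get(agent_id, set())

allow = {"report-agent": {"read_metrics"}}
print(authorize_tool_call("report-agent", sign("report-agent"), "read_metrics", allow))  # True
print(authorize_tool_call("report-agent", sign("report-agent"), "delete_db", allow))     # False
```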

Organizations are struggling with governance and compliance. Striking a balance between data utility and vulnerability requires adherence to data governance best practices (e.g., least privilege). Government agencies, industry and professional groups, and technology companies are developing a range of AI governance frameworks to help guide organizations, but it’s up to companies to translate these technical governance frameworks into board-level risk decisions and actionable policy controls.

The security operations center (SOC) is evolving. The velocity and scale of AI-driven attacks can overwhelm traditional SIEM/SOAR solutions. Expect increased adoption of agentic SOC—a system of specialized, coordinated AI agents for triage and response. This shifts the focus of the SOC analyst from reactive alert triage to proactive threat hunting, complex analysis, and AI system oversight.

Product Management and Design

Business focus in 2025 shifted from scattered AI experiments to the challenge of building defensible, AI-native businesses. Next year we’re likely to see product teams moving from proof of concept to proof of value.

One thing to look for: Design and product responsibilities may consolidate under a “product builder”—a full-stack generalist in product, design, and engineering who can rapidly build, validate, and launch new products. Companies are currently hiring for this role, although few people actually possess the full skill set at the moment. But regardless of whether product builders become ascendant, product folks in 2026 and beyond will need the ability to combine product validation, good-enough engineering, and rapid design, all enabled by AI as a core accelerator. We’re already seeing the “product manager” role becoming more technical as AI spreads throughout the product development process. Nearly all PMs use AI, but they’ll increasingly employ purpose-built AI workflows for research, user testing, data analysis, and prototyping.

Takeaways

Companies need to bridge the AI product strategy gap. Most companies have moved past simple AI experiments but are now facing a strategic crisis. Their existing product playbooks (market sizing, roadmapping, UX) weren’t designed for AI-native products. Organizations must develop clear frameworks for building a portfolio of differentiated AI products, managing new risks, and creating sustainable value.

AI product evaluation is now mission-critical. As AI becomes a core product component and strategy matures, rigorous evaluation is the key to turning products that are good on paper into those that are great in production. Teams should start by defining what “good” means for their specific context, then build reliable evals for models, agents, and conversational UIs to ensure they’re hitting that target.
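
A minimal sketch of “define what good means, then build evals against it”: a labelled test set, a scoring loop, and a pass/fail target. The `classify_ticket` stand-in would be a real model or agent call in practice, and the cases and target are illustrative.

```python
# Tiny eval harness sketch; in practice the eval set and target come from
# the team's own definition of "good" for their product context.

def classify_ticket(text: str) -> str:
    # Stand-in for a model or agent call being evaluated.
    return "billing" if "invoice" in text.lower() else "other"

EVAL_SET = [
    ("Where is my invoice for March?", "billing"),
    ("The app crashes on launch.", "other"),
    ("Please resend invoice #114.", "billing"),
]

def run_evals(target: float = 0.9) -> None:
    correct = sum(classify_ticket(text) == label for text, label in EVAL_SET)
    score = correct / len(EVAL_SET)
    status = "PASS" if score >= target else "FAIL"
    print(f"{status}: {correct}/{len(EVAL_SET)} correct ({score:.0%}, target {target:.0%})")

run_evals()
```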

Design’s new frontier is conversations and interactions. Generative AI has pushed user experience beyond static screens into probabilistic new multimodal territory. This means a harder shift toward designing nonlinear, conversational systems, including AI agents. In 2026, we’re likely to see increased demand for AI conversational designers and AI interaction designers to devise conversation flows for chatbots and even design a model’s behavior and personality.

What It All Means

While big questions about AI remain unanswered, the best way to plan for uncertainty is to consider the real value you can create for your users and your teams right now. The tools will improve, as they always do, and the strategies to use them will grow more complex. Being deeply versed in the core knowledge of your area of expertise gives you the foundation you’ll need to take advantage of these quickly evolving technologies—and ensure that whatever you create will be built on bedrock, not shaky ground.
