Why global instant payments fail not because of technology, but because settlement, governance and programmability remain structurally misaligned
Figure: Layered global payment architecture. Fast execution and orchestration layers sit above a stable settlement foundation, highlighting why global instant payments depend less on new technology and more on aligning settlement, governance and programmability.
Abstract
Despite significant advances in instant payment systems, tokenisation and digital asset infrastructures, global payments remain structurally fragmented. While execution speeds have increased markedly, settlement finality, governance and programmability continue to be addressed in isolation rather than as integrated components of a coherent financial architecture. This fragmentation becomes particularly visible as payment processes move towards real-time, event-driven and increasingly automated models in global trade and treasury operations.
This article examines four prominent approaches to modern payment infrastructure (Brazil’s Pix, Europe’s SEPA, emerging BRICS cross-border initiatives and the Bank for International Settlements’ experimental projects), not as competing systems but as partial solutions optimising different layers of the payment stack. Each addresses specific challenges, ranging from domestic execution efficiency to regional standardisation and wholesale settlement in central bank money. None, however, provides an end-to-end framework that aligns execution, settlement finality, governance and programmability across jurisdictions.
By analysing these initiatives through a layered architectural lens, the article argues that the central challenge of global instant payments is no longer technological capability, but institutional coordination and settlement design. It proposes that sustainable innovation in global payments requires the integration of programmable processes with interoperable settlement layers anchored in central bank money, supported by open governance and legally robust finality. In this context, the debate shifts from the optimisation of individual rails to the design of shared infrastructure capable of supporting real-time global trade.
Introduction
Over the past decade, the global payments landscape has undergone a remarkable acceleration. Instant payment systems, real-time treasury operations, tokenised assets and digital settlement experiments have moved from conceptual pilots to operational reality in multiple regions. Yet this apparent progress masks a deeper structural tension. While payments are increasingly executed in real time, the underlying settlement, governance and legal finality mechanisms remain fragmented, jurisdiction-bound and inconsistently integrated. The result is a global payments environment that is faster, but not fundamentally more coherent.
This tension is becoming increasingly visible as global trade and corporate finance adopt event-driven and programmable operating models. Execution speed alone is no longer sufficient. As payment processes automate and scale across borders, the question shifts from how quickly money moves to when, where and under which authority value is finally settled. It is at this architectural level, rather than at the level of individual products or political narratives, that today’s debates around instant payments, central bank digital currencies and alternative settlement networks must be examined.
This article approaches that examination by comparing four influential payment infrastructures (Pix, SEPA, emerging BRICS initiatives and BIS-led experiments), not as rivals but as complementary attempts to address different layers of a shared problem.
Speed Is No Longer the Binding Constraint
For much of the past three decades, the evolution of payment systems has been framed primarily as a problem of speed and efficiency. Batch-based processing, limited operating hours and fragmented correspondent banking arrangements were widely identified as the principal bottlenecks in cross-border and domestic payments. Considerable institutional and technological effort was therefore directed towards accelerating execution, reducing cut-off times and improving straight-through processing.
These efforts have largely succeeded. A growing number of jurisdictions now operate domestic instant payment systems that provide near-real-time execution and immediate availability of funds to end users. Brazil’s Pix, Europe’s SEPA Instant Credit Transfer (SCT Inst), India’s UPI and similar systems demonstrate that real-time execution at scale is technically feasible, economically viable and socially impactful. From a purely operational perspective, the question of “how fast payments can move” has largely been answered.
However, the increasing prevalence of real-time execution has exposed a more fundamental limitation. Speed optimises only one layer of the payment process: execution. It does not, by itself, resolve questions of settlement finality, legal certainty, balance sheet exposure or cross-system interoperability. In fact, as execution accelerates, the weaknesses of underlying settlement arrangements become more pronounced rather than less relevant.
This distinction is particularly important in cross-border and multi-currency contexts. While instant payment systems can deliver rapid crediting of accounts, the ultimate settlement of obligations often continues to rely on deferred processes, commercial bank money and jurisdiction-specific legal frameworks. As a result, faster execution may coexist with persistent settlement risk, intraday liquidity pressures and fragmented governance structures.
From an architectural perspective, this implies that further gains in payment system performance cannot be achieved by execution-layer optimisation alone. Once speed ceases to be the binding constraint, attention must shift to the design and coordination of settlement mechanisms, governance models and the legal foundations of finality. It is at this level that contemporary initiatives increasingly diverge, and where meaningful comparison between systems such as Pix, SEPA, BRICS-linked proposals and BIS-led experiments becomes analytically productive.
Execution, Clearing and Settlement as Distinct Architectural Layers
A persistent source of confusion in contemporary payment debates arises from the tendency to treat execution, clearing and settlement as a single, homogeneous process. While closely related in operational terms, these functions represent analytically distinct layers within the payment architecture, each governed by different technical, legal and institutional logics.
Execution refers to the initiation and routing of a payment instruction and the conditional crediting of accounts. Modern instant payment systems have significantly optimised this layer, enabling near-immediate user-facing outcomes. Clearing, by contrast, concerns the calculation and netting of obligations between participating institutions. Settlement represents the final discharge of those obligations through the transfer of a settlement asset, thereby extinguishing counterparty claims and producing legal finality.
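The distinction can be made concrete with a minimal sketch. The following Python fragment walks the same toy payment instructions through all three layers; the participant names, amounts and single settlement cycle are illustrative assumptions rather than a model of any particular system.

```python
from collections import defaultdict

# Toy payment instructions for illustration: (payer bank, payee bank, amount).
instructions = [
    ("A", "B", 100.0),
    ("B", "A", 40.0),
    ("A", "C", 25.0),
    ("C", "B", 10.0),
]

# Execution layer: instructions are validated and routed. End users may see a
# conditional credit within seconds, but no interbank obligation is discharged.
executed = [i for i in instructions if i[2] > 0]

# Clearing layer: obligations between institutions are netted into a single
# position per participant.
net = defaultdict(float)
for payer, payee, amount in executed:
    net[payer] -= amount
    net[payee] += amount

# Settlement layer: net positions are discharged by transferring a settlement
# asset (e.g. balances on a central bank ledger). Only this step extinguishes
# counterparty claims and produces legal finality.
ledger = {"A": 500.0, "B": 500.0, "C": 500.0}
for bank, position in net.items():
    ledger[bank] += position

print(dict(net))  # {'A': -85.0, 'B': 70.0, 'C': 15.0}
print(ledger)     # {'A': 415.0, 'B': 570.0, 'C': 515.0}
```

Note that in this sketch the end-user experience is fully determined after the execution step, while risk is only resolved after the settlement step; everything between those two moments is exposure.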
In traditional banking infrastructures, these layers are tightly coupled but not temporally aligned. Execution may occur within seconds, while clearing and settlement may follow hours or even days later, often across different systems and balance sheets. This decoupling has historically been managed through credit risk, liquidity buffers and legal constructs designed for batch-based environments.
As payment systems move towards real-time and continuous operation, this architectural separation becomes increasingly consequential. Accelerated execution compresses the time available to manage settlement risk, while automated processes reduce the scope for discretionary intervention. Under such conditions, the nature of the settlement asset and the legal framework governing finality assume heightened importance.
Commercial bank money, which dominates settlement in most existing systems, represents a private liability and therefore embeds counterparty risk by design. Central bank money, by contrast, constitutes a public settlement asset with unique properties of legal certainty, risk insulation and systemic trust. The distinction between these assets is not merely technical but foundational, as it determines how risk is distributed across participants and how resilient a system remains under stress.
An architectural analysis must therefore distinguish clearly between improvements at the execution layer and transformations at the settlement layer. While the former can deliver efficiency gains and enhanced user experience, only the latter can fundamentally alter the risk, governance and interoperability characteristics of a payment system. This distinction provides the analytical basis for assessing whether current initiatives represent incremental optimisation or structural innovation.
It is against this layered framework that domestic instant payment systems, regional standards and emerging cross-border settlement proposals must be evaluated. Their differences lie less in technological sophistication than in how, and whether, they address the settlement layer explicitly.
Domestic Instant Payment Systems as Execution-Layer Optimisations
Over the past decade, a growing number of jurisdictions have introduced domestic instant payment systems designed to modernise retail and business payments. These systems typically prioritise speed, availability and user experience, offering continuous operation, immediate confirmation and increasingly rich data exchange. From an execution perspective, they represent a significant departure from batch-oriented legacy infrastructures.
Architecturally, however, these systems are best understood as execution-layer optimisations rather than as comprehensive transformations of the payment stack. They focus on the rapid transmission and processing of payment instructions between participant institutions, often supported by enhanced messaging standards and real-time liquidity management. The user-facing outcome is near-immediate crediting, which materially improves cash-flow visibility and operational efficiency for end users.
Crucially, these improvements do not, in themselves, alter the underlying settlement logic. In most implementations, final settlement continues to rely on commercial bank money, with positions ultimately reconciled through deferred or periodic settlement processes. Even where prefunding or intraday liquidity mechanisms are employed, the settlement asset remains a private liability rather than a public one.
This distinction matters because execution speed and settlement finality are not interchangeable. Instant execution reduces operational friction but does not eliminate counterparty exposure. As long as settlement occurs outside central bank balance sheets, the system’s resilience depends on the creditworthiness and liquidity management of participating institutions, as well as on legal arrangements designed to manage failure scenarios.
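A toy sketch illustrates the gap, assuming a hypothetical deferred-net-settlement arrangement with invented bank names and amounts: the end user is credited instantly, while the interbank obligation merely accumulates as counterparty exposure until the next settlement cycle runs.

```python
from collections import defaultdict

# Hypothetical deferred-net-settlement arrangement; names and amounts are
# illustrative. Claims of the payee bank on the payer bank accumulate until
# the next settlement cycle runs.
exposure = defaultdict(float)

def execute_instant(payer_bank: str, payee_bank: str, amount: float) -> None:
    """The end user sees an immediate credit; the interbank obligation is
    merely recorded, not discharged."""
    exposure[(payee_bank, payer_bank)] += amount

execute_instant("BankA", "BankB", 250.0)
execute_instant("BankA", "BankB", 120.0)

# Until settlement occurs, BankB carries 370.0 of credit exposure to BankA:
# if BankA failed intraday, the "instant" payments were never final.
print(exposure[("BankB", "BankA")])  # 370.0
```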
From a systemic perspective, domestic instant payment systems therefore deliver substantial efficiency gains without fundamentally reconfiguring risk allocation. They improve how quickly value appears to move, but not how or where value is ultimately settled. This makes them highly effective within stable domestic environments, yet structurally limited when extended across borders, currencies or regulatory regimes.
The growing success of these systems can paradoxically obscure this limitation. High adoption rates and positive user experience create the impression of infrastructural completeness, even though the settlement layer remains unchanged. As long as payments remain predominantly domestic and low-risk, this architectural gap may appear tolerable. However, as real-time execution becomes the norm and payment flows increasingly span jurisdictions, the absence of an equally real-time, risk-free settlement layer becomes more pronounced.
Understanding domestic instant payment systems as optimisation layers rather than as full-stack solutions is therefore essential. It clarifies why further gains in speed and availability, while valuable, cannot by themselves address challenges of cross-border interoperability, systemic risk reduction and global scalability. These challenges reside not at the execution layer, but at the level of settlement architecture.
Why Cross-Border Extension Exposes the Limits of Execution-Only Models
The extension of real-time payment capabilities across borders introduces a set of structural challenges that execution-layer optimisation alone cannot resolve. While domestic instant payment systems benefit from legal harmonisation, shared currency frameworks and aligned supervisory regimes, these conditions rarely persist beyond national or regional boundaries.
Cross-border payments operate at the intersection of multiple currencies, legal systems, regulatory frameworks and liquidity regimes. Execution speed in such environments does not simply amplify efficiency; it amplifies coordination problems. Each additional jurisdiction introduces new settlement calendars, risk thresholds, compliance requirements and failure modes, which cannot be neutralised by faster messaging or improved user interfaces alone.
In execution-only architectures, cross-border connectivity is typically achieved through bilateral or hub-based linkages between domestic systems. These arrangements focus on routing payment instructions and managing prefunding or liquidity bridges between participants. While such approaches can reduce friction at the margins, they leave the fundamental settlement logic unchanged. Obligations continue to be settled in commercial bank money, often across multiple balance sheets and time zones.
This creates a structural asymmetry: execution becomes real-time, while settlement remains fragmented, deferred and risk-bearing. As a result, credit and liquidity risk are not eliminated but redistributed, often in opaque ways. The faster the execution layer operates, the more sensitive the system becomes to settlement disruptions, liquidity bottlenecks and legal uncertainty.
Furthermore, execution-only cross-border models tend to rely on conditional guarantees, bilateral credit lines or collateralisation schemes to manage risk. These mechanisms introduce complexity and cost, and they scale poorly as the number of participants and corridors increases. What appears manageable in limited pilot corridors becomes increasingly brittle when extended to global networks.
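Simple arithmetic makes the scaling problem concrete: connecting n domestic systems bilaterally requires n(n-1)/2 corridors, each typically backed by prefunded liquidity in both directions. The figures below are purely illustrative and not drawn from any specific network.

```python
def bilateral_corridors(n_systems: int) -> int:
    """Number of bilateral links needed to connect every pair of systems."""
    return n_systems * (n_systems - 1) // 2

for n in (5, 20, 50):
    corridors = bilateral_corridors(n)
    # Assume each corridor locks prefunded liquidity in both directions.
    pools = 2 * corridors
    print(f"{n:>3} systems -> {corridors:>4} corridors, {pools:>4} prefunding pools")

#   5 systems ->   10 corridors,   20 prefunding pools
#  20 systems ->  190 corridors,  380 prefunding pools
#  50 systems -> 1225 corridors, 2450 prefunding pools
```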
From an institutional perspective, this exposes a deeper limitation. Cross-border payments are not merely technical exchanges between systems; they are legal and economic events that require universally recognised finality. Without a shared settlement asset that is trusted across jurisdictions, execution-layer connectivity cannot produce true interoperability. It can only simulate immediacy while deferring risk resolution.
The consequence is a proliferation of partially connected networks rather than a coherent global infrastructure. Each linkage optimises locally, but the system as a whole remains fragmented. This fragmentation is not accidental; it reflects the absence of a settlement layer capable of operating across borders with uniform legal certainty and risk neutrality.
Cross-border extension thus acts as a stress test for execution-only models. It reveals that speed, availability and user experience, while necessary, are insufficient conditions for global scalability. The binding constraint is not how fast instructions move, but how and where value is ultimately settled.
The Settlement Asset as the Missing Variable in Global Scalability
The structural limitations identified in cross-border execution-only models ultimately converge on a single, often underexplored variable: the nature of the settlement asset itself. While execution systems determine how payment instructions are transmitted and processed, it is the settlement asset that determines whether obligations are discharged with legal certainty, risk neutrality and systemic trust.
In most existing payment architectures, settlement relies on commercial bank money. This asset represents a private liability, issued by individual institutions and embedded within their balance sheets. While commercial bank money functions efficiently within established domestic frameworks, its suitability diminishes as payment processes become continuous, automated and cross-border by design. The resulting exposure to credit, liquidity and legal risk does not disappear with faster execution; it becomes more immediate and more tightly coupled to system stability.
Global scalability requires a settlement asset that is universally recognised, legally final and institutionally neutral. Central bank money uniquely fulfils these criteria. It constitutes the ultimate settlement asset within a currency area, free from private credit risk and anchored in public law. Historically, access to central bank settlement has been restricted to regulated financial institutions, reflecting the batch-oriented nature of legacy infrastructures and the need to manage systemic risk through controlled participation.
As payment processes evolve towards real-time operation, this historical separation between execution innovation and settlement architecture becomes increasingly untenable. Automated, condition-based transactions require deterministic settlement outcomes. Without a settlement asset that can support continuous finality, programmability at the execution layer merely accelerates the accumulation of contingent claims.
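One way to picture a deterministic settlement outcome is a payment-versus-payment (PvP) transfer in which both currency legs settle atomically or not at all. The sketch below assumes two hypothetical central-bank-style ledgers; it illustrates the atomicity requirement, not any real system’s interface.

```python
# Two hypothetical central-bank-style ledgers; balances are illustrative.
usd_ledger = {"BankA": 1000.0, "BankB": 200.0}
eur_ledger = {"BankA": 100.0, "BankB": 900.0}

def settle_pvp(usd_amount: float, eur_amount: float) -> bool:
    """BankA pays USD to BankB while BankB pays EUR to BankA: both legs
    settle at the same instant, or neither does."""
    # Checking both legs first yields a deterministic outcome and prevents
    # one-sided exposure from ever arising.
    if usd_ledger["BankA"] < usd_amount or eur_ledger["BankB"] < eur_amount:
        return False  # neither leg settles
    usd_ledger["BankA"] -= usd_amount
    usd_ledger["BankB"] += usd_amount
    eur_ledger["BankB"] -= eur_amount
    eur_ledger["BankA"] += eur_amount
    return True  # both legs final simultaneously

print(settle_pvp(500.0, 450.0))  # True
print(usd_ledger, eur_ledger)
```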
This observation reframes the debate around payment modernisation. The central challenge is not how to connect execution systems more efficiently, but how to anchor those systems to a settlement layer capable of operating at the same temporal and legal resolution. Without such anchoring, global interoperability remains fragile, dependent on bilateral arrangements and risk mitigation techniques that do not scale.
The settlement asset therefore functions as the gravitational centre of the payment architecture. It determines not only risk distribution, but also governance, access and trust. Any attempt to construct globally interoperable payment infrastructures without addressing this layer will inevitably reproduce fragmentation at higher speeds.
Recognising the settlement asset as a design variable rather than a given marks a conceptual shift. It opens the analytical space for institutional innovation at the infrastructure level, rather than continued optimisation within inherited constraints. This shift provides the foundation for understanding why recent initiatives increasingly focus on settlement itself, rather than solely on execution efficiency.
Institutional Responses to the Settlement Constraint
Once the settlement asset is recognised as the limiting factor in global payment scalability, recent institutional initiatives can be reinterpreted not as isolated experiments, but as convergent responses to a shared architectural problem. Across jurisdictions and governance models, a growing number of actors are exploring ways to reintroduce central bank money as an active settlement layer capable of supporting real-time, cross-border processes.
At the multilateral level, initiatives coordinated by international institutions have focused explicitly on settlement interoperability rather than on execution efficiency. Experimental platforms exploring multi-currency settlement, shared ledgers and synchronised settlement mechanisms reflect a recognition that global payments cannot be stabilised through bilateral optimisation alone. These projects treat settlement finality as a public good, requiring coordination across central banks rather than competition between private intermediaries.
Parallel to these efforts, several emerging market economies and regional blocs have begun to articulate settlement infrastructures aimed at reducing dependency on correspondent banking chains and dominant reserve currencies. While often framed in geopolitical terms, these initiatives are more coherently understood as attempts to regain control over settlement finality and liquidity management in cross-border trade. Their common feature is not political alignment, but the explicit use of central bank money as the settlement anchor.
Within advanced economies, central bank digital currency initiatives represent a complementary response. While many early discussions have focused on retail use cases, the underlying architectural implication is broader. By making central bank money natively compatible with digital infrastructures, CBDCs reopen the question of how settlement access, programmability and interoperability can be designed for continuous operation. Importantly, this does not imply a displacement of private sector innovation, but a reconfiguration of its foundation.
These institutional responses share a critical characteristic: they shift the locus of innovation from the execution layer to the settlement layer. Rather than attempting to optimise existing commercial bank-based arrangements indefinitely, they seek to redesign the conditions under which settlement occurs. This marks a departure from incrementalism towards structural intervention at the infrastructure level.
At the same time, these approaches remain incomplete. Most initiatives address either wholesale or retail settlement in isolation, and few are yet integrated into corporate payment and treasury workflows. Nevertheless, they signal an emerging consensus: global payment systems cannot achieve real-time, programmable and interoperable operation without revisiting the role of central bank money in settlement.
Understanding these developments as architectural responses rather than ideological positions allows for a more constructive evaluation. It highlights convergence where public discourse often emphasises divergence, and it clarifies that the underlying objective is not control over execution, but stability, finality and trust at the settlement layer.
Synthesising Pix, SEPA, BRICS and BIS Within a Layered Framework
When examined through a layered architectural lens, the apparent diversity of contemporary payment initiatives becomes analytically coherent. Systems such as Pix, SEPA Instant, emerging BRICS-linked settlement efforts and BIS-coordinated projects do not represent competing visions of the future, but rather address different layers of the same structural problem.
Domestic instant payment systems, exemplified by Pix and SEPA Instant, operate primarily at the execution layer. They demonstrate that real-time payment initiation, continuous availability and high-volume processing can be achieved reliably within harmonised legal and currency environments. Their success lies in operational efficiency, user adoption and economic inclusion. However, their settlement logic remains anchored in commercial bank money, rendering them optimised yet incomplete from a global scalability perspective.
Cross-border initiatives associated with BRICS economies focus on a different constraint. Their primary objective is not execution speed for end users, but sovereignty over settlement and liquidity management in international trade. By emphasising settlement in central bank money and reducing reliance on correspondent banking chains and dominant reserve currencies, these efforts explicitly target the settlement layer. While often interpreted through geopolitical narratives, architecturally they address the same deficiency identified in execution-only models: the absence of a universally trusted settlement anchor.
BIS-led initiatives occupy a distinct but complementary position. They do not aim to create production payment systems, but to explore interoperable settlement architectures across currencies and jurisdictions. By experimenting with shared settlement platforms, synchronised settlement mechanisms and multi-currency coordination, these projects explicitly treat settlement as a design problem rather than an inherited constraint. Their value lies less in immediate deployment and more in defining architectural primitives for future systems.
Viewed together, these initiatives illustrate a fragmented but converging trajectory. Execution-layer optimisation, regional standardisation and settlement-layer innovation are progressing in parallel, but largely without integration. Each addresses a necessary condition for global instant payments, yet none alone provides a sufficient solution. The absence of a unifying architectural framework explains why debates often oscillate between domestic efficiency, monetary sovereignty and technological experimentation without resolving their interdependence.
This synthesis suggests that the current landscape should not be interpreted as a competition between models, but as an incomplete assembly of layers. The challenge lies not in selecting a single approach, but in designing interfaces between them that preserve their respective strengths while addressing their limitations.
Toward a Coherent Architecture for Global Instant Payments
A coherent global instant payment architecture cannot be achieved through further optimisation at individual layers alone. Nor can it emerge from isolated institutional initiatives, however sophisticated. What is required is an explicit architectural alignment between execution, clearing and settlement, supported by governance structures capable of operating across jurisdictions.
At the execution layer, systems must support real-time, programmable and data-rich payment initiation. This capability is already largely in place. At the settlement layer, value transfer must occur in assets that provide legal finality, systemic trust and neutrality across borders. Central bank money, whether accessed through existing infrastructures or digitally native representations, remains uniquely positioned to fulfil this role.
Crucially, programmability should be understood as a property of processes, not of money itself. Payment logic, compliance rules and conditional execution belong in systems and applications. Finality belongs in the settlement asset. Conflating these functions risks either undermining trust or constraining innovation. A coherent architecture must therefore separate concerns while ensuring deterministic interaction between layers.
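A minimal sketch illustrates this separation of concerns: conditional business logic lives in the application layer, while the settlement layer exposes only an unconditional, final transfer. The ledger, function names and amounts here are hypothetical.

```python
# Stand-in for a settlement ledger; all names and amounts are hypothetical.
ledger = {"buyer": 1000.0, "seller": 0.0}

def settle(debtor: str, creditor: str, amount: float) -> None:
    """Settlement layer: an unconditional, final transfer of the settlement
    asset. No business logic lives here."""
    assert ledger[debtor] >= amount
    ledger[debtor] -= amount
    ledger[creditor] += amount

def pay_on_delivery(shipment_confirmed: bool, invoice_amount: float) -> None:
    """Application layer: conditions, compliance rules and event triggers
    belong to the process, not to the money."""
    if shipment_confirmed:                          # programmable process
        settle("buyer", "seller", invoice_amount)   # non-programmable finality

pay_on_delivery(shipment_confirmed=True, invoice_amount=750.0)
print(ledger)  # {'buyer': 250.0, 'seller': 750.0}
```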
From an institutional perspective, this implies a redefinition of roles. Central banks act as infrastructure providers and guarantors of settlement integrity, not as competitors in product markets. Private institutions innovate at the execution and service layers, building differentiated offerings on top of shared settlement foundations. International coordination bodies provide the standards and interfaces necessary for interoperability.
The transition to such an architecture will be incremental rather than revolutionary. It will require bridging domestic instant payment systems to interoperable settlement layers, integrating wholesale and retail perspectives, and aligning regulatory frameworks with architectural realities. Yet the direction is clear. As payment processes become real-time and programmable, settlement finality can no longer remain deferred, fragmented or opaque.
The future of global instant payments will not be defined by the fastest execution rail, the most sophisticated token or the loudest political narrative. It will be defined by the ability to align speed with finality, innovation with trust, and domestic efficiency with global interoperability. Designing that alignment is no longer a theoretical exercise. It is the next structural challenge for the global financial system.
Conclusion
Viewed through the lens of Global Instant Payments (GIP), the developments discussed in this article point to a common architectural requirement rather than divergent institutional agendas. The RELEVANT framework (Regulatory, Economic and Legal Enablement through Value Alignment and Networked Trust) provides a way to integrate execution efficiency, settlement finality and governance coherence into a single analytical construct. It emphasises that sustainable innovation in payments does not arise from isolated optimisation, but from aligning technological capability with legal certainty and institutional trust at scale. In this sense, GIP is not a call for faster payments alone, but for an infrastructure in which real-time execution and central bank settlement operate as complementary layers, enabling programmable, resilient and globally interoperable financial processes.
A series of layoffs across Meta’s virtual reality division has affected Bellevue, Wash.-based studio Camouflaj (Republique, Batman: Arkham Shadow).
This week opened with reports that Meta had shuttered several game developers in its studio network, including Twisted Pixel (Marvel’s Deadpool VR) and Armature (Resident Evil 4 VR), both of which were headquartered in Texas.
Rumors circulated on Wednesday that the same wave of layoffs had affected Camouflaj. Several outlets, including Aftermath and The Verge, confirmed Thursday that while Camouflaj is still in operation, it’s been reduced to a “handful of employees.” Before the layoffs, the studio had around 30 employees, according to LinkedIn data.
In addition, a planned sequel to 2024’s Batman: Arkham Shadow has been cancelled, and the project’s developer Sanzaru has been shut down.
GeekWire has reached out to Meta and Camouflaj for further comment.
Camouflaj began as an independent studio led by Microsoft veteran Ryan Payton, which made headlines in 2012 for using crowdfunding to start work on its debut project. Republique, an episodic stealth game set in a fictional totalitarian state, was initially an exclusive for iOS devices. It was later released for PC, Mac, Android, and PlayStation 4.
Camouflaj later retooled Republique as a virtual reality game, which marked a shift by the company towards developing VR projects. Its next game, made with Sony Interactive Entertainment and Marvel Studios, was 2020’s Iron Man VR for PlayStation and Meta Quest. Camouflaj was then acquired by Oculus in 2022.
Its latest game, Arkham Shadow, is an official entry in the long-running “Arkham series” of Batman video games. Set early in Batman’s crime-fighting career, before many of his trademark villains had assumed their costumed identities, Shadow pits a young Batman against a villain known as the Rat King.
It’s worth noting that while Meta is not the only significant player in the virtual reality sector, it commands a large part of the overall market thanks to its standalone headsets. If Meta backs further out of VR, it could pose an existential threat to the format. Valve’s recent announcement of a new VR headset does offer some hope to VR fans, however.
When planting outdoors, problems are likely to arise that leave plants in a less than perfect state. Even the best cocktail of nutrients and trace chemicals can still allow a nutrient deficiency. Grasshoppers may rear their ugly green heads, or the nutrients may attract unsavory company, leading to an infestation that must be dealt with.
There are many variables to growing outdoors, but the most common nutrient deficiencies will be encountered during the green foliage growth period. Lack of nitrogen is the most common deficiency. A large green leafy plant requires a very high level of nitrogen to achieve its full glory. The first sign is a gradual creep of yellow among the lowest, and therefore oldest, leaves of the plant. If this happens, be sure to add a full ration of nitrogen to the next watering session. The yellow creep can be cured in only a few days if it hasn’t progressed to the point at which the tips of the leaves are curling and black or brown; at that point, the damage is permanent and can’t be remedied. Increase the amount of nitrogen so the deficiency doesn’t damage any newer leaves higher on the plant. Other symptoms of a nitrogen deficiency include red stems, smaller new leaves and slow growth.
A phosphorus deficiency reveals itself through slow and stunted growth. The newer leaves of the plant will be smaller and a darker green than usual. As with a nitrogen deficiency, a red color appears on the stems. The leaves may also develop a nasty red or purple color in the veins on the underside of the leaf. If phosphorus isn’t added, the older leaves will start to die. Once it is added, the affected leaves won’t be healed, but the progression of the damage will be stopped: the leaves will lighten to a beautiful green and the growth rate will pick up.
A potassium deficiency is often tricky to diagnose. Most of the time a potassium-deficient plant will be tall and healthy looking, though it may be slightly phototropic in appearance. A phototropic plant is one that expends all of its energy reaching for a feeble light source, hence the tall, spindly look. The indicators are this phototropic appearance and browning at the ends of the oldest leaves; the leaves may also show brown spots, particularly along the prominent center vein. As with most deficiencies of a serious nature, the stems and underside veins take on a reddish or purple hue. Recovery from a potassium deficiency is usually slow and is measured in weeks, and leaves that have already browned usually die off. The most common source of potassium is wood ash, so if last year’s crop had a potassium deficiency, add a cup of wood ash to this year’s nutrient mix or growing medium.
There are also deficiencies of the trace elements iron, manganese, boron, molybdenum, zinc and copper. Because most outdoor growing mediums tend to be natural in origin, nature has already included the trace elements plants require for most of their lives. However, adding trace elements two or three times over the life of the plant is always a good idea. If the plants don’t require them, they simply won’t take them up.
For growers in the country planting in the backyard, the easiest way to keep pests and animals away from the plants is to plant geraniums around them. The common geranium secretes a substance that acts as an all-around pest repellent. This is its natural way of combating predators, and it has been working for a lot longer than humans have been growing grass, so take note. Both animals and pests will shy away from your crop.
Whatever growing medium is used will eventually attract a pest, and then many pests. An infestation of the growing medium can be tricky to get rid of. If the little critters are in the topmost inch or so of the growing medium, that medium will have to be replaced. Be gentle with the root system, and after removing the top inch, deluge the area with a good garden-safe insecticide. It’s important to replace the removed material with a chemically inert medium. Test and alter the pH of the medium as required to hit a neutral value of seven. The growing medium will adjust itself to the pH levels the plant is accustomed to over the space of about a week.
Growing outdoors is an easy and productive means to reduce or even replace the costs incurred by our green friend over the winter. With the right knowledge this year’s crop should thrive.
It’s become a truism that facts alone don’t change people’s minds. Perhaps nowhere is this more clear than when it comes to conspiracy theories: Many people believe that you can’t talk conspiracists out of their beliefs.
But that’s not necessarily true. It turns out that many conspiracy believers do respond to evidence and arguments—information that is now easy to deliver in the form of a tailored conversation with an AI chatbot.
In research we published in the journal Science this year, we had over 2,000 conspiracy believers engage in a roughly eight-minute conversation with DebunkBot, a model we built on top of OpenAI’s GPT-4 Turbo (the most up-to-date GPT model at that time). Participants began by writing out, in their own words, a conspiracy theory that they believed and the evidence that made the theory compelling to them. Then we instructed the AI model to persuade the user to stop believing in that conspiracy and adopt a less conspiratorial view of the world. A three-round back-and-forth text chat with the AI model (lasting 8.4 minutes on average) led to a 20% decrease in participants’ confidence in the belief, and about one in four participants—all of whom believed the conspiracy theory beforehand—indicated that they did not believe it after the conversation. This effect held true for both classic conspiracies (think the JFK assassination or the moon landing hoax) and more contemporary politically charged ones (like those related to the 2020 election and covid-19).
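A minimal sketch of how a loop like this could be wired up with OpenAI’s chat completions API appears below. It is a hypothetical reconstruction: the actual DebunkBot prompts and architecture are not reproduced here, the system prompt wording is an illustrative assumption, and only the three-round structure follows the study’s description.

```python
# Hypothetical reconstruction of a three-round debunking chat; the real
# DebunkBot system prompt is not public here, so this wording is an assumption.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

belief = input("Describe the conspiracy theory you believe and your evidence: ")
messages = [
    {"role": "system", "content": (
        "Using accurate facts and evidence, respectfully persuade the user "
        "to reconsider the conspiracy theory they describe.")},
    {"role": "user", "content": belief},
]

for round_number in range(3):  # the study reports a three-round exchange
    reply = client.chat.completions.create(model="gpt-4-turbo", messages=messages)
    answer = reply.choices[0].message.content
    print(answer)
    messages.append({"role": "assistant", "content": answer})
    if round_number < 2:
        messages.append({"role": "user", "content": input("> ")})
```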
This is good news, given the outsize role that unfounded conspiracy theories play in today’s political landscape. So while there are widespread and legitimate concerns that generative AI is a potent tool for spreading disinformation, our work shows that it can also be part of the solution.
Even people who began the conversation absolutely certain that their conspiracy was true, or who indicated that it was highly important to their personal worldview, showed marked decreases in belief. Remarkably, the effects were very durable; we followed up with participants two months later and saw just as big a reduction in conspiracy belief as we did immediately after the conversations.
Our experiments indicate that many believers are relatively rational but misinformed, and getting them timely, accurate facts can have a big impact. Conspiracy theories can make sense to reasonable people who have simply never heard clear, non-conspiratorial explanations for the events they’re fixated on. This may seem surprising. But many conspiratorial claims, while wrong, seem reasonable on the surface and require specialized, esoteric knowledge to evaluate and debunk.
For example, 9/11 deniers often point to the claim that jet fuel doesn’t burn hot enough to melt steel as evidence that airplanes were not responsible for bringing down the Twin Towers—but the chatbot responds by pointing out that although this is true, the American Institute of Steel Construction says jet fuel does burn hot enough to reduce the strength of steel by over 50%, which is more than enough to cause such towers to collapse.
Although we have greater access to factual information than ever before, it is extremely difficult to search that vast corpus of knowledge efficiently. Finding the truth that way requires knowing what to google—or who to listen to—and being sufficiently motivated to seek out conflicting information. There are large time and skill barriers to conducting such a search every time we hear a new claim, and so it’s easy to take conspiratorial content you stumble upon at face value. And most would-be debunkers at the Thanksgiving table make elementary mistakes that AI avoids: Do you know the melting point and tensile strength of steel offhand? And when your relative calls you an idiot while trying to correct you, are you able to maintain your composure?
With enough effort, humans would almost certainly be able to research and deliver facts like the AI in our experiments. And in a follow-up experiment, we found that the AI debunking was just as effective if we told participants they were talking to an expert rather than an AI. So it’s not that the debunking effect is AI-specific. Generally speaking, facts and evidence delivered by humans would also work. But it would require a lot of time and concentration for a human to come up with those facts. Generative AI can do the cognitive labor of fact-checking and rebutting conspiracy claims much more efficiently.
In another large follow-up experiment, we found that what drove the debunking effect was specifically the facts and evidence the model provided: Factors like letting people know the chatbot was going to try to talk them out of their beliefs didn’t reduce its efficacy, whereas telling the model to try to persuade its chat partner without using facts and evidence totally eliminated the effect.
Although the foibles and hallucinations of these models are well documented, our results suggest that debunking efforts are widespread enough on the internet to keep the conspiracy-focused conversations roughly accurate. When we hired a professional fact-checker to evaluate GPT-4’s claims, they found that over 99% of the claims were rated as true (and not politically biased). Also, in the few cases where participants named conspiracies that turned out to be true (like MK Ultra, the CIA’s human experimentation program from the 1950s), the AI chatbot confirmed their accurate belief rather than erroneously talking them out of it.
To date, largely by necessity, interventions to combat conspiracy theorizing have been mainly prophylactic—aiming to prevent people from going down the rabbit hole rather than trying to pull them back out. Now, thanks to advances in generative AI, we have a tool that can change conspiracists’ minds using evidence.
Bots prompted to debunk conspiracy theories could be deployed on social media platforms to engage with those who share conspiratorial content—including other AI chatbots that spread conspiracies. Google could also link debunking AI models to search engines to provide factual answers to conspiracy-related queries. And instead of arguing with your conspiratorial uncle over the dinner table, you could just pass him your phone and have him talk to AI.
Of course, there are much deeper implications here for how we as humans make sense of the world around us. It is widely argued that we now live in a “post-truth” world, where polarization and politics have eclipsed facts and evidence. By that account, our passions trump truth, logic-based reasoning is passé, and the only way to effectively change people’s minds is via psychological tactics like presenting compelling personal narratives or changing perceptions of the social norm. If so, the typical, discourse-based work of living together in a democracy is fruitless.
But facts aren’t dead. Our findings about conspiracy theories are the latest—and perhaps most extreme—in an emerging body of research demonstrating the persuasive power of facts and evidence. For example, while it was once believed that correcting falsehoods that align with one’s politics would just cause people to dig in and believe them even more, this idea of a “backfire” has itself been debunked: Many studies consistently find that corrections and warning labels reduce belief in, and sharing of, falsehoods—even among those who most distrust the fact-checkers making the corrections. Similarly, evidence-based arguments can change partisans’ minds on political issues, even when they are actively reminded that the argument goes against their party leader’s position. And simply reminding people to think about whether content is accurate before they share it can substantially reduce the spread of misinformation.
And if facts aren’t dead, then there’s hope for democracy—though this arguably requires a consensus set of facts from which rival factions can work. There is indeed widespread partisan disagreement on basic facts, and a disturbing level of belief in conspiracy theories. Yet this doesn’t necessarily mean our minds are inescapably warped by our politics and identities. When faced with evidence—even inconvenient or uncomfortable evidence—many people do shift their thinking in response. And so if it’s possible to disseminate accurate information widely enough, perhaps with the help of AI, we may be able to reestablish the factual common ground that is missing from society today.
You can try our debunking bot yourself at debunkbot.com.
Thomas Costello is an assistant professor in social and decision sciences at Carnegie Mellon University. His research integrates psychology, political science, and human-computer interaction to examine where our viewpoints come from, how they differ from person to person, and why they change—as well as the sweeping impacts of artificial intelligence on these processes.
Gordon Pennycook is the Dorothy and Ariz Mehta Faculty Leadership Fellow and associate professor of psychology at Cornell University. He examines the causes and consequences of analytic reasoning, exploring how intuitive versus deliberative thinking shapes decision-making to understand errors underlying issues such as climate inaction, health behaviors, and political polarization.
David Rand is a professor of information science, marketing and management communication, and psychology at Cornell University. He uses approaches from computational social science and cognitive science to explore how human-AI dialogue can correct inaccurate beliefs, why people share falsehoods, and how to reduce political polarization and promote cooperation.