Normal view

There are new articles available, click to refresh the page.
Yesterday — 24 January 2026Main stream

Crazy Old Machines

24 January 2026 at 10:00

Al and I were talking about the IBM 9020 FAA Air Traffic Control computer system on the podcast. It’s a strange machine, made up of a bunch of IBM System 360 mainframes connected together to a common memory unit, with all sorts of custom peripherals to support keeping track of airplanes in the sky. Absolutely go read the in-depth article on that machine if it sparks your curiosity.

It got me thinking about how strange computers were in the early days, and how boringly similar they’ve all become. Just looking at the word sizes of old machines is a great example. Over the last, say, 40 years, things that do computing have had 4, 8, 16, 32, or even 64-bit words. You noticed the powers-of-two trend going on here, right? Basically starting with the lowly Intel 4004, it’s been round numbers ever since.

Harvard Mark I, by [Topory]
On the other side of the timeline, though, you get strange beasts. The classic PDP-8 had 12-bit words, while its predecessors the PDP-6 and PDP-1 had 36 bits and 18 bits respectively. (Factors of six?) There’s a string of military guidance computers that had 27-bit words, while the Apollo Guidance computer ran 15-bit words. UNIVAC III had 25-bit words, putting the 23-bit Harvard Mark I to shame.

I wasn’t there, but it gives you the feeling that each computer is a unique, almost hand-crafted machine. Some must have made their odd architectural choices to suit particular functions, others because some designer had a clever idea. I’m not a computer historian, but I’m sure that the word lengths must tell a number of interesting stories.

On the whole, though, it gives the impression of a time when each computer was it’s own unique machine, before the convergence of everything to roughly the same architectural ideas. A much more hackery time, for lack of a better word. We still see echoes of this in the people who make their own “retro” computers these days, either virtually, on a breadboard, or emulated in the fabric of an FPGA. It’s not just nostalgia, though, but a return to a time when there was more creative freedom: a time before 64 bits took over.

This article is part of the Hackaday.com newsletter, delivered every seven days for each of the last 200+ weeks. It also includes our favorite articles from the last seven days that you can see on the web version of the newsletter. Want this type of article to hit your inbox every Friday morning? You should sign up!
Before yesterdayMain stream

Watch a robot swarm "bloom" like a garden

21 January 2026 at 14:47

Researchers at Princeton University have built a swarm of interconnected mini-robots that "bloom" like flowers in response to changing light levels in an office. According to their new paper published in the journal Science Robotics, such robotic swarms could one day be used as dynamic facades in architectural designs, enabling buildings to adapt to changing climate conditions as well as interact with humans in creative ways.

The authors drew inspiration from so-called "living architectures," such as beehives. Fire ants provide a textbook example of this kind of collective behavior. A few ants spaced well apart behave like individual ants. But pack enough of them closely together, and they behave more like a single unit, exhibiting both solid and liquid properties. You can pour them from a teapot like ants, as Goldman’s lab demonstrated several years ago, or they can link together to build towers or floating rafts—a handy survival skill when, say, a hurricane floods Houston. They also excel at regulating their own traffic flow. You almost never see an ant traffic jam.

Naturally scientists are keen to mimic such systems. For instance, in 2018, Georgia Tech researchers built ant-like robots and programmed them to dig through 3D-printed magnetic plastic balls designed to simulate moist soil. Robot swarms capable of efficiently digging underground without jamming would be super beneficial for mining or disaster recovery efforts, where using human beings might not be feasible.

Read full article

Comments

© Merihan Alhafnawi

In the Age of Microsegmentation Enforcement in Hours, Are You Still Shutting Down Operations?

20 January 2026 at 11:19

I was researching cyberattacks, and a common theme popped out. “We had an unprecedented cyberattack… and we shut down our operations to protect stakeholder interests.” I know, breaches can be strenuous. The initial hours following a breach are often marked by chaos and urgency as crisis leaders call vendors, disconnect systems, analyze logs, and brief executives. The focus is on containing the damage. But over […]

The post In the Age of Microsegmentation Enforcement in Hours, Are You Still Shutting Down Operations? appeared first on ColorTokens.

The post In the Age of Microsegmentation Enforcement in Hours, Are You Still Shutting Down Operations? appeared first on Security Boulevard.

How adaptive infrastructure is evolving capabilities at the speed of business

19 January 2026 at 05:30

I’m not normally fond of year-end technology retrospectives, but 2025 was indeed a year of quantum leaps in the art of the possible and it has filled us all with measured optimism paired with some healthy and well-earned skepticism where AI is concerned. When I put architecture in perspective, I’m inclined to take a longer view of automation in all its variations over a decade. That’s why 2025 feels more like a footnote in a long series of events culminating in the perfect storm of opportunities we’ve been contemplating for some time now.

The composable infrastructure revolution

We’ve been moving toward self-aware, composable infrastructure in architecture for a while now and infrastructure-as-code was merely the first major inflection.

Let’s be honest, the old way of building IT infrastructure is breaking down. As an enterprise architect, the vicious cycle is very familiar. Tying agentic architecture demand-patterns to legacy infrastructure without careful consideration is fraught with peril. The old pattern is really predictable now: You provision systems, maintain them reactively and eventually retire them. Rinse and repeat.

That model is now officially unsustainable in the age of AI. What’s taking its place? Composable and intelligent infrastructure that can proactively self-assemble, reconfigure and optimize on the fly to match what the business needs.

For IT leaders, this shift from rigid systems to modular, agent-driven infrastructure is both a breakthrough opportunity and a serious transformation challenge. And the numbers back this up: the global composable infrastructure market sits at USD $8.3 billion in 2025 and is projected to grow at 24.9% annually through 2032.

What’s driving this hyper-accelerated growth? Geopolitical disruptions, supply chain chaos and AI advances are reshaping how and where companies operate. Business environments are being driven by reactive and dynamic agentic experiences, transactions and digital partnerships everywhere, all the time. Static infrastructure just can’t deliver that kind of flexibility based on marketing exercises that describe solution offerings as “on-demand,” “utility-based,” “adaptive” and “composable.” These are little more than half-truths.

A 2025 Forrester study commissioned by Microsoft found that 84% of IT leaders want solutions that consolidate edge and cloud operations across systems, sites and teams. As an architect in the consumer goods space, I found that our IT team would produce endless slide decks about composable enterprises ad nauseam, but infrastructure-as-code was the level of actual capability for some time.

Leaders wanted composable architecture that can pull together diverse components without hyperextended interoperability efforts. IBM’s research reinforces this, showing that companies with modular architectures are more agile, more resilient and faster to market — while also reducing the technical debt that slows everyone down.

The problem has been one of capacity and fitness for purpose. Legacy infrastructure and the underlying systems of record simply weren’t designed with agentic AI patterns in mind. My conversations with pan-industry architecture colleagues reflect the same crisis of expectation and resilience around agentic architectures.

Consider McKinsey’s 2025 AI survey that demonstrated 88% of organizations now use AI regularly in at least one business function and 62% are experimenting with AI agents. But most are stuck in pilot mode because their infrastructure can’t scale AI across the business.

If there are any winners in this race, they’ve broken apart their monolithic systems into modular pieces that AI agents can orchestrate based on what’s actually happening in real time.

AI agents: The new orchestration layer

So, what’s driving this shift? Agentic AI — systems that understand business context, figure out optimal configurations and execute complex workflows by pulling together infrastructure components on demand. This isn’t just standard automation following rigid, brittle scripts. Agents reason about what to assemble, how to configure it and when to reconfigure as conditions change.

The adoption curve is steep. BCG and MIT Sloan Management Review found that 35% of organizations already use agentic AI, with another 44% planning to jump in soon. The World Economic Forum reports 82% of executives plan to adopt AI agents within three years. McKinsey’s abovementioned State of AI research further highlights agentic AI as an emerging focus area for enterprise investment and describes AI agents as systems that can plan, take actions and orchestrate multi-step workflows with less human intervention than traditional automation.

As McKinsey puts it: “We’re entering an era where enterprise productivity is no longer just accelerated by AI — it’s orchestrated by it.” That’s a fundamental change in how infrastructure works.

IBM is betting big on this future, stating that “the future of IT operations is autonomous, policy-driven and hybrid by design.” They’re building environments where AI agents can orchestrate everything — public cloud, private infrastructure, on-premises systems, edge deployments — assembling optimal configurations for specific workloads and contexts. The scope of automation ranges from helpful recommendations to closed-loop fixes to fully autonomous optimization.

What composable architecture actually looks like

I recall no shortage of Lego-induced architecture references to composability over the last decade. Sadly, we conflated them with domain services and not how business capabilities and automation could and should inform how the Legos are pieced together to solve problems. Traditional infrastructure comes as tightly integrated stacks — hard to decompose, inflexible and reactive. The new composable model flips this, offering modular building blocks that agents can intelligently assemble and reassemble dynamically based on what’s needed right now.

Composability demands modularity and responsive automation

The foundation is extreme modularity — breaking monolithic systems into discrete, independently deployable pieces with clean interfaces. Composable infrastructure lets you dynamically assemble and disassemble resources based on application demands, optimizing how pooled resources get allocated and improving overall efficiency.

This goes far beyond physical infrastructure to include services, data pipelines, security policies and workflows. When everything is modular and API-accessible, agents can compose complex solutions from simple building blocks and adapt in real time.

Bringing cloud and edge together

Enterprise organizations are no longer treating cloud and edge as separate worlds requiring manual integration. The new approach treats all infrastructure — from hyperscale data centers to network edge — as a unified resource pool that agents can compose into optimal configurations.

McKinsey identifies edge-cloud convergence as essential for agentic AI: “Agents need real-time data access and low-latency environments. Combining edge compute (for inference and responsiveness) with cloud-scale training and storage is essential.” They further highlight how Hewlett Packard Enterprise (HPE) expanded its GreenLake platform in late 2024 with composable infrastructure hardware for hybrid and AI-driven workloads — modular servers and storage that let enterprises dynamically allocate resources based on real-time demand.

Agents running the show

Even IBM with its storied fixed-infrastructure history is all-in on agentic AI infrastructure capabilities — including agents and Model Context Protocol (MCP) servers — across its portfolio, making infrastructure components discoverable and composable by AI agents. These agents don’t just watch the infrastructure state; they actively orchestrate resources across enterprise data and applications, creating optimal configurations for specific workloads.

Management interfaces across IBM cloud, storage, power and Z platforms are becoming MCP-compatible services — turning infrastructure into building blocks that agents can reason about and orchestrate. Vendor-native agentic management solutions introduced similar AI-driven orchestration enhancements in 2024, letting large enterprises dynamically allocate resources across compute, storage and networking.

Self-aware and self-correcting infrastructure

Instead of manually configuring every component, composable architectures enable intent-based interfaces. You specify business objectives — support 10,000 concurrent users with sub-100ms latency at 99.99% availability — and agents figure out the infrastructure composition to make it happen.

Emerging intelligent infrastructure player Quali describes this as “infrastructure that understands itself” — systems where agentic AI doesn’t just demand infrastructure that keeps up, but infrastructure built from composable components that agents can understand and orchestrate.

Getting scale and flexibility in real time

Traditional infrastructure forces a choice: optimize for scale or build for adaptability. As architects, there are clear opposing trade-offs we must navigate successfully: Scale relative to adaptability, investment versus sustaining operations, tight oversight versus autonomy and process refactoring versus process reinvention.

Composable architectures solve this by delivering both. The dual nature of agentic AI — part tool, part human-like — doesn’t fit traditional management frameworks. People are flexible but don’t scale. Tools scale but can’t adapt. Agentic AI on composable infrastructure gives you scalable adaptability — handling massive workloads while continuously reconfiguring for changing contexts.

Self-composability and evolved governance

Agent-orchestrated infrastructure demands governance that balances autonomy with control. The earlier-mentioned MIT Sloan Management Review and BCG study found that most agentic AI leaders anticipate significant changes to governance and decision rights as they adopt agentic AI. They recommend creating governance hubs with enterprise-wide guardrails and dynamic decision rights rather than approving individual AI decisions one by one.

The answer lies in policy-based composition, defining constraints that bound agent decisions without prescribing exact configurations. Within those boundaries, agents compose and recompose infrastructure autonomously.

When AI agents continuously compose and recompose resources, you need governance frameworks that look nothing like traditional change management. A model registry that includes MCP connects different large language models while implementing guardrails for analytics, security, privacy and compliance. This treats AI as an agent whose decisions must be understood, managed and learned from — not as an infallible tool.

Making it happen in 2026

What should IT leaders do? Here are the most critical moves from my perspective.

Redesign work around agents first. Use agentic AI’s capacity to implement scalability and adapt broadly within parameterized governance automation rather than automating isolated tasks. Almost two-thirds of agentic AI leaders expect operating model changes. Build workflows that shift smoothly between efficiency and problem-solving modes.

Rethink roles for human-agent collaboration. Agents are an architect’s new partner. Reposition your role as an architect in the enterprise to adopt and embrace portfolios of AI agents to coordinate workflows, and traditional management layers change. Expect fewer middle management layers, with managers evolving to orchestrate hybrid human-AI teams. Consider dual career paths for generalist orchestrators and AI-augmented specialists.

Keep investments tied to value. Agentic AI leaders anchor investments to value — whether efficiency, innovation, revenue growth or some combination. Agentic systems are evolving from finite function agents to multi-agent collaborators, from narrow to broadly orchestrated tasks and other ecosystems and agents, and from operational to strategic human-mediated partnership.

The bottom line

The companies that will win in the next decade will recognize composability as the foundation of adaptive infrastructure. When every part of the technology stack becomes a modular building block and intelligent agents compose those blocks into optimal configurations based on real-time context, infrastructure becomes a competitive advantage instead of a constraint.

Organizations that understand agentic AI’s dual nature and align their processes, governance, talent and investments accordingly will realize its full business value. My architect’s perspective is that agentic AI will challenge established management approaches and, yes, even convince many of its ability to defy gravity. But with the right strategy and execution, it won’t just offer empty promises — it will deliver results. Further, our grounded expectations around the capacity of aging infrastructure and legacy demand patterns must guide us in ensuring we make intelligent decisions.

The question isn’t whether to embrace composable, agent-orchestrated infrastructure. It’s how fast you can decompose monolithic systems, build orchestration capabilities and establish the governance to make it work.

This article was made possible by our partnership with the IASA Chief Architect Forum. The CAF’s purpose is to test, challenge and support the art and science of Business Technology Architecture and its evolution over time, as well as grow the influence and leadership of chief architects both inside and outside the profession. The CAF is a leadership community of the IASA, the leading non-profit professional association for business technology architects. 

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

10 top priorities for CIOs in 2026

19 January 2026 at 05:01

A CIO’s wish list is typically long and costly. Fortunately, by establishing reasonable priorities, it’s possible to keep pace with emerging demands without draining your team or budget.

As 2026 arrives, CIOs need to take a step back and consider how they can use technology to help reinvent their wider business while running their IT capabilities with a profit and loss mindset, advises Koenraad Schelfaut, technology strategy and advisory global lead at business advisory firm Accenture. “The focus should shift from ‘keeping the lights on’ at the lowest cost to using technology … to drive topline growth, create new digital products, and bring new business models faster to market.”

Here’s an overview of what should be at the top of your 2026 priorities list.

1. Strengthening cybersecurity resilience and data privacy

Enterprises are increasingly integrating generative and agentic AI deep into their business workflows, spanning all critical customer interactions and transactions, says Yogesh Joshi, senior vice president of global product platforms at consumer credit reporting firm TransUnion. “As a result, CIOs and CISOs must expect bad actors will use these same AI technologies to disrupt these workflows to compromise intellectual property, including customer sensitive data and competitively differentiated information and assets.”

Cybersecurity resilience and data privacy must be top priorities in 2026, Joshi says. He believes that as enterprises accelerate their digital transformation and increasingly integrate AI, the risk landscape will expand dramatically. “Protecting sensitive data and ensuring compliance with global regulations is non-negotiable,” Joshi states.

2. Consolidating security tools

CIOs should prioritize re-baselining their foundations to capitalize on the promise of AI, says Arun Perinkolam, Deloitte’s US cyber platforms and technology, media, and telecommunications industry leader. “One of the prerequisites is consolidating fragmented security tools into unified, integrated, cyber technology platforms — also known as platformization.”

Perinkolam says a consolidation shift will move security from a patchwork of isolated solutions to an agile, extensible foundation fit for rapid innovation and scalable AI-driven operations. “As cyber threats become increasingly sophisticated, and the technology landscape evolves, integrating cybersecurity solutions into unified platforms will be crucial,” he says.

“Enterprises now face a growing array of threats, resulting in a sprawling set of tools to manage them,” Perinkolam notes. “As adversaries exploit fractured security postures, delaying platformization only amplifies these risks.”

3. Ensuring data protection

To take advantage of enhanced efficiency, speed, and innovation, organizations of all types and sizes are now racing to adopt new AI models, says Parker Pearson, chief strategy officer at data privacy and preservation firm Donoma Software.

“Unfortunately, many organizations are failing to take the basic steps necessary to protect their sensitive data before unleashing new AI technologies that could potentially be left exposed,” she warns, adding that in 2026 “data privacy should be viewed as an urgent priority.”

Implementing new AI models can raise significant concerns around how data is collected, used, and protected, Pearson notes. These issues arise across the entire AI lifecycle, from how the data used for initial training to ongoing interactions with the model. “Until now, the choices for most enterprises are between two bad options: either ignore AI and face the consequences in an increasingly competitive marketplace; or implement an LLM that could potentially expose sensitive data,” she says. Both options, she adds, can result in an enormous amount of damage.

The question for CIOs is not whether to implement AI, but how to derive optimal value from AI without placing sensitive data at risk, Pearson says. “Many CIOs confidently report that their organization’s data is either ‘fully’ or ‘end to end’ encrypted.” Yet Pearson believes that true data protection requires continuous encryption that keeps information secure during all states, including when it’s being used. “Until organizations address this fundamental gap, they will continue to be blindsided by breaches that bypass all their traditional security measures.”

Organizations that implement privacy-enhancing technology today will have a distinct advantage in implementing future AI models, Pearson says. “Their data will be structured and secured correctly, and their AI training will be more efficient right from the start, rather than continually incurring the expense, and risk of retraining their models.”

4. Focusing on team identity and experience

A top priority for CIOs in 2026 should be resetting their enterprise identity and employee experience, says Michael Wetzel, CIO at IT security software company Netwrix. “Identity is the foundation of how people show up, collaborate, and contribute,” he states. “When you get identity and experience right, everything else, including security, productivity, and adoption, follows naturally.”

Employees expect a consumer-grade experience at work, Wetzel says. “If your internal technology is clunky, they simply won’t use it.” When people work around IT, the organization loses both security and speed, he warns. “Enterprises that build a seamless, identity-rooted experience will innovate faster while organizations that don’t will fall behind.”

5. Navigating increasingly costly ERP migrations

Effectively navigating costly ERP migrations should be at the top of the CIO agenda in 2026, says Barrett Schiwitz, CIO at invoice lifecycle management software firm Basware. “SAP S/4HANA migrations, for instance, are complex and often take longer than planned, leading to rising costs.” He notes that upgrades can cost enterprises upwards of $100 million, rising to as much as $500 million depending on the ERP’s size and complexity.

The problem is that while ERPs try to do everything, they rarely perform specific tasks, such as invoice processing, really well, Schiwitz says. “Many businesses overcomplicate their ERP systems, customizing them with lots of add-ons that further increase risk.” The answer, he suggests, is adopting a “clean core” strategy that lets SAP do what it does best and then supplement it with best-in-class tools to drive additional value.

6. Doubling-down on innovation — and data governance

One of the most important priorities for CIOs in 2026 is architecting a foundation that makes innovation scalable, sustainable, and secure, says Stephen Franchetti, CIO at compliance platform provider Samsara.

Franchetti says he’s currently building a loosely coupled, API-first architecture that’s designed to be modular, composable, and extensible. “This allows us to move faster, adapt to change more easily, and avoid vendor or platform lock-in.” Franchetti adds that in an era where workflows, tools, and even AI agents are increasingly dynamic, a tightly bound stack simply won’t scale.

Franchetti is also continuing to evolve his enterprise data strategy. “For us, data is a long-term strategic asset — not just for AI, but also for business insight, regulatory readiness, and customer trust,” he says. “This means doubling down on data quality, lineage, governance, and accessibility across all functions.”

7. Facilitating workforce transformation

CIOs must prioritize workforce transformation in 2026, says Scott Thompson, a partner in executive search and management consulting company Heidrick & Struggles. “Upskilling and reskilling teams will help develop the next generation of leaders,” he predicts. “The technology leader of 2026 needs to be a product-centric tech leader, ensuring that product, technology, and the business are all one and the same.”

CIOs can’t hire their way out of the talent gap, so they must build talent internally, not simply buy it on the market, Thompson says. “The most effective strategy is creating a digital talent factory with structured skills taxonomies, role-based learning paths, and hands-on project rotations.”

Thompson also believes that CIOs should redesign job roles for an AI-enabled environment and use automation to reduce the amount of specialized labor required. “Forming fusion teams will help spread scarce expertise across the organization, while strong career mobility and a modern engineering culture will improve retention,” he states. “Together, these approaches will let CIOs grow, multiply, and retain the talent they need at scale.”

8. Improving team communication

A CIO’s top priority should be developing sophisticated and nuanced approaches to communication, says James Stanger, chief technology evangelist at IT certification firm CompTIA. “The primary effect of uncertainty in tech departments is anxiety,” he observes. “Anxiety takes different forms, depending upon the individual worker.”

Stanger suggests working closer with team members as well as managing anxiety through more effective and relevant training.

9. Strengthening drive agility, trust, and scale

Beyond AI, the priority for CIOs in 2026 should be strengthening the enabling capabilities that drive agility, trust, and scale, says Mike Anderson, chief digital and information officer at security firm Netskope.

Anderson feels that the product operating model will be central to this shift, expanding beyond traditional software teams to include foundational enterprise capabilities, such as identity and access management, data platforms, and integration services.

“These capabilities must support both human and non-human identities — employees, partners, customers, third parties, and AI agents — through secure, adaptive frameworks built on least-privileged access and zero trust principles,” he says, noting that CIOs who invest in these enabling capabilities now will be positioned to move faster and innovate more confidently throughout 2026 and beyond.

10. Addressing an evolving IT architecture

In 2026, today’s IT architecture will become a legacy model, unable to support the autonomous power of AI agents, predicts Emin Gerba, chief architect at Salesforce. He believes that in order to effectively scale, enterprises will have to pivot to a new agentic enterprise blueprint with four new architectural layers: a shared semantic layer to unify data meaning, an integrated AI/ML layer for centralized intelligence, an agentic layer to manage the full lifecycle of a scalable agent workforce, and an enterprise orchestration layer to securely manage complex, cross-silo agent workflows.

“This architectural shift will be the defining competitive wedge, separating companies that achieve end-to-end automation from those whose agents remain trapped in application silos,” Gerba says.

Architecture and decision-making

15 January 2026 at 05:00

Over my 20+ years working in software architecture and engineering leadership, I have come to believe that architecture is far more than technology. It is in many ways a form of leadership. At the center of that leadership is the ability to make sound decisions in moments of uncertainty. High-quality decisions that balance what the design and team can do today with what the business will need tomorrow.

We often celebrate architectural knowledge, such as patterns, frameworks, abstractions and styles, all of which are essential. But in reality, the thing that separates good architects from great ones is not knowledge. It is judgment. Knowledge is essential; however, it is judgment that truly distinguishes outstanding architects.

I believe judgment was the quiet force behind every meaningful architectural decision that I’ve made. It has guided me to deal with uncertainty, to weigh trade-offs, to balance the ideal with the practical and to design for teams that are still growing into the systems when we are building. I’ve found knowledge helps one to understand what is possible and Judgment helps one to decide what is the right thing to do.

This idea is reinforced in many of the great books I’ve learned from, whether it’s The Hard Things About Hard Things by Ben Horowitz or Team of Teams by General Stanley McChrystal. These books talk about leading through uncertainty, making tough choices and guiding people through complexity. Interestingly, that’s also exactly what software architects do every day.

In this article, I will share what I’ve learned about how leadership, technology and product thinking come together to shape better decisions – and how I learned to navigate the messy reality of building software while keeping one eye on the horizon.

Context: Architecture leadership and why it matters

When I think about architecture, I don’t think about diagrams. I think about high-quality decisions. Architecture is ultimately about shaping systems that deliver value. You can create the most elegant system in the world, but if it doesn’t meet its delivery timelines or if it is too complex for the team to operate, or if it doesn’t actually solve the user’s problem, then it’s not good architecture.

My approach has always centred on building systems that achieve the right return on investment. ROI is not just about cost efficiency or saving money. Sometimes the best ROI comes from spending more upfront to create long-term leverage. Other times, ROI comes from choosing the simplest possible path to meet a pressing market deadline. The job of the architect is to weigh these forces not just theoretically but practically.

For me, architectural leadership is about helping teams navigate these decisions without getting trapped in the pursuit of perfection. It’s about understanding users, understanding business priorities and understanding the people who will build and operate the system. It requires the ability to communicate a vision, reduce uncertainty and guide teams through moments when we simply don’t have all the answers.

That’s why I say architecture leadership sits right at the intersection of three worlds: technology, product and people.

Navigating uncertainty: In software architecture

Uncertainty is something I’ve learned to live with as it’s a constant backdrop of architectural decision-making.  Rarely do I get perfect requirements. Rarely do I know exactly how a system will evolve or behave at scale. And yet, I still need to make decisions that feel concrete, meaningful and aligned with the future, even when the future is a bit blurry.

I often think about Napoleon’s line: “A leader is a dealer in hope.”  It’s a surprisingly accurate reflection of what software architects do. We bring clarity where things are messy. We can’t predict the future, but we must still articulate a path forward when ambiguity is high. And we do it not because we know everything, but because we can see just enough to guide the next few steps.

Some of the best leadership books I’ve read, like Richard Rumelt’s Good Strategy Bad Strategy and Schmidt’s Trillion Dollar Coach, prioritize judgment as a top leadership quality, one who can navigate ambiguity and make high-quality decisions based on gutsy judgment.  

Meanwhile, leading technical books from Martin Fowler and Bob Martin’s Clean Architecture prioritize knowledge and vocabulary to act on that judgment.

For me, the role of an architect is to operate in the space where these two worlds, knowledge and judgement, overlap.

My personal experience: Data architecture — designing for reality, then evolving

Let me share a personal story that taught me the importance of architectural judgment more than any textbook ever could. It happened during a major data platform initiative that I was leading from the architecture front. We had a tight deadline, multiple data sources to onboard and a team that was still early in its journey with cloud-native data ingestion and distributed systems.

The long-term vision was clear in my head. We needed a generic ingestion framework built around an adapter pattern. In this model, each data source would plug into a common interface, giving us consistency, maintainability and the flexibility to evolve and scale over time. It was the right architecture for the future.

But the real question for me was it the right architecture for that moment?

In my view, the team was not yet ready to build such a framework. They needed real, hands-on experience with ingestion patterns, schema evolution, data quality issues and the messy operational challenges that only appear once real production traffic hits the system. If we pushed ahead prematurely, not only would we miss the deadline we would likely create abstractions that were elegant in theory but mismatched to reality.

So, it’s time for a pragmatic decision judgment call? Do we start with a managed ingestion service? Something reliable? Something we could operate easily? Something that delivered value quickly? It’s time for me to make a judgment call on whether to proceed with a cloud-native ingestion service or a generic ingestion framework. I decided to proceed with a cloud-native ingestion service. With this approach, it allowed us to ship on time, drastically reduce operational risk and gave the team space to build the foundational knowledge about large scale data ingestion service.

From the outside, the choice of proceeding with cloud-native data ingestion looked simple, maybe even ordinary. But internally, from a business perspective, it was a turning point. The team gained the confidence that comes from operating production data ingestion services. It allowed us to observe and learn what patterns we needed, what was working. What’s not working and more importantly, what patterns we don’t need. Of course, the big win was that we delivered business value immediately.

Fast forward to the following year. The team had grown. We had clear insights into performance bottlenecks, failure modes and necessary abstractions. With this experience, we revisited the long-term vision and built the generic ingestion framework we originally imagined. This time, the architecture almost designed itself. The adapter pattern made perfect sense, and the managed service became just one of the plug-in connectors behind the framework.

That decision started pragmatically, evolved strategically and remains one of the clearest examples of architectural judgment in my career. It was not just about technical correctness. It was about doing the right thing at the right time for the right reasons.

That experience reinforced something I firmly believe: Architecture is not about building the perfect system. It’s about making the right decisions based on the strengths that you have today from both a technical and a people perspective. While having a future state in mind, staying aligned with the future that will allow to evolve the architecture as you move toward.

Architecture judgment: 5 questions that shape decision-making

Over the years, I’ve discovered that asking the right questions matters even more than trying to find the perfect answers. This idea is strongly reinforced by Srinath Perera in his work on software architecture and decision-making, where he points out that good questions force us to think, uncover details and reshape our understanding of the problem. I’ve always resonated with Srinath’s observation that questions ground us in concrete situations rather than abstract ideals because it’s often the pursuit of those ideals that leads projects into trouble. In my experience, these questions are not just diagnostic tools, they are catalysts for clarity. They help me scope a system, cut through noise and understand what truly matters in the moment.

Q1: When is the best time to market?

The first question to consider is on timing. Timing is a decisive force in architecture. If you are racing to meet a product launch or capture a market opportunity, then simplicity and speed matter far more than creating the most elegant design. If you have more breathing room, can invest deeper in long-term foundations.

Q2: What is the skill level of the team?

The second question is about team skill level. Architecture doesn’t live in PowerPoint. It lives in the hands of the engineers who build and operate the system. A design that is beyond the team’s current capability is not an architecture; it’s a liability. This doesn’t mean lowering standards it means aligning ambition with reality and growing the team along the way.

Q3: How sensitive is the system to performance?

The third question is in relation to scaling and performance. Not every system needs millisecond latency or hyper-efficient throughput. But the ones that do require early architectural considerations. Misjudging this can turn into expensive rewrites or operational nightmares. Conversely, over-optimizing a system that doesn’t require it is wasted effort and cognitive load.

Q4: When can we rewrite the system?

The fourth question is about future state architecture. Every architecture has a lifespan. If you expect to revisit the design in the near future, you can accept more shortcuts and tactical decisions. But if the system is expected to live for years, then foundational decisions like data models, domain boundaries and communication patterns will require greater care.

Q5: What are the hard problems?

The final question is about hard architecture patterns or technology and how to address them early. Every system has one or two truly difficult challenges, whether it’s data consistency, security boundaries, real-time performance or scaling complexity. These hard problems should be tackled early, even if it means building prototypes or running parallel explorations. De-risking them can change the entire trajectory of the project.

These five questions help me to navigate uncertainty with clarity, while they echo many of the themes Srinath Perera outlines, particularly the idea that questions reshape our understanding; they’ve also evolved with my own experiences. They’ve become part of how I guide teams, evaluate trade-offs and build architectures that hold up in the real world.

Conclusion

Architecture is not the pursuit of perfection. It is the pursuit of appropriateness. Knowledge gives us options, but judgment helps us choose the right path for the moment we are in, the team we have and the product we are trying to build. The best architects bring together technology expertise, product intuition and leadership judgment into a single discipline.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

The tech leadership realizing more than the sum of parts

14 January 2026 at 05:00

Waiting on replacement parts can be more than just an inconvenience. It can be a matter of sharp loss of income and opportunity. This is especially true for those who depend on industrial tools and equipment for agriculture and construction. So to keep things run as efficiently as possible, Parts ASAP CIO John Fraser makes sure end customer satisfaction is the highest motivation to get the tech implementation and distribution right.

“What it comes down to, in order to achieve that, is the team,” he says. “I came into this organization because of the culture, and the listen first, act later mentality. It’s something I believe in and I’m going to continue that culture.”

Bringing in talent and new products has been instrumental in creating a stable e-commerce model, so Fraser and his team can help digitally advertise to customers, establish the right partnerships to drive traffic, and provide the right amount of data.

“Once you’re a customer of ours, we have to make sure we’re a needs-based business,” he says. “We have to be the first thing that sticks in their mind because it’s not about a track on a Bobcat that just broke. It’s $1,000 a day someone’s not going to make due to a piece of equipment that’s down.”

Ultimately, this strategy helps and supports customers with a collection of highly-integrated tools to create an immersive experience. But the biggest challenge, says Fraser, is the variety of marketplace channels customers are on.

“Some people prefer our website,” he says. “But some are on Walmart or about 20 other commercial channels we sell on. Each has unique requirements, ways to purchase, and product descriptions. On a single product, we might have 20 variations to meet the character limits of eBay, for instance, or the brand limitations of Amazon. So we’ve built out our own product information management platform. It takes the right talent to use that technology and a feedback loop to refine the process.”

Of course, AI is always in the conversation since people can’t write updated descriptions for 250,000 SKUs.

“AI will fundamentally change what everybody’s job is,” he says. “I know I have to prepare for it and be forward thinking. We have to embrace it. If you don’t, you’re going to get left behind.”

Fraser also details practical AI adoption in terms of pricing, product data enhancement, and customer experience, while stressing experimentation without over-dependence. Watch the full video below for more insights, and be sure to subscribe to the monthly Center Stage newsletter by clicking here.

On consolidating disparate systems: You certainly run into challenges. People are on the same ERP system so they have some familiarity. But even within that, you have massive amounts of customization. Sometimes that’s very purpose-built for the type of process an organization is running, or that unique sales process, or whatever. But in other cases, it’s very hard. We’ve acquired companies with their own custom built ERP platform, where they spent 20 years curating it down to eliminate every button click. Those don’t go quite as well, but you start with a good culture, and being transparent with employees and customers about what’s happening, and you work through it together. The good news is it starts with putting the customer first and doing it in a consistent way. Tell people change is coming and build a rapport before you bring in massive changes. There are some quick wins and efficiencies, and so people begin to trust. Then, you’re not just dragging them along but bringing them along on the journey.

On AI: Everybody’s talking it, but there’s a danger to that, just like there was a danger with blockchain and other kinds of immersive technologies. You have to make sure you know why you’re going after AI. You can’t just use it because it’s a buzzword. You have to bake it into your strategy and existing use cases, and then leverage it. We’re doing it in a way that allows us to augment our existing strategy rather than completely and fundamentally change it. So for example, we’re going to use AI to help influence what our product pricing should be. We have great competitive data, and a great idea of what our margins need to be and where the market is for pricing. Some companies are in the news because they’ve gone all in on AI, and AI is doing some things that are maybe not so appropriate in terms of automation. But if you can go in and have it be a contributing factor to a human still deciding on pricing, that’s where we are rather than completely handing everything over to AI.

On pooling data: We have a 360-degree view of all of our customers. We know when they’re buying online and in person. If they’re buying construction equipment and material handling equipment, we’ll see that. But when somebody’s buying a custom fork for a forklift, that’s very different than someone needing a new water pump for a John Deere tractor. And having a manufacturing platform that allows us to predict a two and a half day lead time on that custom fork is a different system to making sure that water pump is at your door the next day. Trying to do all that in one platform just hasn’t been successful in my experience in the past. So we’ve chosen to take a bit of a hybrid approach where you combine the data but still have best in breed operational platforms for different segments of the business.

On scaling IT systems: The key is we’re not afraid to have more than one operational platform. Today, in our ecosystem of 23 different companies, we’re manufacturing parts in our material handling business, and that’s a very different operational platform than, say, purchasing overseas parts, bringing them in, and finding a way to sell them to people in need, where you need to be able to distribute them fast. It’s an entirely different model. So we’re not establishing one core platform in that case, but the right amount of platforms. It’s not 23, but it’s also not one. So as we think about being able to scale, it’s also saying that if you try to be all things to all people, you’re going to be a jack of all trades and an expert in none. So we want to make sure when we have disparate segments that have some operational efficiency in the back end — same finance team, same IT teams — we’ll have more than one operational platform. Then through different technologies, including AI, ensure we have one view of the customer, even if they’re purchasing out of two or three different systems.

On tech deployment: Experiment early and then make certain not to be too dependent on it immediately. We have 250,000 SKUs, and more than two million parts that we can special order for our customers, and you can’t possibly augment that data with a world-class description with humans. So we selectively choose how to make the best product listing for something on Amazon or eBay. But we’re using AI to build enhanced product descriptions for us, and instead of having, say, 10 people curating and creating custom descriptions for these products, we’re leveraging AI and using agents in a way that allow people to build the content. Now humans are simply approving, rejecting, or editing that content, so we’re leveraging them for the knowledge they need to have, and if this going to be a good product listing or not. We know there are thousands of AI companies, and for us to be able to pick a winner or loser is a gamble. Our approach is to make it a bit of a commoditized service. But we’re also pulling in that data and putting it back into our core operational platform, and there it rests. So if we’re with the wrong partner, or they get acquired, or go out of business, we can switch quickly without having to rewrite our entire set of systems because we take it in, use it a bit as a commoditized service, get the data, set it at rest, and then we can exchange that AI engine. We’ve already changed it five times and we’re okay to change it another five until we find the best possible partner so we can stay bleeding edge without having all the expense of building it too deeply into our core platforms.

Signals for 2026

9 January 2026 at 07:14

We’re three years into a post-ChatGPT world, and AI remains the focal point of the tech industry. In 2025, several ongoing trends intensified: AI investment accelerated; enterprises integrated agents and workflow automation at a faster pace; and the toolscape for professionals seeking a career edge is now overwhelmingly expansive. But the jury’s still out on the ROI from the vast sums that have saturated the industry. 

We anticipate that 2026 will be a year of increased accountability. Expect enterprises to shift focus from experimentation to measurable business outcomes and sustainable AI costs. There are promising productivity and efficiency gains to be had in software engineering and development, operations, security, and product design, but significant challenges also persist.  

Bigger picture, the industry is still grappling with what AI is and where we’re headed. Is AI a worker that will take all our jobs? Is AGI imminent? Is the bubble about to burst? Economic uncertainty, layoffs, and shifting AI hiring expectations have undeniably created stark career anxiety throughout the industry. But as Tim O’Reilly pointedly argues, “AI is not taking jobs: The decisions of people deploying it are.” No one has quite figured out how to make money yet, but the organizations that succeed will do so by creating solutions that “genuinely improve. . .customers’ lives.” That won’t happen by shoehorning AI into existing workflows but by first determining where AI can actually improve upon them, then taking an “AI first” approach to developing products around these insights.

As Tim O’Reilly and Mike Loukides recently explained, “At O’Reilly, we don’t believe in predicting the future. But we do believe you can see signs of the future in the present.” We’re watching a number of “possible futures taking shape.” AI will undoubtedly be integrated more deeply into industries, products, and the wider workforce in 2026 as use cases continue to be discovered and shared. Topics we’re keeping tabs on include context engineering for building more reliable, performant AI systems; LLM posttraining techniques, in particular fine-tuning as a means to build more specialized, domain-specific models; the growth of agents, as well as the protocols, like MCP, to support them; and computer vision and multimodal AI more generally to enable the development of physical/embodied AI and the creation of world models. 

Here are some of the other trends that are pointing the way forward.

Software Development

In 2025, AI was embedded in software developers’ everyday work, transforming their roles—in some cases dramatically. A multitude of AI tools are now available to create code, and workflows are undergoing a transformation shaped by new concepts including vibe coding, agentic development, context engineering, eval- and spec-driven development, and more.

In 2026, we’ll see an increased focus on agents and the protocols, like MCP, that support them; new coding workflows; and the impact of AI on assisting with legacy code. But even as software development practices evolve, fundamental skills such as code review, design patterns, debugging, testing, and documentation are as vital as ever.

And despite major disruption from GenAI, programming languages aren’t going anywhere. Type-safe languages like TypeScript, Java, and C# provide compile-time validation that catches AI errors before production, helping mitigate the risks of AI-generated code. Memory safety mandates will drive interest in Rust and Zig for systems programming: Major players such as Google, Microsoft, Amazon, and Meta have adopted Rust for critical systems, and Zig is behind Anthropic’s most recent acquisition, Bun. And Python is central to creating powerful AI and machine learning frameworks, driving complex intelligent automation that extends far beyond simple scripting. It’s also ideal for edge computing and robotics, two areas where AI is likely to make inroads in the coming year.

Takeaways

Which AI tools programmers use matter less than how they use them. With a wide choice of tools now available in the IDE and on the command line, and new options being introduced all the time, it’s useful to focus on the skills needed to produce good code rather than focusing on the tool itself. After all, whatever tool they use, developers are ultimately responsible for the code it produces.

Effectively communicating with AI models is the key to doing good work. The more background AI tools are given about a project, the better the code they generate will be. Developers have to understand both how to manage what the AI knows about their project (context engineering) and how to communicate it (prompt engineering) to get useful outputs.

AI isn’t just a pair programmer; it’s an entire team of developers. Software engineers have moved beyond single coding assistants. They’re building and deploying custom agents, often within complex setups involving multi-agent scenarios, teams of coding agents, and agent swarms. But as the engineering workflow shifts from conducting AI to orchestrating AI, the fundamentals of building and maintaining good software—code review, design patterns, debugging, testing, and documentation—stay the same and will be what elevates purposeful AI-assisted code above the crowd.

Software Architecture

AI has progressed from being something architects might have to consider to something that is now essential to their work. They can use LLMs to accelerate or optimize architecture tasks; they can add AI to existing software systems or use it to modernize those systems; and they can design AI-native architectures, an approach that requires new considerations and patterns for system design. And even if they aren’t working with AI (yet), architects still need to understand how AI relates to other parts of their system and be able to communicate their decisions to stakeholders at all levels.

Takeaways

AI-enhanced and AI-native architectures bring new considerations and patterns for system design. Event-driven models can enable AI agents to act on incoming triggers rather than fixed prompts. In 2026, evolving architectures will become more important as architects look for ways to modernize existing systems for AI. And the rise of agentic AI means architects need to stay up-to-date on emerging protocols like MCP.

Many of the concerns from 2025 will carry over into the new year. Considerations such as incorporating LLMs and RAG into existing architectures, emerging architecture patterns and antipatterns specifically for AI systems, and the focus on API and data integrations elevated by MCP are critical.

The fundamentals still matter. Tools and frameworks are making it possible to automate more tasks. However, to successfully leverage these capabilities to design sustainable architecture, enterprise architects must have a full command of the principles behind them: when to add an agent or a microservice, how to consider cost, how to define boundaries, and how to act on the knowledge they already have.

Infrastructure and Operations

The InfraOps space is undergoing its most significant transformation since cloud computing, as AI evolves from a workload to be managed to an active participant in managing infrastructure itself. With infrastructure sprawling across multicloud environments, edge deployments, and specialized AI accelerators, manual management is becoming nearly impossible. In 2026, the industry will keep moving toward self-healing systems and predictive observability—infrastructure that continuously optimizes itself, shifting the human role from manual maintenance to system oversight, architecture, and long-term strategy.

Platform engineering makes this transformation operational, abstracting infrastructure complexity behind self-service interfaces, which lets developers deploy AI workloads, implement observability, and maintain security without deep infrastructure expertise. The best platforms will evolve into orchestration layers for autonomous systems. While fully autonomous systems remain on the horizon, the trajectory is clear.

Takeaways

AI is becoming a primary driver of infrastructure architecture. AI-native workloads demand GPU orchestration at scale, specialized networking protocols optimized for model training and inference, and frameworks like Ray on Kubernetes that can distribute compute intelligently. Organizations are redesigning infrastructure stacks to accommodate these demands and are increasingly considering hybrid environments and alternatives to hyperscalers to power their AI workloads—“neocloud” platforms like CoreWeave, Lambda, and Vultr.

AI is augmenting the work of operations teams with real-time intelligence. Organizations are turning to AIOps platforms to predict failures before they cascade, identify anomalies humans would miss, and surface optimization opportunities in telemetry data. These systems aim to amplify human judgment, giving operators superhuman pattern recognition across complex environments.

AI is evolving into an autonomous operator that makes its own infrastructure decisions. Companies will implement emerging “agentic SRE” practices: systems that reason about infrastructure problems, form hypotheses about root causes, and take independent corrective action, replicating the cognitive workload that SREs perform, not just following predetermined scripts.

Data

The big story of the back half of 2025 was agents. While the groundwork has been laid, in 2026 we expect focus on the development of agentic systems to persist—and this will necessitate new tools and techniques, particularly on the data side. AI and data platforms continue to converge, with vendors like Snowflake, Databricks, and Salesforce releasing products to help customers build and deploy agents. 

Beyond agents, AI is making its influence felt across the entire data stack, as data professionals target their workflows to support enterprise AI. Significant trends include real-time analytics, enhanced data privacy and security, and the increasing use of low-code/no-code tools to democratize data access. Sustainability also remains a concern, and data professionals need to consider ESG compliance, carbon-aware tooling, and resource-optimized architectures when designing for AI workloads.

Takeaways

Data infrastructure continues to consolidate. The consolidation trend has not only affected the modern data stack but also more traditional areas like the database space. In response, organizations are being more intentional about what kind of databases they deploy. At the same time, modern data stacks have fragmented across cloud platforms and open ecosystems, so engineers must increasingly design for interoperability. 

A multiple database approach is more important than ever. Vector databases like Pinecone, Milvus, Qdrant, and Weaviate help power agentic AI—while they’re a new technology, companies are beginning to adopt vector databases more widely. DuckDB’s popularity is growing for running analytical queries. And even though it’s been around for a while, ClickHouse, an open source distributed OLAP database used for real-time analytics, has finally broken through with data professionals.

The infrastructure to support autonomous agents is coming together. GitOps, observability, identity management, and zero-trust orchestration will all play key roles. And we’re following a number of new initiatives that facilitate agentic development, including AgentDB, a database designed specifically to work effectively with AI agents; Databricks’ recently announced Lakebase, a Postgres database/OLTP engine integrated within the data lakehouse; and Tiger Data’s Agentic Postgres, a database “designed from the ground up” to support agents.

Security

AI is a threat multiplier—59% of tech professionals cited AI-driven cyberthreats as their biggest concern in a recent survey. In response, the cybersecurity analyst role is shifting from low-level human-in-the-loop tasks to complex threat hunting, AI governance, advanced data analysis and coding, and human-AI teaming oversight. But addressing AI-generated threats will also require a fundamental transformation in defensive strategy and skill acquisition—and the sooner it happens, the better.

Takeaways

Security professionals now have to defend a broader attack surface. The proliferation of AI agents expands the attack surface. Security tools must evolve to protect it. Implementing zero trust for machine identities is a smart opening move to mitigate sprawl and nonhuman traffic. Security professionals must also harden their AI systems against common threats such as prompt injection and model manipulation.

Organizations are struggling with governance and compliance. Striking a balance between data utility and vulnerability requires adherence to data governance best practices (e.g., least privilege). Government agencies, industry and professional groups, and technology companies are developing a range of AI governance frameworks to help guide organizations, but it’s up to companies to translate these technical governance frameworks into board-level risk decisions and actionable policy controls.

The security operations center (SOC) is evolving. The velocity and scale of AI-driven attacks can overwhelm traditional SIEM/SOAR solutions. Expect increased adoption of agentic SOC—a system of specialized, coordinated AI agents for triage and response. This shifts the focus of the SOC analyst from reactive alert triage to proactive threat hunting, complex analysis, and AI system oversight.

Product Management and Design

Business focus in 2025 shifted from scattered AI experiments to the challenge of building defensible, AI-native businesses. Next year we’re likely to see product teams moving from proof of concept to proof of value

One thing to look for: Design and product responsibilities may consolidate under a “product builder”—a full stack generalist in product, design, and engineering who can rapidly build, validate, and launch new products. Companies are currently hiring for this role, although few people actually possess the full skill set at the moment. But regardless of whether product builders become ascendant, product folks in 2026 and beyond will need the ability to combine product validation, good-enough engineering, and rapid design, all enabled by AI as a core accelerator. We’re already seeing the “product manager” role becoming more technical as AI spreads throughout the product development process. Nearly all PMs use AI, but they’ll increasingly employ purpose-built AI workflows for research, user-testing, data analysis, and prototyping.

Takeaways

Companies need to bridge the AI product strategy gap. Most companies have moved past simple AI experiments but are now facing a strategic crisis. Their existing product playbooks (how to size markets, roadmapping, UX) weren’t designed for AI-native products. Organizations must develop clear frameworks for building a portfolio of differentiated AI products, managing new risks, and creating sustainable value. 

AI product evaluation is now mission-critical. As AI becomes a core product component and strategy matures, rigorous evaluation is the key to turning products that are good on paper into those that are great in production. Teams should start by defining what “good” means for their specific context, then build reliable evals for models, agents, and conversational UIs to ensure they’re hitting that target.

Design’s new frontier is conversations and interactions. Generative AI has pushed user experience beyond static screens into probabilistic new multimodal territory. This means a harder shift toward designing nonlinear, conversational systems, including AI agents. In 2026, we’re likely to see increased demand for AI conversational designers and AI interaction designers to devise conversation flows for chatbots and even design a model’s behavior and personality.

What It All Means

While big questions about AI remain unanswered, the best way to plan for uncertainty is to consider the real value you can create for your users and for your teams themselves right now. The tools will improve, as they always do, and the strategies to use them will grow more complex. Being deeply versed in the core knowledge of your area of expertise gives you the foundation you’ll need to take advantage of these quickly evolving technologies—and ensure that whatever you create will be built on bedrock, not shaky ground.

Understanding transformers: What every leader should know about the architecture powering GenAI

8 January 2026 at 07:40

Generative AI has gone from research novelty to production necessity. Models like GPT-4, Claude, Gemini and Llama now power everything from customer support to developer tooling. Yet many leaders still describe these systems as “black boxes.”

The reality is far less mysterious. What powers every generative AI model is not magic, but architecture. Specifically, the transformer. Understanding this architecture helps leaders make smarter decisions about infrastructure costs, scaling and where AI can (and cannot) deliver value.

From sequences to systems that understand context

Before 2017, nearly every AI system that dealt with language relied on recurrent neural networks (RNNs) or their improved variant, LSTMs (long short-term memory networks). These architectures processed text sequentially, one token at a time, passing the output of one step into the next like a relay baton carrying memory through the sentence.

This design was intuitive but restrictive. Each word depended on the previous one, which meant training and inference couldn’t be parallelized. Processing a long paragraph required thousands of dependent steps, making RNNs inherently slow and memory intensive. They also struggled with long-range dependencies in which early information often got “faded” by the time the model reached the end of a sentence or paragraph, a problem known as the vanishing gradient.

Engineers tried to solve this by increasing memory capacity (via LSTMs and GRUs), adding gates and loops to preserve context, but these models still worked linearly. The result was a trade-off: accuracy or speed, but rarely both.

The transformer upended this paradigm. Instead of learning language as a sequence through time, it learned it as a network of relationships. Each token in a sentence can “see” every other token simultaneously, using attention to decide which relationships matter most. The model no longer relies on passing a single stream of memory step-by-step; it builds a complete context map in one shot.

This shift was more than a performance improvement; it was an architectural breakthrough. By enabling parallel computation across all tokens, transformers made it possible to train on massive datasets efficiently and capture dependencies across entire documents, not just sentences. In essence, it replaced memory with connectivity.

For engineering leaders, this was the moment machine learning architecture started to look like systems architecture which was distributed, scalable and optimized for context propagation instead of sequential control. It’s the same conceptual leap that turns a single-threaded process into a multi-core system: throughput increases, latency drops and coordination becomes the new design challenge.

Tokens, vectors and meaning

Think of a token as the smallest unit a model can process: a word, subworld or even punctuation. When you type “transformers power generative AI,” the model doesn’t see letters; it sees tokens such as [Transform], [ers], [power], [generative], [AI].

Each token is converted into a vector which is a list of numbers that encodes meaning and context. These vectors are how machines “think”. They represent ideas not as symbols but as positions in a high-dimensional space: words with similar meanings live close together.

The attention mechanism: How understanding emerges

Inside a transformer, attention is the mechanism that lets tokens talk to each other. The model compares every token’s query (what it’s looking for) with every other token’s key (what information it offers) to calculate a weight which is a measure of how relevant one token is to another. These weights are then used to blend information from all tokens into a new, context-aware representation called a value.

In simple terms: attention allows the model to focus dynamically. If the model reads “The cat sat on the mat because it was tired,” attention helps it learn that “it” refers to “the cat,” not “the mat.”

By doing this in parallel across thousands of tokens, transformers achieve context awareness at scale. That’s why GPT-4 can write multi-page essays coherently, it’s not remembering word by word, it’s reasoning over relationships between vectors in context.

Transformers at a glance

A transformer processes text in several structured steps, each designed to help the model move from understanding words to understanding meaning and relationships between them.

transformer architecture chart

Ankush Dhar

The process begins with input tokens, where the sentence e.g. “The cat sat on the mat” is split into smaller units called tokens. Each token is then converted into a numerical form through Token Embeddings, which then translate words into high-dimensional vectors that capture semantic meaning (for example, “cat” is numerically closer to “dog” than to “mat”).

However, because word order matters in language, the model adds Positional Encoding which is a mathematical signal that injects information about each token’s position in the sequence. This allows the transformer to distinguish between “The cat sat on the mat” and “The mat sat on the cat.”

Next comes the multi-head self-attention layer, the heart of the transformer. Here, each token interacts with every other token in the sentence, learning which words matter most to one another. For instance, “cat” pays attention to “sat” and “mat” because they are contextually related. Multiple “heads” of attention learn different types of relationships simultaneously, some focusing on grammar, others on meaning or dependency.

Each token’s refined representation then passes through a feed-forward network, which applies nonlinear transformations independently to every token, helping the model combine and interpret information more deeply.

Afterward, residual connections and normalization ensure that useful information from earlier layers isn’t lost and that the training process remains stable and efficient. These mechanisms keep gradients flowing smoothly through the network and prevent degradation of learning across layers.

Finally, the processed representations emerge as output tokens or embeddings, which either serve as input to the next transformer layer or as the final contextualized output for prediction (like generating the next word).

This simple loop of attention, transformation, normalization is repeated dozens or even hundreds of times. Each layer adds nuance, letting the model move from recognizing words, to ideas, to reasoning patterns.

Scaling and serving: Where architecture meet cost

Transformers are powerful, but they’re also expensive. Training a model like GPT-4 requires thousands of GPUs and trillions of data tokens.

Leaders don’t need to know tensor math, but they do need to understand scaling trade-offs. Techniques like quantization (reducing numerical precision), model sharding (splitting across GPUs) and caching can cut serving costs by 30–50% with minimal accuracy loss.

The key insight: Architecture determines economics. Design choices in model serving directly impact latency, reliability and total cost of ownership.

Beyond text: One architecture, many domains

When transformers were first introduced in 2017, they revolutionized how machines understood language. But what makes the architecture truly remarkable is its universality. The same design that understands sentences can also understand images, audio and even video because at its core, a transformer doesn’t care about the data type. It just needs tokens and relationships.

In computer vision, vision transformers (ViT) replaced traditional convolutional neural networks by splitting an image into small patches (tokens) and analyzing how they relate through attention much like how words relate in a sentence.

In speech, architectures such as Conformer and Whisper applied the same self-attention principle to learn dependencies across time, improving transcription, translation and voice synthesis accuracy.

In multimodal AI, models like CLIP and GPT-4V combine text, images and audio into a shared vector space, enabling the model to describe an image, caption a video or answer questions about visual content  all within one architectural framework.

This convergence means the transformer blueprint has become the foundation of nearly every modern AI system. Whether it’s ChatGPT writing text, DALL·E generating images or Gemini integrating multiple modalities, they all share the same underlying logic: tokens, attention and embeddings.

The transformer isn’t just an NLP model; it’s a universal architecture for understanding and generating any kind of data.

Leadership takeaway

The transformer’s most profound breakthrough isn’t just technical — it’s architectural. It proved that intelligence could emerge from design — from systems that are distributed, parallel and context-aware.

For engineering leaders, understanding transformers isn’t about learning equations; it’s about recognizing a new principle of system design.

Architectures that listen, connect and adapt just like attention layers in a transformer consistently outperform those that process blindly.

Teams built the same way — context-rich, communicative and adaptive — become more intelligent over time.

The views expressed in this article are the author’s own and do not represent those of Amazon.

This article is published as part of the Foundry Expert Contributor Network.
Want to join?

Interoperability and standardization: Cornerstones of coalition readiness

23 December 2025 at 15:23

In an era increasingly defined by rapid technological change, the ability of the United States and its allies to communicate and operate as a unified force has never been more vital. Modern conflict now moves at the pace of data, and success depends on how quickly information can be shared, analyzed and acted upon across Defense Department and coalition networks. Today, interoperability is critical to maintaining a strategic advantage across all domains.

The DoD has made progress toward interoperability goals through initiatives such as Combined

Joint All-Domain Command and Control (CJADC2), the Modular Open Systems Approach (MOSA) and the Sensor Open Systems Architecture (SOSA). Each underscores a clear recognition that victory in future conflicts will hinge on the ability to connect every sensor, platform and decision-maker in real time. Yet as adversaries work to jam communications and weaken alliances, continued collaboration between government and industry remains essential.

The strategic imperative

Interoperability allows the Army, Navy, Marine Corps, Air Force and Space Force to function as one integrated team. It ensures that data gathered by an Army sensor can inform a naval strike or that an Air Force feed can guide a Space Force operation, all in seconds. Among NATO and allied partners, this same connectivity ensures that an attack on one member can trigger a fast, coordinated, data-driven response by all. That unity of action forms the backbone of deterrence.

Without true interoperability, even the most advanced technology can end up isolated. The challenge is compounded by aging systems, proprietary platforms and differing national standards. Sustained commitment to open architectures and shared standards is the only way to guarantee compatibility while still encouraging innovation.

The role of open standards

Open standards make real interoperability possible. Common interfaces like Ethernet or IP networking allow systems built by different nations or vendors to talk to one another. When governments and companies collaborate on open frameworks instead of rigid specifications, innovation can thrive without sacrificing integration.

History has demonstrated that rigid design rules can slow progress and limit creativity, and it’s critical we now find the right balance. That means defining what interoperability requires while giving end users the freedom to achieve it in flexible ways. The DoD’s emphasis on modular, open architectures allows industry to innovate within shared boundaries, keeping future systems adaptable, affordable and compatible across domains and partners.

Security at the core

Interoperability depends on trust, and trust relies on security. Seamless data sharing among allies must be matched with strong protection for classified and mission-critical information, whether that data is moving across networks or stored locally.

Information stored on devices, vehicles or sensors, also known as data at rest, must be encrypted to prevent exploitation if it is captured or lost. Strong encryption ensures that even if adversaries access the hardware, the information remains unreadable. The loss of unprotected systems has repeatedly exposed vulnerabilities, reinforcing the need for consistent data at rest safeguards across all platforms.

The rise of quantum computing only heightens this concern. As processing power increases, current encryption methods will become outdated. Shifting to quantum-resistant encryption must be treated as a defense priority to secure joint and coalition data for decades to come.

Lessons from past operations

Past crises have highlighted how incompatible systems can cripple coordination. During Hurricane Katrina, for example, first responders struggled to communicate because their radios could not connect. The same issue has surfaced in combat, where differing waveforms or encryption standards limited coordination among U.S. services and allies.

The defense community has since made major strides, developing interoperable waveforms, software-defined radios and shared communications frameworks. But designing systems to be interoperable from the outset, rather than retrofitting them later, remains crucial. Building interoperability in from day one saves time, lowers cost and enhances readiness.

The rise of machine-to-machine communication

As the tempo of warfare increases, human decision-making alone cannot keep up with the speed of threats. Machine-to-machine communication, powered by artificial intelligence and machine learning, is becoming a decisive edge. AI-driven systems can identify, classify and respond to threats such as hypersonic missiles within milliseconds, long before a human could react.

These capabilities depend on smooth, standardized data flow across domains and nations. For AI systems to function effectively, they must exchange structured, machine-readable data through shared architectures. Distributed intelligence lets each platform make informed local decisions even if communications are jammed, preserving operational effectiveness in contested environments.

Cloud and hybrid architectures

Cloud and hybrid computing models are reshaping how militaries handle information. The Space Development Agency’s growing network of low Earth orbit satellites is enabling high bandwidth, global connectivity. Yet sending vast amounts of raw data from the field to distant cloud servers is not always practical or secure.

Processing data closer to its source, at the tactical edge, strikes the right balance. By combining local processing with cloud-based analytics, warfighters gain the agility, security and resilience required for modern operations. This approach also minimizes latency, ensuring decisions can be made in real time when every second matters.

A call to action

To maintain an edge over near-peer rivals, the United States and its allies must double down on open, secure and interoperable systems. Interoperability should be built into every new platform’s design, not treated as an afterthought. The DoD can further this goal by enforcing standards that require seamless communication across services and allied networks, including baseline requirements for data encryption at rest.

Adopting quantum-safe encryption should also remain a top priority to safeguard coalition systems against emerging threats. Ongoing collaboration between allies is equally critical, not only to harmonize technical standards, but to align operational procedures and shared security practices.

Government and industry must continue working side by side. The speed of technological change demands partnerships that can turn innovation into fielded capability quickly. Open, modular architectures will ensure defense systems evolve with advances in AI, networking and computing, while staying interoperable across generations of hardware and software.

Most importantly, interoperability should be viewed as a lasting strategic advantage, not just a technical goal. The nations that can connect, coordinate and act faster than their adversaries will maintain a strategic advantage. The continued leadership of the DoD and allied defense organizations in advancing secure, interoperable and adaptable systems will keep the United States and its partners ahead of near-peer competitors for decades to come.

 

Ray Munoz is the chief executive officer of Spectra Defense Technologies and a veteran of the United States Navy.

Cory Grosklags is the chief commercial officer of Spectra Defense Technologies.

The post Interoperability and standardization: Cornerstones of coalition readiness first appeared on Federal News Network.

© III Marine Expeditionary Force //Cpl. William Hester

How to do a Security Review – An Example

By: Jo
16 November 2025 at 03:36
Learn how to perform a complete Security Review for new product features—from scoping and architecture analysis to threat modeling and risk assessment. Using a real-world chatbot integration example, this guide shows how to identify risks, apply security guardrails, and deliver actionable recommendations before release.

Innovator Spotlight: Corelight

By: Gary
9 September 2025 at 12:24

The Network’s Hidden Battlefield: Rethinking Cybersecurity Defense Modern cyber threats are no longer knocking at the perimeter – they’re already inside. The traditional security paradigm has fundamentally shifted, and CISOs...

The post Innovator Spotlight: Corelight appeared first on Cyber Defense Magazine.

Synergy between cyber security Mesh & the CISO role: Adaptability, visibility & control

By: slandau
22 July 2024 at 09:00

With over two decades of experience in the cyber security industry, I specialize in advising organizations on how to optimize their financial investments through the design of effective and cost-efficient cyber security strategies. Since the year 2000, I’ve had the privilege of collaborating with various channels and enterprises across the Latin American region, serving in multiple roles ranging from Support Engineer to Country Manager. This extensive background has afforded me a unique perspective on the evolving threat landscape and the shifting needs of businesses in the digital world.

The dynamism of technological advancements has transformed cyber security demands, necessitating more proactive approaches to anticipate and prevent threats before they can impact an organization. Understanding this ever-changing landscape is crucial for adapting to emerging security challenges.

In my current role as the Channel Engineering Manager for LATAM at Check Point, I also serve as part of the Cybersecurity Evangelist team under the office of our CTO. I am focused on merging technical skills with strategic decision-making, encouraging organizations to concentrate on growing their business while we ensure security.

The Cyber Security Mesh framework can safeguard businesses from unwieldy and next-generation cyber threats. In this interview, Check Point Security Engineering Manager Angel Salazar Velasquez discusses exactly how that works. Get incredible insights that you didn’t even realize that you were missing. Read through this power-house interview and add another dimension to your organization’s security strategy!

Would you like to provide an overview of the Cyber Security Mesh framework and its significance?

The Cyber Security Mesh framework represents a revolutionary approach to addressing cyber security challenges in increasingly complex and decentralized network environments. Unlike traditional security models that focus on establishing a fixed ‘perimeter’ around an organization’s resources, the Mesh framework places security controls closer to the data, devices, and users requiring protection. This allows for greater flexibility and customization, more effectively adapting to specific security and risk management needs.

For CISOs, adopting the Cyber Security Mesh framework means a substantial improvement in risk management capabilities. It enables more precise allocation of security resources and offers a level of resilience that is difficult to achieve with more traditional approaches. In summary, the Mesh framework provides an agile and scalable structure for addressing emerging threats and adapting to rapid changes in the business and technology environment.

How does the Cyber Security Mesh framework differ from traditional cyber security approaches?

Traditionally, organizations have adopted multiple security solutions from various providers in the hope of building comprehensive defense. The result, however, is a highly fragmented security environment that can lead to a lack of visibility and complex risk management. For CISOs, this situation presents a massive challenge because emerging threats often exploit the gaps between these disparate solutions.

The Cyber Security Mesh framework directly addresses this issue. It is an architecture that allows for better interoperability and visibility by orchestrating different security solutions into a single framework. This not only improves the effectiveness in mitigating threats but also enables more coherent, data-driven risk management. For CISOs, this represents a radical shift, allowing for a more proactive and adaptive approach to cyber security strategy.

Could you talk about the key principles that underly Cyber Security Mesh frameworks and architecture?

Understanding the underlying principles of Cyber Security Mesh is crucial for evaluating its impact on risk management. First, we have the principle of ‘Controlled Decentralization,’ which allows organizations to maintain control over their security policies while distributing implementation and enforcement across multiple security nodes. This facilitates agility without compromising security integrity.

Secondly, there’s the concept of ‘Unified Visibility.’ In an environment where each security solution provides its own set of data and alerts, unifying this information into a single coherent ‘truth’ is invaluable. The Mesh framework allows for this consolidation, ensuring that risk-related decision-making is based on complete and contextual information. These principles, among others, combine to provide a security posture that is much more resilient and adaptable to the changing needs of the threat landscape.

How does the Cyber Security Mesh framework align with or complement Zero Trust?

The convergence of Cyber Security Mesh and the Zero Trust model is a synergy worth exploring. Zero Trust is based on the principle of ‘never trust, always verify,’ meaning that no user or device is granted default access to the network, regardless of its location. Cyber Security Mesh complements this by decentralizing security controls. Instead of having a monolithic security perimeter, controls are applied closer to the resource or user, allowing for more granular and adaptive policies.

This combination enables a much more dynamic approach to mitigating risks. Imagine a scenario where a device is deemed compromised. In an environment that employs both Mesh and Zero Trust, this device would lose its access not only at a global network level but also to specific resources, thereby minimizing the impact of a potential security incident. These additional layers of control and visibility strengthen the organization’s overall security posture, enabling more informed and proactive risk management.

How does the Cyber Security Mesh framework address the need for seamless integration across diverse technologies and platforms?

The Cyber Security Mesh framework is especially relevant today, as it addresses a critical need for seamless integration across various technologies and platforms. In doing so, it achieves Comprehensive security coverage, covering all potential attack vectors, from endpoints to the cloud. This approach also aims for Consolidation, as it integrates multiple security solutions into a single operational framework, simplifying management and improving operational efficiency.

Furthermore, the mesh architecture promotes Collaboration among different security solutions and products. This enables a quick and effective response to any threat, facilitated by real-time threat intelligence that can be rapidly shared among multiple systems. At the end of the day, it’s about optimizing security investment while facing key business challenges, such as breach prevention and secure digital transformation.

Can you discuss the role of AI and Machine Learning within the Cyber Security Mesh framework/architecture?

Artificial Intelligence (AI) and Machine Learning play a crucial role in the Cyber Security Mesh ecosystem. These technologies enable more effective and adaptive monitoring, while providing rapid responses to emerging threats. By leveraging AI, more effective prevention can be achieved, elevating the framework’s capabilities to detect and counter vulnerabilities in real-time.

From an operational standpoint, AI and machine learning add a level of automation that not only improves efficiency but also minimizes the need for manual intervention in routine security tasks. In an environment where risks are constantly evolving, this agility and ability to quickly adapt to new threats are invaluable. These technologies enable coordinated and swift action, enhancing the effectiveness of the Cyber Security Mesh.

What are some of the challenges or difficulties that organizations may see when trying to implement Mesh?

The implementation of a Cyber Security Mesh framework is not without challenges. One of the most notable obstacles is the inherent complexity of this mesh architecture, which can hinder effective security management. Another significant challenge is the technological and knowledge gap that often arises in fragmented security environments. Added to these is the operational cost of integrating and maintaining multiple security solutions in an increasingly diverse and dynamic ecosystem.

However, many of these challenges can be mitigated if robust technology offering centralized management is in place. This approach reduces complexity and closes the gaps, allowing for more efficient and automated operation. Additionally, a centralized system can offer continuous learning as it integrates intelligence from various points into a single platform. In summary, centralized security management and intelligence can be the answer to many of the challenges that CISOs face when implementing the Cyber Security Mesh.

How does the Cyber Security Mesh Framework/Architecture impact the role of traditional security measures, like firewalls and IPS?

Cyber Security Mesh has a significant impact on traditional security measures like firewalls and IPS. In the traditional paradigm, these technologies act as gatekeepers at the entry and exit points of the network. However, with the mesh approach, security is distributed and more closely aligned with the fluid nature of today’s digital environment, where perimeters have ceased to be fixed.

Far from making them obsolete, the Cyber Security Mesh framework allows firewalls and IPS to transform and become more effective. They become components of a broader and more dynamic security strategy, where their intelligence and capabilities are enhanced within the context of a more flexible architecture. This translates into improved visibility, responsiveness, and adaptability to new types of threats. In other words, traditional security measures are not eliminated, but integrated and optimized in a more versatile and robust security ecosystem.

Can you describe real-world examples that show the use/success of the Cyber Security Mesh Architecture?

Absolutely! In a company that had adopted a Cyber Security Mesh architecture, a sophisticated multi-vector attack was detected targeting its employees through various channels: corporate email, Teams, and WhatsApp. The attack included a malicious file that exploited a zero-day vulnerability. The first line of defense, ‘Harmony Email and Collaboration,’ intercepted the file in the corporate email and identified it as dangerous by leveraging its Sandboxing technology and updated the information in its real-time threat intelligence cloud.

When the same malicious file tried to be delivered through Microsoft Teams, the company was already one step ahead. The security architecture implemented also extends to collaboration platforms, so the file was immediately blocked before it could cause harm. Almost simultaneously, another employee received an attack attempt through WhatsApp, which was neutralized by the mobile device security solution, aligned with the same threat intelligence cloud.

This comprehensive and coordinated security strategy demonstrates the strength and effectiveness of the Cyber Security Mesh approach, which allows companies to always be one step ahead, even when facing complex and sophisticated multi-vector attacks. The architecture allows different security solutions to collaborate in real-time, offering effective defense against emerging and constantly evolving threats.

The result is solid security that blocks multiple potential entry points before they can be exploited, thus minimizing risk and allowing the company to continue its operations without interruption. This case exemplifies the potential of a well-implemented and consolidated security strategy, capable of addressing the most modern and complex threats.

Is there anything else that you would like to share with the CyberTalk.org audience?

To conclude, the Cyber Security Mesh approach aligns well with the three key business challenges that every CISO faces:

Breach and Data Leak Prevention: The Cyber Security Mesh framework is particularly strong in offering an additional layer of protection, enabling effective prevention against emerging threats and data breaches. This aligns perfectly with our first ‘C’ of being Comprehensive, ensuring security across all attack vectors.

Secure Digital and Cloud Transformation: The flexibility and scalability of the Mesh framework make it ideal for organizations in the process of digital transformation and cloud migration. Here comes our second ‘C’, which is Consolidation. We offer a consolidated architecture that unifies multiple products and technologies, from the network to the cloud, thereby optimizing operational efficiency and making digital transformation more secure.

Security Investment Optimization: Finally, the operational efficiency achieved through a Mesh architecture helps to optimize the security investment. This brings us to our third ‘C’ of Collaboration. The intelligence shared among control points, powered by our ThreatCloud intelligence cloud, enables quick and effective preventive action, maximizing the return on security investment.

In summary, Cyber Security Mesh is not just a technological solution, but a strategic framework that strengthens any CISO’s stance against current business challenges. It ideally complements our vision and the three C’s of Check Point, offering an unbeatable value proposition for truly effective security.

The post Synergy between cyber security Mesh & the CISO role: Adaptability, visibility & control appeared first on CyberTalk.

Contain Breaches and Gain Visibility With Microsegmentation

1 February 2023 at 09:00

Organizations must grapple with challenges from various market forces. Digital transformation, cloud adoption, hybrid work environments and geopolitical and economic challenges all have a part to play. These forces have especially manifested in more significant security threats to expanding IT attack surfaces.

Breach containment is essential, and zero trust security principles can be applied to curtail attacks across IT environments, minimizing business disruption proactively. Microsegmentation has emerged as a viable solution through its continuous visualization of workload and device communications and policy creation to define what communications are permitted. In effect, microsegmentation restricts lateral movement, isolates breaches and thwarts attacks.

Given the spotlight on breaches and their impact across industries and geographies, how can segmentation address the changing security landscape and client challenges? IBM and its partners can help in this space.

Breach Landscape and Impact of Ransomware

Historically, security solutions have focused on the data center, but new attack targets have emerged with enterprises moving to the cloud and introducing technologies like containerization and serverless computing. Not only are breaches occurring and attack surfaces expanding, but also it has become easier for breaches to spread. Traditional prevention and detection tools provided surface-level visibility into traffic flow that connected applications, systems and devices communicating across the network.  However, they were not intended to contain and stop the spread of breaches.

Ransomware is particularly challenging, as it presents a significant threat to cyber resilience and financial stability. A successful attack can take a company’s network down for days or longer and lead to the loss of valuable data to nefarious actors. The Cost of a Data Breach 2022 report, conducted by the Ponemon Institute and sponsored by IBM Security, cites $4.54 million as the average ransomware attack cost, not including the ransom itself.

In addition, a recent IDC study highlights that ransomware attacks are evolving in sophistication and value. Sensitive data is being exfiltrated at a higher rate as attackers go after the most valuable targets for their time and money. Ultimately, the cost of a ransomware attack can be significant, leading to reputational damage, loss of productivity and regulatory compliance implications.

Organizations Want Visibility, Control and Consistency

With a focus on breach containment and prevention, hybrid cloud infrastructure and application security, security teams are expressing their concerns. Three objectives have emerged as vital for them.

First, organizations want visibility. Gaining visibility empowers teams to understand their applications and data flows regardless of the underlying network and compute architecture.

Second, organizations want consistency. Fragmented and inconsistent segmentation approaches create complexity, risk and cost. Consistent policy creation and strategy help align teams across heterogeneous environments and facilitate the move to the cloud with minimal re-writing of security policy.

Finally, organizations want control. Solutions that help teams target and protect their most critical assets deliver the greatest return. Organizations want to control communications through selectively enforced policies that can expand and improve as their security posture matures towards zero trust security.

Microsegmentation Restricts Lateral Movement to Mitigate Threats

Microsegmentation (or simply segmentation) combines practices, enforced policies and software that provide user access where required and deny access everywhere else. Segmentation contains the spread of breaches across the hybrid attack surface by continually visualizing how workloads and devices communicate. In this way, it creates granular policies that only allow necessary communication and isolate breaches by proactively restricting lateral movement during an attack.

The National Institute of Standards and Technology (NIST) highlights microsegmentation as one of three key technologies needed to build a zero trust architecture, a framework for an evolving set of cybersecurity paradigms that move defense from static, network-based perimeters to users, assets and resources.

Suppose existing detection solutions fail and security teams lack granular segmentation. In that case, malicious software can enter their environment, move laterally, reach high-value applications and exfiltrate critical data, leading to catastrophic outcomes.

Ultimately, segmentation helps clients respond by applying zero trust principles like ‘assume a breach,’ helping them prepare in the wake of the inevitable.

IBM Launches Segmentation Security Services

In response to growing interest in segmentation solutions, IBM has expanded its security services portfolio with IBM Security Application Visibility and Segmentation Services (AVS). AVS is an end-to-end solution combining software with IBM consulting and managed services to meet organizations’ segmentation needs. Regardless of where applications, data and users reside across the enterprise, AVS is designed to give clients visibility into their application network and the ability to contain ransomware and protect their high-value assets.

AVS will walk you through a guided experience to align your stakeholders on strategy and objectives, define the schema to visualize desired workloads and devices and build the segmentation policies to govern network communications and ring-fence critical applications from unauthorized access. Once the segmentation policies are defined and solutions deployed, clients can consume steady-state services for ongoing management of their environment’s workloads and applications. This includes health and maintenance, policy and configuration management, service governance and vendor management.

IBM has partnered with Illumio, an industry leader in zero trust segmentation, to deliver this solution.  Illumio’s software platform provides attack surface visibility, enabling you to see all communication and traffic between workloads and devices across the entire hybrid attack surface. In addition, it allows security teams to set automated, granular and flexible segmentation policies that control communications between workloads and devices, only allowing what is necessary to traverse the network. Ultimately, this helps organizations to quickly isolate compromised systems and high-value assets, stopping the spread of an active attack.

With AVS, clients can harden compute nodes across their data center, cloud and edge environments and protect their critical enterprise assets.

Start Your Segmentation Journey

IBM Security Services can help you plan and execute a segmentation strategy to meet your objectives. To learn more, register for the on-demand webinar now.

The post Contain Breaches and Gain Visibility With Microsegmentation appeared first on Security Intelligence.

Seth Rogen’s ‘High-ly Creative Retreat’ Airbnb Begins Booking

8 February 2023 at 08:00

Feel like taking your creativity level… a bit higher? Available for booking beginning this week, Seth Rogen partnered with Airbnb to unveil “A High-ly Creative Retreat,” providing a unique getaway in Los Angeles with ceramic activities.

The retreat features a ceramic studio with Rogen’s own handmade pottery, a display of his cannabis and lifestyle company Houseplant’s unique Housegoods, as well as mid-century furnishings, and “sprawling views of the city.”

The Airbnb is probably a lot cheaper than you think: Rogen will host three, one-night stays on February 15, 16, and 17 for two guests each for just $42—one decimal point away from 420—with some restrictions. U.S. residents can book an overnight stay at Rogen’s Airbnb beginning Feb. 7, but book now, because it’s doubtful that open slots will last.

“I don’t know what’s more of a Houseplant vibe than a creative retreat at a mid-century Airbnb filled with our Housegoods, a pottery wheel, and incredible views of LA,” Rogen said. “Add me, and you’ll have the ultimate experience.”

According to the listing, and his Twitter account, Rogen will be there to greet people and even do ceramics together.

“I’m teaming up with Airbnb so you (or someone else) can hang out with me and spend the night in a house inspired by my company,” Rogen tweeted recently.

I'm teaming up with @airbnb so you (or someone else) can hang out with me and spend the night in a house inspired by my company Houseplant. https://t.co/7XFoY5vgm9 pic.twitter.com/ukW1UxnEm5

— Seth Rogen (@Sethrogen) January 31, 2023

Guests will be provided with the following activities:

  • Get glazed in the pottery studio and receive pointers from Rogen himself!
  • Peruse a selection of Rogen’s own ceramic masterpieces, proudly displayed within the mid-century modern home.
  • Relax and revel in the sunshine of the space’s budding yard.
  • Tune in and vibe out to a collection of Houseplant record sets with specially curated tracklists by Seth Rogen & Evan Goldberg and inspired by different cannabis strains. Guests will get an exclusive first listen to their new Vinyl Box Set Vol. 2.
  • Satisfy cravings with a fully-stocked fridge for after-hours snacks.

Airbnb plans to join in on Rogen’s charity efforts, including his non-profit Hilarity for Charity, focusing on helping people living with Alzheimer’s disease.

“In celebration of this joint effort, Airbnb will make a one-time donation to Hilarity for Charity, a national non-profit on a mission to care for families impacted by Alzheimer’s disease, activate the next generation of Alzheimer’s advocates, and be a leader in brain health research and education,” Airbnb wrote.

In 2021, Rogen launched Houseplant, his cannabis and lifestyle company, in the U.S. But the cannabis brand’s web traffic was so high that the site crashed. Houseplant was founded by Rogen and his childhood friend Evan Goldberg, along with Michael Mohr, James Weaver, and Alex McAtee.

Yahoo! News reports, however, that Airbnb does not (cough, cough) allow cannabis on the premises of listings. The listing, however, will be filled with goods from Houseplant. Houseplant also sells luxury paraphernalia with a “mid-century modern spin.”

Seth Rogen recently invited Architectural Digest to present a tour of the Houseplant headquarters’ interior decor and operations. Houseplant’s headquarters is located in a 1918 bungalow in Los Angeles. Architectural Digest describes it as “Mid-century-modern-inspired furniture creates a cozy but streamlined aesthetic.”

People living in the U.S. can request to book stays at airbnb.com/houseplant. Guests are responsible for their own travel to and from Los Angeles, California and comply with applicable COVID-19 rules and guidelines. 

See Rogen’s listing on the Airbnb site.

If you can’t find your way in, Airbnb provides over 1,600 other creative spaces available around the globe.

The post Seth Rogen’s ‘High-ly Creative Retreat’ Airbnb Begins Booking appeared first on High Times.

❌
❌