
How adaptive infrastructure is evolving capabilities at the speed of business

19 January 2026 at 05:30

I’m not normally fond of year-end technology retrospectives, but 2025 was indeed a year of quantum leaps in the art of the possible. It filled us all with measured optimism, paired with some healthy and well-earned skepticism where AI is concerned. When I put architecture in perspective, I’m inclined to take the longer view of automation in all its variations over a decade. That’s why 2025 feels more like a footnote in a long series of events culminating in the perfect storm of opportunities we’ve been contemplating for some time now.

The composable infrastructure revolution

We’ve been moving toward self-aware, composable infrastructure in architecture for a while now and infrastructure-as-code was merely the first major inflection.

Let’s be honest, the old way of building IT infrastructure is breaking down. As an enterprise architect, I find the vicious cycle all too familiar. Tying agentic architecture demand patterns to legacy infrastructure without careful consideration is fraught with peril. The old pattern is predictable by now: you provision systems, maintain them reactively and eventually retire them. Rinse and repeat.

That model is now officially unsustainable in the age of AI. What’s taking its place? Composable and intelligent infrastructure that can proactively self-assemble, reconfigure and optimize on the fly to match what the business needs.

For IT leaders, this shift from rigid systems to modular, agent-driven infrastructure is both a breakthrough opportunity and a serious transformation challenge. And the numbers back this up: the global composable infrastructure market sits at $8.3 billion in 2025 and is projected to grow at 24.9% annually through 2032.

What’s driving this hyper-accelerated growth? Geopolitical disruptions, supply chain chaos and AI advances are reshaping how and where companies operate. Business environments are being driven by reactive and dynamic agentic experiences, transactions and digital partnerships everywhere, all the time. Static infrastructure simply can’t deliver that kind of flexibility, no matter how many marketing exercises describe solution offerings as “on-demand,” “utility-based,” “adaptive” and “composable.” Those labels are little more than half-truths.

A 2025 Forrester study commissioned by Microsoft found that 84% of IT leaders want solutions that consolidate edge and cloud operations across systems, sites and teams. As an architect in the consumer goods space, I watched our IT team produce slide decks about composable enterprises ad nauseam, yet infrastructure-as-code remained the ceiling of actual capability for some time.

Leaders wanted composable architecture that can pull together diverse components without hyperextended interoperability efforts. IBM’s research reinforces this, showing that companies with modular architectures are more agile, more resilient and faster to market — while also reducing the technical debt that slows everyone down.

The problem has been one of capacity and fitness for purpose. Legacy infrastructure and the underlying systems of record simply weren’t designed with agentic AI patterns in mind. My conversations with pan-industry architecture colleagues reflect the same crisis of expectation and resilience around agentic architectures.

Consider McKinsey’s 2025 AI survey that demonstrated 88% of organizations now use AI regularly in at least one business function and 62% are experimenting with AI agents. But most are stuck in pilot mode because their infrastructure can’t scale AI across the business.

If there are any winners in this race, they’ve broken apart their monolithic systems into modular pieces that AI agents can orchestrate based on what’s actually happening in real time.

AI agents: The new orchestration layer

So, what’s driving this shift? Agentic AI — systems that understand business context, figure out optimal configurations and execute complex workflows by pulling together infrastructure components on demand. This isn’t just standard automation following rigid, brittle scripts. Agents reason about what to assemble, how to configure it and when to reconfigure as conditions change.

The adoption curve is steep. BCG and MIT Sloan Management Review found that 35% of organizations already use agentic AI, with another 44% planning to jump in soon. The World Economic Forum reports 82% of executives plan to adopt AI agents within three years. McKinsey’s aforementioned State of AI research further highlights agentic AI as an emerging focus area for enterprise investment and describes AI agents as systems that can plan, take actions and orchestrate multi-step workflows with less human intervention than traditional automation.

As McKinsey puts it: “We’re entering an era where enterprise productivity is no longer just accelerated by AI — it’s orchestrated by it.” That’s a fundamental change in how infrastructure works.

IBM is betting big on this future, stating that “the future of IT operations is autonomous, policy-driven and hybrid by design.” They’re building environments where AI agents can orchestrate everything — public cloud, private infrastructure, on-premises systems, edge deployments — assembling optimal configurations for specific workloads and contexts. The scope of automation ranges from helpful recommendations to closed-loop fixes to fully autonomous optimization.

What composable architecture actually looks like

I recall no shortage of Lego-induced architecture references to composability over the last decade. Sadly, we conflated composability with domain services rather than with how business capabilities and automation could, and should, inform how the Legos are pieced together to solve problems. Traditional infrastructure comes as tightly integrated stacks: hard to decompose, inflexible and reactive. The new composable model flips this, offering modular building blocks that agents can intelligently assemble and reassemble dynamically based on what’s needed right now.

Composability demands modularity and responsive automation

The foundation is extreme modularity — breaking monolithic systems into discrete, independently deployable pieces with clean interfaces. Composable infrastructure lets you dynamically assemble and disassemble resources based on application demands, optimizing how pooled resources get allocated and improving overall efficiency.

This goes far beyond physical infrastructure to include services, data pipelines, security policies and workflows. When everything is modular and API-accessible, agents can compose complex solutions from simple building blocks and adapt in real time.
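The idea of discrete, independently deployable pieces with clean interfaces can be sketched in a few lines. The sketch below is purely illustrative, not a real product API: each "block" declares what it provides and requires, and a simple resolver stands in for the agent that assembles a composition. All class and capability names here are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    """A hypothetical composable infrastructure building block."""
    name: str
    provides: set                                # capabilities offered, e.g. {"compute"}
    requires: set = field(default_factory=set)   # capabilities it depends on

def compose(blocks, goal):
    """Greedily pick blocks until every capability in `goal` is provided
    and each picked block's own requirements are also satisfied."""
    chosen, needed = [], set(goal)
    while needed:
        cap = needed.pop()
        block = next((b for b in blocks if cap in b.provides), None)
        if block is None:
            raise ValueError(f"no block provides {cap!r}")
        if block not in chosen:
            chosen.append(block)
            # Pull in the block's requirements that nothing chosen provides yet.
            needed |= block.requires - {c for b in chosen for c in b.provides}
    return chosen

catalog = [
    Block("gpu-pool", {"compute"}, {"network"}),
    Block("object-store", {"storage"}),
    Block("vpc", {"network"}),
]
stack = compose(catalog, {"compute", "storage"})
print(sorted(b.name for b in stack))  # ['gpu-pool', 'object-store', 'vpc']
```

The point of the sketch is the interface, not the resolver: once everything is modular and describable, even a trivial planner can assemble a valid stack, and an AI agent can do the same with far richer context.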

Bringing cloud and edge together

Enterprise organizations are no longer treating cloud and edge as separate worlds requiring manual integration. The new approach treats all infrastructure — from hyperscale data centers to network edge — as a unified resource pool that agents can compose into optimal configurations.

McKinsey identifies edge-cloud convergence as essential for agentic AI: “Agents need real-time data access and low-latency environments. Combining edge compute (for inference and responsiveness) with cloud-scale training and storage is essential.” They further highlight how Hewlett Packard Enterprise (HPE) expanded its GreenLake platform in late 2024 with composable infrastructure hardware for hybrid and AI-driven workloads — modular servers and storage that let enterprises dynamically allocate resources based on real-time demand.

Agents running the show

Even IBM with its storied fixed-infrastructure history is all-in on agentic AI infrastructure capabilities — including agents and Model Context Protocol (MCP) servers — across its portfolio, making infrastructure components discoverable and composable by AI agents. These agents don’t just watch the infrastructure state; they actively orchestrate resources across enterprise data and applications, creating optimal configurations for specific workloads.

Management interfaces across IBM cloud, storage, power and Z platforms are becoming MCP-compatible services — turning infrastructure into building blocks that agents can reason about and orchestrate. Vendor-native agentic management solutions introduced similar AI-driven orchestration enhancements in 2024, letting large enterprises dynamically allocate resources across compute, storage and networking.

Self-aware and self-correcting infrastructure

Instead of manually configuring every component, composable architectures enable intent-based interfaces. You specify business objectives — support 10,000 concurrent users with sub-100ms latency at 99.99% availability — and agents figure out the infrastructure composition to make it happen.
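As a rough illustration of what an intent-based interface might look like, the sketch below declares the business objectives from the paragraph above and has a resolver (standing in for an agent) map them to a concrete composition. The thresholds, tiers and field names are all invented assumptions, not any vendor's actual API.

```python
# Declared business intent, not infrastructure detail.
intent = {
    "concurrent_users": 10_000,
    "p99_latency_ms": 100,
    "availability": 0.9999,
}

def resolve(intent):
    """Map declared objectives to a hypothetical infrastructure composition."""
    config = {"replicas": max(2, intent["concurrent_users"] // 2500)}
    # Low-latency targets pull serving toward the edge.
    config["placement"] = "edge" if intent["p99_latency_ms"] <= 100 else "region"
    # Four-nines availability implies a multi-zone footprint.
    config["zones"] = 3 if intent["availability"] >= 0.9999 else 1
    return config

print(resolve(intent))  # {'replicas': 4, 'placement': 'edge', 'zones': 3}
```

The operator never specifies replicas, placement or zones; those are derived. In a real agentic system the resolver would be a reasoning loop over live telemetry rather than three hard-coded rules, but the contract is the same: intent in, composition out.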

Emerging intelligent infrastructure player Quali describes this as “infrastructure that understands itself” — systems where agentic AI doesn’t just demand infrastructure that keeps up, but infrastructure built from composable components that agents can understand and orchestrate.

Getting scale and flexibility in real time

Traditional infrastructure forces a choice: optimize for scale or build for adaptability. As architects, there are clear opposing trade-offs we must navigate successfully: Scale relative to adaptability, investment versus sustaining operations, tight oversight versus autonomy and process refactoring versus process reinvention.

Composable architectures solve this by delivering both. The dual nature of agentic AI — part tool, part human-like — doesn’t fit traditional management frameworks. People are flexible but don’t scale. Tools scale but can’t adapt. Agentic AI on composable infrastructure gives you scalable adaptability — handling massive workloads while continuously reconfiguring for changing contexts.

Self-composability and evolved governance

Agent-orchestrated infrastructure demands governance that balances autonomy with control. The earlier-mentioned MIT Sloan Management Review and BCG study found that most agentic AI leaders anticipate significant changes to governance and decision rights as they adopt agentic AI. They recommend creating governance hubs with enterprise-wide guardrails and dynamic decision rights rather than approving individual AI decisions one by one.

The answer lies in policy-based composition, defining constraints that bound agent decisions without prescribing exact configurations. Within those boundaries, agents compose and recompose infrastructure autonomously.
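Policy-based composition can be made concrete with a small sketch: policies are named predicates that bound what an agent may do, and every proposed composition is checked against them before it is applied. The policy names, limits and proposal fields below are illustrative assumptions, not a real governance product.

```python
# Guardrails bound agent decisions without prescribing exact configurations.
POLICIES = [
    ("max_hourly_cost", lambda p: p["hourly_cost"] <= 500),
    ("approved_regions", lambda p: p["region"] in {"eu-west", "us-east"}),
    ("data_residency", lambda p: not (p["pii"] and p["region"] != "eu-west")),
]

def within_guardrails(proposal):
    """Return (ok, violations) for an agent-proposed composition."""
    violations = [name for name, check in POLICIES if not check(proposal)]
    return (len(violations) == 0, violations)

ok, why = within_guardrails({"hourly_cost": 120, "region": "eu-west", "pii": True})
print(ok, why)   # True []
ok, why = within_guardrails({"hourly_cost": 120, "region": "us-east", "pii": True})
print(ok, why)   # False ['data_residency']
```

The design choice worth noting is that rejection comes with named reasons: the agent can recompose within the boundary rather than wait for a human to approve each decision, which is exactly the dynamic decision-rights model the study describes.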

When AI agents continuously compose and recompose resources, you need governance frameworks that look nothing like traditional change management. A model registry that includes MCP connects different large language models while implementing guardrails for analytics, security, privacy and compliance. This treats AI as an agent whose decisions must be understood, managed and learned from — not as an infallible tool.

Making it happen in 2026

What should IT leaders do? Here are the most critical moves from my perspective.

Redesign work around agents first. Use agentic AI’s capacity for scalability and broad adaptation within parameterized governance automation, rather than automating isolated tasks. Almost two-thirds of agentic AI leaders expect operating model changes. Build workflows that shift smoothly between efficiency and problem-solving modes.

Rethink roles for human-agent collaboration. Agents are an architect’s new partner. Reposition your role in the enterprise to adopt and embrace portfolios of AI agents that coordinate workflows; as they do, traditional management layers will change. Expect fewer middle-management layers, with managers evolving to orchestrate hybrid human-AI teams. Consider dual career paths for generalist orchestrators and AI-augmented specialists.

Keep investments tied to value. Agentic AI leaders anchor investments to value, whether efficiency, innovation, revenue growth or some combination. Agentic systems are evolving from finite, single-function agents to multi-agent collaborators, from narrowly scoped tasks to broadly orchestrated workflows spanning other ecosystems and agents, and from operational support to strategic, human-mediated partnership.

The bottom line

The companies that will win in the next decade will recognize composability as the foundation of adaptive infrastructure. When every part of the technology stack becomes a modular building block and intelligent agents compose those blocks into optimal configurations based on real-time context, infrastructure becomes a competitive advantage instead of a constraint.

Organizations that understand agentic AI’s dual nature and align their processes, governance, talent and investments accordingly will realize its full business value. My architect’s perspective is that agentic AI will challenge established management approaches and, yes, even convince many of its ability to defy gravity. But with the right strategy and execution, it won’t just offer empty promises — it will deliver results. Further, our grounded expectations around the capacity of aging infrastructure and legacy demand patterns must guide us in ensuring we make intelligent decisions.

The question isn’t whether to embrace composable, agent-orchestrated infrastructure. It’s how fast you can decompose monolithic systems, build orchestration capabilities and establish the governance to make it work.

This article was made possible by our partnership with the IASA Chief Architect Forum. The CAF’s purpose is to test, challenge and support the art and science of Business Technology Architecture and its evolution over time, as well as grow the influence and leadership of chief architects both inside and outside the profession. The CAF is a leadership community of the IASA, the leading non-profit professional association for business technology architects. 

This article is published as part of the Foundry Expert Contributor Network.

10 top priorities for CIOs in 2026

19 January 2026 at 05:01

A CIO’s wish list is typically long and costly. Fortunately, by establishing reasonable priorities, it’s possible to keep pace with emerging demands without draining your team or budget.

As 2026 arrives, CIOs need to take a step back and consider how they can use technology to help reinvent their wider business while running their IT capabilities with a profit and loss mindset, advises Koenraad Schelfaut, technology strategy and advisory global lead at business advisory firm Accenture. “The focus should shift from ‘keeping the lights on’ at the lowest cost to using technology … to drive topline growth, create new digital products, and bring new business models faster to market.”

Here’s an overview of what should be at the top of your 2026 priorities list.

1. Strengthening cybersecurity resilience and data privacy

Enterprises are increasingly integrating generative and agentic AI deep into their business workflows, spanning all critical customer interactions and transactions, says Yogesh Joshi, senior vice president of global product platforms at consumer credit reporting firm TransUnion. “As a result, CIOs and CISOs must expect bad actors will use these same AI technologies to disrupt these workflows to compromise intellectual property, including customer sensitive data and competitively differentiated information and assets.”

Cybersecurity resilience and data privacy must be top priorities in 2026, Joshi says. He believes that as enterprises accelerate their digital transformation and increasingly integrate AI, the risk landscape will expand dramatically. “Protecting sensitive data and ensuring compliance with global regulations is non-negotiable,” Joshi states.

2. Consolidating security tools

CIOs should prioritize re-baselining their foundations to capitalize on the promise of AI, says Arun Perinkolam, Deloitte’s US cyber platforms and technology, media, and telecommunications industry leader. “One of the prerequisites is consolidating fragmented security tools into unified, integrated, cyber technology platforms — also known as platformization.”

Perinkolam says a consolidation shift will move security from a patchwork of isolated solutions to an agile, extensible foundation fit for rapid innovation and scalable AI-driven operations. “As cyber threats become increasingly sophisticated, and the technology landscape evolves, integrating cybersecurity solutions into unified platforms will be crucial,” he says.

“Enterprises now face a growing array of threats, resulting in a sprawling set of tools to manage them,” Perinkolam notes. “As adversaries exploit fractured security postures, delaying platformization only amplifies these risks.”

3. Ensuring data protection

To take advantage of enhanced efficiency, speed, and innovation, organizations of all types and sizes are now racing to adopt new AI models, says Parker Pearson, chief strategy officer at data privacy and preservation firm Donoma Software.

“Unfortunately, many organizations are failing to take the basic steps necessary to protect their sensitive data before unleashing new AI technologies that could potentially be left exposed,” she warns, adding that in 2026 “data privacy should be viewed as an urgent priority.”

Implementing new AI models can raise significant concerns around how data is collected, used, and protected, Pearson notes. These issues arise across the entire AI lifecycle, from how the data is used for initial training to ongoing interactions with the model. “Until now, the choices for most enterprises are between two bad options: either ignore AI and face the consequences in an increasingly competitive marketplace; or implement an LLM that could potentially expose sensitive data,” she says. Both options, she adds, can result in an enormous amount of damage.

The question for CIOs is not whether to implement AI, but how to derive optimal value from AI without placing sensitive data at risk, Pearson says. “Many CIOs confidently report that their organization’s data is either ‘fully’ or ‘end to end’ encrypted.” Yet Pearson believes that true data protection requires continuous encryption that keeps information secure during all states, including when it’s being used. “Until organizations address this fundamental gap, they will continue to be blindsided by breaches that bypass all their traditional security measures.”

Organizations that implement privacy-enhancing technology today will have a distinct advantage in implementing future AI models, Pearson says. “Their data will be structured and secured correctly, and their AI training will be more efficient right from the start, rather than continually incurring the expense, and risk of retraining their models.”

4. Focusing on team identity and experience

A top priority for CIOs in 2026 should be resetting their enterprise identity and employee experience, says Michael Wetzel, CIO at IT security software company Netwrix. “Identity is the foundation of how people show up, collaborate, and contribute,” he states. “When you get identity and experience right, everything else, including security, productivity, and adoption, follows naturally.”

Employees expect a consumer-grade experience at work, Wetzel says. “If your internal technology is clunky, they simply won’t use it.” When people work around IT, the organization loses both security and speed, he warns. “Enterprises that build a seamless, identity-rooted experience will innovate faster while organizations that don’t will fall behind.”

5. Navigating increasingly costly ERP migrations

Effectively navigating costly ERP migrations should be at the top of the CIO agenda in 2026, says Barrett Schiwitz, CIO at invoice lifecycle management software firm Basware. “SAP S/4HANA migrations, for instance, are complex and often take longer than planned, leading to rising costs.” He notes that upgrades can cost enterprises upwards of $100 million, rising to as much as $500 million depending on the ERP’s size and complexity.

The problem is that while ERPs try to do everything, they rarely perform specific tasks, such as invoice processing, really well, Schiwitz says. “Many businesses overcomplicate their ERP systems, customizing them with lots of add-ons that further increase risk.” The answer, he suggests, is adopting a “clean core” strategy that lets SAP do what it does best and then supplement it with best-in-class tools to drive additional value.

6. Doubling down on innovation — and data governance

One of the most important priorities for CIOs in 2026 is architecting a foundation that makes innovation scalable, sustainable, and secure, says Stephen Franchetti, CIO at compliance platform provider Samsara.

Franchetti says he’s currently building a loosely coupled, API-first architecture that’s designed to be modular, composable, and extensible. “This allows us to move faster, adapt to change more easily, and avoid vendor or platform lock-in.” Franchetti adds that in an era where workflows, tools, and even AI agents are increasingly dynamic, a tightly bound stack simply won’t scale.

Franchetti is also continuing to evolve his enterprise data strategy. “For us, data is a long-term strategic asset — not just for AI, but also for business insight, regulatory readiness, and customer trust,” he says. “This means doubling down on data quality, lineage, governance, and accessibility across all functions.”

7. Facilitating workforce transformation

CIOs must prioritize workforce transformation in 2026, says Scott Thompson, a partner in executive search and management consulting company Heidrick & Struggles. “Upskilling and reskilling teams will help develop the next generation of leaders,” he predicts. “The technology leader of 2026 needs to be a product-centric tech leader, ensuring that product, technology, and the business are all one and the same.”

CIOs can’t hire their way out of the talent gap, so they must build talent internally, not simply buy it on the market, Thompson says. “The most effective strategy is creating a digital talent factory with structured skills taxonomies, role-based learning paths, and hands-on project rotations.”

Thompson also believes that CIOs should redesign job roles for an AI-enabled environment and use automation to reduce the amount of specialized labor required. “Forming fusion teams will help spread scarce expertise across the organization, while strong career mobility and a modern engineering culture will improve retention,” he states. “Together, these approaches will let CIOs grow, multiply, and retain the talent they need at scale.”

8. Improving team communication

A CIO’s top priority should be developing sophisticated and nuanced approaches to communication, says James Stanger, chief technology evangelist at IT certification firm CompTIA. “The primary effect of uncertainty in tech departments is anxiety,” he observes. “Anxiety takes different forms, depending upon the individual worker.”

Stanger suggests working more closely with team members, as well as managing anxiety through more effective and relevant training.

9. Strengthening the capabilities that drive agility, trust, and scale

Beyond AI, the priority for CIOs in 2026 should be strengthening the enabling capabilities that drive agility, trust, and scale, says Mike Anderson, chief digital and information officer at security firm Netskope.

Anderson feels that the product operating model will be central to this shift, expanding beyond traditional software teams to include foundational enterprise capabilities, such as identity and access management, data platforms, and integration services.

“These capabilities must support both human and non-human identities — employees, partners, customers, third parties, and AI agents — through secure, adaptive frameworks built on least-privileged access and zero trust principles,” he says, noting that CIOs who invest in these enabling capabilities now will be positioned to move faster and innovate more confidently throughout 2026 and beyond.

10. Addressing an evolving IT architecture

In 2026, today’s IT architecture will become a legacy model, unable to support the autonomous power of AI agents, predicts Emin Gerba, chief architect at Salesforce. He believes that in order to effectively scale, enterprises will have to pivot to a new agentic enterprise blueprint with four new architectural layers: a shared semantic layer to unify data meaning, an integrated AI/ML layer for centralized intelligence, an agentic layer to manage the full lifecycle of a scalable agent workforce, and an enterprise orchestration layer to securely manage complex, cross-silo agent workflows.

“This architectural shift will be the defining competitive wedge, separating companies that achieve end-to-end automation from those whose agents remain trapped in application silos,” Gerba says.

Architecture and decision-making

15 January 2026 at 05:00

Over my 20+ years working in software architecture and engineering leadership, I have come to believe that architecture is far more than technology. It is in many ways a form of leadership. At the center of that leadership is the ability to make sound decisions in moments of uncertainty. High-quality decisions that balance what the design and team can do today with what the business will need tomorrow.

We often celebrate architectural knowledge, such as patterns, frameworks, abstractions and styles, all of which are essential. But in reality, the thing that separates good architects from great ones is not knowledge. It is judgment.

I believe judgment has been the quiet force behind every meaningful architectural decision I’ve made. It has guided me to deal with uncertainty, to weigh trade-offs, to balance the ideal with the practical and to design for teams that are still growing into the systems while we are building them. I’ve found that knowledge helps one understand what is possible, and judgment helps one decide what is right to do.

This idea is reinforced in many of the great books I’ve learned from, whether it’s The Hard Thing About Hard Things by Ben Horowitz or Team of Teams by General Stanley McChrystal. These books talk about leading through uncertainty, making tough choices and guiding people through complexity. Interestingly, that’s also exactly what software architects do every day.

In this article, I will share what I’ve learned about how leadership, technology and product thinking come together to shape better decisions – and how I learned to navigate the messy reality of building software while keeping one eye on the horizon.

Context: Architecture leadership and why it matters

When I think about architecture, I don’t think about diagrams. I think about high-quality decisions. Architecture is ultimately about shaping systems that deliver value. You can create the most elegant system in the world, but if it doesn’t meet its delivery timelines or if it is too complex for the team to operate, or if it doesn’t actually solve the user’s problem, then it’s not good architecture.

My approach has always centred on building systems that achieve the right return on investment. ROI is not just about cost efficiency or saving money. Sometimes the best ROI comes from spending more upfront to create long-term leverage. Other times, ROI comes from choosing the simplest possible path to meet a pressing market deadline. The job of the architect is to weigh these forces not just theoretically but practically.

For me, architectural leadership is about helping teams navigate these decisions without getting trapped in the pursuit of perfection. It’s about understanding users, understanding business priorities and understanding the people who will build and operate the system. It requires the ability to communicate a vision, reduce uncertainty and guide teams through moments when we simply don’t have all the answers.

That’s why I say architecture leadership sits right at the intersection of three worlds: technology, product and people.

Navigating uncertainty in software architecture

Uncertainty is something I’ve learned to live with; it’s the constant backdrop of architectural decision-making. Rarely do I get perfect requirements. Rarely do I know exactly how a system will evolve or behave at scale. And yet, I still need to make decisions that feel concrete, meaningful and aligned with the future, even when the future is a bit blurry.

I often think about Napoleon’s line: “A leader is a dealer in hope.” It’s a surprisingly accurate reflection of what software architects do. We bring clarity where things are messy. We can’t predict the future, but we must still articulate a path forward when ambiguity is high. And we do it not because we know everything, but because we can see just enough to guide the next few steps.

Some of the best leadership books I’ve read, like Richard Rumelt’s Good Strategy Bad Strategy and Eric Schmidt’s Trillion Dollar Coach, prioritize judgment as a top leadership quality: the ability to navigate ambiguity and make high-quality, gutsy decisions.

Meanwhile, leading technical books, from Martin Fowler’s writing to Bob Martin’s Clean Architecture, prioritize the knowledge and vocabulary needed to act on that judgment.

For me, the role of an architect is to operate in the space where these two worlds, knowledge and judgment, overlap.

My personal experience: Data architecture — designing for reality, then evolving

Let me share a personal story that taught me the importance of architectural judgment more than any textbook ever could. It happened during a major data platform initiative that I was leading from the architecture front. We had a tight deadline, multiple data sources to onboard and a team that was still early in its journey with cloud-native data ingestion and distributed systems.

The long-term vision was clear in my head. We needed a generic ingestion framework built around an adapter pattern. In this model, each data source would plug into a common interface, giving us consistency, maintainability and the flexibility to evolve and scale over time. It was the right architecture for the future.

But the real question for me was: was it the right architecture for that moment?

In my view, the team was not yet ready to build such a framework. They needed real, hands-on experience with ingestion patterns, schema evolution, data quality issues and the messy operational challenges that only appear once real production traffic hits the system. If we pushed ahead prematurely, not only would we miss the deadline, but we would likely create abstractions that were elegant in theory yet mismatched to reality.

So it was time for a pragmatic judgment call. Do we start with a managed ingestion service? Something reliable, something we could operate easily, something that delivered value quickly? I had to decide between a cloud-native managed ingestion service and the generic ingestion framework. I chose the managed service. That approach allowed us to ship on time, drastically reduced operational risk and gave the team space to build foundational knowledge about operating a large-scale data ingestion service.

From the outside, the choice of proceeding with cloud-native data ingestion looked simple, maybe even ordinary. But internally, from a business perspective, it was a turning point. The team gained the confidence that comes from operating production data ingestion services. It allowed us to observe and learn which patterns we needed, what was working, what wasn’t and, more importantly, which patterns we didn’t need. And of course, the big win was that we delivered business value immediately.

Fast forward to the following year. The team had grown. We had clear insights into performance bottlenecks, failure modes and necessary abstractions. With this experience, we revisited the long-term vision and built the generic ingestion framework we originally imagined. This time, the architecture almost designed itself. The adapter pattern made perfect sense, and the managed service became just one of the plug-in connectors behind the framework.
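The shape of that final design can be sketched briefly. This is a generic illustration of the adapter pattern described above, with invented class and method names: every data source plugs into one common interface, and the managed ingestion service becomes just another connector behind the framework.

```python
from abc import ABC, abstractmethod

class SourceAdapter(ABC):
    """Common interface every data source plugs into."""
    @abstractmethod
    def read_batch(self) -> list[dict]:
        """Return the next batch of records in a normalized shape."""

class ManagedServiceAdapter(SourceAdapter):
    """The managed ingestion service, now just one plug-in connector."""
    def read_batch(self):
        # In reality this would call the managed service's API.
        return [{"source": "managed", "id": 1}]

class SftpAdapter(SourceAdapter):
    """A hypothetical second source behind the same interface."""
    def read_batch(self):
        return [{"source": "sftp", "id": 2}]

def ingest(adapters: list[SourceAdapter]) -> list[dict]:
    """The framework is indifferent to where records come from."""
    records = []
    for adapter in adapters:
        records.extend(adapter.read_batch())
    return records

print(ingest([ManagedServiceAdapter(), SftpAdapter()]))
```

Onboarding a new source means writing one adapter, not touching the framework, which is exactly the consistency and flexibility the original vision called for.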

That decision started pragmatically, evolved strategically and remains one of the clearest examples of architectural judgment in my career. It was not just about technical correctness. It was about doing the right thing at the right time for the right reasons.

That experience reinforced something I firmly believe: Architecture is not about building the perfect system. It’s about making the right decisions based on the strengths you have today, from both a technical and a people perspective, while keeping a future state in mind so that the architecture can evolve as you move toward it.

Architecture judgment: 5 questions that shape decision-making

Over the years, I’ve discovered that asking the right questions matters even more than trying to find the perfect answers. This idea is strongly reinforced by Srinath Perera in his work on software architecture and decision-making, where he points out that good questions force us to think, uncover details and reshape our understanding of the problem. I’ve always resonated with Srinath’s observation that questions ground us in concrete situations rather than abstract ideals because it’s often the pursuit of those ideals that leads projects into trouble. In my experience, these questions are not just diagnostic tools, they are catalysts for clarity. They help me scope a system, cut through noise and understand what truly matters in the moment.

Q1: When is the best time to market?

The first question concerns timing. Timing is a decisive force in architecture. If you are racing to meet a product launch or capture a market opportunity, then simplicity and speed matter far more than creating the most elegant design. If you have more breathing room, you can invest more deeply in long-term foundations.

Q2: What is the skill level of the team?

The second question is about team skill level. Architecture doesn’t live in PowerPoint. It lives in the hands of the engineers who build and operate the system. A design that is beyond the team’s current capability is not an architecture; it’s a liability. This doesn’t mean lowering standards; it means aligning ambition with reality and growing the team along the way.

Q3: How sensitive is the system to performance?

The third question relates to scaling and performance. Not every system needs millisecond latency or hyper-efficient throughput, but the ones that do require early architectural consideration. Misjudging this can lead to expensive rewrites or operational nightmares. Conversely, over-optimizing a system that doesn’t require it is wasted effort and cognitive load.

Q4: When can we rewrite the system?

The fourth question is about future state architecture. Every architecture has a lifespan. If you expect to revisit the design in the near future, you can accept more shortcuts and tactical decisions. But if the system is expected to live for years, then foundational decisions like data models, domain boundaries and communication patterns will require greater care.

Q5: What are the hard problems?

The final question is about hard architecture problems, whether in patterns or technology, and how to address them early. Every system has one or two truly difficult challenges: data consistency, security boundaries, real-time performance or scaling complexity. These hard problems should be tackled early, even if it means building prototypes or running parallel explorations. De-risking them can change the entire trajectory of the project.

These five questions help me navigate uncertainty with clarity. While they echo many of the themes Srinath Perera outlines, particularly the idea that questions reshape our understanding, they have also evolved with my own experience. They’ve become part of how I guide teams, evaluate trade-offs and build architectures that hold up in the real world.

Conclusion

Architecture is not the pursuit of perfection. It is the pursuit of appropriateness. Knowledge gives us options, but judgment helps us choose the right path for the moment we are in, the team we have and the product we are trying to build. The best architects bring together technology expertise, product intuition and leadership judgment into a single discipline.

This article is published as part of the Foundry Expert Contributor Network.

The tech leadership realizing more than the sum of parts

14 January 2026 at 05:00

Waiting on replacement parts can be more than just an inconvenience; it can mean a sharp loss of income and opportunity. This is especially true for those who depend on industrial tools and equipment for agriculture and construction. So to keep things running as efficiently as possible, Parts ASAP CIO John Fraser makes end-customer satisfaction the highest motivation for getting tech implementation and distribution right.

“What it comes down to, in order to achieve that, is the team,” he says. “I came into this organization because of the culture, and the listen first, act later mentality. It’s something I believe in and I’m going to continue that culture.”

Bringing in talent and new products has been instrumental in creating a stable e-commerce model, so Fraser and his team can help digitally advertise to customers, establish the right partnerships to drive traffic, and provide the right amount of data.

“Once you’re a customer of ours, we have to make sure we’re a needs-based business,” he says. “We have to be the first thing that sticks in their mind because it’s not about a track on a Bobcat that just broke. It’s $1,000 a day someone’s not going to make due to a piece of equipment that’s down.”

Ultimately, this strategy supports customers with a collection of highly integrated tools to create an immersive experience. But the biggest challenge, says Fraser, is the variety of marketplace channels customers are on.

“Some people prefer our website,” he says. “But some are on Walmart or about 20 other commercial channels we sell on. Each has unique requirements, ways to purchase, and product descriptions. On a single product, we might have 20 variations to meet the character limits of eBay, for instance, or the brand limitations of Amazon. So we’ve built out our own product information management platform. It takes the right talent to use that technology and a feedback loop to refine the process.”
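The core mechanic of a product information management platform like the one Fraser describes is deriving per-channel variations from one master record. A minimal sketch of that idea, using made-up channel limits rather than the real eBay or Amazon rules:

```python
# Hypothetical per-channel constraints; real marketplace rules differ
# and also cover brand terms, imagery and attribute formats.
CHANNEL_RULES = {
    "website": {"max_title": 200},
    "ebay":    {"max_title": 60},
    "amazon":  {"max_title": 120},
}

def build_variation(master_title: str, channel: str) -> str:
    """Derive a channel-specific title that fits the channel's limit."""
    limit = CHANNEL_RULES[channel]["max_title"]
    if len(master_title) <= limit:
        return master_title
    # Cut at the last whole word that fits, then mark the truncation.
    cut = master_title[:limit - 1].rsplit(" ", 1)[0]
    return cut + "…"

master = "Rubber Track for Bobcat T190 Compact Track Loader, 320x86x49, Heavy Duty"
variations = {ch: build_variation(master, ch) for ch in CHANNEL_RULES}
```

In a real PIM the rules table grows into validated channel profiles, but the shape is the same: one source of truth, many derived listings, which is why a feedback loop and the right talent matter more than the truncation logic itself.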

Of course, AI is always in the conversation since people can’t write updated descriptions for 250,000 SKUs.

“AI will fundamentally change what everybody’s job is,” he says. “I know I have to prepare for it and be forward thinking. We have to embrace it. If you don’t, you’re going to get left behind.”

Fraser also details practical AI adoption in terms of pricing, product data enhancement, and customer experience, while stressing experimentation without over-dependence. Watch the full video below for more insights, and be sure to subscribe to the monthly Center Stage newsletter.

On consolidating disparate systems: You certainly run into challenges. People are on the same ERP system so they have some familiarity. But even within that, you have massive amounts of customization. Sometimes that’s very purpose-built for the type of process an organization is running, or that unique sales process, or whatever. But in other cases, it’s very hard. We’ve acquired companies with their own custom built ERP platform, where they spent 20 years curating it down to eliminate every button click. Those don’t go quite as well, but you start with a good culture, and being transparent with employees and customers about what’s happening, and you work through it together. The good news is it starts with putting the customer first and doing it in a consistent way. Tell people change is coming and build a rapport before you bring in massive changes. There are some quick wins and efficiencies, and so people begin to trust. Then, you’re not just dragging them along but bringing them along on the journey.

On AI: Everybody’s talking about it, but there’s a danger to that, just like there was a danger with blockchain and other kinds of immersive technologies. You have to make sure you know why you’re going after AI. You can’t just use it because it’s a buzzword. You have to bake it into your strategy and existing use cases, and then leverage it. We’re doing it in a way that allows us to augment our existing strategy rather than completely and fundamentally change it. So for example, we’re going to use AI to help influence what our product pricing should be. We have great competitive data, and a great idea of what our margins need to be and where the market is for pricing. Some companies are in the news because they’ve gone all in on AI, and AI is doing some things that are maybe not so appropriate in terms of automation. But if you can go in and have it be a contributing factor to a human still deciding on pricing, that’s where we are rather than completely handing everything over to AI.

On pooling data: We have a 360-degree view of all of our customers. We know when they’re buying online and in person. If they’re buying construction equipment and material handling equipment, we’ll see that. But when somebody’s buying a custom fork for a forklift, that’s very different than someone needing a new water pump for a John Deere tractor. And having a manufacturing platform that allows us to predict a two and a half day lead time on that custom fork is a different system to making sure that water pump is at your door the next day. Trying to do all that in one platform just hasn’t been successful in my experience in the past. So we’ve chosen to take a bit of a hybrid approach where you combine the data but still have best in breed operational platforms for different segments of the business.

On scaling IT systems: The key is we’re not afraid to have more than one operational platform. Today, in our ecosystem of 23 different companies, we’re manufacturing parts in our material handling business, and that’s a very different operational platform than, say, purchasing overseas parts, bringing them in, and finding a way to sell them to people in need, where you need to be able to distribute them fast. It’s an entirely different model. So we’re not establishing one core platform in that case, but the right amount of platforms. It’s not 23, but it’s also not one. So as we think about being able to scale, it’s also saying that if you try to be all things to all people, you’re going to be a jack of all trades and an expert in none. So we want to make sure when we have disparate segments that have some operational efficiency in the back end — same finance team, same IT teams — we’ll have more than one operational platform. Then through different technologies, including AI, ensure we have one view of the customer, even if they’re purchasing out of two or three different systems.

On tech deployment: Experiment early and then make certain not to be too dependent on it immediately. We have 250,000 SKUs, and more than two million parts that we can special order for our customers, and you can’t possibly augment that data with a world-class description with humans. So we selectively choose how to make the best product listing for something on Amazon or eBay. But we’re using AI to build enhanced product descriptions for us, and instead of having, say, 10 people curating and creating custom descriptions for these products, we’re leveraging AI and using agents in a way that allows people to build the content. Now humans are simply approving, rejecting, or editing that content, so we’re leveraging them for the knowledge they need to have, and whether this is going to be a good product listing or not. We know there are thousands of AI companies, and for us to be able to pick a winner or loser is a gamble. Our approach is to make it a bit of a commoditized service. But we’re also pulling in that data and putting it back into our core operational platform, and there it rests. So if we’re with the wrong partner, or they get acquired, or go out of business, we can switch quickly without having to rewrite our entire set of systems because we take it in, use it a bit as a commoditized service, get the data, set it at rest, and then we can exchange that AI engine. We’ve already changed it five times and we’re okay to change it another five until we find the best possible partner so we can stay bleeding edge without having all the expense of building it too deeply into our core platforms.
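The swappable-engine approach Fraser describes boils down to a thin provider interface: generated content flows through it, lands at rest in the core platform, and the vendor behind the interface can be exchanged without touching downstream systems. All names here are illustrative, not Parts ASAP’s actual code:

```python
from typing import Protocol

class DescriptionEngine(Protocol):
    """Any AI vendor plugs in by implementing this one method."""
    def describe(self, sku: str, attributes: dict) -> str: ...

class VendorAEngine:
    def describe(self, sku: str, attributes: dict) -> str:
        # A real engine would call the vendor's API; stubbed for illustration.
        return f"{attributes['name']} ({sku}): durable replacement part."

class ProductCatalog:
    """Core platform: generated text is stored here, at rest, so the
    engine can be swapped without rewriting downstream systems."""

    def __init__(self, engine: DescriptionEngine):
        self.engine = engine
        self.descriptions: dict[str, str] = {}

    def enrich(self, sku: str, attributes: dict) -> None:
        draft = self.engine.describe(sku, attributes)
        # In Fraser's workflow a human approves, rejects or edits the
        # draft before it ships; stored here once accepted.
        self.descriptions[sku] = draft

    def swap_engine(self, engine: DescriptionEngine) -> None:
        self.engine = engine  # existing stored data is untouched

catalog = ProductCatalog(VendorAEngine())
catalog.enrich("WP-1010", {"name": "Water pump"})
```

Because the catalog owns the data and the engine only produces drafts, "we've already changed it five times" becomes a one-line `swap_engine` call rather than a migration project, which is the whole commoditized-service bet.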
