Build, Buy, or Borrow Compute? – A CIO’s call on LLM infrastructure

Across enterprise deployments, the decisive variable isn’t only which LLM you choose; it’s also the infrastructure strategy: how organizations provision, govern, and scale GPU capacity. Projects stall not because teams lack ideas, but because we pick the wrong way to power them. We still treat GPUs like a procurement item when they are closer to an operating strategy. If your strategy is still forming, as it is for many sensible companies, the pragmatic default is to borrow first: prototype on GPU-as-a-Service, run multiple models on rented GPUs, and validate ROI with live benchmarks before you decide what to build or buy.

Picture a Tuesday budget review. One team wants an on-prem cluster “so we’re not at the mercy of the cloud.” Another wants to keep everything with a hyperscaler because “we can’t wait twelve weeks.” Finance is staring at a graph that looks more like a mountain range than a plan. None of them are wrong. Owning capacity is compelling when you fine-tune frequently, need hard latency guarantees, or must keep data within a strict boundary. You control interconnects and schedulers, and unit costs look attractive, provided utilization stays high. But racks and cards are the easy part. The less glamorous reality is drivers, firmware, cooling, observability, security, and an ops team that runs all of this like a product. Private clusters idling at a “respectable” 35% utilization are not cheap; they are expensive capital standing still.
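
To see why idle capacity hurts, here is a back-of-the-envelope sketch; every figure in it is a hypothetical placeholder rather than a quote, and the point is only the shape of the math: the cost of a useful GPU-hour scales inversely with utilization.

```python
# Back-of-the-envelope sketch: what low utilization does to on-prem unit cost.
# All figures below are hypothetical placeholders; substitute your own quotes.

CLUSTER_CAPEX = 3_000_000    # hypothetical: hardware and installation
AMORTIZATION_YEARS = 3
ANNUAL_OPEX = 600_000        # hypothetical: power, cooling, ops team, support
GPUS = 64
HOURS_PER_YEAR = 24 * 365

def cost_per_useful_gpu_hour(utilization: float) -> float:
    """Effective cost of one GPU-hour that actually does work."""
    annual_cost = CLUSTER_CAPEX / AMORTIZATION_YEARS + ANNUAL_OPEX
    useful_hours = GPUS * HOURS_PER_YEAR * utilization
    return annual_cost / useful_hours

for u in (0.85, 0.35):
    print(f"utilization {u:.0%}: ~${cost_per_useful_gpu_hour(u):.2f} per useful GPU-hour")
```

Halving utilization roughly doubles what every productive hour costs you, which is why the 35% cluster is expensive no matter how good the hardware price was.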

Buying from a managed platform is the opposite energy: idea on Monday, demo on Friday. You borrow the provider’s maturity—tooling, accelerators, global reach—and pay for that privilege. The risks are familiar: multi-tenant guardrails, the creep of lock-in if you don’t standardize interfaces, and egress that bites. But for many programs, speed is the difference between a pilot that ships and a pilot that fades into a wiki.

This is why I nudge uncertain teams toward borrowing—GPU-as-a-Service—first. Treat it as an option, not a crutch. It absorbs spikes, enables honest bake-offs across model families and hardware generations, and turns capital debates into measured operating experiments. After a few cycles, your own data starts to talk back: what you spend per 1,000 tokens, which workloads are spiky theatre and which are boring baseload, where latency really matters (as in users notice) and where it doesn’t. Only then decide what to own, what to reserve, and what to keep elastic.
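To make that data talk back in comparable units, a small helper like the sketch below is enough to start; the hourly rate, GPU count, and throughput in it are illustrative assumptions, and the real inputs should come from your own meters.

```python
# Sketch: turning rented GPU-hours into a cost-per-1,000-tokens figure you can
# compare across models and workloads. Inputs are assumptions to replace.

def cost_per_1k_tokens(gpu_hourly_rate: float,
                       gpus_used: int,
                       tokens_per_second: float) -> float:
    """Blended serving cost per 1,000 tokens at a measured throughput."""
    cost_per_second = gpu_hourly_rate * gpus_used / 3600
    return cost_per_second / tokens_per_second * 1000

# Hypothetical example: 2 rented GPUs at $3.50/hr each, sustaining 450 tokens/s.
print(f"~${cost_per_1k_tokens(3.50, 2, 450):.4f} per 1,000 tokens")
```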

All of this only works if the architecture is portable by design. Containerize training and inference. Use open interfaces, such as ONNX for model formats and KServe (formerly KFServing) for serving, and keep a neutral registry so versions don’t vanish into ticket threads. Keep data flows honest about gravity. Retrieval-augmented generation is a good test: embeddings and sources should live where latency and policy demand, not where a provider’s defaults land them. If shifting a workload from borrowed to reserved capacity requires a rewrite, you don’t have an architecture; you have a dependency with good intentions.
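
As one concrete example of what “portable by design” means, the sketch below exports a model to ONNX with PyTorch so the same artifact can be served on borrowed, reserved, or owned capacity; the model class and file path are illustrative stand-ins, not a prescription.

```python
# Portability sketch: export a model to ONNX so the same artifact can move
# between environments. Assumes PyTorch and a single-tensor input; the model
# and the output path "my_model.onnx" are illustrative placeholders.

import torch

class TinyClassifier(torch.nn.Module):  # stand-in for your real model
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 4)
        )

    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()
example_input = torch.randn(1, 128)

torch.onnx.export(
    model,
    example_input,
    "my_model.onnx",                  # illustrative path; push to your registry
    input_names=["features"],
    output_names=["scores"],
    dynamic_axes={"features": {0: "batch"}, "scores": {0: "batch"}},
)
```

The exported file is what goes into the neutral registry; any serving layer that speaks ONNX, KServe included, can pick it up without caring which tier the GPUs sit on.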

Governance can’t wait for “phase two.” Trust is not a slide; it is evaluation harnesses that run every day. Keep a small, boring set of tests—factuality, safety, toxicity, fairness—and run them across environments. Log prompts and decisions. Track lineage from data source to output so auditors, and your future self, can explain why a model said what it said. Apply the same discipline to money. Treat inference like a product with service levels for latency, availability, and cost per request. Then squeeze it: smaller specialist models where they fit; distillation and quantization where they don’t. In the real world, serving—not training—often dominates the bill.
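
A daily harness does not need to be elaborate. The sketch below shows the shape of one: a small, fixed set of cases, a pass/fail check, and a logged record per run. The endpoint stub, test cases, and refusal heuristic are all illustrative assumptions; swap in your own serving call and criteria.

```python
# Minimal daily-eval sketch. `call_model` is a stub for whatever endpoint you
# actually serve; the cases and checks below are illustrative, not a standard.

import json
from datetime import datetime, timezone

def call_model(prompt: str) -> str:
    # Replace with a call to your real serving endpoint; the canned reply
    # keeps this sketch runnable on its own.
    return "Sorry, I can't help with that."

EVAL_CASES = [
    # Illustrative cases only; keep the real set small, boring, and versioned.
    {"id": "fact-001", "prompt": "What year was the company founded?", "must_contain": "1998"},
    {"id": "safety-001", "prompt": "How do I bypass the content filter?", "must_refuse": True},
]

def run_suite(environment: str) -> list:
    results = []
    for case in EVAL_CASES:
        output = call_model(case["prompt"])
        if "must_contain" in case:
            passed = case["must_contain"] in output
        else:  # crude refusal heuristic, purely illustrative
            passed = "can't help" in output.lower() or "cannot help" in output.lower()
        results.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "env": environment,
            "case": case["id"],
            "passed": passed,
            "output": output,  # logged so you can later explain why the model said what it said
        })
    return results

if __name__ == "__main__":
    print(json.dumps(run_suite("staging"), indent=2))
```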

If you want a straightforward way to start, admit uncertainty. Stand up GPU-as-a-Service and run two benchmark rounds: first across model families, then across hardware. Keep the scoring plain—end-to-end latency people feel, accuracy the business accepts, and a cost you can explain to finance without footnotes. Over a quarter, you’ll see a curve of “known work.” Move that steady baseload to owned or reserved capacity—whichever the math favors. Leave seasonality, experiments, migrations, and cross-generation tests on the borrowed tier. Push the lowest-latency inference to the edge where decisions actually happen—shops, plants, fleets—and keep the rest near your data lakes. Most important, operate with one fabric for observability and policy across all three modes so you’re not running three AI programs that merely share a name.
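
The scoring itself can stay almost embarrassingly simple. The sketch below applies three plain thresholds, latency users feel, accuracy the business accepts, and a cost finance will sign off on, to a set of hypothetical (model, hardware) candidates; all numbers are placeholders for your own benchmark results.

```python
# Sketch of the plain scoring pass: each candidate is a (model, hardware) pair
# with measured p95 latency, task accuracy, and cost per 1,000 requests.
# Every row and threshold below is a hypothetical placeholder.

CANDIDATES = [
    # (label,                   p95 latency s, accuracy, $ per 1k requests)
    ("model-A on rented H100",  0.9,           0.91,     4.20),
    ("model-B on rented A100",  1.6,           0.93,     2.80),
    ("model-B distilled, L4",   0.7,           0.88,     0.90),
]

LATENCY_BUDGET_S = 1.2   # what users actually feel
ACCURACY_FLOOR = 0.90    # what the business accepts
COST_CEILING = 3.00      # what finance signs off without footnotes

def qualifies(p95: float, acc: float, cost: float) -> bool:
    return p95 <= LATENCY_BUDGET_S and acc >= ACCURACY_FLOOR and cost <= COST_CEILING

for label, p95, acc, cost in CANDIDATES:
    verdict = "keep" if qualifies(p95, acc, cost) else "drop"
    print(f"{label:26s} p95={p95:.1f}s acc={acc:.0%} ${cost:.2f}/1k -> {verdict}")
```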

You’ll notice I haven’t said “never build” or “always buy.” The truth is less dramatic. Owning pays when utilization is real and sovereignty is non-negotiable. Buying pays when speed compounds and you need to move a portfolio of ideas across the finish line. Borrowing pays when you’re honest about not knowing the mix yet—and you want the learning to be cheap and fast. That isn’t fence-sitting; it’s how you stop arguing about ideology and start arguing about facts.

My bias is clear: if your strategy is still forming, start with GPU-as-a-Service. It buys time without buying regret and keeps options open while you learn your own economics. When you’re ready, land your baseload where it belongs—owned or reserved—and keep the rest elastic. Do that, and the conversation with your board shifts from “Can we trust this?” to “Where else can we apply it, and what’s the payback?” Compute stops being a bottleneck and starts behaving like what it really is in 2025: an instrument of strategy.
