โŒ

Reading view

There are new articles available, click to refresh the page.

Vertical AI development agents are the future of enterprise integrations

Enterprise Application Integration (EAI) and modern iPaaS platforms have become two of the most strategically important – and resource-constrained – functions inside today's enterprises. As organizations scale SaaS adoption, modernize core systems, and automate cross-functional workflows, integration teams face mounting pressure to deliver faster while upholding strict architectural, data quality, and governance standards.

AI has entered this environment with the promise of acceleration. But CIOs are discovering a critical truth:

Not all AI is built for the complexity of enterprise integrations – whether in traditional EAI stacks or modern iPaaS environments.

Generic coding assistants such as Cursor or Claude Code can boost individual productivity, but they struggle with the pattern-heavy, compliance-driven reality of integration engineering. What looks impressive in a demo often breaks down under real-world EAI/iPaaS conditions.

This widening gap has led to the rise of a new category: Vertical AI Development Agents – domain-trained agents purpose-built for integration and middleware development. Companies like CurieTech AI are demonstrating that specialized agents deliver not just speed, but materially higher accuracy, higher-quality outputs, and far better governance than general-purpose tools.

For CIOs running mission-critical integration programs, that difference directly affects reliability, delivery velocity, and ROI.

Why EAI and iPaaS integrations are not a "Generic Coding" problem

Integrations – whether built on legacy middleware or modern iPaaS platforms – operate within a rigid architectural framework:

  • multi-step orchestration, sequencing, and idempotency
  • canonical data transformations and enrichment
  • platform-specific connectors and APIs
  • standardized error-handling frameworks
  • auditability and enterprise logging conventions
  • governance and compliance embedded at every step

Generic coding models are not trained on this domain structure. They often produce code that looks correct, yet subtly breaks sequencing rules, omits required error handling, mishandles transformations, or violates enterprise logging and naming standards.

Vertical agents, by contrast, are trained specifically to understand flow logic, mappings, middleware orchestration, and integration patterns – across both EAI and iPaaS architectures. They don't just generate code – they reason in the same structures architects and ICC teams use to design integrations.

This domain grounding is the critical distinction.

The hidden drag: Context latency, expensive context managers, and prompt fatigue

Teams experimenting with generic AI encounter three consistent frictions:

Context latency

Generic models cannot retain complex platform context across prompts. Developers must repeatedly restate platform rules, logging standards, retry logic, authentication patterns, and canonical schemas.

Developers become "expensive context managers"

A seemingly simple instruction – "Transform XML to JSON and publish to Kafka" – quickly devolves into a series of corrective prompts:

  • "Use the enterprise logging format."
  • "Add retries with exponential backoff."
  • "Fix the transformation rules."
  • "Apply the standardized error-handling pattern."

Developers end up managing the model instead of building the solution.
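To make that overhead concrete, here is a minimal sketch of what the "simple" XML-to-Kafka request tends to expand into once enterprise standards are layered on. It is illustrative only: the logging fields, topic handling, and the publish_to_kafka stub are placeholders rather than any platform's actual connector or standard.

```python
import json
import logging
import time
import xml.etree.ElementTree as ET

# Placeholder "enterprise logging format" -- field names are illustrative only.
logging.basicConfig(format="%(asctime)s %(levelname)s flow=%(name)s %(message)s", level=logging.INFO)
log = logging.getLogger("xml-to-json-publisher")


def xml_to_json(xml_payload: str) -> str:
    """Flatten a simple, one-level XML document into a JSON object."""
    root = ET.fromstring(xml_payload)
    return json.dumps({child.tag: child.text for child in root})


def publish_to_kafka(topic: str, message: str) -> None:
    """Stand-in for a real producer client; replace with your platform's connector."""
    log.info("pretending to publish to %s", topic)


def publish_with_retries(topic: str, message: str, max_attempts: int = 5) -> None:
    """Retry with exponential backoff, per the (hypothetical) enterprise standard."""
    delay = 1.0
    for attempt in range(1, max_attempts + 1):
        try:
            publish_to_kafka(topic, message)
            log.info("published attempt=%d topic=%s", attempt, topic)
            return
        except Exception as exc:  # the standardized error-handling pattern would go here
            log.warning("publish failed attempt=%d error=%s", attempt, exc)
            if attempt == max_attempts:
                raise
            time.sleep(delay)
            delay *= 2  # exponential backoff between attempts
```

Every one of those conventions is something a generic assistant must be told, and re-told, in each prompt.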

Prompt fatigue

The cycle of re-prompting, patching, and enforcing architectural rules consumes time and erodes confidence in outputs.

This is why generic tools rarely achieve the promised acceleration in integration environments.

Benchmarks show vertical agents are about twice as accurate

CurieTech AI recently published comparative benchmarks evaluating its vertical integration agents against leading generic tools, including Claude Code.
The tests covered real-world tasks:

  • generating complete, multi-step integration flows
  • building cross-system data transformations
  • producing platform-aligned retries and error chains
  • implementing enterprise-standard logging
  • converting business requirements into executable integration logic

The results were clear: generic tools performed at roughly half the accuracy of vertical agents.

Generic outputs often looked plausible but contained structural errors or governance violations that would cause failures in QA or production. Vertical agents produced platform-aligned, fully structured workflows on the first pass.

For integration engineering – where errors cascade – this accuracy gap directly impacts delivery predictability and long-term quality.

The vertical agent advantage: Single-shot solutioning

The defining capability of vertical agents is single-shot task execution.

Generic tools force stepwise prompting and correction. But vertical agents – because they understand patterns, sequencing, and governance – can take a requirement like:

"Create an idempotent order-sync flow from NetSuite to SAP S/4HANA with canonical transformations, retries, and enterprise logging."

…and return:

  • the flow
  • transformations
  • error handling
  • retries
  • logging
  • and test scaffolding

in one coherent output.
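To ground what "idempotent" means in that requirement, here is a minimal sketch of the dedup-key check such a flow would typically include. All names (the order fields, the post_to_sap callable, the in-memory key store) are hypothetical illustrations, not CurieTech AI output or a NetSuite/SAP API.

```python
import hashlib

# In production this would be a durable store (a database table or cache), not a set.
_processed_keys = set()


def idempotency_key(netsuite_order: dict) -> str:
    """Derive a stable key from source-system identifiers so replays are detected."""
    raw = f"{netsuite_order['id']}:{netsuite_order['lastModified']}"
    return hashlib.sha256(raw.encode()).hexdigest()


def to_canonical(netsuite_order: dict) -> dict:
    """Map the source payload onto a canonical order model (field names are illustrative)."""
    return {
        "orderNumber": netsuite_order["tranId"],
        "customerId": netsuite_order["entity"],
        "currency": netsuite_order["currency"],
        "lines": [
            {"sku": line["item"], "quantity": line["quantity"]}
            for line in netsuite_order["items"]
        ],
    }


def sync_order(netsuite_order: dict, post_to_sap) -> bool:
    """Skip order versions already synced; retries and logging would follow the earlier sketch."""
    key = idempotency_key(netsuite_order)
    if key in _processed_keys:
        return False  # replayed event: do nothing, so the flow stays idempotent
    post_to_sap(to_canonical(netsuite_order))
    _processed_keys.add(key)
    return True
```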

This shift – from instruction-oriented prompting to goal-oriented prompting – removes context latency and prompt fatigue while drastically reducing the need for developer oversight.

Built-in governance: The most underrated benefit

Integrations live and die by adherence to standards. Vertical agents embed those standards directly into generation:

  • naming and folder conventions
  • canonical data models
  • PII masking and sensitive-data controls
  • logging fields and formats
  • retry and exception handling patterns
  • platform-specific best practices

Generic models cannot consistently maintain these rules across prompts or projects.

Vertical agents enforce them automatically, which leads to higher-quality integrations with far fewer QA defects and production issues.
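As one illustration of how such a standard can be enforced in generated code rather than re-stated in every prompt, here is a minimal PII-masking sketch. The regular expressions are deliberately simplistic placeholders; a real control would rely on vetted classifiers and the organization's data-handling policy.

```python
import re

# Simplistic patterns for illustration only; real controls use vetted PII classifiers.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


def mask_pii(text: str) -> str:
    """Redact obvious PII before a payload reaches logs or downstream systems."""
    text = EMAIL.sub("[EMAIL REDACTED]", text)
    return SSN.sub("[SSN REDACTED]", text)


# mask_pii("Contact jane.doe@example.com, SSN 123-45-6789")
# -> "Contact [EMAIL REDACTED], SSN [SSN REDACTED]"
```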

The real ROI: Quality, consistency, predictability

Organizations adopting vertical agents report three consistent benefits:

1. Higher-Quality Integrations

Outputs follow correct patterns and platform rules, reducing defects and architectural drift.

2. Greater Consistency Across Teams

Standardized logic and structures eliminate developer-to-developer variability.

3. More Predictable Delivery Timelines

Less rework means smoother pipelines and faster delivery.

An enterprise that recently adopted CurieTech AI summarized the impact succinctly:

"For MuleSoft users, generic AI tools won't cut it. But with domain-specific agents, the ROI is clear. Just start."

For CIOs, these outcomes translate to increased throughput and higher trust in integration delivery.

Preparing for the agentic future

The industry is already moving beyond single responses toward agentic orchestration, where AI systems coordinate requirements gathering, design, mapping, development, testing, documentation, and deployment.

Vertical agents – because they understand multi-step integration workflows – are uniquely suited to lead this transition.

Generic coding agents lack the domain grounding to maintain coherence across these interconnected phases.

The bottom line

Generic coding assistants provide breadth, but vertical AI development agents deliver the depth, structure, and governance enterprise integrations require.

Vertical agents elevate both EAI and iPaaS programs by offering:

  • significantly higher accuracy
  • higher-quality, production-ready outputs
  • built-in governance and compliance
  • consistent logic and transformations
  • predictable delivery cycles

As integration workloads expand and become more central to digital transformation, organizations that adopt vertical AI agents early will deliver faster, with higher accuracy, and with far greater confidence.

In enterprise integrations, specialization isn't optional – it is the foundation of the next decade of reliability and scale.

Learn more about CurieTech AI here.

IT leaders turn to third-party providers to manage tech debt

As tech debt threatens to cripple many IT organizations, a huge number of CIOs have turned to third-party service providers to maintain or upgrade legacy software and systems, according to a new survey.

A full 95% of IT leaders are now using outside service providers to modernize legacy IT and reduce tech debt, according to a survey by MSP Ensono.

The push is in part due to the cost of legacy IT, with nearly half of those surveyed saying they paid more in the past year to maintain older IT systems than they had budgeted. More importantly, dealing with legacy applications and infrastructure is holding IT organizations back, as nearly nine in 10 IT leaders say legacy maintenance has hampered their AI modernization plans.

"Maintaining legacy systems is really slowing down modernization efforts," says Tim Beerman, Ensono's CTO. "It's the typical innovator's dilemma – they're focusing on outdated systems and how to address them."

In some cases, CIOs have turned to service providers to manage legacy systems, but in other cases, they have looked to outside IT teams to retire tech debt and modernize software and systems, Beerman says. One reason they're turning to outside service providers is an aging employee base, with internal experts in legacy systems retiring and taking their knowledge with them, he adds.

"Not very many people are able to do it themselves," Beerman says. "You have maturing workforces and people moving out of the workforce, and you need to go find expertise in areas where you can't hire that talent."

While the MSP model has been around for decades, the move to using it to manage tech debt appears to be a growing trend as organizations look to free up budget and find time to deploy AI, he adds.

"If you look at the advent of a lot of new technology, especially AI, that's moving much faster, and clients are looking for help," Beerman says. "On one side, you have this legacy problem that they need to manage and maintain, and then you have technology moving at a pace that it hasn't moved in years."

Outsourcing risk

Ryan Leirvik, CEO at cybersecurity services firm Neuvik, also sees a trend toward using service providers to manage legacy IT. He sees several advantages, including matching the right experts to legacy systems, but CIOs may also use MSPs to manage their risk, he says.

"Of the many advantages, one primary advantage often not mentioned is shifting the exploitation or service interruption risk to the vendor," he adds. "In an environment where vulnerability discovery, patching, and overall maintenance is an ongoing and expensive effort, the risk of getting it wrong typically sits with the vendor in charge."

The number of IT leaders in the survey who overspent their legacy IT maintenance budgets also doesn't surprise Leirvik, a former chief of staff and associate director of cyber at the US Department of Defense.

Many organizations have a talent mismatch between the IT infrastructure they have and the one they need to move to, he says. In addition, the ongoing maintenance of legacy software and systems often costs more than anticipated, he adds.

"There's this huge maintenance tail that we weren't expecting because the initial price point was one cost and the maintenance is 1X," Leirvik says.

To get out of the legacy maintenance trap, IT leaders need foresight and discipline to choose the right third-party provider, he adds. "Take the long-term view – make sure the five-year plan lines up with this particular vendor," he says. "Do your goals as an organization match up with where they're going to help you out?"

Paying twice

While some IT leaders have turned to third-party vendors to update legacy systems, a recently released report from ITSM and customer-service software vendor Freshworks raises questions about the efficiency of modernization efforts.

More than three-quarters of those surveyed by Freshworks say software implementations take longer than expected, with two-thirds of those projects exceeding expected budgets.

Third-party providers may not solve the problems, says Ashwin Ballal, Freshworks' CIO.

"Legacy systems have become so complex that companies are increasingly turning to third-party vendors and consultants for help, but the problem is that, more often than not, organizations are trading one subpar legacy system for another," he says. "Adding vendors and consultants often compounds the problem, bringing in new layers of complexity rather than resolving the old ones."

The solution isn't adding more vendors, but new technology that works out of the box, Ballal adds.

"In theory, third-party providers bring expertise and speed," he says. "In practice, organizations often find themselves paying for things twice – once for complex technology, and then again for consultants to make it work."

Third-party vendors unavoidable

Other IT leaders see some third-party support as nearly inevitable. Whether it's updating old code, moving workloads to the cloud, adopting SaaS tools, or improving cybersecurity, most organizations now need outside assistance, says Adam Winston, field CTO and CISO at cybersecurity vendor WatchGuard Technologies.

A buildup of legacy systems, including outdated remote-access tools and VPNs, can crush organizations with tech debt, he adds. Many organizations haven't yet fully modernized to the cloud or to SaaS tools, and they will turn to outside providers when the time comes, he says.

"Most companies don't build and design and manage their own apps, and that's where all that tech debt basically is sitting, and they are in some hybrid IT design," he says. "They may be still sitting in an era dating back to co-location and on-premise, and that almost always includes legacy servers, legacy networks, legacy systems that aren't really following a modern design or architecture."

Winston advises IT leaders to create plans to retire outdated technology and to negotiate service contracts that lean on vendors to keep IT purchases as up to date as possible. Too many vendors are quick to drop support for older products when new ones come out, he suggests.

"If you're not going to upgrade, do the math on that legacy support and say, 'If we can't upgrade that, how are we going to isolate it?'" he says. "'What is our graveyard segmentation strategy to move the risk in the event that this can't be upgraded?' The vendor due diligence leaves a lot of this stuff on the table, and then people seem to get surprised."

CIOs should avoid specializing in legacy IT, he adds. "If you can't amortize the cost of the software or the build, promise yourself that every new application that's coming into the system is going to use the latest component," Winston says.

Agentic AIโ€™s rise is making the enterprise architect role more fluid

In a previous feature about enterprise architects, gen AI had emerged, but its impact on enterprise technology hadn't been felt. Today, gen AI has spawned a plethora of agentic AI solutions from the major SaaS providers, and enterprise architecture and the role of the enterprise architect are being redrawn. So what do CIOs and their architects need to know?

Organizations, especially their CEOs, have been vocal about the need for AI to improve productivity and bring back growth, and analysts have backed the trend. Gartner, for example, forecasts that 75% of IT work will be completed by human employees using AI over the next five years, which will demand, it says, a proactive approach to identifying new value-creating IT work, like expanding into new markets, creating additional products and services, or adding features that boost margins.

If this radical change in productivity takes place, organizations will need a new plan for business processes and the tech that operates those processes. Recent history shows if organizations don't adopt new operating models, the benefits of tech investments can't be achieved.

As a result of agentic AI, processes will change, as well as the software used by the enterprise, and the development and implementation of the technology. Enterprise architects, therefore, are at the forefront of planning and changing the way software is developed, customized, and implemented.

In some quarters of the tech industry, gen AI is seen as a radical change to enterprise software, and to its large, well-known vendors. "To say AI unleashed will destroy the software industry is absurd, as it would require an AI perfection that even the most optimistic couldn't agree to," says Diego Lo Giudice, principal analyst at Forrester. Speaking at the One Conference in the fall, Lo Giudice reminded 4,000 business technology leaders that change is taking place, but it's built on the foundations of recent successes.

"Agile has given better alignment, and DevOps has torn down the wall between developers and operations," he said. "They're all trying to do the same thing, reduce the gap between an idea and implementation." He's not denying AI will change the development of enterprise software, but like Agile and DevOps, AI will improve the lifecycle of software development and, therefore, the enterprise architecture. The difference is the speed of change. "In the history of development, there's never been anything like this," adds Phil Whittaker, AI staff engineer at content management software provider Umbraco.

Complexity and process change

As the software development and customization cycle changes, and agentic applications become commonplace, enterprise architects will need to plan for increased complexity and new business processes. Existing business processes can't continue if agentic AI is taking on tasks currently done manually by staff.

Again, Lo Giudice adds some levity to a debate that can often become heated, especially in the wake of major redundancies by AI leaders such as AWS. "The view that everyone will get a bot that helps them do their job is naïve," he said at the One Conference. "Organizations will need to carry out a thorough analysis of roles and business processes to ensure they spend money and resources on deploying the right agents to the right tasks. Failure to do so will lead to agentic technology being deployed that's not needed, can't cope with complex tasks, and increases the cloud costs of the business."

"It's easy to build an agent that has access to really important information," says Tiago Azevedo, CIO for AI-powered low-code platform provider OutSystems. "You need segregation of data. When you publish an agent, you need to be able to control it, and there'll be many agents, so costs will grow."

The big difference, though, is between deterministic and non-deterministic behavior, says Whittaker. Non-deterministic components, whose outputs can vary from run to run, need guardrails built from deterministic agents that produce the same output every time. Defining which business outcomes should be handled deterministically and which non-deterministically is a clear role for enterprise architecture. He adds that this is where AI can help organizations fill in gaps. Whittaker, who's been an enterprise architect, says it'll be vital for organizations to experiment with AI to see how it can benefit their architecture and, ultimately, business outcomes.
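A minimal sketch of that guardrail idea, under the assumption that an agent's free-form output must satisfy a fixed contract before anything downstream acts on it; the required fields and allowed currencies are invented for illustration.

```python
import json

REQUIRED_FIELDS = {"customer_id", "amount", "currency"}  # assumed output contract
ALLOWED_CURRENCIES = {"USD", "EUR", "GBP"}                # assumed business rule


def deterministic_guardrail(agent_output: str) -> dict:
    """Same input, same verdict: parse and validate before the output reaches a process."""
    payload = json.loads(agent_output)  # fails loudly on malformed output
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        raise ValueError(f"agent output missing fields: {sorted(missing)}")
    if payload["currency"] not in ALLOWED_CURRENCIES:
        raise ValueError("unsupported currency")
    return payload

# The agent may phrase its answer differently on every run, but only outputs that
# pass this deterministic check are allowed to trigger the business process.
```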

"The path to greatness lies not in chasing hype or dismissing AI's potential, but in finding the golden middle ground where value is truly captured," write Gartner analysts Daryl Plummer and Alicia Mullery. "AI's promise is undeniable, but realizing its full value is far from guaranteed. Our research reveals the sobering odds that only one in five AI initiatives achieve ROI, and just one in 50 deliver true transformation." Further research also finds just 32% of employees trust the organization's leadership to drive transformation. "Agents bring an additional component of complexity to architecture that makes the role so relevant," Azevedo adds.

In the past, enterprise architects were focused on frameworks. Whittaker points out that new technology models will need to be understood and deployed by architects to manage an enterprise that comprises employees, applications, databases, and agentic AI. He cites MCP (Model Context Protocol) as one example, as it provides a standard way to connect AI models to data sources and simplifies the current tangle of bespoke integrations and RAG implementations. AI will also help architects with this new complexity. "There are tools for planning, requirements, creating epics, user stories, code generation, documenting code, and translating it," adds Lo Giudice.

New responsibilities

Agentic AI is now a core feature of every major EA tool, says Stéphane Vanrechem, senior analyst at Forrester. "These agents automate data validation, capability mapping, and artifact creation, freeing architects to focus on strategy and transformation." He cites the technology of Celonis, SAP Signavio, and ServiceNow for their agentic integrations. Whittaker adds that the enterprise architect has become an important human in the loop to protect the organization and be responsible for the decisions and outcomes that agentic AI delivers.

Although some enterprise architects will see this as a collapse of their specialization, Whittaker thinks it broadens the scope of the role and makes them more T-shaped. "I can go deep in different areas," he says. "Pigeon-holing people is never a great thing to do."

Traditionally, architecture has suggested that something is planned, built, and then exists. The rise of agentic AI in the enterprise means the role of the enterprise architect is becoming more fluid as they continue to design and oversee construction. But the role will also involve continual monitoring and adjustment to the plan. Some call this orchestration, or perhaps it's akin to map reading. An enterprise architect may plan a route, but other factors will alter the course. And just like weather or a fallen tree, which can lead to a route deviation, so too will enterprise architects plan and then lead when business conditions change.

Again, this new way of being an enterprise architect will be impacted by technology. Lo Giudice believes there'll be increased automation, and Azevedo sides with the orchestration view, saying agents are built and a catalogue of them is created across the organization, which is an opportunity for enterprise architects and CIOs to be orchestrators.

Whatever the job title, Whittaker says enterprise architecture is more important than ever. "More people will become enterprise architects as more software is written by AI," he says. "Then it's an architectural role to coordinate and conduct the agents in front of you." He argues that as technologists allow agents and AI to do the development work for them, the responsibility of architecting how agents and processes function broadens and becomes the responsibility of many more technologists.

"AI can create code for you, but it's your responsibility to make sure it's secure," he adds. Rather than developing the code, technology teams will become architecture teams, checking and accepting the technology that AI has developed, and then managing its deployment into the business processes.

With shadow AI already embedded in organizations, Whittaker's view shows the need for a team of enterprise architects that can help the business align with the AI agents they've deployed, and at the same time protect customer data and cybersecurity posture.

AI agents are redrawing the enterprise, and at the same time replanning the role of enterprise architects.

From cloud-native to AI-native: Why your infrastructure must be rebuilt for intelligence

The cloud-native ceiling

For the past decade, the cloud-native paradigm – defined by containers, microservices and DevOps agility – served as the undisputed architecture of speed. As CIOs, you successfully used it to decouple monoliths, accelerate release cycles and scale applications on demand.

But today, we face a new inflection point. The major cloud providers are no longer just offering compute and storage; they are transforming their platforms to be AI-native, embedding intelligence directly into the core infrastructure and services. This is not just a feature upgrade; it is a fundamental shift that determines who wins the next decade of digital competition. If you continue to treat AI as a mere application add-on, your foundation will become an impediment. The strategic imperative for every CIO is to recognize AI as the new foundational layer of the modern cloud stack.

This transition from an agility-focused cloud-native approach to an intelligence-focused AI-native one requires a complete architectural and organizational rebuild. It is the CIO's journey to the new digital transformation in the AI era. According to McKinsey's "The state of AI in 2025: Agents, innovation and transformation," while 80 percent of respondents set efficiency as an objective of their AI initiatives, the leaders of the AI era are those who view intelligence as a growth engine, often setting innovation and market expansion as additional, higher-value objectives.

The new architecture: Intelligence by design

The AI lifecycle – data ingestion, model training, inference and MLOps – imposes demands that conventional, CPU-centric cloud-native stacks simply cannot meet efficiently. Rebuilding your infrastructure for intelligence focuses on three non-negotiable architectural pillars:

1. GPU-optimization: The engine of modern compute

The single most significant architectural difference is the shift in compute gravity from the CPU to the GPU. AI models, particularly large language models (LLMs), rely on massive parallel processing for training and inference. GPUs, with their thousands of cores, are the only cost-effective way to handle this.

  • Prioritize acceleration: Establish a strategic layer to accelerate AI vector search and handle data-intensive operations. This ensures that every dollar spent on high-cost hardware is maximized, rather than wasted on idle or underutilized compute cycles.
  • A containerized fabric: Since GPU resources are expensive and scarce, they must be managed with surgical precision. This is where the Kubernetes ecosystem becomes indispensable, orchestrating not just containers, but high-cost specialized hardware (see the scheduling sketch after this list).
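As a sketch of what that scheduling looks like in practice, the snippet below uses the Kubernetes Python client to request a single GPU for a training pod. It assumes the NVIDIA device plugin exposes the nvidia.com/gpu resource on the cluster; the image, namespace, and labels are placeholders, and fractional sharing (time-slicing or MIG) would be configured at the cluster level rather than in this manifest.

```python
from kubernetes import client, config


def submit_training_pod(image: str, namespace: str = "ml-training") -> None:
    """Ask the scheduler for one whole GPU; the pod stays Pending until one is free."""
    config.load_kube_config()  # use config.load_incluster_config() when running in-cluster
    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="train-job", labels={"team": "ml"}),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="trainer",
                    image=image,  # placeholder training image
                    resources=client.V1ResourceRequirements(
                        # Extended resource exposed by the NVIDIA device plugin.
                        limits={"nvidia.com/gpu": "1"},
                    ),
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace=namespace, body=pod)
```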

2. Vector databases: The new data layer

Traditional relational databases are not built to understand the semantic meaning of unstructured data (text, images, audio). The rise of generative AI and retrieval augmented generation (RAG) demands a new data architecture built on vector databases.

  • Vector embeddings – the mathematical representations of data – are the core language of AI. Vector databases store and index these embeddings, allowing your AI applications to perform instant, semantic lookups. This capability is critical for enterprise-grade LLM applications, as it provides the model with up-to-date, relevant and factual company data, drastically reducing "hallucinations."
  • This is the critical element that vector databases provide – a specialized way to store and query vector embeddings, bridging the gap between your proprietary knowledge and the generalized power of a foundation model (a minimal lookup sketch follows this list).
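The lookup itself is conceptually simple. The sketch below shows brute-force cosine similarity over stored embeddings with NumPy; a vector database does the same thing at scale with approximate-nearest-neighbor indexes, and the embed() function is a stand-in for whatever embedding model you use.

```python
import numpy as np


def embed(text: str) -> np.ndarray:
    """Stand-in for an embedding model call; returns a fixed-size vector."""
    raise NotImplementedError("call your embedding model here")


def top_k(query: str, corpus_vectors: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k stored embeddings most semantically similar to the query."""
    q = embed(query)
    q = q / np.linalg.norm(q)
    normed = corpus_vectors / np.linalg.norm(corpus_vectors, axis=1, keepdims=True)
    scores = normed @ q                  # cosine similarity against every stored embedding
    return np.argsort(scores)[::-1][:k]

# In a RAG pipeline, the text chunks behind those top-k indices are appended to the
# prompt so the foundation model answers from your data instead of guessing.
```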

3. The orchestration layer: Accelerating MLOps with Kubernetes

Cloud-native made DevOps possible; AI-native requires MLOps (machine learning operations). MLOps is the discipline of managing the entire AI lifecycle, which is exponentially more complex than traditional software due to the moving parts: data, models, code and infrastructure.

Kubernetes (K8s) has become the de facto standard for this transition. Its core capabilities – dynamic resource allocation, auto-scaling and container orchestration – are perfectly suited for the volatile and resource-hungry nature of AI workloads.

By leveraging Kubernetes for running AI/ML workloads, you achieve:

  • Efficient GPU orchestration: K8s ensures that expensive GPU resources are dynamically allocated based on demand, enabling fractional GPU usage (time-slicing or MIG) and multi-tenancy. This eliminates long wait times for data scientists and prevents costly hardware underutilization.
  • MLOps automation: K8s and its ecosystem (like Kubeflow) automate model training, testing, deployment and monitoring. This enables a continuous delivery pipeline for models, ensuring that as your data changes, your models are retrained and deployed without manual intervention (a minimal retraining-trigger sketch follows this list). This MLOps layer is the engine of vertical integration, ensuring that the underlying GPU-optimized infrastructure is seamlessly exposed and consumed as high-level PaaS and SaaS AI services. This tight coupling ensures maximum utilization of expensive hardware while embedding intelligence directly into your business applications, from data ingestion to final user-facing features.
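To make the "retrained and deployed without manual intervention" step concrete, here is a minimal, platform-agnostic sketch of the decision an MLOps pipeline automates. The drift threshold and the trigger_pipeline callable are assumptions; in practice this step would launch a Kubeflow or Argo pipeline run, or a job like the GPU pod sketched earlier.

```python
import datetime

DRIFT_THRESHOLD = 0.15  # placeholder; tuned per model and per drift metric in practice


def check_and_retrain(drift_score: float, trigger_pipeline) -> str:
    """Monitor, then retrain and redeploy when the model drifts past the threshold."""
    if drift_score <= DRIFT_THRESHOLD:
        return "model healthy, no action"
    started_at = datetime.datetime.now(datetime.timezone.utc).isoformat()
    run_id = trigger_pipeline(name="retrain-order-model", started_at=started_at)
    return f"drift {drift_score:.2f} exceeded threshold, retraining run {run_id} started"
```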

Competitive advantage: IT as the AI driver

The payoff for prioritizing this infrastructure transition is significant: a decisive competitive advantage. When your platform is AI-native, your IT organization shifts from a cost center focused on maintenance to a strategic business driver.

Key takeaways for your roadmap:

  1. Velocity: By automating MLOps on a GPU-optimized, Kubernetes-driven platform, you accelerate the time-to-value for every AI idea, allowing teams to iterate on models in weeks, not quarters.
  2. Performance: Infrastructure investments in vector databases and dedicated AI accelerators ensure your models are always running with optimal performance and cost-efficiency.
  3. Strategic alignment: By building the foundational layer, you are empowering the business, not limiting it. You are executing the vision outlined in "A CIO's guide to leveraging AI in cloud-native applications," positioning IT to be the primary enabler of the company's AI vision, rather than an impediment.

Conclusion: The future is built on intelligence

The move from cloud-native to AI-native is not an option; it is a market-driven necessity. The architecture of the future is defined by GPU-optimization, vector databases and Kubernetes-orchestrated MLOps.

As CIO, your mandate is clear: lead the organizational and architectural charge to install this intelligent foundation. By doing so, you move beyond merely supporting applications to actively governing intelligence that spans and connects the entire enterprise stack. This intelligent foundation requires a modern, integrated approach. AI observability must provide end-to-end lineage and automated detection of model drift, bias and security risks, enabling AI governance to enforce ethical policies and maintain regulatory compliance across the entire intelligent stack. By making the right infrastructure investments now, you ensure your enterprise has the scalable, resilient and intelligent backbone required to truly harness the transformative power of AI. Your new role is to be the Chief Orchestration Officer, governing the engine of future growth.

This article is published as part of the Foundry Expert Contributor Network.