โŒ

Reading view

There are new articles available, click to refresh the page.

From static workflows to intelligent automation: Architecting the self-driving enterprise

I want you to think about the most fragile employee in your organization. They don't take coffee breaks, they work 24/7, and they cost a fortune to recruit. But if a button on a website moves a few pixels to the right, this employee has a complete mental breakdown and stops working entirely.

I am talking, of course, about your RPA (robotic process automation) bots.

For the last few years, I have observed IT leaders, CIOs and business leaders pour millions into what we call automation. We've hired armies of consultants to draw architecture diagrams and map out every possible scenario. We've built rigid digital train tracks, convinced that if we just laid enough rail, efficiency would follow.

But we didn't build resilience. We built fragility.

As an AI solution architect, I see the cracks in this foundation every day. The strategy for 2026 isn't just about adopting AI; it is about attacking the fragility of traditional automation. The era of deterministic, rule-based systems is ending. We are witnessing the death of determinism and the rise of probabilistic systems: what I call the shift from static workflows to intelligent automation.

The fragility tax of old automation

There is a painful truth we need to acknowledge: Your current bot portfolio is likely a liability.

In my experience and architectural practice, I frequently encounter what I call the fragility tax. This is the hidden cost of maintaining deterministic bots in a dynamic world. The industry rule of thumb, and one that I see validated in budget sheets constantly, is that for every $1 you spend on BPA (business process automation) licenses, you end up spending $3 on maintenance.

Why? Because traditional BPA is blind. It doesn't understand the screen it is looking at; it only understands coordinates (x, y). It doesn't understand the email it is reading; it only scrapes for keywords. When the user interface updates or the vendor changes an invoice format, the bot crashes.

I recall a disaster with an enterprise client who had an automated customer engagement process. It was a flagship project. It worked perfectly until the third-party system provider updated their solution. The submit button changed from green to blue. The bot, which was hardcoded to look for green pixels at specific coordinates, failed silently.

But fragility isn't just about pixel colors. It is about the fragility of trust in external platforms.

We often assume fragility only applies to bad code, but it also applies to our dependencies. Even the vanguard of the industry isn't immune. In September 2024, OpenAI's official newsroom account on X (formerly Twitter) was hijacked by scammers promoting a crypto token.

Think about the irony: The company building the most sophisticated intelligence in human history was momentarily compromised not by a failure of their neural networks, but by the fragility of a third-party platform. This is the fragility tax in action. When you build your enterprise on deterministic connections to external platforms you don't control, you inherit their vulnerabilities. If you had a standard bot programmed to retweet @OpenAINewsroom, you would have automatically amplified a scam to your entire customer base.

The old way of scripting cannot handle this volatility. We spent years trying to predict the future and hard-code it into scripts. But the world is too chaotic for scripts. We need architecture that can heal itself.

The architectural pivot: From rules to goals

To capture the value of intelligent automation (IA), you must frame it as an architectural paradigm shift, not just a software upgrade. We are moving from task automation (mimicking hands) to decision automation (mimicking brains).

When I architect these systems, I look not only for rules but also for goals.

In the old paradigm, we gave the computer a script: Click button A, then type text B, then wait 5 seconds. In the new paradigm, we use cognitive orchestrators. We give the AI a goal and let it work out the steps.

The difference is profound. If the submit button turns blue, a goal-based system using a large language model (LLM) and vision capabilities sees the button. It understands that despite the color change, it is still the submission mechanism. It adjusts its own path to achieving the goal.

Think of it like the difference between a train and an off-road vehicle. A train is fast and efficient, but it requires expensive infrastructure (tracks) and cannot steer around a rock on the line. Intelligent automation is the off-road vehicle. It uses sensors to perceive the environment. If it sees a rock, it doesn't derail; it decides to go around it.

This isn't magic; it's a specific architectural pattern. The tech stack required to support this is fundamentally different from what most CIOs currently have installed. It is no longer just a workflow engine. The new stack requires three distinct components working in concert:

  1. The workflow engine: The hands that execute actions.
  2. The reasoning layer (LLM): The brain that figures out the steps dynamically and handles the logic.
  3. The vector database: The memory that stores context, past experiences and embedded data to reduce hallucinations.

By combining these, we move from brittle scripts to resilient agents.
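
To make the pattern concrete, here is a minimal Python sketch of the three components working together. Everything in it is illustrative: the `reasoning_layer` function stands in for an LLM call, the `VectorMemory` class stands in for a real vector database, and the action and goal names are invented for the example.

```python
import numpy as np

# --- Memory: stand-in for a vector database that stores past context ---
class VectorMemory:
    def __init__(self, embed_fn):
        self.embed = embed_fn
        self.items = []  # list of (vector, text)

    def add(self, text):
        self.items.append((self.embed(text), text))

    def recall(self, query, k=2):
        q = self.embed(query)
        ranked = sorted(self.items, key=lambda it: -float(np.dot(q, it[0])))
        return [text for _, text in ranked[:k]]

# --- Hands: the workflow engine executes concrete actions ---
ACTIONS = {
    "open_form":  lambda arg: f"opened form {arg}",
    "fill_field": lambda arg: f"filled field {arg}",
    "submit":     lambda arg: "form submitted",
}

# --- Brain: the reasoning layer decides the next step from goal + context ---
def reasoning_layer(goal, context, history):
    """Placeholder for an LLM call: returns (action, argument, done)."""
    if not history:
        return "open_form", "invoice-entry", False
    if len(history) == 1:
        return "fill_field", "amount", False
    return "submit", None, True

def run_agent(goal, memory):
    history = []
    while True:
        context = memory.recall(goal)              # pull relevant past experience
        action, arg, done = reasoning_layer(goal, context, history)
        result = ACTIONS[action](arg)              # hands execute the step
        history.append((action, result))
        memory.add(f"{action}: {result}")          # remember what happened
        if done:
            return history

dummy_embed = lambda text: np.ones(4) * (len(text) % 7)  # toy embedding for the sketch
mem = VectorMemory(dummy_embed)
mem.add("last week the submit control was blue, not green")
print(run_agent("enter and submit the vendor invoice", mem))
```

The point of the sketch is the division of labor, not the toy logic: the engine only executes, the reasoning layer only decides, and the memory only supplies context.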

Breaking the unstructured data barrier

The most significant limitation of the old way was its inability to handle unstructured data. We know that roughly 80% of enterprise data is unstructured, locked away in PDFs, email threads, Slack and MS Teams chats, and call logs. Traditional business process automation cannot touch this. It requires structured inputs: rows and columns.

This is where the multi-modal understanding of intelligent automation changes the architecture.

I urge you to adopt a new mantra: Data entry is dead. Data understanding is the new standard.

I am currently designing architectures where the system doesn't just move a PDF from folder A to folder B. It reads the PDF. It understands the sentiment of the email attached to it. It extracts the intent from the call log referenced in the footer.

Consider a complex claims-processing scenario. In the past, a human had to manually review a handwritten accident report, cross-reference it with a policy PDF and check a photo of the damage. A deterministic bot is useless here because the inputs are never the same twice.

Intelligent automation changes the equation. It can ingest the handwritten note (using OCR), analyze the photo (using computer vision) and read the policy (using an LLM). It synthesizes these disparate, messy inputs into a structured claim object. It turns chaos into order.

This is the difference between digitization (making it electronic) and digitalization (making it intelligent).
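
As a rough illustration of that synthesis step, the sketch below assembles a structured claim object from three messy inputs. The `ocr`, `describe_image` and `extract_policy_terms` helpers are placeholders that return canned output so the example runs; in a real system they would be backed by an OCR engine, a vision model and an LLM.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    policy_id: str
    incident_summary: str
    damage_description: str
    within_coverage: bool

# Placeholder extractors: real implementations would call OCR, vision and LLM services.
def ocr(handwritten_report: bytes) -> str:
    return "Rear-ended at low speed on 12 May, bumper damaged."

def describe_image(photo: bytes) -> str:
    return "Dented rear bumper, no structural damage visible."

def extract_policy_terms(policy_pdf: bytes) -> dict:
    return {"policy_id": "P-88231", "covers_collision": True}

def build_claim(report: bytes, photo: bytes, policy: bytes) -> Claim:
    """Synthesize disparate, messy inputs into one structured claim object."""
    terms = extract_policy_terms(policy)
    return Claim(
        policy_id=terms["policy_id"],
        incident_summary=ocr(report),
        damage_description=describe_image(photo),
        within_coverage=terms["covers_collision"],
    )

print(build_claim(b"...", b"...", b"..."))
```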

Human-in-the-loop as a governance pattern

Whenever we present this self-driving enterprise concept to clients, the immediate reaction is "You want an LLM to talk to our customers?" This is a valid fear. But the answer isn't to ban AI; it is to architect confidence-based routing.

We don't hand over the keys blindly. We build governance directly into the code. In this pattern, the AI assesses its own confidence level before acting.

This brings us back to the importance of verification. Why do we need humans in the loop? Because trusted endpoints don't always stay trusted.

Revisiting the security incident I mentioned earlier: If you had a fully autonomous agent loop that automatically acted upon every post from a verified partner account, your enterprise would be at risk. A deterministic bot says: Signal comes from a trusted source → execute.

A probabilistic, governed agent says: Signal comes from a trusted source, but the content deviates 99% from their semantic norm (crypto scam vs. tech news). The confidence score is low. Alert human.

That is the architectural shift we need.

  • Scenario A: The AI is 99% confident it understands the invoice, the vendor matches the master record and the semantics align with past behavior. The system auto-executes.
  • Scenario B: The AI is only 70% confident because the address is slightly different, the image is blurry or the request seems out of character (like the hacked tweet example). The system routes this specific case to a human for approval.
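
A minimal sketch of confidence-based routing is shown below. The scoring function and thresholds are invented for illustration; a production system would derive the score from model log-probabilities, master-data matches and semantic-similarity checks against past behavior.

```python
AUTO_EXECUTE_THRESHOLD = 0.95

def confidence_score(doc) -> float:
    """Toy confidence model combining a few illustrative signals."""
    score = 1.0
    if not doc["vendor_matches_master_record"]:
        score -= 0.30
    if doc["semantic_deviation"] > 0.5:   # e.g. crypto scam vs. the usual tech news
        score -= 0.40
    if doc["image_quality"] < 0.6:
        score -= 0.10
    return max(score, 0.0)

def route(doc) -> str:
    score = confidence_score(doc)
    if score >= AUTO_EXECUTE_THRESHOLD:
        return f"auto-executed (confidence {score:.2f})"
    return f"routed to human review (confidence {score:.2f})"

invoice = {"vendor_matches_master_record": True, "semantic_deviation": 0.1, "image_quality": 0.9}
odd_request = {"vendor_matches_master_record": True, "semantic_deviation": 0.99, "image_quality": 0.9}

print(route(invoice))       # Scenario A: high confidence, auto-execute
print(route(odd_request))   # Scenario B: out-of-character content, escalate to a human
```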

This turns automation into a partnership. The AI handles the mundane, high-volume work and your humans handle the edge cases. It solves the black box problem that keeps compliance officers awake at night.

Kill the zombie bots

If you want to prepare your organization for this shift, you don't need to buy more software tomorrow. You need to start with an audit.

Look at your current automation portfolio. Identify the zombie bots, which are the scripts that are technically alive but require constant intervention to keep moving. These bots fail whenever vendors update their software. These are the bots that are costing you more in fragility tax than they save in labor.

Stop trying to patch them. These are the prime candidates for intelligent automation.

The future belongs to the probabilistic. It belongs to architectures that can reason through ambiguity, handle unstructured chaos and self-correct when the world changes. As leaders, we need to stop building trains and start building off-road vehicles.

The technology is ready. The question is, are you ready to let go of the steering wheel?

Disclaimer: This and any related publications are provided in the author's personal capacity and do not represent the views, positions or opinions of the author's employer or any affiliated organization.

This article is published as part of the Foundry Expert Contributor Network.

What's in, and what's out: Data management in 2026 has a new attitude

The data landscape is shifting faster than most organizations can track. The pace of change is driven by two forces that are finally colliding productively: enterprise data management practices that are maturing and AI platforms that are demanding more coherence, consistency and trust in the data they consume.

As a result, 2026 is shaping up to be the year when companies stop tinkering on the edges and start transforming the core. What is emerging is a clear sense of what is in and what is out for data management, and it reflects a market that is tired of fragmented tooling, manual oversight and dashboards that fail to deliver real intelligence.

So, here's a list of what's "In" and what's "Out" for data management in 2026:

IN: Native governance that automates the work but still relies on human process

Data governance is no longer a bolt-on exercise. Platforms like Unity Catalog, Snowflake Horizon and AWS Glue Catalog are building governance into the foundation itself. This shift is driven by the realization that external governance layers add friction and rarely deliver reliable end-to-end coverage. The new pattern is native automation. Data quality checks, anomaly alerts and usage monitoring run continuously in the background. They identify what is happening across the environment with speed that humans cannot match.
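
As a toy illustration of one such background check, the sketch below flags a daily table load whose row count deviates sharply from its recent history. Governance-native platforms run far richer checks than this, but the shape of the logic is similar: monitor continuously, compare against history, alert on deviation.

```python
import statistics

def detect_anomalies(daily_row_counts, window=7, threshold=3.0):
    """Flag days whose row count deviates sharply from the trailing window."""
    alerts = []
    for i in range(window, len(daily_row_counts)):
        history = daily_row_counts[i - window:i]
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1.0   # avoid division by zero
        if abs(daily_row_counts[i] - mean) > threshold * stdev:
            alerts.append((i, daily_row_counts[i], round(mean, 1)))
    return alerts

counts = [1010, 995, 1003, 990, 1008, 1001, 997, 1005, 412, 999]  # day 8 looks broken
for day, seen, expected in detect_anomalies(counts):
    print(f"day {day}: saw {seen} rows, expected about {expected} -> alert the data owner")
```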

Yet this automation does not replace human judgment. The tools diagnose issues, but people still decide how severity is defined, which SLAs matter and how escalation paths work. The industry is settling into a balanced model. Tools handle detection. Humans handle meaning and accountability. It is a refreshing rejection of the idea that governance will someday be fully automated. Instead, organizations are taking advantage of native technology while reinforcing the value of human decision-making.

IN: Platform consolidation and the rise of the post-warehouse lakehouse

The era of cobbling together a dozen specialized data tools is ending. Complexity has caught up with the decentralized mindset. Teams have spent years stitching together ingestion systems, pipelines, catalogs, governance layers, warehouse engines and dashboard tools. The result has been fragile stacks that are expensive to maintain and surprisingly hard to govern.

Databricks, Snowflake and Microsoft see an opportunity and are extending their platforms into unified environments. The Lakehouse has emerged as the architectural north star. It gives organizations a single platform for structured and unstructured data, analytics, machine learning and AI training. Companies no longer want to move data between silos or juggle incompatible systems. What they need is a central operating environment that reduces friction, simplifies security and accelerates AI development. Consolidation is no longer about vendor lock-in. It is about survival in a world where data volumes are exploding and AI demands more consistency than ever.

IN: End-to-end pipeline management with zero ETL as the new ideal

Handwritten ETL is entering its final chapter. Python scripts and custom SQL jobs may offer flexibility, but they break too easily and demand constant care from engineers. Managed pipeline tools are stepping into the gap. Databricks Lakeflow, Snowflake Openflow and AWS Glue represent a new generation of orchestration that covers extraction through monitoring and recovery.

While there is still work to do in handling complex source systems, the direction is unmistakable. Companies want pipelines that maintain themselves. They want fewer moving parts and fewer late-night failures caused by an overlooked script. Some organizations are even bypassing pipes altogether. Zero ETL patterns replicate data from operational systems to analytical environments instantly, eliminating the fragility that comes with nightly batch jobs. It is an emerging standard for applications that need real-time visibility and reliable AI training data.
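
The sketch below shows the zero-ETL idea in miniature, using two SQLite databases as stand-ins for an operational system and an analytical store: rows changed since the last watermark are copied across with no transform step in between. Actual zero-ETL offerings do this continuously at the platform level, typically via change data capture, so treat this only as an illustration of the pattern, not of any vendor's implementation.

```python
import sqlite3

# Two in-memory databases stand in for an operational system and an analytical store.
source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

source.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, updated_at INTEGER)")
source.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                   [(1, 120.0, 100), (2, 75.5, 105), (3, 9.9, 110)])
target.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, updated_at INTEGER)")

def replicate(src, dst, watermark):
    """Copy rows changed since the last sync straight into the analytical store."""
    rows = src.execute(
        "SELECT id, amount, updated_at FROM orders WHERE updated_at > ?", (watermark,)
    ).fetchall()
    dst.executemany(
        "INSERT INTO orders VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET amount=excluded.amount, updated_at=excluded.updated_at",
        rows,
    )
    dst.commit()
    return max((r[2] for r in rows), default=watermark)  # new watermark

watermark = replicate(source, target, watermark=0)
print(target.execute("SELECT * FROM orders").fetchall(), "watermark:", watermark)
```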

IN: Conversational analytics and agentic BI

Dashboards are losing their grip on the enterprise. Despite years of investment, adoption remains low and dashboard sprawl continues to grow. Most business users do not want to hunt for insights buried in static charts. They want answers. They want explanations. They want context.

Conversational analytics is stepping forward to fill the void. Generative BI systems let users describe the dashboard they want or ask an agent to explain the data directly. Instead of clicking through filters, a user might request a performance summary for the quarter or ask why a metric changed. Early attempts at Text to SQL struggled because they attempted to automate the query writing layer. The next wave is different. AI agents now focus on synthesizing insights and generating visualizations on demand. They act less like query engines and more like analysts who understand both the data and the business question.

IN: Vector native storage and open table formats

AI is reshaping storage requirements. Retrieval Augmented Generation depends on vector embeddings, which means that databases must store vectors as first-class objects. Vendors are racing to embed vector support directly in their engines.

At the same time, Apache Iceberg is becoming the new standard for open table formats. It allows every compute engine to work on the same data without duplication or transformation. Iceberg removes a decade of interoperability pain and turns object storage into a true multi-engine foundation. Organizations finally get a way to future-proof their data without rewriting everything each time the ecosystem shifts.
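
A minimal sketch of the vector-native idea: rows are stored alongside their embeddings, and retrieval is a nearest-neighbor lookup whose result would then be placed in the model's prompt (the RAG step). The `embed` function here is a toy character hash rather than a real embedding model, and the in-memory list stands in for a vector-capable table, so the ranking is only illustrative.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding: hash characters into a small normalized vector.
    A real system would call an embedding model and store the result
    as a first-class vector column in the database."""
    vec = np.zeros(16)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 16] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

# Rows stored with their vectors, as a vector-native table would hold them.
documents = [
    "Refund policy for enterprise contracts",
    "Quarterly revenue summary for EMEA",
    "Onboarding checklist for new data stewards",
]
table = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 1):
    q = embed(query)
    ranked = sorted(table, key=lambda row: -float(q @ row[1]))
    return [doc for doc, _ in ranked[:k]]

# Prints the nearest row under the toy embedding; the retrieved text would
# then be inserted into the LLM prompt as grounding context.
print(retrieve("how do refunds work for large customers?"))
```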

And here's what's "Out":

OUT: Monolithic warehouses and hyper-decentralized tooling

Traditional enterprise warehouses cannot handle unstructured data at scale and cannot deliver the real-time capabilities needed for AI. Yet the opposite extreme has failed too. The highly fragmented Modern Data Stack scattered responsibilities across too many small tools. It created governance chaos and slowed down AI readiness. Even the rigid interpretation of Data Mesh has faded. The principles live on, but the strict implementation has lost momentum as companies focus more on AI integration and less on organizational theory.

OUT: Hand-coded ETL and custom connectors

Nightly batch scripts break silently, cause delays and consume engineering bandwidth. With replication tools and managed pipelines becoming mainstream, the industry is rapidly abandoning these brittle workflows. Manual plumbing is giving way to orchestration that is always on and always monitored.

OUT: Manual stewardship and passive catalogs

The idea of humans reviewing data manually is no longer realistic. Reactive cleanup costs too much and delivers too little. Passive catalogs that serve as wikis are declining. Active metadata systems that monitor data continuously are now essential.

OUT: Static dashboards and one-way reporting

Dashboards that cannot answer follow-up questions frustrate users. Companies want tools that converse. They want analytics that think with them. Static reporting is collapsing under the weight of business expectations shaped by AI assistants.

OUT: On-premises Hadoop clusters

Maintaining on-prem Hadoop is becoming indefensible. Object storage combined with serverless compute offers elasticity, simplicity and lower cost. The complex zoo of Hadoop services no longer fits the modern data landscape.

Data management in 2026 is about clarity. The market is rejecting fragmentation, manual intervention and analytics that fail to communicate. The future belongs to unified platforms, native governance, vector native storage, conversational analytics and pipelines that operate with minimal human interference. AI is not replacing data management. It is rewriting the rules in ways that reward simplicity, openness and integrated design.

This article is published as part of the Foundry Expert Contributor Network.

Why CIOs need a new approach to unstructured data management

CIOs everywhere will be familiar with the major issues caused by collecting and retaining data at an increasingly rapid rate. Industry research shows 64% of enterprises manage at least 1 Petabyte of data, creating substantial cost, governance and compliance pressures.

If that wasn't enough, organizations frequently default to retaining these enormous datasets, even when they are no longer needed. To put this into context, the average useful life of most enterprise data has now shrunk to 30–90 days; however, for various reasons, businesses continue to store it indefinitely, thereby adding to the cost and complexity of their underlying infrastructure.

As much as 90% of this information comes in the form of unstructured data files spread across hybrid, multi-vendor environments with little to no centralized oversight. This can include everything from MS Office docs to the photo and video content routinely used by marketing teams. The list is extensive, stretching to invoices, service reports, log files and, in some organizations, even scans or faxes of handwritten documents, often dating back decades.

In these circumstances, CIOs often lack clear visibility into what data exists, where it resides, who owns it, how old it is or whether it holds any business value. This matters because in many cases, it has tremendous value with the potential to offer insight into a range of important business issues, such as customer behaviour or field quality challenges, among many others.

With the advent of GenAI, it is now realistic to use the knowledge embedded in all kinds of documents and to retrieve their high-quality (i.e., relevant, useful and correct) content. This is possible even for documents of low visual or graphical quality. As a result, running AI on a combination of structured and unstructured input can reconstruct the entire enterprise memory and the so-called "tribal knowledge".

Visibility and governance

The first point to appreciate is that the biggest challenge is not the amount of data being collected and retained, but the absence of meaningful visibility into what is being stored.

Without an enterprise-wide view (a situation common to many organizations), teams cannot determine which data is valuable, which is redundant, or which poses a risk. In particular, metadata remains underutilised, even though insights such as creation date, last access date, ownership, activity levels and other basic indicators can immediately reveal security risks, duplication, orphaned content and stale data.

Visibility begins by building a thorough understanding of the existing data landscape. This can be done by using tools that scan storage platforms across multi-vendor and multi-location environments, collect metadata at scale, and generate virtual views of datasets. This allows teams to understand the size, age, usage and ownership of their data, enabling them to identify duplicate, forgotten or orphaned files.
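
A minimal sketch of such a metadata scan is shown below. It walks a directory tree, records size and age for each file, and flags stale files and byte-identical duplicates. Real discovery tools do this across storage platforms and at far larger scale; the one-year staleness threshold is just an assumption for the example.

```python
import os, time, hashlib
from collections import defaultdict

STALE_AFTER_DAYS = 365  # illustrative threshold

def scan(root):
    """Build a simple inventory of files under `root`, flagging stale files
    and byte-identical duplicates (a tiny version of a metadata virtual view)."""
    now = time.time()
    inventory, by_digest = [], defaultdict(list)
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read(1 << 20)).hexdigest()  # first 1 MB
            except OSError:
                continue  # unreadable file: skip here, though a real tool would report it
            age_days = (now - st.st_mtime) / 86400
            by_digest[(digest, st.st_size)].append(path)
            inventory.append({"path": path, "size": st.st_size,
                              "age_days": round(age_days, 1),
                              "stale": age_days > STALE_AFTER_DAYS})
    duplicates = [paths for paths in by_digest.values() if len(paths) > 1]
    return inventory, duplicates

files, dupes = scan(".")
print(f"{len(files)} files scanned, {sum(f['stale'] for f in files)} stale, "
      f"{len(dupes)} duplicate groups")
```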

It's a complex challenge. In most cases, some data will be on-premises, some in the cloud, some stored as files and some as objects (such as S3 or Azure), all of which can be on-prem or in the cloud. In these circumstances, the multi-vendor infrastructure strategy adopted by many organizations is sound, as it facilitates data redundancy and replication while also protecting against increasingly common cloud outages, such as those seen at Amazon and Cloudflare.

With visibility tools and processes in place, the next requirement is to introduce governance frameworks that bring structure and control to unstructured data estates. Good governance enables CIOs to align information with retention rules, compliance obligations and business requirements, reducing unnecessary storage and risk.

It's also dependent on effective data classification processes, which help determine which data should be retained, which can be relocated to lower-cost platforms and which no longer serve a purpose. Together, these processes establish clearer ownership and ensure data is handled consistently across the organization while also providing the basis for reliable decision-making by ensuring that data remains accurate. Without it, visibility alone cannot deliver operational or financial benefits, because there is no framework for acting on what the organization discovers.

Lifecycle management

Once CIOs have a clear view of what exists and a framework to control it, they need a practical method for acting on those findings across the data lifecycle. By applying metadata-based policies, teams can migrate older or rarely accessed data to lower-cost platforms, thereby reducing pressure on primary storage. Files that have not been accessed for an extended period can be relocated to more economical systems, while long-inactive data can be archived or removed entirely if appropriate.
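
The sketch below shows what a metadata-based tiering policy can look like in its simplest form. The thresholds and dataset names are invented for illustration; in practice they come from retention rules, compliance obligations and cost models.

```python
from datetime import date, timedelta

# Illustrative policy thresholds, checked from strictest (oldest) to loosest.
POLICIES = [
    ("delete",  timedelta(days=7 * 365)),   # eligible for deletion after ~7 years idle
    ("archive", timedelta(days=3 * 365)),   # archive tier after ~3 years idle
    ("cool",    timedelta(days=180)),       # low-cost storage after 6 months idle
]

def tier_for(last_accessed, today=None):
    """Pick the cheapest eligible tier for a dataset based on how long it has been idle."""
    today = today or date.today()
    idle = today - last_accessed
    for tier, threshold in POLICIES:
        if idle >= threshold:
            return tier
    return "primary"

datasets = {
    "marketing/campaign-videos-2017": date(2018, 1, 15),
    "finance/invoices-2023":          date(2025, 6, 1),
    "ops/service-logs-current":       date(2025, 12, 1),
}
for name, last_access in datasets.items():
    print(f"{name:35s} -> {tier_for(last_access, today=date(2026, 1, 1))}")
```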

A big part of the challenge is that the data lifecycle is now much longer than it used to be, a situation that has profoundly affected how organizations approach storage strategy and spend.

For example, datasets considered 'active' will typically be stored on high- or mid-performance systems. Once again, there are both on-premises and cloud options to consider, depending on the use case, but typically they include both file and object requirements.

As time passes (often years), data gradually becomes eligible for archival. It is then moved to an archive venue, where it is better protected but may become less accessible or require more checks before access. Inside the archive, it can (after even more years) be tiered to cheaper storage such as tape. At this point, data retrieval times might range from minutes to hours, or even days. In each case, archived data is typically subject to all kinds of regulations and can be used during e-discovery.

In most circumstances, it is only after this stage has been reached that data is finally eligible to be deleted.

When organizations take this approach, many discover that a significant proportion of their stored information falls into the inactive or long-inactive category. Addressing this issue immediately frees capacity, reduces infrastructure expenditure and helps prevent the further accumulation of redundant content.

Policy-driven lifecycle management also improves operational control. It ensures that data is retained according to its relevance rather than by default and reduces the risk created by carrying forgotten or outdated information. It supports data quality by limiting the spread of stale content across the estate and provides CIOs with a clearer path to meeting retention and governance obligations.

What's more, at a strategic level, lifecycle management transforms unstructured data from an unmanaged cost into a controlled process that aligns storage with business value. It strengthens compliance by ensuring only the data required for operational or legal reasons is kept, and it improves readiness for AI and analytics initiatives by ensuring that underlying datasets are accurate and reliable.

To put all these issues into perspective, the business obsession with data shows no sign of slowing up. Indeed, the growing adoption of AI technologies is raising the stakes even further, particularly for organizations that continue to prioritize data collection and storage over management and governance. As a result, getting data management and storage strategies in order sooner rather than later is likely to rise to the top of the to-do list for CIOs across the board.

This article is published as part of the Foundry Expert Contributor Network.

"Not all data is the same": The data strategy challenges that will decide AI success or failure

์ƒ์„ฑํ˜• AI๋Š” ๊ฑฐ์˜ ๋ชจ๋“  ์‚ฐ์—…์—์„œ ํŒŒ๊ดด์  ์˜ํ–ฅ๋ ฅ์„ ํ‚ค์šฐ๊ณ  ์žˆ์ง€๋งŒ, ์ตœ๊ณ  ์ˆ˜์ค€์˜ AI ๋ชจ๋ธ๊ณผ ๋„๊ตฌ๋ฅผ ์“ฐ๋Š” ๊ฒƒ๋งŒ์œผ๋กœ๋Š” ์ถฉ๋ถ„ํ•˜์ง€ ์•Š๋‹ค. ๋ชจ๋“  ๊ธฐ์—…์ด ๋น„์Šทํ•œ ๋ชจ๋ธ๊ณผ ๋„๊ตฌ๋ฅผ ์“ฐ๋Š” ์ƒํ™ฉ์—์„œ ๊ฒฝ์Ÿ์šฐ์œ„๋ฅผ ๋งŒ๋“œ๋Š” ํ•ต์‹ฌ์€ ์ž์ฒด ๋ชจ๋ธ์„ ํ•™์Šตํ•˜๊ณ  ๋ฏธ์„ธ ์กฐ์ •ํ•˜๊ฑฐ๋‚˜ ๋ชจ๋ธ์— ์ฐจ๋ณ„ํ™”๋œ ๋งฅ๋ฝ์„ ์ œ๊ณตํ•˜๋Š” ์—ญ๋Ÿ‰์ด๋ฉฐ, ์ด๋Ÿฐ ์—ญ๋Ÿ‰์—๋Š” ๋ฐ์ดํ„ฐ๊ฐ€ ํ•„์š”ํ•˜๋‹ค.

Code bases, documentation and change logs are the data behind coding agents. Libraries of past proposals and contracts become training material for writing assistants. Customer databases and support tickets are the foundation of customer service chatbots. But having a lot of data does not make it good data.

"It is far too easy to connect whatever data happens to be accessible to a model," said Manju Naglapur, senior vice president and general manager of cloud, applications and infrastructure solutions at IT services company Unisys. "I have watched the same mistake repeated over the last three years. The old adage of garbage in, garbage out still holds."

Indeed, in a Boston Consulting Group survey published in September, 68% of 1,250 AI decision-makers cited a lack of access to high-quality data as a key obstacle to AI adoption. In a Cisco survey of more than 8,000 AI leaders in October, only 35% of companies had the clean, centralized data that AI agents need integrated in real time. And IDC warns that companies that do not prioritize high-quality, so-called AI-ready data by 2027 will struggle to scale generative AI and agentic solutions and could see productivity fall by 15%.

When the semantic layer breaks down

๋ฐ์ดํ„ฐ๋ฅผ ํ•œ๋ฐ โ€˜๋ญ‰๋šฑ๊ทธ๋ คโ€™ ๋ชจ์•„๋‘๋ฉด ๋˜ ๋‹ค๋ฅธ ๋ฌธ์ œ๊ฐ€ ์ƒ๊ธด๋‹ค. ์‹œ๋งจํ‹ฑ(Semantic) ๊ณ„์ธต์ด ํ˜ผ๋ž€์Šค๋Ÿฌ์›Œ์ง„๋‹ค๋Š” ์ ์ด๋‹ค. ์—ฌ๋Ÿฌ ์†Œ์Šค์—์„œ ๋“ค์–ด์˜จ ๋ฐ์ดํ„ฐ๋Š” ๊ฐ™์€ ์ •๋ณด๋ผ๋„ ์ •์˜์™€ ๊ตฌ์กฐ๊ฐ€ ์ œ๊ฐ๊ฐ์ผ ์ˆ˜ ์žˆ๋‹ค. ์‹ ๊ทœ ํ”„๋กœ์ ํŠธ๋‚˜ ์ธ์ˆ˜ํ•ฉ๋ณ‘์œผ๋กœ ๋ฐ์ดํ„ฐ ์†Œ์Šค๊ฐ€ ๋Š˜์–ด๋‚ ์ˆ˜๋ก ์ด ๋ฌธ์ œ๋Š” ์ปค์ง„๋‹ค. ํŠนํžˆ โ€˜๊ณ ๊ฐโ€™์ฒ˜๋Ÿผ ๊ฐ€์žฅ ์ค‘์š”ํ•œ ๋ฐ์ดํ„ฐ์กฐ์ฐจ ์‹๋ณ„ํ•˜๊ณ  ์ •ํ•ฉ์„ฑ์„ ์œ ์ง€ํ•˜๊ธฐ ์–ด๋ ต๋‹ค๋Š” ํ˜ธ์†Œ๊ฐ€ ๋งŽ๋‹ค.

๋ฐ์ดํ„ฐยท์‹ ์šฉ์ •๋ณด ๊ธฐ์—… ๋˜ ์•ค ๋ธŒ๋ž˜๋“œ์ŠคํŠธ๋ฆฌํŠธ(Dun & Bradstreet)๋Š” ์ง€๋‚œํ•ด ์กฐ์‚ฌ์—์„œ ์ ˆ๋ฐ˜์ด ๋„˜๋Š” ์กฐ์ง์ด AI์— ํ™œ์šฉํ•˜๋Š” ๋ฐ์ดํ„ฐ์˜ ์‹ ๋ขฐ์„ฑ๊ณผ ํ’ˆ์งˆ์„ ์šฐ๋ คํ•œ๋‹ค๊ณ  ๋ณด๊ณ ํ–ˆ๋‹ค. ๊ธˆ์œต ์„œ๋น„์Šค ์—…์ข…์—์„œ๋Š” 52%๊ฐ€ โ€˜๋ฐ์ดํ„ฐ ํ’ˆ์งˆ ๋ฌธ์ œโ€™๋กœ AI ํ”„๋กœ์ ํŠธ๊ฐ€ ์‹คํŒจํ–ˆ๋‹ค๊ณ  ๋‹ตํ–ˆ๊ณ , 2,000๋ช… ์ด์ƒ ์—…๊ณ„ ์ „๋ฌธ๊ฐ€๋ฅผ ๋Œ€์ƒ์œผ๋กœ 12์›” ์„ค๋ฌธ์—์„œ๋Š” 44%๊ฐ€ 2026๋…„ ์ตœ๋Œ€ ์šฐ๋ ค๋กœ โ€˜๋ฐ์ดํ„ฐ ํ’ˆ์งˆโ€™์„ ๊ผฝ์•˜๋‹ค. ์ด๋Š” โ€˜์‚ฌ์ด๋ฒ„๋ณด์•ˆโ€™ ๋‹ค์Œ์œผ๋กœ ํฐ ๊ฑฑ์ •๊ฑฐ๋ฆฌ์˜€๋‹ค.

ํด๋ผ์šฐ๋“œ ์ปจ์„คํŒ… ๊ธฐ์—… ๋ ˆ๋ชฌ๊ทธ๋ผ์Šค(Lemongrass)์˜ CTO ์ด๋จผ ์˜ค๋‹์€ โ€œ๋ฐ์ดํ„ฐ ํ‘œ์ค€์ด ์„œ๋กœ ์ถฉ๋Œํ•˜์ง€ ์•Š๋Š” ๊ณณ์ด ์—†๋‹ค. ๋ถˆ์ผ์น˜(mismatch) ํ•˜๋‚˜ํ•˜๋‚˜๊ฐ€ ๋ฆฌ์Šคํฌ์ด์ง€๋งŒ, ์‚ฌ๋žŒ์ด๋ผ๋ฉด ์–ด๋–ป๊ฒŒ๋“  ํ•ด๊ฒฐํ•œ๋‹คโ€๋ผ๊ณ  ์ง€์ ํ–ˆ๋‹ค. ์˜ค๋‹์€ AI๋„ ๋น„์Šทํ•œ ๋ฐฉ์‹์œผ๋กœ โ€˜๋ฌธ์ œ๋ฅผ ์šฐํšŒโ€™ํ•˜๊ฒŒ ๋งŒ๋“ค ์ˆ˜ ์žˆ์ง€๋งŒ, ๊ทธ๋Ÿฌ๋ ค๋ฉด ๋ฌธ์ œ๊ฐ€ ๋ฌด์—‡์ธ์ง€ ์ •ํ™•ํžˆ ํŒŒ์•…ํ•˜๊ณ  ์ด๋ฅผ ๋ฐ”๋กœ์žก๋Š” ๋ฐ ์‹œ๊ฐ„๊ณผ ๋…ธ๋ ฅ์„ ํˆฌ์ž…ํ•ด์•ผ ํ•œ๋‹ค๊ณ  ์งš์—ˆ๋‹ค. ๋ฐ์ดํ„ฐ๊ฐ€ ์ด๋ฏธ ๊นจ๋—ํ•˜๋”๋ผ๋„ ์‹œ๋งจํ‹ฑ ๋งคํ•‘์€ ํ•„์š”ํ•˜๊ณ , ๋ฐ์ดํ„ฐ๊ฐ€ ์™„๋ฒฝํ•˜์ง€ ์•Š๋‹ค๋ฉด ์ •๋ฆฌ ์ž‘์—…์— ๋” ๋งŽ์€ ์‹œ๊ฐ„์ด ๋“ ๋‹ค๋Š” ๊ฒƒ์ด๋‹ค.

์˜ค๋‹์€ โ€œ์ž‘์€ ๋ฐ์ดํ„ฐ๋กœ ์‹œ์ž‘ํ•ด ํ•ด๋‹น ์‚ฌ์šฉ๋ก€๋ฅผ ์ œ๋Œ€๋กœ ๋งž์ถ”๋Š” ๊ฒŒ ํ˜„์‹ค์ ์ธ ์ ‘๊ทผโ€์ด๋ผ๋ฉฐ โ€œ๊ทธ ๋‹ค์Œ์— ํ™•์žฅํ•˜๋Š” ๋ฐฉ์‹์ด ์„ฑ๊ณต์ ์ธ ๋„์ž…์˜ ๋ชจ์Šตโ€์ด๋ผ๊ณ  ๋ง๋ถ™์˜€๋‹ค.

๊ด€๋ฆฌ๋˜์ง€ ์•Š๊ณ  ๊ตฌ์กฐ๋„ ์—†๋Š” ๋ฐ์ดํ„ฐ

์˜ค๋‹์€ ๊ธฐ์—… ์ •๋ณด์— AI๋ฅผ ์—ฐ๊ฒฐํ•  ๋•Œ ๋˜ ๋‹ค๋ฅธ ํ”ํ•œ ์‹ค์ˆ˜๋กœ โ€˜๋น„์ •ํ˜• ๋ฐ์ดํ„ฐ ์†Œ์Šคโ€™์— ๋ฌด์ž‘์ • ์—ฐ๊ฒฐํ•˜๋Š” ๋ฐฉ์‹์„ ๊ผฝ์•˜๋‹ค. LLM์ด ๋ฌธ์„œ, ํ…์ŠคํŠธ, ์ด๋ฏธ์ง€์—์„œ ์˜๋ฏธ๋ฅผ ๋ฝ‘์•„๋‚ด๋Š” ๋ฐ ๊ฐ•ํ•œ ๊ฑด ์‚ฌ์‹ค์ด์ง€๋งŒ, ๋ชจ๋“  ๋ฌธ์„œ๊ฐ€ AI์˜ โ€˜๊ด€์‹ฌโ€™์„ ๋ฐ›์„ ์ž๊ฒฉ์ด ์žˆ๋Š” ๊ฑด ์•„๋‹ˆ๋ผ๋Š” ์ง€์ ์ด๋‹ค.

A document might be an outdated version, an unreviewed draft or a copy that contains errors. "It is a problem people run into all the time," O'Neill said. "Connect OneDrive or a file store to a chatbot and it cannot tell 'version 2' from 'version 2 final.'"

Version control is hard even for people. "Microsoft helps with version management, but users still keep doing 'save as,'" O'Neill said. "The result is unstructured data that grows without end."

Agentic AI and ever more complicated security

When CIOs think about AI security, they usually think of model guardrails, protecting training data and protecting the data used for RAG embeddings. But as chatbot-centric AI evolves into agentic AI, the security problem becomes far more complicated.

Take an employee payroll database as an example. When an employee asks about their pay, the RAG approach uses conventional code to extract only the data that is needed, includes it in the prompt and then queries the AI. The AI sees only the information it is allowed to see, and protecting the rest of the data is left to the traditional software stack.

In an agentic AI system, by contrast, the AI agent can query the database autonomously, for example through an MCP server. Because the agent is expected to answer any employee's question, it needs access to the entire payroll dataset, and preventing that information from flowing to the wrong place becomes a major challenge. According to the Cisco survey, only 27% of companies have dynamic, fine-grained access controls on their AI systems, and fewer than half are confident they can protect sensitive data or prevent unauthorized access.
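
To make the contrast concrete, the sketch below shows the row-level filtering that, in the RAG pattern, runs in conventional code before anything reaches the model. In an agentic setup the same rule has to be enforced in the tool or MCP layer instead, because the agent could otherwise query the whole table. The payroll table and field names are invented for the example.

```python
PAYROLL = [
    {"employee_id": "E100", "name": "Ada",  "salary": 98000},
    {"employee_id": "E200", "name": "Brik", "salary": 87000},
]

def rows_for(requester_id: str):
    """Row-level filter applied before anything reaches the model.
    In the RAG pattern this runs in conventional code while building the prompt;
    in an agentic setup the same rule must live in the tool/MCP layer."""
    return [row for row in PAYROLL if row["employee_id"] == requester_id]

def build_prompt(requester_id: str, question: str) -> str:
    allowed = rows_for(requester_id)
    context = "\n".join(f"{r['name']}: {r['salary']}" for r in allowed)
    return f"Answer using only this data:\n{context}\n\nQuestion: {question}"

print(build_prompt("E100", "What was my salary last month?"))
```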

์˜ค๋‹์€ ๋ฐ์ดํ„ฐ ๋ ˆ์ดํฌ๋กœ ๋ชจ๋“  ๋ฐ์ดํ„ฐ๋ฅผ ๋ชจ์œผ๋Š” ๋ฐฉ์‹์ด ๋ฌธ์ œ๋ฅผ ๋” ํ‚ค์šธ ์ˆ˜ ์žˆ๋‹ค๋ฉฐ, โ€œ๊ฐ ๋ฐ์ดํ„ฐ ์†Œ์Šค์—๋Š” ์ €๋งˆ๋‹ค์˜ ๋ณด์•ˆ ๋ชจ๋ธ์ด ์žˆ๋‹ค. ํ•˜์ง€๋งŒ ์ด๋ฅผ ๋ธ”๋ก ์Šคํ† ๋ฆฌ์ง€์— ์Œ“์•„ ์˜ฌ๋ฆฌ๋ฉด ๊ทธ โ€˜์„ธ๋ถ„ํ™”๋œ ํ†ต์ œโ€™๊ฐ€ ์‚ฌ๋ผ์ง„๋‹คโ€๋ผ๊ณ  ์ง€์ ํ–ˆ๋‹ค. ์‚ฌํ›„์ ์œผ๋กœ ๋ณด์•ˆ ๊ณ„์ธต์„ ๋ง๋ถ™์ด๊ธฐ๋ณด๋‹ค ์›์ฒœ ๋ฐ์ดํ„ฐ ์†Œ์Šค์— ์ง์ ‘ ์ ‘๊ทผํ•˜๊ณ  ๋ฐ์ดํ„ฐ ๋ ˆ์ดํฌ๋ฅผ ๊ฐ€๋Šฅํ•œ ํ•œ ์šฐํšŒํ•˜๋Š” ์ „๋žต์ด ๋” ํ˜„์‹ค์ ์ผ ์ˆ˜ ์žˆ๋‹ค๋Š” ์„ค๋ช…์ด๋‹ค.

โ€˜์†๋„์ „โ€™์ด ๊ฐ€์žฅ ์œ„ํ—˜ํ•œ ํ•จ์ •

Doug Gilbert, CIO and CDO of digital transformation consultancy Sutherland Global, names moving too fast as the number one mistake CIOs make. "It is why most projects fail," he said. "The race for speed has overheated."

๋ฐ์ดํ„ฐ ์ด์Šˆ๋ฅผ โ€˜๋ณ‘๋ชฉโ€™์œผ๋กœ๋งŒ ๋ณด๊ณ  ๊ฑด๋„ˆ๋›ฐ๋ ค ํ•˜์ง€๋งŒ, ์‚ฌ์‹ค์ƒ ๊ทธ ๋ชจ๋“  ๊ฒƒ์ด ํฐ ๋ฆฌ์Šคํฌ๋กœ ๋Œ์•„์˜จ๋‹ค๋Š” ๊ฒฝ๊ณ ๋„ ๋ง๋ถ™์˜€๋‹ค. ๊ธธ๋ฒ„ํŠธ๋Š” โ€œAI ํ”„๋กœ์ ํŠธ๋ฅผ ์ง„ํ–‰ํ•˜๋Š” ๋งŽ์€ ์กฐ์ง์ด ๊ฒฐ๊ตญ ๊ฐ์‚ฌ๋ฅผ ๋ฐ›๊ฒŒ ๋˜๊ณ , ๊ทธ๋•Œ ๊ฐ€์„œ ์ „๋ถ€ ๋‹ค์‹œ ํ•ด์•ผ ํ•  ์ˆ˜ ์žˆ๋‹คโ€๊ณ  ๋งํ–ˆ๋‹ค. ์ด์–ด โ€œ๋ฐ์ดํ„ฐ๋ฅผ ์ œ๋Œ€๋กœ ๊ฐ–์ถ”๋Š” ๊ฑด ์†๋„๋ฅผ ๋Šฆ์ถ”๋Š” ๊ฒŒ ์•„๋‹ˆ๋ผ, ์˜ฌ๋ฐ”๋ฅธ ์ธํ”„๋ผ๋ฅผ ๊น”์•„ ํ˜์‹  ์†๋„๋ฅผ ์˜ฌ๋ฆฌ๊ณ  ๊ฐ์‚ฌ๋„ ํ†ต๊ณผํ•˜๋ฉฐ ์ปดํ”Œ๋ผ์ด์–ธ์Šค๋ฅผ ํ™•๋ณดํ•˜๋Š” ๊ธธโ€์ด๋ผ๊ณ  ๊ฐ•์กฐํ–ˆ๋‹ค.

Testing, too, is easy to dismiss as a waste of time, but building fast and fixing later is not always the best strategy. "What is the cost of a mistake moving at the speed of light?" Gilbert asked. "I will always look at testing first. More products go to market without testing than you would think."

๋ฐ์ดํ„ฐ ์ •๋ฆฌ๋ฅผ ๋•๋Š” AI

๋ฐ์ดํ„ฐ ํ’ˆ์งˆ ๋ฌธ์ œ๋Š” ์‚ฌ์šฉ๋ก€๊ฐ€ ๋Š˜์–ด๋‚ ์ˆ˜๋ก ๋” ์•…ํ™”๋  ๊ฒƒ์ฒ˜๋Ÿผ ๋ณด์ธ๋‹ค. ๋ฐ์ดํ„ฐ ๊ด€๋ฆฌ ์†Œํ”„ํŠธ์›จ์–ด ๊ธฐ์—… ์—์ด๋ธŒํฌ์ธํŠธ(AvePoint)๊ฐ€ 10์›”์— 775๋ช…์˜ ๊ธ€๋กœ๋ฒŒ ๋น„์ฆˆ๋‹ˆ์Šค ๋ฆฌ๋”๋ฅผ ์กฐ์‚ฌํ•œ ๋ณด๊ณ ์„œ์— ๋”ฐ๋ฅด๋ฉด, 81%๋Š” ๋ฐ์ดํ„ฐ ๊ด€๋ฆฌ ๋˜๋Š” ๋ฐ์ดํ„ฐ ๋ณด์•ˆ ๋ฌธ์ œ๋กœ AI ๋ณด์กฐ ๋„๊ตฌ ๋ฐฐํฌ๋ฅผ ์ด๋ฏธ ๋ฏธ๋ฃฌ ๊ฒฝํ—˜์ด ์žˆ๋‹ค๊ณ  ๋‹ตํ–ˆ๋‹ค. ํ‰๊ท  ์ง€์—ฐ ๊ธฐ๊ฐ„์€ 6๊ฐœ์›”์ด์—ˆ๋‹ค. ๋ฐ์ดํ„ฐ ๊ทœ๋ชจ๋„ ๋น ๋ฅด๊ฒŒ ๋ถˆ์–ด๋‚œ๋‹ค. ์‘๋‹ต์ž์˜ 52%๋Š” ๊ธฐ์—…์ด 500ํŽ˜ํƒ€๋ฐ”์ดํŠธ ์ด์ƒ์˜ ๋ฐ์ดํ„ฐ๋ฅผ ๊ด€๋ฆฌํ•˜๊ณ  ์žˆ๋‹ค๊ณ  ๋‹ตํ–ˆ๋Š”๋ฐ, ์ด๋Š” 1๋…„ ์ „ 41%์—์„œ ํฌ๊ฒŒ ๋Š˜์—ˆ๋‹ค.

๊ทธ๋Ÿผ์—๋„ AI๊ฐ€ ์—ญ์„ค์ ์œผ๋กœ ๋ฐ์ดํ„ฐ ์ •๋ฆฌ๋ฅผ ๋” ์‰ฝ๊ฒŒ ๋งŒ๋“ค ๊ฒƒ์ด๋ผ๋Š” ๋ถ„์„๋„ ์žˆ๋‹ค. ์œ ๋‹ˆ์‹œ์Šค์˜ ๋‚˜๊ธ€๋ผํ‘ธ๋ฅด๋Š” โ€œ๊ณ ๊ฐ์— ๋Œ€ํ•œ 360๋„ ๋ทฐ๋ฅผ ํ™•๋ณดํ•˜๊ณ , ์—ฌ๋Ÿฌ ๋ฐ์ดํ„ฐ ์†Œ์Šค๋ฅผ ์ •๋ฆฌํ•˜๊ณ  ์กฐ์ •ํ•˜๋Š” ์ผ์ด AI ๋•๋ถ„์— ๋” ์‰ฌ์›Œ์งˆ ๊ฒƒโ€์ด๋ผ๋ฉฐ, โ€œ์—ญ์„ค์ ์ด์ง€๋งŒ, AI๊ฐ€ ๋ชจ๋“  ๊ฑธ ๋•๊ฒŒ ๋  ๊ฒƒโ€์ด๋ผ๊ณ  ํ‘œํ˜„ํ–ˆ๋‹ค. ์ด์–ด โ€œ3๋…„ ๊ฑธ๋ฆด ๋””์ง€ํ„ธ ํŠธ๋žœ์Šคํฌ๋ฉ”์ด์…˜๋„ ์ด์ œ AI๋กœ 12~18๊ฐœ์›”์ด๋ฉด ๊ฐ€๋Šฅํ•ด์งˆ ์ˆ˜ ์žˆ๋‹ค. AI ๋„๊ตฌ๋Š” ํ˜„์‹ค์— ๊ฐ€๊นŒ์›Œ์ง€๊ณ  ์žˆ๊ณ , ๋ณ€ํ™” ์†๋„๋ฅผ ๋” ๋Œ์–ด์˜ฌ๋ฆด ๊ฒƒโ€์ด๋ผ๊ณ  ์ „๋งํ–ˆ๋‹ค.

Beyond the hype: 4 critical misconceptions derailing enterprise AI adoption

Despite unprecedented investment in artificial intelligence, with enterprises committing an estimated $35 billion annually, the stark reality is that most AI initiatives fail to deliver tangible business value, and determining ROI for AI initiatives remains notoriously difficult. Research reveals that approximately 80% of AI projects never reach production, almost double the failure rate of traditional IT projects. More alarmingly, studies from MIT indicate that 95% of generative AI investments produce no measurable financial returns.

The prevailing narrative attributes these failures to technological inadequacy or insufficient investment. However, this perspective fundamentally misunderstands the problem. My experience reveals another root cause that lies not in the technological aspects themselves, but in strategic and cognitive biases that systematically distort how organizations define readiness and value, manage data, and adopt and operationalize the AI lifecycle.

Here are four critical misconceptions that consistently undermine enterprise AI strategies.

1. The organizational readiness illusion

Perhaps the most pervasive misconception plaguing AI adoption is the readiness illusion, where executives equate technology acquisition with organizational capability. This bias manifests in underestimating AI's disruptive impact on organizational structures, power dynamics and established workflows. Leaders frequently assume AI adoption is purely technological when it represents a fundamental transformation that requires comprehensive change management, governance redesign and cultural evolution.

The readiness illusion obscures human and organizational barriers that determine success. As Li, Zhu and Hua observe, firms struggle to capture value not because technology fails, but because people, processes and politics do. During my engagements across various industries, I noticed that AI initiatives trigger turf wars. These kinds of defensive reactions from middle management, perceiving AI as threatening their authority or job security, quietly derail initiatives even in technically advanced companies.

S&P Global's research reveals companies with higher failure rates encounter more employee and customer resistance. Organizations with lower failure rates demonstrate holistic approaches addressing cultural readiness alongside technical capability. MIT research found that older organizations experienced declines in structured management practices after adopting AI, accounting for one-third of their productivity losses. This suggests that established companies must rethink organizational design rather than merely overlaying AI onto existing structures.

2. AI expectation myths

The second critical bias involves inflated expectations about AI's universal applicability. Leaders frequently assume AI can address every business challenge and guarantee immediate ROI, when empirical evidence demonstrates that AI delivers measurable value only in targeted, well-defined and precise use cases. This expectation-reality gap contributes to pilot paralysis, in which companies undertake numerous AI experiments but struggle to scale any to production.

An S&P Global 2025 survey reveals that 42% of companies abandoned most AI initiatives during the year, up from just 17% in 2024, with the average organization scrapping 46% of proofs-of-concept before production. McKinsey's research confirms that organizations reporting significant financial returns are twice as likely to have redesigned end-to-end workflows before selecting modeling techniques. Gartner indicates that more than 40% of agentic AI projects will be cancelled by 2027, largely because organizations pursue AI based on technological fascination rather than concrete business value.

3. Data readiness bias

The third misconception centers on data; specifically, the bias toward prioritizing volume over quality while assuming the data is already transparent and unbiased, well governed and contextually accurate. Executives frequently claim their enterprise data is already clean, or assume that collecting more data will ensure AI success, fundamentally misunderstanding that quality, stewardship and relevance matter exponentially more than raw quantity, and that the definition of clean data changes when AI is introduced.

Research exposes this readiness gap: while 91% of organizations acknowledge that a reliable data foundation is essential for AI success, only 55% believe their organization actually possesses one. This disconnect reveals executives' tendency to overestimate data readiness while underinvesting in the governance, integration and quality management that AI systems require.

Analysis by FinTellect AI indicates that in financial services, 80% of AI projects fail to reach production, and of those that do, 70% fail to deliver measurable business value, predominantly because of poor data quality rather than technical deficiencies. Organizations that treat data as a product, investing in master data management, governance frameworks and data stewardship, are seven times more likely to deploy generative AI at scale.

This underscores that data infrastructure represents a strategic differentiator, not merely a technical prerequisite. Our definition of data readiness should be broadened to cover data accessibility, integration and cleansing in the context of AI adoption.

4. The deployment fallacy

The fourth critical misconception involves treating AI implementation as traditional software deployment, a set-and-forget approach that is incompatible with AI's operational requirements. I've noticed that many executives believe deploying AI resembles rolling out ERP or CRM systems, assuming pilot performance translates directly to production.

This fallacy ignores AI's fundamental characteristic: AI systems are probabilistic and require continuous lifecycle management. MIT research demonstrates manufacturing firms adopting AI frequently experience J-curve trajectories, where initial productivity declines but is then followed by longer-term gains. This is because AI deployment triggers organizational disruption requiring adjustment periods. Companies failing to anticipate this pattern abandon initiatives prematurely.

The fallacy manifests as inadequate deployment management: organizations fail to plan for model monitoring, retraining, governance and adaptation. AI systems can suffer from data drift as underlying patterns evolve. Organizations treating AI as static technology systematically underinvest in the operational infrastructure necessary for sustained success.

Overcoming the AI adoption misconceptions

Successful AI adoption requires understanding that deployment represents not an endpoint but the beginning of continuous lifecycle management. Despite the abundance of technological stacks available for AI deployments, a comprehensive lifecycle management strategy is essential to harness the full potential of these capabilities and effectively implement them.

I propose that the adoption journey should be structured into six interconnected phases, each playing a crucial role in transforming AI from a mere concept into a fully operational capability.

Stage 1: Envisioning and strategic alignment

Organizations must establish clear strategic objectives connecting AI initiatives to measurable business outcomes across revenue growth, operational efficiency, cost reduction and competitive differentiation.

This phase requires engaging leadership and stakeholders through both top-down and bottom-up approaches. Top-down leadership provides strategic direction, resource allocation and organizational mandate, while bottom-up engagement ensures frontline insights, practical use case identification and grassroots adoption. This bidirectional alignment proves critical: executive vision without operational input leads to disconnected initiatives, while grassroots enthusiasm without strategic backing results in fragmented pilots.

Organizations must conduct an honest assessment of organizational maturity across governance, culture and change readiness, as those that skip rigorous self-assessment inevitably encounter the readiness illusion.

Stage 2: Data foundation and governance

Organizations must ensure data availability, quality, privacy and regulatory compliance across the enterprise. This stage involves implementing a modern data architecture, whether centralized or federated, supported by robust governance frameworks including lineage tracking, security protocols and ethical AI principles. Critically, organizations must adopt data democratization concepts that make quality data accessible across organizational boundaries while maintaining appropriate governance and security controls. Data democratization breaks down silos that traditionally restrict data access to specialized teams, enabling cross-functional teams to leverage AI effectively. The infrastructure must support not only centralized data engineering teams but also distributed business users who can access, understand and utilize data for AI-driven decision-making. Organizations often underestimate this stage's time requirements, yet it fundamentally determines subsequent success.

Stage 3: Pilot use cases with quick wins

Organizations prove AI value through quick wins by starting with low-risk, high-ROI use cases that demonstrate tangible impact. Successful organizations track outcomes through clear KPIs such as cost savings, customer experience improvements, fraud reduction and operational efficiency gains. Precision in use case definition proves essential: AI cannot solve general or wide-scope problems but excels when applied to well-defined, bounded challenges. Effective prioritization considers potential ROI, technical feasibility, data availability, regulatory constraints and organizational readiness. Organizations benefit from combining quick wins that build confidence with transformational initiatives that drive strategic differentiation. This phase encompasses feature engineering, model selection and training, and rigorous testing, maintaining a clear distinction between proof-of-concept and production-ready solutions.

Stage 4: Monitor, optimize and govern

Unlike traditional IT implementations, this stage must begin during pilot deployment rather than waiting for production rollout. Organizations define model risk management policies aligned with regulatory frameworks, establishing protocols for continuous monitoring, drift detection, fairness assessment and explainability validation. Early monitoring ensures detection of model drift, performance degradation and output inconsistencies before they impact business operations. Organizations implement feedback loops to retrain and fine-tune models based on real-world performance. This stage demands robust MLOps (machine learning operations) practices that industrialize AI lifecycle management through automated monitoring, versioning, retraining pipelines and deployment workflows. MLOps provides the operational rigor necessary to manage AI systems at scale, treating lifecycle management as a strategic capability rather than a tactical implementation detail.
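
As one concrete illustration of the continuous monitoring and drift detection this stage calls for, the sketch below compares recent production scores against a training-time baseline using a population stability index and flags when retraining may be warranted. The file names and the 0.2 threshold are illustrative assumptions; a real MLOps pipeline would version the baseline and log results to a model registry.

    # Minimal drift-check sketch a scheduled MLOps job might run.
    # Baseline/live score files and the alert threshold are hypothetical.
    import numpy as np

    def population_stability_index(baseline: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
        """Compare live score distribution to the training baseline; higher = more drift."""
        edges = np.histogram_bin_edges(baseline, bins=bins)
        base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
        live_pct = np.histogram(live, bins=edges)[0] / len(live)
        base_pct = np.clip(base_pct, 1e-6, None)  # avoid log(0) on empty bins
        live_pct = np.clip(live_pct, 1e-6, None)
        return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

    baseline_scores = np.load("baseline_scores.npy")   # captured at training time
    live_scores = np.load("last_7_days_scores.npy")    # captured from production traffic

    psi = population_stability_index(baseline_scores, live_scores)
    if psi > 0.2:  # common rule-of-thumb threshold; tune per model and use case
        print(f"PSI={psi:.3f}: drift detected, trigger the retraining pipeline")
    else:
        print(f"PSI={psi:.3f}: within tolerance")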

Stage 5: Prepare for scale and adoption

Organizations establish foundational capabilities necessary for enterprise-wide AI scaling through comprehensive governance frameworks with clear policies for risk management, compliance and ethical AI use. Organizations must invest in talent and upskilling initiatives that develop AI literacy across leadership and technical teams, closing capability gaps. Cultural transformation proves equally critical: organizations must foster a data-driven, innovation-friendly environment supported by tailored change management practices. Critically, organizations must shift from traditional DevOps toward a Dev-GenAI-Biz-Ops lifecycle that integrates development, generative AI capabilities, business stakeholder engagement and operations in a unified workflow. This expanded paradigm acknowledges that AI solutions demand continuous collaboration between technical teams, business users who understand domain context and operations teams managing production systems. Unlike traditional software, where business involvement diminishes post-requirements, AI systems require ongoing business input to validate outputs and refine models.

Stage 6: Scale and industrialize AI

Organizations transform pilots into enterprise capabilities by embedding AI models into core workflows and customer journeys. This phase requires establishing comprehensive model management systems for versioning, bias detection, retraining automation and lifecycle governance. Organizations implement cloud-native platforms that provide scalable compute infrastructure. Deployment requires careful orchestration of technical integration, user training, security validation and phased rollout strategies that manage risk while building adoption. Organizations that treat this as mere technical implementation encounter the deployment fallacy, underestimating the organizational transformation required. Success demands integration of AI into business processes, technology ecosystems and decision-making frameworks, supported by operational teams with clear ownership and accountability.

Critically, this framework emphasizes continuous iteration across all phases rather than sequential progression. AI adoption represents an organizational capability to be developed over time, not a project with a defined endpoint.

The importance of system integrators with inclusive ecosystems

AI adoption rarely succeeds in isolation. The complexity spanning foundational models, custom applications, data provision, infrastructure and technical services requires orchestration capabilities beyond most organizationsโ€™ internal capacity. MIT research demonstrates AI pilots built with external partners are twice as likely to reach full deployment compared to internally developed tools.

Effective system integrators provide value through inclusive ecosystem orchestration, maintaining partnerships across model providers, application vendors, data marketplaces, infrastructure specialists and consulting firms. This ecosystem approach enables organizations to leverage best-of-breed solutions while maintaining architectural coherence and governance consistency. The integratorโ€™s role extends beyond technical implementation to encompass change management, capability transfer and governance establishment.

I anticipate a paradigm shift in the next few years, with master system integrators leading the AI transformation journey, rather than technology vendors.

The path forward

The prevailing narrative that AI projects fail due to technological immaturity fundamentally misdiagnoses the problem. Evidence demonstrates that failure stems from predictable cognitive and strategic biases: overestimating organizational readiness for disruptive change, harboring unrealistic expectations about AIโ€™s universal applicability, prioritizing data volume over quality and governance and treating AI deployment as traditional software implementation.

Organizations that achieve AI success share common characteristics: they honestly assess readiness across governance, culture and change capability before deploying technology; they pursue targeted use cases with measurable business value; they treat data as a strategic asset requiring sustained investment; and they recognize that AI requires continuous lifecycle management with dedicated operational capabilities.

The path forward requires cognitive discipline and strategic patience. As AI capabilities advance, competitive advantage lies not in algorithms but in organizational capability to deploy them effectively โ€” a capability built through realistic readiness assessment, value-driven use case selection, strategic data infrastructure investment and commitment to continuous management and adoption of the right lifecycle management framework. The question facing enterprise leaders is not whether to adopt AI, but whether their organizations possess the maturity to navigate its inherent complexities and transform potential into performance.

This article is published as part of the Foundry Expert Contributor Network.

When it comes to AI, not all data is created equal

Gen AI is becoming a disruptive influence on nearly every industry, but using the best AI models and tools isn’t enough. Everybody’s using the same ones, but what really creates competitive advantage is being able to train and fine-tune your own models, or provide unique context to them, and that requires data.

Your companyโ€™s extensive code base, documentation, and change logs? Thatโ€™s data for your coding agents. Your library of past proposals and contracts? Data for your writing assistants. Your customer databases and support tickets? Data for your customer service chatbot.

But just because all this data exists, doesnโ€™t mean itโ€™s good.

โ€œItโ€™s so easy to point your models to any data thatโ€™s available,โ€ says Manju Naglapur, SVP and GM of cloud, applications, and infrastructure solutions at Unisys. โ€œFor the past three years, weโ€™ve seen this mistake made over and over again. The old adage garbage in, garbage out still holds true.โ€

According to a Boston Consulting Group survey released in September, 68% of 1,250 senior AI decision makers said the lack of access to high-quality data was a key challenge when it came to adopting AI. Other recent research confirms this. In an October Cisco survey of over 8,000 AI leaders, only 35% of companies have clean, centralized data with real-time integration for AI agents. And by 2027, according to IDC, companies that don’t prioritize high-quality, AI-ready data will struggle to scale gen AI and agentic solutions, resulting in a 15% productivity loss.

Losing track of the semantics

Another problem with using data that’s all lumped together is that the semantic layer gets confused. When data comes from multiple sources, the same type of information can be defined and structured in many ways. And as the number of data sources proliferates due to new projects or new acquisitions, the challenge increases. Even basic data issues, such as keeping track of customers (the most critical data type), are difficult for many companies to get right.

Dun & Bradstreet reported last year that more than half of organizations surveyed have concerns about the trustworthiness and quality of the data they’re leveraging for AI. For example, in the financial services sector, 52% of companies say AI projects have failed because of poor data. And for 44%, data quality is a top concern for 2026, second only to cybersecurity, based on a survey of over 2,000 industry professionals released in December.

Having multiple conflicting data standards is a challenge for everybody, says Eamonn Oโ€™Neill, CTO at Lemongrass, a cloud consultancy.

โ€œEvery mismatch is a risk,โ€ he says. โ€œBut humans figure out ways around it.โ€

AI can also be configured to do something similar, he adds, if you understand what the challenge is, and dedicate time and effort to address it. Even if the data is clean, a company should still go through a semantic mapping exercise. And if the data isnโ€™t perfect, itโ€™ll take time to tidy it up.

โ€œTake a use case with a small amount of data and get it right,โ€ he says. โ€œThatโ€™s feasible. And then you expand. Thatโ€™s what successful adoption looks like.โ€

Unmanaged and unstructured

Another mistake companies make when connecting AI to company information is to point AI at unstructured data sources, says O’Neill. And, yes, LLMs are very good at reading unstructured data and making sense of text and images. The problem is that not all documents are worthy of the AI’s attention.

Documents could be out of date, for example. Or they could be early versions of documents that havenโ€™t been edited yet, or that have mistakes in them.

โ€œPeople see this all the time,โ€ he says. โ€œWe connect your OneDrive or your file storage to a chatbot, and suddenly it canโ€™t tell the difference between โ€˜version 2โ€™ and โ€˜version 2 final.โ€™โ€

Itโ€™s very difficult for human users to maintain proper version control, he adds. โ€œMicrosoft can handle the different versions for you, but people still do โ€˜save asโ€™ and you end up with a plethora of unstructured data,โ€ Oโ€™Neill says.

Losing track of security

When CIOs typically think of security as it relates to AI systems, they might consider guardrails on the models, or protections around the training data and the data used for RAG embeddings. But as chatbot-based AI evolves into agentic AI, the security problems get more complex.

Say for example thereโ€™s a database of employee salaries. If an employee has a question about their salary and asks an AI chatbot embedded into their AI portal, the RAG embedding approach would be to collect only the relevant data from the database using traditional code, embed it into the prompt, then send the query off to the AI. The AI only sees the information itโ€™s allowed to see and the traditional, deterministic software stack handles the problem of keeping the rest of the employee data secure.

But when the system evolves into an agentic one, the AI agents can query the databases autonomously via MCP servers. Since they need to be able to answer questions from any employee, they require access to all employee data, and keeping that data from getting into the wrong hands becomes a big task.
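
One common mitigation, sketched below and not tied to any particular MCP SDK, is to enforce the same scoping inside the tool the agent calls: the handler takes the caller’s identity from the authenticated session rather than from the model’s arguments, so the agent cannot widen its own access. The class names and the in-memory stand-in for the database are illustrative.

    # Minimal access-control sketch for an agent-facing tool. The session comes
    # from your identity provider, never from the model; all names are made up.
    from dataclasses import dataclass

    @dataclass
    class Session:
        employee_id: str
        roles: frozenset

    SALARIES = {"E-100": 82_000, "E-200": 97_500}   # stand-in for the real database

    def salary_lookup(session: Session, requested_id: str) -> dict:
        """Tool handler: HR can look up anyone; everyone else only themselves."""
        if "hr" not in session.roles and requested_id != session.employee_id:
            return {"error": "not authorized for this record"}
        if requested_id not in SALARIES:
            return {"error": "unknown employee"}
        return {"employee_id": requested_id, "base_salary": SALARIES[requested_id]}

    print(salary_lookup(Session("E-100", frozenset()), "E-200"))           # denied
    print(salary_lookup(Session("E-100", frozenset({"hr"})), "E-200"))     # allowed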

According to the Cisco survey, only 27% of companies have dynamic and detailed access controls for AI systems, and fewer than half feel confident in safeguarding sensitive data or preventing unauthorized access.

And the situation gets even more complicated if all the data is collected into a data lake, says Oโ€™Neill.

โ€œIf youโ€™ve put in data from lots of different sources, each of those individual sources might have its own security model,โ€ he says. โ€œWhen you pile it all into block storage, you lose that granularity of control.โ€

Trying to add the security layer in after the fact can be difficult. The solution, he says, is to go directly to the original data sources and skip the data lake entirely.

โ€œIt was about keeping history forever because storage was so cheap, and machine learning could see patterns over time and trends,โ€ he says. โ€œPlus, cross-disciplinary patterns could be spotted if you mix data from different sources.โ€

In general, data access changes dramatically when instead of humans, AI agents are involved, says Doug Gilbert, CIO and CDO at Sutherland Global, a digital transformation consultancy.

โ€œWith humans, thereโ€™s a tremendous amount of security that lives around the human,โ€ he says. โ€œFor example, most user interfaces have been written so if itโ€™s a number-only field, you canโ€™t put a letter in there. But once you put in an AI, all thatโ€™s gone. Itโ€™s a raw back door into your systems.โ€

The speed trap

But the number-one mistake Gilbert sees CIOs making is they simply move too fast. โ€œThis is why most projects fail,โ€ he says. โ€œThereโ€™s such a race for speed.โ€

Too often, CIOs look at data issues as slowdowns, but those issues are massive risks, he adds. “A lot of people doing AI projects are going to get audited and they’ll have to stop and re-do everything,” he says.

So getting the data right isnโ€™t a slowdown. โ€œWhen you put the proper infrastructure in place, then you speed through your innovation, you pass audits, and you have compliance,โ€ he says.

Another area that might feel like an unnecessary waste of time is testing. Itโ€™s not always a good strategy to move fast, break things, and then fix them later on after deployment.

โ€œWhatโ€™s the cost of a mistake that moves at the speed of light?โ€ he asks. โ€œI would always go to testing first. Itโ€™s amazing how many products we see that are pushed to market without any testing.โ€

Putting AI to work to fix the data

The lack of quality data might feel like a hopeless problem thatโ€™s only going to get worse as AI use cases expand.

In an October AvePoint report based on a survey of 775 global business leaders, 81% of organizations have already delayed deployment of AI assistants due to data management or data security issues, with an average delay of six months.

Meanwhile, not only does the number of AI projects continue to grow, but so does the amount of data. Nearly 52% of respondents also said their companies were managing more than 500 petabytes of data, up from just 41% a year ago.

But Unisysโ€™ Naglapur says itโ€™s going to become easier to get a 360-degree view of a customer, and to clean up and reconcile other data sources, because of AI.

โ€œThis is the paradox,โ€ he says. โ€œAI will help with everything. If you think about a digital transformation that would take three years, you can do it now in 12 to 18 months with AI.โ€ The tools are getting closer to reality, and theyโ€™ll accelerate the pace of change, he says.

ํด๋ผ์šฐ๋“œ ์šด์˜์˜ ๋™๋ฐ˜์ž, MCSP์˜ ์žฅ์ ๊ณผ ํ•œ๊ณ„๋Š”?

๊ด€๋ฆฌํ˜• ํด๋ผ์šฐ๋“œ ์„œ๋น„์Šค ์ œ๊ณต์—…์ฒด(Managed Cloud Services Provider, MCSP)๋Š” ๊ธฐ์—…์ด ํด๋ผ์šฐ๋“œ ํ™˜๊ฒฝ์˜ ์ผ๋ถ€ ๋˜๋Š” ์ „๋ฐ˜์„ ์šด์˜ํ•˜๋Š” ๋ฐ ๋„์›€์„ ์ฃผ๋Š” ์—ญํ• ์„ ํ•œ๋‹ค. ์—ฌ๊ธฐ์—๋Š” ์‹œ์Šคํ…œ์˜ ํด๋ผ์šฐ๋“œ ์ด์ „, ๋ชจ๋‹ˆํ„ฐ๋ง๊ณผ ์œ ์ง€ ๊ด€๋ฆฌ, ์„ฑ๋Šฅ ๊ฐœ์„ , ๋ณด์•ˆ ๋„๊ตฌ ์šด์˜, ๋น„์šฉ ํ†ต์ œ ์ง€์› ๋“ฑ์ด ํฌํ•จ๋œ๋‹ค. MCSP๋Š” ์ผ๋ฐ˜์ ์œผ๋กœ ํผ๋ธ”๋ฆญ, ํ”„๋ผ์ด๋น—, ํ•˜์ด๋ธŒ๋ฆฌ๋“œ ํด๋ผ์šฐ๋“œ ํ™˜๊ฒฝ ์ „๋ฐ˜์—์„œ ์„œ๋น„์Šค๋ฅผ ์ œ๊ณตํ•œ๋‹ค.

๊ธฐ์—…์€ ํด๋ผ์šฐ๋“œ ํ™˜๊ฒฝ ๊ฐ€์šด๋ฐ ์–ด๋–ค ์˜์—ญ์„ ์ œ๊ณต์—…์ฒด์— ๋งก๊ธฐ๊ณ , ์–ด๋–ค ๋ถ€๋ถ„์„ ๋‚ด๋ถ€์—์„œ ์ง์ ‘ ์šด์˜ํ• ์ง€๋ฅผ ๊ฒฐ์ •ํ•œ๋‹ค. ๋Œ€๋ถ€๋ถ„์˜ ๊ฒฝ์šฐ ๊ธฐ์—…๊ณผ MCSP๋Š” ์ฑ…์ž„์„ ๊ณต์œ ํ•˜๋Š” ๊ตฌ์กฐ๋‹ค. ์ œ๊ณต์—…์ฒด๋Š” ์ผ์ƒ์ ์ธ ์šด์˜๊ณผ ๋„๊ตฌ ๊ด€๋ฆฌ๋ฅผ ๋‹ด๋‹นํ•˜๊ณ , ๊ธฐ์—…์€ ๋น„์ฆˆ๋‹ˆ์Šค ์˜์‚ฌ๊ฒฐ์ •๊ณผ ๋ฐ์ดํ„ฐ, ๊ฑฐ๋ฒ„๋„Œ์Šค์— ๋Œ€ํ•œ ์ฑ…์ž„์„ ์œ ์ง€ํ•œ๋‹ค.

์‚ฌ์ด๋ฒ„๋ณด์•ˆ ์ปจ์„คํŒ… ๊ธฐ์—… ์‚ฌ์ด์—‘์…€(CyXcel)์˜ ๋ถ๋ฏธ ๋””์ง€ํ„ธ ํฌ๋ Œ์‹ ๋ฐ ์‚ฌ๊ณ  ๋Œ€์‘ ๋ถ€๋ฌธ MCSP ๋ถ€์‚ฌ์žฅ์ธ ๋ธŒ๋ ŒํŠธ ๋ผ์ผ๋ฆฌ๋Š” MCSP๋ฅผ ์„ ํƒํ•˜๋Š” ๊ณผ์ •์ด ์–ธ์ œ๋‚˜ ๋ถ€๋‹ด์Šค๋Ÿฝ๋‹ค๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค.

๋ผ์ผ๋ฆฌ๋Š” โ€œ์„œ๋น„์Šค ์ˆ˜์ค€ ๊ณ„์•ฝ(SLA)์— ๋ช…์‹œ๋œ ์ˆ˜์ค€์œผ๋กœ ์„œ๋น„์Šค๋ฅผ ์ˆ˜ํ–‰ํ•  ๊ฒƒ์ด๋ผ๋Š” ์‹ ๋ขฐ์— ํฌ๊ฒŒ ์˜์กดํ•˜์ง€๋งŒ, ์‹ค์ œ๋กœ ์ด๋ฅผ ์ถฉ์กฑํ•˜๊ณ  ์žˆ๋Š”์ง€๋Š” ์žฅ์• ๋‚˜ ์‚ฌ์ด๋ฒ„ ๋ณด์•ˆ ์‚ฌ๊ณ ๊ฐ€ ๋ฐœ์ƒํ•ด ๋ฌธ์ œ๊ฐ€ ๋“œ๋Ÿฌ๋‚˜๊ธฐ ์ „๊นŒ์ง€ ๊ฒ€์ฆํ•˜๊ธฐ ์–ด๋ ต๋‹คโ€๋ผ๋ฉฐ โ€œ๊ทธ ์‹œ์ ์—๋Š” ์ด๋ฏธ ํ”ผํ•ด๊ฐ€ ๋ฐœ์ƒํ•œ ๋’ค์ธ ๊ฒฝ์šฐ๊ฐ€ ๋งŽ๋‹คโ€๋ผ๊ณ  ์ „ํ–ˆ๋‹ค. ๊ทธ๋Š” ๋˜ โ€œMCSP๋Š” ์ ๊ฒ€ํ•  ์ˆ˜ ์žˆ๋Š” ๋ฌผ๋ฆฌ์  ์ธํ”„๋ผ๊ฐ€ ์—†๊ณ , ์˜จํ”„๋ ˆ๋ฏธ์Šค ํ™˜๊ฒฝ์ฒ˜๋Ÿผ ๋ˆˆ์— ๋ณด์ด๋Š” ์ž‘์—…๋„ ์—†์–ด ํ‰๊ฐ€์™€ ์„ ํƒ์ด ๋”์šฑ ๊นŒ๋‹ค๋กญ๋‹คโ€๋ผ๊ณ  ์–ธ๊ธ‰ํ–ˆ๋‹ค.

MCSP์˜ ์žฅ์ 

์šด์˜ ๋ถ€๋‹ด ๊ฐ์†Œ: MCSP๋Š” ์ผ์ƒ์ ์ธ ํด๋ผ์šฐ๋“œ ๊ด€๋ฆฌ ์—…๋ฌด๋ฅผ ๋Œ€์‹  ์ˆ˜ํ–‰ํ•ด ๋‚ด๋ถ€์— ๋Œ€๊ทœ๋ชจ ํด๋ผ์šฐ๋“œยท์ธํ”„๋ผ ์กฐ์ง์„ ์œ ์ง€ํ•ด์•ผ ํ•˜๋Š” ๋ถ€๋‹ด์„ ์ค„์—ฌ์ค€๋‹ค. ํŠนํžˆ ๋‚ด๋ถ€์— ํด๋ผ์šฐ๋“œ๋‚˜ ํ•€์˜ต์Šค(FinOps) ์ „๋ฌธ์„ฑ์ด ์ถฉ๋ถ„ํ•˜์ง€ ์•Š์€ ์กฐ์ง์— ํšจ๊ณผ์ ์ด๋‹ค.

์‹ ์†ํ•œ ๋ฌธ์ œ ๋Œ€์‘ : ๋Œ€๋ถ€๋ถ„์˜ MCSP๋Š” 24์‹œ๊ฐ„ ๋ชจ๋‹ˆํ„ฐ๋ง๊ณผ ์ง€์› ์ฒด๊ณ„๋ฅผ ์ œ๊ณตํ•œ๋‹ค. ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•˜๋ฉด ์‚ฌ์šฉ์ž๋‚˜ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜์— ํฐ ์˜ํ–ฅ์„ ๋ฏธ์น˜๊ธฐ ์ „์— ๋น ๋ฅด๊ฒŒ ๋Œ€์‘ํ•  ์ˆ˜ ์žˆ๋‹ค.

์žฌํ•ด ๋ณต๊ตฌ์™€ ๋ณต์›๋ ฅ ์ง€์› : MCSP๋Š” ๋ฐฑ์—…๊ณผ ์žฌํ•ด ๋ณต๊ตฌ ํ™˜๊ฒฝ์˜ ์„ค๊ณ„, ์šด์˜, ํ…Œ์ŠคํŠธ๋ฅผ ์ง€์›ํ•œ๋‹ค. ๋ณต๊ตฌ ๋ชฉํ‘œ๋Š” ๊ณ ๊ฐ์ด ์ •์˜ํ•˜์ง€๋งŒ, ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ–ˆ์„ ๋•Œ ์‹œ์Šคํ…œ์„ ์‹ ์†ํ•˜๊ฒŒ ๋ณต๊ตฌํ•  ์ˆ˜ ์žˆ๋„๋ก ๋•๋Š” ์—ญํ• ์„ ๋งก๋Š”๋‹ค.

์ง€์†์ ์ธ ํ”Œ๋žซํผ ๊ด€๋ฆฌ : ํด๋ผ์šฐ๋“œ ํ”Œ๋žซํผ์€ ๋ณ€ํ™” ์†๋„๊ฐ€ ๋น ๋ฅด๋‹ค. MCSP๋Š” ์ธํ”„๋ผ ๊ตฌ์„ฑ ์š”์†Œ๋ฅผ ์ตœ์‹  ์ƒํƒœ๋กœ ์œ ์ง€ํ•˜๊ณ  ํ˜ธํ™˜์„ฑ์„ ๊ด€๋ฆฌํ•ด, ์˜ค๋ž˜๋œ ์„ค์ •์œผ๋กœ ์ธํ•œ ์œ„ํ—˜์„ ์ค„์ด๋Š” ๋™์‹œ์— ์ฃผ์š” ๋ณ€๊ฒฝ ์‹œ์ ์— ๋Œ€ํ•œ ํ†ต์ œ๊ถŒ์€ ๊ณ ๊ฐ์ด ์œ ์ง€ํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•œ๋‹ค.

๋ณด์•ˆ ์ „๋ฌธ์„ฑ๊ณผ ๋„๊ตฌ ์ œ๊ณต : ํด๋ผ์šฐ๋“œ ๋ณด์•ˆ์—๋Š” ์ˆ˜์š”๊ฐ€ ๋†’์€ ์ „๋ฌธ ์—ญ๋Ÿ‰์ด ์š”๊ตฌ๋œ๋‹ค. MCSP๋Š” ์•„์ด๋ดํ‹ฐํ‹ฐ ๊ด€๋ฆฌ, ๋ชจ๋‹ˆํ„ฐ๋ง, ๊ทœ์ • ์ค€์ˆ˜ ๋„๊ตฌ, ๋ณด์•ˆ ๋ชจ๋ฒ” ์‚ฌ๋ก€์— ๋Œ€ํ•œ ๊ฒฝํ—˜์„ ๋ฐ”ํƒ•์œผ๋กœ ์ผ์ƒ์ ์ธ ๋ณด์•ˆ ์ˆ˜์ค€ ๊ฐ•ํ™”๋ฅผ ์ง€์›ํ•œ๋‹ค. ๋ณด์•ˆ ์ฑ…์ž„์€ ์—ฌ์ „ํžˆ ๊ธฐ์—…๊ณผ ์ œ๊ณต์—…์ฒด๊ฐ€ ๊ณต์œ ํ•œ๋‹ค.

์‹ ๋ขฐ์„ฑ๊ณผ ์„ฑ๋Šฅ ํ–ฅ์ƒ : ๋Œ€๊ทœ๋ชจ์ด๋ฉด์„œ ๋ณต์žกํ•œ ํ™˜๊ฒฝ์„ ์šด์˜ํ•ด ์˜จ ๊ฒฝํ—˜์„ ๋ฐ”ํƒ•์œผ๋กœ, ๋ณด๋‹ค ์•ˆ์ •์ ์ด๊ณ  ํ™•์žฅ ๊ฐ€๋Šฅํ•˜๋ฉฐ ๋ณต์›๋ ฅ ์žˆ๋Š” ํด๋ผ์šฐ๋“œ ์ธํ”„๋ผ์˜ ์„ค๊ณ„์™€ ์šด์˜์„ ์ง€์›ํ•œ๋‹ค.

๊ธฐ์กด ์‹œ์Šคํ…œ๊ณผ์˜ ํ†ตํ•ฉ : MCSP๋Š” ํด๋ผ์šฐ๋“œ ์ž์›์„ ์˜จํ”„๋ ˆ๋ฏธ์Šค ์‹œ์Šคํ…œ, ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜, ์•„์ด๋ดํ‹ฐํ‹ฐ ํ”Œ๋žซํผ๊ณผ ์—ฐ๊ณ„ํ•ด ์‚ฌ์šฉ์ž์™€ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜์ด ์ค‘๋‹จ ์—†์ด ํด๋ผ์šฐ๋“œ ์„œ๋น„์Šค๋ฅผ ์ด์šฉํ•  ์ˆ˜ ์žˆ๋„๋ก ํ•œ๋‹ค.

๋น„์šฉ ์ ˆ๊ฐ๋ณด๋‹ค๋Š” ์˜ˆ์ธก ๊ฐ€๋Šฅํ•œ ์šด์˜ : MCSP๋ฅผ ํ™œ์šฉํ•˜๋ฉด ๋‚ด๋ถ€ ์ธ๋ ฅ๊ณผ ๋„๊ตฌ ๋น„์šฉ์„ ์ค„์ผ ์ˆ˜ ์žˆ์ง€๋งŒ, ์ „์ฒด ํด๋ผ์šฐ๋“œ ์ง€์ถœ์ด ํ•ญ์ƒ ๊ฐ์†Œํ•˜๋Š” ๊ฒƒ์€ ์•„๋‹ˆ๋‹ค. ํ˜„์žฌ MCSP์˜ ๊ฐ€์น˜๋Š” ์ €๋ ดํ•œ ํด๋ผ์šฐ๋“œ ์š”๊ธˆ๋ณด๋‹ค๋Š” ์šด์˜ ํšจ์œจ์„ฑ, ์ „๋ฌธ์„ฑ, ๋Œ€์‘ ์†๋„์— ๋” ์žˆ๋‹ค.

MCSP ์„ ํƒ ์‹œ ํ•ต์‹ฌ ๊ณ ๋ ค ์‚ฌํ•ญ

IT ๊ด€๋ฆฌ ์†Œํ”„ํŠธ์›จ์–ด ์ œ๊ณต์—…์ฒด ์ปค๋„ฅํŠธ์™€์ด์ฆˆ(ConnectWise)์˜ ์ตœ๊ณ ๊ฒฝ์˜์ž ๋งค๋‹ˆ ๋ฆฌ๋ฒจ๋กœ๋Š” ์กฐ์ง์ด ์ ์  ๋” ์ž์œจ์ ์ด๊ณ  AI ๊ธฐ๋ฐ˜ ์„œ๋น„์Šค๋กœ ์ „ํ™˜ํ•˜๋Š” ๊ณผ์ •์—์„œ, MCSP๊ฐ€ ์ž๋™ํ™”๋ฅผ ์‹ค์ œ ์ผ์ƒ ์šด์˜์—์„œ ์ œ๋Œ€๋กœ ์ž‘๋™ํ•˜๋„๋ก ๋งŒ๋“œ๋Š” ์ค‘์š”ํ•œ ์—ญํ• ์„ ํ•œ๋‹ค๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค.

๋ฆฌ๋ฒจ๋กœ๋Š” ๋งŽ์€ ์กฐ์ง์ด ์˜ˆ์ƒ๋ณด๋‹ค ์ค‘์š”ํ•˜๊ฒŒ ์ธ์‹ํ•˜์ง€ ๋ชปํ•˜๋Š” ์š”์†Œ๋กœ ์šด์˜ ํˆฌ๋ช…์„ฑ์„ ๊ผฝ์•˜๋‹ค. ๊ทธ๋Š” ํด๋ผ์šฐ๋“œ ํ™˜๊ฒฝ์ด ์–ด๋–ป๊ฒŒ ์„ค๊ณ„๋˜๊ณ , ๋ณด์•ˆ์ด ์ ์šฉ๋˜๋ฉฐ, ์šด์˜๋˜๊ณ  ์žˆ๋Š”์ง€์— ๋Œ€ํ•œ ๋ช…ํ™•ํ•œ ๊ฐ€์‹œ์„ฑ์ด ํ•„์š”ํ•˜๋‹ค๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค. ์•„์šธ๋Ÿฌ ์—์ด์ „ํ‹ฑ AI๊ฐ€ ์‹œ์Šคํ…œ์„ ์–ด๋–ป๊ฒŒ ๋ชจ๋‹ˆํ„ฐ๋งํ•˜๊ณ  ์˜์‚ฌ๊ฒฐ์ •์„ ๋‚ด๋ฆฌ๋ฉฐ ์‹ค์ œ ์กฐ์น˜๋ฅผ ์ทจํ•˜๋Š”์ง€๊นŒ์ง€ ์กฐ์ง์ด ์ดํ•ดํ•˜๊ณ  ์žˆ์–ด์•ผ, ์ธ์ง€ํ•˜์ง€ ๋ชปํ•œ ์ƒํƒœ์—์„œ ์ค‘์š”ํ•œ ์ผ์ด ์ง„ํ–‰๋˜๋Š” ์ƒํ™ฉ์„ ๋ง‰์„ ์ˆ˜ ์žˆ๋‹ค๊ณ  ์–ธ๊ธ‰ํ–ˆ๋‹ค.

๋ฆฌ๋ฒจ๋กœ๋Š” โ€œ์ž์œจ์„ฑ์ด ๋†’์•„์งˆ์ˆ˜๋ก ์šด์˜ ์„ฑ์ˆ™๋„์˜ ์ค‘์š”์„ฑ๋„ ์ปค์ง„๋‹คโ€๋ผ๋ฉฐ โ€œ์—ฌ๊ธฐ์—๋Š” ์ฒด๊ณ„์ ์ธ ๋ฐ์ดํ„ฐ ๊ฑฐ๋ฒ„๋„Œ์Šค, ๊ฐ•๋ ฅํ•œ ๋ฌผ๋ฆฌ์ ยท๋…ผ๋ฆฌ์  ๋ณด์•ˆ, ์ž๋™ํ™”์™€ ์ธ๊ฐ„ ๊ฐ๋…์˜ ๊ท ํ˜•์„ ๊ณ ๋ คํ•œ ๋ช…ํ™•ํ•œ ์‚ฌ๊ณ  ๋Œ€์‘ ํ”„๋กœ์„ธ์Šค๊ฐ€ ํฌํ•จ๋œ๋‹คโ€๋ผ๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค. ๊ทธ๋Š” โ€œ์—์ด์ „ํ‹ฑ AI๋Š” ๋ฌธ์ œ๋ฅผ ํƒ์ง€ํ•˜๊ณ  ์‹ ํ˜ธ๋ฅผ ์—ฐ๊ด€ ๋ถ„์„ํ•˜๋ฉฐ ๊ธฐ๊ณ„ ์†๋„๋กœ ๋Œ€์‘ํ•  ์ˆ˜ ์žˆ์ง€๋งŒ, ์ •์ฑ… ์„ค์ •๊ณผ ๊ฒฐ๊ณผ ๊ฒ€์ฆ, ์˜ˆ์ƒ ๋ฒ”์œ„๋ฅผ ๋ฒ—์–ด๋‚œ ์ƒํ™ฉ์—์„œ์˜ ํŒ๋‹จ์€ ์—ฌ์ „ํžˆ ์‚ฌ๋žŒ์ด ๋‹ด๋‹นํ•ด์•ผ ํ•œ๋‹คโ€๋ผ๊ณ  ๋งํ–ˆ๋‹ค.

๋ฆฌ๋ฒจ๋กœ๋Š” MCSP๊ฐ€ ๊ด€๋ฆฌํ˜• ์„œ๋น„์Šค ๋ชจ๋ธ๊ณผ ๊ทธ ์ฃผ๋ณ€ ์ƒํƒœ๊ณ„์— ์–ผ๋งˆ๋‚˜ ์ž˜ ๋ถ€ํ•ฉํ•˜๋Š”์ง€๋„ ์ค‘์š”ํ•˜๋‹ค๊ณ  ๊ฐ•์กฐํ–ˆ๋‹ค. ์ ํ•ฉํ•œ ์ œ๊ณต์—…์ฒด๋Š” ์ž๋™ํ™”์™€ AI๋ฅผ ํ™œ์šฉํ•ด ์šด์˜์„ ๋‹จ์ˆœํ™”ํ•ด์•ผ ํ•˜๋ฉฐ, ์ž๋™ํ™”๊ฐ€ ์ œ๋Œ€๋กœ ์ž‘๋™ํ•  ๊ฒฝ์šฐ ํ˜„์—… ์ธ๋ ฅ์„ ๋’ท๋ฐ›์นจํ•˜๊ณ  ์šด์˜์˜ ์ผ๊ด€์„ฑ์„ ๋†’์ด๋ฉฐ, ํŒ€์ด ๋˜ ๋‹ค๋ฅธ ๋„๊ตฌ๋ฅผ ๊ด€๋ฆฌํ•˜๋Š” ๋ฐ ์‹œ๊ฐ„์„ ์“ฐ๋Š” ๋Œ€์‹  ์‹ค์ œ๋กœ ์ค‘์š”ํ•œ ์—…๋ฌด์— ์ง‘์ค‘ํ•  ์ˆ˜ ์žˆ๋„๋ก ๋•๋Š”๋‹ค๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค.

์†Œํ”„ํŠธ์›จ์–ด ๋ผ์ด์„ ์Šค ํ™œ์šฉ ๊ฐ€์น˜๋ฅผ ๋†’์ด๊ณ  MS, ์˜ค๋ผํด, ์‹œ์Šค์ฝ” ๋“ฑ ๋ฒค๋” ๊ฐ์‚ฌ ๋Œ€์‘์„ ์ง€์›ํ•˜๋Š” NPI์˜ ์ตœ๊ณ ๊ฒฝ์˜์ž ์กด ์œˆ์…‹์€ ๊ฐ€๊ฒฉ ๊ตฌ์กฐ์˜ ์œ ์—ฐ์„ฑ์ด MCSP ์„ ํƒ ๊ณผ์ •์—์„œ ์ข…์ข… ๊ฐ„๊ณผ๋œ๋‹ค๊ณ  ์ง€์ ํ–ˆ๋‹ค. ๊ทธ๋Š” MCSP์™€ ๊ด€๋ จ๋œ ์œ„ํ—˜์ด ์ดˆ๊ธฐ ๋น„์šฉ ์ฆ๊ฐ€๋ณด๋‹ค๋Š”, ์‹œ๊ฐ„์ด ์ง€๋‚˜๋ฉด์„œ ์ธ์ง€ํ•˜์ง€ ๋ชปํ•œ ์ฑ„ ํ˜‘์ƒ๋ ฅ์ด ์•ฝํ™”๋˜๋Š” ๋ฐ ์žˆ๋‹ค๊ณ  ๋ถ„์„ํ–ˆ๋‹ค.

์œˆ์…‹์€ ์†Œ๊ทœ๋ชจ ํŒ€์ด๋‚˜ ์•„์ง ํด๋ผ์šฐ๋“œ ๊ฒฝํ—˜์„ ๊ตฌ์ถ•ํ•˜๋Š” ๋‹จ๊ณ„์— ์žˆ๋Š” ์กฐ์ง์—๋Š” MCSP๊ฐ€ ํฐ ๋„์›€์ด ๋  ์ˆ˜ ์žˆ๋‹ค๊ณ  ๋ง๋ถ™์˜€๋‹ค. ํด๋ผ์šฐ๋“œ ์ง€์ถœ์„ ํ†ตํ•ฉํ•˜๊ณ  ๋งˆ์ด๊ทธ๋ ˆ์ด์…˜ ์ง€์›, ๋ฆฌ์†Œ์Šค ์ตœ์ ํ™”, ๋น„์šฉ ํ†ต์ œ์™€ ๊ฐ™์€ ์„œ๋น„์Šค๋ฅผ ํŒจํ‚ค์ง€๋กœ ์ œ๊ณตํ•จ์œผ๋กœ์จ ๋‚ญ๋น„๋ฅผ ์ค„์ด๊ณ  ํด๋ผ์šฐ๋“œ ์šด์˜์„ ๋ณด๋‹ค ์ˆ˜์›”ํ•˜๊ฒŒ ๋งŒ๋“ค ์ˆ˜ ์žˆ๋‹ค๋Š” ์„ค๋ช…์ด๋‹ค. ๋‚ด๋ถ€์— ํด๋ผ์šฐ๋“œ๋‚˜ ํ•€์˜ต์Šค ์—ญ๋Ÿ‰์ด ์ถฉ๋ถ„ํ•˜์ง€ ์•Š์€ ์กฐ์ง์ด๋ผ๋ฉด ์ด๋Ÿฌํ•œ ์ด์ ์ด ์ผ์ • ๋ถ€๋ถ„์˜ trade-off๋ฅผ ๊ฐ์ˆ˜ํ•  ๋งŒํ•œ ๊ฐ€์น˜๊ฐ€ ์žˆ๋‹ค๊ณ  ์ „ํ–ˆ๋‹ค.

๊ทธ๋Š” โ€œํด๋ผ์šฐ๋“œ ํ™˜๊ฒฝ์ด ํ™•์žฅ๋ ์ˆ˜๋ก ๊ฐ€๊ฒฉ ๊ตฌ์กฐ๋Š” ์ ์  ๋ถˆํˆฌ๋ช…ํ•ด์ง„๋‹คโ€๋ผ๋ฉฐ โ€œMCSP๋Š” MS๋‚˜ ์•„๋งˆ์กด์›น์„œ๋น„์Šค(AWS) ์š”๊ธˆ ์œ„์— ์ž์ฒด ๋งˆ์ง„์„ ๋”ํ•˜๋Š”๋ฐ, ๊ธฐ๋ณธ ์‚ฌ์šฉ๋ฃŒ ๊ธฐ์ค€ ์ตœ๋Œ€ 8% ์ˆ˜์ค€์ด๋ฉฐ ์„œ๋น„์Šค๊ฐ€ ๋ฌถ์ผ ๊ฒฝ์šฐ ๊ทธ ์ด์ƒ์ด ๋  ์ˆ˜ ์žˆ๋‹คโ€๋ผ๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค. ์ด์–ด โ€œ์ด ๊ฐ™์€ ๊ด€๋ฆฌ ๊ณ„์ธต์„ ํ†ตํ•ด MCSP๋Š” ์•ฝ 30~40% ์ˆ˜์ค€์˜ ์ˆ˜์ต๋ฅ ์„ ํ™•๋ณดํ•œ๋‹คโ€๋ผ๊ณ  ์–ธ๊ธ‰ํ–ˆ๋‹ค.

MCSP์˜ ๋‹จ์ 

๊ธฐ์ˆ  ์ปจ์„คํŒ… ๊ธฐ์—… ํ•˜์ด๋ผ์ธ์˜ ๊ธฐ์ˆ  ๋ถ€์‚ฌ์žฅ ๋ผ์ด์–ธ ๋งฅ์—˜๋กœ์ด๋Š” MCSP๋ฅผ ํ™œ์šฉํ•  ๋•Œ ๊ฐ€์žฅ ํฐ ๋‹จ์ ์œผ๋กœ ํ†ต์ œ๋ ฅ ์ƒ์‹ค์„ ๊ผฝ์•˜๋‹ค.

๋งฅ์—˜๋กœ์ด๋Š” โ€œ๊ฐ์ข… ๋ผ์ด์„ ์Šค ํ• ์ธ ํ˜œํƒ์„ ๋ฐ›๋”๋ผ๋„ ๊ณ„์•ฝ์— ๋ฌถ์—ฌ ํ•„์š” ์ด์ƒ์œผ๋กœ ๊ตฌ๋งคํ•ด์•ผ ํ•˜๋Š” ๊ตฌ์กฐ๋ผ๋ฉด ์‹ค์ œ๋กœ๋Š” ๋น„์šฉ์„ ์ ˆ๊ฐํ•˜์ง€ ๋ชปํ•  ์ˆ˜ ์žˆ๋‹คโ€๋ผ๋ฉฐ โ€œMCSP๋ฅผ ํ™œ์šฉํ•˜๋ฉด ์กฐ์ง์˜ ๊ณต๊ฒฉ ํ‘œ๋ฉด์ด ํ™•๋Œ€๋  ์ˆ˜ ์žˆ๋‹ค๋Š” ์ ๋„ ๊ณ ๋ คํ•ด์•ผ ํ•œ๋‹คโ€๋ผ๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค. ๊ทธ๋Š” ๋˜ โ€œMS์™€ ๊ฐ™์€ ๋Œ€ํ˜• ํด๋ผ์šฐ๋“œ ๋ฒค๋”๊ฐ€ MCSP๋ฅผ ๊ต์œกํ•˜๊ณ  ๊ฐ€์ด๋“œ๋ฅผ ์ œ๊ณตํ•˜๋”๋ผ๋„, ๋Œ€๊ทœ๋ชจ ์‚ฌ์ด๋ฒ„ ๋ณด์•ˆ ์‚ฌ๊ณ  ์ดํ›„ ์ž‘์„ฑ๋˜๋Š” ๊ทผ๋ณธ ์›์ธ ๋ถ„์„ ๋ณด๊ณ ์„œ๋ฅผ ์‚ดํŽด๋ณด๋ฉด MCSP๊ฐ€ ๊ณต๊ฒฉ ๊ฒฝ๋กœ๋กœ ์ž‘์šฉํ•œ ์‚ฌ๋ก€๊ฐ€ ์šฐ๋ ค์Šค๋Ÿฌ์šธ ์ •๋„๋กœ ์ž์ฃผ ๋“ฑ์žฅํ•œ๋‹คโ€๋ผ๊ณ  ์ „ํ–ˆ๋‹ค.

๋ฆฌ์„œ์น˜ ๊ธฐ์—… ISG์˜ ๋””๋ ‰ํ„ฐ ์•„๋„ค์ด ๋‚˜์™€ํ…Œ๋Š” MCSP ํ˜‘์—…์ด ๋งŽ์€ ์ด์ ์„ ์ œ๊ณตํ•˜๋Š” ๋™์‹œ์— ๋ถ„๋ช…ํ•œ ์œ„ํ—˜๋„ ๋™๋ฐ˜ํ•œ๋‹ค๊ณ  ์–ธ๊ธ‰ํ–ˆ๋‹ค.

๋‚˜์™€ํ…Œ๋Š” โ€œMCSP๊ฐ€ ์กฐ์ง ๋‚ด ์•„ํ‚คํ…์ฒ˜ ๋…ผ์˜์˜ ์ค‘์‹ฌ์ ์ธ ๋ชฉ์†Œ๋ฆฌ๊ฐ€ ๋˜์–ด์„œ๋Š” ์•ˆ ๋œ๋‹คโ€๋ผ๋ฉฐ โ€œํ•ต์‹ฌ ์‹œ์Šคํ…œ์— ๋Œ€ํ•œ ์ง€์‹์„ ๋‚ด๋ถ€์— ์œ ์ง€ํ•˜๊ณ , ๋ฒค๋” ์ข…์†์„ ์ค„์ด๋ฉฐ, ์‹œ์žฅ ๋ชจ๋ฒ” ์‚ฌ๋ก€์™€ ๋น„๊ตํ–ˆ์„ ๋•Œ ์ œ๊ณต์—…์ฒด๋กœ ์ธํ•œ ์•„ํ‚คํ…์ฒ˜ ํŽธํ–ฅ์„ ์™„ํ™”ํ•˜๊ธฐ ์œ„ํ•ด์„œ๋Š” ์•„ํ‚คํ…์ฒ˜ ์˜์‚ฌ๊ฒฐ์ •์„ ๋‚ด๋ถ€์—์„œ ์†Œ์œ ํ•ด์•ผ ํ•œ๋‹คโ€๋ผ๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค.

๊ทธ๋Š” ๋˜ MCSP๊ฐ€ ํด๋ผ์šฐ๋“œ ๋น„์šฉ ๊ด€๋ฆฌ์— ๋Œ€ํ•ด ์‹ค์ œ๋กœ ํด๋ผ์šฐ๋“œ๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ๊ธฐ์—…๋งŒํผ์˜ ์••๋ฐ•์„ ๋А๋ผ์ง€ ์•Š๋Š” ๊ฒฝ์šฐ๊ฐ€ ๋งŽ๋‹ค๊ณ  ๋ง๋ถ™์˜€๋‹ค. ๊ฒฐ๊ตญ ๊ณผ๋„ํ•œ ์ง€์ถœ์˜ ์˜ํ–ฅ์€ ๊ธฐ์—…์ด ์ง์ ‘ ๊ฐ๋‚ดํ•˜๊ฒŒ ๋˜๋ฉฐ, ์ด๋Ÿฌํ•œ ์ด์œ ๋กœ ๋งŽ์€ ๊ธฐ์—…์ด ํด๋ผ์šฐ๋“œ ๋น„์šฉ์— ๋Œ€ํ•œ ํ†ต์ œ๋ ฅ์„ ํ™•๋ณดํ•˜๊ธฐ ์œ„ํ•ด ํ•€์˜ต์Šค ์—ญํ• ์„ ๋‹ค์‹œ ๋‚ด๋ถ€๋กœ ๊ฐ€์ ธ์˜จ๋‹ค๊ณ  ์„ค๋ช…ํ–ˆ๋‹ค.

๊ธ€๋กœ๋ฒŒ ์‹œ์žฅ์—์„œ ์ฃผ๋ชฉ๋ฐ›๋Š” MCSP 6๊ณณ

๊ด€๋ฆฌํ˜• ํด๋ผ์šฐ๋“œ ์„œ๋น„์Šค ์ œ๊ณต์—…์ฒด๋Š” ์ˆ˜์‹ญ ๊ณณ์— ์ด๋ฅธ๋‹ค. ์กฐ์‚ฌ ๋ถ€๋‹ด์„ ์ค„์ด๊ธฐ ์œ„ํ•ด ๋…๋ฆฝ์ ์ธ ๋ฆฌ์„œ์น˜์™€ ์• ๋„๋ฆฌ์ŠคํŠธ์™€์˜ ๋…ผ์˜๋ฅผ ๋ฐ”ํƒ•์œผ๋กœ, ์•ŒํŒŒ๋ฒณ์ˆœ์œผ๋กœ ์ฃผ์š” MCSP 6๊ณณ์„ ์ •๋ฆฌํ–ˆ๋‹ค. ๊ฐ€๊ฒฉ ์ •๋ณด๋Š” ๊ฐ ์ œ๊ณต์—…์ฒด์— ์ง์ ‘ ๋ฌธ์˜ํ•ด์•ผ ํ•œ๋‹ค.

์•ก์„ผ์ถ”์–ด

์•ก์„ผ์ถ”์–ด(Accenture)๋Š” ์ „ ์„ธ๊ณ„ ์ฃผ์š” ์ง€์—ญ๊ณผ ์‹œ์žฅ์— ๋ถ„ํฌํ•œ ํŒ€๊ณผ ์„ผํ„ฐ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ๊ด€๋ฆฌํ˜• ํด๋ผ์šฐ๋“œ ์„œ๋น„์Šค๋ฅผ ์ œ๊ณตํ•œ๋‹ค. ๊ธฐ์—…์˜ ํด๋ผ์šฐ๋“œ ํ™˜๊ฒฝ ์„ค๊ณ„, ์šด์˜, ์œ ์ง€ ๊ด€๋ฆฌ๋ฅผ ์ง€์›ํ•˜๋ฉฐ, ์ดˆ๊ธฐ ํด๋ผ์šฐ๋“œ ๊ตฌ์ถ•๋ถ€ํ„ฐ ๋ชจ๋‹ˆํ„ฐ๋ง, ์œ ์ง€ ๋ณด์ˆ˜, ๋ณด์•ˆ์„ ํฌํ•จํ•œ ์ง€์†์ ์ธ ์šด์˜๊นŒ์ง€ ํญ๋„“๊ฒŒ ๋‹ค๋ฃฌ๋‹ค. MS ์• ์ €, ๊ตฌ๊ธ€ ํด๋ผ์šฐ๋“œ, AWS ๋“ฑ ์ฃผ์š” ํด๋ผ์šฐ๋“œ ํ”Œ๋žซํผ ์ „๋ฐ˜์—์„œ ์„œ๋น„์Šค๋ฅผ ์ œ๊ณตํ•˜๋Š” ๊ฒƒ๋„ ํŠน์ง•์ด๋‹ค. ๊ธฐ์—…์€ ๋ณต์žกํ•œ ํด๋ผ์šฐ๋“œ ์‹œ์Šคํ…œ์„ ์ „๋ถ€ ๋‚ด๋ถ€์—์„œ ๊ด€๋ฆฌํ•˜๋Š” ๋Œ€์‹ , ์•ก์„ผ์ถ”์–ด๋ฅผ ํ†ตํ•ด ์ผ์ƒ์ ์ธ ์šด์˜๊ณผ ๊ธฐ์ˆ ์  ๊ด€๋ฆฌ ์—…๋ฌด๋ฅผ ๋งก๊ธธ ์ˆ˜ ์žˆ๋‹ค. ์‹œ์Šคํ…œ ๋ชจ๋‹ˆํ„ฐ๋ง๊ณผ ์ด์Šˆ ๋Œ€์‘, ํ™˜๊ฒฝ ์—…๋ฐ์ดํŠธ ๋“ฑ ์ผ์ƒ์ ์ธ ์ธํ”„๋ผ ์šด์˜์„ ์•ก์„ผ์ถ”์–ด๊ฐ€ ๋‹ด๋‹นํ•จ์œผ๋กœ์จ, ๋‚ด๋ถ€ ์ธ๋ ฅ์€ ํ•ต์‹ฌ ๋น„์ฆˆ๋‹ˆ์Šค ๊ณผ์ œ์— ์ง‘์ค‘ํ•  ์ˆ˜ ์žˆ๋‹ค.

์บก์ œ๋ฏธ๋‹ˆ

์บก์ œ๋ฏธ๋‹ˆ(Capgemini)๋Š” ์ „ ์„ธ๊ณ„๋ฅผ ๋Œ€์ƒ์œผ๋กœ ๊ด€๋ฆฌํ˜• ํด๋ผ์šฐ๋“œ ์„œ๋น„์Šค๋ฅผ ์ œ๊ณตํ•˜๋ฉฐ, ํŠนํžˆ ์œ ๋Ÿฝ๊ณผ ๋ถ๋ฏธ๋ฅผ ์ค‘์‹ฌ์œผ๋กœ ๋ฉ€ํ‹ฐํด๋ผ์šฐ๋“œ ํ™˜๊ฒฝ์„ ์ง€์›ํ•œ๋‹ค. ์ œ์กฐ, ๋ฆฌํ…Œ์ผ, ๊ธˆ์œต ์„œ๋น„์Šค, ๋ณดํ—˜ ์‚ฐ์—…๊ณผ์˜ ํ˜‘์—… ๊ฒฝํ—˜์ด ํ’๋ถ€ํ•˜๋‹ค. AWS, MS ์• ์ €, ๊ตฌ๊ธ€ ํด๋ผ์šฐ๋“œ ๋“ฑ ์ฃผ์š” ํด๋ผ์šฐ๋“œ ํ”Œ๋žซํผ๊ณผ ์ผ๋ถ€ ํŠนํ™”๋œ ์—”ํ„ฐํ”„๋ผ์ด์ฆˆ ํด๋ผ์šฐ๋“œ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜๊ณผ ์ธํ”„๋ผ ์šด์˜์„ ์ง€์›ํ•œ๋‹ค. ๋ชจ๋‹ˆํ„ฐ๋ง, ๋ฐฑ์—…, ๊ธฐ์ˆ  ์ง€์›์„ ํฌํ•จํ•œ ๊ด€๋ฆฌํ˜• ์„œ๋น„์Šค์™€ ํ•จ๊ป˜, ํด๋ผ์šฐ๋“œ ์ด์ „์ด ์ ํ•ฉํ•œ ์›Œํฌ๋กœ๋“œ๋ฅผ ์‹๋ณ„ํ•˜๊ณ  ํ•ด๋‹น ์‹œ์Šคํ…œ์„ ์ด์ „ยท์šด์˜ํ•˜๋Š” ๊ณผ์ •๊นŒ์ง€ ํฌ๊ด„์ ์œผ๋กœ ์ง€์›ํ•œ๋‹ค. ์ค‘๊ฒฌ๊ธฐ์—…๋ณด๋‹ค๋Š” ๋Œ€๊ทœ๋ชจ์ด๋ฉด์„œ ๋ณต์žกํ•œ ํ™˜๊ฒฝ์„ ๊ฐ€์ง„ ๋Œ€๊ธฐ์—…์— ์ ํ•ฉํ•œ ์„œ๋น„์Šค ์„ฑ๊ฒฉ์„ ๊ฐ–๊ณ  ์žˆ๋‹ค.

๋”œ๋กœ์ดํŠธ

๋”œ๋กœ์ดํŠธ(Deloitte)๋Š” ์ „ ์„ธ๊ณ„ ๊ณ ๊ฐ์„ ๋Œ€์ƒ์œผ๋กœ ํด๋ผ์šฐ๋“œ ์„œ๋น„์Šค๋ฅผ ์ œ๊ณตํ•˜๋ฉฐ, ๋ถ๋ฏธ์™€ ์œ ๋Ÿฝ ์ง€์—ญ์˜ ๋น„์ค‘์ด ํฌ๋‹ค. ๊ธˆ์œตยท๋ณดํ—˜, ๊ณต๊ณต, ํ—ฌ์Šค์ผ€์–ด ์‚ฐ์—…์—์„œ ํŠนํžˆ ๊ฐ•์ ์„ ๋ณด์ธ๋‹ค. AWS, MS ์• ์ €, ๊ตฌ๊ธ€ ํด๋ผ์šฐ๋“œ, VM์›จ์–ด ํด๋ผ์šฐ๋“œ, ์˜ค๋ผํด ํด๋ผ์šฐ๋“œ๋ฅผ ํฌํ•จํ•œ ๋ฉ€ํ‹ฐํด๋ผ์šฐ๋“œ ํ™˜๊ฒฝ์„ ์ง€์›ํ•œ๋‹ค. ๊ธฐ์—…์˜ ๋น„์ฆˆ๋‹ˆ์Šค ๋ชฉํ‘œ์— ๋งž์ถฐ ํด๋ผ์šฐ๋“œ ํ™˜๊ฒฝ์„ ๊ธฐํšยท๊ตฌ์ถ•ยท์šด์˜ํ•˜๋Š” ๋ฐ ์ดˆ์ ์„ ๋งž์ถ”๊ณ  ์žˆ์œผ๋ฉฐ, ํ”„๋กœ์„ธ์Šค์™€ ์šด์˜ ๊ฐœ์„ ์„ ํฌํ•จํ•œ ํด๋ผ์šฐ๋“œ ์ „ํ™˜์ด ํ•ต์‹ฌ ์˜์—ญ์ด๋‹ค. ์ปจ์„คํŒ…์ด ์ฃผ๋ ฅ ์‚ฌ์—…์ด์ง€๋งŒ, ๋””์ง€ํ„ธ ์ „ํ™˜์„ ์ถ”์ง„ํ•˜๋Š” ๋Œ€๊ธฐ์—…์„ ์ค‘์‹ฌ์œผ๋กœ ๊ด€๋ฆฌํ˜• ์„œ๋น„์Šค ์˜์—ญ๋„ ์ง€์†์ ์œผ๋กœ ํ™•๋Œ€ํ•˜๊ณ  ์žˆ๋‹ค.

HCLํ…Œํฌ๋†€๋กœ์ง€์Šค

HCLํ…Œํฌ๋†€๋กœ์ง€์Šค(HCL Technologies)๋Š” ์ „ ์„ธ๊ณ„์— ๋ถ„ํฌํ•œ ํŒ€๊ณผ ์„ผํ„ฐ๋ฅผ ํ†ตํ•ด ๊ด€๋ฆฌํ˜• ํด๋ผ์šฐ๋“œ ์„œ๋น„์Šค๋ฅผ ์ œ๊ณตํ•œ๋‹ค. AWS, MS ์• ์ €, ๊ตฌ๊ธ€ ํด๋ผ์šฐ๋“œ ๋“ฑ ์ฃผ์š” ํด๋ผ์šฐ๋“œ ์ œ๊ณต์—…์ฒด์™€ ํ˜‘๋ ฅํ•ด ๊ฐ ๊ธฐ์—…์˜ ์š”๊ตฌ์— ๋งž๋Š” ํด๋ผ์šฐ๋“œ ํ™˜๊ฒฝ์„ ์„ค๊ณ„ยท๊ตฌ์ถ•ํ•˜๊ณ , ์ดํ›„ ์•ˆ์ •์ ์ธ ์šด์˜์„ ์ง€์›ํ•œ๋‹ค. ๊ตฌ์ถ• ์ดํ›„์—๋Š” 24์‹œ๊ฐ„ ๋ชจ๋‹ˆํ„ฐ๋ง, ์„ฑ๋Šฅ ๊ด€๋ฆฌ, ์žฅ์•  ๋Œ€์‘ ๋“ฑ ์ผ์ƒ์ ์ธ ์šด์˜์„ ๋‹ด๋‹นํ•˜๋ฉฐ, ๋ฐ˜๋ณต์ ์ธ IT ์ž‘์—…์—๋Š” ์ž๋™ํ™”์™€ AI ๋„๊ตฌ๋ฅผ ํ™œ์šฉํ•œ๋‹ค. ๊ธˆ์œต, ์ œ์กฐ, ํ—ฌ์Šค์ผ€์–ด ๋“ฑ ๋‹ค์–‘ํ•œ ์‚ฐ์—…์—์„œ ์•ˆ์ •์ ์ธ ํด๋ผ์šฐ๋“œ ์‹œ์Šคํ…œ ์šด์˜์„ ์ง€์›ํ•˜๋Š” ๊ฒƒ์ด ํŠน์ง•์ด๋‹ค.

NTT๋ฐ์ดํ„ฐ

NTT๋ฐ์ดํ„ฐ(NTT Data)๋Š” ์ „ ์„ธ๊ณ„ ๊ณ ๊ฐ์„ ๋Œ€์ƒ์œผ๋กœ ๊ด€๋ฆฌํ˜• ํด๋ผ์šฐ๋“œ ์„œ๋น„์Šค๋ฅผ ์ œ๊ณตํ•˜๋ฉฐ, ์ œ์กฐ, ํ—ฌ์Šค์ผ€์–ด, ๊ธˆ์œต ์„œ๋น„์Šค, ๋ณดํ—˜ ๋“ฑ ํญ๋„“์€ ์‚ฐ์—…์„ ์ง€์›ํ•œ๋‹ค. MS ์• ์ €, ๊ตฌ๊ธ€ ํด๋ผ์šฐ๋“œ, IBM ํด๋ผ์šฐ๋“œ, AWS๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ•œ ๋ฉ€ํ‹ฐํด๋ผ์šฐ๋“œ ์ „๋žต์„ ์ฑ„ํƒํ•˜๊ณ  ์žˆ๋‹ค. ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜์˜ ํด๋ผ์šฐ๋“œ ์ด์ „, ๋…ธํ›„ ์‹œ์Šคํ…œ ํ˜„๋Œ€ํ™”, ๋ ˆ๊ฑฐ์‹œ ๊ธฐ์ˆ  ์ „ํ™˜์„ ์ง€์›ํ•˜๋Š” ํ•œํŽธ, NTT ๊ทธ๋ฃน ์ „๋ฐ˜์˜ ์—ญ๋Ÿ‰์„ ํ™œ์šฉํ•ด ์•„์ด๋ดํ‹ฐํ‹ฐ ๋ฐ ์ ‘๊ทผ ๊ด€๋ฆฌ, ๋„คํŠธ์›Œํ‚น, ๊ด€๋ฆฌํ˜• ๋ณด์•ˆ ์„œ๋น„์Šค๋„ ํ•จ๊ป˜ ์ œ๊ณตํ•œ๋‹ค. ์ด๋ฅผ ํ†ตํ•ด ๊ณ ๊ฐ์ด ๋น„์ฆˆ๋‹ˆ์Šค๋ฅผ ๋ณด๋‹ค ํšจ๊ณผ์ ์œผ๋กœ ์ง€์›ํ•˜๋Š” ํด๋ผ์šฐ๋“œ ๊ธฐ๋ฐ˜ ์‹œ์Šคํ…œ์„ ๊ตฌ์ถ•ํ•˜๋„๋ก ๋•๋Š”๋‹ค.

ํƒ€ํƒ€์ปจ์„คํ„ด์‹œ์„œ๋น„์Šค

ํƒ€ํƒ€์ปจ์„คํ„ด์‹œ์„œ๋น„์Šค(Tata Consultancy Services, TCS)๋Š” ์ „ ์„ธ๊ณ„ ๊ธฐ์—…๊ณผ ํ˜‘๋ ฅํ•˜๊ณ  ์žˆ์ง€๋งŒ, ๊ด€๋ฆฌํ˜• ํด๋ผ์šฐ๋“œ ์„œ๋น„์Šค ๊ณ ๊ฐ์€ ์ฃผ๋กœ ๋ถ๋ฏธ์™€ ์œ ๋Ÿฝ์— ์ง‘์ค‘๋ผ ์žˆ๋‹ค. ๊ธˆ์œต ์„œ๋น„์Šค, ์ƒ๋ช…๊ณผํ•™ยท์ œ์•ฝ, ๋ฆฌํ…Œ์ผ ์‚ฐ์—…์—์„œ ๊ฐ•ํ•œ ๊ฒฝํ—˜์„ ๋ณด์œ ํ•˜๊ณ  ์žˆ๋‹ค. MS ์• ์ €, ๊ตฌ๊ธ€ ํด๋ผ์šฐ๋“œ, ์˜ค๋ผํด ํด๋ผ์šฐ๋“œ, AWS๋ฅผ ์ค‘์‹ฌ์œผ๋กœ ๋ฉ€ํ‹ฐํด๋ผ์šฐ๋“œ ํ™˜๊ฒฝ์„ ์ง€์›ํ•˜๋ฉฐ, ์ผ๋ถ€ IBM ํด๋ผ์šฐ๋“œ๋„ ์ œ๊ณตํ•œ๋‹ค. ์ฃผ์š” ํด๋ผ์šฐ๋“œ ํŒŒํŠธ๋„ˆ๋ณ„ ์ „๋‹ด ํŒ€์„ ์šด์˜ํ•˜๋ฉฐ, ๋Œ€๊ธฐ์—…์„ ๋Œ€์ƒ์œผ๋กœ ํด๋ผ์šฐ๋“œ ์ด์ „ ์ „๋žต ์ˆ˜๋ฆฝ, ๊ธฐ์กด ์‹œ์Šคํ…œ ์ด์ „, ์• ํ”Œ๋ฆฌ์ผ€์ด์…˜ ํ˜„๋Œ€ํ™”๋ฅผ ์ง€์›ํ•œ๋‹ค. ์„œ๋น„์Šค์˜ ์ค‘์‹ฌ์€ ๋Œ€๊ธฐ์—…์— ๋งž์ถฐ์ ธ ์žˆ์œผ๋ฉฐ, ์ค‘๊ฒฌ๊ธฐ์—… ๋Œ€์ƒ ๋น„์ค‘์€ ์ƒ๋Œ€์ ์œผ๋กœ ์ œํ•œ์ ์ด๋‹ค.
dl-ciokorea@foundryco.com

The 37-point trust gap: Itโ€™s not the AI, itโ€™s your organization

Iโ€™ve been in the tech industry for over three decades, and if thereโ€™s one thing Iโ€™ve learned, itโ€™s that the tech world loves a good mystery. And right now, weโ€™ve got a fascinating one on our hands.

This year, two of the most respected surveys in our field asked developers a simple question: Do you trust the output from AI tools? The results couldnโ€™t be more different!

  • The 2025 DORA report, a study with nearly 5,000 tech professionals that historically skews enterprise, found that a full 70% of respondents express some degree of confidence in the quality of AI-generated output.
  • Meanwhile, the 2025 Stack Overflow Developer Survey, with its own massive developer audience, found that only 33% of developers are โ€œSomewhatโ€ or โ€œHighlyโ€ trusting of AI tools.

Thatโ€™s a 37-point gap.

Think about that for a second. Weโ€™re talking about two surveys conducted during the same year, of the same profession and examining largely the same underlying AI models from providers like OpenAI, Anthropic and Google. How can two developer surveys report such fundamentally different realities?

DORA: AI is an amplifier

The mystery of the 37-point trust gap isnโ€™t about the AI. Itโ€™s about the operational environment AI is surrounded with (more on that in the next section). As the DORA report notes in its executive summary, the main takeaway is: AI is an amplifier. Put bluntly, โ€œthe central question for technology leaders is no longer if they should adopt AI, but how to realize its value.โ€

DORA didnโ€™t just measure AI adoption. They measured the organizational capabilities that determine whether AI helps or destroys your teamโ€™s velocity. And they found seven specific capabilities that separate the 70% confidence group in their survey from the 33% in the Stack Overflow results.

Let me walk you through them, because this is where weโ€™ll get practical.

The 7 pillars of a high-trust AI environment

So, what does a good foundation look like? The DORA research team didnโ€™t just identify the problem; they gave us a blueprint. They identified seven foundational โ€œcapabilitiesโ€ that turn AI from a novelty into a force multiplier. When I read this list, I just nodded my head. Itโ€™s the stuff great engineering organizations have been working on for years.

Here are the keys to the kingdom, straight from the DORA AI Capabilities Model:

  1. A clear and communicated AI stance: Do your developers know the rules of the road? Or are they driving blind, worried theyโ€™ll get in trouble for using a tool or, worse, feeding it confidential data? When the rules are clear, friction goes down and effectiveness skyrockets.
  2. Healthy data ecosystems: AI is only as good as the data it learns from. Organizations that treat their data as a strategic assetโ€”investing in its quality, accessibility and unificationโ€”see a massive amplification of AIโ€™s benefits on organizational performance.
  3. AI-accessible internal data: Generic AI is useful. AI that understands your codebase, your documentation and your internal APIs is a game-changer. Connecting AI to your internal context is the difference between a helpful co-pilot and a true navigator.
  4. Strong version control practices: In an age of AI-accelerated code generation, your version control system is your most critical safety net. Teams that are masters of commits and rollbacks can experiment with confidence, knowing they can easily recover if something goes wrong. This is what enables speed without sacrificing sanity.
  5. Working in small batches: AI can generate a lot of code, fast. But bigger changes are harder to review and riskier to deploy. Disciplined teams that work in small, manageable chunks see better product performance and less friction, even if it feels like theyโ€™re pumping the brakes on individual code output.
  6. A user-centric focus: This one is a showstopper. The DORA report found that without a clear focus on the user, AI adoption can actually harm team performance. Why? Because youโ€™re just getting faster at building the wrong thing. When teams are aligned on creating user value, AI becomes a powerful tool for achieving that shared goal.
  7. Quality internal platforms: A great platform is the paved road that lets developers drive the AI racecar. A bad one is a dirt track full of potholes. The data is unequivocal: a high-quality platform is the essential foundation for unlocking AIโ€™s value at an organizational level.

What this means for you

This isnโ€™t just an academic exercise. The 37-point DORA-Stack Overflow gap has real implications for how we work.

  • For developers: If youโ€™re frustrated with AI, donโ€™t just blame the tool. Look at the system around you. Are you being set up for success? This isnโ€™t about your prompt engineering skills; itโ€™s about whether you have the organizational support to use these tools effectively.
  • For engineering leaders: Your job isnโ€™t to just buy AI licenses. Itโ€™s to build the ecosystem where those licenses create value. That DORA list of seven capabilities? Thatโ€™s your new checklist. Your biggest ROI isnโ€™t in the next AI model; itโ€™s in fixing your internal platform, clarifying your data strategy and socializing your AI policy.
  • For CIOs: The DORA report states it plainly: successful AI adoption is a systems problem, not a tools problem. Pouring money into AI without investing in the foundational capabilities that amplify its benefits is a recipe for disappointment.

So, the next time you hear a debate about whether AI is โ€œgoodโ€ or โ€œbadโ€ for developers, remember the gap between these two surveys. The answer is both, and the difference has very little to do with the AI itself.

AI without a modern engineering culture and solid infrastructure is just expensive frustration. But AI with that foundation? Thatโ€™s the future.

This article is published as part of the Foundry Expert Contributor Network.

MCSP buyerโ€™s guide: 6 top managed cloud services providers โ€” and how to choose

A managed cloud services provider (MCSP) helps organizations run some or all of their cloud environments. This can include moving systems to the cloud, monitoring and maintaining them, improving performance, managing security tools, and helping control costs. MCSPs typically work across public, private, and hybrid cloud environments.

Organizations decide which parts of their cloud environments they want the provider to handle and which parts they want to keep in-house. In most cases, the company and the MCSP share responsibility. The provider manages day-to-day operations and tooling, while the organization stays accountable for business decisions, data, and governance.

Choosing an MCSP is always an unnerving experience, says Brent Riley, VP of digital forensics and incident response for North America at cybersecurity consultancy CyXcel.

โ€œSo much trust is placed in their ability to perform to the level promised in their SLA, but it can be tough to validate whether theyโ€™re being met until thereโ€™s an outage or cybersecurity incident that reveals issues,โ€ he says. โ€œAt that point, the damage is done. MCSPs are even more challenging to evaluate and select as thereโ€™s no physical infrastructure to inspect, and no visible work being done within an on-premise infrastructure.โ€

Benefits of using an MCSP

Reduced operational burden: MCSPs can take on day-to-day cloud management tasks, reducing the need for large internal cloud and infrastructure teams. This is especially helpful for organizations that donโ€™t have deep cloud or FinOps expertise in-house.

Faster problem response: Most MCSPs provide 24/7 monitoring and support. When issues arise, their teams can respond quickly, often before problems significantly impact users or applications.

Support for disaster recovery and resilience: MCSPs help design, manage, and test backup and disaster recovery setups. While customers still define recovery goals, providers help ensure systems can be restored quickly if something goes wrong.

Ongoing platform management: Cloud platforms change frequently. MCSPs help keep infrastructure components current and compatible, reducing the risk of outdated configurations while allowing customers to control when major changes are introduced.

Security expertise and tooling: Cloud security requires specialized skills in high demand. MCSPs bring experience with identity management, monitoring, compliance tools, and security best practices. Security remains a shared responsibility, but providers help strengthen day-to-day protection.

Improved reliability and performance: With experience running large and complex environments, MCSPs can help design and operate cloud infrastructure thatโ€™s more stable, scalable, and resilient.

Integration with existing systems: MCSPs help connect cloud resources with on-prem systems, applications, and identity platforms. This makes it easier for users and applications to access cloud services without disruption.

More predictable operations, not always lower costs: While MCSPs can reduce internal staffing and tooling costs, they donโ€™t always lower overall cloud spend. Their value today is more about operational efficiency, expertise, and speed than cheaper cloud pricing.

Key considerations when choosing an MCSP

As organizations move toward more autonomous, AI-driven services, MCSPs play an important role in turning automation into something that actually works every day, says Manny Rivelo, CEO at ConnectWise, a provider of IT management software.

Rivelo says one thing matters more than many teams realize: operational transparency. Organizations need a clear view into how their cloud environments are designed, secured, and managed, as well as how agentic AI monitors systems, makes decisions, and takes action so nothing important happens behind the scenes without their knowledge.

โ€œOperational maturity matters more as autonomy increases,โ€ Rivelo says. โ€œThis includes disciplined data governance, strong physical and logical security, and well-defined incident response processes that balance automation with human oversight. While agentic AI can detect issues, correlate signals, and respond at machine speed, humans remain essential to set policy, validate outcomes, and make judgment calls when conditions fall outside expected patterns.โ€

Itโ€™s also important that the MCSP fits well with the managed services model and the broader ecosystem around it, according to Rivelo. The right provider should use automation and AI to make things simpler. After all, when automation is done right, it backs up the people doing the work, brings more consistency to operations, and gives teams more time to focus on what actually matters, not manage another set of tools.

One factor that often gets missed when choosing an MCSP is how flexible pricing really is, says Jon Winsett, CEO at NPI, which helps enterprises get more value from their software licenses and navigate audits from vendors such as Microsoft, Oracle, and Cisco. The risk with an MCSP is usually not paying more at the start but losing negotiating power over time without noticing it.

MCSPs can be a big help for smaller teams or organizations still building cloud experiences, he adds. By combining cloud spend and packaging services, such as migration support, rightsizing, and cost controls, they can cut down on waste and make the cloud easier to run. For organizations without strong cloud or FinOps skills in-house, those benefits can be worth the tradeoffs.

โ€œAs cloud environments grow, pricing often becomes less clear,โ€ says Winsett. โ€œMCSPs add their own markup on top of Microsoft or AWS pricing, up to 8% for basic spend and more when services are bundled. That managed layer is how MCSPs reach profit margins of roughly 30 to 40%.โ€

Disadvantages to working with an MCSP

The biggest disadvantage of using an MCSP is loss of control, according to Ryan McElroy, VP of technology at tech consulting firm Hylaine.

โ€œIf you get discounts for various licenses, but youโ€™re locked into contracts and have to overbuy, then you may not be saving money,โ€ he says. โ€œAnd an MCSP adds to your organizationโ€™s attack surface area. While Microsoft and other large cloud vendors train their MCSPs and provide guidance, if you read the root cause analysis reports produced after major cybersecurity incidents, youโ€™ll find itโ€™s a worryingly common vector.โ€

Anay Nawathe, director at research and advisory firm ISG, says that while working with MCSPs has many benefits, there are also risks.

โ€œYour MCSP shouldnโ€™t be the main voice of architecture in your organization,โ€ he says. โ€œArchitectural decisions should be owned internally to maintain key systems knowledge in-house, reduce vendor lock-in, and mitigate architectural bias from a provider compared to market best practices.โ€

Additionally, he adds that MCSPs donโ€™t always feel the same pressure to manage costs as the companies using the cloud. In the end, enterprises are the ones who feel the impact of overspending, which is why many bring FinOps roles back in-house to take direct control of cloud costs, he says.

6 top MCSPs

There are dozens of providers, so to help streamline the research, we highlight the following six, arranged alphabetically, based on independent research and discussions with analysts. Organizations should contact providers directly for pricing information.

Accenture

Accenture offers its managed cloud services to customers worldwide, backed by teams and centers in most major regions and markets. It helps organizations design, run, and maintain their cloud environments, and supports everything from initial cloud setup to ongoing operations, including monitoring, maintenance, and security. Accenture also works across major cloud platforms, such as Microsoft Azure, Google Cloud, and AWS. Instead of managing complex cloud systems entirely in-house, companies can use Accentureโ€™s services to handle routine operations and technical oversight. This includes monitoring systems, addressing issues as they come up, and keeping cloud environments updated. Overall, Accenture manages the day-to-day cloud infrastructure so organizational in-house staff can focus on key business priorities.

Capgemini

Capgemini provides managed cloud services worldwide and supports multicloud environments across all major regions, with much of its work centered in Europe and North America. The company works closely with industries such as manufacturing, retail, financial services, and insurance. Capgemini helps organizations run and manage applications on major cloud platforms, including AWS, Microsoft Azure, and Google Cloud, as well as specialized enterprise clouds. Its managed services cover both infrastructure and applications, including monitoring, backups, and technical support. Capgemini also helps companies decide which workloads make sense to move to the cloud, migrate those systems, and manage them over time. The firm is best suited for large enterprises and complex environments rather than midsize organizations.

Deloitte

Deloitte provides cloud services to customers around the world, with much of its work focused on organizations in North America and Europe. It works heavily with industries in financial services and insurance, government, and healthcare. Deloitte supports multicloud environments and works with platforms including AWS, Microsoft Azure, Google Cloud, VMware Cloud, and Oracle Cloud. The firm helps companies plan, build, and operate cloud environments tailored to business goals. A key focus is cloud transformation, including identifying where cloud tech can improve processes and operations. Deloitte is best suited for large enterprises pursuing digital transformation, and while consulting remains its core business, the firm continues to expand its managed services offerings.

HCL Technologies

Managed cloud services from HCL Technologies are offered globally, and supported by teams and centers around the world. HCL helps organizations move their systems to the cloud and keep them running smoothly over time. It works with major cloud providers, such as AWS, Microsoft Azure, and Google Cloud to design and set up cloud environments that match each businessโ€™s needs. Once everythingโ€™s in place, HCL handles the daily operations, including around-the-clock monitoring, performance management, and fixing issues as they arise, and also uses automation and AI tools for routine IT tasks. Overall, HCL helps organizations maintain reliable cloud systems across industries like banking, manufacturing, and healthcare.

NTT Data

NTT Data delivers managed cloud services to customers globally. It supports a wide range of industries, including manufacturing, healthcare, financial services, and insurance. NTT Data takes a multicloud approach, with managed services customers running on Microsoft Azure, Google Cloud, IBM Cloud, and AWS. NTT Data also helps companies move applications to the cloud, modernize aging systems, and move away from legacy tech, as well as draws on expertise from across the NTT Group to offer services like identity and access management, networking, and managed security, helping customers build cloud-based systems that better support their businesses.

Tata Consultancy Services

TCS works with organizations worldwide, but most of its cloud and managed services customers are in North America and Europe. The company has strong experience in industries such as financial services, life sciences and pharmaceuticals, and retail. TCS supports multicloud environments and works with leading cloud platforms like Microsoft Azure, Google Cloud, Oracle Cloud, and AWS, with some support for IBM Cloud. TCS has dedicated teams for its largest cloud partners and helps large enterprises plan cloud migrations, move existing systems, and modernize applications for the cloud. The majority of this work is focused on large enterprises, with limited emphasis on midsize organizations.


7 changes to the CIO role in 2026

Everything is changing, from data pipelines and technology platforms to vendor selection, employee training, and even core business processes, and CIOs are in the middle of it, guiding their companies into the future.

In 2024, tech leaders asked themselves whether this AI thing even works and how to do it. Last year, the big question was what the best use cases are for the new technology. This year will be all about scaling up and starting to use AI to fundamentally transform how employees, business units, or even entire companies actually function.

So however IT was regarded before, it’s now a driver of restructuring. Here are seven ways the CIO role will change in the next 12 months.

Enough experimenting

The role of the CIO will change for the better in 2026, says Eric Johnson, CIO at incident management company PagerDuty, because there is a lot of business benefit and opportunity in AI.

โ€œItโ€™s like having a mine of very valuable minerals and gold, and youโ€™re not quite sure how to extract it and get full value out of it,โ€ he says. Now, he and his peers are being asked to do just that: move out of experimentation and into extraction.

โ€œWeโ€™re being asked to take everything weโ€™ve learned over the past couple of years and find meaningful value with AI,โ€ he says.

What makes this extra challenging is that the pace of change is so much faster now than before.

โ€œWhat generative AI was 12 months ago is completely different to what it is today,โ€ he says. โ€œAnd the business folks watching that transformation occur are starting to hear of use cases they never heard of months ago.โ€

From IT manager to business strategist

The traditional role of a companyโ€™s IT department has been to provide technology support to other business units.

โ€œYou tell me what the requirements are, and Iโ€™ll build you your thing,โ€ says Marcus Murph, partner and head of technology consulting at KPMG US.

But the role is changing from back-office order taker to full business partner working alongside business leaders to leverage innovation.

โ€œMy instincts tell me that for at least the next decade, weโ€™ll see such drastic change in technology that they wonโ€™t go back to the back office,โ€ he says. โ€œWeโ€™re probably in the most rapid hyper cycle of change at least since the internet or mobile phones, but almost certainly more than that.โ€

Change management

As AI transforms how people do their jobs, CIOs will be expected to step up and help lead the effort.

โ€œA lot of the conversations are about implementing AI solutions, how to make solutions work, and how they add value,โ€ says Ryan Downing, VP and CIO of enterprise business solutions at Principal Financial Group. โ€œBut the reality is with the transformation AI is bringing into the workplace right now, thereโ€™s a fundamental change in how everyone will be working.โ€

This transformation will challenge everyone, he says, in terms of roles, expertise, and the value proposition of work that’s been done the same way for years.

โ€œThe technology weโ€™re starting to bring into the workplace is really shaping the future of work, and we need to be agents of change beyond the tech,โ€ he says.

That change management starts within the IT organization itself, adds Matt Kropp, managing director and senior partner, and CTO, at Boston Consulting Group.

โ€œThereโ€™s quite a lot of focus on AI for software development because itโ€™s maybe the most advanced, and the tools have been around for a while,โ€ he says. โ€œThereโ€™s a very clear impact using AI agents for software developers.โ€

The lessons that CIOs learn from managing this transformation can be applied in other business units, too, he says.

โ€œWhat we see happening with AI for software development is a canary in the coal mine,โ€ he adds. And itโ€™s an opportunity to ensure the company is getting the productivity gains itโ€™s looking for, but also to create change management systems that can be used in other parts of the enterprise. And it starts with the CIO.

โ€œYou want the top of the organization saying they expect everyone to use AI because they use it, and can demonstrate how they use it as part of their work,โ€ he says. Leaders need to lead by example that the use of AI is allowed, accepted, and expected.

CIOs and other executives can use AI to create first drafts of memos, organize meeting notes, and help them think through strategy. And any major technology initiative will include a change management component, yet few technologies have had as dramatic an impact on work as AI is having, and is expected to have.

Deploying AI at scale in an enterprise, however, is a very contentious issue, says Ari Lightman, a professor at Carnegie Mellon University. Companies have spent a lot of time focusing on understanding the customer experience, he says, but few focus on the employee experience.

โ€œWhen you roll out enterprise-wide AI systems, youโ€™re going to have people who are supportive and interested, and people who just want to blow it up,โ€ he says. Without addressing the issues that employees have, AI projects can grind to a halt.

Cleaning up the data

As AI projects scale up, so will their data requirements. Instead of limited, curated data sets, enterprises will need to modernize their data stacks if they havenโ€™t already, and make the data ready and accessible for AI systems while ensuring security and compliance.

โ€œWeโ€™re thinking about data foundations and making sure we have the infrastructure in place so AI is something we can leverage and get value out of,โ€ says Aaron Rucker, VP of data at Warner Music.

The security aspect is particularly important as AI agents gain the ability to autonomously seek out and query data sources. This was much less of a concern with small pilot projects or RAG embedding, where developers carefully curated the data that was used to augment AI prompts. And before gen AI, data scientists, analysts, and data engineers were the ones accessing data, which offered a layer of human control that might diminish or completely vanish in the agentic age. That means the controls will need to move closer to the data itself.

โ€œWith AI, sometimes you want to move fast, but you still want to make sure youโ€™re setting up data sources with proper permissions so someone canโ€™t just type in a chatbot and get all the family jewels,โ€ says Rucker.

Make build vs buy decisions

This year, the build or buy decisions for AI will have dramatically bigger impacts than they did before. In many cases, vendors can build AI systems better, quicker, and cheaper than companies can themselves. And if a better option comes along, switching is a lot easier than when you’ve built something internally from scratch. On the other hand, some business processes represent core business value and competitive advantage, says Rucker.

โ€œHR isnโ€™t a competitive advantage for us because Workday is going to be better positioned to build something thatโ€™s compliantโ€ he says. โ€œIt wouldnโ€™t make sense for us to build that.โ€

But then there are areas where Warner Music can gain a strategic advantage, he says, and itโ€™s going to be important to figure out what this advantage is going to be when it comes to AI.

โ€œWe shouldnโ€™t be doing AI for AIโ€™s sake,โ€ says Rucker. โ€œWe should attach it to some business value as a reflection of our company strategy.โ€

If a company uses outside vendors for important business processes, thereโ€™s a risk the vendor will come to understand an industry better than the existing players.

Digitizing a business process creates behavioral capital, network capital, and cognitive capital, says John Sviokla, executive fellow at the Harvard Business School and co-founder of GAI Insights. It unlocks something that used to be exclusively inside the minds of employees.

Companies have already traded their behavioral capital to Google and Facebook, and network capital to Facebook and LinkedIn.

โ€œTrading your cognitive capital for cheap inference or cheap access to technology is a very bad idea,โ€ says Sviokla. Even if the AI company or hyperscaler isnโ€™t currently in a particular line of business, this gives them the starter kit to understand that business. โ€œOnce they see a massive opportunity, they can put billions of dollars behind it,โ€ he says.

Platform selection

As AI moves from one-off POCs and pilot projects to deployments at scale, companies will have to come to grips with choosing an AI platform, or platforms.

โ€œWith things changing so fast, we still donโ€™t know whoโ€™s going to be the leaders in the long term,โ€ says Principalโ€™s Downing. โ€œWeโ€™re going to start making some meaningful bets, but I donโ€™t think the industry is at the point where we pick one and say thatโ€™s going to be it.โ€

The key is to pick platforms that have the ability to scale, but are decoupled, he says, so enterprises can pivot quickly, but still get business value. โ€œRight now, Iโ€™m prioritizing flexibility,โ€ he says.

Bret Greenstein, chief AI officer at management consulting firm West Monroe Partners, recommends CIOs identify aspects of AI that are stable, and those that change rapidly, and make their platform selections accordingly.

โ€œKeep your AI close to the cloud because the cloud is going to be stable,โ€ he says. โ€œBut the AI agent frameworks will change in six months, so build to be agnostic in order to integrate with any agent frameworks.โ€

Progressive CIOs are building the enterprise infrastructure of tomorrow and have to be thoughtful and deliberate, he adds, especially around building governance models.

Revenue generation

AI is poised to massively transform business models across every industry. This is a threat to many companies, but also an opportunity for others. By helping to create new AI-powered products and services, CIOs can make IT a revenue generator instead of just a cost center.

โ€œYouโ€™re going to see this notion of most IT organizations directly building tech products that enable value in the marketplace, and change how you do manufacturing, provide services, and how you sell a product in a store,โ€ says KPMGโ€™s Murph.

That puts IT much closer to the customer than it had been before, raising its profile and significance in the organization, he says.

โ€œIn the past, IT was one level away from the customer,โ€ he says. โ€œThey enabled the technology to help business functions sell products and services. Now with AI, CIOs and IT build the products, because everything is enabled by technology. They go from the notion of being services-oriented to product-oriented.โ€

One CIO already doing this is Amith Nair at Vituity, a national physician group serving 13.8 million patients.

โ€œWeโ€™re building products internally and providing them back to the hospital system, and to external customers,โ€ he says.

For example, doctors spend hours a day transcribing conversations with patients, which is something AI can help with. โ€œWhen a patient comes in, they can just have a conversation,โ€ he says. โ€œInstead of looking at the computer and typing, they look at and listen to the patient. Then all of their charting, medical decision processes, and discharge summaries are developed using a multi-agent AI platform.โ€

The tool was developed in-house, custom-built on top of the Microsoft Azure platform, and is now a startup running on its own, he says.

โ€œWeโ€™ve become a revenue generator,โ€ he says.

โŒ