
At VA, cyber dominance is in, cyber compliance is out

The Department of Veterans Affairs is moving toward a more operational approach to cybersecurity.

This means VA is applying a deeper focus on protecting the attack surfaces and closing off threat vectors that put veterans’ data at risk.

Eddie Pool, the acting principal assistant secretary for information and technology and acting principal deputy chief information officer at VA, said the agency is changing its cybersecurity posture to reflect a cyber dominance approach.


“That’s a move away from the traditional, exclusively compliance-based approach to cybersecurity, where we put a lot of our time, resources and investments in compliance-based activities,” Pool said on Ask the CIO. “For example, did someone check the box on a form? Did someone file something in the right place? We’re really moving a lot of our focus over to the risk-based approach to security, pushing things like zero trust architecture, microsegmentation of our networks and really doing things that are more focused on the operational landscape. We are more focused on protecting those attack surfaces and closing off those threat vectors in cyberspace.”

A big part of this move to cyber dominance is applying the concepts that make up a zero trust architecture, such as microsegmentation and identity and access management.

Pool said as VA modernizes its underlying technology infrastructure, it will “bake in” these zero trust capabilities.

“Over the next several years, you’re going to see that naturally evolve in terms of where we are in the maturity model path. Our approach here is not necessarily to try to map to a model. It’s really to rationalize what are the highest value opportunities that those models bring, and then we prioritize on those activities first,” he said. “We’re not pursuing it in a linear fashion. We are taking parts and pieces and what makes the most sense for the biggest bang for our buck right now, that’s where we’re putting our energy and effort.”

One of those areas that VA is focused on is rationalizing the number of tools and technologies it’s using across the department. Pool said the goal is to get down to a specific set instead of having the “31 flavors” approach.

“We’re going to try to make it where you can have any flavor you want so long as it’s chocolate. We are trying to get that standardized across the department,” he said. “That gives us the opportunity from a sustainment perspective that we can focus the majority of our resources on those enterprise standardized capabilities. From a security perspective, it’s a far less threat landscape to have to worry about having 100 things versus having two or three things.”

The business process reengineering priority

Pool added that redundancy remains a key factor in the security and tool rationalization effort. He said VA will continue to have a diversity of products in its IT investment portfolios.

“Where we are at is we are looking at how do we build that future state architecture as elegantly and simplistically as possible, so that we can manage it more effectively and protect it more securely,” he said.

In addition to standardizing its cyber tools and technologies, Pool said VA is bringing the same approach to business processes for enterprisewide services.

He said over the years, VA has built up a laundry list of legacy technology, all with different versions and maintenance requirements.

“We’ve done a lot over the years in the Office of Information and Technology to really standardize on our technology platforms. Now it’s time to leverage that, to really bring standard processes to the business,” he said. “What that does is that really does help us continue to put the veteran at the center of everything that we do, and it gives a very predictable, very repeatable process and expectation for veterans across the country, so that you don’t have different experiences based on where you live or where you’re getting your health care and from what part of the organization.”

As part of the standardization effort, VA will expand its use of automation, particularly in the processing of veterans’ claims.

Pool said the goal is to take more advantage of the agency’s data and use artificial intelligence to accelerate claims processing.

“The richness of the data and the standardization of our data that we’re looking at and how we can eliminate as many steps in these processes as we can, where we have data to make decisions, or we can automate a lot of things that would completely eliminate what would be a paper process that is our focus,” Pool said. “We’re trying to streamline IT to the point that it’s as fast and as efficient, secure and accurate as possible from a VA processing perspective, and in turn, it’s going to bring a decision back to the veteran a lot faster, and a decision that’s ready to go on to the next step in the process.”

Many of these updates already are having an impact on VA’s business processes. The agency said that it set a new record for the number of disability and pension claims processed in a single year, more than 3 million. That beat its record set in 2024 by more than 500,000.

“We’re driving benefit outcomes. We’re driving technology outcomes. From my perspective, everything that we do here, every product, service capability that the department provides the veteran community, it’s all enabled through technology. So technology is the underpinning infrastructure, backbone to make all things happen, or where all things can fail,” Pool said. “First, on the internal side, it’s about making sure that those infrastructure components are modernized. Everything’s hardened. We have a reliable, highly available infrastructure to deliver those services. Then at the application level, at the actual point of delivery, IT is involved in every aspect of every challenge in the department, to again, bring the best technology experts to the table and look at how can we leverage the best technologies to simplify the business processes, whether that’s claims automation, getting veterans their mileage reimbursement earlier or by automating processes to increase the efficacy of the outcomes that we deliver, and just simplify how the veterans consume the services of VA. That’s the only reason why we exist here, is to be that enabling partner to the business to make these things happen.”


Vertical AI development agents are the future of enterprise integrations

Enterprise Application Integration (EAI) and modern iPaaS platforms have become two of the most strategically important – and resource-constrained – functions inside today’s enterprises. As organizations scale SaaS adoption, modernize core systems, and automate cross-functional workflows, integration teams face mounting pressure to deliver faster while upholding strict architectural, data quality, and governance standards.

AI has entered this environment with the promise of acceleration. But CIOs are discovering a critical truth:

Not all AI is built for the complexity of enterprise integrations – whether in traditional EAI stacks or modern iPaaS environments.

Generic coding assistants such as Cursor or Claude Code can boost individual productivity, but they struggle with the pattern-heavy, compliance-driven reality of integration engineering. What looks impressive in a demo often breaks down under real-world EAI/iPaaS conditions.

This widening gap has led to the rise of a new category: Vertical AI Development Agents – domain-trained agents purpose-built for integration and middleware development. Companies like CurieTech AI are demonstrating that specialized agents deliver not just speed, but materially higher accuracy, higher-quality outputs, and far better governance than general-purpose tools.

For CIOs running mission-critical integration programs, that difference directly affects reliability, delivery velocity, and ROI.

Why EAI and iPaaS integrations are not a “Generic Coding” problem

Integrations – whether built on legacy middleware or modern iPaaS platforms – operate within a rigid architectural framework:

  • multi-step orchestration, sequencing, and idempotency
  • canonical data transformations and enrichment
  • platform-specific connectors and APIs
  • standardized error-handling frameworks
  • auditability and enterprise logging conventions
  • governance and compliance embedded at every step

Generic coding models are not trained on this domain structure. They often produce code that looks correct, yet subtly breaks sequencing rules, omits required error handling, mishandles transformations, or violates enterprise logging and naming standards.

Vertical agents, by contrast, are trained specifically to understand flow logic, mappings, middleware orchestration, and integration patterns – across both EAI and iPaaS architectures. They don’t just generate code – they reason in the same structures architects and ICC teams use to design integrations.

This domain grounding is the critical distinction.

The hidden drag: Context latency, expensive context managers, and prompt fatigue

Teams experimenting with generic AI encounter three consistent frictions:

Context Latency

Generic models cannot retain complex platform context across prompts. Developers must repeatedly restate platform rules, logging standards, retry logic, authentication patterns, and canonical schemas.

Developers become “expensive context managers”

A seemingly simple instruction – “Transform XML to JSON and publish to Kafka” – quickly devolves into a series of corrective prompts:

  • “Use the enterprise logging format.”
  • “Add retries with exponential backoff.”
  • “Fix the transformation rules.”
  • “Apply the standardized error-handling pattern.”

Developers end up managing the model instead of building the solution.
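
To make the gap concrete, the sketch below shows roughly what a developer ends up assembling by hand after that cycle of corrective prompts: parse the XML, publish JSON to Kafka with exponential-backoff retries, and log in a structured format. It is a minimal illustration only, not any vendor’s generated output; the broker address, topic name and log format are placeholders.

```python
import json
import logging
import time

import xmltodict                    # pip install xmltodict
from kafka import KafkaProducer     # pip install kafka-python

# Placeholder "enterprise" logging format; real standards vary by organization.
logging.basicConfig(
    format="%(asctime)s | %(levelname)s | order-sync | %(message)s",
    level=logging.INFO,
)
log = logging.getLogger("order-sync")

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                      # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_with_retries(xml_payload: str, topic: str = "orders", attempts: int = 4) -> None:
    """Transform XML to JSON and publish it to Kafka, retrying with exponential backoff."""
    record = xmltodict.parse(xml_payload)                    # XML -> dict, serialized as JSON
    for attempt in range(1, attempts + 1):
        try:
            producer.send(topic, record).get(timeout=10)     # block until the broker acks
            log.info("published to %s on attempt %d", topic, attempt)
            return
        except Exception as exc:                             # a real flow would use a standardized error chain
            log.warning("attempt %d failed: %s", attempt, exc)
            time.sleep(2 ** attempt)                         # exponential backoff
    raise RuntimeError("could not publish after retries")
```

Every line of that boilerplate is a place where a generic assistant can silently drift from the house standard, which is exactly the gap vertical agents aim to close.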

Prompt fatigue

The cycle of re-prompting, patching, and enforcing architectural rules consumes time and erodes confidence in outputs.

This is why generic tools rarely achieve the promised acceleration in integration environments.

Benchmarks show vertical agents are about twice as accurate

CurieTech AI recently published comparative benchmarks evaluating its vertical integration agents against leading generic tools, including Claude Code. The tests covered real-world tasks:

  • generating complete, multi-step integration flows
  • building cross-system data transformations
  • producing platform-aligned retries and error chains
  • implementing enterprise-standard logging
  • converting business requirements into executable integration logic

The results were clear: generic tools performed at roughly half the accuracy of vertical agents.

Generic outputs often looked plausible but contained structural errors or governance violations that would cause failures in QA or production. Vertical agents produced platform-aligned, fully structured workflows on the first pass.

For integration engineering – where errors cascade – this accuracy gap directly impacts delivery predictability and long-term quality.

The vertical agent advantage: Single-shot solutioning

The defining capability of vertical agents is single-shot task execution.

Generic tools force stepwise prompting and correction. But vertical agents—because they understand patterns, sequencing, and governance—can take a requirement like:

“Create an idempotent order-sync flow from NetSuite to SAP S/4HANA with canonical transformations, retries, and enterprise logging.”


and return:

  • the flow
  • transformations
  • error handling
  • retries
  • logging
  • and test scaffolding

in one coherent output.

This shift – from instruction-oriented prompting to goal-oriented prompting – removes context latency and prompt fatigue while drastically reducing the need for developer oversight.

Built-in governance: The most underrated benefit

Integrations live and die by adherence to standards. Vertical agents embed those standards directly into generation:

  • naming and folder conventions
  • canonical data models
  • PII masking and sensitive-data controls
  • logging fields and formats
  • retry and exception handling patterns
  • platform-specific best practices

Generic models cannot consistently maintain these rules across prompts or projects.

Vertical agents enforce them automatically, which leads to higher-quality integrations with far fewer QA defects and production issues.
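
As a toy illustration of what embedding standards into generation can mean in practice (this is not CurieTech AI’s implementation; the rules and flow structure are invented for the example), a governance check might validate a generated flow definition against a few such rules before it ever reaches review:

```python
import re

# Invented governance rules, for illustration only.
NAMING_PATTERN = re.compile(r"^[a-z][a-z0-9-]*-flow$")    # e.g. "order-sync-flow"
REQUIRED_LOG_FIELDS = {"correlation_id", "system", "timestamp"}

def validate_flow(flow: dict) -> list:
    """Return a list of governance violations for a generated flow definition."""
    violations = []
    if not NAMING_PATTERN.match(flow.get("name", "")):
        violations.append("flow name does not follow the naming convention")
    missing = REQUIRED_LOG_FIELDS - set(flow.get("log_fields", []))
    if missing:
        violations.append("missing required log fields: %s" % sorted(missing))
    if not flow.get("error_handler"):
        violations.append("no standardized error handler configured")
    if flow.get("handles_pii") and not flow.get("pii_masking"):
        violations.append("flows that handle PII must enable masking")
    return violations

# Example: a generated flow that would be flagged before review.
print(validate_flow({"name": "OrderSync", "log_fields": ["timestamp"], "handles_pii": True}))
```

The article’s claim is that vertical agents apply this kind of discipline during generation itself, rather than leaving it to reviewers after the fact.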

The real ROI: Quality, consistency, predictability

Organizations adopting vertical agents report three consistent benefits:

1. Higher-Quality Integrations

Outputs follow correct patterns and platform rules—reducing defects and architectural drift.

2. Greater Consistency Across Teams

Standardized logic and structures eliminate developer-to-developer variability.

3. More Predictable Delivery Timelines

Less rework means smoother pipelines and faster delivery.

An enterprise that recently adopted CurieTech AI summarized the impact succinctly:

“For MuleSoft users, generic AI tools won’t cut it. But with domain-specific agents, the ROI is clear. Just start.”

For CIOs, these outcomes translate to increased throughput and higher trust in integration delivery.

Preparing for the agentic future

The industry is already moving beyond single responses toward agentic orchestration, where AI systems coordinate requirements gathering, design, mapping, development, testing, documentation, and deployment.

Vertical agents—because they understand multi-step integration workflows—are uniquely suited to lead this transition.

Generic coding agents lack the domain grounding to maintain coherence across these interconnected phases.

The bottom line

Generic coding assistants provide breadth, but vertical AI development agents deliver the depth, structure, and governance enterprise integrations require.

Vertical agents elevate both EAI and iPaaS programs by offering:

  • significantly higher accuracy
  • higher-quality, production-ready outputs
  • built-in governance and compliance
  • consistent logic and transformations
  • predictable delivery cycles

As integration workloads expand and become more central to digital transformation, organizations that adopt vertical AI agents early will deliver faster, with higher accuracy, and with far greater confidence.

In enterprise integrations, specialization isn’t optional—it is the foundation of the next decade of reliability and scale.


IT leaders turn to third-party providers to manage tech debt

As tech debt threatens to cripple many IT organizations, a huge number of CIOs have turned to third-party service providers to maintain or upgrade legacy software and systems, according to a new survey.

A full 95% of IT leaders are now using outside service providers to modernize legacy IT and reduce tech debt, according to a survey by MSP Ensono.

The push is in part due to the cost of legacy IT, with nearly half of those surveyed saying they paid more in the past year to maintain older IT systems than they had budgeted. More importantly, dealing with legacy applications and infrastructure is holding IT organizations back, as nearly nine in 10 IT leaders say legacy maintenance has hampered their AI modernization plans.

“Maintaining legacy systems is really slowing down modernization efforts,” says Tim Beerman, Ensono’s CTO. “It’s the typical innovator’s dilemma — they’re focusing on outdated systems and how to address them.”

In some cases, CIOs have turned to service providers to manage legacy systems, but in other cases, they have looked to outside IT teams to retire tech debt and modernize software and systems, Beerman says. One reason they’re turning to outside service providers is an aging employee base, with internal experts in legacy systems retiring and taking their knowledge with them, he adds.

“Not very many people are able to do it themselves,” Beerman says. “You have maturing workforces and people moving out of the workforce, and you need to go find expertise in areas where you can’t hire that talent.”

While the MSP model has been around for decades, the move to using it to manage tech debt appears to be a growing trend as organizations look to clear up budget and find time to deploy AI, he adds.

“If you look at the advent of a lot of new technology, especially AI, that’s moving much faster, and clients are looking for help,” Beerman says. “On one side, you have this legacy problem that they need to manage and maintain, and then you have technology moving at a pace that it hasn’t moved in years.”

Outsourcing risk

Ryan Leirvik, CEO at cybersecurity services firm Neuvik, also sees a trend toward using service providers to manage legacy IT. He sees several advantages, including matching the right experts to legacy systems, but CIOs may also use MSPs to manage their risk, he says.

“Of the many advantages, one primary advantage often not mentioned is shifting the exploitation or service interruption risk to the vendor,” he adds. “In an environment where vulnerability discovery, patching, and overall maintenance is an ongoing and expensive effort, the risk of getting it wrong typically sits with the vendor in charge.”

The number of IT leaders in the survey who overspent their legacy IT maintenance budgets also doesn’t surprise Leirvik, a former chief of staff and associate director of cyber at the US Department of Defense.

Many organizations have a talent mismatch between the IT infrastructure they have and the one they need to move to, he says. In addition, the ongoing maintenance of legacy software and systems often costs more than anticipated, he adds.

“There’s this huge maintenance tail that we weren’t expecting because the initial price point was one cost and the maintenance is 1X,” Leirvik says.

To get out of the legacy maintenance trap, IT leaders need foresight and discipline to choose the right third-party provider, he adds. “Take the long-term view — make sure the five-year plan lines up with this particular vendor,” he says. “Do your goals as an organization match up with where they’re going to help you out?”

Paying twice

While some IT leaders have turned to third-party vendors to update legacy systems, a recently released report from ITSM and customer-service software vendor Freshworks raises questions about the efficiency of modernization efforts.

More than three-quarters of those surveyed by Freshworks say software implementations take longer than expected, with two-thirds of those projects exceeding expected budgets.

Third-party providers may not solve the problems, says Ashwin Ballal, Freshworks’ CIO.

“Legacy systems have become so complex that companies are increasingly turning to third-party vendors and consultants for help, but the problem is that, more often than not, organizations are trading one subpar legacy system for another,” he says. “Adding vendors and consultants often compounds the problem, bringing in new layers of complexity rather than resolving the old ones.”

The solution isn’t adding more vendors, but new technology that works out of the box, Ballal adds.

“In theory, third-party providers bring expertise and speed,” he says. “In practice, organizations often find themselves paying for things twice — once for complex technology, and then again for consultants to make it work.”

Third-party vendors unavoidable

Other IT leaders see some third-party support as nearly inevitable. Whether it’s updating old code, moving workloads to the cloud, adopting SaaS tools, or improving cybersecurity, most organizations now need outside assistance, says Adam Winston, field CTO and CISO at cybersecurity vendor WatchGuard Technologies.

A buildup of legacy systems, including outdated remote-access tools and VPNs, can crush organizations with tech debt, he adds. Many organizations haven’t yet fully modernized to the cloud or to SaaS tools, and they will turn to outside providers when the time comes, he says.

“Most companies don’t build and design and manage their own apps, and that’s where all that tech debt basically is sitting, and they are in some hybrid IT design,” he says. “They may be still sitting in an era dating back to co-location and on-premise, and that almost always includes legacy servers, legacy networks, legacy systems that aren’t really following a modern design or architecture.”

Winston advises IT leaders to create plans to retire outdated technology and to negotiate service contracts that lean on vendors to keep IT purchases as up to date as possible. Too many vendors are quick to drop support for older products when new ones come out, he suggests.

“If you’re not going to upgrade, do the math on that legacy support and say, ‘If we can’t upgrade that, how are we going to isolate it?’” he says. “‘What is our graveyard segmentation strategy to move the risk in the event that this can’t be upgraded?’ The vendor due diligence leaves a lot of this stuff on the table, and then people seem to get surprised.”

CIOs should avoid specializing in legacy IT, he adds. “If you can’t amortize the cost of the software or the build, promise yourself that every new application that’s coming into the system is going to use the latest component,” Winston says.


Agentic AI’s rise is making the enterprise architect role more fluid

In a previous feature about enterprise architects, gen AI had emerged, but its impact on enterprise technology hadn’t yet been felt. Today, gen AI has spawned a plethora of agentic AI solutions from the major SaaS providers, and enterprise architecture and the role of the enterprise architect are being redrawn. So what do CIOs and their architects need to know?

Organizations, especially their CEOs, have been vocal about the need for AI to improve productivity and bring back growth, and analysts have backed the trend. Gartner, for example, forecasts that 75% of IT work will be completed by human employees using AI over the next five years, which will demand, it says, a proactive approach to identifying new value-creating IT work, like expanding into new markets, creating additional products and services, or adding features that boost margins.

If this radical change in productivity takes place, organizations will need a new plan for business processes and the tech that operates those processes. Recent history shows if organizations don’t adopt new operating models, the benefits of tech investments can’t be achieved.

As a result of agentic AI, processes will change, as well as the software used by the enterprise, and the development and implementation of the technology. Enterprise architects, therefore, are at the forefront of planning and changing the way software is developed, customized, and implemented.

In some quarters of the tech industry, gen AI is seen as a radical change to enterprise software, and to its large, well-known vendors. “To say AI unleashed will destroy the software industry is absurd, as it would require an AI perfection that even the most optimistic couldn’t agree to,” says Diego Lo Giudice, principal analyst at Forrester. Speaking at the One Conference in the fall, Lo Giudice reminded 4,000 business technology leaders that change is taking place, but it’s built on the foundations of recent successes.

“Agile has given better alignment, and DevOps has torn down the wall between developers and operations,” he said. “They’re all trying to do the same thing, reduce the gap between an idea and implementation.” He’s not denying AI will change the development of enterprise software, but like Agile and DevOps, AI will improve the lifecycle of software development and, therefore, the enterprise architecture. The difference is the speed of change. “In the history of development, there’s never been anything like this,” adds Phil Whittaker, AI staff engineer at content management software provider Umbraco.

Complexity and process change

As the software development and customization cycle changes, and agentic applications become commonplace, enterprise architects will need to plan for increased complexity and new business processes. Existing business processes can’t continue if agentic AI is taking on tasks currently done manually by staff.

Again, Lo Giudice adds some levity to a debate that can often become heated, especially in the wake of major redundancies by AI leaders such as AWS. “The view that everyone will get a bot that helps them do their job is naïve,” he said at the One Conference. “Organizations will need to carry out a thorough analysis of roles and business processes to ensure they spend money and resources on deploying the right agents to the right tasks. Failure to do so will lead to agentic technology being deployed that’s not needed, can’t cope with complex tasks, and increases the cloud costs of the business.”

“It’s easy to build an agent that has access to really important information,” says Tiago Azevedo, CIO for AI-powered low-code platform provider OutSystems. “You need segregation of data. When you publish an agent, you need to be able to control it, and there’ll be many agents, so costs will grow.”

The big difference, though, is between deterministic and non-deterministic approaches, says Whittaker. Non-deterministic agents can produce different, more random outcomes each time, so they need guardrails in the form of deterministic agents that produce the same output every time. Defining which business outcomes should be handled deterministically and which non-deterministically is a clear role for enterprise architecture. He adds that this is where AI can help organizations fill in gaps. Whittaker, who’s been an enterprise architect, says it’ll be vital for organizations to experiment with AI to see how it can benefit their architecture and, ultimately, business outcomes.

“The path to greatness lies not in chasing hype or dismissing AI’s potential, but in finding the golden middle ground where value is truly captured,” write Gartner analysts Daryl Plummer and Alicia Mullery. “AI’s promise is undeniable, but realizing its full value is far from guaranteed. Our research reveals the sobering odds that only one in five AI initiatives achieve ROI, and just one in 50 deliver true transformation.” Further research also finds just 32% of employees trust the organization’s leadership to drive transformation. “Agents bring an additional component of complexity to architecture that makes the role so relevant,” Azevedo adds.

In the past, enterprise architects were focused on frameworks. Whittaker points out that new technology models will need to be understood and deployed by architects to manage an enterprise that comprises employees, applications, databases, and agentic AI. He cites MCP as one example, as it provides a standard way to connect AI models to data sources, and simplifies the current tangle of bespoke integrations and RAG implementations. AI will also help architects with this new complexity. “There are tools for planning, requirements, creating epics, user stories, code generation, documenting code, and translating it,” added Lo Giudice.

New responsibilities

Agentic AI is now a core feature of every major EA tool, says Stéphane Vanrechem, senior analyst at Forrester. “These agents automate data validation, capability mapping, and artifact creation, freeing architects to focus on strategy and transformation.” He cites the technology of Celonis, SAP Signavio, and ServiceNow for their agentic integrations. Whittaker adds that the enterprise architect has become an important human in the loop to protect the organization and be responsible for the decisions and outcomes that agentic AI delivers.

Although some enterprise architects will see this as a collapse of their specialization, Whittaker thinks it broadens the scope of the role and makes them more T-shaped. “I can go deep in different areas,” he says. “Pigeon-holing people is never a great thing to do.”

Traditionally, architecture has suggested that something is planned, built, and then exists. The rise of agentic AI in the enterprise means the role of the enterprise architect is becoming more fluid as they continue to design and oversee construction. But the role will also involve continual monitoring and adjustment to the plan. Some call this orchestration, or perhaps it’s akin to map reading. An enterprise architect may plan a route, but other factors will alter the course. And just like weather or a fallen tree, which can lead to a route deviation, so too will enterprise architects plan and then lead when business conditions change.

Again, this new way of being an enterprise architect will be impacted by technology. Lo Giudice believes there’ll be increased automation, and Azevedo sides with the orchestration view, saying agents are built and a catalogue of them is created across the organization, which is an opportunity for enterprise architects and CIOs to be orchestrators.

Whatever the job title, Whittaker says enterprise architecture is more important than ever. “More people will become enterprise architects as more software is written by AI,” he says. “Then it’s an architectural role to coordinate and conduct the agents in front of you.” He argues that as technologists allow agents and AI to do the development work for them, the responsibility of architecting how agents and processes function broadens and becomes the responsibility of many more technologists.

“AI can create code for you, but it’s your responsibility to make sure it’s secure,” he adds. Rather than developing the code, technology teams will become architecture teams, checking and accepting the technology that AI has developed, and then managing its deployment into the business processes.

With shadow AI already embedded in organizations, Whittaker’s view shows the need for a team of enterprise architects that can help the business align with the AI agents they’ve deployed, and at the same time protect customer data and cybersecurity posture.

AI agents are redrawing the enterprise, and at the same time replanning the role of enterprise architects.

From cloud-native to AI-native: Why your infrastructure must be rebuilt for intelligence

The cloud-native ceiling

For the past decade, the cloud-native paradigm — defined by containers, microservices and DevOps agility — served as the undisputed architecture of speed. As CIOs, you successfully used it to decouple monoliths, accelerate release cycles and scale applications on demand.

But today, we face a new inflection point. The major cloud providers are no longer just offering compute and storage; they are transforming their platforms to be AI-native, embedding intelligence directly into the core infrastructure and services. This is not just a feature upgrade; it is a fundamental shift that determines who wins the next decade of digital competition. If you continue to treat AI as a mere application add-on, your foundation will become an impediment. The strategic imperative for every CIO is to recognize AI as the new foundational layer of the modern cloud stack.

This transition from an agility-focused cloud-native approach to an intelligence-focused AI-native one requires a complete architectural and organizational rebuild. It is the CIO’s journey to the new digital transformation in the AI era. According to McKinsey’s “The state of AI in 2025: Agents, innovation and transformation,” while 80 percent of respondents set efficiency as an objective of their AI initiatives, the leaders of the AI era are those who view intelligence as a growth engine, often setting innovation and market expansion as additional, higher-value objectives.

The new architecture: Intelligence by design

The AI lifecycle — data ingestion, model training, inference and MLOps — imposes demands that conventional, CPU-centric cloud-native stacks simply cannot meet efficiently. Rebuilding your infrastructure for intelligence focuses on three non-negotiable architectural pillars:

1. GPU-optimization: The engine of modern compute

The single most significant architectural difference is the shift in compute gravity from the CPU to the GPU. AI models, particularly large language models (LLMs), rely on massive parallel processing for training and inference. GPUs, with their thousands of cores, are the only cost-effective way to handle this.

  • Prioritize acceleration: Establish a strategic layer to accelerate AI vector search and handle data-intensive operations. This ensures that every dollar spent on high-cost hardware is maximized, rather than wasted on idle or underutilized compute cycles.
  • A containerized fabric: Since GPU resources are expensive and scarce, they must be managed with surgical precision. This is where the Kubernetes ecosystem becomes indispensable, orchestrating not just containers, but high-cost specialized hardware.

2. Vector databases: The new data layer

Traditional relational databases are not built to understand the semantic meaning of unstructured data (text, images, audio). The rise of generative AI and retrieval augmented generation (RAG) demands a new data architecture built on vector databases.

  • Vector embeddings — the mathematical representations of data — are the core language of AI. Vector databases store and index these embeddings, allowing your AI applications to perform instant, semantic lookups (a minimal sketch of such a lookup follows this list). This capability is critical for enterprise-grade LLM applications, as it provides the model with up-to-date, relevant and factual company data, drastically reducing “hallucinations.”
  • This is the critical element that vector databases provide — a specialized way to store and query vector embeddings, bridging the gap between your proprietary knowledge and the generalized power of a foundation model.
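
As a minimal sketch of what a semantic lookup involves (using plain NumPy and toy four-dimensional vectors rather than a real embedding model or vector database), the core operation is a nearest-neighbor search over embeddings:

```python
import numpy as np

# Toy embeddings; a production system would use an embedding model with
# hundreds or thousands of dimensions and a vector database for indexing.
documents = {
    "refund policy":     np.array([0.9, 0.1, 0.0, 0.2]),
    "shipping times":    np.array([0.1, 0.8, 0.3, 0.0]),
    "warranty coverage": np.array([0.7, 0.2, 0.1, 0.4]),
}

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def semantic_lookup(query_embedding, top_k=2):
    """Return the documents whose embeddings are closest to the query embedding."""
    ranked = sorted(
        documents.items(),
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return ranked[:top_k]

# A query vector close to "refund policy" retrieves it first; in a RAG pipeline
# the retrieved text is then passed to the LLM as grounding context.
print(semantic_lookup(np.array([0.85, 0.15, 0.05, 0.25])))
```

A vector database typically wraps this same idea in scalable approximate nearest-neighbor indexing and metadata filtering.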

3. The orchestration layer: Accelerating MLOps with Kubernetes

Cloud-native made DevOps possible; AI-native requires MLOps (machine learning operations). MLOps is the discipline of managing the entire AI lifecycle, which is exponentially more complex than traditional software due to the moving parts: data, models, code and infrastructure.

Kubernetes (K8s) has become the de facto standard for this transition. Its core capabilities — dynamic resource allocation, auto-scaling and container orchestration — are perfectly suited for the volatile and resource-hungry nature of AI workloads.

By leveraging Kubernetes for running AI/ML workloads, you achieve:

  • Efficient GPU orchestration: K8s ensures that expensive GPU resources are dynamically allocated based on demand, enabling fractional GPU usage (time-slicing or MIG) and multi-tenancy. This eliminates long wait times for data scientists and prevents costly hardware underutilization (a minimal sketch of such a request follows this list).
  • MLOps automation: K8s and its ecosystem (like Kubeflow) automate model training, testing, deployment and monitoring. This enables a continuous delivery pipeline for models, ensuring that as your data changes, your models are retrained and deployed without manual intervention. This MLOps layer is the engine of vertical integration, ensuring that the underlying GPU-optimized infrastructure is seamlessly exposed and consumed as high-level PaaS and SaaS AI services. This tight coupling ensures maximum utilization of expensive hardware while embedding intelligence directly into your business applications, from data ingestion to final user-facing features.
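
As a small illustration of what a GPU request looks like at the Kubernetes API level (a generic sketch, not tied to any particular platform; the image name, namespace and labels are placeholders), a training pod simply declares a GPU resource limit and lets the scheduler handle placement:

```python
from kubernetes import client, config   # pip install kubernetes

def make_training_pod() -> client.V1Pod:
    """Build a pod spec that requests one GPU; the scheduler handles placement."""
    container = client.V1Container(
        name="trainer",
        image="registry.example.com/llm-train:latest",   # placeholder image
        command=["python", "train.py"],
        resources=client.V1ResourceRequirements(
            # "nvidia.com/gpu" requests a whole device; MIG profiles expose
            # fractional GPUs under their own resource names.
            limits={"nvidia.com/gpu": "1", "memory": "32Gi"},
        ),
    )
    return client.V1Pod(
        metadata=client.V1ObjectMeta(name="train-job", labels={"team": "ml"}),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )

if __name__ == "__main__":
    config.load_kube_config()   # assumes a local kubeconfig with access to the cluster
    client.CoreV1Api().create_namespaced_pod(namespace="ml", body=make_training_pod())
```

Higher-level MLOps tooling such as Kubeflow builds on resource specs like this, so data scientists rarely have to write them by hand.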

Competitive advantage: IT as the AI driver

The payoff for prioritizing this infrastructure transition is significant: a decisive competitive advantage. When your platform is AI-native, your IT organization shifts from a cost center focused on maintenance to a strategic business driver.

Key takeaways for your roadmap:

  1. Velocity: By automating MLOps on a GPU-optimized, Kubernetes-driven platform, you accelerate the time-to-value for every AI idea, allowing teams to iterate on models in weeks, not quarters.
  2. Performance: Infrastructure investments in vector databases and dedicated AI accelerators ensure your models are always running with optimal performance and cost-efficiency.
  3. Strategic alignment: By building the foundational layer, you are empowering the business, not limiting it. You are executing the vision outlined in “A CIO’s guide to leveraging AI in cloud-native applications,” positioning IT to be the primary enabler of the company’s AI vision, rather than an impediment.

Conclusion: The future is built on intelligence

The move from cloud-native to AI-native is not an option; it is a market-driven necessity. The architecture of the future is defined by GPU-optimization, vector databases and Kubernetes-orchestrated MLOps.

As CIO, your mandate is clear: lead the organizational and architectural charge to install this intelligent foundation. By doing so, you move beyond merely supporting applications to actively governing intelligence that spans and connects the entire enterprise stack. This intelligent foundation requires a modern, integrated approach. AI observability must provide end-to-end lineage and automated detection of model drift, bias and security risks, enabling AI governance to enforce ethical policies and maintain regulatory compliance across the entire intelligent stack. By making the right infrastructure investments now, you ensure your enterprise has the scalable, resilient and intelligent backbone required to truly harness the transformative power of AI. Your new role is to be the Chief Orchestration Officer, governing the engine of future growth.

This article is published as part of the Foundry Expert Contributor Network.

Benchmarking Chinese CPUs

By: Lewin Day

When it comes to PCs, Westerners are most familiar with x86/x64 processors from Intel and AMD, with Apple Silicon taking up a significant market share, too. However, in China, a relatively new CPU architecture is on the rise. A fabless semiconductor company called Loongson has been producing chips with its LoongArch architecture since 2021. These chips remain rare outside China, but some in the West have been benchmarking them.

[Daniel Lemire] has recently blogged about the performance of the Loongson 3A6000, which debuted in late 2023. The chip was put through a range of simple benchmarking tests involving float processing and string transcoding operations. [Daniel] compared it to the Intel Xeon Gold 6338 from 2021, noting the Intel chip performed better pretty much across the board, which is no surprise given its higher clock rate. Meanwhile, the gang over at [Chips and Cheese] ran even more exhaustive tests on the same chip last year. The Loongson was put through typical tasks like compressing archives and encoding video. The outlet came to the conclusion that the chip was a little weaker than older CPUs like AMD’s Zen 2 line and Intel’s 10th-generation Core chips. It’s also limited as a four-core chip, compared to modern Intel and AMD lines that often start at six cores as a minimum.

If you find yourself interested in Loongson’s product, don’t get too excited. They’re not exactly easy to lay your hands on outside of China, and even the company’s own website is difficult to access from beyond those shores. You might try reaching out to Loongson-oriented online communities if you seek such hardware.

Different CPU architectures have perhaps never been more relevant, particularly as we see the x86 stalwarts doing battle with the rise of desktop and laptop ARM processors. If you’ve found something interesting regarding another obscure kind of CPU, don’t hesitate to let the tipsline know!

「むンタヌネットが壊れた」Cloudflareが止たるずXやChatGPTたで巻き蟌たれる理由

むンタヌネットの「圱の支配者」、Cloudflareずは䞀䜓䜕者なのか

たず最初に、Cloudflareずいう䌚瀟が䞀䜓䜕をしおいるのか、その正䜓を解き明かしおいきたしょう。䞀蚀で衚珟するならば、圌らは「䞖界䞭のりェブサむトの玄関口に立っお、亀通敎理ず譊備を䞀手に匕き受けおいる超巚倧な門番」です。私たちが普段、スマヌトフォンやパ゜コンからりェブサむトを芋るずき、感芚ずしおは自分の端末から目的のサむトぞ盎接繋がっおいるように思えたす。しかし、珟圚のむンタヌネットの倚くはそう単玔ではありたせん。XやChatGPT、Discordずいった人気サヌビスの倚くは、自分のサヌバヌの手前にCloudflareずいう「盟」兌「加速装眮」を配眮しおいたす。぀たり、私たちの通信は「ナヌザヌの端末」から「Cloudflareのサヌバヌ」を経由しお、初めお「本物のサヌバヌ」に届くずいうルヌトを蟿っおいるのです。

Why route traffic through such an intermediary at all? There are three big reasons. The first is sheer speed. Cloudflare operates data centers in cities all over the world. If you are in Japan and want to view a website run by a company in the United States, fetching the data from an American server every time would be slow. So Cloudflare temporarily stores a copy (a cache) of the website on its servers inside Japan. Because users receive the data from a nearby Cloudflare server, pages load remarkably fast. This technique is called a CDN (content delivery network), and it is essential to the smooth browsing we now take for granted.
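
To make the CDN idea concrete, the caching logic at an edge node can be reduced to a toy sketch; this is a deliberate simplification, not Cloudflare’s implementation, and the TTL value is made up:

```python
import time
import urllib.request

CACHE_TTL_SECONDS = 60   # made-up freshness window
_cache = {}              # url -> (fetched_at, body)

def edge_fetch(url: str) -> bytes:
    """Serve a response from the local edge cache when fresh; otherwise fetch from the origin."""
    entry = _cache.get(url)
    if entry and time.time() - entry[0] < CACHE_TTL_SECONDS:
        return entry[1]                          # cache hit: answered locally, no trip to the origin
    with urllib.request.urlopen(url) as resp:    # cache miss: go all the way to the origin server
        body = resp.read()
    _cache[url] = (time.time(), body)
    return body
```

A real CDN adds cache-control headers, purging and a global fleet of servers, but the speed-up comes from exactly this shortcut: most requests never leave the nearby data center.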

The second reason is ironclad defense. Popular websites attract not only ordinary users but also malicious hackers, malware and hostile traffic aimed at knocking servers offline. Cloudflare absorbs all of this in place of the real server, instantly telling cyberattacks and abusive bots apart from legitimate visitors, bouncing the former and letting only the latter through. To survive DDoS attacks, which flood a service with massive volumes of traffic, a giant breakwater like Cloudflare is indispensable.

The third reason is address lookup for the internet. When we type “x.com” or “openai.com” into a browser, the computer needs to find out where that site lives on the network (its IP address). Cloudflare also provides the DNS (Domain Name System) service that performs this name resolution. In short, Cloudflare single-handedly covers three functions at the core of running a website: making it fast, defending it from attackers and telling the world where it is. Roughly 20 percent of the world’s websites are said to use Cloudflare, which is why it is no exaggeration to call the company part of the internet’s basic infrastructure.

Why one company’s failure takes down services around the world

Now that Cloudflare’s role is clear, the next question is why X and ChatGPT both become unusable the moment Cloudflare goes down. The services are run by different companies, with servers in different places. Why do they fail at the same time, as if by prior agreement?

To understand this, picture the internet as a highway system. Services such as X and ChatGPT are the attractive theme parks, the destinations. Cloudflare is the enormous toll plaza and junction on the highway that leads to them. The toll plaza is extremely capable: cars with valid tickets pass through smoothly, while reckless or dangerous vehicles are forcibly turned away. That is exactly why each theme park has contracted to use it.

Normally the system works flawlessly. But what happens if a system error causes every gate at that giant toll plaza to jam shut at once? The theme parks themselves are open for business and nothing inside them is broken, but if the only entrance leading to them is blocked, we, the visitors, cannot reach our destination. Traffic backs up, and our car’s navigation system (the browser) shows the merciless error message that the destination cannot be reached.

That is what actually happens during a Cloudflare outage. X’s servers have not broken, and ChatGPT’s AI has not run amok. The Cloudflare entrance sitting in front of those services has closed, so no one can get in. Technically, browsers often display numbers such as “502 Bad Gateway” or “500 Internal Server Error.” These are essentially a cry for help from the gatekeeper: “I received your request, but I could not connect you to the real server behind me.”

In modern internet service development, building a global high-speed delivery network and advanced security systems in-house takes enormous cost and time. It is cheaper, better performing and safer to pay a specialist like Cloudflare to handle it. As a result, many companies have, quite rationally, adopted Cloudflare. That is the right call as a business, but structurally it means everyone is hanging from the same single lifeline. So the moment trouble hits that lifeline, every service that depends on it, regardless of industry or country, gets dragged down like falling dominoes, and the world experiences it as a global “internet outage.”

November 18, 2025: what actually happened in the digital world

So what exactly went on during the major outage of November 18, 2025? Piecing together the detailed postmortem published by Cloudflare CEO Matthew Prince and reporting from tech outlets such as The Verge, a butterfly-effect-like chain of events emerges, in which a small trigger led to an enormous system failure. The bottom line: this was not an external cyberattack but an unanticipated mistake during an internal system update.

It began with an update to Cloudflare’s bot management system, an important function that detects and blocks access by non-human programs (bots). Cloudflare’s engineers were hardening the security of the database behind that system, specifically cleaning up database access permissions and migrating to a safer configuration. It was routine improvement work that, by rights, should not have caused any problems.

The change, however, produced an unexpected side effect. When the program that generates the rules (signatures) used to identify bots ran under the new settings, the data it pulled from the database contained duplicates. In cooking terms, it was as if, while writing out a recipe’s ingredient list, “sugar” accidentally got added two or three times. As a result, the generated configuration file (the feature file) ballooned to roughly twice its normal size.

This bloated configuration file was automatically distributed to Cloudflare’s servers around the world. The software waiting on each server received the file as usual and tried to load it. Here the fatal problem surfaced: the software’s code contained limits on how large a file it would load and assumptions about how memory would be used. Handed a file far larger than anything it expected, the software exhausted its memory, blew past its processing limits, panicked and crashed.

To make matters worse, Cloudflare’s systems are designed to be highly resilient: when a process crashes, the system automatically tries to restart it. But on restart it loaded the same bloated file. The result was a “restart loop,” processes starting, crashing, starting and crashing again, breaking out simultaneously on thousands of servers worldwide. That was the direct cause of the large-scale outage.

The outage became severe from around 11:20 a.m. UTC. Not only did major services such as X, ChatGPT, Zoom, Spotify and Canva become hard to reach; ironically, so did DownDetector, the site people use to check outage reports. Cloudflare’s engineering team initially suspected that the abnormal collapse in traffic meant they were under a massive DDoS attack. As the investigation progressed, however, they found no trace of an external attack and traced the cause to the configuration file they themselves had distributed.

Once the cause was identified, the response was swift. Engineers stopped distributing the faulty configuration file and rolled back to an earlier, known-good version. They also temporarily paused the bot management system’s automatic updates to let things settle. As a result, the bulk of traffic recovered within a few hours, and from late on the 18th into the early hours of the 19th, Japan time, the internet gradually returned to normal. CEO Matthew Prince called the incident the company’s worst outage since 2019 and promised thorough measures to prevent a recurrence, including stricter size checks on configuration files, a better “kill switch” for shutting down a misbehaving system in an emergency, and a redesign so that collecting error logs does not itself overload the servers.
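
Purely as an illustration of the kind of guard described above (this is not Cloudflare’s code; the file names and size limit are hypothetical), the principle is to validate a newly generated configuration file before loading it and to fall back to the last known-good version when it fails the check:

```python
import json
import os
import shutil

MAX_CONFIG_BYTES = 5 * 1024 * 1024      # hypothetical hard upper bound on the feature file
NEW_CONFIG = "feature_file_new.json"    # hypothetical paths
GOOD_CONFIG = "feature_file_last_good.json"

def load_config_safely() -> dict:
    """Load the new config only if it passes sanity checks; otherwise keep the last known-good one."""
    try:
        size = os.path.getsize(NEW_CONFIG)
        if size > MAX_CONFIG_BYTES:
            raise ValueError("config too large: %d bytes" % size)
        with open(NEW_CONFIG) as f:
            config = json.load(f)                    # also rejects malformed content
        shutil.copyfile(NEW_CONFIG, GOOD_CONFIG)     # promote to known-good
        return config
    except (OSError, ValueError) as exc:
        print("rejecting new config (%s); serving the last known-good version" % exc)
        with open(GOOD_CONFIG) as f:
            return json.load(f)
```

The point is not the specific checks but the posture: treat your own configuration pipeline as untrusted input, so that one bad file degrades gracefully instead of crash-looping a global fleet.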

Cloudflare, for its part, says it will strengthen its validation processes and build mechanisms so that a single configuration mistake cannot sweep across the entire world at once. But the underlying structure, in which a handful of giant providers concentrate so much of the internet’s behind-the-scenes functionality, will not change overnight.

There is not much that ordinary users can do. Still, when X or ChatGPT seems to be down, it helps to keep in mind that the problem may lie not with the service itself but with backstage infrastructure such as Cloudflare or a cloud provider. That perspective makes the news much easier to follow.

The “internet broke” uproar was a reminder that today’s online society, supported by convenient, fast and secure infrastructure, is at the same time a world that depends heavily on a small number of critical hubs.

A quiet fragmentation: when the convenience of the cloud robs enterprise systems of the whole

Enterprise information systems used to resemble large buildings. The information systems department acted as the architect, drawing blueprints that spanned the whole company; core systems went through a major overhaul roughly once a decade, and surrounding systems were consolidated and rebuilt on multi-year plans. There was rigidity in that, but there was also order, a company-specific “grammar of integration” that included the knowledge and unwritten rules accumulated through years of operation. Someone held the map of the whole and knew how data flowed and where responsibility sat. The rapid move to the cloud, which has outpaced governance and whole-of-enterprise design thinking, is shaking that premise to its foundations. Business units now choose, adopt and discard the tools they judge best, on their own authority. That very ease and autonomy is diluting the sense of integration companies once had and turning their system landscapes into stitched-together patchwork. This article looks at the quiet crisis of structural fragmentation facing today’s enterprise systems, what lies at its core and what can be done about it.

Fragmentation appears not as an outage but as an everyday loss of consistency

When most people hear “system trouble,” they picture something dramatic and visible: a company-wide network outage, or a data breach caused by a cyberattack. But the modern fragmentation brought on by cloud and SaaS adoption does not announce itself with that kind of obvious rupture. It is closer to an illness that advances quietly but steadily beneath an apparently uneventful daily routine. The systems are running, no error messages appear, screens respond as usual. And yet, across the organization, a sense that “things don’t quite mesh” keeps settling in.

For example, two systems that are supposedly recording the same event show subtly different timestamps in their logs. Batch jobs that link departments start running late for no clear reason, and in meetings no one can say on the spot which number is the correct one. Or the API specification of a SaaS product changes without notice; the processing itself still succeeds, but the meaning of the data being handed over has shifted, and figures that no longer match reality keep being reported to management. Then audit season arrives, and numbers that should reconcile perfectly on paper turn out not to exist anywhere in the systems. Incidents like these are usually not counted as system “failures.” Instead, they are absorbed as invisible costs: staff patch things up by hand in Excel, or reconcile the discrepancies over email.

This means enterprise systems are no longer monolithic platforms, nor walled gardens that a single logic can govern. Legacy systems built in-house, countless function-specific SaaS products, platforms managed by outsourcing partners, interfaces to partner companies: day after day these are newly combined, connected and, once no longer needed, cut loose. The pattern of connections shifts with every new tool adoption and every contract renewal. Enterprises have always been social complex systems in which people, work and information intertwine. With the arrival of highly mutable cloud services, that complexity has changed from a fixed structure into something like an amoeba that never stops changing shape. The old, comfortable assumption that a design drawn once would run stably for years no longer holds. Fragmentation eats away at the efficiency and reliability of daily operations from the inside, not as a system defect but like cholesterol quietly clogging the organization’s arteries.

An accumulation of well-intentioned, “locally correct” decisions ends up distorting the whole

What makes the problem so intractable is that the fragmentation is not caused by anyone’s negligence or malice. On the contrary, the actions of individual people on the ground are, from their own vantage point, entirely rational and often based on good intentions that deserve praise. That is where the difficulty of modern IT governance lies.

The sales department adopts the latest SaaS best suited to customer management in order to hit its quarterly targets. Marketing signs up for a separate analytics tool to catch shifts in the market. The IT department scrambles to minimize security risk with limited budget and staff, while executives demand the speed and change needed to keep up with competitors. Each department’s judgment is correct, and coherent, within its own “part.” But in today’s highly interdependent system environments, the accumulation of partially correct decisions does not necessarily produce a correct whole. On the contrary, the more each department pursues its local optimum, the more overall consistency is lost and the more distorted the company-wide picture of the systems becomes, a genuine paradox.

Accelerating this is the disappearance of boundaries and the difficulty of redrawing them. Integration once meant drawing clear lines: who is responsible for what, and which system is the source of truth. But when SaaS products that resist governance are adopted department by department, fixing those boundary lines becomes impossible. The data model of one SaaS tool unintentionally becomes a constraint on another department’s workflow; a specification change in a single tool ripples into the company-wide inventory rules. As technology and organization shape each other and the lines of authority blur, companies are losing the words needed to describe their own structure.

This may be the true vulnerability many companies face. However much the word DX (digital transformation) is thrown around, and however many expensive visualization tools and dashboards are deployed, if the meaning and assumptions behind the information are not shared across the company, what executives are looking at is merely fragments of numbers. One department’s definition of “customer” differs from another’s, yet the figures are merged anyway. Even a word like “revenue” is booked on subtly different criteria and timing from tool to tool. When only the systems are connected while these gaps in meaning are left unresolved, a company loses the ability to describe its own reality accurately. An organization that cannot explain its own structure cannot deliberately transform it. Complexity that has slipped out of control strips away the ability to adapt to external change and, in time, erodes the company’s competitiveness itself.

Beyond technical integration: recovering the capacity to imagine the whole

Should we, then, confront this modern ailment of fragmentation by returning to the strong, centralized IT control of the past, with the information systems department vetting every tool and imposing rigid standardization that admits no exceptions? The answer is almost certainly no. In an era of relentless change, the idea of forcing everything into one giant system or a single standard is unrealistic, and stripping the front lines of their autonomy would rob the company of its greatest weapon, agility. Decentralization is a risk, but it is also a source of diverse value.

What matters is not to treat “frontline autonomy” and “whole-of-enterprise consistency” as an either-or conflict, but to build a new framework that holds the two in healthy tension. That requires raising our sights beyond the layer of technical integration, wiring APIs together and moving data, to a layer of semantic integration: how the structure is understood, and how it is retold. The essence of integration is not connecting scattered systems with cables; it is preserving the coherence of the story the company tells about itself. Which information is created where, under what context and whose responsibility is it transformed, and whose decisions does it ultimately inform? Continuously sharing that flow of information (its data lineage) and its chain of meaning across departmental walls is indispensable.

The one reliable compass here is the capacity to imagine the whole. That does not mean a lone genius architect who has memorized every specification. It means an organization that humbly accepts its system landscape as something too complex to grasp completely, and that keeps pooling what each party can see and redrawing the overall picture. Management, the IT department and the business units put into words the world visible from their own “part,” and where the pieces fail to mesh, they treat it not as a system problem but as a problem of perception and definition, and talk it through. Through that process, an organization comes to recognize what it does not know and where its blind spots are, and that organizational metacognition is what deserves to be called real governance today.

Tools are swapped out daily and data keeps growing explosively. In that torrent, no amount of polishing individual components will help if the view of the whole is missing; the organization can only drift. For a company to keep its outline and keep changing on its own terms in a fragmenting world, investment in the human intelligence that retells its structure and tries to imagine the whole is needed as much as, or more than, investment in technology. In this age of partial optimization, blanketed by cloud and SaaS, that is the most unglamorous yet most essential and resilient competitive strength a company can hold.

How to do a Security Review – An Example

By: Jo
Learn how to perform a complete Security Review for new product features—from scoping and architecture analysis to threat modeling and risk assessment. Using a real-world chatbot integration example, this guide shows how to identify risks, apply security guardrails, and deliver actionable recommendations before release.

Yeske helped change what complying with zero trust means

The Cybersecurity and Infrastructure Security Agency developed a zero trust architecture that features five pillars.

The Defense Department’s zero trust architecture includes seven pillars.

The one the Department of Homeland Security is implementing takes the best of both architectures and adds a little more to the mix.

Don Yeske, who recently left federal service after serving for the last two-plus years as the director of national security in the cyber division at DHS, said the agency had to take a slightly different approach for several reasons.


“If you look at OMB [memo] M-22-09 it prescribes tasks. Those tasks are important, but that itself is not a zero trust strategy. Even if you do everything that M-22-09 told us to do — and by the way, those tasks were due at the beginning of this year — even if you did it all, that doesn’t mean, goal achieved. We’re done with zero trust. Move on to the next thing,” Yeske said during an “exit” interview on Ask the CIO. “What it means is you’re much better positioned now to do the hard things that you had to do and that we hadn’t even contemplated telling you to do yet. DHS, at the time that I left, was just publishing this really groundbreaking architecture that lays out what the hard parts actually are and begins to attack them. And frankly, it’s all about the data pillar.”

The data pillar of zero trust is among the toughest ones. Agencies have spent much of the past two years focused on other parts of the architecture, like improving their cybersecurity capabilities in the identity and network pillars.

Yeske, who now is a senior solutions architect federal at Virtru, said the data pillar challenge for DHS is even bigger because of the breadth and depth of its mission. He said between the Coast Guard, FEMA, Customs and Border Protection and CISA alone, there are multiple data sources, requirements and security rules.

“What’s different about it is we viewed the problem of zero trust as coming in broad phases. Phase one, where you’re just beginning to think about zero trust, and you’re just beginning to adjust your approach, is where you start to take on the idea that my network boundary can’t be my primary, let alone sole line of defense. I’ve got to start shrinking those boundaries around the things that I’m trying to protect,” he said. “I’ve got to start defending within my network architecture, not just from the outside, but start viewing the things that are happening within my network with suspicion. Those are all building on the core tenets of zero trust.”

Capabilities instead of product focused

He said initial zero trust strategies stopped there, at segmenting networks and protecting data at rest.

But to get to this point, he said agencies too often are focused on implementing specific products around identity or authentication and authorization processes.

“It’s a fact that zero trust is something you do. It’s not something you buy. In spite of that, federal architecture has this pervasive focus on product. So at DHS, the way we chose to describe zero trust capability was as a series of capabilities. We chose, without malice or forethought, to measure those capabilities at the organization, not at the system, not at the component, not as a function of design,” Yeske said. “Organizations have capabilities, and those capabilities are comprised of three big parts: People. Who’s responsible for the thing you’re describing within your organization? Process. How have you chosen to do the thing that you’re describing at your organization? And products. What helps you do that?”

Yeske said the third part is technology, which, too often, is intertwined with the product part.
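
One way to read that framing, sketched below in Python purely for illustration, is as a simple data model: a capability is owned by people, realized through a process, and only supported by technology. The field values and names here are hypothetical, not DHS's actual definitions.

    from dataclasses import dataclass, field

    @dataclass
    class Capability:
        """An organization-level capability described by people, process and technology."""
        name: str
        people: str              # who in the organization is responsible for it
        process: str             # how the organization has chosen to do it
        technology: list         # what helps them do it; products support the capability
        depends_on: list = field(default_factory=list)

    # Illustrative example only; not taken from the DHS architecture.
    mfa = Capability(
        name="phishing-resistant multi-factor authentication",
        people="identity and access management team",
        process="privileged and remote access requires a phishing-resistant factor",
        technology=["FIDO2 security keys", "PIV smart cards"],
        depends_on=["enterprise identity store"],
    )
    print(f"{mfa.name}: owned by {mfa.people}; depends on {', '.join(mfa.depends_on)}")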

He said the DHS architecture moved away from focusing on product or technology, and instead tried to answer the simple, yet complex, questions: What’s more important right now? What are the things that I should spend my limited pool of dollars on?

“We built a prioritization mechanism, and we built it on the idea that each of those capabilities, once we understand their inherent relationships to one another, form a sort of Maslow’s hierarchy of zero trust. There are things that are more basic, that if you don’t do this, you really can’t do anything else, and there are things that are really advanced, that once you can do basically everything else you can contemplate doing this. And there are a lot of things in between,” he said. “We took those 46 capabilities based on their inherent logical relationships, and we came up with a prioritization scheme so that you could, if you’re an organization implementing zero trust, prioritize the products, process and technologies.”

Understanding cyber tool dependencies

DHS defined those 46 capabilities based on the organization’s ability to perform that function to protect its data, systems or network.

Yeske said, for example, with phishing-resistant, multi-factor authentication, DHS didn’t specify the technology or product needed, but just the end result of the ability to authenticate users using multiple factors that are resistant to phishing.

“We’re describing something your organization needs to be able to do because if you can’t do that, there are other things you need to do that you won’t be able to do. We just landed on 46, but that’s not actually all that weird. If you look at the Defense Department’s zero trust roadmap, it contains a similar number of things they describe as capability, which are somewhat different,” said Yeske, who spent more than 15 years working for the Navy and Marine Corps before coming to DHS. “We calculated a 92% overlap between the capabilities we described in our architecture and the ones DoD described. And the 8% difference is mainly because the DHS one is brand new. So just understanding that the definition of each of these capabilities also includes two types of relationships, a dependency, which is where you can’t have this capability unless you first had a different one.”
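
That “Maslow’s hierarchy” is, in effect, a dependency graph over capabilities. The sketch below shows one way an organization might derive priority tiers from such a graph; the capability names and edges are invented for illustration and are not the actual 46 DHS capabilities.

    from collections import defaultdict

    # capability -> capabilities it depends on (hypothetical examples)
    DEPENDS_ON = {
        "asset inventory": [],
        "enterprise identity store": [],
        "phishing-resistant MFA": ["enterprise identity store"],
        "network microsegmentation": ["asset inventory"],
        "data tagging": ["asset inventory"],
        "attribute-based access control": ["phishing-resistant MFA", "data tagging"],
    }

    def priority_tiers(depends_on):
        """Group capabilities into tiers: tier 0 has no dependencies, and each
        later tier depends only on earlier tiers (Kahn's topological sort)."""
        indegree = {cap: len(deps) for cap, deps in depends_on.items()}
        dependents = defaultdict(list)
        for cap, deps in depends_on.items():
            for dep in deps:
                dependents[dep].append(cap)

        tiers = []
        ready = sorted(cap for cap, count in indegree.items() if count == 0)
        while ready:
            tiers.append(ready)
            unlocked = []
            for cap in ready:
                for child in dependents[cap]:
                    indegree[child] -= 1
                    if indegree[child] == 0:
                        unlocked.append(child)
            ready = sorted(unlocked)
        return tiers

    for tier_number, tier in enumerate(priority_tiers(DEPENDS_ON)):
        print(f"Tier {tier_number}: {', '.join(tier)}")

The point mirrors Yeske's description: spend first on the capabilities everything else depends on, because nothing above them works until they exist.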

Yeske said before he left DHS in July, the zero trust architecture and framework had been approved for use and most of the components had a significant number of cyber capabilities in place.

He said the next step was assessing the maturity of those capabilities and figuring out how to move them forward.

Yeske said other agencies interested in this approach should be able to request a copy of the DHS architecture.

The post Yeske helped change what complying with zero trust means first appeared on Federal News Network.


Synergy between cyber security Mesh & the CISO role: Adaptability, visibility & control

By: slandau

With over two decades of experience in the cyber security industry, I specialize in advising organizations on how to optimize their financial investments through the design of effective and cost-efficient cyber security strategies. Since the year 2000, I’ve had the privilege of collaborating with various channels and enterprises across the Latin American region, serving in multiple roles ranging from Support Engineer to Country Manager. This extensive background has afforded me a unique perspective on the evolving threat landscape and the shifting needs of businesses in the digital world.

The dynamism of technological advancements has transformed cyber security demands, necessitating more proactive approaches to anticipate and prevent threats before they can impact an organization. Understanding this ever-changing landscape is crucial for adapting to emerging security challenges.

In my current role as the Channel Engineering Manager for LATAM at Check Point, I also serve as part of the Cybersecurity Evangelist team under the office of our CTO. I am focused on merging technical skills with strategic decision-making, encouraging organizations to concentrate on growing their business while we ensure security.

The Cyber Security Mesh framework can safeguard businesses from unwieldy and next-generation cyber threats. In this interview, Check Point Security Engineering Manager Angel Salazar Velasquez discusses exactly how that works. Get incredible insights that you didn’t even realize that you were missing. Read through this power-house interview and add another dimension to your organization’s security strategy!

Would you like to provide an overview of the Cyber Security Mesh framework and its significance?

The Cyber Security Mesh framework represents a revolutionary approach to addressing cyber security challenges in increasingly complex and decentralized network environments. Unlike traditional security models that focus on establishing a fixed ‘perimeter’ around an organization’s resources, the Mesh framework places security controls closer to the data, devices, and users requiring protection. This allows for greater flexibility and customization, more effectively adapting to specific security and risk management needs.

For CISOs, adopting the Cyber Security Mesh framework means a substantial improvement in risk management capabilities. It enables more precise allocation of security resources and offers a level of resilience that is difficult to achieve with more traditional approaches. In summary, the Mesh framework provides an agile and scalable structure for addressing emerging threats and adapting to rapid changes in the business and technology environment.

How does the Cyber Security Mesh framework differ from traditional cyber security approaches?

Traditionally, organizations have adopted multiple security solutions from various providers in the hope of building comprehensive defense. The result, however, is a highly fragmented security environment that can lead to a lack of visibility and complex risk management. For CISOs, this situation presents a massive challenge because emerging threats often exploit the gaps between these disparate solutions.

The Cyber Security Mesh framework directly addresses this issue. It is an architecture that allows for better interoperability and visibility by orchestrating different security solutions into a single framework. This not only improves the effectiveness in mitigating threats but also enables more coherent, data-driven risk management. For CISOs, this represents a radical shift, allowing for a more proactive and adaptive approach to cyber security strategy.

Could you talk about the key principles that underlie Cyber Security Mesh frameworks and architecture?

Understanding the underlying principles of Cyber Security Mesh is crucial for evaluating its impact on risk management. First, we have the principle of ‘Controlled Decentralization,’ which allows organizations to maintain control over their security policies while distributing implementation and enforcement across multiple security nodes. This facilitates agility without compromising security integrity.

Secondly, there’s the concept of ‘Unified Visibility.’ In an environment where each security solution provides its own set of data and alerts, unifying this information into a single coherent ‘truth’ is invaluable. The Mesh framework allows for this consolidation, ensuring that risk-related decision-making is based on complete and contextual information. These principles, among others, combine to provide a security posture that is much more resilient and adaptable to the changing needs of the threat landscape.
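
As a rough illustration of what ‘Unified Visibility’ requires in practice, the sketch below normalizes alerts from two hypothetical tools into one record shape so they can be ranked together; the payload fields and severity mapping are invented, not any vendor's actual format.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class UnifiedAlert:
        source_tool: str
        asset: str
        severity: int            # normalized 1 (info) to 5 (critical)
        category: str
        observed_at: datetime

    def from_endpoint_tool(raw: dict) -> UnifiedAlert:
        # hypothetical endpoint payload: {"host", "sev", "type", "ts"}
        sev_map = {"low": 2, "medium": 3, "high": 4, "critical": 5}
        return UnifiedAlert("endpoint", raw["host"], sev_map[raw["sev"]], raw["type"],
                            datetime.fromtimestamp(raw["ts"], tz=timezone.utc))

    def from_email_gateway(raw: dict) -> UnifiedAlert:
        # hypothetical email payload: {"recipient", "score" 0-100, "when" ISO 8601}
        return UnifiedAlert("email", raw["recipient"], 1 + raw["score"] // 25, "phishing",
                            datetime.fromisoformat(raw["when"]))

    alerts = [
        from_endpoint_tool({"host": "wks-042", "sev": "high", "type": "lateral movement",
                            "ts": 1700000000}),
        from_email_gateway({"recipient": "jdoe", "score": 91,
                            "when": "2023-11-14T21:15:00+00:00"}),
    ]
    # A single, coherent view: every tool's findings ranked by severity, then recency.
    for alert in sorted(alerts, key=lambda a: (-a.severity, a.observed_at)):
        print(alert.severity, alert.source_tool, alert.asset, alert.category)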

How does the Cyber Security Mesh framework align with or complement Zero Trust?

The convergence of Cyber Security Mesh and the Zero Trust model is a synergy worth exploring. Zero Trust is based on the principle of ‘never trust, always verify,’ meaning that no user or device is granted default access to the network, regardless of its location. Cyber Security Mesh complements this by decentralizing security controls. Instead of having a monolithic security perimeter, controls are applied closer to the resource or user, allowing for more granular and adaptive policies.

This combination enables a much more dynamic approach to mitigating risks. Imagine a scenario where a device is deemed compromised. In an environment that employs both Mesh and Zero Trust, this device would lose its access not only at a global network level but also to specific resources, thereby minimizing the impact of a potential security incident. These additional layers of control and visibility strengthen the organization’s overall security posture, enabling more informed and proactive risk management.
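
A minimal sketch of the compromised-device scenario just described: a per-request authorization check that combines device health with the sensitivity of the requested resource. The resource names, signals and thresholds are invented for illustration.

    # Minimum device-trust level required to reach each resource (illustrative).
    RESOURCE_MIN_TRUST = {"public wiki": 1, "hr portal": 3, "payroll database": 5}

    def device_trust(device: dict) -> int:
        """Collapse device signals into a 0-5 trust score; a compromised device scores 0."""
        if device.get("compromised"):
            return 0
        score = 3
        score += 1 if device.get("disk_encrypted") else 0
        score += 1 if device.get("endpoint_agent_healthy") else 0
        return score

    def authorize(user_authenticated: bool, device: dict, resource: str) -> bool:
        # "Never trust, always verify": every request is evaluated against current signals.
        return user_authenticated and device_trust(device) >= RESOURCE_MIN_TRUST[resource]

    laptop = {"disk_encrypted": True, "endpoint_agent_healthy": True}
    print(authorize(True, laptop, "payroll database"))   # True while the device is healthy
    laptop["compromised"] = True                          # the device is flagged as compromised
    print(authorize(True, laptop, "payroll database"))   # False: access to the resource is revoked
    print(authorize(True, laptop, "public wiki"))        # False: trust 0 is below every threshold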

How does the Cyber Security Mesh framework address the need for seamless integration across diverse technologies and platforms?

The Cyber Security Mesh framework is especially relevant today, as it addresses a critical need for seamless integration across various technologies and platforms. In doing so, it achieves Comprehensive security coverage, covering all potential attack vectors, from endpoints to the cloud. This approach also aims for Consolidation, as it integrates multiple security solutions into a single operational framework, simplifying management and improving operational efficiency.

Furthermore, the mesh architecture promotes Collaboration among different security solutions and products. This enables a quick and effective response to any threat, facilitated by real-time threat intelligence that can be rapidly shared among multiple systems. At the end of the day, it’s about optimizing security investment while facing key business challenges, such as breach prevention and secure digital transformation.

Can you discuss the role of AI and Machine Learning within the Cyber Security Mesh framework/architecture?

Artificial Intelligence (AI) and Machine Learning play a crucial role in the Cyber Security Mesh ecosystem. These technologies enable more effective and adaptive monitoring, while providing rapid responses to emerging threats. By leveraging AI, more effective prevention can be achieved, elevating the framework’s capabilities to detect and counter vulnerabilities in real-time.

From an operational standpoint, AI and machine learning add a level of automation that not only improves efficiency but also minimizes the need for manual intervention in routine security tasks. In an environment where risks are constantly evolving, this agility and ability to quickly adapt to new threats are invaluable. These technologies enable coordinated and swift action, enhancing the effectiveness of the Cyber Security Mesh.

What are some of the challenges or difficulties that organizations may see when trying to implement Mesh?

The implementation of a Cyber Security Mesh framework is not without challenges. One of the most notable obstacles is the inherent complexity of this mesh architecture, which can hinder effective security management. Another significant challenge is the technological and knowledge gap that often arises in fragmented security environments. Added to these is the operational cost of integrating and maintaining multiple security solutions in an increasingly diverse and dynamic ecosystem.

However, many of these challenges can be mitigated if a robust technology offering centralized management is in place. This approach reduces complexity and closes the gaps, allowing for more efficient and automated operation. Additionally, a centralized system can offer continuous learning as it integrates intelligence from various points into a single platform. In summary, centralized security management and intelligence can be the answer to many of the challenges that CISOs face when implementing the Cyber Security Mesh.

How does the Cyber Security Mesh Framework/Architecture impact the role of traditional security measures, like firewalls and IPS?

Cyber Security Mesh has a significant impact on traditional security measures like firewalls and IPS. In the traditional paradigm, these technologies act as gatekeepers at the entry and exit points of the network. However, with the mesh approach, security is distributed and more closely aligned with the fluid nature of today’s digital environment, where perimeters have ceased to be fixed.

Far from making them obsolete, the Cyber Security Mesh framework allows firewalls and IPS to transform and become more effective. They become components of a broader and more dynamic security strategy, where their intelligence and capabilities are enhanced within the context of a more flexible architecture. This translates into improved visibility, responsiveness, and adaptability to new types of threats. In other words, traditional security measures are not eliminated, but integrated and optimized in a more versatile and robust security ecosystem.

Can you describe real-world examples that show the use/success of the Cyber Security Mesh Architecture?

Absolutely! In a company that had adopted a Cyber Security Mesh architecture, a sophisticated multi-vector attack was detected targeting its employees through various channels: corporate email, Teams, and WhatsApp. The attack included a malicious file that exploited a zero-day vulnerability. The first line of defense, ‘Harmony Email and Collaboration,’ intercepted the file in the corporate email and identified it as dangerous by leveraging its Sandboxing technology and updated the information in its real-time threat intelligence cloud.

When an attempt was made to deliver the same malicious file through Microsoft Teams, the company was already one step ahead. The security architecture it implemented also extends to collaboration platforms, so the file was immediately blocked before it could cause harm. Almost simultaneously, another employee received an attack attempt through WhatsApp, which was neutralized by the mobile device security solution, aligned with the same threat intelligence cloud.

This comprehensive and coordinated security strategy demonstrates the strength and effectiveness of the Cyber Security Mesh approach, which allows companies to always be one step ahead, even when facing complex and sophisticated multi-vector attacks. The architecture allows different security solutions to collaborate in real-time, offering effective defense against emerging and constantly evolving threats.

The result is solid security that blocks multiple potential entry points before they can be exploited, thus minimizing risk and allowing the company to continue its operations without interruption. This case exemplifies the potential of a well-implemented and consolidated security strategy, capable of addressing the most modern and complex threats.
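
The coordination in that story comes down to every channel consulting the same verdict store, so a file convicted once is blocked everywhere without re-analysis. The toy sketch below shows that sharing pattern; the sandbox function, channels and payload are placeholders, not Check Point's actual ThreatCloud API.

    import hashlib

    VERDICTS = {}   # sha256 hex digest -> "malicious" or "benign", shared by all channels

    def sandbox_detonate(payload: bytes) -> str:
        """Placeholder for sandbox analysis: here, any payload containing b"EVIL"
        is treated as malicious, purely for demonstration."""
        return "malicious" if b"EVIL" in payload else "benign"

    def inspect(channel: str, payload: bytes) -> bool:
        """Return True if the payload may be delivered on this channel."""
        digest = hashlib.sha256(payload).hexdigest()
        if digest not in VERDICTS:                 # first sighting anywhere: analyze once
            VERDICTS[digest] = sandbox_detonate(payload)
        allowed = VERDICTS[digest] == "benign"
        print(channel, digest[:12], "deliver" if allowed else "block")
        return allowed

    attachment = b"%PDF-1.7 EVIL payload"
    inspect("email", attachment)      # analyzed and convicted here
    inspect("teams", attachment)      # blocked from the shared verdict, no re-analysis
    inspect("whatsapp", attachment)   # same verdict applies on mobile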

Is there anything else that you would like to share with the CyberTalk.org audience?

To conclude, the Cyber Security Mesh approach aligns well with the three key business challenges that every CISO faces:

Breach and Data Leak Prevention: The Cyber Security Mesh framework is particularly strong in offering an additional layer of protection, enabling effective prevention against emerging threats and data breaches. This aligns perfectly with our first ‘C’ of being Comprehensive, ensuring security across all attack vectors.

Secure Digital and Cloud Transformation: The flexibility and scalability of the Mesh framework make it ideal for organizations in the process of digital transformation and cloud migration. Here comes our second ‘C’, which is Consolidation. We offer a consolidated architecture that unifies multiple products and technologies, from the network to the cloud, thereby optimizing operational efficiency and making digital transformation more secure.

Security Investment Optimization: Finally, the operational efficiency achieved through a Mesh architecture helps to optimize the security investment. This brings us to our third ‘C’ of Collaboration. The intelligence shared among control points, powered by our ThreatCloud intelligence cloud, enables quick and effective preventive action, maximizing the return on security investment.

In summary, Cyber Security Mesh is not just a technological solution, but a strategic framework that strengthens any CISO’s stance against current business challenges. It ideally complements our vision and the three C’s of Check Point, offering an unbeatable value proposition for truly effective security.

The post Synergy between cyber security Mesh & the CISO role: Adaptability, visibility & control appeared first on CyberTalk.

Contain Breaches and Gain Visibility With Microsegmentation

Organizations must grapple with challenges from various market forces. Digital transformation, cloud adoption, hybrid work environments and geopolitical and economic challenges all have a part to play. These forces have especially manifested in more significant security threats to expanding IT attack surfaces.

Breach containment is essential, and zero trust security principles can be applied to curtail attacks across IT environments, minimizing business disruption proactively. Microsegmentation has emerged as a viable solution through its continuous visualization of workload and device communications and policy creation to define what communications are permitted. In effect, microsegmentation restricts lateral movement, isolates breaches and thwarts attacks.

Given the spotlight on breaches and their impact across industries and geographies, how can segmentation address the changing security landscape and client challenges? IBM and its partners can help in this space.

Breach Landscape and Impact of Ransomware

Historically, security solutions have focused on the data center, but new attack targets have emerged with enterprises moving to the cloud and introducing technologies like containerization and serverless computing. Not only are breaches occurring and attack surfaces expanding, but it has also become easier for breaches to spread. Traditional prevention and detection tools provided surface-level visibility into the traffic flows connecting applications, systems and devices across the network. However, they were not intended to contain and stop the spread of breaches.

Ransomware is particularly challenging, as it presents a significant threat to cyber resilience and financial stability. A successful attack can take a company’s network down for days or longer and lead to the loss of valuable data to nefarious actors. The Cost of a Data Breach 2022 report, conducted by the Ponemon Institute and sponsored by IBM Security, cites $4.54 million as the average ransomware attack cost, not including the ransom itself.

In addition, a recent IDC study highlights that ransomware attacks are evolving in sophistication and value. Sensitive data is being exfiltrated at a higher rate as attackers go after the most valuable targets for their time and money. Ultimately, the cost of a ransomware attack can be significant, leading to reputational damage, loss of productivity and regulatory compliance implications.

Organizations Want Visibility, Control and Consistency

With a focus on breach containment and prevention, hybrid cloud infrastructure and application security, security teams are expressing their concerns. Three objectives have emerged as vital for them.

First, organizations want visibility. Gaining visibility empowers teams to understand their applications and data flows regardless of the underlying network and compute architecture.

Second, organizations want consistency. Fragmented and inconsistent segmentation approaches create complexity, risk and cost. Consistent policy creation and strategy help align teams across heterogeneous environments and facilitate the move to the cloud with minimal re-writing of security policy.

Finally, organizations want control. Solutions that help teams target and protect their most critical assets deliver the greatest return. Organizations want to control communications through selectively enforced policies that can expand and improve as their security posture matures towards zero trust security.

Microsegmentation Restricts Lateral Movement to Mitigate Threats

Microsegmentation (or simply segmentation) combines practices, enforced policies and software that provide user access where required and deny access everywhere else. Segmentation contains the spread of breaches across the hybrid attack surface by continually visualizing how workloads and devices communicate. In this way, it creates granular policies that only allow necessary communication and isolate breaches by proactively restricting lateral movement during an attack.
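
The sketch below strips the idea down to its core: flows between labeled workloads are checked against an explicit allow-list and everything else, including east-west traffic, is denied, which is what blocks lateral movement. The labels, workloads and ports are invented for illustration.

    # Explicitly allowed (source label, destination label, port) flows; all else is denied.
    ALLOWED_FLOWS = {
        ("web-tier", "app-tier", 8443),
        ("app-tier", "payments-db", 5432),
        ("monitoring", "app-tier", 9100),
    }

    WORKLOAD_LABELS = {       # workload -> segment label (illustrative)
        "web-01": "web-tier",
        "app-07": "app-tier",
        "db-03": "payments-db",
        "hr-12": "hr-apps",
    }

    def is_allowed(src: str, dst: str, port: int) -> bool:
        """Default-deny: permit only flows that appear on the allow-list."""
        return (WORKLOAD_LABELS[src], WORKLOAD_LABELS[dst], port) in ALLOWED_FLOWS

    print(is_allowed("web-01", "app-07", 8443))   # True: a declared application path
    print(is_allowed("app-07", "db-03", 5432))    # True
    print(is_allowed("web-01", "db-03", 5432))    # False: the web tier cannot reach the database
    print(is_allowed("hr-12", "db-03", 5432))     # False: a lateral-movement attempt is denied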

The National Institute of Standards and Technology (NIST) highlights microsegmentation as one of three key technologies needed to build a zero trust architecture, a framework for an evolving set of cybersecurity paradigms that move defense from static, network-based perimeters to users, assets and resources.

If existing detection solutions fail and security teams lack granular segmentation, malicious software can enter their environment, move laterally, reach high-value applications and exfiltrate critical data, leading to catastrophic outcomes.

Ultimately, segmentation helps clients respond by applying zero trust principles like ‘assume a breach,’ helping them prepare for the inevitable.

IBM Launches Segmentation Security Services

In response to growing interest in segmentation solutions, IBM has expanded its security services portfolio with IBM Security Application Visibility and Segmentation Services (AVS). AVS is an end-to-end solution combining software with IBM consulting and managed services to meet organizations’ segmentation needs. Regardless of where applications, data and users reside across the enterprise, AVS is designed to give clients visibility into their application network and the ability to contain ransomware and protect their high-value assets.

AVS will walk you through a guided experience to align your stakeholders on strategy and objectives, define the schema to visualize desired workloads and devices and build the segmentation policies to govern network communications and ring-fence critical applications from unauthorized access. Once the segmentation policies are defined and solutions deployed, clients can consume steady-state services for ongoing management of their environment’s workloads and applications. This includes health and maintenance, policy and configuration management, service governance and vendor management.

IBM has partnered with Illumio, an industry leader in zero trust segmentation, to deliver this solution.  Illumio’s software platform provides attack surface visibility, enabling you to see all communication and traffic between workloads and devices across the entire hybrid attack surface. In addition, it allows security teams to set automated, granular and flexible segmentation policies that control communications between workloads and devices, only allowing what is necessary to traverse the network. Ultimately, this helps organizations to quickly isolate compromised systems and high-value assets, stopping the spread of an active attack.

With AVS, clients can harden compute nodes across their data center, cloud and edge environments and protect their critical enterprise assets.

Start Your Segmentation Journey

IBM Security Services can help you plan and execute a segmentation strategy to meet your objectives. To learn more, register for the on-demand webinar now.

The post Contain Breaches and Gain Visibility With Microsegmentation appeared first on Security Intelligence.

Seth Rogen’s ‘High-ly Creative Retreat’ Airbnb Begins Booking

Feel like taking your creativity level a bit higher? Seth Rogen has partnered with Airbnb to unveil “A High-ly Creative Retreat,” a unique getaway in Los Angeles with ceramic activities that is available for booking beginning this week.

The retreat features a ceramic studio with Rogen’s own handmade pottery, a display of his cannabis and lifestyle company Houseplant’s unique Housegoods, as well as mid-century furnishings, and “sprawling views of the city.”

The Airbnb is probably a lot cheaper than you think: Rogen will host three one-night stays on February 15, 16, and 17 for two guests each for just $42—one decimal point away from 420—with some restrictions. U.S. residents can book an overnight stay at Rogen’s Airbnb beginning Feb. 7, but book now, because it’s doubtful that open slots will last.

“I don’t know what’s more of a Houseplant vibe than a creative retreat at a mid-century Airbnb filled with our Housegoods, a pottery wheel, and incredible views of LA,” Rogen said. “Add me, and you’ll have the ultimate experience.”

According to the listing, and his Twitter account, Rogen will be there to greet people and even do ceramics with them.

“I’m teaming up with Airbnb so you (or someone else) can hang out with me and spend the night in a house inspired by my company,” Rogen tweeted recently.

Guests will be provided with the following activities:

  • Get glazed in the pottery studio and receive pointers from Rogen himself!
  • Peruse a selection of Rogen’s own ceramic masterpieces, proudly displayed within the mid-century modern home.
  • Relax and revel in the sunshine of the space’s budding yard.
  • Tune in and vibe out to a collection of Houseplant record sets with specially curated tracklists by Seth Rogen & Evan Goldberg and inspired by different cannabis strains. Guests will get an exclusive first listen to their new Vinyl Box Set Vol. 2.
  • Satisfy cravings with a fully-stocked fridge for after-hours snacks.

Airbnb plans to join in on Rogen’s charity efforts, including his non-profit Hilarity for Charity, focusing on helping people living with Alzheimer’s disease.

“In celebration of this joint effort, Airbnb will make a one-time donation to Hilarity for Charity, a national non-profit on a mission to care for families impacted by Alzheimer’s disease, activate the next generation of Alzheimer’s advocates, and be a leader in brain health research and education,” Airbnb wrote.

In 2021, Rogen launched Houseplant, his cannabis and lifestyle company, in the U.S. But the cannabis brand’s web traffic was so high that the site crashed. Houseplant was founded by Rogen and his childhood friend Evan Goldberg, along with Michael Mohr, James Weaver, and Alex McAtee.

Yahoo! News reports, however, that Airbnb does not (cough, cough) allow cannabis on the premises of listings. The listing itself, though, will be filled with goods from Houseplant. Houseplant also sells luxury paraphernalia with a “mid-century modern spin.”

Seth Rogen recently invited Architectural Digest to present a tour of the Houseplant headquarters’ interior decor and operations. Houseplant’s headquarters is located in a 1918 bungalow in Los Angeles. Architectural Digest notes that “mid-century-modern-inspired furniture creates a cozy but streamlined aesthetic.”

People living in the U.S. can request to book stays at airbnb.com/houseplant. Guests are responsible for their own travel to and from Los Angeles, California, and must comply with applicable COVID-19 rules and guidelines.

See Rogen’s listing on the Airbnb site.

If you can’t find your way in, Airbnb provides over 1,600 other creative spaces available around the globe.

The post Seth Rogen’s ‘High-ly Creative Retreat’ Airbnb Begins Booking appeared first on High Times.

Unconsidered benefits of a consolidation strategy every CISO should know

By: slandau

Pete has 32 years of security, network, and MSSP experience and has been a hands-on CISO for the last 17 years; he joined Check Point as Field CISO of the Americas. Pete’s cloud security deployments and designs have been rated by Gartner as #1 and #2 in the world, and he literally “wrote the book” and contributed to secure cloud reference designs as published in Intel Press: “Building the Infrastructure for Cloud Security: A Solutions View.”

In this interview, Check Point’s Field CISO, Pete Nicoletti, shares insights into cyber security consolidation. Should your organization move ahead with a consolidated approach? Or maybe a workshop would be helpful. Don’t miss Pete Nicoletti’s perspectives.

What kinds of struggles and challenges are the organizational security leaders that you’re working with currently seeing?

Many! As members of the World Economic Forum Council for the Connected World, we drilled into this exact question, interviewed hundreds of executives and created a detailed report. The key findings are: economic issues, IoT risks, an increase in ransomware, and security personnel shortages, all of which impact budgets. Given these issues, our council recommended that security spend remain a priority, even in challenging times, since we all know that security incidents cost 10x to 100x versus budgeted expenditures.

How are CISOs currently building out or transitioning their information security programs? What kinds of results are they seeing?

In challenging times, CISOs are looking hard at their tool sets and seeing if there are overlapping, redundant or underutilized tools. CISOs are also evaluating their “play-books” to ensure that the tools in use are efficient and streamlined. CISOs are also keen to negotiate ELAs that give them lower costs with the flexibility to choose from a suite of tools to support the “speed of business.”

Security teams need to be trained and certified on the tools in use, and those budgets are under pressure. All these drivers lead to tool consolidation projects. Once a consolidation program is launched, our customers are frequently very pleased with the normally mutually exclusive benefits: cost savings and better efficacy.

What are the key considerations for CISOs in deciding on whether or not to consolidate information security solutions? Can CISOs potentially lose capabilities when consolidating security and if so, how can this be addressed, if at all?

Losing features when consolidating is a valid concern; however, we typically find more advantages after consolidation: lower training costs, higher staff satisfaction, fewer mistakes made, and the real gem, higher security program efficacy. We also see our customers leveraging the cloud and needing to extend their security protections quickly and easily, and our Check Point portfolio supports this using one console. With all the news of our peers experiencing exploited security vulnerabilities and other challenges, we are continuing to gain market share and happy customers.

How should CISOs go about deciding on whether or not to consolidate cyber security? Beyond cost, what should CISOs think about?

The number one consideration should be the efficacy of the program. CISOs are realizing that very small differences in efficacy lead to very large cost savings. The best security tool for the job should always be selected knowing this. An inventory of tools and the jobs they are doing should be created and maintained. Frequently, CISOs find dozens of tools that are redundant, overlap with others, add unnecessary complexity, or are poorly deployed or managed and not integrated into the program. Once the inventory is completed, work with your expert consultant or reseller to review it, find redundancies or overlaps, and kick off a program to evaluate technical and cost benefits.
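
As a rough sketch of what such an inventory review can look like, the snippet below groups tools by the security function they perform and flags functions covered by more than one product as consolidation candidates. The inventory entries, vendors and costs are entirely made up.

    from collections import defaultdict

    INVENTORY = [
        {"tool": "Vendor A EDR",       "functions": {"endpoint detection"},            "annual_cost": 120000},
        {"tool": "Vendor B antivirus", "functions": {"endpoint detection"},            "annual_cost": 45000},
        {"tool": "Vendor C email",     "functions": {"email filtering", "sandboxing"}, "annual_cost": 80000},
        {"tool": "Vendor D sandbox",   "functions": {"sandboxing"},                    "annual_cost": 60000},
        {"tool": "Vendor E SIEM",      "functions": {"log analytics"},                 "annual_cost": 150000},
    ]

    tools_by_function = defaultdict(list)
    for entry in INVENTORY:
        for function in entry["functions"]:
            tools_by_function[function].append(entry)

    for function, tools in sorted(tools_by_function.items()):
        if len(tools) > 1:   # more than one product doing the same job: review for overlap
            names = ", ".join(t["tool"] for t in tools)
            combined = sum(t["annual_cost"] for t in tools)
            print(f"Overlap on {function}: {names} (combined ${combined:,} per year)")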

What can organizations achieve with a consolidated cyber security solution?

As mentioned previously, the number one goal of the program should be improving efficacy and our customers do report this. Efficacy lowers the number of false positives, lowers the number of real events and decreases overall risk. Other savings are found with lower training costs, faster run book execution, fewer mistakes and the ability to free up security analysts from wasting time on inefficient processes. Those analysts can now be leveraged into more productive efforts and ensure that the business growth and strategies are better supported.

As a seasoned professional, when you’ve worked with CISOs and security teams in moving to a consolidated solution, what’s gone right, what’s gone wrong, and what lessons can you share with newbie security leaders?

Any significant change in your tool set needs careful consideration and evaluation. Every new tool needs to be tested in a lab and moved, as appropriate, into production. You need to find all the gotchas with any new tool going inline before they have a cost impact.

Don’t rush this testing step! Ensure that you have good measurements of your current program so you can easily determine improvements with new tools or consolidation efforts.

If CISOs decide against consolidation, how can they drive better value through existing solutions?

Ensure that the solutions you are using are fully deployed and optimized. We frequently uncover many tools that are underutilized and ineffective. Sit with your staff and watch their work. If they are cutting and pasting, logging into and out of multiple tools, not having the time to address every alert, or are making excessive mistakes, it may be time to have Check Point come in and do a workshop. Our very experienced team will review the current program and provide thoughts and ideas to improve the program. Even if consolidation is not selected, other findings may help improve the program!

Are there any other actionable insights that you would like to share with cyber security leaders?

Every security program is different, and your challenges are unique. But you can’t know everything, so consider working with your trusted partners and invite Check Point in to do a free discovery workshop. Cloud maturity, consolidation program consideration, Zero Trust program formulation, and many others are available. As a CISO, you may have some initiatives that need extra validation, and we are standing by to help propel your program.

And for an even stronger security strategy, be sure to attend Check Point’s upcoming CPX 360 event. Register here.

Lastly, to receive cutting-edge cyber security news, best practices and resources in your inbox each week, please sign up for the CyberTalk.org newsletter. 

The post Unconsidered benefits of a consolidation strategy every CISO should know appeared first on CyberTalk.
