
At VA, cyber dominance is in, cyber compliance is out

5 December 2025 at 15:25

The Department of Veterans Affairs is moving toward a more operational approach to cybersecurity.

This means VA is focusing more deeply on protecting attack surfaces and closing off the threat vectors that put veterans’ data at risk.

Eddie Pool, the acting principal assistant secretary for information and technology and acting principal deputy chief information officer at VA, said the agency is changing its cybersecurity posture to reflect a cyber dominance approach.


“That’s a move away from the traditional and exclusively compliance-based approach to cybersecurity, where we put a lot of our time, resources and investments in compliance-based activities,” Pool said on Ask the CIO. “For example, did someone check the box on a form? Did someone file something in the right place? We’re really moving a lot of our focus over to the risk-based approach to security, pushing things like zero trust architecture, micro segmentation of our networks and really doing things that are more focused on the operational landscape. We are more focused on protecting those attack surfaces and closing off those threat vectors in cyberspace.”

A big part of this move to cyber dominance is applying the concepts that make up a zero trust architecture like micro segmentation and identity and access management.

Pool said as VA modernizes its underlying technology infrastructure, it will “bake in” these zero trust capabilities.
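
To illustrate the kind of capability being “baked in,” here is a minimal, purely conceptual Python sketch of a zero-trust-style access decision that checks identity, device posture and network microsegment on every request instead of trusting anything by network location. This is not VA’s implementation; all segment names and rules below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    mfa_verified: bool          # phishing-resistant MFA completed?
    device_compliant: bool      # device meets posture policy?
    source_segment: str         # microsegment the request originates from
    target_segment: str         # microsegment that hosts the resource

# Hypothetical policy: which microsegments may talk to which.
ALLOWED_FLOWS = {
    ("clinical-apps", "health-records"),
    ("claims-apps", "claims-data"),
}

def authorize(req: AccessRequest) -> bool:
    """Evaluate every request explicitly; never trust by network location alone."""
    if not (req.mfa_verified and req.device_compliant):
        return False
    return (req.source_segment, req.target_segment) in ALLOWED_FLOWS

# A compliant device in the clinical segment reaches health records;
# the same identity coming from an unapproved segment is denied.
print(authorize(AccessRequest("vha-user", True, True, "clinical-apps", "health-records")))  # True
print(authorize(AccessRequest("vha-user", True, True, "claims-apps", "health-records")))    # False
```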

“Over the next several years, you’re going to see that naturally evolve in terms of where we are in the maturity model path. Our approach here is not necessarily to try to map to a model. It’s really to rationalize what are the highest value opportunities that those models bring, and then we prioritize on those activities first,” he said. “We’re not pursuing it in a linear fashion. We are taking parts and pieces and what makes the most sense for the biggest thing for our buck right now, that’s where we’re putting our energy and effort.”

One of those areas that VA is focused on is rationalizing the number of tools and technologies it’s using across the department. Pool said the goal is to get down to a specific set instead of having the “31 flavors” approach.

“We’re going to try to make it where you can have any flavor you want so long as it’s chocolate. We are trying to get that standardized across the department,” he said. “That gives us the opportunity from a sustainment perspective that we can focus the majority of our resources on those enterprise standardized capabilities. From a security perspective, it’s a far less threat landscape to have to worry about having 100 things versus having two or three things.”

The business process reengineering priority

Pool added that redundancy remains a key factor in the security and tool rationalization effort. He said VA will continue to have a diversity of products in its IT investment portfolios.

“Where we are at is we are looking at how do we build that future state architecture, as elegantly and simplistically as possible, so that we can manage it more effectively and protect it more securely,” he said.

In addition to standardizing its technology and cyber tools, Pool said VA is bringing the same approach to business processes for enterprisewide services.

He said over the years, VA has built up a laundry list of legacy technology all with different versions and requirements to maintain.

“We’ve done a lot over the years in the Office of Information and Technology to really standardize on our technology platforms. Now it’s time to leverage that, to really bring standard processes to the business,” he said. “What that does is that really does help us continue to put the veteran at the center of everything that we do, and it gives a very predictable, very repeatable process and expectation for veterans across the country, so that you don’t have different experiences based on where you live or where you’re getting your health care and from what part of the organization.”

Part of the standardization effort is that VA will expand its use of automation, particularly in the processing of veterans’ claims.

Pool said the goal is to take more advantage of the agency’s data and use artificial intelligence to accelerate claims processing.

“The richness of the data and the standardization of our data that we’re looking at and how we can eliminate as many steps in these processes as we can, where we have data to make decisions, or we can automate a lot of things that would completely eliminate what would be a paper process that is our focus,” Pool said. “We’re trying to streamline IT to the point that it’s as fast and as efficient, secure and accurate as possible from a VA processing perspective, and in turn, it’s going to bring a decision back to the veteran a lot faster, and a decision that’s ready to go on to the next step in the process.”

Many of these updates already are having an impact on VA’s business processes. The agency said that it set a new record for the number of disability and pension claims processed in a single year, more than 3 million. That beat its record set in 2024 by more than 500,000.

“We’re driving benefit outcomes. We’re driving technology outcomes. From my perspective, everything that we do here, every product, service capability that the department provides the veteran community, it’s all enabled through technology. So technology is the underpinning infrastructure, backbone to make all things happen, or where all things can fail,” Pool said. “First, on the internal side, it’s about making sure that those infrastructure components are modernized. Everything’s hardened. We have a reliable, highly available infrastructure to deliver those services. Then at the application level, at the actual point of delivery, IT is involved in every aspect of every challenge in the department, to again, bring the best technology experts to the table and look at how can we leverage the best technologies to simplify the business processes, whether that’s claims automation, getting veterans their mileage reimbursement earlier or by automating processes to increase the efficacy of the outcomes that we deliver, and just simplify how the veterans consume the services of VA. That’s the only reason why we exist here, is to be that enabling partner to the business to make these things happen.”

The post At VA, cyber dominance is in, cyber compliance is out first appeared on Federal News Network.


Vertical AI development agents are the future of enterprise integrations

5 December 2025 at 10:58

Enterprise Application Integration (EAI) and modern iPaaS platforms have become two of the most strategically important – and resource-constrained – functions inside today’s enterprises. As organizations scale SaaS adoption, modernize core systems, and automate cross-functional workflows, integration teams face mounting pressure to deliver faster while upholding strict architectural, data quality, and governance standards.

AI has entered this environment with the promise of acceleration. But CIOs are discovering a critical truth:

Not all AI is built for the complexity of enterprise integrations – whether in traditional EAI stacks or modern iPaaS environments.

Generic coding assistants such as Cursor or Claude Code can boost individual productivity, but they struggle with the pattern-heavy, compliance-driven reality of integration engineering. What looks impressive in a demo often breaks down under real-world EAI/iPaaS conditions.

This widening gap has led to the rise of a new category: Vertical AI Development Agents – domain-trained agents purpose-built for integration and middleware development. Companies like CurieTech AI are demonstrating that specialized agents deliver not just speed, but materially higher accuracy, higher-quality outputs, and far better governance than general-purpose tools.

For CIOs running mission-critical integration programs, that difference directly affects reliability, delivery velocity, and ROI.

Why EAI and iPaaS integrations are not a “Generic Coding” problem

Integrations – whether built on legacy middleware or modern iPaaS platforms – operate within a rigid architectural framework:

  • multi-step orchestration, sequencing, and idempotency
  • canonical data transformations and enrichment
  • platform-specific connectors and APIs
  • standardized error-handling frameworks
  • auditability and enterprise logging conventions
  • governance and compliance embedded at every step

Generic coding models are not trained on this domain structure. They often produce code that looks correct, yet subtly breaks sequencing rules, omits required error handling, mishandles transformations, or violates enterprise logging and naming standards.

Vertical agents, by contrast, are trained specifically to understand flow logic, mappings, middleware orchestration, and integration patterns – across both EAI and iPaaS architectures. They don’t just generate code – they reason in the same structures architects and ICC teams use to design integrations.

This domain grounding is the critical distinction.

The hidden drag: Context latency, expensive context managers, and prompt fatigue

Teams experimenting with generic AI encounter three consistent frictions:

Context Latency

Generic models cannot retain complex platform context across prompts. Developers must repeatedly restate platform rules, logging standards, retry logic, authentication patterns, and canonical schemas.

Developers become “expensive context managers”

A seemingly simple instruction, “Transform XML to JSON and publish to Kafka,” quickly devolves into a series of corrective prompts:

  • “Use the enterprise logging format.”
  • “Add retries with exponential backoff.”
  • “Fix the transformation rules.”
  • “Apply the standardized error-handling pattern.”

Developers end up managing the model instead of building the solution.
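
To make concrete what those corrective prompts are asking for, here is a minimal Python sketch of the “transform XML to JSON and publish to Kafka” step with structured logging, exponential-backoff retries, and a standard error path wired in from the start. It is a generic illustration under assumed conventions, not any platform’s or vendor’s generated output; publish_to_kafka is a placeholder for whatever client the integration platform actually provides.

```python
import json
import logging
import random
import time
import xml.etree.ElementTree as ET

# Assumed enterprise logging convention: one named logger, structured "extra" fields.
log = logging.getLogger("order-intake")

def publish_to_kafka(topic: str, payload: dict) -> None:
    """Placeholder for the platform's Kafka/connector client."""
    raise NotImplementedError

def xml_to_json(xml_text: str) -> dict:
    root = ET.fromstring(xml_text)
    return {child.tag: child.text for child in root}

def process(xml_text: str, topic: str = "orders.canonical", max_attempts: int = 5) -> None:
    record = xml_to_json(xml_text)
    for attempt in range(1, max_attempts + 1):
        try:
            publish_to_kafka(topic, record)
            log.info("published", extra={"topic": topic, "attempt": attempt})
            return
        except Exception as exc:  # standardized error-handling pattern
            wait = min(30, 2 ** attempt) + random.random()  # exponential backoff with jitter
            log.warning("publish failed, retrying",
                        extra={"topic": topic, "attempt": attempt, "error": str(exc)})
            time.sleep(wait)
    # Dead-letter after exhausting retries instead of silently dropping the record.
    log.error("exhausted retries, routing to dead-letter",
              extra={"topic": topic, "payload": json.dumps(record)})
```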

Prompt fatigue

The cycle of re-prompting, patching, and enforcing architectural rules consumes time and erodes confidence in outputs.

This is why generic tools rarely achieve the promised acceleration in integration environments.

Benchmarks show vertical agents are about twice as accurate

CurieTech AI recently published comparative benchmarks evaluating its vertical integration agents against leading generic tools, including Claude Code.
The tests covered real-world tasks:

  • generating complete, multi-step integration flows
  • building cross-system data transformations
  • producing platform-aligned retries and error chains
  • implementing enterprise-standard logging
  • converting business requirements into executable integration logic

The results were clear: generic tools performed at roughly half the accuracy of vertical agents.

Generic outputs often looked plausible but contained structural errors or governance violations that would cause failures in QA or production. Vertical agents produced platform-aligned, fully structured workflows on the first pass.

For integration engineering – where errors cascade – this accuracy gap directly impacts delivery predictability and long-term quality.

The vertical agent advantage: Single-shot solutioning

The defining capability of vertical agents is single-shot task execution.

Generic tools force stepwise prompting and correction. But vertical agents—because they understand patterns, sequencing, and governance—can take a requirement like:

“Create an idempotent order-sync flow from NetSuite to SAP S/4HANA with canonical transformations, retries, and enterprise logging.”

…and return:

  • the flow
  • transformations
  • error handling
  • retries
  • logging
  • and test scaffolding

in one coherent output.

This shift – from instruction-oriented prompting to goal-oriented prompting – removes context latency and prompt fatigue while drastically reducing the need for developer oversight.
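
As a rough illustration of what such a single-shot output might contain, here is a compressed Python sketch of an idempotent sync step with a canonical transformation, retries, logging, and a small test. The NetSuite and SAP calls are stubs, every name is hypothetical, and a real generated flow would target the platform’s own connectors and flow format rather than plain Python.

```python
import logging
import time

log = logging.getLogger("order-sync")
_processed: set[str] = set()   # stand-in for a durable idempotency store

def fetch_netsuite_order(order_id: str) -> dict:
    """Stub for the NetSuite connector."""
    return {"tranId": order_id, "total": "120.50", "currency": "USD"}

def push_sap_order(doc: dict) -> None:
    """Stub for the SAP S/4HANA connector."""
    log.info("pushed to SAP", extra={"doc_id": doc["DocumentId"]})

def to_canonical(ns_order: dict) -> dict:
    # Canonical transformation: platform-neutral field names and types.
    return {"DocumentId": ns_order["tranId"],
            "Amount": float(ns_order["total"]),
            "Currency": ns_order["currency"]}

def sync_order(order_id: str, max_attempts: int = 3) -> None:
    if order_id in _processed:            # idempotency guard
        log.info("skipping duplicate", extra={"order": order_id})
        return
    doc = to_canonical(fetch_netsuite_order(order_id))
    for attempt in range(1, max_attempts + 1):
        try:
            push_sap_order(doc)
            _processed.add(order_id)
            return
        except Exception as exc:
            log.warning("retrying", extra={"order": order_id, "attempt": attempt, "error": str(exc)})
            time.sleep(2 ** attempt)
    raise RuntimeError(f"order {order_id} failed after {max_attempts} attempts")

def test_sync_is_idempotent():
    sync_order("SO-1001")
    sync_order("SO-1001")   # second call is a no-op
    assert "SO-1001" in _processed
```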

Built-in governance: The most underrated benefit

Integrations live and die by adherence to standards. Vertical agents embed those standards directly into generation:

  • naming and folder conventions
  • canonical data models
  • PII masking and sensitive-data controls
  • logging fields and formats
  • retry and exception handling patterns
  • platform-specific best practices

Generic models cannot consistently maintain these rules across prompts or projects.

Vertical agents enforce them automatically, which leads to higher-quality integrations with far fewer QA defects and production issues.
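
A flavor of how such standards can be enforced mechanically, whether by a vertical agent at generation time or by a CI gate afterward, is shown in this small Python sketch. The naming rule and PII fields are hypothetical examples, not an actual product’s policy set.

```python
import re

NAMING_PATTERN = re.compile(r"^[a-z]+(-[a-z0-9]+)*-flow$")   # e.g. "order-sync-flow"
PII_FIELDS = {"ssn", "date_of_birth", "card_number"}          # fields that must be masked

def check_flow(name: str, payload_fields: dict[str, str]) -> list[str]:
    """Return a list of governance violations for one integration flow."""
    violations = []
    if not NAMING_PATTERN.match(name):
        violations.append(f"flow name '{name}' violates naming convention")
    for field, value in payload_fields.items():
        if field in PII_FIELDS and not value.startswith("****"):
            violations.append(f"field '{field}' is not masked")
    return violations

# One compliant flow and one with two findings.
print(check_flow("order-sync-flow", {"ssn": "****6789"}))                 # []
print(check_flow("OrderSync", {"ssn": "123-45-6789", "name": "A. Smith"}))
```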

The real ROI: Quality, consistency, predictability

Organizations adopting vertical agents report three consistent benefits:

1. Higher-Quality Integrations

Outputs follow correct patterns and platform rules—reducing defects and architectural drift.

2. Greater Consistency Across Teams

Standardized logic and structures eliminate developer-to-developer variability.

3. More Predictable Delivery Timelines

Less rework means smoother pipelines and faster delivery.

One enterprise that recently adopted CurieTech AI summarized the impact succinctly:

“For MuleSoft users, generic AI tools won’t cut it. But with domain-specific agents, the ROI is clear. Just start.”

For CIOs, these outcomes translate to increased throughput and higher trust in integration delivery.

Preparing for the agentic future

The industry is already moving beyond single responses toward agentic orchestration, where AI systems coordinate requirements gathering, design, mapping, development, testing, documentation, and deployment.

Vertical agents—because they understand multi-step integration workflows—are uniquely suited to lead this transition.

Generic coding agents lack the domain grounding to maintain coherence across these interconnected phases.

The bottom line

Generic coding assistants provide breadth, but vertical AI development agents deliver the depth, structure, and governance enterprise integrations require.

Vertical agents elevate both EAI and iPaaS programs by offering:

  • significantly higher accuracy
  • higher-quality, production-ready outputs
  • built-in governance and compliance
  • consistent logic and transformations
  • predictable delivery cycles

As integration workloads expand and become more central to digital transformation, organizations that adopt vertical AI agents early will deliver faster, with higher accuracy, and with far greater confidence.

In enterprise integrations, specialization isn’t optional—it is the foundation of the next decade of reliability and scale.

Learn more about CurieTech AI here.


IT leaders turn to third-party providers to manage tech debt

4 December 2025 at 05:01

As tech debt threatens to cripple many IT organizations, a huge number of CIOs have turned to third-party service providers to maintain or upgrade legacy software and systems, according to a new survey.

A full 95% of IT leaders are now using outside service providers to modernize legacy IT and reduce tech debt, according to a survey by MSP Ensono.

The push is in part due to the cost of legacy IT, with nearly half of those surveyed saying they paid more in the past year to maintain older IT systems than they had budgeted. More importantly, dealing with legacy applications and infrastructure is holding IT organizations back, as nearly nine in 10 IT leaders say legacy maintenance has hampered their AI modernization plans.

“Maintaining legacy systems is really slowing down modernization efforts,” says Tim Beerman, Ensono’s CTO. “It’s the typical innovator’s dilemma — they’re focusing on outdated systems and how to address them.”

In some cases, CIOs have turned to service providers to manage legacy systems, but in other cases, they have looked to outside IT teams to retire tech debt and modernize software and systems, Beerman says. One reason they’re turning to outside service providers is an aging employee base, with internal experts in legacy systems retiring and taking their knowledge with them, he adds.

“Not very many people are able to do it themselves,” Beerman says. “You have maturing workforces and people moving out of the workforce, and you need to go find expertise in areas where you can’t hire that talent.”

While the MSP model has been around for decades, the move to using it to manage tech debt appears to be a growing trend as organizations look to clear up budget and find time to deploy AI, he adds.

“If you look at the advent of a lot of new technology, especially AI, that’s moving much faster, and clients are looking for help,” Beerman says. “On one side, you have this legacy problem that they need to manage and maintain, and then you have technology moving at a pace that it hasn’t moved in years.”

Outsourcing risk

Ryan Leirvik, CEO at cybersecurity services firm Neuvik, also sees a trend toward using service providers to manage legacy IT. He sees several advantages, including matching the right experts to legacy systems, but CIOs may also use MSPs to manage their risk, he says.

“Of the many advantages, one primary advantage often not mentioned is shifting the exploitation or service interruption risk to the vendor,” he adds. “In an environment where vulnerability discovery, patching, and overall maintenance is an ongoing and expensive effort, the risk of getting it wrong typically sits with the vendor in charge.”

The number of IT leaders in the survey who overspent their legacy IT maintenance budgets also doesn’t surprise Leirvik, a former chief of staff and associate director of cyber at the US Department of Defense.

Many organizations have a talent mismatch between the IT infrastructure they have and the one they need to move to, he says. In addition, the ongoing maintenance of legacy software and systems often costs more than anticipated, he adds.

“There’s this huge maintenance tail that we weren’t expecting because the initial price point was one cost and the maintenance is 1X,” Leirvik says.

To get out of the legacy maintenance trap, IT leaders need foresight and discipline to choose the right third-party provider, he adds. “Take the long-term view — make sure the five-year plan lines up with this particular vendor,” he says. “Do your goals as an organization match up with where they’re going to help you out?”

Paying twice

While some IT leaders have turned to third-party vendors to update legacy systems, a recently released report from ITSM and customer-service software vendor Freshworks raises questions about the efficiency of modernization efforts.

More than three-quarters of those surveyed by Freshworks say software implementations take longer than expected, with two-thirds of those projects exceeding expected budgets.

Third-party providers may not solve the problems, says Ashwin Ballal, Freshworks’ CIO.

“Legacy systems have become so complex that companies are increasingly turning to third-party vendors and consultants for help, but the problem is that, more often than not, organizations are trading one subpar legacy system for another,” he says. “Adding vendors and consultants often compounds the problem, bringing in new layers of complexity rather than resolving the old ones.”

The solution isn’t adding more vendors, but new technology that works out of the box, Ballal adds.

“In theory, third-party providers bring expertise and speed,” he says. “In practice, organizations often find themselves paying for things twice — once for complex technology, and then again for consultants to make it work.”

Third-party vendors unavoidable

Other IT leaders see some third-party support as nearly inevitable. Whether it’s updating old code, moving workloads to the cloud, adopting SaaS tools, or improving cybersecurity, most organizations now need outside assistance, says Adam Winston, field CTO and CISO at cybersecurity vendor WatchGuard Technologies.

A buildup of legacy systems, including outdated remote-access tools and VPNs, can crush organizations with tech debt, he adds. Many organizations haven’t yet fully modernized to the cloud or to SaaS tools, and they will turn to outside providers when the time comes, he says.

“Most companies don’t build and design and manage their own apps, and that’s where all that tech debt basically is sitting, and they are in some hybrid IT design,” he says. “They may be still sitting in an era dating back to co-location and on-premise, and that almost always includes legacy servers, legacy networks, legacy systems that aren’t really following a modern design or architecture.”

Winston advises IT leaders to create plans to retire outdated technology and to negotiate service contracts that lean on vendors to keep IT purchases as up to date as possible. Too many vendors are quick to drop support for older products when new ones come out, he suggests.

“If you’re not going to upgrade, do the math on that legacy support and say, ‘If we can’t upgrade that, how are we going to isolate it?’” he says. “‘What is our graveyard segmentation strategy to move the risk in the event that this can’t be upgraded?’ The vendor due diligence leaves a lot of this stuff on the table, and then people seem to get surprised.”

CIOs should avoid specializing in legacy IT, he adds. “If you can’t amortize the cost of the software or the build, promise yourself that every new application that’s coming into the system is going to use the latest component,” Winston says.


Agentic AI’s rise is making the enterprise architect role more fluid

3 December 2025 at 05:00

In a previous feature about enterprise architects, gen AI had emerged, but its impact on enterprise technology hadn’t yet been felt. Today, gen AI has spawned a plethora of agentic AI solutions from the major SaaS providers, and enterprise architecture and the role of the enterprise architect are being redrawn. So what do CIOs and their architects need to know?

Organizations, especially their CEOs, have been vocal about the need for AI to improve productivity and bring back growth, and analysts have backed the trend. Gartner, for example, forecasts that 75% of IT work will be completed by human employees using AI over the next five years, which will demand, it says, a proactive approach to identifying new value-creating IT work, like expanding into new markets, creating additional products and services, or adding features that boost margins.

If this radical change in productivity takes place, organizations will need a new plan for business processes and the tech that operates those processes. Recent history shows if organizations don’t adopt new operating models, the benefits of tech investments can’t be achieved.

As a result of agentic AI, processes will change, as well as the software used by the enterprise, and the development and implementation of the technology. Enterprise architects, therefore, are at the forefront of planning and changing the way software is developed, customized, and implemented.

In some quarters of the tech industry, gen AI is seen as a radical change to enterprise software, and to its large, well-known vendors. “To say AI unleashed will destroy the software industry is absurd, as it would require an AI perfection that even the most optimistic couldn’t agree to,” says Diego Lo Giudice, principal analyst at Forrester. Speaking at the One Conference in the fall, Lo Giudice reminded 4,000 business technology leaders that change is taking place, but it’s built on the foundations of recent successes.

“Agile has given better alignment, and DevOps has torn down the wall between developers and operations,” he said. “They’re all trying to do the same thing, reduce the gap between an idea and implementation.” He’s not denying AI will change the development of enterprise software, but like Agile and DevOps, AI will improve the lifecycle of software development and, therefore, the enterprise architecture. The difference is the speed of change. “In the history of development, there’s never been anything like this,” adds Phil Whittaker, AI staff engineer at content management software provider Umbraco.

Complexity and process change

As the software development and customization cycle changes, and agentic applications become commonplace, enterprise architects will need to plan for increased complexity and new business processes. Existing business processes can’t continue if agentic AI is taking on tasks currently done manually by staff.

Again, Lo Giudice adds some levity to a debate that can often become heated, especially in the wake of major redundancies by AI leaders such as AWS. “The view that everyone will get a bot that helps them do their job is naïve,” he said at the One Conference. “Organizations will need to carry out a thorough analysis of roles and business processes to ensure they spend money and resources on deploying the right agents to the right tasks. Failure to do so will lead to agentic technology being deployed that’s not needed, can’t cope with complex tasks, and increases the cloud costs of the business.”

“It’s easy to build an agent that has access to really important information,” says Tiago Azevedo, CIO for AI-powered low-code platform provider OutSystems. “You need segregation of data. When you publish an agent, you need to be able to control it, and there’ll be many agents, so costs will grow.”

The big difference, though, is between deterministic and non-deterministic behavior, says Whittaker. Non-deterministic agents produce more variable outcomes, so they need guardrails in the form of deterministic agents that produce the same output every time. Deciding which business outcomes should be handled deterministically and which non-deterministically is a clear role for enterprise architecture. He adds that this is where AI can help organizations fill in gaps. Whittaker, who’s been an enterprise architect, says it’ll be vital for organizations to experiment with AI to see how it can benefit their architecture and, ultimately, business outcomes.
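
A minimal sketch of that guardrail pattern, assuming a hypothetical call_llm function standing in for whatever non-deterministic agent is in use, might look like this: a deterministic validator decides, identically every time, whether the variable output is allowed to proceed.

```python
import json

def call_llm(prompt: str) -> str:
    """Stand-in for a non-deterministic agent; output may differ on every call."""
    raise NotImplementedError

def validate_discount(payload: dict) -> bool:
    """Deterministic guardrail: same input, same verdict, every time."""
    return (set(payload) == {"customer_id", "discount_pct"}
            and isinstance(payload["discount_pct"], (int, float))
            and 0 <= payload["discount_pct"] <= 20)          # hypothetical business rule

def propose_discount(customer_id: str) -> dict:
    raw = call_llm(f"Propose a JSON discount offer for customer {customer_id}")
    payload = json.loads(raw)
    if not validate_discount(payload):
        raise ValueError("agent output rejected by deterministic guardrail")
    return payload
```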

“The path to greatness lies not in chasing hype or dismissing AI’s potential, but in finding the golden middle ground where value is truly captured,” write Gartner analysts Daryl Plummer and Alicia Mullery. “AI’s promise is undeniable, but realizing its full value is far from guaranteed. Our research reveals the sobering odds that only one in five AI initiatives achieve ROI, and just one in 50 deliver true transformation.” Further research also finds just 32% of employees trust the organization’s leadership to drive transformation. “Agents bring an additional component of complexity to architecture that makes the role so relevant,” Azevedo adds.

In the past, enterprise architects were focused on frameworks. Whittaker points out that new technology models will need to be understood and deployed by architects to manage an enterprise that comprises employees, applications, databases, and agentic AI. He cites MCP as one, as it provides a standard way to connect AI models to data sources, and simplifies the current tangle of bespoke integrations and RAG implementations. AI will also help architects with this new complexity. “There are tools for planning, requirements, creating epics, user stories, code generation, documenting code, and translating it,” added Lo Giudice.

New responsibilities

Agentic AI is now a core feature of every major EA tool, says Stéphane Vanrechem, senior analyst at Forrester. “These agents automate data validation, capability mapping, and artifact creation, freeing architects to focus on strategy and transformation.” He cites the technology of Celonis, SAP Signavio, and ServiceNow for their agentic integrations. Whittaker adds that the enterprise architect has become an important human in the loop to protect the organization and be responsible for the decisions and outcomes that agentic AI delivers.

Although some enterprise architects will see this as a collapse of their specialization, Whittaker thinks it broadens the scope of the role and makes them more T-shaped. “I can go deep in different areas,” he says. “Pigeon-holing people is never a great thing to do.”

Traditionally, architecture has suggested that something is planned, built, and then exists. The rise of agentic AI in the enterprise means the role of the enterprise architect is becoming more fluid as they continue to design and oversee construction. But the role will also involve continual monitoring and adjustment to the plan. Some call this orchestration, or perhaps it’s akin to map reading. An enterprise architect may plan a route, but other factors will alter the course. And just like weather or a fallen tree, which can lead to a route deviation, so too will enterprise architects plan and then lead when business conditions change.

Again, this new way of being an enterprise architect will be impacted by technology. Lo Giudice believes there’ll be increased automation, and Azevedo sides with the orchestration view, saying agents are built and a catalogue of them is created across the organization, which is an opportunity for enterprise architects and CIOs to be orchestrators.

Whatever the job title, Whittaker says enterprise architecture is more important than ever. “More people will become enterprise architects as more software is written by AI,” he says. “Then it’s an architectural role to coordinate and conduct the agents in front of you.” He argues that as technologists allow agents and AI to do the development work for them, the responsibility of architecting how agents and processes function broadens and becomes the responsibility of many more technologists.

“AI can create code for you, but it’s your responsibility to make sure it’s secure,” he adds. Rather than developing the code, technology teams will become architecture teams, checking and accepting the technology that AI has developed, and then managing its deployment into the business processes.

With shadow AI already embedded in organizations, Whittaker’s view shows the need for a team of enterprise architects that can help the business align with the AI agents it has deployed, while protecting customer data and the organization’s cybersecurity posture.

AI agents are redrawing the enterprise, and at the same time replanning the role of enterprise architects.

From cloud-native to AI-native: Why your infrastructure must be rebuilt for intelligence

1 December 2025 at 11:13

The cloud-native ceiling

For the past decade, the cloud-native paradigm — defined by containers, microservices and DevOps agility — served as the undisputed architecture of speed. As CIOs, you successfully used it to decouple monoliths, accelerate release cycles and scale applications on demand.

But today, we face a new inflection point. The major cloud providers are no longer just offering compute and storage; they are transforming their platforms to be AI-native, embedding intelligence directly into the core infrastructure and services. This is not just a feature upgrade; it is a fundamental shift that determines who wins the next decade of digital competition. If you continue to treat AI as a mere application add-on, your foundation will become an impediment. The strategic imperative for every CIO is to recognize AI as the new foundational layer of the modern cloud stack.

This transition from an agility-focused cloud-native approach to an intelligence-focused AI-native one requires a complete architectural and organizational rebuild. It is the CIO’s journey to the new digital transformation in the AI era. According to McKinsey’s “The state of AI in 2025: Agents, innovation and transformation,” while 80 percent of respondents set efficiency as an objective of their AI initiatives, the leaders of the AI era are those who view intelligence as a growth engine, often setting innovation and market expansion as additional, higher-value objectives.

The new architecture: Intelligence by design

The AI lifecycle — data ingestion, model training, inference and MLOps — imposes demands that conventional, CPU-centric cloud-native stacks simply cannot meet efficiently. Rebuilding your infrastructure for intelligence focuses on three non-negotiable architectural pillars:

1. GPU-optimization: The engine of modern compute

The single most significant architectural difference is the shift in compute gravity from the CPU to the GPU. AI models, particularly large language models (LLMs), rely on massive parallel processing for training and inference. GPUs, with their thousands of cores, are the only cost-effective way to handle this.

  • Prioritize acceleration: Establish a strategic layer to accelerate AI vector search and handle data-intensive operations. This ensures that every dollar spent on high-cost hardware is maximized, rather than wasted on idle or underutilized compute cycles.
  • A containerized fabric: Since GPU resources are expensive and scarce, they must be managed with surgical precision. This is where the Kubernetes ecosystem becomes indispensable, orchestrating not just containers, but high-cost specialized hardware.

2. Vector databases: The new data layer

Traditional relational databases are not built to understand the semantic meaning of unstructured data (text, images, audio). The rise of generative AI and retrieval augmented generation (RAG) demands a new data architecture built on vector databases.

  • Vector embeddings — the mathematical representations of data — are the core language of AI. Vector databases store and index these embeddings, allowing your AI applications to perform instant, semantic lookups. This capability is critical for enterprise-grade LLM applications, as it provides the model with up-to-date, relevant and factual company data, drastically reducing “hallucinations.”
  • This is the critical element that vector databases provide — a specialized way to store and query vector embeddings, bridging the gap between your proprietary knowledge and the generalized power of a foundation model.
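
To ground the retrieval step described in the list above, here is a minimal Python sketch of semantic lookup using plain NumPy cosine similarity in place of a real vector database. The embed() function is a deliberately crude character-count projection standing in for a trained embedding model; in production the index, filtering, and persistence would come from the vector database itself.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy stand-in for a real embedding model: a fixed random projection of
    character counts. Swap in an actual embedding model in practice."""
    proj = np.random.default_rng(0).normal(size=(256, dim))
    counts = np.zeros(256)
    for ch in text.lower():
        counts[ord(ch) % 256] += 1
    return counts @ proj

# Proprietary documents, embedded once; a vector database would store and index these.
docs = ["refund policy: customers may return goods within 30 days",
        "SLA: 99.9 percent uptime with 4-hour incident response",
        "onboarding guide for new field technicians"]
doc_vectors = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most semantically similar to the query (cosine similarity)."""
    q = embed(query)
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q) + 1e-9)
    return [docs[i] for i in np.argsort(-sims)[:k]]

# RAG step: prepend the retrieved passages to the LLM prompt so the model answers
# from current company data instead of hallucinating.
print(retrieve("what is our uptime commitment?"))
```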

3. The orchestration layer: Accelerating MLOps with Kubernetes

Cloud-native made DevOps possible; AI-native requires MLOps (machine learning operations). MLOps is the discipline of managing the entire AI lifecycle, which is exponentially more complex than traditional software due to the moving parts: data, models, code and infrastructure.

Kubernetes (K8s) has become the de facto standard for this transition. Its core capabilities — dynamic resource allocation, auto-scaling and container orchestration — are perfectly suited for the volatile and resource-hungry nature of AI workloads.

By leveraging Kubernetes for running AI/ML workloads, you achieve:

  • Efficient GPU orchestration: K8s ensures that expensive GPU resources are dynamically allocated based on demand, enabling fractional GPU usage (time-slicing or MIG) and multi-tenancy. This eliminates long wait times for data scientists and prevents costly hardware underutilization.
  • MLOps automation: K8s and its ecosystem (like Kubeflow) automate model training, testing, deployment and monitoring. This enables a continuous delivery pipeline for models, ensuring that as your data changes, your models are retrained and deployed without manual intervention. This MLOps layer is the engine of vertical integration, ensuring that the underlying GPU-optimized infrastructure is seamlessly exposed and consumed as high-level PaaS and SaaS AI services. This tight coupling ensures maximum utilization of expensive hardware while embedding intelligence directly into your business applications, from data ingestion to final user-facing features.
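
As a concrete illustration of handing a GPU workload to Kubernetes, the sketch below submits a one-GPU training Job using the official kubernetes Python client. It assumes a cluster with the NVIDIA device plugin exposing the nvidia.com/gpu resource; the image, namespace, and job names are hypothetical.

```python
from kubernetes import client, config

def submit_training_job() -> None:
    config.load_kube_config()  # or config.load_incluster_config() when running in-cluster

    container = client.V1Container(
        name="trainer",
        image="registry.example.com/llm-trainer:latest",       # hypothetical image
        command=["python", "train.py", "--epochs", "3"],
        resources=client.V1ResourceRequirements(
            requests={"nvidia.com/gpu": "1", "memory": "16Gi"},
            limits={"nvidia.com/gpu": "1"},   # whole GPU here; MIG/time-slicing enables fractions
        ),
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"team": "ml-platform"}),
        spec=client.V1PodSpec(restart_policy="Never", containers=[container]),
    )
    job = client.V1Job(
        api_version="batch/v1",
        kind="Job",
        metadata=client.V1ObjectMeta(name="llm-finetune-demo"),
        spec=client.V1JobSpec(template=template, backoff_limit=2),
    )
    client.BatchV1Api().create_namespaced_job(namespace="ml-workloads", body=job)

if __name__ == "__main__":
    submit_training_job()
```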

Competitive advantage: IT as the AI driver

The payoff for prioritizing this infrastructure transition is significant: a decisive competitive advantage. When your platform is AI-native, your IT organization shifts from a cost center focused on maintenance to a strategic business driver.

Key takeaways for your roadmap:

  1. Velocity: By automating MLOps on a GPU-optimized, Kubernetes-driven platform, you accelerate the time-to-value for every AI idea, allowing teams to iterate on models in weeks, not quarters.
  2. Performance: Infrastructure investments in vector databases and dedicated AI accelerators ensure your models are always running with optimal performance and cost-efficiency.
  3. Strategic alignment: By building the foundational layer, you are empowering the business, not limiting it. You are executing the vision outlined in “A CIO’s guide to leveraging AI in cloud-native applications,” positioning IT to be the primary enabler of the company’s AI vision, rather than an impediment.

Conclusion: The future is built on intelligence

The move from cloud-native to AI-native is not an option; it is a market-driven necessity. The architecture of the future is defined by GPU-optimization, vector databases and Kubernetes-orchestrated MLOps.

As CIO, your mandate is clear: lead the organizational and architectural charge to install this intelligent foundation. By doing so, you move beyond merely supporting applications to actively governing intelligence that spans and connects the entire enterprise stack. This intelligent foundation requires a modern, integrated approach. AI observability must provide end-to-end lineage and automated detection of model drift, bias and security risks, enabling AI governance to enforce ethical policies and maintain regulatory compliance across the entire intelligent stack. By making the right infrastructure investments now, you ensure your enterprise has the scalable, resilient and intelligent backbone required to truly harness the transformative power of AI. Your new role is to be the Chief Orchestration Officer, governing the engine of future growth.

This article is published as part of the Foundry Expert Contributor Network.

Benchmarking Chinese CPUs

By: Lewin Day
26 November 2025 at 19:00

When it comes to PCs, Westerners are most familiar with x86/x64 processors from Intel and AMD, with Apple Silicon taking up a significant market share, too. However, in China, a relatively new CPU architecture is on the rise. A fabless semiconductor company called Loongson has been producing chips with its LoongArch architecture since 2021. These chips remain rare outside China, but some in the West have been benchmarking them.

[Daniel Lemire] has recently blogged about the performance of the Loongson 3A6000, which debuted in late 2023. The chip was put through a range of simple benchmarking tests involving float processing and string transcoding operations. [Daniel] compared it to the Intel Xeon Gold 6338 from 2021, noting the Intel chip pretty much performed better across the board, which is no surprise given its higher clock speed. Meanwhile, the gang over at [Chips and Cheese] ran even more exhaustive tests on the same chip last year. The Loongson was put through typical tasks like compressing archives and encoding video. The outlet came to the conclusion that the chip was a little weaker than older CPUs like AMD’s Zen 2 line and Intel’s 10th-generation Core chips. It’s also limited as a four-core chip, while modern Intel and AMD lines often start at six cores as a minimum.
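
For readers who want to poke at something in the same spirit, here is a trivially simple Python micro-benchmark of float parsing and UTF-8 transcoding, roughly the kind of kernel [Daniel]’s tests exercise. His benchmarks are native-code and far more rigorous; this just gives a rough relative number to compare machines.

```python
import timeit

floats = ["3.14159", "2.71828", "6.022e23", "-1.6e-19"] * 25_000
text = "Benchmarking Loongson 3A6000 vs x86, one string at a time. " * 10_000

# Time parsing 100k decimal strings, ten passes.
parse_time = timeit.timeit(lambda: [float(s) for s in floats], number=10)
# Time a round-trip UTF-8 encode/decode of a ~600 kB string, 200 passes.
transcode_time = timeit.timeit(lambda: text.encode("utf-8").decode("utf-8"), number=200)

print(f"float parsing:   {parse_time:.3f} s")
print(f"UTF-8 transcode: {transcode_time:.3f} s")
```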

If you find yourself interested in Loongson’s product, don’t get too excited. They’re not exactly easy to lay your hands on outside of China, and even the company’s own website is difficult to access from beyond those shores. You might try reaching out to Loongson-oriented online communities if you seek such hardware.

Different CPU architectures have perhaps never been more relevant, particularly as we see the x86 stalwarts doing battle with the rise of desktop and laptop ARM processors. If you’ve found something interesting regarding another obscure kind of CPU, don’t hesitate to let the tipsline know!

“Is the internet broken?” Why X and even ChatGPT get dragged down when Cloudflare stops

21 November 2025 at 21:06

Who exactly is Cloudflare, the internet’s “power behind the throne”?

First, let’s unpack what the company Cloudflare actually does. In a single phrase, it is an enormous gatekeeper that stands at the front door of websites around the world, handling traffic control and security on their behalf. When we browse a website from a phone or PC, it feels as if our device connects directly to the site. In reality, much of today’s internet is not that simple. Many popular services, including X, ChatGPT, and Discord, place Cloudflare in front of their own servers as both a shield and an accelerator. In other words, our traffic travels from the user’s device through Cloudflare’s servers before it ever reaches the real server.

Why route traffic through such an intermediary at all? There are three main reasons. The first is sheer speed. Cloudflare operates data centers in cities all over the world. If you are in Japan and want to view a website hosted in the United States, fetching every piece of data from the American server takes time. Instead, Cloudflare temporarily stores a copy (a cache) of the website on its servers inside Japan. Because users receive the data from a nearby Cloudflare location, pages load remarkably fast. This technique is called a CDN (content delivery network), and it is essential to the smooth browsing we now take for granted.

The second reason is ironclad defense. Popular websites attract not only ordinary users but also malicious hackers, malware, and hostile traffic designed to knock servers offline. Cloudflare absorbs all of this in place of the real server, instantly distinguishing cyberattacks and abusive bots from legitimate visitors and letting only the latter through. To withstand the flood-of-traffic assaults known as DDoS attacks, a massive breakwater like Cloudflare is indispensable.

The third reason is address lookup for the internet. When we type “x.com” or “openai.com” into a browser, the computer needs to find out where that site lives on the network (its IP address). Cloudflare also provides DNS (Domain Name System), the mechanism that performs this name resolution. In short, Cloudflare single-handedly covers three functions at the core of running a website: the speed at which a site is delivered, the defense against attackers, and the directory service that tells clients where the site is. Roughly 20 percent of the world’s websites are said to use Cloudflare, which is why it is no exaggeration to call the company part of the internet’s basic infrastructure.
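
To see name resolution in action, the following minimal Python snippet (standard library only; the domain is just an example) asks the operating system’s resolver for x.com’s addresses. For Cloudflare-fronted sites these typically point at a nearby edge server rather than the origin.

```python
import socket

# Resolve a hostname to the IP addresses the local resolver returns.
# For sites fronted by a CDN such as Cloudflare, these are usually
# edge/proxy addresses, not the origin server itself.
for family, _, _, _, sockaddr in socket.getaddrinfo("x.com", 443, proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr[0])
```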

Why a failure at a single company takes down services worldwide

Now that Cloudflare’s strengths are clear, the next question is why X and ChatGPT become unusable at the same moment Cloudflare stops. Each service is run by a different company, with servers in different places. So why do they go down together, as if by agreement?

To understand this, picture the internet as a highway system. Services like X and ChatGPT are each attractive theme parks, the destinations. Cloudflare is the giant toll plaza and interchange on the highway leading to them. This toll plaza is extremely capable: cars with valid tickets pass through smoothly, while reckless or dangerous vehicles are forced off the road. That is precisely why each theme park has contracted to route its visitors through it.

Normally the system works flawlessly. But what happens if a system error causes every gate at this giant toll plaza to stay shut at once? The theme parks themselves are open for business, and nothing is wrong with their attractions. Yet if the only entrance leading to them is blocked, we, the visitors, cannot reach our destination. Traffic backs up, and the navigation screen in our car (the browser) displays the merciless message that the destination cannot be reached.

That is exactly what happens during a Cloudflare outage. X’s servers did not break, and ChatGPT’s AI did not run amok. The entrance sitting in front of those services, Cloudflare, closed, so nobody could get in. Technically, the browser often shows codes such as “502 Bad Gateway” or “500 Internal Server Error.” These are essentially a cry from the gatekeeper: “I received your request, but I could not connect you to the real server behind me.”

Building a global content delivery network and advanced security in-house costs enormous amounts of money and time, so for most services it is cheaper, faster, and safer to pay a specialist like Cloudflare to handle it. As a result, many companies made the rational choice to adopt Cloudflare. It is the right call as a business, but structurally it means everyone is hanging from the same single lifeline. When trouble strikes that lifeline, every service that depends on it, regardless of industry or country, gets knocked over like dominoes, and the world experiences it as a global “internet outage.”

What actually happened in digital space on 18 November 2025

So what exactly happened in the large-scale outage of 18 November 2025? Piecing together the detailed post-mortem published by Cloudflare CEO Matthew Prince and reporting from tech outlets such as The Verge, a butterfly-effect chain emerges in which a small trigger led to a massive system failure. The bottom line: this was not an external cyberattack, but an unexpected mistake in an internal system update.

It began with an update to Cloudflare’s bot management system, an important feature that detects and blocks access by automated programs (bots) rather than humans. Cloudflare’s engineers were hardening the security of the database behind this system, reorganizing access permissions and migrating to a safer configuration. It was routine improvement work that should not have caused any problems.

The change, however, produced an unexpected side effect. When the program that generates the rules (signatures) used to identify bots ran under the new settings, the information pulled from the database contained duplicates. It was as if, while writing out a recipe’s ingredient list, someone had accidentally listed “sugar” two or three times. As a result, the generated configuration file (the feature file) ballooned to roughly twice its normal size.

This bloated configuration file was automatically distributed to Cloudflare’s servers around the world. The software waiting on each server received the file and tried to load it, as usual. At that point a fatal problem occurred: the software’s code carried built-in assumptions about how large the file could be and how memory would be used. Fed a file far larger than anticipated, it exhausted its memory, blew past its processing limits, panicked, and crashed.

To make matters worse, Cloudflare’s system is designed to be highly resilient: when a process crashes, the system automatically tries to restart it. But on restart it loaded the very same oversized file. The result was a restart loop, starting up and crashing over and over, breaking out simultaneously on thousands of servers worldwide. That was the direct cause of the widespread outage.
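
The remediation ideas mentioned further down, stricter size checks and rolling back to a known-good version, can be illustrated with a small, purely conceptual Python sketch. This is not Cloudflare’s actual code; the size limit, file names, and loader are hypothetical.

```python
import json
import logging
import os

log = logging.getLogger("config-loader")

MAX_CONFIG_BYTES = 5 * 1024 * 1024   # hypothetical hard limit on the feature file
LAST_GOOD = "feature-file.last-good.json"

def load_feature_file(path: str) -> dict:
    """Load a generated config file, refusing oversized input and
    falling back to the last version that loaded successfully."""
    size = os.path.getsize(path)
    if size > MAX_CONFIG_BYTES:
        # Reject instead of crashing: an oversized file points to an
        # upstream generation bug (e.g. duplicated rows).
        log.error("config %s is %d bytes, over the %d-byte limit; falling back",
                  path, size, MAX_CONFIG_BYTES)
        with open(LAST_GOOD, encoding="utf-8") as fh:
            return json.load(fh)

    with open(path, encoding="utf-8") as fh:
        config = json.load(fh)

    # Only after a successful load does this version become the fallback.
    with open(LAST_GOOD, "w", encoding="utf-8") as fh:
        json.dump(config, fh)
    return config
```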

The outage began to bite around 11:20 UTC. Not only major services such as X, ChatGPT, Zoom, Spotify, and Canva became hard to reach; in an ironic twist, so did DownDetector, the site people use to check on outages. Cloudflare’s engineering team initially suspected the abnormal collapse in traffic meant a massive DDoS attack was under way. As the investigation progressed, however, they found no trace of an external attack and pinned the cause on the configuration file they themselves had shipped.

Once the cause was identified, the response was swift. Engineers stopped distribution of the faulty configuration file and swapped in a previous, known-good version. They also temporarily halted automatic updates to the bot management system to let things settle. Core traffic recovered within a few hours, and from late on the 18th into the early hours of the 19th Japan time, the internet gradually returned to normal. CEO Matthew Prince called it the company’s worst outage since 2019 and promised thorough measures to prevent a recurrence, including stricter size checks on configuration files, improvements to a kill switch that can shut a misbehaving system down globally at once, and a redesign so that collecting error logs cannot itself overload servers.

Cloudflare, for its part, says it will strengthen its validation processes and build mechanisms so that a misconfiguration cannot propagate worldwide in one step. But the underlying structure, in which a handful of giant providers concentrate so much of the internet’s behind-the-scenes machinery, will not change overnight.

There is not much we ordinary users can do. Still, the next time X or ChatGPT seems to be down, it helps to keep in mind that the problem may lie not with the service itself but with the backstage infrastructure, such as Cloudflare or a cloud provider. With that perspective, the news becomes much easier to follow.

The “internet is broken” uproar was a reminder that the modern online world, supported by convenient, fast, and secure infrastructure, is at the same time a world that depends heavily on a small number of critical hubs.

The quiet fragmentation: when cloud convenience robs enterprise systems of the “whole”

20 November 2025 at 19:47

Enterprise information systems used to resemble grand works of architecture. The IT department acted as the architect, drawing blueprints that spanned the entire company; core systems went through sweeping overhauls once a decade, and peripheral systems were consolidated and rebuilt on multi-year plans. It may have been constraining, but there was an order to it, a company-specific “grammar of integration” that included the knowledge and tacit rules accumulated through operations. Someone held the map of the whole and understood how data flowed and where responsibility lay. The rapid shift to the cloud, which has raced ahead of governance and whole-system design thinking, is shaking that premise to its foundations. Business units now choose, deploy, and discard the tools they judge best on their own. That very ease and autonomy is diluting the sense of integration that once existed inside companies and turning the overall system into a patchwork of stitched-together pieces. This article examines the quiet crisis of structural fragmentation facing today’s enterprise systems, its essence, and what can be done about it.

Fragmentation shows up not as an “outage” but as an everyday loss of consistency

When people hear “system trouble,” they picture dramatic, visible events: a company-wide network going down, or a data breach caused by a cyberattack. The modern fragmentation brought on by cloud and SaaS adoption does not appear as that kind of obvious catastrophe. It is more like an illness that progresses quietly but steadily inside what looks like an uneventful, ordinary day. The systems are running, no error messages appear, and the screens respond as usual. And yet a sense that things do not quite mesh keeps settling over the organization.

For example, two systems that are supposed to record the same event log slightly different timestamps. A batch job linking departments starts running late for no apparent reason, and in meetings nobody can say immediately which number is the correct one. Or a SaaS provider changes its API specification without notice; the processing still succeeds, but the meaning of the data being handed over has changed, and figures that no longer reflect reality keep being reported to management. Audit season arrives, and numbers that should reconcile perfectly on paper turn out to exist nowhere in the systems. Most of these incidents are never counted as system “failures.” Instead they are absorbed as invisible costs: staff patching things up by hand in Excel, or reconciling discrepancies over email.

This means the enterprise system is no longer a monolithic platform, nor a walled garden that can be governed by a single logic. Legacy systems built in-house, countless function-specific SaaS products, platforms managed by outsourcing partners, interfaces to business partners: they are combined and connected anew every day, and disconnected again when no longer needed. The pattern of connections shifts with every new tool and every contract renewal. An enterprise was always a social complex system in which people, work, and information intertwine. With the arrival of highly changeable cloud services, that complexity has mutated from a fixed structure into something like an amoeba that never stops changing shape. The old, almost pastoral assumption that a design drawn once would remain stable for years no longer holds. Fragmentation erodes the efficiency and reliability of daily operations from within, not as a system defect but like cholesterol silently building up in the organization’s arteries.

An accumulation of well-intentioned, “locally correct” decisions ends up distorting the whole

What makes the problem so intractable is that fragmentation is not caused by anyone’s negligence or malice. On the contrary, the actions of individual people on the ground are usually entirely rational from their own vantage point and grounded in good intentions that deserve praise. Herein lies the difficulty of modern IT governance.

The sales team adopts the latest SaaS best suited to customer management in order to hit its quarterly targets. Marketing signs up for a separate analytics tool to track shifts in the market. The IT department scrambles to minimize security risk with limited budget and staff, while executives demand the speed and change needed to keep up with competitors. Each department’s decision is correct, and coherent, for its own part. But in today’s highly interdependent system landscape, a pile of partially correct decisions does not necessarily add up to a correct whole. Worse, a paradox emerges: the harder each department pursues its local optimum, the more overall consistency is lost and the more distorted the company-wide picture of its systems becomes.

What accelerates this is the disappearance of boundaries and the difficulty of redrawing them. Integration used to mean drawing clear lines: who is responsible for what, and which system is the source of truth. But when SaaS products that resist governance are adopted department by department, fixing those lines becomes impossible. The data model of one SaaS can unintentionally become a constraint on another department’s workflow, and a specification change in a single tool can ripple into company-wide inventory rules. As technology and organization shape each other and the lines of authority blur, companies are losing the words to describe their own structure.

Perhaps this is the real vulnerability most companies face. However much the term DX (digital transformation) gets thrown around, and however expensive the visualization tools and dashboards, if the meaning and assumptions behind the information they display are not shared across the company, what executives see is merely fragments of numbers. One department’s definition of “customer” differs from another’s, yet the figures are consolidated anyway. Even a word like “revenue” is booked on subtly different criteria and timing from tool to tool. When systems are wired together while these semantic gaps are left unresolved, a company loses the ability to describe its own reality accurately. An organization that cannot explain its own structure cannot deliberately transform it. Complexity that has slipped out of control saps the ability to adapt to a changing environment and eventually erodes competitiveness itself.

Beyond technical integration: recovering the ability to imagine the whole

Should we, then, respond to this modern ailment of fragmentation by returning to the strong, centralized IT control of the past, with the IT department vetting every tool and enforcing standardization that admits no exceptions? Probably not. In a fast-changing era, forcing everything into one giant system or a single standard is unrealistic, and stripping the front lines of their autonomy would rob the company of its greatest weapon, agility. Distribution is a risk, but it is also a source of diverse value.

What matters is not to frame “front-line autonomy” and “whole-system consistency” as an either-or conflict, but to build a new framework that keeps a healthy tension between them. That requires raising our sights beyond the layer of technical integration, wiring APIs together and moving data, to a layer of semantic integration: how the structure is understood and retold. The essence of integration is not connecting disparate systems with cables; it is preserving the coherence of the company’s story. Where is each piece of information created, under what context and whose responsibility is it transformed, and whose decisions does it ultimately inform? Continuously sharing that flow of information (data lineage) and its chain of meaning across departmental walls is indispensable.

The one compass available here is the ability to imagine the whole. This does not mean a single brilliant architect memorizing every specification. It means the organization as a whole humbly accepting that its system landscape is too complex to grasp completely, and then continually redrawing the overall picture by pooling what each party can see. Management, IT, and the business units each put into words the world visible from their part; where the pieces do not fit, they treat it not as a system problem but as a problem of perception and definition, and talk it through. Meta-cognition at the organizational level, knowing what we do not know and where our blind spots are, is what deserves to be called governance today.

Tools come and go daily, and data keeps multiplying. In that torrent, no matter how finely the individual parts are polished, an organization without a view of the whole can only drift. For a company to keep its outline and keep changing on its own terms in a fragmenting world, investment in the human intelligence that retells its own structure and imagines the whole matters as much as, or more than, investment in technology. In this era of partial optimization blanketed by cloud and SaaS, that is the most unglamorous yet most essential and resilient competitive strength a company can have.

How to do a Security Review – An Example

By: Jo
16 November 2025 at 03:36
Learn how to perform a complete Security Review for new product features—from scoping and architecture analysis to threat modeling and risk assessment. Using a real-world chatbot integration example, this guide shows how to identify risks, apply security guardrails, and deliver actionable recommendations before release.

Yeske helped change what complying with zero trust means

7 November 2025 at 17:44

The Cybersecurity and Infrastructure Security Agency developed a zero trust architecture that features five pillars.

The Defense Department’s zero trust architecture includes seven pillars.

The one the Department of Homeland Security is implementing takes the best of both architectures and adds a little more to the mix.

Don Yeske, who recently left federal service after serving for the last two-plus years as the director of national security in the cyber division at DHS, said the agency had to take a slightly different approach for several reasons.


“If you look at OMB [memo] M-22-09 it prescribes tasks. Those tasks are important, but that itself is not a zero trust strategy. Even if you do everything that M-22-09 told us to do — and by the way, those tasks were due at the beginning of this year — even if you did it all, that doesn’t mean, goal achieved. We’re done with zero trust. Move on to the next thing,” Yeske said during an “exit” interview on Ask the CIO. “What it means is you’re much better positioned now to do the hard things that you had to do and that we hadn’t even contemplated telling you to do yet. DHS, at the time that I left, was just publishing this really groundbreaking architecture that lays out what the hard parts actually are and begins to attack them. And frankly, it’s all about the data pillar.”

The data pillar of zero trust is among the toughest ones. Agencies have spent much of the past two years focused on other parts of the architecture, like improving their cybersecurity capabilities in the identity and network pillars.

Yeske, who now is a senior solutions architect federal at Virtru, said the data pillar challenge for DHS is even bigger because of the breadth and depth of its mission. He said between the Coast Guard, FEMA, Customs and Border Protection and CISA alone, there are multiple data sources, requirements and security rules.

“What’s different about it is we viewed the problem of zero trust as coming in broad phases. Phase one, where you’re just beginning to think about zero trust, and you’re just beginning to adjust your approach, is where you start to take on the idea that my network boundary can’t be my primary, let alone sole line of defense. I’ve got to start shrinking those boundaries around the things that I’m trying to protect,” he said. “I’ve got to start defending within my network architecture, not just from the outside, but start viewing the things that are happening within my network with suspicion. Those are all building on the core tenets of zero trust.”

Capabilities instead of product focused

He said the initial zero trust strategy stopped there: segmenting networks and protecting data at rest.

But to get to this point, he said agencies too often are focused on implementing specific products around identity or authentication and authorization processes.

“It’s a fact that zero trust is something you do. It’s not something you buy. In spite of that, federal architecture has this pervasive focus on product. So at DHS, the way we chose to describe zero trust was as a series of capabilities. We chose, without malice or forethought, to measure those capabilities at the organization, not at the system, not at the component, not as a function of design,” Yeske said. “Organizations have capabilities, and those capabilities are comprised of three big parts: People. Who’s responsible for the thing you’re describing within your organization? Process. How have you chosen to do the thing that you’re describing at your organization? And products. What helps you do that?”

Yeske said the third part is technology, which, too often, is intertwined with the product part.

He said the DHS architecture moved away from focusing on product or technology, and instead tried to answer the simple, yet complex, questions: What’s more important right now? What are the things that I should spend my limited pool of dollars on?

“We built a prioritization mechanism, and we built it on the idea that each of those capabilities, once we understand their inherent relationships to one another, form a sort of Maslow’s hierarchy of zero trust. There are things that are more basic, that if you don’t do this, you really can’t do anything else, and there are things that are really advanced, that once you can do basically everything else you can contemplate doing this. And there are a lot of things in between,” he said. “We took those 46 capabilities based on their inherent logical relationships, and we came up with a prioritization scheme so that you could, if you’re an organization implementing zero trust, prioritize the products, process and technologies.”
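To make the “Maslow’s hierarchy” idea concrete, here is a minimal sketch, assuming a small hypothetical set of capabilities and dependencies rather than the actual DHS 46, of how priority tiers can be derived from a dependency graph: capabilities with no unmet dependencies come first, and everything else follows once its prerequisites are in place.

```python
from collections import defaultdict

# Hypothetical capability dependency graph: each capability lists the
# capabilities it depends on. Names are illustrative, not the DHS set of 46.
DEPENDS_ON = {
    "asset-inventory": [],
    "identity-governance": [],
    "phishing-resistant-mfa": ["identity-governance"],
    "network-microsegmentation": ["asset-inventory"],
    "data-tagging": ["asset-inventory"],
    "attribute-based-access": ["phishing-resistant-mfa", "data-tagging"],
}

def prioritize(depends_on):
    """Group capabilities into tiers so every dependency lands in an
    earlier tier than its dependents (Kahn's topological ordering)."""
    indegree = {cap: len(deps) for cap, deps in depends_on.items()}
    dependents = defaultdict(list)
    for cap, deps in depends_on.items():
        for dep in deps:
            dependents[dep].append(cap)
    current = [cap for cap, deg in indegree.items() if deg == 0]
    tiers = []
    while current:
        tiers.append(sorted(current))
        nxt = []
        for cap in current:
            for child in dependents[cap]:
                indegree[child] -= 1
                if indegree[child] == 0:
                    nxt.append(child)
        current = nxt
    return tiers

if __name__ == "__main__":
    for i, tier in enumerate(prioritize(DEPENDS_ON), start=1):
        print(f"Tier {i}: {', '.join(tier)}")
```

Run against this toy graph, the basic capabilities (asset inventory, identity governance) come out in tier 1 and attribute-based access only appears once everything it depends on is in place, which is the spending-priority logic the quote describes.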

Understanding cyber tool dependencies

DHS defined those 46 capabilities based on the organization’s ability to perform that function to protect its data, systems or network.

Yeske said, for example, with phishing-resistant, multi-factor authentication, DHS didn’t specify the technology or product needed, but just the end result of the ability to authenticate users using multiple factors that are resistant to phishing.

“We’re describing something your organization needs to be able to do because if you can’t do that, there are other things you need to do that you won’t be able to do. We just landed on 46, but that’s not actually all that weird. If you look at the Defense Department’s zero trust roadmap, it contains a similar number of things they describe as capability, which are somewhat different,” said Yeske, who spent more than 15 years working for the Navy and Marine Corps before coming to DHS. “We calculated a 92% overlap between the capabilities we described in our architecture and the ones DoD described. And the 8% difference is mainly because the DHS one is brand new. So just understanding that the definition of each of these capabilities also includes two types of relationships, a dependency, which is where you can’t have this capability unless you first had a different one.”

Yeske said before he left DHS in July, the zero trust architecture and framework had been approved for use and most of the components had a significant number of cyber capabilities in place.

He said the next step was assessing the maturity of those capabilities and figuring out how to move them forward.

Yeske said the DHS architecture should be available for other agencies to obtain if they are interested in this approach.

The post Yeske helped change what complying with zero trust means first appeared on Federal News Network.


Innovator Spotlight: Corelight

By: Gary
9 September 2025 at 12:24

The Network’s Hidden Battlefield: Rethinking Cybersecurity Defense. Modern cyber threats are no longer knocking at the perimeter – they’re already inside. The traditional security paradigm has fundamentally shifted, and CISOs...

The post Innovator Spotlight: Corelight appeared first on Cyber Defense Magazine.

Synergy between cyber security Mesh & the CISO role: Adaptability, visibility & control

By: slandau
22 July 2024 at 09:00

With over two decades of experience in the cyber security industry, I specialize in advising organizations on how to optimize their financial investments through the design of effective and cost-efficient cyber security strategies. Since the year 2000, I’ve had the privilege of collaborating with various channels and enterprises across the Latin American region, serving in multiple roles ranging from Support Engineer to Country Manager. This extensive background has afforded me a unique perspective on the evolving threat landscape and the shifting needs of businesses in the digital world.

The dynamism of technological advancements has transformed cyber security demands, necessitating more proactive approaches to anticipate and prevent threats before they can impact an organization. Understanding this ever-changing landscape is crucial for adapting to emerging security challenges.

In my current role as the Channel Engineering Manager for LATAM at Check Point, I also serve as part of the Cybersecurity Evangelist team under the office of our CTO. I am focused on merging technical skills with strategic decision-making, encouraging organizations to concentrate on growing their business while we ensure security.

The Cyber Security Mesh framework can safeguard businesses from unwieldy and next-generation cyber threats. In this interview, Check Point Security Engineering Manager Angel Salazar Velasquez discusses exactly how that works. Get incredible insights that you didn’t even realize you were missing. Read through this powerhouse interview and add another dimension to your organization’s security strategy!

Would you like to provide an overview of the Cyber Security Mesh framework and its significance?

The Cyber Security Mesh framework represents a revolutionary approach to addressing cyber security challenges in increasingly complex and decentralized network environments. Unlike traditional security models that focus on establishing a fixed ‘perimeter’ around an organization’s resources, the Mesh framework places security controls closer to the data, devices, and users requiring protection. This allows for greater flexibility and customization, more effectively adapting to specific security and risk management needs.

For CISOs, adopting the Cyber Security Mesh framework means a substantial improvement in risk management capabilities. It enables more precise allocation of security resources and offers a level of resilience that is difficult to achieve with more traditional approaches. In summary, the Mesh framework provides an agile and scalable structure for addressing emerging threats and adapting to rapid changes in the business and technology environment.

How does the Cyber Security Mesh framework differ from traditional cyber security approaches?

Traditionally, organizations have adopted multiple security solutions from various providers in the hope of building comprehensive defense. The result, however, is a highly fragmented security environment that can lead to a lack of visibility and complex risk management. For CISOs, this situation presents a massive challenge because emerging threats often exploit the gaps between these disparate solutions.

The Cyber Security Mesh framework directly addresses this issue. It is an architecture that allows for better interoperability and visibility by orchestrating different security solutions into a single framework. This not only improves the effectiveness in mitigating threats but also enables more coherent, data-driven risk management. For CISOs, this represents a radical shift, allowing for a more proactive and adaptive approach to cyber security strategy.

Could you talk about the key principles that underlie Cyber Security Mesh frameworks and architecture?

Understanding the underlying principles of Cyber Security Mesh is crucial for evaluating its impact on risk management. First, we have the principle of ‘Controlled Decentralization,’ which allows organizations to maintain control over their security policies while distributing implementation and enforcement across multiple security nodes. This facilitates agility without compromising security integrity.

Secondly, there’s the concept of ‘Unified Visibility.’ In an environment where each security solution provides its own set of data and alerts, unifying this information into a single coherent ‘truth’ is invaluable. The Mesh framework allows for this consolidation, ensuring that risk-related decision-making is based on complete and contextual information. These principles, among others, combine to provide a security posture that is much more resilient and adaptable to the changing needs of the threat landscape.
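To illustrate the “Unified Visibility” principle concretely, here is a minimal sketch, using hypothetical payload formats and field names rather than any vendor’s actual API, of normalizing alerts from two different tools into one common schema so that risk decisions can be made from a single, contextual view.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class UnifiedAlert:
    # Common schema every source is mapped into; field names are illustrative.
    source: str
    asset: str
    severity: int       # normalized 1 (low) to 5 (critical)
    category: str
    observed_at: datetime

def from_email_gateway(raw: dict) -> UnifiedAlert:
    # Hypothetical email-gateway payload: {"mailbox": ..., "verdict": ..., "ts": ...}
    severity = {"clean": 1, "suspicious": 3, "malicious": 5}[raw["verdict"]]
    return UnifiedAlert("email-gateway", raw["mailbox"], severity, "phishing",
                        datetime.fromtimestamp(raw["ts"], tz=timezone.utc))

def from_endpoint_agent(raw: dict) -> UnifiedAlert:
    # Hypothetical endpoint payload: {"host": ..., "score": 0-100, "ts": ...}
    return UnifiedAlert("endpoint", raw["host"], max(1, round(raw["score"] / 20)),
                        "malware", datetime.fromtimestamp(raw["ts"], tz=timezone.utc))

alerts = [
    from_email_gateway({"mailbox": "finance@example.com", "verdict": "malicious", "ts": 1700000000}),
    from_endpoint_agent({"host": "laptop-042", "score": 87, "ts": 1700000300}),
]

# One coherent, sortable view across tools instead of per-product consoles.
for alert in sorted(alerts, key=lambda a: a.severity, reverse=True):
    print(alert)
```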

How does the Cyber Security Mesh framework align with or complement Zero Trust?

The convergence of Cyber Security Mesh and the Zero Trust model is a synergy worth exploring. Zero Trust is based on the principle of ‘never trust, always verify,’ meaning that no user or device is granted default access to the network, regardless of its location. Cyber Security Mesh complements this by decentralizing security controls. Instead of having a monolithic security perimeter, controls are applied closer to the resource or user, allowing for more granular and adaptive policies.

This combination enables a much more dynamic approach to mitigating risks. Imagine a scenario where a device is deemed compromised. In an environment that employs both Mesh and Zero Trust, this device would lose its access not only at a global network level but also to specific resources, thereby minimizing the impact of a potential security incident. These additional layers of control and visibility strengthen the organization’s overall security posture, enabling more informed and proactive risk management.
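As a concrete illustration of that scenario, the following minimal sketch (with entirely hypothetical resource policies and trust attributes) evaluates access per resource rather than at a single perimeter, and denies a compromised device everywhere.

```python
# Minimal sketch of a per-resource access decision combining Zero Trust
# ("never trust, always verify") with mesh-style controls placed close to
# each resource. Policy names and attributes are illustrative assumptions.

RESOURCE_POLICIES = {
    "hr-payroll": {"min_device_trust": 4, "allowed_roles": {"hr"}},
    "public-wiki": {"min_device_trust": 1, "allowed_roles": {"hr", "engineering", "sales"}},
}

def is_access_allowed(resource: str, user_role: str, device_trust: int,
                      device_compromised: bool) -> bool:
    """Evaluate access at the resource, not at the network perimeter."""
    if device_compromised:
        return False  # a compromised device loses access to every resource
    policy = RESOURCE_POLICIES[resource]
    return (user_role in policy["allowed_roles"]
            and device_trust >= policy["min_device_trust"])

# The same user and device get different answers per resource.
print(is_access_allowed("hr-payroll", "hr", device_trust=3, device_compromised=False))   # False
print(is_access_allowed("public-wiki", "hr", device_trust=3, device_compromised=False))  # True
print(is_access_allowed("public-wiki", "hr", device_trust=3, device_compromised=True))   # False
```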

How does the Cyber Security Mesh framework address the need for seamless integration across diverse technologies and platforms?

The Cyber Security Mesh framework is especially relevant today, as it addresses a critical need for seamless integration across various technologies and platforms. In doing so, it achieves Comprehensive security coverage, covering all potential attack vectors, from endpoints to the cloud. This approach also aims for Consolidation, as it integrates multiple security solutions into a single operational framework, simplifying management and improving operational efficiency.

Furthermore, the mesh architecture promotes Collaboration among different security solutions and products. This enables a quick and effective response to any threat, facilitated by real-time threat intelligence that can be rapidly shared among multiple systems. At the end of the day, it’s about optimizing security investment while facing key business challenges, such as breach prevention and secure digital transformation.

Can you discuss the role of AI and Machine Learning within the Cyber Security Mesh framework/architecture?

Artificial Intelligence (AI) and Machine Learning play a crucial role in the Cyber Security Mesh ecosystem. These technologies enable more effective and adaptive monitoring, while providing rapid responses to emerging threats. By leveraging AI, more effective prevention can be achieved, elevating the framework’s capabilities to detect and counter vulnerabilities in real-time.

From an operational standpoint, AI and machine learning add a level of automation that not only improves efficiency but also minimizes the need for manual intervention in routine security tasks. In an environment where risks are constantly evolving, this agility and ability to quickly adapt to new threats are invaluable. These technologies enable coordinated and swift action, enhancing the effectiveness of the Cyber Security Mesh.

What are some of the challenges or difficulties that organizations may see when trying to implement Mesh?

The implementation of a Cyber Security Mesh framework is not without challenges. One of the most notable obstacles is the inherent complexity of this mesh architecture, which can hinder effective security management. Another significant challenge is the technological and knowledge gap that often arises in fragmented security environments. Added to these is the operational cost of integrating and maintaining multiple security solutions in an increasingly diverse and dynamic ecosystem.

However, many of these challenges can be mitigated if a robust technology offering centralized management is in place. This approach reduces complexity and closes the gaps, allowing for more efficient and automated operation. Additionally, a centralized system can offer continuous learning as it integrates intelligence from various points into a single platform. In summary, centralized security management and intelligence can be the answer to many of the challenges that CISOs face when implementing the Cyber Security Mesh.

How does the Cyber Security Mesh Framework/Architecture impact the role of traditional security measures, like firewalls and IPS?

Cyber Security Mesh has a significant impact on traditional security measures like firewalls and IPS. In the traditional paradigm, these technologies act as gatekeepers at the entry and exit points of the network. However, with the mesh approach, security is distributed and more closely aligned with the fluid nature of today’s digital environment, where perimeters have ceased to be fixed.

Far from making them obsolete, the Cyber Security Mesh framework allows firewalls and IPS to transform and become more effective. They become components of a broader and more dynamic security strategy, where their intelligence and capabilities are enhanced within the context of a more flexible architecture. This translates into improved visibility, responsiveness, and adaptability to new types of threats. In other words, traditional security measures are not eliminated, but integrated and optimized in a more versatile and robust security ecosystem.

Can you describe real-world examples that show the use/success of the Cyber Security Mesh Architecture?

Absolutely! In a company that had adopted a Cyber Security Mesh architecture, a sophisticated multi-vector attack was detected targeting its employees through various channels: corporate email, Teams, and WhatsApp. The attack included a malicious file that exploited a zero-day vulnerability. The first line of defense, ‘Harmony Email and Collaboration,’ intercepted the file in the corporate email and identified it as dangerous by leveraging its Sandboxing technology and updated the information in its real-time threat intelligence cloud.

When an attempt was made to deliver the same malicious file through Microsoft Teams, the company was already one step ahead. The security architecture also extends to collaboration platforms, so the file was immediately blocked before it could cause harm. Almost simultaneously, another employee received an attack attempt through WhatsApp, which was neutralized by the mobile device security solution, aligned with the same threat intelligence cloud.

This comprehensive and coordinated security strategy demonstrates the strength and effectiveness of the Cyber Security Mesh approach, which allows companies to always be one step ahead, even when facing complex and sophisticated multi-vector attacks. The architecture allows different security solutions to collaborate in real-time, offering effective defense against emerging and constantly evolving threats.

The result is solid security that blocks multiple potential entry points before they can be exploited, thus minimizing risk and allowing the company to continue its operations without interruption. This case exemplifies the potential of a well-implemented and consolidated security strategy, capable of addressing the most modern and complex threats.
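The coordination described in that example can be sketched roughly as a shared indicator store that every channel consults: once one channel convicts a file, every other channel blocks the same hash immediately. The example below is a minimal illustration with made-up channel names and a simplified conviction flow, not Check Point’s actual ThreatCloud API.

```python
import hashlib

class ThreatIntelCloud:
    """Toy shared threat-intelligence store keyed by file hash."""

    def __init__(self):
        self._malicious_hashes = set()

    def convict(self, file_bytes: bytes) -> str:
        digest = hashlib.sha256(file_bytes).hexdigest()
        self._malicious_hashes.add(digest)
        return digest

    def is_malicious(self, file_bytes: bytes) -> bool:
        return hashlib.sha256(file_bytes).hexdigest() in self._malicious_hashes

intel = ThreatIntelCloud()
payload = b"...attachment bytes..."

# Email sandboxing convicts the file first...
intel.convict(payload)

# ...so the collaboration and mobile channels block the same file instantly.
for channel in ("teams", "whatsapp"):
    action = "block" if intel.is_malicious(payload) else "deliver"
    print(f"{channel}: {action}")
```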

Is there anything else that you would like to share with the CyberTalk.org audience?

To conclude, the Cyber Security Mesh approach aligns well with the three key business challenges that every CISO faces:

Breach and Data Leak Prevention: The Cyber Security Mesh framework is particularly strong in offering an additional layer of protection, enabling effective prevention against emerging threats and data breaches. This aligns perfectly with our first ‘C’ of being Comprehensive, ensuring security across all attack vectors.

Secure Digital and Cloud Transformation: The flexibility and scalability of the Mesh framework make it ideal for organizations in the process of digital transformation and cloud migration. Here comes our second ‘C’, which is Consolidation. We offer a consolidated architecture that unifies multiple products and technologies, from the network to the cloud, thereby optimizing operational efficiency and making digital transformation more secure.

Security Investment Optimization: Finally, the operational efficiency achieved through a Mesh architecture helps to optimize the security investment. This brings us to our third ‘C’ of Collaboration. The intelligence shared among control points, powered by our ThreatCloud intelligence cloud, enables quick and effective preventive action, maximizing the return on security investment.

In summary, Cyber Security Mesh is not just a technological solution, but a strategic framework that strengthens any CISO’s stance against current business challenges. It ideally complements our vision and the three C’s of Check Point, offering an unbeatable value proposition for truly effective security.

The post Synergy between cyber security Mesh & the CISO role: Adaptability, visibility & control appeared first on CyberTalk.

Contain Breaches and Gain Visibility With Microsegmentation

1 February 2023 at 09:00

Organizations must grapple with challenges from various market forces. Digital transformation, cloud adoption, hybrid work environments and geopolitical and economic challenges all have a part to play. These forces have especially manifested in more significant security threats to expanding IT attack surfaces.

Breach containment is essential, and zero trust security principles can be applied to curtail attacks across IT environments, minimizing business disruption proactively. Microsegmentation has emerged as a viable solution through its continuous visualization of workload and device communications and policy creation to define what communications are permitted. In effect, microsegmentation restricts lateral movement, isolates breaches and thwarts attacks.

Given the spotlight on breaches and their impact across industries and geographies, how can segmentation address the changing security landscape and client challenges? IBM and its partners can help in this space.

Breach Landscape and Impact of Ransomware

Historically, security solutions have focused on the data center, but new attack targets have emerged as enterprises move to the cloud and introduce technologies like containerization and serverless computing. Not only are breaches occurring and attack surfaces expanding, but it has also become easier for breaches to spread. Traditional prevention and detection tools provided surface-level visibility into the traffic flows connecting applications, systems and devices communicating across the network. However, they were not intended to contain and stop the spread of breaches.

Ransomware is particularly challenging, as it presents a significant threat to cyber resilience and financial stability. A successful attack can take a company’s network down for days or longer and lead to the loss of valuable data to nefarious actors. The Cost of a Data Breach 2022 report, conducted by the Ponemon Institute and sponsored by IBM Security, cites $4.54 million as the average ransomware attack cost, not including the ransom itself.

In addition, a recent IDC study highlights that ransomware attacks are evolving in sophistication and value. Sensitive data is being exfiltrated at a higher rate as attackers go after the most valuable targets for their time and money. Ultimately, the cost of a ransomware attack can be significant, leading to reputational damage, loss of productivity and regulatory compliance implications.

Organizations Want Visibility, Control and Consistency

With a focus on breach containment and prevention, hybrid cloud infrastructure and application security, security teams are expressing their concerns. Three objectives have emerged as vital for them.

First, organizations want visibility. Gaining visibility empowers teams to understand their applications and data flows regardless of the underlying network and compute architecture.

Second, organizations want consistency. Fragmented and inconsistent segmentation approaches create complexity, risk and cost. Consistent policy creation and strategy help align teams across heterogeneous environments and facilitate the move to the cloud with minimal re-writing of security policy.

Finally, organizations want control. Solutions that help teams target and protect their most critical assets deliver the greatest return. Organizations want to control communications through selectively enforced policies that can expand and improve as their security posture matures towards zero trust security.

Microsegmentation Restricts Lateral Movement to Mitigate Threats

Microsegmentation (or simply segmentation) combines practices, enforced policies and software that provide user access where required and deny access everywhere else. Segmentation contains the spread of breaches across the hybrid attack surface by continually visualizing how workloads and devices communicate. In this way, it creates granular policies that only allow necessary communication and isolate breaches by proactively restricting lateral movement during an attack.
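As a concrete illustration of “only allow necessary communication,” here is a minimal default-deny policy sketch with hypothetical workload labels and ports; real segmentation platforms express this with richer, label- and context-based rules.

```python
# Allowlist-style microsegmentation policy: only the listed
# workload-to-workload flows are permitted, everything else is denied,
# which is what restricts lateral movement during an attack.

ALLOWED_FLOWS = {
    ("web-frontend", "payments-api", 443),
    ("payments-api", "payments-db", 5432),
    ("monitoring", "payments-api", 9090),
}

def is_flow_allowed(src_workload: str, dst_workload: str, dst_port: int) -> bool:
    """Default-deny: a flow is permitted only if explicitly listed."""
    return (src_workload, dst_workload, dst_port) in ALLOWED_FLOWS

# A compromised web front end cannot reach the database directly,
# so the blast radius stays contained.
print(is_flow_allowed("web-frontend", "payments-api", 443))  # True
print(is_flow_allowed("web-frontend", "payments-db", 5432))  # False
```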

The National Institute of Standards and Technology (NIST) highlights microsegmentation as one of three key technologies needed to build a zero trust architecture, a framework for an evolving set of cybersecurity paradigms that move defense from static, network-based perimeters to users, assets and resources.

Suppose existing detection solutions fail and security teams lack granular segmentation. In that case, malicious software can enter their environment, move laterally, reach high-value applications and exfiltrate critical data, leading to catastrophic outcomes.

Ultimately, segmentation helps clients respond by applying zero trust principles like ‘assume a breach,’ helping them prepare in the wake of the inevitable.

IBM Launches Segmentation Security Services

In response to growing interest in segmentation solutions, IBM has expanded its security services portfolio with IBM Security Application Visibility and Segmentation Services (AVS). AVS is an end-to-end solution combining software with IBM consulting and managed services to meet organizations’ segmentation needs. Regardless of where applications, data and users reside across the enterprise, AVS is designed to give clients visibility into their application network and the ability to contain ransomware and protect their high-value assets.

AVS will walk you through a guided experience to align your stakeholders on strategy and objectives, define the schema to visualize desired workloads and devices, and build the segmentation policies that govern network communications and ring-fence critical applications from unauthorized access. Once the segmentation policies are defined and solutions deployed, clients can consume steady-state services for ongoing management of their environment’s workloads and applications. This includes health and maintenance, policy and configuration management, service governance and vendor management.

IBM has partnered with Illumio, an industry leader in zero trust segmentation, to deliver this solution.  Illumio’s software platform provides attack surface visibility, enabling you to see all communication and traffic between workloads and devices across the entire hybrid attack surface. In addition, it allows security teams to set automated, granular and flexible segmentation policies that control communications between workloads and devices, only allowing what is necessary to traverse the network. Ultimately, this helps organizations to quickly isolate compromised systems and high-value assets, stopping the spread of an active attack.

With AVS, clients can harden compute nodes across their data center, cloud and edge environments and protect their critical enterprise assets.

Start Your Segmentation Journey

IBM Security Services can help you plan and execute a segmentation strategy to meet your objectives. To learn more, register for the on-demand webinar now.

The post Contain Breaches and Gain Visibility With Microsegmentation appeared first on Security Intelligence.

Seth Rogen’s ‘High-ly Creative Retreat’ Airbnb Begins Booking

8 February 2023 at 08:00

Feel like taking your creativity level… a bit higher? Available for booking beginning this week, Seth Rogen partnered with Airbnb to unveil “A High-ly Creative Retreat,” providing a unique getaway in Los Angeles with ceramic activities.

The retreat features a ceramic studio with Rogen’s own handmade pottery, a display of his cannabis and lifestyle company Houseplant’s unique Housegoods, as well as mid-century furnishings, and “sprawling views of the city.”

The Airbnb is probably a lot cheaper than you think: Rogen will host three one-night stays on February 15, 16, and 17 for two guests each for just $42—one decimal point away from 420—with some restrictions. U.S. residents can book an overnight stay at Rogen’s Airbnb beginning Feb. 7, but book now, because it’s doubtful that open slots will last.

“I don’t know what’s more of a Houseplant vibe than a creative retreat at a mid-century Airbnb filled with our Housegoods, a pottery wheel, and incredible views of LA,” Rogen said. “Add me, and you’ll have the ultimate experience.”

According to the listing, and his Twitter account, Rogen will be there to greet people and even do ceramics together.

“I’m teaming up with Airbnb so you (or someone else) can hang out with me and spend the night in a house inspired by my company,” Rogen tweeted recently.

I'm teaming up with @airbnb so you (or someone else) can hang out with me and spend the night in a house inspired by my company Houseplant. https://t.co/7XFoY5vgm9 pic.twitter.com/ukW1UxnEm5

— Seth Rogen (@Sethrogen) January 31, 2023

Guests will be provided with the following activities:

  • Get glazed in the pottery studio and receive pointers from Rogen himself!
  • Peruse a selection of Rogen’s own ceramic masterpieces, proudly displayed within the mid-century modern home.
  • Relax and revel in the sunshine of the space’s budding yard.
  • Tune in and vibe out to a collection of Houseplant record sets with specially curated tracklists by Seth Rogen & Evan Goldberg and inspired by different cannabis strains. Guests will get an exclusive first listen to their new Vinyl Box Set Vol. 2.
  • Satisfy cravings with a fully-stocked fridge for after-hours snacks.

Airbnb plans to join in on Rogen’s charity efforts, including his non-profit Hilarity for Charity, focusing on helping people living with Alzheimer’s disease.

“In celebration of this joint effort, Airbnb will make a one-time donation to Hilarity for Charity, a national non-profit on a mission to care for families impacted by Alzheimer’s disease, activate the next generation of Alzheimer’s advocates, and be a leader in brain health research and education,” Airbnb wrote.

In 2021, Rogen launched Houseplant, his cannabis and lifestyle company, in the U.S. But the cannabis brand’s web traffic was so high that the site crashed. Houseplant was founded by Rogen and his childhood friend Evan Goldberg, along with Michael Mohr, James Weaver, and Alex McAtee.

Yahoo! News reports, however, that Airbnb does not (cough, cough) allow cannabis on the premises of listings. The listing will nonetheless be filled with goods from Houseplant. Houseplant also sells luxury paraphernalia with a “mid-century modern spin.”

Seth Rogen recently invited Architectural Digest to present a tour of the Houseplant headquarters’ interior decor and operations. Houseplant’s headquarters is located in a 1918 bungalow in Los Angeles. Architectural Digest describes it as “Mid-century-modern-inspired furniture creates a cozy but streamlined aesthetic.”

People living in the U.S. can request to book stays at airbnb.com/houseplant. Guests are responsible for their own travel to and from Los Angeles, California, and must comply with applicable COVID-19 rules and guidelines.

See Rogen’s listing on the Airbnb site.

If you can’t find your way in, Airbnb provides over 1,600 other creative spaces available around the globe.

The post Seth Rogen’s ‘High-ly Creative Retreat’ Airbnb Begins Booking appeared first on High Times.

Unconsidered benefits of a consolidation strategy every CISO should know

By: slandau
27 January 2023 at 15:45

Pete has 32 years of security, network and MSSP experience, has been a hands-on CISO for the last 17 years and joined Check Point as Field CISO of the Americas. Pete’s cloud security deployments and designs have been rated by Gartner as #1 and #2 in the world, and he literally “wrote the book,” contributing to secure cloud reference designs published in Intel Press: “Building the Infrastructure for Cloud Security: A Solutions View.”

In this interview, Check Point’s Field CISO, Pete Nicoletti, shares insights into cyber security consolidation. Should your organization move ahead with a consolidated approach? Or maybe a workshop would be helpful. Don’t miss Pete Nicoletti’s perspectives.

What kinds of struggles and challenges are the organizational security leaders that you’re working with currently seeing?

Many! As members of the World Economic Forum Council for the Connected World, we drilled into this exact question, interviewed hundreds of executives and created a detailed report. The key findings: economic issues, IoT risks, an increase in ransomware and security personnel shortages are all impacting budgets. Given these issues, our council recommended that security spend remain a priority, even in challenging times, since we all know that security incidents cost 10x to 100x versus budgeted expenditures.

How are CISOs currently building out or transitioning their information security programs? What kinds of results are they seeing?

In challenging times, CISOs are looking hard at their tool sets to see if there are overlapping, redundant or underutilized tools. CISOs are also evaluating their “playbooks” to ensure that the tools in use are efficient and streamlined. They are also keen to negotiate ELAs that give them lower costs with the flexibility to choose from a suite of tools to support the “speed of business.”

Security teams need to be trained and certified on their tools in use, and those budgets are under pressure. All these drivers lead to tool consolidation projects. Our customers are frequently very pleased with the normally mutually exclusive benefits of cost savings and better efficacy once a consolidation program is launched.

What are the key considerations for CISOs in deciding on whether or not to consolidate information security solutions? Can CISOs potentially lose capabilities when consolidating security and if so, how can this be addressed, if at all?

Losing features when consolidating is a valid concern; however, we typically find more advantages after consolidation: lower training costs, higher staff satisfaction, fewer mistakes made and, the real gem, higher security program efficacy. We also see our customers leveraging the cloud and needing to extend their security protections quickly and easily, and our Check Point portfolio supports this using one console. With all the news of our peers experiencing exploited security vulnerabilities and other challenges, we are continuing to gain market share and happy customers.

How should CISOs go about deciding on whether or not to consolidate cyber security? Beyond cost, what should CISOs think about?

The number one consideration should be the efficacy of the program. CISOs are realizing that very small differences in efficacy lead to very large cost savings. The best security tool for the job should always be selected knowing this. An inventory of tools and the jobs they are doing should be created and maintained. Frequently, CISOs find dozens of tools that are redundant, overlap with others, add unnecessary complexity, or are poorly deployed or managed and not integrated into the program. Once the inventory is completed, work with your expert consultant or reseller to review it, find redundancies or overlaps, and kick off a program to evaluate the technical and cost benefits.
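One lightweight way to begin the inventory described above is sketched below, using made-up tool names and capability categories: record which jobs each tool covers, then flag any category served by more than one tool for a redundancy review.

```python
from collections import defaultdict

# Hypothetical tool inventory: each tool maps to the jobs (capability
# categories) it covers. Names and categories are illustrative only.
TOOL_INVENTORY = {
    "ToolA-EDR": {"endpoint-protection", "threat-hunting"},
    "ToolB-AV": {"endpoint-protection"},
    "ToolC-EmailGW": {"email-security", "sandboxing"},
    "ToolD-Sandbox": {"sandboxing"},
    "ToolE-SIEM": {"log-analytics"},
}

def find_overlaps(inventory):
    """Return categories covered by more than one tool."""
    by_category = defaultdict(list)
    for tool, categories in inventory.items():
        for category in categories:
            by_category[category].append(tool)
    return {cat: tools for cat, tools in by_category.items() if len(tools) > 1}

for category, tools in sorted(find_overlaps(TOOL_INVENTORY).items()):
    print(f"{category}: covered by {', '.join(sorted(tools))} -> review for consolidation")
```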

What can organizations achieve with a consolidated cyber security solution?

As mentioned previously, the number one goal of the program should be improving efficacy, and our customers do report this. Better efficacy lowers the number of false positives, lowers the number of real events and decreases overall risk. Other savings come from lower training costs, faster run book execution, fewer mistakes and the ability to free security analysts from wasting time on inefficient processes. Those analysts can then be redirected to more productive efforts and ensure that business growth and strategies are better supported.

As a seasoned professional, when you’ve worked with CISOs and security teams in moving to a consolidated solution, what’s gone right, what’s gone wrong, and what lessons can you share with newbie security leaders?

Any significant change in your tool set needs careful consideration and evaluation. Every new tool needs to be tested in the lab and moved, as appropriate, into production. You need to find all the gotchas with any new tool going inline before they have a cost impact.

Don’t rush this testing step! Ensure that you have good measurements of your current program so you can easily determine improvements with new tools or consolidation efforts.

If CISOs decide against consolidation, how can they drive better value through existing solutions?

Ensure that the solutions you are using are fully deployed and optimized. We frequently uncover many tools that are underutilized and ineffective. Sit with your staff and watch their work. If they are cutting and pasting, logging into and out of multiple tools, not having the time to address every alert, or are making excessive mistakes, it may be time to have Check Point come in and do a workshop. Our very experienced team will review the current program and provide thoughts and ideas to improve the program. Even if consolidation is not selected, other findings may help improve the program!

Are there any other actionable insights that you would like to share with cyber security leaders?

Every security program is different, and your challenges are unique. But you can’t know everything, so consider working with your trusted partners and inviting Check Point in to do a free discovery workshop. Cloud maturity, consolidation program consideration, Zero Trust program formulation and many other workshops are available. As a CISO, you may have some initiatives that need extra validation, and we are standing by to help propel your program.

And for an even stronger security strategy, be sure to attend Check Point’s upcoming CPX 360 event. Register here.

Lastly, to receive cutting-edge cyber security news, best practices and resources in your inbox each week, please sign up for the CyberTalk.org newsletter. 

The post Unconsidered benefits of a consolidation strategy every CISO should know appeared first on CyberTalk.
