
Column | Data management is changing: 5 things 'in' and 5 things 'out' for 2026

19 January 2026 at 02:28

The data landscape is changing faster than most organizations can keep up with. Two interlocking forces are accelerating the pace: enterprise data management practices that are steadily maturing, and AI platforms that demand a higher level of consistency, coherence and trust in the data enterprises use.

As a result, 2026 is shaping up to be the year when companies stop tinkering at the edges and begin transforming the core structures of data management in earnest. A clearer sense is emerging of what is needed in data management and what is not, and it reflects a market that is tired of fragmented tooling, manual oversight and dashboards that fail to deliver real intelligence.

Here is a rundown of what is 'in' and what is 'out' for data management in 2026.

In #1: Native governance grounded in human judgment

Data governance is no longer a bolt-on exercise. Platforms such as Unity Catalog, Snowflake Horizon and the AWS Glue Catalog are building governance directly into the foundation of the architecture. This reflects the realization that external governance layers add friction and fall short when it comes to managing data consistently end to end. The core of the newly established pattern is native automation: data quality checks, anomaly alerts and usage monitoring run continuously in the background, catching changes across the environment at a speed no human can match.

Yet this automation does not replace human judgment. Tools diagnose the issues, but people still decide how severity is defined, which SLAs matter and how escalation paths are designed. The industry is shifting to a structure in which tools handle detection while humans handle meaning and accountability. It marks a departure from the notion that governance will someday be fully automated; instead, organizations are making the most of native technology while reaffirming the value of human decision-making.

In #2: Platform consolidation and the rise of the post-warehouse lakehouse

The era of stitching together dozens of specialized data tools is coming to an end, because the decentralized mindset has hit the limits of complexity. For years, companies have combined ingestion systems, pipelines, catalogs, governance layers, warehouse engines and dashboard tools. The result is an environment that is expensive to maintain, structurally fragile and far harder to govern than expected.

Databricks, Snowflake and Microsoft see an opportunity in this and are extending their platforms into unified environments. The lakehouse has become the architectural north star: it handles structured and unstructured data on a single platform and spans analytics, machine learning and AI training. Companies no longer want to move data between silos or juggle incompatible systems at the same time. What they need is a central operating environment that reduces friction, simplifies security and accelerates AI development. Platform consolidation is no longer a question of vendor lock-in; it is increasingly accepted as a matter of survival in a world where data volumes are exploding and AI demands more consistency than ever.

In #3: End-to-end pipeline management with zero ETL

Hand-coded ETL (extract, transform, load) is effectively entering its final chapter. ETL refers to the process of extracting data scattered across systems, transforming it into a form suitable for analysis and loading it into a repository such as a data warehouse or lake. Python scripts and custom SQL jobs offer flexibility, but they break at the smallest change and demand constant care from engineers. Managed pipeline tools are quickly filling that gap: Databricks Lakeflow, Snowflake Openflow and AWS Glue present a new generation of orchestration that covers everything from extraction through monitoring and failure recovery.

Handling complex source systems remains a challenge, but the direction is clear. Companies want pipelines that maintain themselves; they expect fewer moving parts and fewer overnight failures caused by an overlooked script. Some organizations are choosing to bypass pipelines altogether: zero-ETL patterns replicate data from operational systems into analytical environments instantly, eliminating the fragility of nightly batch jobs. This is emerging as a new standard for applications that demand real-time visibility and trustworthy AI training data.

In #4: Conversational analytics and agentic BI

Dashboards are gradually losing their standing as the central tool inside the enterprise. Despite years of investment, actual adoption remains low while the number of dashboards keeps growing. Most business users do not want to hunt for insights buried in static charts; what they want is not mere visualization but clear answers, explanations and context.

Conversational analytics is filling that void. Generative BI systems let users describe the dashboard they want in plain language, or ask an agent to interpret the data directly. Instead of clicking through filters one by one, a user can request a quarterly performance summary or ask why a particular metric changed. Early natural-language text-to-SQL tools showed their limits because they focused on automating query writing. The latest wave is different: AI agents concentrate less on producing queries and more on synthesizing insights and generating visualizations as needed. They are evolving from simple query handlers into something closer to analysts that understand both the data and the business question.

In #5: Vector-native storage and open table formats

AI is changing the very requirements placed on storage. Retrieval-augmented generation (RAG) in particular depends on vector embeddings, which means databases must be able to store and process vector data as a native data type rather than through a separate extension. Vendors are racing to build vector capabilities directly into their data engines.

At the same time, Apache Iceberg is establishing itself as the new standard for open table formats. Iceberg lets a variety of compute engines work on the same data without duplication or separate transformation steps. It resolves much of the interoperability pain that has long plagued the industry and turns object storage into a true multi-engine foundation. With it, organizations can lay the groundwork to use their data reliably over the long term without rewriting everything each time the data ecosystem shifts.

Here is what is 'out' for data management in 2026.

Out #1: Legacy monolithic warehouses and hyper-decentralized tooling

Traditional data warehouses that pack every capability into one giant system struggle to handle unstructured data at scale and cannot fully deliver the real-time capabilities AI demands. But the opposite extreme has not turned out to be the answer either. The modern data stack scattered roles and responsibilities across countless small tools, making governance more complex and slowing AI readiness. Data mesh finds itself in a similar position: the principles of data ownership and distributed responsibility still carry meaning, but strict implementations of them are gradually losing momentum.

Out #2: Hand-coded ETL and custom connectors

Nightly batch scripts tend to fail without immediately revealing the problem, cause processing delays and steadily consume engineers' time. With replication tools and managed pipelines becoming the standard, the industry is rapidly moving away from these brittle workflows. Manual data plumbing that people wired up and maintained by hand is being replaced by orchestration that is always on and continuously monitored.

Out #3: Manual stewardship and passive catalogs

Having people review and manage data item by item is no longer a realistic option. Reactive cleanup after problems occur costs too much relative to what it delivers and rarely meets expectations. Passive, wiki-style data catalogs that merely list information are also fading. In their place, active metadata systems that continuously watch the state of data and automatically detect changes and anomalies are becoming essential.

Out #4: Static dashboards and one-way reporting

Dashboards that cannot answer follow-up questions frustrate users. What companies want is not a tool that simply displays results but an analytics environment that thinks with them. As experience with AI assistants raises business expectations, static reporting can no longer carry the load.

Out #5: On-premises Hadoop clusters

A Hadoop cluster is an open source big data environment that ties multiple servers together into a single system to store and process large volumes of data in a distributed fashion. Running it directly on premises, however, is steadily losing its appeal. Architectures that combine object storage with serverless compute offer clear advantages: greater elasticity, simpler operations and lower cost. The sprawling ecosystem of Hadoop services, made up of countless components, by contrast no longer fits the modern data landscape.

Data management in 2026 is centered on clarity. The market is turning away from fragmented architectures, manual intervention and analytics that fail to communicate. At the heart of the future are unified platforms, native governance, vector-native storage, conversational analytics and pipelines that run with minimal human involvement. AI is not replacing data management; it is rewriting the rules in a direction that rewards simplicity, openness and integrated design.
dl-ciokorea@foundryco.com

What’s in, and what’s out: Data management in 2026 has a new attitude

16 January 2026 at 07:00

The data landscape is shifting faster than most organizations can track. The pace of change is driven by two forces that are finally colliding productively: enterprise data management practices that are maturing and AI platforms that are demanding more coherence, consistency and trust in the data they consume.

As a result, 2026 is shaping up to be the year when companies stop tinkering on the edges and start transforming the core. What is emerging is a clear sense of what is in and what is out for data management, and it reflects a market that is tired of fragmented tooling, manual oversight and dashboards that fail to deliver real intelligence.

So, here’s a list of what’s “In” and what’s “Out” for data management in 2026:

IN: Native governance that automates the work but still relies on human process

Data governance is no longer a bolt-on exercise. Platforms like Unity Catalog, Snowflake Horizon and AWS Glue Catalog are building governance into the foundation itself. This shift is driven by the realization that external governance layers add friction and rarely deliver reliable end-to-end coverage. The new pattern is native automation. Data quality checks, anomaly alerts and usage monitoring run continuously in the background. They identify what is happening across the environment with speed that humans cannot match.

Yet this automation does not replace human judgment. The tools diagnose issues, but people still decide how severity is defined, which SLAs matter and how escalation paths work. The industry is settling into a balanced model. Tools handle detection. Humans handle meaning and accountability. It is a refreshing rejection of the idea that governance will someday be fully automated. Instead, organizations are taking advantage of native technology while reinforcing the value of human decision-making.

IN: Platform consolidation and the rise of the post-warehouse lakehouse

The era of cobbling together a dozen specialized data tools is ending. Complexity has caught up with the decentralized mindset. Teams have spent years stitching together ingestion systems, pipelines, catalogs, governance layers, warehouse engines and dashboard tools. The result has been fragile stacks that are expensive to maintain and surprisingly hard to govern.

Databricks, Snowflake and Microsoft see an opportunity and are extending their platforms into unified environments. The Lakehouse has emerged as the architectural north star. It gives organizations a single platform for structured and unstructured data, analytics, machine learning and AI training. Companies no longer want to move data between silos or juggle incompatible systems. What they need is a central operating environment that reduces friction, simplifies security and accelerates AI development. Consolidation is no longer about vendor lock-in. It is about survival in a world where data volumes are exploding and AI demands more consistency than ever.

IN: End-to-end pipeline management with zero ETL as the new ideal

Handwritten ETL is entering its final chapter. Python scripts and custom SQL jobs may offer flexibility, but they break too easily and demand constant care from engineers. Managed pipeline tools are stepping into the gap. Databricks Lakeflow, Snowflake Openflow and AWS Glue represent a new generation of orchestration that covers extraction through monitoring and recovery.

While there is still work to do in handling complex source systems, the direction is unmistakable. Companies want pipelines that maintain themselves. They want fewer moving parts and fewer late-night failures caused by an overlooked script. Some organizations are even bypassing pipelines altogether. Zero ETL patterns replicate data from operational systems to analytical environments instantly, eliminating the fragility that comes with nightly batch jobs. It is an emerging standard for applications that need real-time visibility and reliable AI training data.

IN: Conversational analytics and agentic BI

Dashboards are losing their grip on the enterprise. Despite years of investment, adoption remains low and dashboard sprawl continues to grow. Most business users do not want to hunt for insights buried in static charts. They want answers. They want explanations. They want context.

Conversational analytics is stepping forward to fill the void. Generative BI systems let users describe the dashboard they want or ask an agent to explain the data directly. Instead of clicking through filters, a user might request a performance summary for the quarter or ask why a metric changed. Early attempts at Text to SQL struggled because they attempted to automate the query writing layer. The next wave is different. AI agents now focus on synthesizing insights and generating visualizations on demand. They act less like query engines and more like analysts who understand both the data and the business question.

IN: Vector native storage and open table formats

AI is reshaping storage requirements. Retrieval Augmented Generation depends on vector embeddings, which means that databases must store vectors as first-class objects. Vendors are racing to embed vector support directly in their engines.

At the same time, Apache Iceberg is becoming the new standard for open table formats. It allows every compute engine to work on the same data without duplication or transformation. Iceberg removes a decade of interoperability pain and turns object storage into a true multi-engine foundation. Organizations finally get a way to future-proof their data without rewriting everything each time the ecosystem shifts.

And here’s what’s “Out”:

OUT: Monolithic warehouses and hyper-decentralized tooling

Traditional enterprise warehouses cannot handle unstructured data at scale and cannot deliver the real-time capabilities needed for AI. Yet the opposite extreme has failed too. The highly fragmented Modern Data Stack scattered responsibilities across too many small tools. It created governance chaos and slowed down AI readiness. Even the rigid interpretation of Data Mesh has faded. The principles live on, but the strict implementation has lost momentum as companies focus more on AI integration and less on organizational theory.

OUT: Hand-coded ETL and custom connectors

Nightly batch scripts break silently, cause delays and consume engineering bandwidth. With replication tools and managed pipelines becoming mainstream, the industry is rapidly abandoning these brittle workflows. Manual plumbing is giving way to orchestration that is always on and always monitored.

OUT: Manual stewardship and passive catalogs

The idea of humans reviewing data manually is no longer realistic. Reactive cleanup costs too much and delivers too little. Passive catalogs that serve as wikis are declining. Active metadata systems that monitor data continuously are now essential.

OUT: Static dashboards and one-way reporting

Dashboards that cannot answer follow-up questions frustrate users. Companies want tools that converse. They want analytics that think with them. Static reporting is collapsing under the weight of business expectations shaped by AI assistants.

OUT: On-premises Hadoop clusters

Maintaining on-prem Hadoop is becoming indefensible. Object storage combined with serverless compute offers elasticity, simplicity and lower cost. The complex zoo of Hadoop services no longer fits the modern data landscape.

Data management in 2026 is about clarity. The market is rejecting fragmentation, manual intervention and analytics that fail to communicate. The future belongs to unified platforms, native governance, vector native storage, conversational analytics and pipelines that operate with minimal human interference. AI is not replacing data management. It is rewriting the rules in ways that reward simplicity, openness and integrated design.

This article is published as part of the Foundry Expert Contributor Network.

How analytics capability has quietly reshaped IT operations

13 January 2026 at 07:15

As CIOs enter 2026 anticipating change and opportunity, it is worth looking back at how 2025 reshaped IT operations in ways few expected.

In 2025, IT operations crossed a threshold that many organizations did not fully recognize at the time. While attention remained fixed on AI, automation platforms and next-generation tooling, the more consequential shift occurred elsewhere. IT operations became decisively shaped by analytics capability, not as a technology layer, but as an organizational system that governs how insight is created, trusted and embedded into operational decisions at scale.

This distinction matters. Across 2025, a clear pattern emerged. Organizations that approached analytics largely as a set of tools often found it difficult to translate operational intelligence into material performance gains. Those that focused more explicitly on analytics capability, spanning governance, decision rights, skills, operating models and leadership support, tended to achieve stronger operational outcomes. The year did not belong to the most automated IT functions. It belonged to the most analytically capable ones.

The end of tool-centric IT operations

One of the clearest lessons of 2025 was the diminishing return of tool-centric IT operations strategies. Most large organizations now possess advanced monitoring and observability platforms, AI-driven alerting and automation capabilities. Yet despite this maturity, CIOs continued to report familiar challenges such as alert fatigue and poor prioritization, along with difficulty turning operational data into decisions and actions.

The issue was not a lack of data or intelligence. It was the absence of an organizational capability to turn operational insight into coordinated action. In many IT functions, analytics outputs existed in dashboards and models but were not embedded in decision forums or escalation pathways. Intelligence was generated faster than the organization could absorb it.

2025 made one thing clear. Analytics capability, not tooling, has become the primary constraint on IT operations performance.

A shift from monitoring to decision-enablement

Up until recently, the focus of IT operations analytics was on visibility. Success was defined by how comprehensively systems could be monitored and how quickly anomalies could be detected. In 2025, leading organizations moved beyond visibility toward decision-enablement.

This shift was subtle but profound. High-performing IT operations teams did not ask, “What does the data show?” They asked, “What decisions should this data change?” Analytics capability matured where insight was explicitly linked to operational choices such as incident triage, capacity investment decisions, vendor escalation, technical debt prioritization and resilience trade-offs.

Crucially, this required clarity on decision ownership. Analytics that is not anchored to named decision-makers and decision rights rarely drives action. In 2025, the strongest IT operations functions formalized who decides what, at what threshold and with what analytical evidence. This governance layer, not AI sophistication, proved decisive.

AI amplified weaknesses as much as strengths

AI adoption accelerated across IT operations in 2025, particularly in areas such as predictive incident management, root cause analysis and automated remediation. But AI did not uniformly improve outcomes. Instead, it amplified existing capability strengths and weaknesses.

Where analytics capability was mature, AI enhanced the speed, scale and consistency of operational decisions and actions. Where it was weak, AI generated noise, confusion and misplaced confidence. Many CIOs observed that AI-driven insights were either ignored or over-trusted, with little middle ground. Both outcomes reflected capability gaps, not model limitations.

The lesson from 2025 is that AI does not replace analytics capability in IT operations. It exposes it. Organizations lacking strong decision governance, data ownership and analytical literacy found themselves overwhelmed by AI-enabled systems they could not effectively operationalize.

Operational analytics became a leadership issue

Another defining shift in 2025 was the elevation of IT operations analytics from a technical concern to a leadership concern. In high-performing organizations, senior IT leaders became actively involved in shaping how operational insight was used, not just how it was produced.

This involvement was not about reviewing dashboards. It was about setting expectations for evidence-based operations, reinforcing analytical discipline in incident reviews and insisting that investment decisions be grounded in operational data rather than anecdote. Where leadership treated analytics as the basis for operational decisions, IT operations matured rapidly.

Conversely, where analytics remained delegated entirely to technical teams, its influence plateaued. 2025 demonstrated that analytics capability in IT operations is inseparable from leadership behavior.

From reactive optimization to systemic learning

Perhaps the most underappreciated development of 2025 was the shift from reactive optimization to systemic learning in IT operations. Traditional operational analytics often focused on fixing the last incident or improving the next response. Leading organizations used analytics to identify structural patterns such as recurring failures, architectural bottlenecks, process debt and skill constraints.

This required looking beyond individual incidents to learn from issues over time and build organizational memory. These capabilities cannot be automated. IT operations teams that invested in them moved from firefighting to foresight, using analytics not only to respond faster, but to design failures out of the IT operating environment.

In 2025, resilience became less about redundancy and more about learning velocity.

The new role of the CIO in IT operations analytics

By the end of 2025, the CIO’s role in IT operations analytics had subtly but decisively changed. AI forced a shift from sponsorship to stewardship. The CIO was no longer simply the sponsor of tools or platforms. Increasingly, they became the architect of the organizational conditions that allow analytics to shape operations meaningfully.

This included clarifying decision hierarchies, aligning incentives with analytical outcomes, investing in analytical skills across operations teams and protecting time for reflection and improvement. CIOs who embraced this role saw analytics scale naturally across IT operations. Those who did not often saw impressive pilots fail to translate into everyday practice.

The defining lesson of 2025

Looking back, 2025 was not the year IT operations became intelligent. It was the year intelligence became operationally consequential, where analytics capability determined whether insight changed behavior or remained aspirational.

The organizations that quietly advanced their IT operations this year did so by strengthening the organizational systems that govern how insight becomes action. Operational intelligence only creates value when organizations are capable of deciding what takes precedence, when to intervene operationally and where to commit resources for the future.

What to expect in 2026: When analytics capability becomes non-optional

While 2025 marked the consolidation of analytics capability in IT operations, 2026 will likely be the year analytics capability becomes non-optional across IT operations. As AI and automation continue to advance, the gap between analytically capable IT operations teams and those where analytics capability is lacking will widen, not because of technology, but because of how effectively organizations convert intelligence into action.

Decision latency emerges as a core operational risk

By 2026, decision speed will replace operational visibility as the dominant constraint on IT operations. As analytics and AI generate richer, more frequent insights, organizations without clear decision rights, escalation thresholds and evidence standards will struggle to respond coherently. In many cases, delays and conflicting interventions will cause more disruption than technology failures themselves. Leading IT operations teams will begin treating decision latency as a measurable operational risk.

AI exposes capability gaps rather than closing them

AI adoption will continue to accelerate across IT operations in 2026, but its impact will remain uneven. Where analytics capability is strong, AI will enhance decision speed and organizational learning. Where it is weak, AI will amplify confusion or analysis paralysis. The differentiator will not be model sophistication, but the organization’s ability to govern decisions, knowing when to trust automated insight, when to challenge it and who is accountable for outcomes.

Analytics becomes a leadership discipline

In 2026, analytics in IT operations will become even more of a leadership expectation than a technical activity. CIOs and senior IT leaders will be judged less on the tools they sponsor and more on how consistently operational decisions are grounded in evidence. Incident reviews, investment prioritization and resilience planning will increasingly be evaluated by the quality of analytical reasoning applied, not just the results achieved.

Operational insight shapes system design

Leading IT operations teams will move analytics upstream in 2026, from improving response and recovery to shaping architecture and design. Longitudinal operational data will increasingly inform platform choices, sourcing decisions and resilience trade-offs across cost, risk and availability. This marks a shift from reactive optimization to evidence-led system design, where analytics capability influences how IT environments are built, not just how they are run.

The future of IT operations will not be shaped by smarter systems alone, but by organizations that can consistently turn intelligence into decisions and actions. Without analytics capability, this remains ad hoc, inconsistent and ultimately ineffective.

This article is published as part of the Foundry Expert Contributor Network.

Microsoft acquires Osmos to ease data engineering bottlenecks in Fabric

8 January 2026 at 00:25

Microsoft has acquired Osmos, an AI-driven data engineering company, for an undisclosed sum. The acquisition is part of a strategy to reduce data engineering friction in Microsoft Fabric, the company's unified data and analytics platform, and a response to the growing number of analytics and AI projects moving into real-world use.

Osmos's technology applies agentic AI to turn raw data inside OneLake into assets that are ready for analytics and AI. In a blog post, Bogdan Crivat, vice president of Azure Data Analytics at Microsoft, explained that this helps companies avoid the common trap of spending more time preparing data than actually analyzing it.

Roy Hasson, a senior director of product at Microsoft, noted in a separate social media post that Osmos launched AI data wranglers and AI data engineering agents as native apps on Fabric roughly two years ago and that they quickly gained popularity.

"Customers responded very positively to using Osmos on top of Fabric Spark, and we quickly saw that it cut the effort going into development and maintenance by more than 50%," Hasson said.

Before the acquisition, Osmos offered Data Agents for Microsoft Fabric, Data Agents for Databricks and the Osmos AI Assist suite (Uploader, Pipelines, Datasets). The company described these as a collection of AI-powered data ingestion and engineering tools that automate the work of bringing complex, messy external data into operational systems with minimal manual effort or coding.

What does Microsoft's acquisition of Osmos mean for enterprises?

Microsoft has not yet disclosed a concrete product roadmap for how it will integrate Osmos technology within Fabric. Industry analysts, however, expect the integration to benefit both CIOs and development teams.

From a CIO's perspective, Robert Kramer, principal analyst at Moor Insights & Strategy, said the acquisition could improve operational efficiency and shorten time to value for analytics and AI initiatives, with the effect most pronounced where data engineering staff and budgets are limited.

Stephanie Walter, who leads AI stack coverage at HyperFRAME Research, pointed to another advantage: the ability to implement data engineering automation that is controllable, reversible and designed with auditing in mind. "As AI moves beyond experimentation and spreads across the enterprise, controlled automation is becoming essential to maintaining reliability and compliance," Walter said.

Kramer, in contrast to Walter's assessment, warned that relying on Osmos technology for data engineering inside Fabric could increase platform dependency. As that dependency grows, he said, governance and risk questions can arise around certifying agentic pipelines, auditing and rolling back changes, autonomous data engineering, and alignment with regulatory and compliance requirements.

Less repetitive engineering work

From a developer's perspective, the acquisition could also translate into productivity gains. Kramer explained that it is likely to reduce the repetitive, low-value engineering work involved in handling complex, messy data.

"Agents can take on tasks like data cleanup, mapping inconsistent external data sources, scaffolding pipelines and writing Spark-style transformation code instead of people doing it by hand. Engineers can then focus on architecture design, performance optimization, data quality management and guardrail design," he said.

"The development lifecycle can also shift toward reviewing, testing and hardening the pipelines and transformations that AI generates. In that process, visibility, approval workflows and the ability to roll back changes will become core design requirements," he added.

Synergy with recently added Fabric features

Analysts expect the Osmos acquisition to complement Fabric's recent feature improvements.

"With Fabric recently expanding through the introduction of IQ, new databases and stronger OneLake integration, the problem is no longer getting access to data but preparing it so it is ready for analytics and AI. Osmos closes that gap by automating data ingestion, transformation and restructuring directly within the Fabric environment," Walter explained.

"From a Fabric IQ perspective, Osmos helps ensure that the data feeding the semantic and reasoning layers stays consistently clean and stable even as upstream data sources change. Semantic systems only deliver when the data remains steady and understandable, and Osmos focuses on removing the operational friction that gets in the way," she added.

What about Osmos products and existing customers?

The acquisition is not good news for every Osmos customer, however. Within January, Osmos plans to discontinue standalone availability of its three products: Data Agents for Microsoft Fabric, Data Agents for Databricks and the Osmos AI Assist suite. That means Osmos technology will, for the time being, be available only inside Fabric. Customers who have been using the Databricks product or the AI Assist suite will need to look for alternatives or consider switching to the Microsoft Fabric-based service.
dl-ciokorea@foundryco.com

What is NVIDIA’s CUDA and How is it Used in Cybersecurity?

By: OTW
17 November 2025 at 17:09

Welcome back my aspiring cyberwarriors!

You have likely heard of the company NVIDIA. Not only are they the dominant company in computer graphics adapters (if you are a gamer, you likely have one), they are now at the center of artificial intelligence. In recent weeks, they have become the most valuable company in the world, with a market value of roughly $5 trillion.

The two primary reasons that Nvidia has become so important to artificial intelligence are:

  1. Nvidia chips can process data in multiple threads, in some cases thousands of threads. This makes it possible to run complex calculations in parallel, which makes them much faster.
  2. Nvidia created a development environment named CUDA for harnessing the power of these powerful GPUs. This development environment is a favorite among artificial intelligence, data analytics, and cybersecurity professionals.

Let’s take a brief moment to examine this powerful environment.

What is CUDA?

Most computers have two main processors:

CPU (Central Processing Unit): General-purpose, executes instructions sequentially or on a small number of cores. CPUs such as those from Intel and AMD provide the flexibility to run many different applications on your computer.

GPU (Graphics Processing Unit): GPUs were originally designed to draw graphics for applications such as games and VR environments. They contain hundreds or thousands of small cores that excel at doing the same thing many times in parallel.

CUDA (Compute Unified Device Architecture) is NVIDIA’s framework that lets you take control of the GPU for general computing tasks. In other words, CUDA lets you write code that doesn’t just render graphics—it crunches numbers at massive scale. That’s why it’s a favorite for machine learning, password cracking, and scientific computing.

Why Should Hackers & Developers Care?

CUDA matters as an important tool in your cybersecurity toolkit because:

Speed: A GPU can run password hashes or machine learning models orders of magnitude faster than a CPU.

Parallelism: If you need to test millions of combinations, analyze huge datasets, or simulate workloads, CUDA gives you raw power.

Applications in Hacking: Tools like Hashcat and Pyrit use CUDA to massively accelerate brute-force and dictionary attacks. Security researchers who understand CUDA can customize or write their own GPU-accelerated tools.

The CUDA environment sees the GPU as a device with:

Threads: The smallest execution unit (like a tiny worker).

Blocks: Groups of threads.

Grids: Groups of blocks.

Think of it like this:

  • A CPU is like a single cook who prepares one meal at a time.
  • A GPU is like a kitchen with thousands of cooks: we split the work (threads), organize the cooks into brigades (blocks), and assign the whole team to the job (grid). The short sketch below shows how a kernel turns this structure into a unique index for each thread.
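
Here is a minimal sketch, assuming a one-dimensional grid, of how the built-in variables combine inside a kernel. The kernel name whoami and the output array are illustrative choices, not part of any standard API:

// Each thread computes a unique global index from its block and thread IDs.
// blockDim.x  = threads per block, blockIdx.x = which block we are in,
// threadIdx.x = our position within that block.
__global__ void whoami(int *out, int n) {
    int idx = blockIdx.x * blockDim.x + threadIdx.x;   // unique "cook" number
    if (idx < n)            // guard against extra threads in the last block
        out[idx] = idx;     // each worker records its own ID
}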

Coding With CUDA

CUDA extends C/C++ with some keywords.
Here’s the simple workflow:

  1. You write a kernel function (runs on the GPU).
  2. You call it from the host code (the CPU side).
  3. You launch thousands of threads in parallel, and the GPU executes them fast.

Example skeleton code:

__global__ void add(int *a, int *b, int *c) {
    int idx = threadIdx.x;
    c[idx] = a[idx] + b[idx];
}

int main() {
    // Allocate memory on host and device
    // Copy data to GPU
    // Run kernel with N threads
    add<<<1, N>>>(dev_a, dev_b, dev_c);
    // Copy results back to host
}

The keywords:

  • __global__ → A function (kernel) run on the GPU.
  • threadIdx → Built-in variable identifying which thread you are.
  • <<<1, N>>> → Tells CUDA to launch 1 block of N threads.

This simple example adds two arrays in parallel. Imagine scaling this to millions of operations at once!
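
If you want something you can actually compile and run, here is a fuller sketch of the same vector addition with the host-side steps filled in. This is an illustrative version rather than code from any particular toolkit sample; the array size N, the sample values and the printf check are arbitrary choices:

#include <stdio.h>
#include <cuda_runtime.h>

#define N 256

__global__ void add(int *a, int *b, int *c) {
    int idx = threadIdx.x;                 // one thread per array element
    c[idx] = a[idx] + b[idx];
}

int main() {
    int a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }

    // Allocate memory on the device (GPU)
    int *dev_a, *dev_b, *dev_c;
    cudaMalloc((void**)&dev_a, N * sizeof(int));
    cudaMalloc((void**)&dev_b, N * sizeof(int));
    cudaMalloc((void**)&dev_c, N * sizeof(int));

    // Copy input data from the host to the GPU
    cudaMemcpy(dev_a, a, N * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(dev_b, b, N * sizeof(int), cudaMemcpyHostToDevice);

    // Run the kernel with 1 block of N threads
    add<<<1, N>>>(dev_a, dev_b, dev_c);

    // Copy the results back to the host and clean up
    cudaMemcpy(c, dev_c, N * sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(dev_a); cudaFree(dev_b); cudaFree(dev_c);

    printf("c[10] = %d (expected 30)\n", c[10]);
    return 0;
}

Save it as something like add.cu, compile it with nvcc add.cu -o add, and run the binary; each element of c is computed by its own GPU thread.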

The CUDA Toolchain Setup

If you want to try CUDA, make certain you have the following:

1. An NVIDIA GPU.

2. The CUDA Toolkit (which contains the compiler nvcc).

Then write your CUDA programs in C/C++, compile them with nvcc, run them, and watch your GPU chew through problems.

To install the CUDA toolkit in Kali Linux, simply enter:

kali > sudo apt install nvidia-cuda-toolkit

Next, write your code and compile it with nvcc, for example:

kali > nvcc hackersarise.cu -o hackersarise

Practical Applications of CUDA

CUDA is already excelling at hacking and computing applications such as:

  1. Password cracking (Hashcat, John the Ripper with GPU support).
  2. AI & ML (TensorFlow/PyTorch use CUDA under the hood). Our application of using Wi-Fi to see through walls uses CUDA.
  3. Cryptanalysis (breaking encryption) & simulation tasks.
  4. Network packet analysis at high scale.

As a beginner, start with small projects—then explore how to take compute-heavy tasks and offload them to the GPU.
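
As one example of such a small project, here is an illustrative sketch that brute-forces a single-byte XOR key in parallel: 256 threads each test one candidate key against a ciphertext whose first few plaintext bytes are known. It is a toy rather than a real cracking tool, and the sample plaintext and the key 0x5A are made up for the demonstration:

#include <stdio.h>
#include <cuda_runtime.h>

// Each of the 256 threads tests one candidate key against the known plaintext prefix.
__global__ void xor_bruteforce(const unsigned char *cipher,
                               const unsigned char *known_plain,
                               int known_len, int *found_key) {
    int key = threadIdx.x;                               // candidate key = thread ID (0-255)
    for (int i = 0; i < known_len; i++) {
        if ((cipher[i] ^ key) != known_plain[i]) return; // mismatch: this key is wrong
    }
    *found_key = key;                                    // every known byte matched
}

int main() {
    const unsigned char plain[] = "hackers-arise";
    const int len = sizeof(plain) - 1;
    unsigned char cipher[32];
    for (int i = 0; i < len; i++) cipher[i] = plain[i] ^ 0x5A;  // encrypt with the "unknown" key

    unsigned char *d_cipher, *d_known;
    int *d_found, found = -1;
    cudaMalloc((void**)&d_cipher, len);
    cudaMalloc((void**)&d_known, 4);
    cudaMalloc((void**)&d_found, sizeof(int));
    cudaMemcpy(d_cipher, cipher, len, cudaMemcpyHostToDevice);
    cudaMemcpy(d_known, plain, 4, cudaMemcpyHostToDevice);      // first 4 bytes are "known"
    cudaMemcpy(d_found, &found, sizeof(int), cudaMemcpyHostToDevice);

    xor_bruteforce<<<1, 256>>>(d_cipher, d_known, 4, d_found);  // all 256 keys tested at once

    cudaMemcpy(&found, d_found, sizeof(int), cudaMemcpyDeviceToHost);
    printf("recovered key: 0x%02X\n", found);                   // should print 0x5A
    cudaFree(d_cipher); cudaFree(d_known); cudaFree(d_found);
    return 0;
}

Real tools like Hashcat apply the same idea to cryptographic hash functions at vastly larger scale, but the pattern is identical: one thread per candidate, millions of candidates in flight.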

Summary

CUDA is NVIDIA’s way of letting you program GPUs for general-purpose computing. To the hacker or cybersecurity pro, it’s a way to supercharge computation-heavy tasks.

Learn the thread-block-grid model, write simple kernels, and then think: what problems can I solve dramatically faster if run in parallel?


Data Dilemmas: Balancing Privacy Rights in the Age of Big Tech

By: galidon
22 July 2025 at 02:11

The world is becoming increasingly digital and, whilst this is a good thing for a number of reasons, the shift brings with it questions and scrutiny as to what exactly the biggest tech companies are doing with such vast amounts of data.

The leading tech companies – Google, Apple, Meta, Amazon and Microsoft, the giants of the tech world – have all recently been accused of unethical practices.

From Meta being questioned in courts over its advertising regime, to Amazon facing concerns over the fact that their Echo devices are potentially recording private conversations within the home, it’s not surprising that users are looking for more information as to how their data is being used.

With this comes the counterargument that big tech companies are doing what they can to strike a balance between privacy rights and ensuring that their products, and the experience users get from them, don’t change. But how exactly are the big tech companies using sensitive and personal data while still adhering to an ever-expanding list of privacy rights? Let’s take a look.

Is Our Data The Price We Pay For Free?

In marketplaces and stores, we exchange legitimate currency for goods and services. But, with social media and other online platforms, we’re instead paying with our attention. A lot of online users are unaware of the expansive trail of browsing and search history that they leave behind.

Almost everything is logged and monitored online, right from the very first interaction, and depending on the web browser you use, some will collect more information than others. There are costs involved in almost every digital and online service we use, and it costs money to host servers and sites – so why do we get to browse for free?

Simply because the cost is being underwritten in other ways. The most common form is advertising, but the one that few people think about, or want to think about, is the harvesting and use of our data. Activity on every single website is tracked or recorded in different ways and by different parties, from marketing agencies who analyze the performance of a website to broadband providers who check connections.

Users may struggle to understand why companies want their data, but that’s simply because they don’t quite understand the value behind it. Data is currently considered one of the most valuable assets, mainly because it is a non-rival good: it can be replicated for free and with little to no impact on its quality. That nature means data can be used for product research, market analysis or to train and better inform AI systems. Companies want as much data as they can get because of the financial incentives and legal rights that come with holding it.

What Are Cookies?

Much of this data tracking is done through cookies, which are small files of letters and numbers downloaded onto your computer when you visit a website. They are used by almost all websites for a number of reasons, such as remembering your browsing preferences, keeping a record of what you’ve added to your shopping basket or counting how many people visit the site. Cookies are why you might see ads online months after visiting a website, or get emails when you’ve left something in a shopping basket online.


How Laws Have Changed How Companies Use Your Data

In the EU, data is more heavily protected than it is in the US, for example. EU laws have taken a more hardline stance against the big tech companies when it comes to protecting users, with the General Data Protection Regulation, or GDPR, in place to offer the “toughest privacy and security law in the world”.

This law makes it compulsory for companies, particularly big tech companies, to outline specifically what they are using data for. The law was passed in 2016 and any company which violates it is subject to fines of either 4% of the company’s global annual revenue or €20 million, whichever is greater. In 2019, Google was fined around €50 million (roughly $57 million) by France’s data protection authority for GDPR violations relating to transparency and consent.

Unlike the EU, the US does not have comprehensive laws to protect online users, which is what allows these companies to collect data and take advantage of it. Following the EU’s introduction of GDPR, both Facebook and Google had to change and update their privacy policies, but in the US, there is still some way to go.

This is partly because Google makes a lot of money from user data. Over 80% of Google’s revenue comes from the advertising side of its business, which allows advertisers to target ads for services and products based on what users are searching for, with this information gathered by Google. Google is the largest search engine in the world, so all of this user data quickly adds up. It’s been said that “Google sells the data that they collect so the ads can be better suited to users’ interests.”

Advertisers also make use of Google Analytics data, a service that gives companies insight into their website activity by tracking the users who land on their sites. A few years ago, there were concerns that Google Analytics had wrongly given U.S. intelligence agencies access to data from French users, and that Google hadn’t done enough to ensure privacy when this data was transferred between the US and Europe.

Reasons Why Big Tech Companies Want Your User Data

  • Social media apps want information on how you use their platform in order to give you content that you actually want. TikTok in particular builds a customised, personalised algorithm that tries to show you videos you will actually engage with, based on the ads and content you have previously watched and engaged with, so that you stay on the app for longer.
  • Big tech companies will be interested in your data so that they can show you relevant ads. Most of the big tech companies make a lot of money through advertising on their platform, so they want to ensure that they keep advertisers happy by showing their services or products to the consumers who are more likely to convert.
  • Your data will be used to personalise your browsing and platform experience to keep you coming back.

How Is Data Collection Changing?

One of the biggest reasons companies use your data is to serve you better when you are online. But for big tech companies, the reasons are often very different. With more and more people relying on technology provided by the likes of Google, Apple, Microsoft and Amazon, these companies need to be more reliable and be held more accountable so that the rights of consumers are protected.

Technologies such as AI and cryptocurrency are becoming increasingly common, and with them comes an increased risk of scams and fraud, such as the recent Hyperverse case. It is more important now than ever for these companies to put users’ minds at ease and improve their privacy protections.


The post Data Dilemmas: Balancing Privacy Rights in the Age of Big Tech first appeared on Information Technology Blog.

Worry-free Pentesting: Continuous Oversight In Offensive Security Testing

6 December 2022 at 06:00

In your cybersecurity practice, do you ever worry that you’ve left your back door open and an intruder might sneak inside? If you answered yes, you’re not alone. The experience can be a common one, especially for security leaders of large organizations with multiple layers of tech and cross-team collaboration to accomplish live, continuous security workflows.

At Synack, the better way to pentest is one that’s always on, can scale to test for urgent vulnerabilities or compliance needs, and provides transparent, thorough reporting and coverage insight.

Know what’s being tested, where it’s happening and how often it’s occurring 

With Synack365, our Premier Security Testing Platform, you can find relief in the fact that we’re always checking for unlocked doors. To provide better testing oversight, we maintain reports that list all web assets being tested, which our customers have praised. Customer feedback indicated that adding continuous oversight of host assets would also help customers know which host or web assets are being tested, when and where they’re being tested, and how much testing has occurred.

Synack’s expanded Coverage Analytics tells you all that and more for host assets, in addition to our previous coverage details on web applications and API endpoints, all found within the Synack platform. With Coverage Analytics, Synack customers are able to identify which web or host assets have been tested and the nature of the testing performed. This is helpful for auditing purposes and provides proof of testing activity, not just that an asset is in scope. Additionally, Coverage Analytics gives customers an understanding of areas that haven’t been tested as heavily for vulnerabilities and can provide internal red team leaders with direction for supplemental testing and prioritization. 

Unmatched Oversight of Coverage 

Other forms of security testing are unable to provide the details and information Synack Coverage Analytics does. Bug bounty testing typically goes through the untraceable public internet or via tagged headers, which require security researcher cooperation. The number of researchers and hours that they are testing are not easily trackable via these methods, if at all. Traditional penetration testing doesn’t have direct measurement capabilities. Our LaunchPoint infrastructure stands between the Synack Red Team, our community of 1,500 security researchers, and customer assets, so customers have better visibility of the measurable traffic during a test. More and more frequently, we hear that customers are required to provide this kind of information to their auditors in financial services and other industries. 

A look at the Classified Traffic & Vulnerabilities view in Synack’s Coverage Analytics. Sample data has been used for illustration purposes.

Benefits of Coverage Analytics 

  • Know what’s being tested within your web and host assets: where, when and how much 
  • View the traffic generated by the Synack Red Team during pentesting
  • Take next steps with confidence; identify where you may need supplemental testing and how to prioritize such testing

Starting today, security leaders can reduce their teams’ fears of pentesting in the dark by knowing what’s being tested, where and how much at any time across both web and host assets. Coverage Analytics makes sharing findings with executive leaders, board members or auditors simple and painless.

Current Synack customers can log in to the Synack Platform to explore Coverage Analytics today. If you have questions or are interested in learning more about Coverage Analytics, part of Synack’s Better Way to Pentest, don’t hesitate to contact us today!

The post Worry-free Pentesting: Continuous Oversight In Offensive Security Testing appeared first on Synack.

The Case for Integrating Dark Web Intelligence Into Your Daily Operations

30 January 2020 at 09:00

Some of the best intelligence an operator or decision-maker can obtain comes straight from the belly of the beast. That’s why dark web intelligence can be incredibly valuable to your security operations center (SOC). By leveraging this critical information, operators can gain a better understanding of the tactics, techniques and procedures (TTPs) employed by threat actors. With that knowledge in hand, decision-makers can better position themselves to protect their organizations.

This is in line with the classic teachings from Sun Tzu about knowing your enemy, and the entire passage containing that advice is particularly relevant to cybersecurity:

“If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.”

Let’s translate the middle section of this passage into colloquial cybersecurity talk: You can have the best security operations center in the world with outstanding cyber hygiene, but if you aren’t feeding it the right information, you may suffer defeats — and much of that information comes from dark web intelligence.

Completing Your Threat Intelligence Picture

To be candid, if you’re not looking at the dark web, there is a big gap in your security posture. Why? Because that’s where a lot of serious action happens. To paraphrase Sir Winston Churchill, the greatest defense against a cyber menace is to attack the enemy’s operations as near as possible to the point of departure.

Now, this is not a call to get too wrapped up in the dark web. Rather, a solid approach would be to go where the nefarious acts are being discussed and planned so you can take the appropriate proactive steps to prevent an attack on your assets.

The first step is to ensure that you have a basic understanding of the dark web. One common way to communicate over the dark web involves using anonymity networks such as Tor and I2P (the Invisible Internet Project). In short, both networks are designed to provide secure communications and hide all types of information. Yes, this is only a basic illustration of dark web communications, but if your security operations center aims to improve its capabilities in the dark web intelligence space, you must be able to explain the dark web in these simple terms for two reasons:

  1. You cannot access these sites as you would any other website.
  2. You’re going to have to warn your superiors what you’re up to. The dark web is an unsavory place, full of illegal content. Your decision-makers need to know what will be happening with their assets at a high level, which makes it vitally important to speak their language.

And this part is critical: If you want to get the most out of dark web intelligence, you may have to put on a mask and appear to “be one of the bad guys.” You will need to explain to your decision-makers why full-time staff might have to spend entire days as someone else. This is necessary because when you start searching for granular details related to your organization, you may have to secure the trust of malicious actors to gain entry into their circles. That’s where the truly rich intelligence is.

This could involve transacting in bitcoins or other cryptocurrencies, stumbling upon things the average person would rather not see, trying to decipher between coded language and broken language, and the typical challenges that come with putting up an act — all so you can become a trusted persona. Just like any other relationship you develop in life, this doesn’t happen overnight.

Of course, there are organizations out there that can provide their own “personas” for a fee and do the work for you. Using these services can be advantageous for small and medium businesses that may not have the resources to do all of this on their own. But the bigger your enterprise is, the more likely it becomes that you will want these capabilities in-house. In general, it’s also a characteristic of good operational security to be able to do this in-house.

Determining What Intelligence You Need

One of the most difficult challenges you will face when you decide to integrate dark web intelligence into your daily operations is figuring out what intelligence could help your organization. A good start is to cluster the information you might collect into groups. Here are some primer questions you can use to develop these groups:

  • What applies to the cybersecurity world in general?
  • What applies to your industry?
  • What applies to your organization?
  • What applies to your people?

For the first question, there are plenty of service providers who make it their business to scour the dark web and collect such information. This is an area where it may make more sense to rely on these service providers and integrate their knowledge feeds into existing ones within your security operations center. With the assistance of artificial intelligence (AI) to manage and make sense of all these data points, you can certainly create a good defensive perimeter and take remediation steps if you identify gaps in your network.

It’s the second, third and fourth clusters that may require some tailoring and additional resources. Certain service providers can provide industry-specific dark web intelligence — and you would be wise to integrate that into your workflow — but at the levels of your organization and its people, you will need to do the work on your own. Effectively, you would be doing human intelligence work on the dark web.

Why Human Operators Will Always Be Needed

No matter how far technological protections advance, when places like the dark web exist, there will always be the human element to worry about. We’re not yet at the stage where machines are deciding what to target — it’s still humans who make those decisions.

Therefore, having top-level, industrywide information feeds can be great and even necessary, but it may not be enough. You need to get into the weeds here because when malicious actors move on a specific target, that organization has to play a large role in protecting itself with specific threat intelligence. A key component of ensuring protections are in place is knowing what people are saying about you, even on the dark web.

As Sun Tzu said: “If you know the enemy and know yourself, you need not fear the result of a hundred battles.” There’s a lot of wisdom in that, even if it was said some 2,500 years ago.

The post The Case for Integrating Dark Web Intelligence Into Your Daily Operations appeared first on Security Intelligence.
