
One Identity Unveils Major Upgrade to Identity Manager, Strengthening Enterprise Identity Security

One Identity, a trusted leader in identity security, today announces a major upgrade to One Identity Manager, a top-rated IGA solution, strengthening identity governance as a critical security control for modern enterprise environments. 

One Identity Manager 10.0 introduces security-driven capabilities for risk-based governance, identity threat detection and response (ITDR), and AI-assisted insight, helping organizations better anticipate, contain, and manage identity-driven attacks across their complex IT ecosystems. 

For more than a decade, Identity Manager has served as a proven foundation for securing and governing identities at scale across some of the world’s largest and most complex environments. Version 10.0 builds on that foundation with a modernized experience, deeper integrations, and embedded intelligence that gives security teams clear visibility, stronger control, and more efficient execution across governance workflows.  

New capabilities include enhanced risk management integrations that allow organizations to ingest and act on user risk scores from third-party analytics and UEBA tools. Newly introduced ITDR playbooks automate key remediation actions such as disabling accounts, flagging security incidents, and launching targeted attestation. Together, these capabilities help organizations shorten the window between detection and action when identity threats emerge. 

The release also introduces a modern, browser-based interface that delivers full administrative functionality without desktop installation. AI-assisted reporting, powered by a secure, customer-controlled large language model, enables authorized users to query identity data in natural language, reducing reliance on complex SQL and accelerating insights for audits, reviews, and compliance.  

Enhanced SIEM compatibility through standards-based Syslog CEF formatting improves interoperability with modern security monitoring platforms. This helps security teams connect identity governance more seamlessly into broader security operations. 
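For illustration, a CEF (Common Event Format) message is a pipe-delimited header — `CEF:version|vendor|product|version|signatureID|name|severity|` — followed by space-separated key=value extensions, with pipes escaped in the header and equals signs escaped in values. The sketch below builds such a line; the vendor, product, and field values are invented for the example, not taken from Identity Manager's actual output.

```python
def cef_event(vendor, product, version, signature_id, name, severity, extensions):
    """Build a CEF:0 syslog payload from header fields and an extension dict."""
    def esc_header(s):
        # Backslashes and pipes must be escaped inside header fields
        return str(s).replace("\\", "\\\\").replace("|", "\\|")

    def esc_ext(s):
        # Backslashes and equals signs must be escaped inside extension values
        return str(s).replace("\\", "\\\\").replace("=", "\\=")

    header = "|".join(esc_header(f) for f in
                      ("0", vendor, product, version, signature_id, name, severity))
    ext = " ".join("{}={}".format(k, esc_ext(v)) for k, v in extensions.items())
    return "CEF:" + header + "|" + ext

# Hypothetical governance event
msg = cef_event("OneIdentity", "IdentityManager", "10.0", "4001",
                "Attestation started", "3",
                {"suser": "jdoe", "act": "start attestation"})
```

A line like this can then be shipped over standard syslog transport to any CEF-aware SIEM.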

“One Identity Manager 10.0 is a major upgrade that strengthens identity governance as a critical security component for protecting enterprise environments,” said Praerit Garg, CEO of One Identity. “Organizations today face relentless identity-driven threats. This release combines a proven governance foundation with intelligence, automation, and usability that help security teams detect risk earlier, take decisive action, and operate at scale with confidence.”

“One Identity Manager 10.0 represents a significant change in identity governance for large-scale use,” said Ciro Guariglia, CTO of Intragen by Nomios. “The platform improves the data model and automation engine, while bringing in a more scalable, policy-driven method for attestations. This change makes large certification campaigns easier to manage, instead of burdening administrators and the system.”  

With Identity Manager 10.0, One Identity continues advancing identity security as a central pillar of enterprise defense, helping organizations strengthen protection, reduce exposure, and support secure business operations in complex environments. 

About One Identity 

One Identity delivers trusted identity security for enterprises worldwide to protect and simplify access to digital identities. With flexible deployment options and subscription terms – from self-managed to fully managed – our solutions integrate seamlessly into your identity fabric to strengthen your identity perimeter, protect against breaches and ensure governance and compliance. Trusted by more than 11,000 organizations managing over 500 million identities, One Identity is a leader in identity governance and administration (IGA), privileged access management (PAM), and access management (AM) for security without compromise.

Learn more at www.oneidentity.com.

Contact

Liberty Pike

One Identity LLC

liberty.pike@oneidentity.com

Why I’m withholding certainty that “precise” US cyber-op disrupted Venezuelan electricity

The New York Times has published new details about a purported cyberattack that unnamed US officials claim plunged parts of Venezuela into darkness in the lead-up to the capture of the country’s president, Nicolás Maduro.

Key among the new details is that the cyber operation was able to turn off electricity for most residents in the capital city of Caracas for only a few minutes, though in some neighborhoods close to the military base where Maduro was seized, the outage lasted for three days. The cyber-op also targeted Venezuelan military radar defenses. The paper said the US Cyber Command was involved.

“Turning off the power in Caracas and interfering with radar allowed US military helicopters to move into the country undetected on their mission to capture Nicolás Maduro, the Venezuelan president who has now been brought to the United States to face drug charges,” the NYT reported.

2026 Study from Panorays: 85% of CISOs Can’t See Third-Party Threats Amid Increasing Supply Chain Attacks

Panorays, a leading provider of third-party security risk management software, has released the 2026 edition of its annual CISO Survey for Third-Party Cyber Risk Management.

The survey highlights third-party cyber risk as one of the most critical challenges facing security leaders today, driven largely by a lack of visibility. While 60% of CISOs report an increase in third-party security incidents, only 15% say they have full visibility into those risks.

These gaps are compounded by limited resources and technology stacks that weren’t designed to manage dynamic supply-chain threats at scale.

Drawing on responses from 200 CISOs of US-based companies, the 2026 Panorays CISO Survey puts a spotlight on cybersecurity executives’ continuing challenges to shore up software supply chain security, as these efforts are further undermined by resource constraints and tech stacks that fall short. Despite growing adoption, standard Governance, Risk, and Compliance (GRC) platforms have largely failed security teams, leaving them without the ability or confidence needed to effectively address the rising tide of third-party threats. 

Key Findings and Insights

  • Preparedness is dangerously low: While 77% of CISOs see third-party risk as a major threat, only 21% have tested crisis response plans in place. This suggests that organizations are increasingly susceptible to prolonged outages, exposure of sensitive systems and financial losses in the event of a security breach, as well as compliance violation penalties. Without a proper response plan in place, even minor incidents have the potential to spiral out of control. 
  • Most organizations are blind to vendors: Although 60% report rising third-party breaches, just 41% monitor risk beyond direct suppliers. CISOs face massive observability gaps, as they’re only watching the front door. But the biggest risks are lurking in the background, largely unseen by most security teams.
  • Shadow AI is creating new attack paths: Despite rapid AI adoption, only 22% of CISOs have formal vetting processes, leaving unmanaged third-party AI tools embedded in core environments. Teams are adopting black-box AI tools faster than security teams can keep up, with 60% of respondents identifying shadow AI as uniquely risky. This creates a dangerous and growing blind spot for CISOs, as high-risk third-party systems are granted access to IT environments without scrutiny.
  • CISOs are dissatisfied with their compliance stacks. The report found that 61% of businesses have invested in GRC software solutions, yet 66% say that these platforms are ineffective in dealing with the dynamic nature of external third-party supply chain risks. As a result, security teams are forced to rely on manual workarounds instead, increasing the likelihood of vulnerabilities being missed. 
  • Static security assessments are no longer up to the job. This is a growing consensus among CISOs, with 71% admitting that traditional questionnaires fall short of expectations, creating fatigue instead of visibility into the threat landscape. Fortunately, CISOs are quickly embracing alternatives, with 66% moving on to AI-driven assessment tools.
Left to right: Panorays Co-founders Meir Antar (COO), Matan Or-El (CEO) and Demi Ben-Ari (Chief Strategy Officer)


“Our findings show that third-party security vulnerabilities aren’t going away – in fact, they’re becoming more prevalent due to a dangerous lack of visibility and the rampant adoption of unmanaged AI tools,” said Matan Or-El, founder and CEO of Panorays. “Meanwhile, it’s especially alarming that only 15% of CISOs say they have the ability to map out their entire supply chains.”

“The rise of AI has only made supply chains more complex, and the connected nature of these data-dependent systems is expanding the attack surface,” Or-El continued. “CISOs are increasingly seeing the value of AI-driven solutions to increase clarity around the evolving threat landscape.”  

Visibility Is Being Prioritized, but CISOs’ Hands Remain Tied

The new report found there’s a growing sense of urgency among CISOs due to the failure of traditional GRC platforms to manage third-party risk at scale. Almost two-thirds of organizations have invested in GRC tools, up from just 27% in the 2025 version of Panorays’ report, yet overall visibility has declined, resulting in growing dissatisfaction about the ineffectiveness of these systems. 

Fortunately, there are signs that organizations can close the visibility gap as more CISOs explore the use of advanced, AI-driven tools to improve their security posture. Adoption of AI for third-party risk management has surged, up from 27% a year ago to 66% this year. 

This shift has led to significant, but still alarmingly insufficient, growth in the ability of organizations to properly assess the third-party threat landscape. 

The 2026 survey found that 15% of CISOs now say they have full visibility into their software supply chains, up from just 3% a year ago. While that progress is encouraging, the overall picture remains bleak: 85% of organizations still lack a complete view of their threat landscape.

About the Survey

The 2026 CISO Survey was conducted in October 2025 by the independent research company Global Surveyz on behalf of Panorays. It’s based on responses from 200 Chief Information Security Officers, all of whom are full-time employees tasked with overseeing third-party cybersecurity risk management within their organizations. The sample included CISOs from the finance, insurance, professional services, technology, healthcare and software development sectors.

About Panorays

Panorays is a global provider of third-party cybersecurity management software. Adopted by leading banking, insurance, financial services, and healthcare organizations, Panorays enables businesses to optimize their defenses for each unique third-party relationship. With personalized and adaptive third-party cyber risk management, Panorays helps more than 1,000 customers worldwide stay ahead of emerging threats and delivers actionable remediation. The company serves enterprise and mid-market customers primarily in North America, the UK and the EU. Headquartered in New York and Israel, with offices around the world, Panorays is funded by numerous international investors, including Aleph VC, Oak HC/FT, Greenfield Partners, BlueRed Partners (Singapore), StepStone Group, Moneta VC, Imperva co-founder Amichai Shulman and former Palo Alto Networks CEO Lane Bess. For more information, visit panorays.com or contact info@panorays.com.

Contact

PR

Dan Edelstein

InboundJunction

pr@inboundjunction.com

Every M&A deal has a cyber delta: Close it before hackers do

When mergers and acquisitions grab headlines, the cybersecurity posture of the organizations involved is rarely scrutinized unless one of the parties suffers a breach. But once the deal is done, a key factor in how well two companies become one is the gap between what they believe their security posture to be and what actually holds up under scrutiny.

We call this the cyber delta.

The unique attributes of a deal, such as compressed timelines, regulatory hurdles and political and market factors, make it virtually impossible to reduce that gap to a single risk score or cyber delta metric. But we can pinpoint the common risk vectors that occur in cases where the companies envision some level of IT consolidation and/or governance.

In a world where adversaries are opportunistic and regulations unforgiving, cyber due diligence can’t remain a late-stage checkbox. It needs to be a strategic pillar of how deals are evaluated, structured and executed.

While every transaction is different, here are some common problems.

Legacy risk

Legacy systems often carry the highest risk — not because they’re old or broken, but because no one truly understands them anymore. Unpatched servers, outdated middleware, forgotten databases and unsupported operating systems often become liabilities after the deal closes.

Traditional due diligence frequently overlooks this kind of technical debt.

To surface it, security teams need configuration-level visibility to determine key issues such as whether critical systems are running end-of-life software, administrative interfaces are exposed externally or if patches can be applied without breaking core dependencies.
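As a minimal sketch of that configuration-level check, an acquired company's host inventory can be compared against a software lifecycle table. The two end-of-life dates below are the vendors' published ones; the host inventory is hypothetical.

```python
from datetime import date

# Published end-of-life dates for these products; extend from vendor lifecycle pages.
EOL = {
    "windows-server-2012": date(2023, 10, 10),
    "centos-7": date(2024, 6, 30),
}

def eol_findings(inventory, today=None):
    """Return hosts in the inventory running software past its end-of-life date."""
    today = today or date.today()
    return [host for host, product in inventory.items()
            if product in EOL and EOL[product] < today]

# Hypothetical inventory pulled during due diligence
hosts = {"dc01": "windows-server-2012", "web01": "ubuntu-24.04", "hist01": "centos-7"}
```

In a real engagement the inventory would come from asset-discovery tooling rather than a hand-written dict, but the gap analysis is the same.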

This level of scrutiny can’t wait for post-merger integration. It must be baked into early risk modeling before the deal is done.

Risk assessment misalignment

A large organization buying a much smaller one, or a highly regulated company buying one in a less regulated space, will have very different risk profiles, so the goal isn't necessarily parity; it's unification. But even if you don't unite all the technologies, you still need a unified view of risk.

Establishing open lines of communication across teams is essential to establishing measurable baselines for both sides. That provides a framework for measuring progress and spotting where the biggest gaps are. The goal is to agree on what “good” looks like, what needs fixing and where the priorities are.

Security scores or shared risk indexes can help, especially when you’re trying to compare two environments that work differently. It’s less about having one perfect KPI and more about knowing what you’ve got, what it’s going to take to secure it and how you’ll track that over time.

Security maturity misalignment

Another common risk is the mismatch in security maturity between the acquiring organization and its target. One company might have rigorous asset inventories, patch SLAs and automated detection; the other may be operating with ad hoc response plans and minimal logging. This misalignment creates serious friction — and risk — during integration.

Each security team should understand the other company’s threat modeling, incident response and vulnerability triage processes. They also need to identify where alignment is mandatory (e.g., access controls, endpoint protection) and where temporary coexistence is acceptable.

While every deal has a different integration blueprint, most can be split into two broad categories. First is full integration, which requires collaboration across each company’s security teams to map interdependencies between systems, understand identity sprawl and simulate interconnectivity to identify points of weakness that could ripple through both environments.

Second is partial integration or a standalone operation. In these cases, the focus shifts to interface points. Are APIs between the two firms secured and rate-limited? Are shared systems — like CRMs or collaboration tools — properly monitored and segmented? Security diligence should also reflect the business function of the acquired entity. A dev team’s cloud environment presents different risks than a customer service platform handling PII.
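As one concrete control at those interface points, a token-bucket rate limiter caps how fast a partner system can call a shared API. The sketch below is a generic illustration, not any particular vendor's implementation; the rate and capacity values are arbitrary.

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter for calls crossing an M&A interface point."""

    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added back per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Allow a burst of 2 requests, then refill slowly (one token every 2 seconds)
bucket = TokenBucket(rate=0.5, capacity=2)
```

The same shape applies whether the limiter sits in an API gateway or in the service itself; what matters for diligence is that some such cap exists on cross-company traffic.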

Compliance by inheritance

You’re not just acquiring infrastructure — you’re inheriting obligations. A target’s security program may be sufficient to avoid breaches but still fall short of current regulatory standards. To avoid latent compliance risk:

  • Map systems to relevant regulatory frameworks (e.g., GDPR, HIPAA, CCPA, SEC cybersecurity disclosure rules)
  • Review how sensitive data is classified, encrypted and audited
  • Flag high-risk areas such as weak authentication, unmonitored data transfers, legacy encryption, etc.
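The first two bullets can begin as a simple tagging exercise. The sketch below is illustrative throughout — the systems, data tags, and framework mapping are all hypothetical — but it shows the shape of the output: which frameworks each inherited system pulls in, and where sensitive data sits unencrypted.

```python
# Hypothetical inventory of acquired systems, tagged with the data they handle
SYSTEMS = {
    "crm": {"data": {"pii"}, "encrypted": True},
    "billing": {"data": {"pii", "payment"}, "encrypted": True},
    "legacy-ftp": {"data": {"pii"}, "encrypted": False},
}

# Illustrative mapping from data type to the frameworks it triggers
FRAMEWORKS = {"pii": {"GDPR", "CCPA"}, "payment": {"PCI DSS"}, "health": {"HIPAA"}}

def compliance_map(systems):
    """Per-system view of inherited frameworks and obvious red flags."""
    report = {}
    for name, meta in systems.items():
        frameworks = set().union(*(FRAMEWORKS.get(d, set()) for d in meta["data"]))
        flags = [] if meta["encrypted"] else ["unencrypted sensitive data"]
        report[name] = {"frameworks": sorted(frameworks), "flags": flags}
    return report
```

Real mappings are far more nuanced (jurisdiction, data volume, processing role), but even this coarse view surfaces systems like the legacy FTP server before an auditor does.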

These issues often stay hidden until audits, legal inquiries or customer complaints surface. Addressing them proactively avoids painful surprises.

Technology culture clash

When a cloud-native company is acquired by a company that is less so, the due diligence process must align with the velocity and architecture of modern development. Risks often lie in the operational details, such as cloud infrastructure concerns around over-permissive IAM roles and misconfigured storage buckets.

CI/CD pipelines require examination to ensure build processes are secure and secrets aren’t stored in plain text or version control. APIs and integrations need assessment to confirm tokens are properly scoped and revocable, with endpoints protected by rate limiting and authentication. For IoT and edge devices, critical considerations include whether firmware updates are available and signed and whether remote management ports are exposed.
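A first pass at the "secrets in plain text" check can be a pattern scan over configuration and source files. The sketch below uses two illustrative rules only; real secret scanners add entropy analysis and hundreds more patterns.

```python
import re

# Illustrative patterns: an AWS access key ID shape and a quoted password assignment
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "hardcoded_password": re.compile(r"(?i)password\s*[:=]\s*['\"][^'\"]+['\"]"),
}

def scan_text(text):
    """Return (rule_name, matched_text) pairs found in a config/source blob."""
    hits = []
    for rule, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((rule, match.group(0)))
    return hits

# Hypothetical file contents with two planted findings
sample = 'db_password = "hunter2"\nkey = AKIAABCDEFGHIJKLMNOP'
```

Running such a scan over the target's repositories before close is cheap, and a clean result is itself useful diligence evidence.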

Security culture clash

When two companies come together, you’re not just dealing with different tools — you’re dealing with different ways of thinking about risk. One team might have a solid process for tracking and prioritizing issues. The other might be in constant firefighting mode, just trying to keep up.

Trying to force everyone into one framework right away usually doesn’t work. A better move is to start with shared visibility. Get both sides looking at the same data and using the same language when they talk about risk. The next step is to focus on the areas where the two environments actually touch — things like identity, access and shared infrastructure. That’s where misalignment causes the most problems.

Security leaders don’t need to have it all figured out on day one. They just need people to see the same picture and be willing to work on it together.

Global deals, local risk

Cross-border M&A introduces another layer of complexity. Different regions carry distinct legal, technical and cultural definitions of risk. A European company may prioritize data sovereignty and breach notification timelines; a U.S. firm may focus more on operational resilience and insurance coverage.

Smart security teams build region-specific exposure profiles that account for local laws and regulatory disclosure requirements, threat actor activity by regions and technical norms and enforcement capacity. Global harmonization isn’t always possible, but understanding the landscape in advance helps prevent surprises down the road.

Gaining an advantage by reducing the cyber delta

There will always be some level of uncertainty in M&A cybersecurity. But the organizations that work actively to shrink the cyber delta will have an operational edge.

Don’t let a breach become part of the deal.

This article is published as part of the Foundry Expert Contributor Network.

Why modernising infrastructure can mitigate cyber threats

Cyber threats are rising, with a 44% year-on-year increase in cyber-attacks underlining how important security is in the age of AI.

Despite this, many businesses are failing to upgrade legacy IT infrastructure – and leaving themselves vulnerable.

“I find it very strange the organisations that I speak to where their ability to just patch in the cycle might be six months apart,” says Rhys Powell, senior black belt, Managed Cloud Services at Red Hat, speaking in a recent CIO webcast. “We’re seeing out in the world today how easy it appears for criminals… to be able to get into infrastructure and to be able to take advantage of that.”

Additionally, data leaks caused by unsecured employee use of AI tools are now posing a new, potent risk for businesses, Powell notes.

How should IT leaders modernise infrastructure and address these security threats?

Powell says that a key benefit of the containerised journey “is that it is built with security first.”

However, the right expertise and approach are needed to make this approach work: Red Hat research shows two out of three businesses using containerised technology delayed or slowed down deployment over security concerns.

With OpenShift’s managed service, a site reliability engineering team proactively fixes problems in the background, Powell explains, allowing containerisation to be implemented smoothly and securely.

With underlying infrastructure security handled by Red Hat, businesses can focus on making sure that the application layer is as secure, monitored and controlled as possible.

That enables security teams to roll out fixes far more quickly and with far more confidence than before.

'From AI to preventing employee burnout': 12 top security priorities for 2026, as chosen by working CISOs

As inflated expectations around AI settle, chief information security officers (CISOs) face an even longer list of priorities for 2026. These range from preventing burnout on security teams and finding practical use cases for AI, to strategies focused on detecting breaches early, and plans for the potential threat of quantum computing neutralizing existing cryptography. CISOs across several industries shared the security agendas at the top of their 2026 lists.

1. Resilience first, not after-the-fact response

Elliott Franklin, CISO of Fortitude Re, named strengthening resilience and architectural discipline as a core 2026 goal as organizations become ever more dependent on cloud infrastructure. "We will take an approach focused on disciplined project management and intentional design," Franklin explained.

Every new initiative begins with a clear architectural plan and a deep understanding of end-to-end dependencies across systems and of potential points of failure. "The goal is to strengthen the stability, scalability, and reliability of our systems through a thoughtful, engineering-led approach, rather than reacting after outages or disruptions occur," Franklin said. "With that foundation in place, we can drive the business with confidence that our technology and security investments will endure and evolve over the long term."

2. AI drives the security agenda

Cezary Piekarski, group CISO of Standard Chartered, expects the 2026 security agenda to revolve around AI on two fronts: defining the threat landscape, and designing the defensive architecture to counter it.

"Speed matters more than anything when mitigating attacks. AI and orchestration tools let us automate detection quickly and streamline incident response," Piekarski said. "That sharply reduces attacker dwell time and speeds up recovery, so threats can be contained before they escalate."

With AI-based applications and systems proliferating and opening new attack surfaces, Piekarski said he is prioritizing defending and hardening the environment against AI-enabled threats and tactics. "The key is to seize the opportunities AI offers across cyber while ensuring the bank can use AI safely and reliably," he said.

Daniel Schatz, CISO of Qiagen, likewise expects AI to be a central theme throughout 2026. He said it is important to use AI to improve security controls and operations while ensuring AI is integrated safely into products.

Schatz expects generative AI-driven threats in particular to grow noticeably in sophistication and scale. "Most of the attacks observed so far have been largely manual, AI-assisted campaigns, but like other threats they are likely to evolve into automated, industrialized social-engineering attacks," he said. "As generative AI becomes ubiquitous, technologies that help organizations understand, manage, and protect their AI attack surface will rapidly grow in importance. To avoid repeating the mistakes of the early web era, the industry needs to build in proper security controls from the earliest stages."

3. Visibility and control

For Conal Gallagher, CIO and CISO of Flexera, the top priority is balancing productivity with intellectual property protection as staff adopt AI tools and chatbots. "We are standardizing on trusted, enterprise-grade AI solutions while putting controls in place to prevent data leakage through unsanctioned tools," Gallagher explained.

In practice, he plans to use SaaS management and discovery tools to map shadow IT and unsanctioned AI use. With customer and regulatory demands around ESG and security continuing to grow, Gallagher also named automating compliance and reporting as a key task, alongside a strategy of proactively tracking and responding to real-world threat trends through threat intelligence feeds and vulnerability management solutions.

"It comes down to visibility and control," Gallagher said. "What matters is knowing exactly what exists in our environment, how it is being used, and how quickly we can respond when something changes."

4. Managing human and non-human identities

Schatz counts managing human and non-human identities among his key priorities, and expects the technologies that enable effective identity management to remain central. "Identity is still a hard area to protect, and with the arrival of agentic AI the population of non-human identities is starting to scale," he said.

In a similar vein, Franklin is prioritizing entitlement management that spans human and non-human identities.

"It is important to manage service accounts, APIs, and automation tools with the same rigor as user accounts. As automation expands, managing these digital identities effectively becomes decisive for maintaining reliability, traceability, and control in complex environments," Franklin said, adding that the ultimate goal is to support productivity and collaboration while strengthening resilience across the organization.

5. Building security into agentic AI products

Some security leaders are building security directly into agentic AI products to counter increasingly sophisticated attacks. "We are moving beyond simply trying to block AI risk; by integrating security into agentic solutions from the design stage, we are making the secure path the fastest path," said Assaf Keren, CSO of Qualtrics.

Keren said he plans to use AI to strengthen security capabilities while automating and accelerating internal functions such as initial SOC analysis and control testing.

6. Tying security to trust

Keren said 2026 must be the year security becomes a visible signal of trust to customers, not just a back-office function. He is focused on using security to build more proactive, transparent partnerships with customers.

"Customers are making purchasing decisions based on how companies handle data and use AI. Security should be seen as a competitive go-to-market differentiator, not merely a risk-mitigation measure," Keren advised.

To that end, he plans to pursue ISO 42001 certification for AI and FedRAMP High authorization, disclose security practices transparently, and make security a core element of the company's value proposition. "Companies that can credibly demonstrate strong security and responsible AI use will win the business of trust-conscious customers," Keren said.

7. Building a quantum-readiness strategy

Standard Chartered's Piekarski called quantum computing "a serious cyber risk factor that could neutralize existing encryption, with major implications for data security and the potential to open new attack vectors."

With that in mind, Piekarski and his security team are preparing in earnest for the coming shift. "In 2026 we will continue to drive a multi-year, resilient cryptographic-readiness strategy to counter emerging threats and resolve the associated risks," he said.

Jon France, CISO of ISC2, advised moving quickly on post-quantum cryptography. According to France, that starts with auditing internal systems and extends to verifying how prepared vendors and partners are.

"Your roadmap should include building an inventory of cryptographic assets, followed by asking vendors what they are doing about quantum readiness and what their roadmap looks like. Some technologies may need to be retired sooner than expected, so plan for that in advance," France said.
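A first-pass cryptographic-asset inventory can start as a simple tagging exercise. The systems and algorithm labels in the sketch below are hypothetical, but the classification itself is standard: RSA, elliptic-curve, and Diffie-Hellman schemes are the ones a large quantum computer running Shor's algorithm would break, while symmetric ciphers such as AES-256 are only weakened.

```python
# Hypothetical inventory; a real one would be generated from certificate scans,
# TLS configuration audits, and vendor questionnaires.
CRYPTO_ASSETS = [
    {"system": "vpn-gateway", "algorithm": "RSA-2048"},
    {"system": "payments-api", "algorithm": "ECDSA-P256"},
    {"system": "backup-store", "algorithm": "AES-256"},
    {"system": "partner-link", "algorithm": "DH-2048"},
]

# Public-key schemes broken by Shor's algorithm; symmetric ciphers merely weaken
QUANTUM_VULNERABLE_PREFIXES = ("RSA", "ECDSA", "ECDH", "DH")

def quantum_exposure(assets):
    """Return the systems whose cryptography a large quantum computer would break."""
    return [a["system"] for a in assets
            if a["algorithm"].startswith(QUANTUM_VULNERABLE_PREFIXES)]
```

Even this coarse view gives the vendor conversation France describes a concrete starting point: here are our exposed systems, what is your migration roadmap for them?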

8. Protecting people, not just systems

Security strategy for 2026 and beyond must account not only for tools, controls, and compliance, but also for the resilience of the workforce, because sustained stress and shifting skill requirements are restructuring the cybersecurity talent pool.

With burnout now a constant problem, AI reshaping roles and required skills, and economic conditions squeezing budgets, CISOs must look after their teams' well-being as much as their technology.

"Keeping the balance between utilizing your team and burning it out is a major challenge," France noted.

9. Detecting breaches early

As cloud systems expand and supply chains grow more complex, the old assumption that every breach can be blocked outright is showing its limits. CISOs are under growing pressure to prioritize detection and response over prevention in their security programs.

"The strongest security program is not the one that stops every breach, but the one that finds the breach first," said TeamViewer CISO Jan Bee.

Bee argued for valuing visibility and speed over trying to wall the organization in like a fortress. "In a world of agentic AI and hyperconnected SaaS, speed decides everything. Companies that can detect anomalies within seconds will pull ahead of competitors whose defenses are solid but whose response is slow," he predicted.

10. Getting ahead of the threat curve

"Enterprises are generally slow to decide and slow to execute, so as this year's budget cycle wraps up it is important to look ahead at the shifting threat landscape and prepare appropriately," Schatz explained.

Schatz's priorities include reviewing well-established resources such as the World Economic Forum's (WEF) cybersecurity outlook report, the European Union Agency for Cybersecurity's (ENISA) threat landscape report, and the ISF's Threat Horizon.

"These resources help us build more realistic risk-control strategies looking 12 to 36 months out, and help set expectations inside the organization," he said.

11. Closing the communication gap

CISOs, boards, and IT leaders need to recognize security as a core business priority that affects the entire enterprise, not merely a technical measure, and communicate from that shared perspective.

According to Bee, many boards still view security as a compliance or cost issue, while CISOs explain it in terms of risk management and business continuity. "That communication gap creates blind spots attackers can exploit. As cyber risk becomes inseparable from business risk, boards and CISOs must work more closely together to translate technical threats into the financial, reputational, and operational impacts executives can act on," he advised.

In particular, CISOs should focus on storytelling rather than mere reporting, which means clearly connecting threat intelligence to business outcomes.

Boards, for their part, need to see cyber resilience as a competitive advantage rather than a line-item cost. "Companies that close the cultural gap between security and strategy recover faster when incidents inevitably occur, and inspire greater confidence among investors," Bee said.

12. Outcomes over hype

"In 2026, execution will matter far more than experimentation," Gallagher predicted.

To that end, Gallagher plans a disciplined approach that emphasizes transparency, governance, and measurable outcomes across the security program. "Every initiative will be judged on how its cost translates into real ROI and visible risk reduction," he said.

AI initiatives in particular are likely to face scrutiny as the expectations and excitement of 2025 cool, needing to prove tangible results and clear business use cases.

"2026 will be the year AI becomes a real engine of the business, not just a technical experiment," Gallagher predicted.
dl-ciokorea@foundryco.com

Industrial routers bear the brunt of OT cyberattacks, new Forescout research finds

Industrial routers and other OT perimeter devices are absorbing the majority of cyberattacks targeting operational technology environments, according to new Forescout Vedere Labs research.

Analysing 90 days of real-world honeypot data, researchers found that 67% of malicious activity was directed at OT perimeter devices, such as industrial routers and firewalls, compared with 33% aimed at directly exposed OT assets like PLCs and HMIs.

The findings highlight the growing risk facing edge devices that sit between IT and OT networks.

Automated attacks dominate the OT perimeter

The research shows that OT environments are under constant, automated attack, with more than 60 million requests logged across 11 devices in just three months. Once high-volume SNMP fingerprinting traffic was removed, the remaining 3.5 million events revealed that industrial firewalls and routers were the most heavily targeted assets.
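The triage described above — drop the fingerprinting noise, then bucket what remains by protocol — can be sketched as follows. The event stream here is a toy stand-in for real honeypot logs.

```python
from collections import Counter

# Toy event stream; real honeypot records carry source IPs, payloads, timestamps
events = [
    {"proto": "snmp"}, {"proto": "ssh"}, {"proto": "telnet"},
    {"proto": "http"}, {"proto": "ssh"}, {"proto": "snmp"},
]

def protocol_share(events, drop=("snmp",)):
    """Drop fingerprinting noise, then report each protocol's share of the rest."""
    kept = [e["proto"] for e in events if e["proto"] not in drop]
    counts = Counter(kept)
    total = sum(counts.values())
    return {proto: round(100 * n / total, 1) for proto, n in counts.items()}
```

Filtering the high-volume scanning traffic first matters: without it, the loudest fingerprinting campaign would dominate every percentage and hide the targeted activity.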

Attackers overwhelmingly relied on SSH and Telnet brute-force attempts, which accounted for 72% of perimeter attacks. Many of the credentials used were drawn from well-known default IoT password lists that have circulated for almost a decade, underlining the persistent risk posed by weak or unchanged credentials.
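Defenders can turn those same lists against themselves by auditing device accounts for known defaults. In the sketch below, the credential pairs really do appear on widely circulated IoT default-credential lists; the accounts being audited are hypothetical.

```python
# A handful of pairs from widely circulated default IoT credential lists
DEFAULT_CREDS = {
    ("root", "root"),
    ("admin", "admin"),
    ("root", "vizxv"),
    ("admin", "1234"),
}

def audit_accounts(accounts):
    """Flag device accounts still using a known default username/password pair."""
    return [pair for pair in accounts if pair in DEFAULT_CREDS]

# Hypothetical accounts exported from a device configuration backup
accounts = [("root", "root"), ("svc_hmi", "Str0ng!pass"), ("admin", "1234")]
```

In practice an audit would compare password hashes rather than plaintext, but the principle is the same: if a pair on a decade-old botnet list still works on your router, assume attackers have already tried it.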

HTTP and HTTPS traffic made up a further 24% of attacks, including thousands of automated exploit attempts designed to force devices to download malware from external servers.

Emerging botnets raise concerns

Researchers identified several malware families actively targeting OT perimeter devices, including RondoDox, Redtail, and ShadowV2. Of these, RondoDox stood out as the most prevalent, responsible for 59% of observed malicious HTTP activity.

RondoDox is a relatively new botnet that has rapidly expanded its exploit arsenal to include more than 50 known vulnerabilities, many without assigned CVEs. While most current exploits focus on IT and IoT devices, researchers warn that the addition of industrial router vulnerabilities could quickly increase the risk to critical infrastructure operators.

ShadowV2, first observed only months ago, has already become the third most common botnet in the dataset, demonstrating how quickly new automated threats are emerging.

Chaya_005: a long-running reconnaissance campaign

One of the most significant findings was the discovery of a previously undocumented activity cluster, dubbed Chaya_005. Active for at least two years, Chaya_005 appears to focus on fingerprinting and capability testing of industrial edge devices, rather than immediate mass exploitation.

The campaign initially included a successful exploit against a legacy Sierra Wireless router, before evolving into a broader set of malformed exploit attempts against multiple vendors’ devices. Researchers believe the activity may be designed to identify which devices are vulnerable to specific command-execution techniques, potentially for future exploitation or monetisation.

Unlike typical botnets, Chaya_005 showed no evidence of indiscriminate scanning or follow-on attacks, suggesting a more deliberate and targeted reconnaissance effort.

Hacktivists and OT expand the threat surface

The research also highlights the growing interest of hacktivist groups in OT targets. In one incident, the pro-Russian group TwoNet compromised and defaced a water treatment HMI in Forescout’s adversary engagement environment.

While such attacks often rely on manual exploitation, the data shows that routers, PLCs, HMIs and even IP cameras are routinely targeted by automated scanners and botnets, blurring the traditional distinction between IT and OT threats.

Security teams urged to rethink IT/OT boundaries

Forescout warns that treating attacks as “IT-only” or “OT-only” is increasingly dangerous. Automated malware does not distinguish between environments, and compromised IT devices at the OT perimeter can serve as a stepping stone into critical systems.

To reduce risk, researchers recommend that organisations harden OT devices, eliminate weak credentials, avoid exposing industrial equipment directly to the internet, and implement OT-aware monitoring capable of detecting malicious behaviour specific to industrial protocols.

The post Industrial routers bear the brunt of OT cyberattacks, new Forescout research finds appeared first on IT Security Guru.

SCADA (ICS) Hacking and Security: An Introduction to SCADA Forensics

Welcome back, my aspiring SCADA/ICS cyberwarriors!

SCADA (Supervisory Control and Data Acquisition) systems and the wider class of industrial control systems (ICS) run many parts of modern life, such as electricity, water, transport, and factories. These systems were originally built to work in closed environments and not to be exposed to the public Internet. Over the last decade they have been connected more and more to corporate networks and remote services to improve efficiency and monitoring. That change has also made them reachable by the same attackers who target regular IT systems. When a SCADA system is hit by malware, sabotage, or human error, operators must restore service fast. At the same time, investigators need trustworthy evidence to find out what happened and to support legal, regulatory, or insurance processes.

Forensics techniques from traditional IT are helpful, but they usually do not fit SCADA devices directly. Many field controllers run custom or minimal operating systems, lack detailed logs, and expose few of the standard interfaces that desktop forensics relies on. To address that gap, we are starting a focused, practical 3-day course on SCADA forensics. The course is designed to equip you with hands-on skills for collecting, preserving and analysing evidence from PLCs, RTUs, HMIs and engineering workstations.

Today we will explain how SCADA systems are built, what makes forensics in that space hard, and which practical approaches and tools investigators can use today.

Background and SCADA Architecture

A SCADA environment usually has three main parts: the control center, the network that connects things, and the field devices.

The control center contains servers that run the supervisory applications, databases or historians that store measurement data, and operator screens (human-machine interfaces). These hosts look more like regular IT systems and are usually the easiest place to start a forensic investigation.

The network between control center and field devices is varied. It can include Ethernet, serial links, cellular radios, or specialized industrial buses. Protocols range from simple serial messages to industrial Ethernet and protocol stacks that are unique to vendors. That variety makes it harder to collect and interpret network traffic consistently.

Field devices sit at the edge. They include PLCs (programmable logic controllers), RTUs (remote terminal units), and other embedded controllers that handle sensors and actuators. Many of these devices run stripped-down or proprietary firmware, hold little storage, and are designed to operate continuously.

Understanding these layers helps set realistic expectations for what evidence is available and how to collect it without stopping critical operations.

[Image: SCADA water system]

Challenges in SCADA Forensics

SCADA forensics has specific challenges that change how an investigation is done.

First, some field devices are not built for forensics. They often lack detailed logs, have limited storage, and run proprietary software. That makes it hard to find recorded events or to run standard acquisition tools on the device.

Second, availability matters. Many SCADA devices must stay online to keep a plant, substation, or waterworks operating. Investigators cannot simply shut everything down to image drives. This requirement forces use of live-acquisition techniques that gather volatile data while systems keep running.

Third, timing and synchronization are difficult. Distributed devices often have different clocks and can drift. That makes correlating events across a wide system challenging unless timestamps are synchronized or corrected during analysis.
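As a small illustration of the correction step, the sketch below normalizes event timestamps from several devices onto one reference timeline using per-device clock offsets. The device names and offset values are invented; in practice, offsets would be measured against a trusted time source during acquisition.

```python
from datetime import datetime, timedelta

# Hypothetical offsets of each device clock relative to the reference
# clock, in seconds. Positive means the device clock runs ahead.
CLOCK_OFFSETS = {"plc-01": 12.5, "rtu-07": -3.2, "hmi-02": 0.0}

def normalize(device: str, local_ts: datetime) -> datetime:
    """Convert a device-local timestamp to reference time."""
    return local_ts - timedelta(seconds=CLOCK_OFFSETS.get(device, 0.0))

events = [
    ("rtu-07", datetime(2024, 5, 1, 10, 0, 1)),
    ("plc-01", datetime(2024, 5, 1, 10, 0, 10)),
]

# Sort events on the corrected, common timeline. Once normalized, the
# plc-01 event (whose clock ran 12.5 s fast) actually happened first.
timeline = sorted((normalize(device, ts), device) for device, ts in events)
```

Without the correction, raw timestamps would put the PLC event after the RTU event and suggest the wrong causal order, which is exactly the correlation problem described above.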

Finally, organizational and legal issues interfere. Companies are often reluctant to share device details, firmware, or incident records because of safety, reputation, or legal concerns. That slows development of general-purpose tools and slows learning from real incidents.

All these challenges only increase the value of SCADA forensics specialists. Salaries vary by location, experience, and role, but can range from approximately $65,000 to over $120,000 per year.

Real-world attack chain

To understand why SCADA forensics matters, it helps to look at how real incidents unfold. The following examples show how a single compromise inside the corporate network can quickly spread into the operational side of a company. In both cases, the attack starts with the compromise of an HR employee’s workstation, which is a common low-privilege entry point. From there, the attacker begins basic domain reconnaissance, such as mapping users, groups, servers, and RDP access paths. 

Case 1

In the first path, the attacker discovers that the compromised account has the right to replicate directory data, similar to a DCSync privilege. That allows the extraction of domain administrator credentials. Once the attacker holds domain admin rights, they use Group Policy to push a task or service that creates a persistent connection to their command-and-control server. From that moment, they can access nearly every machine in the domain without resistance. With such reach, pivoting into the SCADA or engineering network becomes a matter of time. In one real scenario, this setup lasted only weeks before attackers gained full control and eventually destroyed the domain.

Case 2

The second path shows a different but equally dangerous route. After gathering domain information, the attacker finds that the HR account has RDP access to a BACKUP server, which stores local administrator hashes. They use these hashes to move laterally, discovering that most domain users also have RDP access through an RDG gateway that connects to multiple workstations. From there, they hop across endpoints, including those used by engineers. Once inside engineering workstations, the attacker maps out routes to the industrial control network and starts interacting with devices by changing configurations, altering setpoints, or pushing malicious logic into PLCs.

Both cases end with full access to SCADA and industrial equipment. The common causes are poor segmentation between IT and OT, excessive privileges, and weak monitoring.

Frameworks and Methodologies

A practical framework for SCADA forensics has to preserve evidence and keep the process safe. The basic idea is to capture the most fragile, meaningful data first and leave more invasive actions for later or for offline testing.

Start with clear roles and priorities. You need to know who can order device changes, who will gather evidence, and who is responsible for restoring service. Communication between operations and security must be planned ahead of incidents.

As previously said, capture volatile and remote evidence first, then persistent local data. This includes memory contents, current register values, and anything stored only in RAM. Remote evidence includes network traffic, historian streams, and operator session logs. Persistent local data includes configuration files, firmware images, and file system contents. Capturing network traffic and historian data early preserves context without touching the device.
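The volatile-first ordering described above can be made explicit as a ranked acquisition plan, so responders work through the most fragile evidence first. The items and volatility ranks below are illustrative, not a prescribed checklist.

```python
# Each planned acquisition step is paired with a volatility rank:
# lower rank = more fragile, collect sooner.
EVIDENCE_PLAN = [
    ("configuration files", 3),
    ("PLC register values and memory contents", 1),   # most volatile
    ("network capture / historian stream", 2),
    ("firmware image", 4),                             # most stable
]

def acquisition_order(plan):
    """Return step names sorted most-volatile-first."""
    return [name for name, rank in sorted(plan, key=lambda item: item[1])]

order = acquisition_order(EVIDENCE_PLAN)
```

The point of encoding the plan this way is that it can be reviewed and rehearsed before an incident, rather than improvised while a plant is degraded.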

A common operational pattern is to use lightweight preservation agents or passive sensors that record traffic and key events in real time. These components should avoid any action that changes device behavior. Heavy analysis and pattern matching happen later on copies of captured data in a safe environment.

When device interaction is required, prefer read-only APIs, documented diagnostic ports, or vendor-supported tools. If hardware-level extraction is necessary, use controlled methods (for example JTAG reads, serial console captures, or bus sniffers) with clear test plans and safety checks. Keep detailed logs of every command and action taken during live acquisition so the evidence chain is traceable.

Automation helps, but only if it is conservative. Two-stage approaches are useful, where stage one performs simple, safe preservation and stage two runs deeper analyses offline. Any automated agent must be tested to ensure it never interferes with real-time control logic.

[Image: a compromised Russian SCADA system]

SCADA Network Forensics

Network captures are often the richest, least disruptive source of evidence. Packet captures and flow data show commands sent to controllers, operator actions, and any external systems that are connected to the control network.

Start by placing passive capture points in places that see control traffic without being in the critical data path, such as network mirrors or dedicated taps. Capture both raw packets and derived session logs as well as timestamps with a reliable time source.

Protocol awareness is essential. We will cover some of them in the next article. A lot more will be covered during the course. Industrial protocols like Modbus, DNP3, and vendor-specific protocols carry operational commands. Parsing these messages into readable audit records makes it much easier to spot abnormal commands, unauthorized writes to registers, or suspicious sequence patterns. Deterministic models, for example, state machines that describe allowed sequences of messages, help identify anomalies. But expect normal operations to be noisy and variable. Any model must be trained or tuned to the site’s own behavior to reduce false positives.
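To make the idea of parsing industrial messages into auditable records concrete, here is a minimal sketch (not a full Modbus implementation) that decodes a Modbus/TCP "Write Single Register" request and flags writes outside a site-specific baseline of registers operators normally touch. The register whitelist is a hypothetical tuned-to-site value, as the text recommends.

```python
import struct

# Hypothetical site baseline: registers that legitimate operator
# workflows are known to write to.
ALLOWED_WRITE_REGISTERS = {100, 101, 102}

def check_modbus_write(frame: bytes):
    """Decode one Modbus/TCP frame; flag writes to unexpected registers."""
    # MBAP header: transaction id, protocol id, length, unit id
    txn_id, proto_id, length, unit_id = struct.unpack(">HHHB", frame[:7])
    function_code = frame[7]
    if function_code == 0x06:  # Write Single Register
        register, value = struct.unpack(">HH", frame[8:12])
        if register not in ALLOWED_WRITE_REGISTERS:
            return ("ALERT", register, value)
        return ("OK", register, value)
    return ("IGNORED", None, None)

# Craft a request writing value 500 to register 999 (not in the baseline).
frame = struct.pack(">HHHBBHH", 1, 0, 6, 17, 0x06, 999, 500)
verdict = check_modbus_write(frame)
```

A real deployment would parse all function codes and feed verdicts into the deterministic state models mentioned above, but even this single rule shows how a raw packet becomes a readable audit record.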

Network forensics also supports containment. If an anomaly is detected in real time, defenders can ramp up capture fidelity in critical segments and preserve extra context for later analysis. Because many incidents move from corporate IT into OT networks, collecting correlated data from both domains gives a bigger picture of the attacker’s path.

[Image: oil refinery]

Endpoint and Device Forensics

Field devices are the hardest but the most important forensic targets. The path to useful evidence often follows a tiered strategy, where you use non-invasive sources first, then proceed to live acquisition, and finally to hardware-level extraction only when necessary.

Non-invasive collection means pulling data from historians, backups, documented export functions, and vendor tools that allow read-only access. These sources often include configuration snapshots, logged process values, and operator commands.

Live acquisition captures runtime state without stopping the device. Where possible, use the device’s read-only interfaces or diagnostic links to get memory snapshots, register values, and program state. If a device provides a console or API that returns internal variables, collect those values along with timestamps and any available context.

If read-only or diagnostic interfaces are not available or do not contain the needed data, hardware extraction methods come next. This includes connecting to serial consoles, listening on fieldbuses, using JTAG or SWD to read memory, or intercepting firmware during upload processes. These operations require specialized hardware and procedures, and they must be planned carefully to avoid accidental writes, timing interruptions, or safety hazards.

Interpreting raw dumps is often the bottleneck. Memory and storage can contain mixed content, such as configuration data, program code, encrypted blobs, and timestamps. But there are techniques that can help, including differential analysis (comparing multiple dumps from similar devices), data carving for detectable structures, and machine-assisted methods that separate low-entropy (likely structured) regions from high-entropy (likely encrypted) ones. Comparing captured firmware to a known baseline is a reliable way to detect tampering.
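The entropy-based separation mentioned above can be sketched in a few lines: compute Shannon entropy per block of a dump, then label blocks as likely structured (low entropy) or likely compressed/encrypted (near 8 bits per byte). The block size and threshold below are illustrative starting points, not validated values.

```python
import math
import os
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    """Bits per byte: 0.0 for uniform repeats, approaching 8.0 for random data."""
    if not block:
        return 0.0
    counts = Counter(block)
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def classify(dump: bytes, block_size: int = 1024, threshold: float = 7.0):
    """Label each block of the dump as 'structured' or 'high-entropy'."""
    labels = []
    for offset in range(0, len(dump), block_size):
        entropy = shannon_entropy(dump[offset:offset + block_size])
        labels.append((offset, "high-entropy" if entropy > threshold else "structured"))
    return labels

# Synthetic dump: one zero-padded block followed by one random block.
dump = b"\x00" * 1024 + os.urandom(1024)
labels = classify(dump)
```

On real firmware, the structured regions are where carving and differential comparison against a known-good baseline pay off, while high-entropy regions usually need keys or vendor cooperation to interpret.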

Where possible, create an offline test environment that emulates the device and process so investigators can replay traffic, exercise suspected malicious inputs, and validate hypotheses without touching production hardware.

SCADA Forensics Tooling

Right now the toolset is mixed. Investigators use standard forensic suites for control-center hosts, packet-capture and IDS tools extended with industrial protocol parsers for networks, and bespoke hardware tools or vendor utilities for field devices. Many useful tools exist, but most are specific to a vendor, a protocol, or a device family.

A practical roadmap for better tooling includes three points. First, create and adopt standardized formats for logging control-protocol events and for preserving packet captures with synchronized timestamps. Second, build non-disruptive acquisition primitives that work across device classes: ways to read key memory regions, configuration, and program images without stopping operation. Third, develop shared anonymized incident datasets that let researchers validate tools against realistic behaviors and edge cases.

In the meantime, it’s important to combine several approaches: maintaining high-quality network capture, working with vendors to understand diagnostic interfaces, and preparing hardware tools and safe extraction procedures, while documenting everything. Establish and test standard operating procedures in advance so that when an incident happens the team acts quickly and consistently.

Conclusion

Attacks on critical infrastructure are rising, and SCADA forensics still trails IT forensics because field devices are often proprietary, have limited logging, and cannot be taken offline. We have outlined those gaps and the practical actions to address them: preserve network and historian data early, prefer read-only device collection, enforce strict IT/OT segmentation, reduce privileges, and rehearse incident response to protect those systems. In the next article, we will look at different protocols to give you a better idea of how everything works.

To support hands-on learning, our 3-day SCADA Forensics course, starting in November, uses realistic ICS network topologies, breach simulations, and labs to teach how to reconstruct attack chains, identify IOCs, and analyze artifacts on PLCs, RTUs, engineering workstations, and HMIs.

During the course you will use common forensic tools to complete exercises and focus on safe, non-disruptive procedures you can apply in production environments. 

Learn more: https://hackersarise.thinkific.com/courses/scada-forensics

The post SCADA (ICS) Hacking and Security: An Introduction to SCADA Forensics first appeared on Hackers Arise.

The Evolution of Antivirus Software to Face Modern Threats

Over the years, endpoint security has evolved from primitive antivirus software to more sophisticated next-generation platforms employing advanced technology and better endpoint detection and response.

Because of the increased threat that modern cyberattacks pose, experts are exploring more elegant ways of keeping data safe from threats.

Signature-Based Antivirus Software

Signature-based detection is the use of footprints to identify malware. All programs, applications, software and files have a digital footprint. Buried within their code, these digital footprints or signatures are unique to the respective property. With signature-based detection, traditional antivirus products can scan a computer for the footprints of known malware.

These malware footprints are stored in a database. Antivirus products essentially search for the footprints of known malware in the database. If they discover one, they’ll identify the malware, in which case they’ll either delete or quarantine it.
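The lookup described above amounts to hashing a file and checking the digest against a database of known-bad fingerprints. The toy sketch below shows the principle; the database entry is invented, and real engines match many signature types beyond whole-file hashes.

```python
import hashlib

# Hypothetical signature database: digest -> malware family name.
# Real databases hold millions of entries and are updated continuously.
KNOWN_BAD_SHA256 = {
    hashlib.sha256(b"malicious payload").hexdigest(): "Trojan.Example",
}

def scan_bytes(data: bytes):
    """Return the malware name if the data matches a known signature, else None."""
    digest = hashlib.sha256(data).hexdigest()
    return KNOWN_BAD_SHA256.get(digest)

verdict = scan_bytes(b"malicious payload")   # matches the known-bad entry
clean = scan_bytes(b"harmless document")     # no match
```

The weakness is visible in the code itself: change a single byte of the payload and the digest no longer matches, which is why the update treadmill described below matters so much.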

When new malware emerges and experts document it, antivirus vendors create and release a signature database update to detect and block the new threat. These updates increase the tool’s detection capabilities, and in some cases, vendors may release them multiple times per day.

With an average of 350,000 new malware instances registered daily, there are a lot of signature database updates to keep up with. While some antivirus vendors update their programs throughout the day, others release scheduled daily, weekly or monthly software updates to keep things simple for their users.

But convenience comes at the risk of real-time protection. When antivirus software is missing new malware signatures from its database, customers are unprotected against new or advanced threats.

Next-Generation Antivirus

While signature-based detection has been the default in traditional antivirus solutions for years, its drawbacks have prompted people to think about how to make antivirus more effective. Today’s next-generation anti-malware solutions use advanced technologies like behavior analysis, artificial intelligence (AI) and machine learning (ML) to detect threats based on the attacker’s intention rather than looking for a match to a known signature.

Behavior analysis in threat prevention builds on this idea, although it is admittedly more complex. Instead of only cross-checking files against a reference list of signatures, a next-generation antivirus platform can analyze malicious files’ actions (or intentions) and determine when something is suspicious. This approach is about 99% effective against new and advanced malware threats, compared to signature-based solutions’ average of 60% effectiveness.

Next-generation antivirus takes traditional antivirus software to a new level of endpoint security protection. It goes beyond known file-based malware signatures and heuristics because it’s a system-centric, cloud-based approach. It uses predictive analytics driven by ML and AI as well as threat intelligence to:

  • Detect and prevent malware and fileless attacks
  • Identify malicious behavior and tactics, techniques and procedures (TTPs) from unknown sources
  • Collect and analyze comprehensive endpoint data to determine root causes
  • Respond to new and emerging threats that previously went undetected.

Countering Modern Attacks

Today’s attackers know precisely where to find gaps and weaknesses in an organization’s network perimeter security, and they penetrate these in ways that bypass traditional antivirus software. These attackers use highly developed tools and techniques that leverage:

  • Memory-based attacks
  • PowerShell scripting language
  • Remote logins
  • Macro-based attacks.

To counter these attackers, next-generation antivirus focuses on events – files, processes, applications and network connections – to see how actions in each of these areas are related. Analysis of event streams can help identify malicious intent, behaviors and activities; once identified, the attacks can be blocked.

This approach is increasingly important today because enterprises are finding that attackers are targeting their specific networks. The attacks are multi-stage and personalized and pose a significantly higher risk; traditional antivirus solutions don’t have a chance of stopping them.


Endpoint Detection and Response

Endpoint detection and response (EDR) software flips that model, relying on behavioral analysis of what’s happening on the endpoint. For example, if a Word document spawns a PowerShell process and executes an unknown script, that’s concerning. The file will be flagged and quarantined until the validity of the process is confirmed. Not relying on signature-based detection enables the EDR platform to react better to new and advanced threats.
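The Word-spawns-PowerShell scenario above boils down to a parent-child process rule. The sketch below encodes a single such rule; the event shape and field names are invented for illustration, and real EDR telemetry and rule engines are far richer.

```python
# Illustrative behavioral rule: an Office application spawning a script
# interpreter is rarely legitimate and warrants quarantine pending review.
SUSPICIOUS_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}
SUSPICIOUS_CHILDREN = {"powershell.exe", "wscript.exe", "cmd.exe"}

def is_suspicious(event: dict) -> bool:
    """Flag process-creation events matching the Office -> interpreter pattern."""
    parent = event.get("parent", "").lower()
    child = event.get("child", "").lower()
    return parent in SUSPICIOUS_PARENTS and child in SUSPICIOUS_CHILDREN

# A hypothetical process-creation event as an EDR agent might record it.
event = {"parent": "WINWORD.EXE", "child": "powershell.exe",
         "cmdline": "-enc ..."}
flag = is_suspicious(event)
```

Notice that the rule needs no knowledge of the script's contents or hash, which is why behavioral detection holds up against threats no signature database has seen yet.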

Some of the ways EDR thwarts advanced threats include the following:

  • EDR provides real-time monitoring and detection of threats that may not be easily recognized by standard antivirus
  • EDR detects unknown threats based on a behavior that isn’t normal
  • Data collection and analysis determine threat patterns and alert organizations to threats
  • Forensic capabilities can determine what happened during a security event
  • EDR can isolate and quarantine suspicious or infected items. It often uses sandboxing to ensure a file’s safety without disrupting the user’s system.
  • EDR can include automated remediation and removal of specific threats.

EDR agent software is deployed to endpoints within an organization and begins recording activity on these endpoints. These agents are like security cameras focused on the processes and events running on the devices.

EDR platforms have several approaches to detecting threats. Some detect locally on the endpoint via ML, some forward all recorded data to an on-premises control server for analysis, some upload the recorded data to a cloud resource for detection and inspection and others use a hybrid approach.

Detections by EDR platforms are based on several tools, including AI, threat intelligence, behavioral analysis and indicators of compromise (IOCs). These tools also offer a range of responses, such as actions that trigger alerts, isolate the machine from the network, roll back to a known good state, delete or terminate threats and generate forensic evidence files.

Managed Detection and Response

Managed detection and response (MDR) is not a technology, but a form of managed service, sometimes delivered by a managed security service provider. MDR provides value to organizations with limited resources or the expertise to continuously monitor potential attack surfaces. Specific security goals and outcomes define these services. MDR providers offer various cybersecurity tools, such as endpoint detection, security information and event management (SIEM), network traffic analysis (NTA), user and entity behavior analytics (UEBA), asset discovery, vulnerability management, intrusion detection and cloud security.

Gartner estimates that by 2025, 50% of organizations will use MDR services. There are several reasons to support this prediction:

  • The widening talent shortage and skills gap: Many cybersecurity leaders confirm that they cannot use security technologies to their full advantage due to a global talent crunch.
  • Cybersecurity teams are understaffed and overworked: Budget cuts, layoffs and resource diversion have left IT departments with many challenges.
  • Widespread alert fatigue: Security analysts are becoming less productive due to “alert fatigue” from too many notifications and false positives from security applications. This results in distraction, ignored alerts, increased stress and fear of missing incidents. Many alerts are never addressed when, ideally, they should be studied and acted upon.

The technology behind an MDR service can include an array of options, which is important to understand when evaluating providers: the technology stack behind the service determines the scope of attacks the provider can detect.

Cybersecurity is about “defense-in-depth” — having multiple layers of protection to counter the numerous possible attack vectors. Various technologies provide complete visibility, detection and response capabilities. Some of the technologies offered by MDR services include:

  • SIEM
  • NTA
  • Endpoint protection platform
  • Intrusion detection system.

Extended Detection and Response

Extended detection and response (XDR) is the next phase in the evolution of EDR. XDR provides detection and protection across various environments, including networks and network components, cloud infrastructure and Software-as-a-Service (SaaS).

Features of XDR include:

  • Visibility into all network layers, including the entire application stack
  • Advanced detection, including automated correlation and ML processes capable of detecting events often missed by SIEM solutions
  • Intelligent alert suppression filters out the noise that typically reduces the productivity of cybersecurity staff.

Benefits of XDR include:

  • Improved analysis to help organizations collect the correct data and transform that data with contextual information
  • Identify hidden threats with the help of advanced behavior models powered by ML algorithms
  • Identify and correlate threats across various application stacks and network layers
  • Minimize fatigue by providing prioritized and precise alerts for investigation
  • Provide forensic capabilities needed to integrate multiple signals. This helps teams to construct the big picture of an attack and complete investigations promptly with high confidence in their findings.

XDR is gaining in popularity. XDR provides a single platform that can ingest endpoint agent data, network-level information and, in many cases, device logs. This data is correlated, and detections occur from one or many sources of telemetry.

XDR streamlines the functions of the analysts’ role by allowing them to view detections and respond from a single console. The single-pane-of-glass approach offers faster time to value, a shortened learning curve and quicker response times since the analysts no longer need to pivot between windows. Another advantage of XDR is its ability to piece multiple sources of telemetry together to achieve a big-picture view of detections. These tools are able to see what occurs not only on the endpoints but also between the endpoints.

The Future of Antivirus Software

Security is constantly evolving, and future threats may become much more dangerous than we are observing now. We cannot ignore these recent changes in the threat landscape. Rather, we need to understand them and stop these increasingly destructive attacks.

The post The Evolution of Antivirus Software to Face Modern Threats appeared first on Security Intelligence.

An IBM Hacker Breaks Down High-Profile Attacks

On September 19, 2022, an 18-year-old cyberattacker known as “teapotuberhacker” (aka TeaPot) allegedly breached the Slack messages of game developer Rockstar Games. Using this access, they pilfered over 90 videos of the upcoming Grand Theft Auto VI game. They then posted those videos on the fan website GTAForums.com. Gamers got an unsanctioned sneak peek of game footage, characters, plot points and other critical details. It was a game developer’s worst nightmare.

In addition, the malicious actor claimed responsibility for a similar security breach affecting ride-sharing company Uber just a week prior. According to reports, they infiltrated the company’s Slack by tricking an employee into granting them access. Then, they spammed the employees with multi-factor authentication (MFA) push notifications until they gained access to internal systems, where they could browse the source code.

Incidents like the Rockstar and Uber hacks should serve as a warning to all CISOs. Proper security must consider the role info-hungry actors and audiences can play when dealing with sensitive information and intellectual property.

Stephanie Carruthers, Chief People Hacker for the X-Force Red team at IBM Security, broke down how the incident at Uber happened and what helps prevent these types of attacks.

“But We Have MFA”

First, Carruthers believes one potential and even likely scenario is that the person targeted at Uber may have been a contractor. The hacker likely purchased stolen credentials belonging to this contractor on the dark web as an initial step in their social engineering campaign. The attacker likely then used those credentials to log into one of Uber’s systems. However, Uber had multi-factor authentication (MFA) in place, and the attacker was asked to validate their identity multiple times.

According to reports, “TeaPot” contacted the target victim directly with a phone call, pretended to be IT, and asked them to approve the MFA requests. Once they did, the attacker logged in and could access different systems, including Slack and other sensitive areas.

“The key lesson here is that just because you have measures like MFA in place, it doesn’t mean you’re secure or that attacks can’t happen to you,” Carruthers said. “For a very long time, a lot of organizations were saying, ‘Oh, we have MFA, so we’re not worried.’ That’s not a good mindset, as demonstrated in this specific case.”

As part of her role with X-Force, Carruthers conducts social engineering assessments for organizations. She has been demonstrating MFA bypass techniques for clients for several years. “That mindset of having a false sense of security is one of the things I think organizations still aren’t grasping because they think they have the tools in place so that it can’t happen to them.”

Social Engineering Tests Can Help Prevent These Types of Attacks

According to Carruthers, social engineering tests fall into two buckets: remote and onsite. She and her team look at phishing, voice phishing and smishing for remote tests. The onsite piece involves the X-Force team showing up in person and essentially breaking and entering a client’s network. During the testing, the X-Force teams attempt to coerce employees into giving them information that would allow them to breach systems — and take note of those who try to stop them and those who do not.

The team’s remote test focuses on an increasingly popular method: layering the methods together almost like an attack chain. Instead of only conducting a phishing campaign, this adds another step to the mix.

“What we’ll do, just like you saw in this Uber attack, is follow up on the phish with phone calls,” Carruthers said. “Targets will tell us the phish sounded suspicious but then thank us for calling because we have a friendly voice. And they’ll actually comply with what that phishing email requested. But it’s interesting to see attackers starting to layer on social engineering approaches rather than just hoping one of their phishing emails works.”

She explained that the team’s odds of success go up threefold when following up with a phone call. According to IBM’s 2022 X-Force Threat Intelligence Index, the click rate for the average targeted phishing campaign was 17.8%. Targeted phishing campaigns that added phone calls (vishing, or voice phishing) were three times more effective, netting a click from 53.2% of victims.

What Is OSINT — and How It Helps Attackers Succeed

For bad actors, the more intelligence they have on their target, the better. Attackers typically gather intelligence by scraping data readily available from public sources, called open source intelligence (OSINT). Thanks to social media and publicly documented online activities, attackers can easily profile an organization or employee.

Carruthers says she’s spending more time today doing OSINT than ever before. “Actively getting info on a company is so important because that gives us all of the bits and pieces to build that campaign that’s going to be realistic to our targets,” she said. “We often look for people who have access to more sensitive information, and I wouldn’t be surprised if that person (in the Uber hack) was picked because of the access they had.”

For Carruthers, it’s critical to understand what information is out there about employees and organizations. “That digital footprint could be leveraged against them,” she said. “I can’t tell you how many times clients come back to us saying they couldn’t believe we found all these things. A little piece of information that seems harmless could be the cherry on top of our campaign that makes it look much more realistic.”

Tangible Hack Prevention Strategies

While multi-factor authentication can be bypassed, it is still a critical security tool. However, Carruthers suggests that organizations consider deploying a physical device like a FIDO2 token. This option shouldn’t be too difficult to manage for small to medium-sized businesses.

“Next, I recommend using password managers with long, complex master passwords so they can’t be guessed or cracked or anything like that,” she said. “Those are some of the best practices for applications like Slack.”

Of course, no hacking prevention strategies that address social engineering would be complete without security awareness. Carruthers advises organizations to be aware of attacks out in the wild and be ready to address them. “Companies need to actually go through and review what’s included in their current training, and whether it’s addressing the realistic attacks happening today against their organization,” she said.

For example, the training may teach employees not to give their passwords to anyone over the phone. But when an attacker calls, they may not ask for your password. Instead, they may ask you to log in to a website that they control. Organizations will want to ensure their training is always fresh and interactive and that employees stay engaged.

The final piece of advice from Carruthers is for companies to refrain from relying too heavily on security tools. “It’s so easy to say that you can purchase a certain security tool and that you’ll never have to worry about being phished again,” she said.

The key takeaways here are:

  • Incorporate physical devices into MFA. This builds a significant roadblock for attackers.
  • Try to minimize your digital footprint. Avoid oversharing in public forums like social media.
  • Use password managers. This way, employees only need to remember one password.
  • Bolster security awareness programs with particular focus on social engineering threats. Far too often, security awareness misses this key element.
  • Don’t rely too heavily on security tools. They can only take your security posture so far.

Finally, it’s important to reiterate what Carruthers and the X-Force team continue to prove with their social engineering tests: a false sense of security is counterproductive to preventing attacks. A more effective strategy combines quality security practices with awareness, adaptability and vigilance.

Learn more about X-Force Red penetration testing services here. To schedule a no-cost consult with X-Force, click here.

The post An IBM Hacker Breaks Down High-Profile Attacks appeared first on Security Intelligence.

How Much is the U.S. Investing in Cyber (And is it Enough)?

It’s no secret that cyberattacks in the U.S. are increasing in frequency and sophistication. Since cyber crime impacts millions of businesses and individuals, many look to the government to see what it’s doing to anticipate, prevent and deal with these crimes.

To gain perspective on what’s happening in this area, the U.S. government’s budget and spending plans for cyber are a great place to start. This article will explore how much the government is spending, where that money is going and how its budget compares to previous years.

How Much is the U.S. Spending on Cybersecurity, and Where is the Money Going?

In June 2022, the U.S. announced new spending bills for the fiscal year 2023, including an allocation of $15.6 billion for cybersecurity. The majority of the money — $11.2 billion — will be appropriated for the Department of Defense (DoD), and $2.9 billion will go to the Cybersecurity and Infrastructure Security Agency (CISA).

The money going to the DoD will be used in a variety of ways. For example, Paul Nakasone, commander of the U.S. Cyber Command, has discussed plans to stand up five additional Cyber Mission Force teams; roughly 133 such teams already exist and carry out defensive cyber operations.

How Involved is the Private Sector in the Allocation of Funds?

Clearly, the majority of funds in the new budget will go to government agencies. However, the government also plans to invest in the private sector and has discussed the importance of strengthening relationships with companies and private organizations.

One key area here is information sharing; after all, cybersecurity is a team sport. However, the government has faced criticism in the past for expecting detailed data from companies while failing to provide adequate information on its end. Recently, government agencies have spoken more about working towards more open, two-way information sharing, but only time will tell how successful that strategy will be.

U.S. lawmakers have asked the defense secretary to work more closely with CISA and its private-sector partners, especially in areas related to Russian and Chinese activity. CISA has also received $417 million more in funding than the White House initially requested.

How do Current Federal Investments in Cyber Compare to Previous Years?

Compared to the previous few years, investment in cybersecurity is gradually increasing. 2021 saw $8.64 billion in spending, followed by a slight increase in 2022.

It’s a positive trend that signals the government is taking the issue seriously. But are state and local governments keeping up?

How is Cyber Investment Changing at the Local and State Levels?

The data shows that governments are also investing in cybersecurity in non-financial ways at the local and state levels. In 2021, for instance, state legislative sessions saw more than 285 pieces of cybersecurity-related legislation introduced, and in 2022 that number increased to 300.

In addition, President Biden signed the Infrastructure Investment and Jobs Act in 2021, which allocated $1 billion in grants to bolster cybersecurity at the local, state, tribal and territorial levels. The government will distribute this amount over four years, through 2025.

It adds up to a promising development for local and state governments, which are finally gaining the resources to protect their communities more effectively. Plus, it demonstrates a growing understanding of the importance of cybersecurity at the federal level and, hopefully, signals a more informed approach in the future.

Promising Signs for the Future

While cybersecurity funding is one truly positive sign, there are more reasons to be hopeful — such as the appointment of the USA’s first-ever National Cyber Director, Chris Inglis.

Looking to the future, the U.S. will need to constantly readjust its cyber defense posture and adapt to this ever-changing landscape, especially as cyber crime becomes not only more common but also more challenging and complex. It costs money to do that effectively, so the government must prioritize cyber funding for the foreseeable future.

Of course, individual organizations will need to take responsibility for their own security, too.

IBM can help — with solutions like IBM Security QRadar XDR, you get a suite of tools and powerful features to help you defend your organization against attacks and keep your teams focused on what’s important. Find out more here.

The post How Much is the U.S. Investing in Cyber (And is it Enough)? appeared first on Security Intelligence.

This crowdsourced payments tracker wants to solve the ransomware visibility problem

Ransomware attacks, fueled by COVID-19 pandemic turbulence, have become a major money earner for cybercriminals, with the number of attacks rising in 2020. These file-encrypting attacks have continued largely unabated this year, too. In the last few months alone we’ve witnessed the attack on Colonial Pipeline that forced the company to shut down its systems […]

Fujifilm becomes the latest victim of a network-crippling ransomware attack

Japanese multinational conglomerate Fujifilm has been forced to shut down parts of its global network after falling victim to a suspected ransomware attack. The company, which is best known for its digital imaging products but also produces high-tech medical kit, including devices for rapid processing of COVID-19 tests, confirmed that its Tokyo headquarters was hit […]

Cyber threat startup Cygilant hit by ransomware

Cygilant, a threat detection cybersecurity company, has confirmed a ransomware attack.

Christina Lattuca, Cygilant’s chief financial officer, said in a statement that the company was “aware of a ransomware attack impacting a portion of Cygilant’s technology environment.”

“Our Cyber Defense and Response Center team took immediate and decisive action to stop the progression of the attack. We are working closely with third-party forensic investigators and law enforcement to understand the full nature and impact of the attack. Cygilant is committed to the ongoing security of our network and to continuously strengthening all aspects of our security program,” the statement said.

Cygilant is believed to be the latest victim of NetWalker, a ransomware-as-a-service group, which lets threat groups rent access to its infrastructure to launch their own attacks, according to Brett Callow, a ransomware expert and threat analyst at security firm Emsisoft.

The file-encrypting malware itself not only scrambles a victim’s files but also exfiltrates the data to the hacker’s servers. The hackers typically threaten to publish the victim’s files if the ransom isn’t paid.

A site on the dark web associated with the NetWalker ransomware group posted screenshots of internal network files and directories believed to be associated with Cygilant.

Cygilant did not say if it paid the ransom. But at the time of writing, the dark web listing with Cygilant’s data had disappeared.

“Groups permanently delist companies when they’ve paid or, in some cases, temporarily delist them once they’ve agreed to come to the negotiating table,” said Callow. “NetWalker has temporarily delisted pending negotiations in at least one other case.”

Cyber threat startup Cygilant hit by ransomware by Zack Whittaker originally published on TechCrunch

Cybersecurity in 2023

By: seo_spec

Companies are beginning to realize that where their employees work, and which devices they use, matters less than it once did. Work culture is increasingly about what you do, not where you do it. In 2023, a hybrid world is taking shape in which the barriers between the digital and physical worlds blur. With flexibility a priority for businesses and ordinary users alike, securing users and their privacy becomes a challenge as people move easily between locations, devices, networks and communication platforms. Another trend expected to dominate 2023 is politically and socially motivated attacks, including state-sponsored cyberattacks. Political attacks can quickly damage businesses, industries and economies, and even cause unrest in a region.

Cloud platforms combined with the Zero Trust model are a strong pairing for securing access for every user and device, regardless of location. Expect this approach to be actively implemented in 2023.

Clive Humby observed in 2006 that data is the new oil, yet it is now routinely shared among vendors, customers and businesses. Zero Trust’s main goal is to keep this strategic asset from falling into the wrong hands.

As artificial intelligence and machine learning mature, new cybersecurity solutions are emerging that identify and respond to threats in real time. These technologies will help organizations detect and avert attacks, and many expect them to enable faster, more accurate responses as the threat landscape evolves.

The emergence of quantum computers also means that traditional forms of encryption are becoming more vulnerable. In response, researchers are developing quantum-resistant forms of security to protect advanced computing systems. These technologies will be crucial to securing sensitive information in the coming years.

Companies, too, are beginning to view blockchain as more than the foundation of cryptocurrency. Blockchain is expected to underpin new, innovative security solutions. For example, blockchain systems can provide more secure verification of a user’s identity, and blockchain-based data stores can help protect against data leaks.

As the threat continues to grow, governments and regulators are setting requirements for organizations to build appropriate cyber defenses. In 2023, expect an increased focus on cybersecurity regulation and compliance, with new requirements and guidelines to help organizations protect their systems and data.

As more and more devices connect to the Internet, IoT security is becoming a serious concern. In response, new technology is being developed to protect IoT devices from tampering. This will be especially important in healthcare, where the security of medical devices is critical.

In 2023, companies will face these cyber threats in a changed world, and many will actively adopt the latest technology for cyber protection. It is therefore safe to assume that these trends and developments will play an important role in strengthening information security overall.
