
Four things AWS needs to fix at re:Invent this week

1 December 2025 at 13:46

The mood among Amazon Web Services customers is shifting from curiosity to urgency as the company prepares once again to “re:Invent” itself at its annual customer conference this week.

After a year in which Microsoft and Google tightened their narratives around unified data, AI platforms and workflow-ready agents, AWS can no longer rely on its scale, breadth, or incremental roadmap to maintain the confidence of CIOs.

Instead, say analysts, the hyperscaler must address four key concerns at re:Invent in Las Vegas this week if it wants to retain its position as the default enterprise cloud.

Closing the integration gaps between analytics, data, and AI

Although AWS is ahead in raw capability and breadth of services, say analysts, it is falling behind in its integration and unification of data, analytics, machine learning, and AI.

“It lags behind rivals on simplicity and integration,” said Phil Fersht, CEO of HFS Research. “Customers want fewer hops between analytics, machine learning, and generative AI. They want unified governance and a consistent metadata layer so agents can reason across systems,” he said.

Microsoft, at its Ignite customer event last month, beefed up its unified data and analytics platform, Fabric IQ, with new semantic intelligence capabilities. AWS, too, has been trying its hand at unifying its AI and analytics services with the launch of SageMaker Unified Studio last year but has yet to reach the level of simplicity that Microsoft’s IQ offerings promise.

When it comes to new AI analytics services from AWS, CIOs can expect more of the same, said David Linthicum, independent consultant and retired chief cloud strategy officer at Deloitte Consulting. “Realistically, they can expect AWS to keep integrating its existing services; the key test will be whether this shows up as less complexity and faster time-to-insight, not just new service names.”

Lack of cohesion in AI platform strategy

That complexity isn’t confined to analytics alone. The same lack of cohesion is now spilling over into AWS’s AI platform strategy, where the cloud giant risks ceding mindshare despite its compute advantage.

“SageMaker is still respected, but it no longer dominates the AI platform conversation. Open source frameworks like Ray, MLflow, and KubeRay are rapidly capturing developer mindshare because they offer flexibility and avoid lock-in,” Fersht said.

This fragmentation is exactly what partners want AWS to fix by offering clearer, more opinionated MLOps paths, deeper integration between Bedrock and SageMaker, and ready-to-use patterns that help enterprises progress from building models to deploying real agents at scale.

More plug-and-play, less build-it-yourself

AWS’s tooling shortcomings don’t end there, said Fersht. The hyperscaler’s focus on providing the parts for agentic AI and leaving others to build with them makes it harder for business users to consume its services.

“AWS is giving strong primitives, but competitors are shipping business-ready agents that sit closer to workflows and outcomes. Enterprises want both power and simplicity,” Fersht said.

Although there’s an assumption that enterprises are big enough to build things themselves, they want more plug-and-play than AWS imagines, Fersht said: “They do not want to engineer everything from scratch. They want reusable agent blueprints that map to sales, service, IT operations, and supply chain tasks.”

In fact, if AWS wants to compete with rivals to become the default agent platform for enterprises, it must hide complexity behind higher-level abstractions and simplify its agent stack, double down on workflow level agents, and give customers clear guidance on safe deployment, accountability, and ROI, he said.

Vibe coding disarray

Like other hyperscalers, AWS is aggressively experimenting in the vibe coding and agentic IDE space, where there’s no clear consensus on what developers actually want, according to Fersht.

“Everyone is experimenting because no one has cracked the next generation developer workflow. AWS is no different,” he said, adding that in some respects AWS has been more conservative than its rivals.

AWS is sure to deal out some new innovations at AWS re:Invent in Las Vegas this week, but despite having defined the cloud computing industry in 2006, it now finds itself, in many respects, playing catch-up.

Snowflake to acquire Select Star, deepening AI’s understanding of data

26 November 2025 at 01:57

Snowflake announced that it has signed a definitive agreement to acquire Select Star, a San Francisco-based startup with a context metadata platform. The acquisition is intended to strengthen Horizon Catalog, Snowflake’s product suite for unified data discovery, management, and governance within its data cloud.

Data and governance catalogs such as Snowflake Horizon and Databricks Unity are quickly gaining attention among enterprises because they allow data scattered across multiple clouds and applications to be managed from a single point of control.

Catalogs also provide a unified, contextual view of an enterprise’s entire data estate. That is increasingly a prerequisite for companies building AI-powered applications and agents, which need clean, well-documented, and traceable inputs to work reliably.

Snowflake plans to use Select Star’s context metadata platform to expand Horizon’s data access capabilities and give users more options for contextualizing data when building AI-based applications and agents.

Select Star already integrates with databases such as PostgreSQL and MySQL, BI tools such as Tableau and Power BI, and data pipeline and orchestration tools such as dbt and Airflow.

The race to own the enterprise “AI-native” foundation

With demand for AI-based applications and agents surging, the expansion of Horizon fits Snowflake’s strategy of staking out a leading position in data and analytics workloads.

“The battle for AI workloads is won on metadata, lineage, and trust, not storage,” said Phil Fersht, CEO of HFS Research. “Horizon Catalog has solid foundations, but Select Star brings the automated discovery, column-level lineage, usage intelligence, and the user experience that cuts repetitive work for data analysts, which Snowflake does not yet have.”

“There is a clear gap in the market for full-stack metadata intelligence that is deeply integrated into the platform rather than simply bolted on. Databricks keeps widening its lead on governance and lineage with Unity Catalog, and Snowflake knows it has to catch up quickly,” Fersht said, adding that an inorganic approach to acquiring these capabilities could prove more effective for Snowflake.

David Menninger, executive director at ISG Software Research, noted that Snowflake is competing with the major cloud providers, including Google, AWS, and Microsoft, for leadership in data analytics workloads for AI applications.

“AI is the biggest topic in the market right now, and it depends entirely on data. Our research also shows that preparing data in a form that can be used for AI is what enterprises struggle with most,” Menninger said.

The intensity of that competition is also visible in Snowflake’s acquisitions this year. In June, Snowflake announced its intent to acquire Crunchy Data, a US provider of cloud-based PostgreSQL databases. Snowflake said the goal is to offer a PostgreSQL database, to be called Snowflake Postgres, in its AI Data Cloud and to make it easier for developers to build AI-based applications.

Judging by the timing alone, the announcement could look like a response to Databricks’ acquisition of Neon, an open-source serverless Postgres company. Analysts said the two vendors are competing for the lead in the “AI-native data foundation” spanning analytics, operational storage, and machine learning.

Earlier this month, Snowflake also acquired Datometry. Snowflake said the deal will strengthen SnowConvert AI, one of the migration tools it has offered for free, and minimize the burden, cost, and uncertainty of large-scale code rewrites for enterprises moving existing database workloads to the cloud.
dl-ciokorea@foundryco.com

Don’t Use a Ruler to Measure Wind Speed: Establishing a Standard for Competitive Solutions Testing

25 November 2025 at 10:35

Competitive testing is a business-critical function for financial institutions seeking the ideal solutions provider to help optimize their risk management strategies. Don’t get seduced by inflated test results or flowery marketing claims, however. Selecting the right risk solutions could be one of the most important tasks your business ever undertakes – and one of the..

The post Don’t Use a Ruler to Measure Wind Speed: Establishing a Standard for Competitive Solutions Testing appeared first on Security Boulevard.

Breaches repeat when the root cause stays unknown: the missing analysis that undermines organizational resilience

23 November 2025 at 22:31

Post-incident analysis remains a major challenge for most security organizations. According to Foundry’s Security Priorities study, 57% of security leaders said they struggled to identify the root cause of security incidents over the past year, a factor that further raises the risk of repeat breaches.

Security practitioners say the core problem is that the pressure to contain and recover immediately after an incident leaves too few resources for learning and analysis. To lower the odds of repeat compromise, incident response needs to run as a continuous learning cycle rather than a one-off cleanup.

“Many organizations focus only on immediately shutting down the breach. That pushes the critical forensic investigation to the back burner and ends up leaving the door open for the next attacker to walk right back in,” said Dray Agha, head of security operations at managed security response firm Huntress.

“Without a thorough post-incident review that pins down the root cause, an organization is essentially defending blindfolded and will keep repeating the same mistakes,” Agha said.

Building resilience through root cause analysis

Experts point out that many companies treat incident response as an operational procedure rather than an analytical one. Containment and recovery steps are well rehearsed, but deep forensic investigation and post-incident learning lag behind.

“When evidence preservation and root cause analysis are not done systematically, critical insight is lost. Robust incident response is not just about getting systems running again; it has to include feeding the lessons from the incident into detection, prevention, and risk-reduction strategies,” said Tom Moore, director of digital forensics and incident response at managed security services firm BlueVoyant.

“That continuous loop of learning and improvement is what builds long-term resilience, and its value only grows in a cyber threat landscape that changes and adapts quickly,” Moore added.

Marie Hargreaves, principal crisis management consultant at cloud security firm Semperis, agreed: “Most organizations are more focused on putting out the fire in front of them than on what they can learn from the flames.”

Every crisis, she noted, has three phases: detection, response, and review. “Resilience is built in the third phase, the post-incident review. Organizations that collect real-time data, analyze it closely, and turn the lessons into concrete action recover faster and come out stronger. Incident response is not just about surviving; it is about adapting to change and building resilience,” she advised.

Tracing the attack path

Because thorough preparation is essential, enterprises need dedicated tools and capabilities for digital forensics through technologies such as SIEM (security information and event management).

SIEM matters because many gateway and VPN appliances are designed to overwrite their local storage within hours.

“If an attacker comes in through the VPN, dwells inside for about a day, and then moves to a core server, the VPN telemetry has most likely already been lost by then. Having something like a SIEM that centrally collects and retains VPN logs gives you the key data you need not only for post-incident detection but for root cause analysis of how the initial compromise happened,” said Huntress’s Agha.

Huntress’s own statistics show that roughly 70% of sophisticated cybercriminals break in through a VPN. “In environments with a SIEM, you can catch the threat early in the attack path, and post-incident analysis can also establish the exact root cause that led to the breach,” Agha said.

Forensic capture software can also be bundled into services such as MDR (managed detection and response) and XDR (extended detection and response). These technologies allow the vendor and forensic investigators to work together on the analysis and remediation needed to identify where the breach started and resolve it.

“Without those tools in place, it is much harder to work out after the fact how a breach occurred. Some firms offer incident response services when a breach happens, but the key to cleaning up a breach quickly and preventing a recurrence is having the tools and procedures in place beforehand that make the response far more efficient,” said Rob Derbyshire, CTO at cybersecurity firm Securus Communications.

“If root cause analysis is insufficient, the real cause of the attack may remain unidentified and may even still be active,” said Arda Büyükkaya, senior threat intelligence analyst at EclecticIQ.

“You need digital forensics expertise, a root cause analysis process, and threat intelligence integration that ties individual incidents to attacker tactics and campaigns. That approach is what lets an organization turn every incident it experiences into an opportunity to strengthen resilience,” Büyükkaya advised.

Putting a structured plan in place

When an incident occurs, the response team that runs the situation should generally be led by the CISO, and the plan should clearly define the roles and responsibilities of every stakeholder, from IT staff to legal counsel.

Experts say an incident response playbook typically consists of the following key stages:

  • Preparation: Maintain a tested incident response plan and clarify roles and reporting lines.
  • Detection and analysis: Centralize monitoring, use threat intelligence, and secure forensic capabilities.
  • Containment and recovery: Respond quickly while preserving evidence, and validate systems before restoring them.
  • Post-incident analysis: Conduct a structured review, document the findings, and feed them into security architecture and training.
  • Continuous improvement: Integrate threat modeling, expand response automation, and invest in skills development.

Many organizations use proven frameworks such as ISO as templates for their incident response programs. “These frameworks are organized into sections that cover every key element, from governance to technical response. Using a well-known framework not only improves completeness but also makes communication with external stakeholders who know the standard much easier,” said Richard Ford, CTO at Integrity360.

Building organizational resilience

Effective incident response should focus on building a process that is structured, repeatable, and intelligence-driven, so that it strengthens the organization’s resilience over time.

Incident response plans should be regularly tested, refined, and updated through simulations and tabletop exercises, as part of the broader business continuity and organizational resilience strategy.

Bharat Mistry, field CTO at cybersecurity firm Trend Micro, says many organizations still have not reached a mature level of incident response. Incident response, he stressed, must extend beyond containment and recovery to forensic analysis and post-incident review.

“If you skip root cause analysis, you end up treating only the visible symptoms. Several factors combine to cause this: a lack of visibility from disconnected tools that makes it hard to reconstruct the attack accurately, a talent gap in forensics and threat hunting, and process weaknesses where the post-incident review is done perfunctorily or skipped altogether,” Mistry said.

Breaking the breach-recover-breach cycle

In many cases, the rush to restore operations means key evidence is lost unintentionally, through reimaged servers, lost logs, or destroyed forensic artifacts.

“Add workload pressure, time constraints, and limited resources, and teams become more absorbed in handling the next urgent task than in what can be learned from the incident. As a result, essential steps such as post-incident scans, root cause analysis, and procedure updates are frequently skipped,” Mistry said.

That leaves the initial attack vector and the method of lateral movement unexplained, the underlying vulnerability unresolved, and creates a vicious cycle of breach, recovery, and re-breach.

“To break that cycle, organizations must build forensic readiness into their incident response strategy. Preserving evidence, performing structured post-incident analysis, and feeding the lessons into security architecture and training are essential,” Mistry advised.
dl-ciokorea@foundryco.com

Why Network Monitoring Matters: How Seceon Enables Proactive, Intelligent Cyber Defence

21 November 2025 at 07:52

In today’s fast-evolving digital world, organizations increasingly rely on hybrid workforces, cloud-first strategies, and distributed infrastructures to gain agility and scalability. This transformation has expanded the network into a complex ecosystem spanning on-premises, cloud, and remote endpoints, vastly increasing the attack surface. Cyber adversaries exploit this complexity using stealth techniques like encrypted tunnels, credential misuse,

The post Why Network Monitoring Matters: How Seceon Enables Proactive, Intelligent Cyber Defence appeared first on Seceon Inc.

The post Why Network Monitoring Matters: How Seceon Enables Proactive, Intelligent Cyber Defence appeared first on Security Boulevard.

What is NVIDIA’s CUDA and How is it Used in Cybersecurity?

By: OTW
17 November 2025 at 17:09

Welcome back my aspiring cyberwarriors!

You have likely heard of NVIDIA. It is not only the dominant company in computer graphics adapters (if you are a gamer, you likely have one) but now also in artificial intelligence. In recent weeks, it has become the most valuable company in the world, at roughly $5 trillion.

The two primary reasons that Nvidia has become so important to artificial intelligence are:

  1. Nvidia chips can process data in multiple threads, in some cases thousands of threads. This makes it possible to run complex calculations in parallel, which makes them much faster.
  2. Nvidia created a development environment named CUDA for harnessing the power of these powerful GPUs. This development environment is a favorite among artificial intelligence, data analytics, and cybersecurity professionals.

Let’s take a brief moment to examine this powerful environment.

What is CUDA?

Most computers have two main processors:

CPU (Central Processing Unit): General-purpose; it executes instructions sequentially or on a small number of cores. CPUs such as those from Intel and AMD provide the flexibility to run many different applications on your computer.

GPU (Graphics Processing Unit): GPUs were originally designed to draw graphics for applications such as games and VR environments. They contain hundreds or thousands of small cores that excel at doing the same thing many times in parallel.

CUDA (Compute Unified Device Architecture) is NVIDIA’s framework that lets you take control of the GPU for general computing tasks. In other words, CUDA lets you write code that doesn’t just render graphics—it crunches numbers at massive scale. That’s why it’s a favorite for machine learning, password cracking, and scientific computing.

Why Should Hackers & Developers Care?

CUDA matters as an important tool in your cybersecurity toolkit because:

Speed: A GPU can run password hashes or machine learning models orders of magnitude faster than a CPU.

Parallelism: If you need to test millions of combinations, analyze huge datasets, or simulate workloads, CUDA gives you raw power.

Applications in Hacking: Tools like Hashcat and Pyrit use CUDA to massively accelerate brute-force and dictionary attacks. Security researchers who understand CUDA can customize or write their own GPU-accelerated tools.

The CUDA environment sees the GPU as a device with:

Threads: The smallest execution unit (like a tiny worker).

Blocks: Groups of threads.

Grids: Groups of blocks.

Think of it like this:

  • A CPU worker can cook one meal at a time.
  • A GPU is like a kitchen with thousands of cooks—we split the work (threads), organize them into brigades (blocks), and assign the whole team to the job (grid).
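
In code, those three levels show up as built-in index variables. A common idiom, shown here purely as an illustration rather than code from this article, is to combine them into a single global index so that each thread knows which piece of the data it owns:

// Each thread works out its own global position across all blocks
int idx = blockIdx.x * blockDim.x + threadIdx.x;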

Coding With CUDA

CUDA extends C/C++ with some keywords.
Here’s the simple workflow:

  1. You write a kernel function (runs on the GPU).
  2. You call it from the host code (the CPU side).
  3. Launch thousands of threads in parallel → GPU executes them fast.

Example skeleton code:

__global__ void add(int *a, int *b, int *c) {
    int idx = threadIdx.x;
    c[idx] = a[idx] + b[idx];
}

int main() {
    // Allocate memory on host and device
    // Copy data to GPU
    // Run kernel with N threads
    add<<<1, N>>>(dev_a, dev_b, dev_c);
    // Copy results back to host
}

The keywords:

  • __global__ → A function (kernel) run on the GPU.
  • threadIdx → Built-in variable identifying which thread you are.
  • <<<1, N>>> → Tells CUDA to launch 1 block of N threads.

This simple example adds two arrays in parallel. Imagine scaling this to millions of operations at once!
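
For readers who want something they can actually compile and run, here is a slightly fuller sketch of the same vector addition with the host-side memory management filled in. This is a minimal illustration written for this article, not production code:

#include <stdio.h>

#define N 256

// Kernel: each thread adds one pair of elements
__global__ void add(int *a, int *b, int *c) {
    int idx = threadIdx.x;
    c[idx] = a[idx] + b[idx];
}

int main() {
    int a[N], b[N], c[N];
    int *dev_a, *dev_b, *dev_c;

    // Fill the host arrays with some test data
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2 * i; }

    // Allocate memory on the GPU
    cudaMalloc((void **)&dev_a, N * sizeof(int));
    cudaMalloc((void **)&dev_b, N * sizeof(int));
    cudaMalloc((void **)&dev_c, N * sizeof(int));

    // Copy the inputs from host to device
    cudaMemcpy(dev_a, a, N * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(dev_b, b, N * sizeof(int), cudaMemcpyHostToDevice);

    // Launch 1 block of N threads
    add<<<1, N>>>(dev_a, dev_b, dev_c);

    // Copy the result back to the host
    cudaMemcpy(c, dev_c, N * sizeof(int), cudaMemcpyDeviceToHost);

    printf("c[10] = %d\n", c[10]);   // prints 30 (10 + 20)

    // Free the device memory
    cudaFree(dev_a); cudaFree(dev_b); cudaFree(dev_c);
    return 0;
}

Compile it with nvcc as described in the next section and it should run on any CUDA-capable GPU.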

The CUDA Toolchain Setup

If you want to try CUDA, make certain you have the following items:

1. An NVIDIA GPU.

2. The CUDA Toolkit (which contains the nvcc compiler).

Then write your CUDA programs in C/C++, compile them with nvcc, run them, and watch your GPU chew through problems.

To install the CUDA toolkit in Kali Linux, simply enter:

kali > sudo apt install nvidia-cuda-toolkit

Next, write your code and compile it with nvcc, such as:

kali > nvcc hackersarise.cu -o hackersarise

Practical Applications of CUDA

CUDA is already excelling at hacking and computing applications such as:

  1. Password cracking (Hashcat, John the Ripper with GPU support); see the example just after this list.
  2. AI & ML (TensorFlow/PyTorch use CUDA under the hood). Our application of using Wi-Fi to see through walls uses CUDA.
  3. Cryptanalysis (breaking encryption) & simulation tasks.
  4. Network packet analysis at high scale.

As a beginner, start with small projects—then explore how to take compute-heavy tasks and offload them to the GPU.

Summary

CUDA is NVIDIA’s way of letting you program GPUs for general-purpose computing. To the hacker or cybersecurity pro, it’s a way to supercharge computation-heavy tasks.

Learn the thread-block-grid model, write simple kernels, and then think: what problems can I solve dramatically faster if run in parallel?


Data Dilemmas: Balancing Privacy Rights in the Age of Big Tech

By: galidon
22 July 2025 at 02:11

The world is becoming increasingly digital and, whilst this is a good thing for a number of reasons, this huge shift brings with it questions and scrutiny as to what exactly these huge tech companies are doing with such vast amounts of data.

The leading tech companies, including Google, Apple, Meta, Amazon and Microsoft – giants within the tech world – have all recently been accused of following unethical practices.

From Meta being questioned in courts over its advertising regime, to Amazon facing concerns over the fact that their Echo devices are potentially recording private conversations within the home, it’s not surprising that users are looking for more information as to how their data is being used.

With this comes the counterargument that big tech companies are doing what they can to strike a balance between privacy rights and ensuring that their products, and the experience users get from them, don’t change. But how exactly are the big tech companies using sensitive and personal data while ensuring they still meet and adhere to the ever-expanding list of privacy rights? Let’s take a look.

Is Our Data The Price We Pay For Free?

In marketplaces and stores, we exchange legitimate currency for goods and services. But, with social media and other online platforms, we’re instead paying with our attention. A lot of online users are unaware of the expansive trail of browsing and search history that they leave behind.

Almost everything is logged and monitored online, right from the very first interaction and, depending on the web browser you use, some will collect more information than others. There are costs involved in almost every digital and online service we use and it costs money to host servers and sites – so why do we get to browse for free?

Simply because the cost is being underwritten in other ways. The most common form is advertising, but the one that few people think about, or want to think about, is the harvesting and use of our data. Every single website is tracked or recorded in different ways and by different parties, from marketing agencies who analyze the performance of a website to broadband providers who check connections.

Users may struggle to understand why companies want their data, but that’s simply because they don’t quite understand the value behind it. Data is currently considered one of the most valuable assets, mainly because it is a non-rival good: it can be replicated for free with little to no impact on its quality. The nature of data means it can be used for product research, market analysis or to train and better inform AI systems. Companies want as much data as they can get in order to secure as many of the financial and legal advantages that come with it as possible.

What Are Cookies?

Data tracking is done through cookies, which are small files of letters and numbers which are downloaded onto your computer when you visit a website. They are used by almost all websites for a number of reasons, such as remembering your browsing preferences, keeping a record of what you’ve added to your shopping basket or counting how many people visit the site. Cookies are why you might see ads online months after visiting a website or get emails when you’ve left something in a shopping basket online.

Why Do Big Tech Companies Want User Data?

How Laws Have Changed How Companies Use Your Data

In the EU, data is more heavily protected than it is in the US, for example. EU laws have taken a more hardline stance against the big tech companies when it comes to protecting users, with the General Data Protection Regulation, or GDPR, in place to offer the “toughest privacy and security law in the world”.

This law makes it compulsory for companies, particularly big tech companies, to outline specifically what they are using data for. The law was passed in 2016, and any company that violates it is subject to fines of either 4% of the company’s overall annual revenue or €20 million, whichever is greater. In 2019, France’s data protection regulator fined Google €57 million for violating GDPR, citing a lack of transparency and valid consent around how user data was used to personalise ads.

Unlike the EU, the US does not have comprehensive laws to protect online users, which is what allows these companies to access and take advantage of user data. Following the EU’s introduction of GDPR, both Facebook and Google had to change and update their privacy policies and practices, but in the US, there is still some way to go.

This is because Google makes a lot of money from user data. Over 80% of Google’s revenue comes from the advertising side of its business, which allows advertisers to target ads for services and products based on what users are searching for, with this information gathered by Google. Google is the largest search engine in the world, so all of this user data quickly adds up. It’s been said that “Google sells the data that they collect so the ads can be better suited to user’s interests.”

Advertisers also make use of Google Analytics data, a service that gives companies insight into their website activity by tracking the users who land on it. A few years ago, there were reports that Google Analytics wrongly gave U.S. intelligence agencies access to data from French users, and that Google hadn’t done enough to ensure privacy when this data was transferred between the US and Europe.

Reasons Why Big Tech Companies Want Your User Data

  • Social media apps want information on how you use their platform in order to give you content that you actually want. TikTok in particular builds a customised, personalised algorithm based on the ads and content you have previously watched and engaged with, so it can show you videos you will actually engage with and keep you on the app for longer.
  • Big tech companies will be interested in your data so that they can show you relevant ads. Most of the big tech companies make a lot of money through advertising on their platform, so they want to ensure that they keep advertisers happy by showing their services or products to the consumers who are more likely to convert.
  • Your data will be used to personalise your browsing and platform experience to keep you coming back.

How Is Data Collection Changing?

One of the biggest reasons companies use your data is to serve you better when you are online, but for big tech companies the reasons are often very different. With more and more people relying on technology provided by the likes of Google, Apple, Microsoft and Amazon, these companies need to be more reliable and held more accountable so that the rights of consumers are protected.

Technologies such as AI and cryptocurrency are becoming increasingly popular, and with them comes an increased risk of scams and fraud, such as the recent Hyperverse case. It is more important now than ever for these companies to put users’ minds at ease and strengthen their privacy rights.


The post Data Dilemmas: Balancing Privacy Rights in the Age of Big Tech first appeared on Information Technology Blog.

Worry-free Pentesting: Continuous Oversight In Offensive Security Testing

6 December 2022 at 06:00

In your cybersecurity practice, do you ever worry that you’ve left your back door open and an intruder might sneak inside? If you answered yes, you’re not alone. The experience can be a common one, especially for security leaders of large organizations with multiple layers of tech and cross-team collaboration to accomplish live, continuous security workflows.

At Synack, the better way to pentest is one that’s always on, can scale to test for urgent vulnerabilities or compliance needs, and provides transparent, thorough reporting and coverage insight.

Know what’s being tested, where it’s happening and how often it’s occurring 

With Synack365, our Premier Security Testing Platform, you can find relief in the fact that we’re always checking for unlocked doors. To provide better testing oversight, we maintain reports that list all web assets being tested, which our customers have praised. Customer feedback indicated that adding continuous oversight of host assets would also help customers know which host or web assets are being tested, when and where they’re being tested, and how much testing has occurred.

Synack’s expanded Coverage Analytics tells you all that and more for host assets, in addition to our previous coverage details on web applications and API endpoints, all found within the Synack platform. With Coverage Analytics, Synack customers are able to identify which web or host assets have been tested and the nature of the testing performed. This is helpful for auditing purposes and provides proof of testing activity, not just that an asset is in scope. Additionally, Coverage Analytics gives customers an understanding of areas that haven’t been tested as heavily for vulnerabilities and can provide internal red team leaders with direction for supplemental testing and prioritization. 

Unmatched Oversight of Coverage 

Other forms of security testing are unable to provide the details and information Synack Coverage Analytics does. Bug bounty testing typically goes through the untraceable public internet or via tagged headers, which require security researcher cooperation. The number of researchers and hours that they are testing are not easily trackable via these methods, if at all. Traditional penetration testing doesn’t have direct measurement capabilities. Our LaunchPoint infrastructure stands between the Synack Red Team, our community of 1,500 security researchers, and customer assets, so customers have better visibility of the measurable traffic during a test. More and more frequently, we hear that customers are required to provide this kind of information to their auditors in financial services and other industries. 

A look at the Classified Traffic & Vulnerabilities view in Synack’s Coverage Analytics. Sample data has been used for illustration purposes.

Benefits of Coverage Analytics 

  • Know what’s being tested within your web and host assets: where, when and how much 
  • View the traffic generated by the Synack Red Team during pentesting
  • Take next steps with confidence; identify where you may need supplemental testing and how to prioritize such testing

Starting today, security leaders can reduce their teams’ fears of pentesting in the dark by knowing what’s being tested, where and how much at any time across both web and host assets. Coverage Analytics makes sharing findings with executive leaders, board members or auditors simple and painless.

Current Synack customers can log in to the Synack Platform to explore Coverage Analytics today. If you have questions or are interested in learning more about Coverage Analytics, part of Synack’s Better Way to Pentest, don’t hesitate to contact us today!

The post Worry-free Pentesting: Continuous Oversight In Offensive Security Testing appeared first on Synack.

The Case for Integrating Dark Web Intelligence Into Your Daily Operations

30 January 2020 at 09:00

Some of the best intelligence an operator or decision-maker can obtain comes straight from the belly of the beast. That’s why dark web intelligence can be incredibly valuable to your security operations center (SOC). By leveraging this critical information, operators can gain a better understanding of the tactics, techniques and procedures (TTPs) employed by threat actors. With that knowledge in hand, decision-makers can better position themselves to protect their organizations.

This is in line with the classic teachings from Sun Tzu about knowing your enemy, and the entire passage containing that advice is particularly relevant to cybersecurity:

“If you know the enemy and know yourself, you need not fear the result of a hundred battles. If you know yourself but not the enemy, for every victory gained you will also suffer a defeat. If you know neither the enemy nor yourself, you will succumb in every battle.”

Let’s translate the middle section of this passage into colloquial cybersecurity talk: You can have the best security operations center in the world with outstanding cyber hygiene, but if you aren’t feeding it the right information, you may suffer defeats — and much of that information comes from dark web intelligence.

Completing Your Threat Intelligence Picture

To be candid, if you’re not looking at the dark web, there is a big gap in your security posture. Why? Because that’s where a lot of serious action happens. To paraphrase Sir Winston Churchill, the greatest defense against a cyber menace is to attack the enemy’s operations as near as possible to the point of departure.

Now, this is not a call to get too wrapped up in the dark web. Rather, a solid approach would be to go where the nefarious acts are being discussed and planned so you can take the appropriate proactive steps to prevent an attack on your assets.

The first step is to ensure that you have a basic understanding of the dark web. One common way to communicate over the dark web involves using anonymizing networks such as Tor and I2P (the Invisible Internet Project). In short, both networks are designed to provide secure communications and hide all types of information. Yes, this is only a basic illustration of dark web communications, but if your security operations center aims to improve its capabilities in the dark web intelligence space, you must be able to explain the dark web in these simple terms for two reasons:

  1. You cannot access these sites as you would any other website.
  2. You’re going to have to warn your superiors what you’re up to. The dark web is an unsavory place, full of illegal content. Your decision-makers need to know what will be happening with their assets at a high level, which makes it vitally important to speak their language.

And this part is critical: If you want to get the most out of dark web intelligence, you may have to put on a mask and appear to “be one of the bad guys.” You will need to explain to your decision-makers why full-time staff might have to spend entire days as someone else. This is necessary because when you start searching for granular details related to your organization, you may have to secure the trust of malicious actors to gain entry into their circles. That’s where the truly rich intelligence is.

This could involve transacting in bitcoin or other cryptocurrencies, stumbling upon things the average person would rather not see, trying to decipher coded language and broken language, and dealing with the typical challenges that come with putting up an act, all so you can become a trusted persona. Just like any other relationship you develop in life, this doesn’t happen overnight.

Of course, there are organizations out there that can provide their own “personas” for a fee and do the work for you. Using these services can be advantageous for small and medium businesses that may not have the resources to do all of this on their own. But the bigger your enterprise is, the more likely it becomes that you will want these capabilities in-house. In general, it’s also a characteristic of good operational security to be able to do this in-house.

Determining What Intelligence You Need

One of the most difficult challenges you will face when you decide to integrate dark web intelligence into your daily operations is figuring out what intelligence could help your organization. A good start is to cluster the information you might collect into groups. Here are some primer questions you can use to develop these groups:

  • What applies to the cybersecurity world in general?
  • What applies to your industry?
  • What applies to your organization?
  • What applies to your people?

For the first question, there are plenty of service providers who make it their business to scour the dark web and collect such information. This is an area where it may make more sense to rely on these service providers and integrate their knowledge feeds into existing ones within your security operations center. With the assistance of artificial intelligence (AI) to manage and make sense of all these data points, you can certainly create a good defensive perimeter and take remediation steps if you identify gaps in your network.

It’s the second, third and fourth clusters that may require some tailoring and additional resources. Certain service providers can provide industry-specific dark web intelligence — and you would be wise to integrate that into your workflow — but at the levels of your organization and its people, you will need to do the work on your own. Effectively, you would be doing human intelligence work on the dark web.

Why Human Operators Will Always Be Needed

No matter how far technological protections advance, when places like the dark web exist, there will always be the human element to worry about. We’re not yet at the stage where machines are deciding what to target — it’s still humans who make those decisions.

Therefore, having top-level, industrywide information feeds can be great and even necessary, but it may not be enough. You need to get into the weeds here because when malicious actors move on a specific target, that organization has to play a large role in protecting itself with specific threat intelligence. A key component of ensuring protections are in place is knowing what people are saying about you, even on the dark web.

As Sun Tzu said: “If you know the enemy and know yourself, you need not fear the result of a hundred battles.” There’s a lot of wisdom in that, even if it was said some 2,500 years ago.

The post The Case for Integrating Dark Web Intelligence Into Your Daily Operations appeared first on Security Intelligence.
