

End-to-end encryption is next frontline in governments’ data sovereignty war with hyperscalers

1 December 2025 at 08:21

Data residency is no longer enough. As governments lose faith that storing data within their borders, but on someone else’s servers, provides real sovereignty, regulators are demanding something more fundamental: control over the encryption keys for their data.

Privatim, a collective of Swiss local government data protection officers, last week called on their employers to avoid the use of international software-as-a-service solutions for sensitive government data unless the agencies themselves implement end-to-end encryption. The resolution specifically cited Microsoft 365 as an example of the kinds of platforms that fall short.

“Most SaaS solutions do not yet offer true end-to-end encryption that would prevent the provider from accessing plaintext data,” said the Swiss data protection officers’ resolution. “The use of SaaS applications therefore entails a significant loss of control.”

Security analysts say this loss of control undermines the very concept of data sovereignty. “When a cloud provider has any ability to decrypt customer data, either through legal process or internal mechanisms, the data is no longer truly sovereign,” said Sanchit Vir Gogia, chief analyst at Greyhound Research.

The Swiss position isn’t isolated, Gogia said. Across Europe, Germany, France, Denmark and the European Commission have each issued warnings or taken action, pointing to a loss of faith in the neutrality of foreign-owned hyperscalers, he said. “Switzerland distinguished itself by stating explicitly what others have implied: that the US CLOUD Act and foreign surveillance risk render cloud solutions lacking end-to-end encryption unsuitable for high-sensitivity public sector use, according to the resolution.”

Encryption, location, location

Privatim’s resolution identified risks that geographic data residency cannot address. Globally operating companies offer insufficient transparency for authorities to verify compliance with contractual obligations, the group said. This opacity extends to technical implementations, change management, and monitoring of employees and subcontractors who can form long chains of external service providers.

Data stored in one jurisdiction can still be accessed by foreign governments under extraterritorial laws like the US Clarifying Lawful Overseas Use of Data (CLOUD) Act, said Ashish Banerjee, senior principal analyst at Gartner. Software providers can also unilaterally amend contract terms periodically, further reducing customer control, he said.

“Several clients in the Middle East and Europe have raised concerns that, regardless of where their data is stored, it could still be accessed by cloud providers — most of which are US-based,” Banerjee said.

Prabhjyot Kaur, senior analyst at Everest Group, said the Swiss stance accelerates a broader regulatory pivot toward technical sovereignty controls. “While the Swiss position is more stringent than most, it is not an isolated outlier,” she said. “It accelerates a broader regulatory pivot toward technical sovereignty controls, even in markets that still rely on contractual or procedural safeguards today.”

Given these limitations, Privatim called for stricter rules on cloud use at all levels of government: “The use of international SaaS solutions for particularly sensitive personal data or data subject to legal confidentiality obligations by public bodies is only possible if the data is encrypted by the responsible body itself and the cloud provider has no access to the key.”

This represents a departure from current practices, where many government bodies rely on cloud providers’ native encryption features. Services like Microsoft 365 offer encryption at rest and in transit, but Microsoft retains the ability to decrypt that data for operational purposes, compliance requirements, or legal requests.
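The practical difference between those two models is where the key lives. Below is a minimal sketch of the agency-side, hold-your-own-key pattern Privatim describes, in Python: encryption happens before anything is uploaded, and the provider only ever receives ciphertext. The third-party `cryptography` package, the helper names and the placeholder upload step are illustrative assumptions, not any specific SaaS or key-management API.

```python
# Minimal sketch of agency-side ("hold your own key") encryption before upload.
# Assumes the third-party 'cryptography' package (pip install cryptography).
# The upload step is a placeholder: the provider only ever receives ciphertext.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def generate_key() -> bytes:
    """256-bit key, generated and kept inside the agency's own key management."""
    return AESGCM.generate_key(bit_length=256)

def encrypt_document(key: bytes, plaintext: bytes, context: bytes) -> bytes:
    """Encrypt locally; 'context' (e.g., a document ID) is bound as associated data."""
    nonce = os.urandom(12)                      # unique nonce per encryption
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, context)
    return nonce + ciphertext                   # store the nonce alongside the ciphertext

def decrypt_document(key: bytes, blob: bytes, context: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, context)

if __name__ == "__main__":
    key = generate_key()                        # never leaves the agency
    blob = encrypt_document(key, b"draft decision, restricted", b"doc-42")
    # upload_to_saas(blob)  <- hypothetical call; only ciphertext crosses the boundary
    print(decrypt_document(key, blob, b"doc-42"))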

More security, less insight

Customer-controlled end-to-end encryption comes with significant trade-offs, analysts said.

“When the provider has zero visibility into plaintext, governments would face reduced search and indexing capabilities, limited collaboration features, and restrictions on automated threat detection and data loss prevention tooling,” said Kaur. “AI-driven productivity enhancements like copilots also rely on provider-side processing, which becomes impossible under strict end-to-end encryption.”

Beyond functionality losses, agencies would face significant infrastructure and cost challenges. They would need to operate their own key management systems, introducing governance overhead and staffing needs. Encryption and decryption at scale can impact system performance, as they require additional hardware resources and increase latency, Banerjee said.

“This might require additional hardware resources, increased latency in user interactions, and a more expensive overall solution,” he said.

These constraints mean most governments will likely adopt a tiered approach rather than blanket encryption, said Gogia. “Highly confidential content, including classified documents, legal investigations, and state security dossiers, can be wrapped in true end-to-end encryption and segregated into specialized tenants or sovereign environments,” he said. Broader government operations, including administrative records and citizen services, will continue to use mainstream cloud platforms with controlled encryption and enhanced auditability.

A shift in cloud computing power

If the Swiss approach gains momentum internationally, hyperscalers will need to strengthen technical sovereignty controls rather than relying primarily on contractual or regional assurances, Kaur said. “The required adaptations are already visible, particularly from Microsoft, which has begun rolling out more stringent models around customer-controlled encryption and jurisdictional access restrictions.”

The shift challenges fundamental assumptions in how cloud providers have approached government customers, according to Gogia. “This invalidates large portions of the existing government cloud playbooks that depend on data center residency, regional support, and contractual segmentation as the primary guarantees,” he said. “Client-side encryption, confidential computing, and external key management are no longer optional capabilities but baseline requirements for public sector contracts in high-compliance markets.”

The market dynamics could shift significantly as a result. Banerjee said this could create a two-tier structure: global cloud services for commercial customers less concerned about sovereignty, and premium sovereign clouds for governments demanding full control. “Non-US cloud providers and local vendors — such as emerging players in Europe — could gain market share by delivering sovereign solutions that meet strict encryption requirements,” he said.

Privatim’s recommendations apply specifically to Swiss public bodies and serve as guidance rather than binding policy. But the debate signals that data location alone may no longer satisfy regulators’ sovereignty concerns in an era where geopolitical rivalries are increasingly playing out through technology policy.

Guardrails and governance: A CIO’s blueprint for responsible generative and agentic AI

24 November 2025 at 10:09

The promise of generative AI (genAI) and agentic AI is electrifying. From automating complex tasks to unlocking unprecedented creativity, these technologies are poised to redefine your enterprise landscape. But as a CIO, you know that with great power comes great responsibility — and significant risk. The headlines are already filled with cautionary tales of data breaches, biased outputs and compliance nightmares.

The truth is, without robust guardrails and a clear governance framework, the very innovations you champion could become your biggest liabilities. This isn’t about stifling innovation; it’s about channeling it responsibly, ensuring your AI initiatives drive value without compromising security, ethics or trust.

Let’s dive into the critical areas where you must lead the charge.

Guardrails & governance: Why they are necessary, but no longer sufficient for agents

Many in the industry confuse the function of guardrails, treating them as a substitute for true oversight. This is a critical misconception that must be addressed. Guardrails and governance are not interchangeable; they are two essential parts of a single system of control.

Think of it this way:

  • AI governance is the blueprint and the organization. It’s the framework of policies, roles, committees (like your AI review board) and processes that define what is acceptable, who is accountable and how you will monitor and audit all AI systems across the enterprise. Governance is the strategy and the chain of command.
  • AI guardrails are the physical controls and the rules in the code. These are the technical mechanisms embedded directly into the AI system’s architecture, APIs and interfaces to enforce the governance policies in real-time. Guardrails are the enforcement layer.

While we must distinguish between governance (the overarching policy framework) and guardrails (technical, in-the-moment controls), the reality of agentic AI has revealed a critical flaw: current soft guardrails are failing catastrophically. These controls are often probabilistic, pattern-based or rely on LLM self-evaluation, which is easily bypassed by an agent’s core capabilities: autonomy and composability (the ability to chain tools and models).

Guardrail failure mode | Core flaw | CIO takeaway: Why static fails
PII/Moderation | Pattern Reliance & Shallow Filters | Fails when sensitive data is slightly obfuscated (e.g., using “SNN” instead of “SSN”) or harmful content is wrapped in code/leetspeak.
Hallucination/Jailbreak | Circular Confidence & Probabilistic Defense | Relies on one model to judge another’s truthfulness or intent. The defense is easily manipulated, as the system can be confidently wrong or tricked by multi-turn or encoded attacks.

The agent’s ability to choose an alternate, unguarded path renders simple static checks useless. Your imperative is to move from relying on these flawed, soft defenses to implementing continuous, deterministic control.

The path forward: Implementing continuous control

To address these systemic vulnerabilities, CIOs must take the following actions:

  1. Mandate hard data boundaries: Replace the agent’s probabilistic PII detection with deterministic, non-LLM-based security tools (DLP, tokenization) enforced by your API gateway. This creates an un-bypassable security layer for all data entering or leaving the agent.
  2. Shift to pre-execution governance: Require all agentic deployments to utilize an agent orchestration layer that performs a pre-execution risk assessment on every tool call and decision step. This continuous governance module checks the agent’s compliance before it executes a financial transaction or high-privilege API call.
  3. Ensure forensic traceability: Implement a “Digital Ledger” approach for all agent actions. Every LLM call, parameter passed and reasoning step must be logged sequentially and immutably to allow for forensic reconstruction and accountability (a minimal sketch of such a check and ledger follows this list).
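A minimal sketch of how items 1 through 3 can fit together, in Python: a deterministic, non-LLM check runs before every tool call, and every decision is appended to a hash-chained ledger. The rules, tool names and the `execute_tool` callable are illustrative assumptions, not the API of any specific orchestration product.

```python
# Minimal sketch: deterministic pre-execution checks (items 1 and 2) feeding a
# hash-chained, append-only action ledger (item 3). Rules and names are illustrative.
import hashlib, json, re, time

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")   # hard data boundary, not an LLM filter
LEDGER: list[dict] = []                               # in production: immutable storage

def append_ledger(entry: dict) -> None:
    """Chain each record to the previous one so tampering is detectable."""
    prev_hash = LEDGER[-1]["hash"] if LEDGER else "genesis"
    body = {"ts": time.time(), "prev": prev_hash, **entry}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True, default=str).encode()
    ).hexdigest()
    LEDGER.append(body)

def pre_execution_check(tool: str, params: dict) -> tuple[bool, str]:
    """Deterministic policy evaluated before the agent is allowed to act."""
    if SSN_PATTERN.search(json.dumps(params)):
        return False, "hard data boundary: apparent SSN in outbound parameters"
    if tool in {"wire_transfer", "delete_records", "grant_privilege"}:
        return False, "high-privilege call: requires human approval"
    return True, "within policy"

def guarded_call(tool: str, params: dict, execute_tool) -> dict:
    allowed, reason = pre_execution_check(tool, params)
    append_ledger({"tool": tool, "params": params, "allowed": allowed, "reason": reason})
    if not allowed:
        return {"status": "held", "reason": reason}
    result = execute_tool(tool, params)          # hypothetical downstream executor
    append_ledger({"tool": tool, "result": result})
    return {"status": "executed", "result": result}

if __name__ == "__main__":
    print(guarded_call("wire_transfer", {"amount": 250_000}, execute_tool=lambda t, p: "ok"))
    print(guarded_call("lookup_invoice", {"invoice_id": "INV-7"}, execute_tool=lambda t, p: "ok"))
```

Because the check is ordinary code rather than one model judging another, it cannot be talked out of its decision, which is the deterministic control this section calls for.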

Data security: Your ‘private-by-default’ AI strategy

The fear of proprietary data leaking into public models is palpable, and for good reason. Every piece of intellectual property inadvertently fed into a large language model (LLM) becomes a potential competitive disadvantage. This is where a “private-by-default” strategy becomes non-negotiable for your organization, a necessity widely discussed by KPMG in analyses such as The new rules of data governance in the age of generative AI.

This means you need to consider:

  • Embracing private foundation models: For highly sensitive workloads, investing in or leveraging private foundation models hosted within your secure environment is paramount. This gives you ultimate control over the model, its training data and its outputs.
  • Leveraging retrieval augmented generation (RAG) architectures: RAG is a game-changer. Instead of training a model directly on your entire private dataset, RAG systems allow the AI to retrieve relevant information from your secure, internal knowledge bases and then use a public or private LLM to generate a response. This keeps your sensitive data isolated while still providing contextually rich answers (a minimal sketch of this flow follows this list).
  • Robust data anonymization and masking: For any data that must interact with external models, implement stringent anonymization and masking techniques. This minimizes the risk of personally identifiable information (PII) or sensitive business data being exposed.
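To make the RAG pattern above concrete, here is a minimal sketch in Python. Retrieval is reduced to naive keyword overlap over an in-memory knowledge base (a real deployment would use a vector index), and `call_llm` is a hypothetical stand-in for whichever private or public model endpoint you use; the point is that only the retrieved snippets, not the full dataset, ever reach the model.

```python
# Minimal RAG sketch: retrieve from an internal store, then prompt a model.
# 'call_llm' is a hypothetical stand-in for your private or public LLM endpoint;
# retrieval here is naive keyword overlap (a real system would use a vector index).

INTERNAL_KB = {
    "policy-107": "Travel expenses above 500 EUR require director approval.",
    "policy-212": "Customer PII must not be exported outside the EU region.",
    "runbook-03": "Rotate service-account credentials every 90 days.",
}

def retrieve(query: str, k: int = 2) -> list[tuple[str, str]]:
    """Score documents by shared keywords and return the top-k snippets."""
    q_terms = set(query.lower().split())
    scored = sorted(
        INTERNAL_KB.items(),
        key=lambda kv: len(q_terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(query: str, call_llm) -> str:
    """Compose a grounded prompt; only the retrieved snippets leave the KB."""
    snippets = retrieve(query)
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in snippets)
    prompt = (
        "Answer using only the context below and cite the document IDs.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)

if __name__ == "__main__":
    # Stub model so the sketch runs without any external service.
    print(answer("Who approves travel expenses over 500 EUR?", call_llm=lambda p: p))
```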

Your goal isn’t just to prevent data leakage; it’s to build a resilient AI ecosystem that protects your most valuable assets from the ground up.

Explainability & auditability: The imperative for agentic AI

Generative AI creates; agentic AI acts. When an autonomous AI agent is making decisions, executing transactions or interacting with customers, the stakes escalate dramatically. Regulators, auditors and even internal stakeholders will demand to know why an agent took a particular action.

This necessitates a forensic level of explainability and auditability:

  • Comprehensive decision logging: Every single decision, every parameter change, every data point considered by an AI agent must be meticulously logged. This isn’t just about output; it’s about the entire chain of reasoning.
  • Clear audit trails: These logs must be easily accessible, searchable and structured to form a clear, human-readable audit trail. When an auditor asks how an AI agent processed a loan application, you should be able to trace every step, from input to final decision.
    • Agentic AI example: An agent is tasked with automating supplier payments. A key guardrail must be a transaction limit filter that automatically holds any payment over $100,000 for human approval. The corresponding governance policy requires that the log for that agent detail the entire sequence of API calls, the exact rule that triggered the hold and the human who provided the override, creating a perfect audit trail (see the sketch after this list).
  • Transparency in agent design: The design and configuration of your AI agents should be documented and version-controlled. Understanding the rules, logic and external integrations an agent uses is crucial for diagnosing issues and ensuring compliance.
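A minimal sketch of the supplier-payment example above, in Python. The $100,000 threshold comes from the example; the function names, log fields and approver identity are illustrative assumptions. The goal is to show how the hold, the rule that triggered it and the human override end up as one reconstructable trail.

```python
# Sketch of the supplier-payment guardrail described above: payments over
# $100,000 are held for human approval, and the log captures the rule that
# fired and the human who released the hold. Names and fields are illustrative.
import json, time, uuid

AUDIT_LOG: list[dict] = []

def log(event: str, **fields) -> None:
    AUDIT_LOG.append({"ts": time.time(), "event": event, **fields})

def submit_payment(supplier: str, amount_usd: float) -> dict:
    payment_id = str(uuid.uuid4())
    log("payment_requested", payment_id=payment_id, supplier=supplier, amount=amount_usd)
    if amount_usd > 100_000:
        log("payment_held", payment_id=payment_id,
            rule="transaction_limit_filter: amount > 100000 USD")
        return {"payment_id": payment_id, "status": "held_for_approval"}
    log("payment_executed", payment_id=payment_id)
    return {"payment_id": payment_id, "status": "executed"}

def approve_held_payment(payment_id: str, approver: str) -> dict:
    """Human override: the approver's identity becomes part of the audit trail."""
    log("payment_override_approved", payment_id=payment_id, approver=approver)
    log("payment_executed", payment_id=payment_id)
    return {"payment_id": payment_id, "status": "executed", "approved_by": approver}

if __name__ == "__main__":
    receipt = submit_payment("ACME Industrial", 250_000)
    if receipt["status"] == "held_for_approval":
        approve_held_payment(receipt["payment_id"], approver="j.doe@example.gov")
    print(json.dumps(AUDIT_LOG, indent=2))
```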

Ethical oversight: Nurturing responsible AI

Beyond security and compliance lies the profound ethical dimension of AI. Addressing this requires a proactive, human-centric approach:

  • Establish an AI review board or center of excellence (CoE): This isn’t a suggestion; it’s a necessity. This multidisciplinary group, comprising representatives from legal, ethics, data science and business units, should be the conscience of your AI strategy, aligning with guidance found in resources like The CIO’s guide to AI governance. Their mandate is to:
    • Proactive bias detection: Scrutinize model training data for potential biases before deployment.
    • Fairness in agent design: Review the logic and rules governing AI agents to ensure they don’t inadvertently discriminate or produce unfair results.
    • Ethical guidelines & policies: Develop and enforce clear ethical guidelines for the use and deployment of all AI within the organization.
    • Ethical AI example: A new genAI model is deployed to screen job candidates. A technical guardrail is implemented as an output toxicity filter to block any language the model suggests that could be interpreted as discriminatory. The governance policy dictates that the AI review board must regularly audit the model’s screening outcomes to ensure the overall hiring rate for protected groups remains statistically unbiased (a minimal sketch of such an audit follows this list).
  • Human-in-the-loop mechanisms: For critical AI-driven decisions, ensure there’s always an opportunity for human review and override.
  • Bias mitigation techniques: Invest in techniques like re-weighting training data and Explainable AI (XAI) tools to understand and reduce bias in your models.
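The outcome audit described in the hiring example can start as a simple selection-rate comparison. Below is a minimal sketch in Python using the common four-fifths heuristic; the groups and counts are invented, and a real review-board audit would add significance testing and human judgment.

```python
# Sketch of the screening-outcome audit described above: compare selection
# rates across groups using the common "four-fifths" heuristic. Counts are
# illustrative; a real audit would add significance testing and human review.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants if applicants else 0.0

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, applicants); ratios below 0.8 warrant review."""
    rates = {g: selection_rate(s, n) for g, (s, n) in outcomes.items()}
    benchmark = max(rates.values())
    return {g: rate / benchmark for g, rate in rates.items()}

if __name__ == "__main__":
    ratios = four_fifths_check({
        "group_a": (48, 120),   # 40% selected
        "group_b": (27, 90),    # 30% selected
    })
    for group, ratio in ratios.items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```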

Responsible AI isn’t a checkbox; it’s a continuous journey of introspection, vigilance and commitment.

The CIO’s leadership imperative

The deployment of genAI and agentic AI isn’t just an IT project; it’s a strategic transformation that touches every facet of the enterprise. As the CIO, you are uniquely positioned to lead this charge, not just as a technologist, but as a strategist, risk manager and ethical guardian.

By prioritizing a “private-by-default” data strategy, enforcing rigorous explainability and auditability for autonomous agents and establishing robust ethical oversight, you can unlock the full potential of AI while building an enterprise that is secure, compliant and profoundly responsible. The future of AI is bright, but only if you build it on a foundation of trust and accountability. Make sure your blueprints reflect that.

This article is published as part of the Foundry Expert Contributor Network.

Personal devices at work are changing the future of work: the bright and dark sides of BYOD

23 November 2025 at 23:38

The hybrid-work era has produced a surge of "invisible IT assets"

BYOD, in which employees use their own smartphones and computers for work, is no longer an unusual way of working. Global survey data show that nearly half of employees already use personal devices for work in some form, and the market keeps growing at a remarkable rate of around 15 percent a year. Forecasts put it at several hundred billion dollars by the early 2030s, and the trend is widely expected to accelerate further.

What is striking is that while only about half of companies officially permit BYOD, employees at more than 80 percent of companies actually use personal devices for work. In other words, at many organizations a creeping, de facto BYOD has spread on the ground before internal rules were put in place. The same gap shows up in surveys: roughly 80 percent of employees say they would prefer separate devices for work and personal use, yet only about 15 percent are actually issued a device by their employer. That gap between the ideal and the reality is fueling BYOD's rapid spread.

This situation is producing the problem known as shadow IT. Personal devices, consumer cloud services and chat apps used outside the IT department's oversight are multiplying, and IT assets the company cannot see are growing fast. One recent study reported the startling finding that roughly 40 percent of the cloud applications in use at companies are shadow IT unknown to the IT department. The spread of BYOD widens these blind spots and has a major impact on corporate IT governance.

The cost benefits are also hard to ignore. One estimate puts the savings from a mobile BYOD program at more than 300 dollars per employee per year. Because companies save not only on device procurement but also on management and maintenance, BYOD is an attractive option, especially for small and midsize businesses. Many employees also report higher productivity and satisfaction when they can work on devices they already know well.

Security risks deepen, and the shift to zero trust

Behind the convenience BYOD brings, security risks are worsening rapidly. In multiple surveys, around 60 percent of IT leaders name security as their biggest BYOD concern, and malware infections and data leaks traced back to unmanaged devices keep occurring. Most striking is the report that more than 90 percent of ransomware incidents start from an unmanaged device. For organizations, the devices they cannot see have become their greatest weakness.

Against this backdrop, the debate over whether to ban BYOD or tighten its rules has flared up again worldwide. In the UK, one survey found that more than half of companies are considering banning personal devices in the office, and a major telecom operator has argued for switching from BYOD to fully company-managed devices. Realistically, though, given the pace of business and the need to attract talent, abandoning flexible ways of working altogether is difficult.

Attention is therefore turning to redesigning BYOD around zero-trust thinking. Zero trust discards traditional assumptions such as "it is on the corporate network, so it is safe" or "it is a company PC, so it can be trusted," and instead continuously verifies every access, regardless of who owns the device or where it is. On the back of this thinking, investment in mobile device management (MDM), unified endpoint management (UEM) and AI-assisted endpoint security is surging.
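A minimal sketch of what "verify every access, regardless of the device" can look like in code, in Python. The attributes, roles and thresholds are illustrative assumptions rather than the API of any particular MDM or UEM product; the point is that the decision depends on identity, device posture and data sensitivity, not on whether the device is corporate-owned.

```python
# Minimal sketch of a zero-trust style access decision: every request is
# evaluated on identity, device posture, and resource sensitivity, regardless
# of whether the device is corporate or personal. Attributes are illustrative.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_role: str             # e.g. "finance", "contractor"
    mfa_passed: bool
    device_compliant: bool     # reported by an MDM/UEM posture check
    device_owner: str          # "corporate" or "byod"
    resource_sensitivity: str  # "low", "medium", "high"

def decide(req: AccessRequest) -> str:
    """Return 'allow', 'step_up' (extra verification), or 'deny'."""
    if not req.mfa_passed:
        return "deny"
    if req.resource_sensitivity == "high":
        # High-sensitivity systems: managed, compliant device required.
        return "allow" if (req.device_owner == "corporate" and req.device_compliant) else "deny"
    if req.resource_sensitivity == "medium":
        # BYOD allowed, but only from a compliant, containerized workspace.
        return "allow" if req.device_compliant else "step_up"
    return "allow"

if __name__ == "__main__":
    print(decide(AccessRequest("finance", True, True, "byod", "medium")))   # allow
    print(decide(AccessRequest("finance", True, False, "byod", "high")))    # deny
```

In this framing, a compliant personal device can reach medium-sensitivity systems while highly confidential ones stay on managed hardware, which is the kind of fine-grained line-drawing the closing section of this article describes.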

On the technical side, the spreading approach is not to manage the whole device but to containerize only the virtual desktop, secure browser and business applications, logically separating corporate data from personal data. This shifts the emphasis from the device itself to user identity, access rights and per-application data protection. It also makes it easier to respect employees' reluctance to have their personal property inspected in detail, and it is being accepted as a more realistic security measure.

Managing third-party access has also become an important theme. Contractors, freelancers, business partners and other non-employees increasingly access corporate cloud environments from their own devices, and managing that access is a major challenge. Trend reports for 2025 likewise list BYOD management that includes third-party access as one of the top priorities, calling for comprehensive security measures that reach beyond the corporate boundary.

Designing a "new BYOD" for sustainable ways of working

Looking at the global debate, a clear direction emerges for how companies should approach BYOD from here: the era of letting BYOD drift along is over, and organizations are moving to a deliberately redesigned BYOD. This new BYOD is a more refined approach that carefully balances security against privacy and convenience against manageability.

The first priority is the move to unified endpoint management (UEM). UEM, which manages smartphones, tablets, laptops and even wearables and IoT devices from a single point, is growing fast alongside the expansion of BYOD. Recent market research predicts that while MDM still holds a large share, UEM will show the highest growth rate. It makes it possible to apply consistent security policies regardless of device type or owner.

Next, there is a shift toward defending the data itself using virtualization and container technology. Rather than placing the entire device under company control, the emphasis is on isolating only the applications or workspace and leaving no corporate data on the local device. If a device is lost or stolen, only the work environment needs to be disabled remotely, which makes security easier to reconcile with personal privacy.

Updating policy and training is another essential element. Many companies have no BYOD policy that anticipates modern usage patterns such as after-hours use, sharing a device with family members or automatic backups to personal cloud services. And as generative AI tools spread, the risk of feeding confidential information into AI services from a personal device can no longer be ignored. Recent commentary stresses the need to redesign BYOD policy not as a mere list of device rules but as a behavioral guide for protecting identity and data, and to embed it through dialogue with employees.

Interestingly, BYOD is also starting to be discussed in the context of the environment and sustainability. Global e-waste reached roughly 60 million tonnes as of 2023, and companies that keep procuring new devices in bulk face a growing environmental burden. Some argue that by extending device lifecycles and reducing procurement, BYOD can be one way to lower that burden. For companies pursuing sustainable management, BYOD is taking on a new meaning.

When Japanese companies consider or revisit BYOD from here on, what matters is not the binary choice of banning or allowing it but designing at the level of how far, for what and for whom access is permitted. That calls for fine-grained lines, such as requiring company-issued devices and virtual desktops for highly confidential systems while allowing BYOD access to ordinary cloud groupware and chat.

Investment in mechanisms that watch access and data flows, rather than the device itself, is also essential. Adopting UEM, zero-trust thinking and identity-based access control makes it possible to guarantee a consistent security level whether or not a device is BYOD. Above all, communication with employees matters. BYOD is not merely a question of IT department efficiency; it touches on deeply personal concerns, such as how far the company will manage one's own phone and how work and private life will be kept apart. Bringing frontline voices in from the policy-drafting stage and carefully explaining the purpose, the risks and exactly what device information the company will access are indispensable to BYOD that works in practice.

At first glance BYOD looks like a binary choice between convenient and dangerous, but in practice it is a design problem: how to combine rules and technology. Whether organizations can combine zero trust, UEM, virtualization and privacy-conscious policy into a BYOD that is easy for employees to work with and safe for the company will be a defining theme of the next few years. As the diversification of work styles advances irreversibly, BYOD is both an unavoidable challenge and an important factor in corporate competitiveness.
