
NDSS 2025 – ASGARD

19 January 2026 at 15:00

Session 9B: DNN Attack Surfaces

Authors, Creators & Presenters: Myungsuk Moon (Yonsei University), Minhee Kim (Yonsei University), Joonkyo Jung (Yonsei University), Dokyung Song (Yonsei University)

PAPER
ASGARD: Protecting On-Device Deep Neural Networks with Virtualization-Based Trusted Execution Environments

On-device deep learning, increasingly popular for enhancing user privacy, now poses a serious risk to the privacy of deep neural network (DNN) models. Researchers have proposed leveraging Arm TrustZone's trusted execution environment (TEE) to protect models from attacks originating in the rich execution environment (REE). Existing solutions, however, fall short: (i) those that fully contain DNN inference within a TEE either support inference on CPUs only or require substantial modifications to closed-source proprietary software to incorporate accelerators; (ii) those that offload part of DNN inference to the REE either leave a portion of the DNN unprotected or incur large run-time overheads due to frequent model (de)obfuscation and TEE-to-REE exits.

We present ASGARD, the first virtualization-based TEE solution designed to protect on-device DNNs on legacy Armv8-A SoCs. Unlike prior work that uses TrustZone-based TEEs for model protection, ASGARD's TEEs remain compatible with existing proprietary software, keep the trusted computing base (TCB) minimal, and incur near-zero run-time overhead. To this end, ASGARD (i) securely extends the boundaries of an existing TEE to incorporate an SoC-integrated accelerator via secure I/O passthrough, (ii) tightly controls the size of the TCB via aggressive yet security-preserving platform- and application-level TCB debloating techniques, and (iii) reduces the number of costly TEE-to-REE exits via exit-coalescing DNN execution planning.

We implemented ASGARD on the RK3588S, an Armv8.2-A-based commodity Android platform equipped with a Rockchip NPU, without modifying Rockchip- or Arm-proprietary software. Our evaluation demonstrates that ASGARD effectively protects on-device DNNs on legacy SoCs with a minimal TCB size and negligible inference latency overhead.
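
The exit-coalescing idea lends itself to a small illustration. The Python sketch below is a hypothetical planner, not ASGARD's implementation: it assumes each inference step is tagged with whether it can run inside the TEE (for example, on the passed-through NPU) or must be serviced by the REE, and it batches consecutive REE-bound steps so the TEE exits once per batch rather than once per step. All step names here are placeholders.

```python
# Hypothetical sketch of exit-coalescing execution planning (not ASGARD's code).
# Assumption: each DNN step is tagged with whether it runs entirely inside the
# TEE (e.g., on the passed-through NPU) or requires a service call into the REE.
# Consecutive REE-bound steps are batched so the TEE exits once per batch,
# reducing the number of costly TEE-to-REE world switches.

from dataclasses import dataclass

@dataclass
class Step:
    name: str
    needs_ree: bool  # True if the step must be serviced by the REE

def plan(steps):
    """Group consecutive steps by whether they need an REE exit."""
    batches, current = [], []
    for step in steps:
        if current and current[0].needs_ree != step.needs_ree:
            batches.append(current)
            current = []
        current.append(step)
    if current:
        batches.append(current)
    return batches

def execute(batches):
    exits = 0
    for batch in batches:
        if batch[0].needs_ree:
            exits += 1  # one coalesced TEE-to-REE exit services the whole batch
            print(f"REE exit #{exits}: {[s.name for s in batch]}")
        else:
            print(f"TEE/NPU: {[s.name for s in batch]}")
    return exits

if __name__ == "__main__":
    model = [Step("conv1", False), Step("mem_alloc", True), Step("dma_setup", True),
             Step("conv2", False), Step("fc", False)]
    print(f"total exits: {execute(plan(model))}")  # 1 exit instead of 2
```

In this toy plan, the two adjacent REE-bound steps share a single exit instead of triggering one each, which is the effect the paper's execution planning aims for at the scale of a full model.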

ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing the creators', authors', and presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.


The post NDSS 2025 – ASGARD appeared first on Security Boulevard.

NDSS 2025 – BitShield: Defending Against Bit-Flip Attacks On DNN Executables

19 January 2026 at 11:00

Session 9B: DNN Attack Surfaces

Authors, Creators & Presenters: Yanzuo Chen (The Hong Kong University of Science and Technology), Yuanyuan Yuan (The Hong Kong University of Science and Technology), Zhibo Liu (The Hong Kong University of Science and Technology), Sihang Hu (Huawei Technologies), Tianxiang Li (Huawei Technologies), Shuai Wang (The Hong Kong University of Science and Technology)

PAPER
BitShield: Defending Against Bit-Flip Attacks on DNN Executables

Recent research has demonstrated the severity and prevalence of bit-flip attacks (BFAs; e.g., those mounted with Rowhammer techniques) on deep neural networks (DNNs). BFAs can manipulate DNN predictions and completely deplete DNN intelligence, and they can be launched against both DNNs running on deep learning (DL) frameworks such as PyTorch and DNNs compiled into standalone executables by DL compilers. While BFA defenses have been proposed for models on DL frameworks, we find them incapable of protecting DNN executables due to the new attack vectors these executables expose.

This paper proposes the first defense against BFAs on DNN executables. We first present a motivating study demonstrating the fragility and unique attack surfaces of DNN executables. Specifically, attackers can flip bits in the code section to alter the computation logic of a DNN executable and consequently manipulate its predictions; previous defenses that guard model weights can also be easily evaded when implemented in DNN executables. We then propose BitShield, a full-fledged defense that detects BFAs targeting both the data and code sections of DNN executables. We model a BFA on a DNN executable as a process that corrupts its semantics, and we base BitShield on semantic integrity checks. Moreover, by deliberately fusing code checksum routines into a DNN's semantics, we make BitShield highly resilient against BFAs targeting BitShield itself.

BitShield is integrated into a popular DL compiler (Amazon TVM) and is compatible with all existing compilation and optimization passes. Unlike prior defenses, BitShield is designed to protect the more vulnerable full-precision DNNs and does not assume specific attack methods, exhibiting high generality. BitShield also proactively detects ongoing BFA attempts instead of passively hardening DNNs. Evaluations show that BitShield provides strong protection against BFAs (an average mitigation rate of 97.51%) with low performance overhead (2.47% on average), even when faced with fully white-box, powerful attackers.
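
As a rough illustration of fusing a code checksum into a DNN's semantics (a conceptual sketch only, not BitShield's actual scheme), the snippet below folds a CRC comparison over an operator's code bytes into the operator's arithmetic: the extra term is zero when the code is intact, so normal inference is unaffected, while any flipped bit perturbs every output and can be flagged by a downstream integrity check. The REFERENCE_CODE bytes and the guarded_dense operator are placeholders invented for the example.

```python
# Conceptual sketch of fusing a code checksum into a DNN's computation
# (an illustration of the general idea, not BitShield's actual scheme).
# Assumption: we can read the bytes of the compiled operator ("code") and fold
# a checksum term into the arithmetic so the result is unchanged when the code
# is intact but perturbed detectably if any bit has been flipped.

import numpy as np
import zlib

REFERENCE_CODE = bytes.fromhex("deadbeefcafebabe")  # stand-in for an operator's code bytes

def checksum_delta(code: bytes, reference_crc: int) -> float:
    """Return 0.0 when the code matches its build-time CRC, a large value otherwise."""
    return 0.0 if zlib.crc32(code) == reference_crc else 1e6

def guarded_dense(x, w, code, reference_crc):
    # The checksum term is zero for intact code, so normal inference is
    # unaffected; a bit flip in `code` corrupts every output, which a
    # downstream semantic integrity check can flag.
    return x @ w + checksum_delta(code, reference_crc)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, w = rng.normal(size=(1, 4)), rng.normal(size=(4, 3))
    crc = zlib.crc32(REFERENCE_CODE)

    ok = guarded_dense(x, w, REFERENCE_CODE, crc)
    tampered = bytearray(REFERENCE_CODE)
    tampered[0] ^= 0x01  # simulate a single bit flip in the code section
    bad = guarded_dense(x, w, bytes(tampered), crc)

    print("intact :", ok)
    print("flipped:", bad)  # outputs blow up, signalling an ongoing BFA
```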

ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing the creators', authors', and presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.


The post NDSS 2025 – BitShield: Defending Against Bit-Flip Attacks On DNN Executables appeared first on Security Boulevard.

NDSS 2025 – Compiled Models, Built-In Exploits

18 January 2026 at 11:00

Session 9B: DNN Attack Surfaces

Authors, Creators & Presenters: Yanzuo Chen (The Hong Kong University of Science and Technology), Zhibo Liu (The Hong Kong University of Science and Technology), Yuanyuan Yuan (The Hong Kong University of Science and Technology), Sihang Hu (Huawei Technologies), Tianxiang Li (Huawei Technologies), Shuai Wang (The Hong Kong University of Science and Technology)

PAPER
Compiled Models, Built-In Exploits: Uncovering Pervasive Bit-Flip Attack Surfaces in DNN Executables

Recent research has shown that bit-flip attacks (BFAs) can manipulate deep neural networks (DNNs) via DRAM Rowhammer exploitation. For high-level DNN models running on deep learning (DL) frameworks such as PyTorch, extensive BFAs have been conducted to flip bits in model weights and shown to be effective; defenses have also been proposed to guard model weights. Nevertheless, DNNs are increasingly compiled into DNN executables by DL compilers to leverage hardware primitives. These executables manifest new and distinct computation paradigms, and we find that existing research fails to accurately capture and expose the BFA attack surface of DNN executables.

To this end, we launch the first systematic study of BFAs on DNN executables and reveal new attack surfaces neglected or underestimated in previous work. Prior BFAs in DL frameworks are limited to attacking model weights and assume a strong white-box attacker with full knowledge of the victim's model weights, which is unrealistic because weights are often confidential. In contrast, we find that BFAs on DNN executables can achieve high effectiveness by exploiting the model structure (usually stored in the executable code), which only requires knowing the (often public) model structure. Importantly, such structure-based BFAs are pervasive, transferable, and more severe in DNN executables (e.g., single-bit flips lead to successful attacks); they also slip past existing defenses.

To realistically demonstrate the new attack surfaces, we assume a weak and more realistic attacker with no knowledge of the victim's model weights. We design an automated tool to identify vulnerable bits in victim executables with high confidence (70%, compared to a 2% baseline). Launching this tool on DDR4 DRAM, we show that only 1.4 flips on average are needed to fully downgrade the accuracy of victim executables to random guessing, including quantized models that previously could require 23× more flips. We comprehensively evaluate 16 DNN executables, covering three large-scale DNN models trained on three commonly used datasets and compiled by the two most popular DL compilers. Our findings call for incorporating security mechanisms into future DNN compilation toolchains.
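
To make the notion of a "vulnerable bit" concrete, the sketch below simulates the search in software on a toy linear model: it flips one sign or exponent bit at a time, re-evaluates accuracy on a probe set, and records flips that cause a large drop. This is an illustrative simulation only; the paper's tool analyses real DNN executables and demonstrates the flips with Rowhammer on DDR4 DRAM, and every name, model, and threshold here is a placeholder.

```python
# Simplified software simulation of searching for BFA-vulnerable bits
# (illustrative sketch only; the paper's tool targets real DNN executables).

import numpy as np

def accuracy(w, xs, ys):
    """Tiny linear 'model': argmax over x @ w."""
    return float(np.mean(np.argmax(xs @ w, axis=1) == ys))

def flip_bit(w, flat_index, bit):
    """Flip one bit of one float32 parameter and return the modified weights."""
    raw = w.copy().view(np.uint32).reshape(-1)   # w is assumed float32
    raw[flat_index] ^= np.uint32(1 << bit)
    return raw.view(np.float32).reshape(w.shape)

def find_vulnerable_bits(w, xs, ys, drop=0.4):
    """Return (index, bit) pairs whose single flip drops accuracy by `drop` or more."""
    base = accuracy(w, xs, ys)
    hits = []
    for i in range(w.size):
        for bit in (30, 31):  # high exponent bit and sign bit of float32
            if base - accuracy(flip_bit(w, i, bit), xs, ys) >= drop:
                hits.append((i, bit))
    return base, hits

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    xs = rng.normal(size=(256, 8)).astype(np.float32)
    w = rng.normal(size=(8, 4)).astype(np.float32)
    ys = np.argmax(xs @ w, axis=1)  # labels the intact model classifies correctly
    base, hits = find_vulnerable_bits(w, xs, ys)
    print(f"baseline accuracy {base:.2f}, single-bit flips causing a large drop: {len(hits)}")
```

The same loop structure, applied to the bytes of a compiled model rather than a weight array, conveys why a handful of well-chosen flips can be enough to collapse accuracy.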

ABOUT NDSS
The Network and Distributed System Security Symposium (NDSS) fosters information exchange among researchers and practitioners of network and distributed system security. The target audience includes those interested in practical aspects of network and distributed system security, with a focus on actual system design and implementation. A major goal is to encourage and enable the Internet community to apply, deploy, and advance the state of available security technologies.


Our thanks to the Network and Distributed System Security (NDSS) Symposium for publishing the creators', authors', and presenters' superb NDSS Symposium 2025 conference content on the organization's YouTube channel.


The post NDSS 2025 – Compiled Models, Built-In Exploits appeared first on Security Boulevard.
