Showing 1–20 of 29 results
- Jul 24, 2020 · T-BFA: Targeted Bit-Flip Adversarial Weight Attack
- Nov 8, 2021 · DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories
- Nov 5, 2020 · Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA
- May 30, 2019 · Robust Sparse Regularization: Simultaneously Optimizing Neural Network Robustness and Compactness
- Sep 10, 2019 · TBT: Targeted Neural Network Attack with Bit Trojan
- Mar 28, 2019 · Bit-Flip Attack: Crushing Neural Network with Progressive Bit Search
- Jul 18, 2018 · Defend Deep Neural Networks Against Adversarial Examples via Fixed and Dynamic Quantized Activation Functions
- Mar 22, 2021 · RA-BNN: Constructing Robust & Accurate Binary Neural Network to Simultaneously Defend Adversarial Bit-Flip Attack and Improve Accuracy
- Nov 22, 2018 · Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack
- Jul 22, 2020 · Robust Machine Learning via Privacy/Rate-Distortion Theory
- Jan 20, 2021 · RADAR: Run-time Adversarial Weight Attack Detection and Accuracy Recovery
- Sep 1, 2024 · Fisher Information guided Purification against Backdoor Attacks
- Mar 30, 2020 · DeepHammer: Depleting the Intelligence of Deep Neural Networks through Targeted Chain of Bit Flips
- May 9, 2022 · ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning
- Mar 13, 2023 · Model Extraction Attacks on Split Federated Learning
- Apr 23, 2025 · Robo-Troj: Attacking LLM-based Task Planners
- Jul 3, 2025 · EIM-TRNG: Obfuscating Deep Neural Network Weights with Encoding-in-Memory True Random Number Generator via RowHammer
- Dec 14, 2023 · DRAM-Locker: A General-Purpose DRAM Protection Mechanism against Adversarial DNN Weight Attacks
- Jan 12, 2026 · PROTEA: Securing Robot Task Planning and Execution
- Nov 27, 2025 · Invisible Hands: Gray-Box Bit Flip Attack for Steering LLMs Without Knowledge of Gradients, Data, and Weights