Showing 1–20 of 22 results
Date / Name

Sep 6, 2019 - Invisible Backdoor Attacks on Deep Neural Networks via Steganography and Regularization
May 24, 2021 - Dissecting Click Fraud Autonomy in the Wild
May 1, 2021 - Hidden Backdoors in Human-Centric Language Models
Sep 20, 2017 - Smoke Screener or Straight Shooter: Detecting Elite Sybil Attacks in User-Review Social Networks
Sep 1, 2024 - VPVet: Vetting Privacy Policies of Virtual Reality Apps
Feb 17, 2022 - Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations
May 23, 2014 - HVSTO: Efficient Privacy Preserving Hybrid Storage in Cloud Data Center
Sep 30, 2025 - Leveraging Scene Context with Dual Networks for Sequential User Behavior Modeling
Jul 22, 2025 - Depth Gives a False Sense of Privacy: LLM Internal States Inversion
Nov 22, 2011 - YouSense: Mitigating Entropy Selfishness in Distributed Collaborative Spectrum Sensing
Dec 6, 2018 - Differentially Private Data Generative Models
Oct 28, 2025 - Your Microphone Array Retains Your Identity: A Robust Voice Liveness Detection System for Smart Speakers
Mar 12, 2026 - EmbTracker: Traceable Black-box Watermarking for Federated Language Models
Sep 15, 2023 - A Duty to Forget, a Right to be Assured? Exposing Vulnerabilities in Machine Unlearning Services
Nov 19, 2021 - Mate! Are You Really Aware? An Explainability-Guided Testing Framework for Robustness of Malware Detectors
Dec 1, 2023 - The Philosopher's Stone: Trojaning Plugins of Large Language Models
Jan 10, 2025 - Model Inversion in Split Learning for Personalized LLMs: New Insights from Information Bottleneck Theory
Mar 9, 2026 - SlowBA: An efficiency backdoor attack towards VLM-based GUI agents
Oct 9, 2013 - All Your Location are Belong to Us: Breaking Mobile Social Networks for Automated User Location Tracking
Jun 13, 2017 - Automated Poisoning Attacks and Defenses in Malware Detection Systems: An Adversarial Machine Learning Approach