- Jul 17, 2021: Automatic Fairness Testing of Neural Classifiers through Adversarial Sampling
- Nov 17, 2021: Fairness Testing of Deep Image Classification with Adequacy Metrics
- Sep 2, 2023: Towards Certified Probabilistic Robustness with High Accuracy
- Sep 12, 2023: Exploiting Machine Unlearning for Backdoor Attacks in Deep Learning System
- Nov 14, 2019: There is Limited Correlation between Coverage and Robustness for Deep Neural Networks
- Oct 5, 2025: Rounding-Guided Backdoor Injection in Deep Learning Model Quantization
- Nov 11, 2025: Towards Provably Unlearnable Examples via Bayes Error Optimisation
- Apr 13, 2026: ClawGuard: A Runtime Security Framework for Tool-Augmented LLM Agents Against Indirect Prompt Injection
- Oct 22, 2024: LLMScan: Causal Scan for LLM Misbehavior Detection
- Nov 15, 2024: RedTest: Towards Measuring Redundancy in Deep Neural Networks Effectively
- Jun 13, 2024: Enhancing Diagnostic Accuracy in Rare and Common Fundus Diseases with a Knowledge-Rich Vision-Language Model
- May 10, 2025: PRUNE: A Patching Based Repair Framework for Certifiable Unlearning of Neural Networks
- May 14, 2018: Detecting Adversarial Samples for Deep Neural Networks through Mutation Testing
- Dec 14, 2018: Adversarial Sample Detection for Deep Neural Network through Model Mutation Testing
- Feb 12, 2024: Efficient and Universal Watermarking for LLM-Generated Code Detection
- Mar 19, 2025: Drone Remote Identification Based on Zadoff-Chu Sequences and Time-Frequency Images