Showing 1–20 of 52 results
Date / Name

Nov 28, 2020: Voting based ensemble improves robustness of defensive models
Dec 22, 2020: Self-Progressing Robust Training
Jul 12, 2018: Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach
Sep 24, 2019: Sign-OPT: A Query-Efficient Hard-label Adversarial Attack
May 30, 2018: Stochastic Zeroth-order Optimization via Variance Reduction method
Feb 17, 2020: CAT: Customized Adversarial Training for Improved Robustness
Dec 2, 2017: Towards Robust Neural Networks via Random Self-ensemble
Mar 3, 2018: Seq2Sick: Evaluating the Robustness of Sequence-to-Sequence Models with Adversarial Examples
Oct 25, 2018: Attack Graph Convolutional Networks by Adding Fake Nodes
Oct 31, 2019: Enhancing Certifiable Robustness via a Deep Model Ensemble
Oct 30, 2024: CLIPErase: Efficient Unlearning of Visual-Textual Associations in CLIP
Jun 1, 2021: Concurrent Adversarial Learning for Large-Batch Training
Jul 20, 2022: FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning
Jun 28, 2024: One Prompt is not Enough: Automated Construction of a Mixture-of-Expert Prompts
Jul 4, 2024: Defense Against Syntactic Textual Backdoor Attacks with Token Substitution
Feb 25, 2024: DrAttack: Prompt Decomposition and Reconstruction Makes Powerful LLM Jailbreakers
Feb 6, 2025: Safety Reasoning with Guidelines
Apr 17, 2025: Exploring Expert Failures Improves LLM Agent Tuning
Nov 13, 2025: PISanitizer: Preventing Prompt Injection to Long-Context LLMs via Prompt Sanitization
Nov 23, 2025: TASO: Jailbreak LLMs via Alternative Template and Suffix Optimization