Showing 21–40 of 145 results
Feb 19, 2023 – Delving into the Adversarial Robustness of Federated Learning
Nov 1, 2022 – The Perils of Learning From Unlabeled Data: Backdoor Attacks on Semi-supervised Learning
Mar 31, 2023 – Towards Adversarially Robust Continual Learning
May 17, 2023 – Are You Copying My Model? Protecting the Copyright of Large Language Models for EaaS via Backdoor Watermark
Jun 27, 2023 – When Foundation Model Meets Federated Learning: Motivations, Challenges, and Future Directions
Feb 28, 2025 – Unlearning through Knowledge Overwriting: Reversible Federated Unlearning via Selective Sparse Adapter
Apr 22, 2025 – A Comprehensive Survey in LLM(-Agent) Full Stack Safety: Data, Training and Deployment
Dec 31, 2024 – MLLM-as-a-Judge for Image Safety without Human Labeling
May 15, 2025 – Sybil-based Virtual Data Poisoning Attacks in Federated Learning
Aug 29, 2024 – RLCP: A Reinforcement Learning-based Copyright Protection Method for Text-to-Image Diffusion Model
Mar 17, 2026 – Empirical Recipes for Efficient and Compact Vision-Language Models
Dec 11, 2022 – ResFed: Communication Efficient Federated Learning by Transmitting Deep Compressed Residuals
Mar 11, 2026 – UniCompress: Token Compression for Unified Vision-Language Understanding and Generation
Jun 22, 2021 – A Vertical Federated Learning Framework for Graph Convolutional Network
Mar 18, 2021 – Model Extraction and Adversarial Transferability, Your BERT is Vulnerable!
Nov 20, 2020 – A Reputation Mechanism Is All You Need: Collaborative Fairness and Adversarial Robustness in Federated Learning
May 8, 2022 – Data-Free Adversarial Knowledge Distillation for Graph Neural Networks
May 23, 2022 – IDEAL: Query-Efficient Data-Free Learning from Black-box Models
Aug 22, 2022 – RAIN: RegulArization on Input and Network for Black-Box Domain Adaptation
Jun 1, 2022 – Privacy for Free: How does Dataset Condensation Help Privacy?