Showing 1–20 of 120 results
Date / Name
Nov 22, 2021 / Adversarial Examples on Segmentation Models Can be Easy to Transfer
Dec 3, 2020 / Interpretable Graph Capsule Networks for Object Recognition
Jun 21, 2021 / Simple Distillation Baselines for Improving Small Self-supervised Models
Oct 21, 2019 / Semantics for Global and Local Interpretation of Deep Neural Networks
Jan 3, 2023 / Explainability and Robustness of Deep Visual Classification Models
Nov 18, 2019 / Improving the Robustness of Capsule Networks to Image Affine Transformations
Aug 22, 2019 / Saliency Methods for Explaining Adversarial Attacks
Mar 21, 2023 / Influencer Backdoor Attack on Semantic Segmentation
Oct 26, 2023 / A Survey on Transferability of Adversarial Examples across Deep Neural Networks
Apr 8, 2024 / A Survey on Responsible Generative AI: What to Generate and What Not
Sep 19, 2020 / Introspective Learning by Distilling Knowledge from Online Self-explanation
Nov 21, 2019 / Neural Network Memorization Dissection
Feb 19, 2021 / Effective and Efficient Vote Attack on Capsule Networks
Jul 24, 2023 / A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models
Jul 25, 2022 / SegPGD: An Effective and Efficient Adversarial Attack for Evaluating and Boosting Segmentation Robustness
Sep 12, 2023 / Exploring Non-additive Randomness on ViT against Query-Based Black-Box Attacks
Mar 14, 2024 / An Image Is Worth 1000 Lies: Adversarial Transferability across Prompts on Vision-Language Models
Apr 17, 2023 / Towards Robust Prompts on Vision-Language Models
Dec 5, 2018 / Understanding Individual Decisions of CNNs via Contrastive Backpropagation
Sep 2, 2019 / Understanding Bias in Machine Learning