Showing 1–20 of 25 results
Date – Name

Apr 12, 2022 – Machine Learning Security against Data Poisoning: Are We There Yet?
Jun 11, 2020 – Backdoor Smoothing: Demystifying Backdoor Attacks on Deep Neural Networks
Aug 29, 2025 – I Stolenly Swear That I Am Up to (No) Good: Design and Evaluation of Model Stealing Attacks
Feb 8, 2019 – On the security relevance of weights in deep learning
May 4, 2022 – Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning
Jul 11, 2022 – Machine Learning Security in Industry: A Quantitative Survey
Dec 21, 2023 – Manipulating Trajectory Prediction with Backdoors
May 8, 2021 – Mental Models of Adversarial Machine Learning
Jun 10, 2025 – Design Patterns for Securing LLM Agents against Prompt Injections
Oct 24, 2025 – Gen-Review: A Large-scale Dataset of AI-Generated (and Human-written) Peer Reviews
Jun 12, 2020 – How many winning tickets are there in one DNN?
Jun 14, 2016 – Adversarial Perturbations Against Deep Neural Networks for Malware Classification
Feb 21, 2017 – On the (Statistical) Detection of Adversarial Examples
Oct 31, 2025 – Prevalence of Security and Privacy Risk-Inducing Usage of AI-based Conversational Agents
Nov 16, 2023 – Towards more Practical Threat Models in Artificial Intelligence Security
Jun 6, 2018 – Killing four birds with one Gaussian process: the relation between different test-time attacks
Dec 6, 2018 – The Limitations of Model Uncertainty in Adversarial Settings
Jul 14, 2020 – Adversarial Examples and Metrics
Nov 17, 2017 – How Wrong Am I? - Studying Adversarial Examples and their Impact on Uncertainty in Gaussian Process Machine Learning Models
Jun 14, 2021 – Backdoor Learning Curves: Explaining Backdoor Poisoning Beyond Influence Functions