Showing 1–20 of 127 results
Date / Name

Oct 18, 2016 - Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data
Feb 5, 2019 - Analyzing and Improving Representations with the Soft Nearest Neighbor Loss
Jul 18, 2016 - On the Effectiveness of Defensive Distillation
Nov 3, 2018 - A Marauder's Map of Security and Privacy in Machine Learning
Nov 14, 2015 - Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
Feb 24, 2018 - Scalable Private Learning with PATE
Aug 30, 2019 - How Relevant is the Turing Test in the Age of Sophisbots?
Mar 13, 2018 - Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning
Feb 8, 2016 - Practical Black-Box Attacks against Machine Learning
Apr 28, 2016 - Crafting Adversarial Input Sequences for Recurrent Neural Networks
Nov 24, 2015 - The Limitations of Deep Learning in Adversarial Settings
Jul 28, 2020 - Tempered Sigmoid Activations for Deep Learning with Differential Privacy
Oct 5, 2022 - Fine-Tuning with Differential Privacy Necessitates an Additional Hyperparameter Search
Oct 3, 2016 - Technical Report on the CleverHans v2.1.0 Adversarial Examples Library
Nov 11, 2016 - Towards the Science of Security and Privacy in Machine Learning
May 15, 2017 - Extending Defensive Distillation
May 24, 2016 - Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples
Feb 21, 2017 - On the (Statistical) Detection of Adversarial Examples
Feb 22, 2018 - Adversarial Examples that Fool both Computer Vision and Time-Limited Humans
Oct 2, 2019 - Improving Differentially Private Models with Active Learning