Showing 1–20 of 29 results
Date | Name
Sep 27, 2019 | Alleviating Privacy Attacks via Causal Learning
Oct 1, 2018 | Privado: Practical and Secure DNN Inference with Enclaves
Jan 8, 2020 | To Transfer or Not to Transfer: Misclassification Attacks Against Transfer Learned Text Classifiers
Nov 8, 2019 | Collaborative Machine Learning Markets with Data-Replication-Robust Payments
Sep 18, 2022 | Distribution inference risks: Identifying and mitigating sources of leakage
Jun 12, 2020 | Leakage of Dataset Properties in Multi-Party Machine Learning
May 27, 2021 | Causally Constrained Data Synthesis for Private Data Release
Jul 25, 2020 | SOTERIA: In Search of Efficient Neural Networks for Private Inference
Feb 22, 2024 | Closed-Form Bounds for DP-SGD against Record-level Inference
Oct 24, 2023 | SoK: Memorization in General-Purpose Large Language Models
Feb 19, 2025 | The Canary's Echo: Auditing Privacy Risks of LLM-Generated Synthetic Text
Oct 7, 2021 | The Connection between Out-of-Distribution Generalization and Privacy of ML Models
Dec 5, 2019 | An Empirical Study on the Intrinsic Privacy of SGD
Sep 11, 2020 | MACE: A Flexible Framework for Membership Privacy Estimation in Generative Models
Dec 17, 2019 | Analyzing Information Leakage of Updates to Natural Language Models
Dec 21, 2022 | SoK: Let the Privacy Games Begin! A Unified Treatment of Data Inference Privacy in Machine Learning
Feb 2, 2023 | On the Efficacy of Differentially Private Few-shot Image Classification
Nov 27, 2023 | Rethinking Privacy in Machine Learning Pipelines from an Information Flow Control Perspective
Oct 4, 2024 | Permissive Information-Flow Analysis for Large Language Models
Oct 4, 2022 | Invariant Aggregator for Defending against Federated Backdoor Attacks