Publications
Jun 9, 2021 - Taxonomy of Machine Learning Safety: A Survey and Primer
Jun 7, 2021 - Shifting Transformation Learning for Out-of-Distribution Detection
Jul 24, 2020 - Machine Learning Explanations to Prevent Overtrust in Fake News Detection
Dec 20, 2019 - Practical Solutions for Machine Learning Safety in Autonomous Vehicles
Jul 8, 2019 - XFake: Explainable Fake News Detector with Visualizations
May 19, 2019 - Predicting Model Failure using Saliency Maps in Autonomous Driving Systems
Apr 4, 2019 - Open Issues in Combating Fake News: Interpretability as an Opportunity
Nov 29, 2018 - Combating Fake News with Interpretable News Feed Algorithms
Nov 28, 2018 - A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems
Jan 16, 2018 - ProvThreads: Analytic Provenance Visualization and Segmentation
Jan 16, 2018 - Analytic Provenance Datasets: A Data Repository of Human Analysis Activity and Interaction Logs
Jan 16, 2018 - A Human-Grounded Evaluation Benchmark for Local Explanations of Machine Learning