Date | Name
Apr 26, 2024 | When to Trust LLMs: Aligning Confidence with Response Quality
Jul 19, 2020 | Adversarial Immunization for Certifiable Robustness on Graphs
Feb 16, 2023 | Graph Adversarial Immunization for Certifiable Robustness
Aug 3, 2022 | Adversarial Camouflage for Node Injection Attack on Graphs
May 25, 2023 | IDEA: Invariant Defense for Graph Adversarial Robustness
Aug 30, 2021 | Single Node Injection Attack against Graph Neural Networks
Feb 17, 2025 | ToolCoder: A Systematic Code-Empowered Tool Learning Framework for Large Language Models
Aug 10, 2025 | Omni-SafetyBench: A Benchmark for Safety Evaluation of Audio-Visual Large Language Models
Aug 20, 2024 | Accelerating the Surrogate Retraining for Poisoning Attacks against Recommender Systems
Dec 10, 2025 | d-TreeRPO: Towards More Reliable Policy Optimization for Diffusion Language Models
Nov 13, 2025 | AgentEvolver: Towards Efficient Self-Evolving Agent System
Aug 22, 2021 | Signed Bipartite Graph Neural Networks
Sep 5, 2023 | Robust Recommender System: A Survey and Future Directions
May 28, 2025 | Enhancing Tool Learning in Large Language Models with Hierarchical Error Checklists
May 26, 2025 | Inference-time Alignment in Continuous Space
Jul 12, 2021 | INMO: A Model-Agnostic and Scalable Module for Inductive Collaborative Filtering
Dec 1, 2025 | CuES: A Curiosity-driven and Environment-grounded Synthesis Framework for Agentic RL
Feb 17, 2025 | On the Diminishing Returns of Complex Robust RAG Training in the Era of Powerful LLMs
May 9, 2023 | Popularity Debiasing from Exposure to Interaction in Collaborative Filtering
May 26, 2025 | Incentivizing Strong Reasoning from Weak Supervision