Showing 1–20 of 27 results
Date / Name
May 8, 2026 / Securing Computer-Use Agents: A Unified Architecture-Lifecycle Framework for Deployment-Grounded Reliability
Feb 20, 2023 / CISum: Learning Cross-modality Interaction to Enhance Multimodal Semantic Coverage for Multimodal Summarization
Dec 16, 2021 / Hierarchical Cross-Modality Semantic Correlation Learning Model for Multimodal Summarization
Jun 5, 2025 / Lifelong Evolution: Collaborative Learning between Large and Small Language Models for Continuous Emergent Fake News Detection
Nov 19, 2024 / HNCSE: Advancing Sentence Embeddings via Hybrid Contrastive Learning with Hard Negatives
Oct 18, 2024 / Feint and Attack: Attention-Based Strategies for Jailbreaking and Protecting LLMs
Jan 4, 2026 / How Real is Your Jailbreak? Fine-grained Jailbreak Evaluation with Anchored Reference
Jun 6, 2025 / The Scales of Justitia: A Comprehensive Survey on Safety Evaluation of LLMs
Feb 10, 2026 / The Devil Behind Moltbook: Anthropic Safety is Always Vanishing in Self-Evolving AI Societies
Mar 30, 2024 / FineFake: A Knowledge-Enriched Dataset for Fine-Grained Multi-Domain Fake News Detection
Mar 17, 2025 / MirrorShield: Towards Universal Defense Against Jailbreaks via Entropy-Guided Mirror Crafting
Feb 20, 2025 / Beyond Self-Talk: A Communication-Centric Survey of LLM-Based Multi-Agent Systems
Jan 7, 2026 / Jailbreaking LLMs & VLMs: Mechanisms, Evaluation, and Unified Defense
Jan 4, 2026 / LANCET: Neural Intervention via Structural Entropy for Mitigating Faithfulness Hallucinations in LLMs
Aug 5, 2025 / Attack the Messages, Not the Agents: A Multi-round Adaptive Stealthy Tampering Framework for LLM-MAS
Jul 8, 2025 / LLMs are Introvert
Jan 17, 2021 / A Framework of State-dependent Utility Optimization with General Benchmarks
Jun 5, 2025 / Diffusion with a Linguistic Compass: Steering the Generation of Clinically Plausible Future sMRI Representations for Early MCI Conversion Prediction
Dec 3, 2025 / From static to adaptive: immune memory-based jailbreak detection for large language models
Jun 5, 2025 / One SPACE to Rule Them All: Jointly Mitigating Factuality and Faithfulness Hallucinations in LLMs