Showing 1–12 of 12 results
Date | Name
Jan 26, 2026 | TEA-Bench: A Systematic Benchmarking of Tool-enhanced Emotional Support Dialogue Agent
May 21, 2025 | Teaching Language Models to Evolve with Users: Dynamic Profile Modeling for Personalized Alignment
May 21, 2025 | When Less Language is More: Language-Reasoning Disentanglement Makes LLMs Better Multilingual Reasoners
Jan 20, 2026 | OP-Bench: Benchmarking Over-Personalization for Memory-Augmented Personalized Conversational Agents
Feb 28, 2025 | Beware of Your Po! Measuring and Mitigating AI Safety Risks in Role-Play Fine-Tuning of LLMs
Mar 7, 2025 | Chain of Strategy Optimization Makes Large Language Models Better Emotional Supporter
Mar 23, 2025 | Trade-offs in Large Reasoning Models: An Empirical Analysis of Deliberative and Adaptive Reasoning over Foundational Capabilities
May 22, 2024 | Towards Comprehensive Post Safety Alignment of Large Language Models via Safety Patching
Oct 6, 2024 | Lens: Rethinking Multilingual Enhancement for Large Language Models
Jan 25, 2026 | When Personalization Legitimizes Risks: Uncovering Safety Vulnerabilities in Personalized Dialogue Agents
Jun 18, 2025 | Exploring and Exploiting the Inherent Efficiency within Large Reasoning Models for Self-Guided Efficiency Enhancement
Apr 13, 2025 | AdaSteer: Your Aligned LLM is Inherently an Adaptive Jailbreak Defender