Date / Name

Apr 18, 2024 - AdvisorQA: Towards Helpful and Harmless Advice-seeking Question Answering with Collective Intelligence
Feb 20, 2025 - Drift: Decoding-time Personalized Alignments with Implicit User Preferences
Oct 9, 2024 - Guaranteed Generation from Large Language Models
Nov 16, 2023 - LifeTox: Unveiling Implicit Toxicity in Life Advice
Jun 13, 2024 - VLind-Bench: Measuring Language Priors in Large Vision-Language Models
Nov 17, 2023 - Breaking Temporal Consistency: Generating Video Universal Adversarial Perturbations Using Image Models
Dec 11, 2024 - Doubly-Universal Adversarial Perturbations: Deceiving Vision-Language Models Across Both Images and Text with a Single Perturbation
Feb 13, 2026 - Beyond Normalization: Rethinking the Partition Function as a Difficulty Scheduler for RLVR
Dec 21, 2022 - Critic-Guided Decoding for Controlled Text Generation
Oct 17, 2024 - Mitigating Hallucinations in Large Vision-Language Models via Summary-Guided Decoding
Apr 14, 2026 - ReflectCAP: Detailed Image Captioning with Reflective Memory
Mar 25, 2026 - Why Does Self-Distillation (Sometimes) Degrade the Reasoning Capability of LLMs?
Jul 11, 2024 - VideoMamba: Spatio-Temporal Selective State Space Model
Sep 25, 2024 - A Character-Centric Creative Story Generation via Imagination
Sep 22, 2025 - Program Synthesis via Test-Time Transduction
Feb 8, 2026 - CausalArmor: Efficient Indirect Prompt Injection Guardrails via Causal Attribution
May 26, 2025 - Benign-to-Toxic Jailbreaking: Inducing Harmful Responses from Harmless Prompts
May 21, 2025 - ReflAct: World-Grounded Decision Making in LLM Agents via Goal-State Reflection
Mar 16, 2026 - Understanding Reasoning in LLMs through Strategic Information Allocation under Uncertainty