Showing 1–16 of 16 results
Mar 7, 2024 - AUFormer: Vision Transformers are Parameter-Efficient Facial Action Unit Detectors
Aug 5, 2025 - CoEmoGen: Towards Semantically-Coherent and Scalable Emotional Image Content Generation
Aug 15, 2023 - Multi-scale Promoted Self-adjusting Correlation Learning for Facial Action Unit Detection
May 19, 2025 - FEALLM: Advancing Facial Emotion Analysis in Multimodal Large Language Models with Emotional Synergy and Reasoning
Aug 21, 2024 - EMO-LLaMA: Enhancing Facial Emotion Understanding with Instruction Tuning
Apr 12, 2026 - TurboEvolve: Towards Fast and Robust LLM-Driven Program Evolution
Mar 30, 2025 - AU-TTT: Vision Test-Time Training Model for Facial Action Unit Detection
Jun 3, 2025 - ANT: Adaptive Neural Temporal-Aware Text-to-Motion Model
Jun 23, 2025 - MedTVT-R1: A Multimodal LLM Empowering Medical Reasoning and Diagnosis
Apr 26, 2026 - $Z^2$-Sampling: Zero-Cost Zigzag Trajectories for Semantic Alignment in Diffusion Models
Mar 9, 2024 - GPT as Psychologist? Preliminary Evaluations for GPT-4V on Visual Affective Computing
May 30, 2025 - Period-LLM: Extending the Periodic Capability of Multimodal Large Language Models
Nov 29, 2025 - POLARIS: Projection-Orthogonal Least Squares for Robust and Adaptive Inversion in Diffusion Models
Jul 29, 2025 - AU-LLM: Micro-Expression Action Unit Detection via Enhanced LLM-Based Feature Fusion
Mar 9, 2026 - $Δ$VLA: Prior-Guided Vision-Language-Action Models via World Knowledge Variation
Mar 9, 2026 - AULLM++: Structural Reasoning with Large Language Models for Micro-Expression Recognition