AffectGPT-R1: Leveraging Reinforcement Learning for Open-Vocabulary Multimodal Emotion Recognition
/ Authors
/ Abstract
Open-Vocabulary Multimodal Emotion Recognition (OV-MER) aims to predict emotions without being constrained by a fixed label space, enabling fine-grained emotion understanding. Unlike traditional discriminative methods, OV-MER leverages generative models to capture the full spectrum of emotions and employs emotion wheels (EWs) for metric calculation. Previous approaches (e.g., AffectGPT) rely primarily on a token-level loss during training. However, this objective is misaligned with the EW-based metrics used to evaluate OV-MER, and these metrics are non-differentiable and therefore cannot be optimized via gradient backpropagation. To address this limitation, we propose AffectGPT-R1, a reinforcement learning framework that treats EW-based metrics as a reward function and applies policy optimization to maximize this reward. Additionally, we introduce an explicit reasoning process and examine its necessity for OV-MER. To further guide model behavior, we incorporate auxiliary rewards that regularize both emotion reasoning and emotion prediction, and we apply length penalties to mitigate reward hacking. Experimental results demonstrate that AffectGPT-R1 yields significant performance improvements on OV-MER. Moreover, our approach enhances generalized emotion understanding, achieving state-of-the-art results on MER-UniBench. Our code is provided in the supplementary material and will be released to facilitate future research.
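To make the reward design described above concrete, the following is a minimal sketch (not the authors' implementation) of a composite reward that combines an emotion-wheel-based matching score with auxiliary rewards and a length penalty; the EW clusters, helper names, and weights are illustrative assumptions, and in practice such a scalar, non-differentiable reward would be maximized with a policy-optimization method rather than backpropagation.

```python
# Illustrative sketch only: the emotion-wheel clusters, weights, and helper
# names are assumptions, not the authors' actual reward definition.

EMOTION_WHEEL = {
    # hypothetical coarse clusters of an emotion wheel
    "joy": {"joy", "happiness", "delight", "amusement"},
    "anger": {"anger", "annoyance", "rage"},
    "sadness": {"sadness", "grief", "disappointment"},
}

def ew_cluster(label: str) -> str | None:
    """Map a fine-grained emotion label to its emotion-wheel cluster, if any."""
    for cluster, members in EMOTION_WHEEL.items():
        if label.lower() in members:
            return cluster
    return None

def ew_set_f1(pred: set[str], gold: set[str]) -> float:
    """Set-level F1 where two labels match if they share an EW cluster."""
    pred_clusters = {ew_cluster(p) for p in pred} - {None}
    gold_clusters = {ew_cluster(g) for g in gold} - {None}
    if not pred_clusters or not gold_clusters:
        return 0.0
    overlap = len(pred_clusters & gold_clusters)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_clusters)
    recall = overlap / len(gold_clusters)
    return 2 * precision * recall / (precision + recall)

def composite_reward(pred_labels: set[str], gold_labels: set[str],
                     reasoning: str, max_reason_tokens: int = 256) -> float:
    """EW-metric reward plus auxiliary rewards, minus a length penalty."""
    metric_reward = ew_set_f1(pred_labels, gold_labels)   # main, non-differentiable EW metric
    pred_reward = 0.1 if pred_labels else 0.0              # auxiliary: a prediction was produced
    reason_reward = 0.1 if reasoning.strip() else 0.0      # auxiliary: an explicit reasoning trace exists
    overlength = max(0, len(reasoning.split()) - max_reason_tokens)
    length_penalty = 0.001 * overlength                    # discourage reward hacking via verbosity
    return metric_reward + pred_reward + reason_reward - length_penalty

# Example: "delight" and "joy"/"happiness" fall in the same EW cluster, so the
# metric reward is 1.0 even though the surface labels differ.
print(composite_reward({"delight"}, {"joy", "happiness"},
                       reasoning="The smile and upbeat tone suggest joy."))
```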