FlowDubber: Movie Dubbing with LLM-based Semantic-aware Learning and Flow Matching based Voice Enhancing
/ Authors
/ Abstract
Movie dubbing aims to convert a script into speech that aligns with a given movie clip in both temporal and emotional aspects, while preserving the vocal timbre of a brief reference audio. Existing methods focus primarily on reducing the word error rate while overlooking lip-sync and acoustic quality. To address these issues, we propose FlowDubber, a novel dubbing architecture based on a Large Language Model (LLM) and Conditional Flow Matching (CFM), which achieves high-quality audio-visual synchronization and pronunciation by incorporating a large speech language model with dual contrastive alignment, and improves acoustic quality via Flow-based Voice Enhancing (FVE). First, we adopt Qwen2.5 as the backbone of the large speech language model to learn the in-context sequence from movie scripts and reference audio. Second, the proposed semantic-aware learning captures LLM semantic knowledge at the phoneme level, facilitating mutual alignment with lip movement from the silent video via Dual Contrastive Alignment (DCA). Third, FVE introduces LLM-based acoustic flow-matching guidance that strengthens clarity through a decoupled Classifier-Free Guidance (CFG) enhancement. Extensive experiments demonstrate that our method outperforms several state-of-the-art methods on two primary benchmarks. Demos are available at https://galaxycong.github.io/LLM-Flow-Dubber/.
Journal: Proceedings of the 33rd ACM International Conference on Multimedia