MOSS-Speech: Towards True Speech-to-Speech Models Without Text Guidance
/ Authors
Xing-quan Zhao, Zhe Xu, Qinyuan Cheng, Zhaoye Fei, Luozhijie Jin, Yang Wang, Hanfu Chen, Ya Jiang, Qinghui Gao, Ke Chen, Ruixiao Li, Mingshu Chen, Ruimin Wang, Wenbo Zhang, Yiyan Zhang, Donghua Yu, Yang Gao, Xiaogui Yang, Y. Gong, Yuanfang Xu, Yaqian Zhou, Xuanjing Huang, Xipeng Qiu
/ Abstract
Spoken dialogue systems often rely on cascaded pipelines that transcribe, process, and resynthesize speech. While effective, this design discards paralinguistic cues and limits expressivity. Recent end-to-end methods reduce latency and better preserve these cues, yet still rely on text intermediates, creating a fundamental bottleneck. We present MOSS-Speech, a true speech-to-speech large language model that directly understands and generates speech without relying on text guidance. Our approach combines a modality-based layer-splitting architecture with a frozen pre-training strategy, preserving the reasoning and knowledge of pretrained text LLMs while adding native speech capabilities. Experiments show that our model achieves state-of-the-art results in spoken question answering and delivers speech-to-speech performance comparable to that of existing text-guided systems, while still maintaining competitive text performance. By narrowing the gap between text-guided and direct speech generation, our work establishes a new paradigm for expressive and efficient end-to-end speech interaction.
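To make the layer-splitting idea in the abstract concrete, the PyTorch sketch below shows one plausible reading of it: a transformer block whose attention sublayer is inherited from a pretrained text LLM and kept frozen, while each modality (text or speech) gets its own feed-forward branch. This is a minimal illustration under our own assumptions; the actual split points, sublayer choices, and parameterization are those defined in the paper, and all names here (`ModalitySplitBlock`, `ffn_by_modality`) are hypothetical.

```python
import torch
import torch.nn as nn

class ModalitySplitBlock(nn.Module):
    """Illustrative modality-split transformer block (not the paper's exact design).

    The self-attention sublayer is shared across modalities and frozen,
    preserving the pretrained text LLM's weights; the feed-forward
    sublayer is duplicated per modality, and tokens are routed by a
    per-token modality id (0 = text, 1 = speech).
    """

    def __init__(self, d_model: int, n_heads: int, d_ff: int):
        super().__init__()
        # Shared attention, assumed to be initialized from the text LLM
        # and frozen so its reasoning and knowledge are preserved.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        for p in self.attn.parameters():
            p.requires_grad = False

        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

        # One trainable feed-forward branch per modality.
        def make_ffn() -> nn.Sequential:
            return nn.Sequential(
                nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
            )
        self.ffn_by_modality = nn.ModuleList([make_ffn(), make_ffn()])

    def forward(self, x: torch.Tensor, modality: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); modality: (batch, seq) of 0/1 ids.
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out

        # Route each token through its modality's feed-forward branch.
        h = self.norm2(x)
        out = torch.zeros_like(h)
        for m, ffn in enumerate(self.ffn_by_modality):
            mask = (modality == m).unsqueeze(-1)  # (batch, seq, 1)
            out = torch.where(mask, ffn(h), out)
        return x + out
```

Under this reading, only the modality-specific branches receive gradient updates during speech training, which is one way a frozen pre-training strategy could add native speech capability without degrading the text backbone.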
Journal: arXiv