dots.llm1 Technical Report
/ Authors
B. Huo, Binghao Tu, Chengwei Qin, Da Zheng, Debing Zhang, Dongjie Zhang, En Li, Fu-Ming Guo, Jian Yao, Jie Lou
Junfeng Tian, Li Hu, Ran Zhu, Sheng Chen, Shuo Liu, Su Guang, Te Wo, Weijun Zhang, Xiaoming Shi, Xinxin Peng, Xing Wu, Yawen Liu, Yuqiu Ji, Zengxuan Wen, Zhenhai Liu, Zichao Li, Zilong Liao
/ Abstract
Mixture of Experts (MoE) models have emerged as a promising paradigm for scaling language models efficiently by activating only a subset of parameters for each input token. In this report, we present dots.llm1, a large-scale MoE model that activates 14B parameters out of a total of 142B parameters, delivering performance on par with state-of-the-art models while reducing training and inference costs. Leveraging our meticulously crafted and efficient data processing pipeline, dots.llm1 achieves performance comparable to Qwen2.5-72B after pretraining on 11.2T high-quality tokens and post-training to fully unlock its capabilities. Notably, no synthetic data is used during pretraining. To foster further research, we open-source intermediate training checkpoints every one trillion tokens, providing valuable insights into the learning dynamics of large language models.
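The core MoE mechanism the abstract describes, routing each token to a small subset of experts so that only a fraction of the parameters are active per token, can be sketched as follows. This is a minimal illustration with hypothetical names, shapes, and top-k softmax routing; it does not reproduce dots.llm1's actual router or expert architecture.

```python
import numpy as np

def moe_forward(x, gate_w, expert_ws, top_k=2):
    """Route each token to its top_k experts and mix their outputs.

    Hypothetical sketch: x is (tokens, d), gate_w is (d, n_experts),
    and expert_ws is a list of (d, d) expert weight matrices. Real MoE
    layers use learned routers and MLP experts inside a transformer.
    """
    logits = x @ gate_w                              # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]    # top_k expert indices per token
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, top[t]]
        weights = np.exp(sel - sel.max())
        weights /= weights.sum()                     # softmax over the selected experts only
        for w, e in zip(weights, top[t]):
            out[t] += w * (x[t] @ expert_ws[e])      # only top_k experts run per token
    return out

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
gate = rng.normal(size=(8, 6))
experts = [rng.normal(size=(8, 8)) for _ in range(6)]
y = moe_forward(tokens, gate, experts, top_k=2)
```

Because each token touches only `top_k` of the experts, compute per token scales with the active parameter count (14B here) rather than the total (142B), which is the cost saving the abstract refers to.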
Journal: arXiv