MemFine: Memory-Aware Fine-Grained Scheduling for MoE Training
/ Authors
Lu Zhao, Rong Shi, Yue Sun, Shaoqing Zhang, Hongxin Niu, Yueqiang Chen, Baoguo He, Hongfeng Sun, Ziqing Yin, Shangchao Su, Zhiyan Cui, Liang Dong, Xiyuan Li, Lingbin Wang, Jianwei He, Jie Ma, Wei Huang, Jianglei Tong, Dongdong Gao, Jian Zhang, Hong Tian, Zhaoqun Sun, Huifeng Shen
/ Abstract
The training of large-scale Mixture of Experts (MoE) models faces a critical memory bottleneck due to severe load imbalance caused by dynamic token routing. This imbalance leads to memory overflow on GPUs with limited capacity, constraining model scalability. Existing load-balancing methods, which cap expert capacity, compromise model accuracy and still overflow memory on constrained hardware. To address this, we propose MemFine, a memory-aware fine-grained scheduling framework for MoE training. MemFine decomposes token distribution and expert computation into manageable chunks and applies a chunked recomputation strategy, dynamically tuned through a theoretical memory model to balance memory efficiency and throughput. Experiments demonstrate that MemFine reduces activation memory by 48.03% and improves throughput by 4.42% compared with full-recomputation baselines, enabling stable large-scale MoE training on memory-limited GPUs.
Journal: ArXiv
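The chunked recomputation idea described in the abstract can be illustrated with a minimal PyTorch-style sketch. All names below (expert_ffn, chunked_expert_forward, chunk_size) are hypothetical and not taken from the paper; the sketch only shows how per-chunk activation checkpointing bounds live activation memory to one chunk of routed tokens, and does not reproduce MemFine's scheduler or its memory model.

```python
import torch
from torch.utils.checkpoint import checkpoint

def expert_ffn(x, w_in, w_out):
    # A simple expert MLP; its intermediate activation (after relu) is the
    # memory-heavy tensor that chunked recomputation avoids storing.
    return torch.relu(x @ w_in) @ w_out

def chunked_expert_forward(tokens, w_in, w_out, chunk_size):
    """Process routed tokens in fixed-size chunks, recomputing each chunk's
    intermediate activations during the backward pass instead of storing them."""
    outputs = []
    for chunk in torch.split(tokens, chunk_size, dim=0):
        # checkpoint() discards the chunk's intermediates in the forward pass
        # and recomputes them in backward, so at most one chunk's activations
        # are live at a time even when routing is imbalanced.
        outputs.append(checkpoint(expert_ffn, chunk, w_in, w_out, use_reentrant=False))
    return torch.cat(outputs, dim=0)

if __name__ == "__main__":
    hidden, ffn = 1024, 4096
    w_in = torch.randn(hidden, ffn, requires_grad=True)
    w_out = torch.randn(ffn, hidden, requires_grad=True)
    # An imbalanced routing outcome: this expert received many tokens.
    tokens = torch.randn(8192, hidden, requires_grad=True)
    out = chunked_expert_forward(tokens, w_in, w_out, chunk_size=1024)
    out.sum().backward()
```

In this sketch the chunk_size plays the role of the knob that a memory model would choose: smaller chunks lower peak activation memory at the cost of more recomputation and kernel launches, which is the memory/throughput trade-off the abstract refers to.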