Date / Name

Jun 16, 2025 / MiniMax-M1: Scaling Test-Time Compute Efficiently with Lightning Attention
Jan 14, 2025 / MiniMax-01: Scaling Foundation Models with Lightning Attention
Nov 16, 2021 / INTERN: A New Learning Paradigm Towards General Vision
Nov 5, 2021 / MQBench: Towards Reproducible and Deployable Model Quantization Benchmark
Oct 9, 2020 / Once Quantization-Aware Training: High Performance Extremely Low-bit Architecture Search
Oct 2, 2020 / Dynamic Graph: Learning Instance-aware Connectivity for Neural Networks
Sep 24, 2020 / MimicDet: Bridging the Gap Between One-Stage and Two-Stage Object Detection
Aug 19, 2020 / Learning Connectivity of Neural Networks from a Topological Perspective
May 21, 2020 / Powering One-shot Topological NAS with Stabilized Share-parameter Proxy
May 7, 2020 / DMCP: Differentiable Markov Channel Pruning for Neural Networks
Mar 11, 2020 / Equalization Loss for Long-Tailed Object Recognition
Dec 29, 2019 / Towards Unified INT8 Training for Convolutional Neural Network
Dec 24, 2019 / Computation Reallocation for Object Detection
Nov 12, 2019 / Equalization Loss for Large Vocabulary Instance Segmentation
Sep 2, 2019 / Towards Flops-constrained Face Recognition
Aug 14, 2019 / Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks
Jun 13, 2019 / Grid R-CNN Plus: Faster and Better
Feb 19, 2019 / WIDER Face and Pedestrian Challenge 2018: Methods and Results
Dec 5, 2018 / An Embarrassingly Simple Approach for Knowledge Distillation
Nov 29, 2018 / Grid R-CNN