Hera: A Heterogeneity-Aware Multi-Tenant Inference Server for Personalized Recommendations
/ Authors
/ Abstract
While providing low latency is a fundamental requirement in deploying recommendation services, achieving high resource utilization is also crucial for cost-effectively operating the datacenter. Co-locating model workers is an effective way to maximize query-level parallelism and server throughput, but the interference caused by concurrent workers contending for shared resources can prevent queries from meeting their SLAs. Hera exploits the heterogeneous memory requirements of multi-tenant recommendation models to intelligently determine a productive set of co-located models and their resource allocation, providing fast response times while achieving high throughput. Hera achieves an average 37.3% improvement in effective machine utilization, enabling a 26% reduction in required servers and significantly improving upon the baseline recommendation inference server.
Venue: 2025 34th International Conference on Parallel Architectures and Compilation Techniques (PACT)