Integrating Large Language Models Into Recommendation via Mutual Augmentation and Adaptive Aggregation
/ Authors
Sichun Luo, Yuxuan Yao, Bowei He, Wei Shao, Jian Xu, Yinya Huang, Aojun Zhou, Xinyi Zhang, Yuanzhang Xiao, Hanxu Hou
/ Abstract
Conventional recommender systems and Large Language Model (LLM)-based recommender systems each have their strengths and weaknesses. Conventional recommendation methods excel at mining collaborative information and modeling sequential behavior, but they struggle with data sparsity and the long-tail problem. LLMs, on the other hand, are proficient at exploiting rich textual context but face challenges in mining collaborative or sequential information. Despite their individual successes, there is a significant gap in leveraging their ensemble potential to enhance recommendation performance. In this paper, we introduce a general and model-agnostic framework, Large language models with mutual augmentation and adaptive aggregation for Recommendation (Llama4Rec), which bridges this gap by explicitly ensembling an LLM with a conventional recommendation model for more effective recommendation. We propose data augmentation and prompt augmentation strategies tailored to enhance the conventional recommendation model and the LLM, respectively. An adaptive aggregation module combines the predictions of both kinds of models to refine the final recommendation results. Empirical studies on three datasets validate the superiority of Llama4Rec, demonstrating significant improvements in recommendation performance.
Journal: IEEE Journal of Selected Topics in Signal Processing
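The abstract describes an adaptive aggregation module that combines the predictions of the conventional model and the LLM into a final ranking. The paper's actual module is learned; the following is only a minimal sketch of the score-fusion idea, where `gate_weight`, `adaptive_aggregate`, and the fixed-scalar gate are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def adaptive_aggregate(conv_scores, llm_scores, gate_weight):
    """Fuse item scores from a conventional recommender and an LLM.

    In Llama4Rec the aggregation weight is adaptive (learned); here
    `gate_weight` is a fixed scalar in [0, 1] purely for illustration.
    """
    conv = np.asarray(conv_scores, dtype=float)
    llm = np.asarray(llm_scores, dtype=float)
    # Standardize each model's scores so they are on a comparable scale
    # before mixing (the two models score items very differently).
    conv = (conv - conv.mean()) / (conv.std() + 1e-8)
    llm = (llm - llm.mean()) / (llm.std() + 1e-8)
    return gate_weight * conv + (1.0 - gate_weight) * llm

# Rank four candidate items by the fused score (higher is better).
scores = adaptive_aggregate([0.9, 0.2, 0.5, 0.1],   # conventional model
                            [0.1, 0.8, 0.4, 0.2],   # LLM
                            gate_weight=0.6)
ranking = np.argsort(-scores)  # item indices, best first
```

With a gate of 0.6 the conventional model's top item dominates; shifting the gate toward 0 would let the LLM's textual-context judgment take over, which is the trade-off the adaptive module is meant to tune per prediction.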