AllHands: Ask Me Anything on Large-scale Verbatim Feedback via Large Language Models
/ Authors
Chaoyun Zhang, Zicheng Ma, Yuhao Wu, Shilin He, Si Qin, Ming-Jie Ma, X. Qin, Yu Kang, Yuyi Liang, Xiaoyun Gou, Yajie Xue, Qingwei Lin, S. Rajmohan, Dongmei Zhang, Qi Zhang
/ Abstract
Verbatim feedback constitutes a valuable repository of user experiences, opinions, and requirements, crucial for data engineering and software development. However, extracting meaningful insights from large-scale feedback data presents a significant challenge. This paper introduces AllHands, an innovative analytic framework that transforms traditional large-scale feedback analysis through a natural language interface, leveraging large language models (LLMs). AllHands first performs classification and topic modeling on feedback to convert it into a structurally augmented format, enhancing accuracy, robustness, and generalization with the aid of LLMs. Subsequently, an LLM-based code-first agent interprets users' diverse natural language questions about the feedback, automatically translates them into executable calls to analytic tools or code, and delivers comprehensive multi-modal responses, including text, code, tables, and images. This eliminates the need to develop an individual feedback analytic tool for each request, reducing human effort and making the system more accessible and flexible to users. We evaluate AllHands across three diverse feedback datasets, demonstrating its superior efficacy in all stages of analysis, from classification and topic modeling to providing an "ask me anything" experience with comprehensive, accurate, and human-readable responses. To the best of our knowledge, AllHands is the first comprehensive feedback analysis framework supporting diverse and customized insight extraction requirements through a natural language interface.
Venue: 2025 IEEE 41st International Conference on Data Engineering (ICDE)