TeleQnA: A Benchmark Dataset to Assess Large Language Models Telecommunications Knowledge
Abstract
We introduce TeleQnA, the first benchmark dataset designed to evaluate the knowledge of Large Language Models (LLMs) in telecommunications. Comprising 10,000 questions and answers, the dataset draws on diverse sources, including standards and research manuscripts. This paper outlines the automated question generation framework behind the dataset and details how human input was integrated at various stages to ensure question quality. We then use the dataset to assess the capabilities of several LLMs, including GPT-3.5, GPT-4, and Mixtral. The results show that these models struggle with complex standards-related questions but are proficient at general telecom inquiries. Our results also show that supplying telecom-specific context significantly enhances their performance, pointing to the need for a specialized telecom foundation model. Finally, we share the dataset with active telecom professionals and benchmark their performance against that of the LLMs. The findings indicate that LLMs can rival active professionals in telecom knowledge, thanks to their capacity to process vast amounts of information, underscoring the potential of LLMs within this domain. The dataset is publicly available on Hugging Face (https://huggingface.co/datasets/netop/TeleQnA).
Journal: IEEE Network
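For readers who want to try a basic evaluation themselves, the sketch below loads TeleQnA and scores an LLM on its multiple-choice questions. This is a minimal illustration under stated assumptions, not the paper's evaluation pipeline: the JSON field names (`question`, `option 1` ... `option 4`, `answer`) reflect the format published on the Hugging Face page but should be verified against the downloaded file, and the OpenAI chat-completions call is a stand-in for whichever model backend is being evaluated.

```python
import json
import re

from openai import OpenAI  # assumes the openai package is installed and an API key is configured

client = OpenAI()


def ask_model(question: str, options: dict[str, str]) -> str:
    """Pose one multiple-choice question and return the model's raw reply."""
    option_text = "\n".join(f"{label}: {text}" for label, text in options.items())
    prompt = (
        "Answer the following telecommunications question by replying only "
        "with the label of the correct option (e.g. 'option 2').\n\n"
        f"{question}\n{option_text}"
    )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in; swap for the model under test
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return reply.choices[0].message.content.strip()


# TeleQnA is distributed as a JSON file of question entries; the field names
# below match the format shown on Hugging Face but should be double-checked
# against the downloaded copy.
with open("TeleQnA.json", encoding="utf-8") as f:
    dataset = json.load(f)

correct = total = 0
for entry in dataset.values():
    options = {k: v for k, v in entry.items() if k.startswith("option")}
    prediction = ask_model(entry["question"], options)
    # The ground truth is assumed to look like "option 3: <text>"; compare labels only.
    match = re.match(r"option \d+", entry["answer"])
    if match is None:
        continue  # skip entries whose answer field deviates from the assumed format
    correct += int(match.group(0) in prediction.lower())
    total += 1

print(f"Accuracy: {correct / total:.2%} on {total} questions")
```

Scoring on the option label alone keeps the comparison insensitive to how verbosely the model phrases its answer; a stricter harness would also constrain the output format or parse it more carefully.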