Challenging the Validity of Personality Tests for Large Language Models
/ Authors
/ Abstract
With large language models (LLMs) like GPT-4 appearing to behave in increasingly human-like ways in text-based interactions, it has become popular to evaluate personality traits of LLMs using questionnaires originally developed for humans. While reusing existing measures is a resource-efficient way to evaluate LLMs, careful adaptations are usually required to ensure that assessment results remain valid even across human subpopulations. Works that have applied human personality tests to LLMs have not investigated whether these tests measure in LLMs what they measure in humans. In this work, we provide evidence that LLMs’ responses to personality tests systematically deviate from human responses, implying that the results of these tests cannot be interpreted in the same way. Concretely, reverse-coded items (so-called false-keyed items, e.g., “I am introverted” vs. “I am extraverted”) are often both answered affirmatively. Furthermore, variation across prompts designed to steer LLMs toward simulating particular personality types does not follow the clear separation into five independent personality factors that is observed in human samples. In light of these results, we believe it is important to investigate tests’ validity for LLMs before drawing conclusions about potentially ill-defined concepts such as the “personality” of LLMs.
Venue: Proceedings of the 2025 Equity and Access in Algorithms, Mechanisms, and Optimization