Demographic Stereotype Elicitation in LLMs through Personality and Dark Triad Trait Attribution
Journal of Engineering Research and Sciences, Volume 5, Issue 1, pp. 46–65, 2026; DOI: 10.55708/js0501005
Keywords: AI Ethics, Bias, Personality, Big Five, Dark Triad, Demographic Stereotypes, Large Language Models (LLMs), Psychometrics
(This article belongs to the Special Issue on Multidisciplinary Sciences and Advanced Technology (SI-MSAT 2025) and the Section Artificial Intelligence – Computer Science (AIC).)
This study investigates how Large Language Models (LLMs), specifically Meta LLaMA-3.1-8B-Instruct, implicitly attribute personality and Dark Triad traits to demographic personas. By prompting the model with 660 synthetic identity descriptors (constructed from balanced combinations of gender, race, religion, and region) and standardized psychometric questionnaires, we extract Likert-scale responses and compute aggregated Big Five (EACNO) and Dark Triad (SD3) scores. Statistical analyses (Z-score normalization, ANOVA, PCA) reveal systematic differences across demographic categories, highlighting implicit stereotypes encoded in model representations. Key findings indicate that the model attributes significantly higher Dark Triad traits to mixed-race identities, while religious personas are consistently associated with higher Agreeableness and Conscientiousness. Furthermore, female personas are depicted with greater emotional stability and prosocial traits compared to males. These results demonstrate that demographic bias extends beyond linguistic patterns to latent psychometric behavior, raising important ethical concerns regarding automated decision-making systems.
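The pipeline the abstract describes (persona construction, in-character Likert administration, Z-score normalization, one-way ANOVA per demographic axis) can be sketched compactly. The sketch below is illustrative only: the persona fields, the single example item, and the `query_model` stub are placeholders standing in for the study's 660 descriptors, the full Big Five and SD3 questionnaires, and the actual LLaMA-3.1-8B-Instruct call.

```python
import itertools
import random

import numpy as np
from scipy import stats

# Demographic axes (placeholder values; the study balances gender, race,
# religion, and region into 660 identity descriptors).
GENDERS = ["male", "female"]
RACES = ["White", "Black", "Asian", "mixed-race"]
RELIGIONS = ["Christian", "Muslim", "nonreligious"]
REGIONS = ["North America", "Middle East", "East Asia"]

# One placeholder Likert item; the study administers the full Big Five and
# SD3 questionnaires.
ITEM = "I tend to manipulate others to get my way."  # Machiavellianism-style

def query_model(persona: str, item: str) -> int:
    """Stub for the LLM call: in the real pipeline this would prompt
    LLaMA-3.1-8B-Instruct to answer the item in character on a 1-5
    Likert scale and parse the numeric response."""
    return random.randint(1, 5)

# Collect one response per persona, grouped here along the race axis.
scores_by_race: dict[str, list[int]] = {race: [] for race in RACES}
for gender, race, religion, region in itertools.product(
    GENDERS, RACES, RELIGIONS, REGIONS
):
    persona = f"a {gender}, {race}, {religion} person from {region}"
    scores_by_race[race].append(query_model(persona, ITEM))

# Z-score normalize across all responses, then test for group differences
# along one demographic axis with a one-way ANOVA.
all_scores = np.array(
    [s for group in scores_by_race.values() for s in group], dtype=float
)
mu, sigma = all_scores.mean(), all_scores.std()
z_groups = [(np.array(g) - mu) / sigma for g in scores_by_race.values()]
f_stat, p_value = stats.f_oneway(*z_groups)
print(f"ANOVA across race groups: F = {f_stat:.2f}, p = {p_value:.3f}")
```

The same grouped structure would repeat per trait and per demographic axis; the PCA step over the aggregated per-persona trait vectors is omitted here to keep the stub minimal.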