Language Models Exhibit Inconsistent Biases Towards Algorithmic Agents and Human Experts
By: Jessica Y. Bo, Lillio Mok, Ashton Anderson
Published: 2026-02-26
Category: cs.AI
Abstract
This research investigates the biases large language models exhibit when evaluating the capabilities and trustworthiness of algorithmic agents versus human experts. The findings reveal inconsistent biases, suggesting that context and framing significantly influence LLM judgments, with critical implications for designing fair and ethical AI systems that interact with both humans and other AI. Understanding these biases is essential for developing trustworthy AI applications in sensitive domains such as healthcare, finance, and legal services.