Value Lens: A Text-Based Model for Detecting Human Values using Generative AI

By: Yixuan Wang, Minjun Zhu, Qiujie Xie, Qiyao Sun, Zhen Lin, Sifan Liu, Yue Zhang

Published: 2025-12-17

Abstract

This article presents Value Lens, a text-based model that detects human values using generative artificial intelligence, specifically large language models (LLMs). The model operates in two stages: it first formulates a formal theory of values, then identifies those values within a given text. By assessing how well decisions reflect human values, the approach addresses the challenge of aligning autonomous decision-making systems with them.

Impact

Practical

💡 Simple Explanation

Imagine a pair of glasses that lets you read a text and see not just what the person is saying, but what core beliefs drive them (like 'Tradition' or 'Freedom'). This paper creates a software version of those glasses using AI. Instead of just spotting happy or sad words, this 'Value Lens' uses advanced AI to explain the moral values hidden in sentences. It helps computers understand humans on a deeper, more philosophical level.

🎯 Problem Statement

Traditional NLP models struggle to detect implicit human values because they rely on surface-level keywords. Values are often abstract and context-dependent, making it difficult for older discriminative models (classifiers) to accurately identify the underlying moral compass of a speaker.

🔬 Methodology

The authors employ a generative approach where an LLM is prompted with definitions from a Value Taxonomy (e.g., Schwartz). They utilize Chain-of-Thought (CoT) prompting to force the model to reason through the text before assigning a label. The method involves fine-tuning on a small set of expert-annotated data to align the model's outputs with psychological standards.
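
The paper's actual prompts are not reproduced here, so the following is a minimal sketch of the two-stage idea under stated assumptions: the Schwartz value subset, the prompt wording, and the provider-agnostic `complete` callable are all illustrative, standing in for whatever taxonomy and LLM client the authors used.

```python
from typing import Callable, List

# Stage 1: a formal value taxonomy the model is primed with.
# These four higher-order Schwartz dimensions and one-line glosses
# are an illustrative subset, not the paper's definitions.
SCHWARTZ_VALUES = {
    "Self-Transcendence": "concern for the welfare of others and of nature",
    "Conservation": "preservation of tradition, security, and conformity",
    "Openness to Change": "independence of thought, action, and novelty",
    "Self-Enhancement": "pursuit of personal success, power, and status",
}

def build_cot_prompt(text: str) -> str:
    """Stage 2: ask the model to reason step by step before labeling."""
    definitions = "\n".join(f"- {name}: {desc}" for name, desc in SCHWARTZ_VALUES.items())
    return (
        "You are annotating human values in text using this taxonomy:\n"
        f"{definitions}\n\n"
        f"Text: \"{text}\"\n\n"
        "First, reason step by step about what motivates the speaker. "
        "Then answer on a final line as: VALUES: <comma-separated labels>."
    )

def detect_values(text: str, complete: Callable[[str], str]) -> List[str]:
    """Run the CoT prompt through any LLM client and parse the label line."""
    reply = complete(build_cot_prompt(text))
    for line in reversed(reply.splitlines()):
        if line.strip().upper().startswith("VALUES:"):
            labels = line.split(":", 1)[1]
            return [v.strip() for v in labels.split(",") if v.strip()]
    return []  # model did not follow the output format
```

Pinning the answer to a fixed final line ("VALUES: …") preserves the free-form chain-of-thought while leaving a single parseable label line, a common pattern when CoT prompting is used for classification.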

📊 Results

Value Lens achieved a significant improvement in F1-score compared to BERT-based baselines on the validation set. Specifically, it showed superior performance in detecting 'Self-Transcendence' and 'Conservation' values. Qualitative analysis revealed that the generative model could correctly identify values in sarcastic or indirect text where keyword-based models failed.
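
The paper's actual scores are not reproduced here; the toy labels below only illustrate how a per-value F1 comparison of this kind is typically computed with scikit-learn.

```python
from sklearn.metrics import f1_score

VALUES = ["Self-Transcendence", "Conservation", "Openness to Change", "Self-Enhancement"]

# Toy gold annotations and model predictions, one label per text.
gold = ["Conservation", "Self-Transcendence", "Conservation",
        "Openness to Change", "Self-Enhancement"]
pred = ["Conservation", "Self-Transcendence", "Self-Enhancement",
        "Openness to Change", "Self-Enhancement"]

# Per-class F1 shows which value dimensions a model handles well.
per_class = f1_score(gold, pred, labels=VALUES, average=None)
for value, score in zip(VALUES, per_class):
    print(f"{value}: {score:.2f}")

# Macro-F1 weights every value equally, so rare values count as much as common ones.
print("macro-F1:", f1_score(gold, pred, labels=VALUES, average="macro"))
```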

Key Takeaways

Generative AI offers a promising path for 'empathetic' computing by understanding the *why* behind human text. While computational costs are higher, the ability to explain value judgments makes this approach superior for sensitive applications like content moderation and alignment research.

🔍 Critical Analysis

The paper successfully bridges the gap between abstract social-science theory and practical NLP application. However, it leans heavily on the premise that LLMs possess an understanding of human values comparable to that of human annotators, which remains contentious. The methodology is sound, but the evaluation would benefit from culturally more diverse datasets before claims of universality can be supported.

💰 Practical Applications

  • SaaS API charging per 1k tokens for value analysis (a hypothetical client sketch follows this list).
  • Premium feature for social listening tools (e.g., Brandwatch plugin).
  • Consulting service for corporate culture alignment audits.
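
None of these services exist yet; the sketch below only illustrates what a metered value-analysis endpoint, as in the first bullet, might look like from the client side. The URL, request fields, and response shape are all hypothetical.

```python
import json
from urllib import request

API_URL = "https://api.example.com/v1/value-analysis"  # placeholder endpoint

def analyze(text: str, api_key: str) -> dict:
    """POST text for value analysis; billing would be metered per 1k input tokens."""
    payload = json.dumps({"text": text}).encode("utf-8")
    req = request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

# Illustrative response shape: {"values": ["Conservation"], "tokens_billed": 42}
```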

🏷️ Tags

#NLP · #Generative AI · #Human Values · #AI Alignment · #Schwartz Theory · #LLM

🏢 Relevant Industries

Marketing & Advertising · Human Resources · Social Media · Market Research · Trust & Safety