From Visual Perception to Deep Empathy: An Automated Assessment Framework for House-Tree-Person Drawings Using Multimodal LLMs and Multi-Agent Collaboration

By: Yutong Zhang, Qingyu Zhang, Yaxin Wang, Yujie Li, Xiangmin Xu

Published: 2025-12-23

Subjects: cs.AI

Abstract

The House-Tree-Person (HTP) drawing test, introduced by John Buck in 1948, remains a widely used projective technique in clinical psychology. However, it has long faced challenges such as heterogeneous scoring standards, reliance on examiners' subjective experience, and the lack of a unified quantitative coding system. This paper introduces an automated assessment framework for HTP drawings using Multimodal Large Language Models (MLLMs) and multi-agent collaboration. Quantitative experiments showed that the mean semantic similarity between MLLM interpretations and human expert interpretations was approximately 0.75. Qualitative analyses demonstrated that the multi-agent system, by integrating social-psychological perspectives and destigmatizing narratives, effectively corrected visual hallucinations and produced psychological reports with high ecological validity and internal coherence. The findings confirm the potential of multimodal large models as standardized tools for projective assessment. By assigning distinct roles to its agents, the proposed framework decouples feature recognition from psychological inference and offers a new paradigm for digital mental-health services.
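The reported mean semantic similarity of ~0.75 suggests an embedding-based comparison between model-generated and expert interpretations. As an illustration only, here is a minimal sketch of mean pairwise cosine similarity over matched interpretation pairs; the actual metric, embedding model, and data in the paper are not specified here, so everything below (the toy vectors, the cosine choice) is an assumption:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def mean_semantic_similarity(model_embs, expert_embs) -> float:
    """Mean pairwise cosine similarity over matched interpretation pairs."""
    return float(np.mean([cosine_similarity(m, e)
                          for m, e in zip(model_embs, expert_embs)]))

# Toy 4-d "embeddings" (hypothetical) for three drawing interpretations.
model = [np.array([1.0, 0.2, 0.0, 0.5]),
         np.array([0.3, 1.0, 0.1, 0.0]),
         np.array([0.0, 0.5, 1.0, 0.2])]
expert = [np.array([0.9, 0.3, 0.1, 0.4]),
          np.array([0.2, 0.9, 0.2, 0.1]),
          np.array([0.1, 0.4, 0.9, 0.3])]

print(round(mean_semantic_similarity(model, expert), 3))
```

In practice the embeddings would come from a sentence-encoder applied to the free-text interpretations, after which the same averaging applies.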
