SokoBench: Evaluating Long-Horizon Planning and Reasoning in Large Language Models

By: Sebastiano Monti, Carlo Nicolini, Gianni Pellegrini, Jacopo Staiano, Bruno Lepri

Published: 2026-02-03

#cs.AI

Abstract

SokoBench is a benchmark for evaluating the long-horizon planning and reasoning capabilities of large language models. Such capabilities are critical for building more capable and reliable LLMs for complex real-world tasks that require sequential decision-making and intricate problem-solving.
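The abstract does not describe the task format, but the name suggests Sokoban-style puzzles, a classic long-horizon planning domain: each box push must be chosen with many future moves in mind, and a single wrong push can make the level unsolvable. As an illustration of the kind of search such puzzles demand (not the SokoBench harness itself; the level, symbols, and function names below are illustrative assumptions), here is a minimal breadth-first Sokoban solver sketch:

```python
from collections import deque

# Illustrative micro-level, not from SokoBench:
# '#' wall, '.' goal square, '$' box, '@' player.
LEVEL = [
    "#####",
    "#@$.#",
    "#####",
]

def parse(level):
    """Split a level into walls, goals, box positions, and the player position."""
    walls, goals, boxes, player = set(), set(), set(), None
    for r, row in enumerate(level):
        for c, ch in enumerate(row):
            if ch == '#': walls.add((r, c))
            if ch == '.': goals.add((r, c))
            if ch == '$': boxes.add((r, c))
            if ch == '@': player = (r, c)
    return walls, goals, frozenset(boxes), player

def solve(level):
    """Breadth-first search over (player, boxes) states; returns a move string
    like 'RRU', or None if the level is unsolvable."""
    walls, goals, boxes, player = parse(level)
    start = (player, boxes)
    queue, seen = deque([(start, "")]), {start}
    moves = {'U': (-1, 0), 'D': (1, 0), 'L': (0, -1), 'R': (0, 1)}
    while queue:
        (pos, cur_boxes), plan = queue.popleft()
        if cur_boxes <= goals:          # every box rests on a goal square
            return plan
        for m, (dr, dc) in moves.items():
            nxt = (pos[0] + dr, pos[1] + dc)
            if nxt in walls:
                continue
            new_boxes = cur_boxes
            if nxt in cur_boxes:        # walking into a box pushes it
                push = (nxt[0] + dr, nxt[1] + dc)
                if push in walls or push in cur_boxes:
                    continue            # push blocked
                new_boxes = frozenset((cur_boxes - {nxt}) | {push})
            state = (nxt, new_boxes)
            if state not in seen:
                seen.add(state)
                queue.append((state, plan + m))
    return None
```

Even this uninformed search makes the evaluation angle clear: the state space grows exponentially with level size and box count, so an LLM asked to emit a valid move sequence directly must carry out the equivalent multi-step lookahead in its reasoning.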
