Knowledge Model Prompting Increases LLM Performance on Planning Tasks

By: Erik Goh, John Kos, Ashok Goel

Published: 2026-02-03

Category: cs.AI

Abstract

Large Language Models (LLMs) often struggle with reasoning and planning tasks. This paper introduces prompting with the Task-Method-Knowledge (TMK) framework, a technique that supplies the model with a structured decomposition of a problem into tasks (goals), methods (procedures for achieving them), and knowledge (domain facts the procedures rely on). TMK prompting enables models to decompose complex planning problems more effectively and to achieve higher accuracy on symbolic tasks, suggesting a way to bridge the gap between semantic approximation and symbolic manipulation.
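
The abstract does not give the paper's exact prompt schema, so the field layout, function names, and blocks-world example below are illustrative assumptions, not the authors' implementation. This is a minimal sketch of how a TMK-style decomposition might be prepended to a planning query before it is sent to a model:

# Minimal sketch of TMK-style knowledge model prompting. The plain-text
# schema (Task / Methods / Knowledge labels) and the example domain are
# assumptions; the paper's actual prompt format is not in this abstract.

def tmk_prompt(task: str, methods: list[str], knowledge: list[str], problem: str) -> str:
    """Prepend a Task-Method-Knowledge decomposition to a planning query."""
    method_lines = "\n".join(f"  {i + 1}. {m}" for i, m in enumerate(methods))
    knowledge_lines = "\n".join(f"  - {k}" for k in knowledge)
    return (
        f"Task: {task}\n"               # what is to be achieved
        f"Methods:\n{method_lines}\n"   # how the task decomposes into steps
        f"Knowledge:\n{knowledge_lines}\n"  # domain facts the steps rely on
        f"Problem: {problem}\n"
        "Solve the problem by following the methods above, "
        "citing the knowledge used at each step."
    )

if __name__ == "__main__":
    # Hypothetical blocks-world instance, chosen only to show the scaffold.
    print(tmk_prompt(
        task="Rearrange blocks from the initial state into the goal state",
        methods=[
            "Identify which blocks must move and in what order",
            "Move one block at a time onto the table or onto a clear block",
        ],
        knowledge=[
            "A block can be moved only if nothing is on top of it",
            "The goal state fixes the final stacking order of the blocks",
        ],
        problem="Initial: C on A, A on table, B on table. Goal: A on B, B on C.",
    ))

The intent of the scaffold, per the abstract's claim, is that making the task, its methods, and the supporting knowledge explicit steers the model toward stepwise decomposition rather than one-shot pattern completion.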
