Why Pass@k Optimization Can Degrade Pass@1: Prompt Interference in LLM Post-training

By: Emily Chen, David Lee, Sarah Johnson, Michael Brown, Anna Garcia, Daniel Wilson, Olivia Taylor

Published: 2026-02-25

Available on arXiv (cs.AI)

Abstract

This study identifies "prompt interference," a phenomenon arising during LLM post-training that explains why optimizing for Pass@k metrics (k > 1) can inadvertently degrade Pass@1 performance, particularly in code generation tasks. The paper offers insights into reconciling these conflicting objectives for more robust and reliable LLM fine-tuning, which is vital for building effective AI coding assistants and improving software development workflows.
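For context on the two metrics the abstract contrasts, below is a minimal sketch of the standard unbiased Pass@k estimator (as popularized by Chen et al.'s Codex evaluation); the abstract does not specify which estimator the paper uses, so this is illustrative only. Given n sampled completions of which c pass the unit tests, it estimates the probability that at least one of k samples would pass:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n - c, k) / C(n, k).

    n: total completions sampled per problem
    c: completions that passed the tests
    k: samples considered per attempt
    """
    if n - c < k:
        # Fewer than k failing samples: every size-k subset
        # contains at least one passing completion.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# A task where 2 of 10 samples pass: Pass@1 is low while Pass@5 is
# much higher, illustrating how the two objectives can diverge.
p1 = pass_at_k(10, 2, 1)  # 0.2
p5 = pass_at_k(10, 2, 5)  # ~0.78
```

Because Pass@k (k > 1) rewards diversity across samples while Pass@1 rewards concentrating probability mass on a single best completion, training signals derived from the two metrics can pull the model in different directions.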
