Large Language Model-Powered Evolutionary Code Optimization on a Phylogenetic Tree

By: Leyi Zhao, Weijie Huang, Yitong Guo, Jiang Bian, Chenghong Wang, Xuhong Zhang

Published: 2026-01-20

Tags: cs.AI, LLM, Knowledge Graph, Scientific Discovery, Neuro-Symbolic AI, NLP, GraphRAG, Pharmaceuticals, Biotechnology, Materials Science, Academic Research, Chemical Engineering

Abstract

Optimizing scientific computing algorithms for modern GPUs is a labor-intensive and iterative process involving repeated code modification, benchmarking, and tuning across complex hardware and software stacks. This paper introduces a novel framework that leverages large language models (LLMs) to automate and enhance this optimization process. We propose PhyloEvolve, an LLM-agent system that reframes GPU-oriented algorithm optimization as an In-Context Reinforcement Learning (ICRL) problem, enabling trajectory-conditioned reuse of optimization experience without model retraining. The system uses a phylogenetic tree representation to organize optimization history and demonstrates consistent improvements in runtime, memory efficiency, and correctness on scientific computing workloads.
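The abstract describes an evolutionary loop in which candidate code variants and their benchmark results are organized as nodes on a phylogenetic tree, with LLM-generated mutations forming children of their parent variant. The minimal sketch below illustrates that bookkeeping; all names (`VariantNode`, `best_variant`, the toy kernel strings) are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the phylogenetic-tree bookkeeping described in the
# abstract: each node holds one code variant plus its benchmark results, and
# children are LLM-proposed mutations of their parent. Names are illustrative.

@dataclass
class VariantNode:
    code: str                      # candidate kernel / algorithm source
    runtime_ms: float              # measured runtime (lower is better)
    correct: bool                  # passed numerical-correctness checks
    children: list = field(default_factory=list)

    def add_child(self, code, runtime_ms, correct):
        """Attach an LLM-generated mutation of this variant."""
        child = VariantNode(code, runtime_ms, correct)
        self.children.append(child)
        return child

def best_variant(root):
    """Walk the tree and return the fastest correct variant found so far."""
    best = root if root.correct else None
    for child in root.children:
        cand = best_variant(child)
        if cand and (best is None or cand.runtime_ms < best.runtime_ms):
            best = cand
    return best

# Toy usage: a baseline and two proposed mutations, one of which fails
# correctness checks and is therefore never selected.
root = VariantNode("baseline_kernel()", runtime_ms=120.0, correct=True)
root.add_child("fused_kernel()", runtime_ms=80.0, correct=True)
root.add_child("unsafe_kernel()", runtime_ms=50.0, correct=False)
```

Keeping the full lineage rather than only the current best lets the agent condition new prompts on whole optimization trajectories, which is the trajectory-conditioned reuse the abstract attributes to the ICRL framing.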

