Pruning as a Game: Equilibrium-Driven Sparsification of Neural Networks
By: Zubair Shah, Noaman Khan
Published: 2025-12-26
Abstract
Neural network pruning is widely used to reduce model size and computational cost. However, most existing methods treat sparsity as an extrinsic constraint, enforced via heuristic importance scores or regularization during training. In this work, we propose a fundamentally different perspective: viewing pruning as the outcome of strategic interactions among model components. We model parameter groups (e.g., weights, neurons, or filters) as players in a continuous non-cooperative game, where each player chooses its level of participation in the network to balance contribution against redundancy and competition. Sparsity emerges naturally within this framework when continued participation becomes a dominated strategy at equilibrium. We analyze the resulting game and show that, under mild conditions, disadvantaged players converge to zero participation, providing a principled explanation for pruning behavior. Based on this analysis, we derive a simple equilibrium-driven pruning algorithm that jointly updates network parameters and participation variables without relying on explicit importance scores. Experiments on standard benchmarks demonstrate that the proposed approach achieves competitive sparsity-accuracy trade-offs while offering an interpretable, theory-grounded alternative to existing pruning methods.
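To make the idea concrete, the following is a minimal toy sketch (not the paper's actual algorithm) of the joint-update scheme the abstract describes: each weight of a linear model is paired with a participation variable `s` in [0, 1], both are updated by gradient descent, and a constant participation cost `lam` plays the role of the competition term. All variable names and the specific penalty form are illustrative assumptions. In this setup, a feature that does not help the fit has its participation driven to zero by the cost, while useful features keep strictly positive participation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
t = 2.0 * X[:, 0] - 1.0 * X[:, 1]  # feature 2 is irrelevant to the target

w = rng.normal(size=d) * 0.1       # network parameters
s = np.full(d, 0.5)                # participation variables, one per "player"
lam, lr = 0.05, 0.01               # participation cost (assumed form) and step size

for _ in range(3000):
    err = X @ (s * w) - t
    # Gradients of (1/2n)||err||^2 w.r.t. w and s, plus the constant
    # participation cost lam acting only on s.
    grad_w = (X.T @ err / n) * s
    grad_s = (X.T @ err / n) * w + lam
    w -= lr * grad_w
    # Projected step keeps each player inside its strategy set [0, 1].
    s = np.clip(s - lr * grad_s, 0.0, 1.0)

# Prune players whose equilibrium participation collapsed to (near) zero.
mask = s > 1e-3
print("participation:", np.round(s, 3), "kept:", mask)
```

For the irrelevant third feature, continued participation is dominated: its error gradient is near zero, so the cost `lam` pushes `s[2]` monotonically to zero, and the pruned model still fits the target using only the two informative features.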