PixelGen: Pixel Diffusion Beats Latent Diffusion with Perceptual Loss

By: Zehong Ma, Ruihan Xu, Shiliang Zhang

Published: 2026-02-03

#cs.AI

Abstract

Pixel diffusion generates images directly in pixel space in an end-to-end manner, avoiding the artifacts and bottlenecks introduced by VAEs in two-stage latent diffusion. We propose PixelGen, a simple pixel diffusion framework with perceptual supervision: an LPIPS loss facilitates learning of better local patterns, while a DINO-based perceptual loss strengthens global semantics. With this supervision, PixelGen surpasses strong latent diffusion baselines, achieving an FID of 5.11 on ImageNet-256 without classifier-free guidance after only 80 training epochs, and it scales favorably to large-scale text-to-image generation with a GenEval score of 0.79. PixelGen requires no VAEs, no latent representations, and no auxiliary stages, offering a simpler yet more powerful generative paradigm.
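A minimal sketch of the kind of objective the abstract describes: a pixel-space regression loss combined with a local (LPIPS-style) and a global (DINO-style) perceptual term, each computed in the feature space of a frozen network. The feature extractors, function names, and loss weights below are hypothetical stand-ins for illustration only; the paper's actual networks and weighting are not given in the abstract.

```python
# Hedged sketch of a PixelGen-style training objective: pixel regression
# plus two perceptual terms. In the paper the local term uses LPIPS and
# the global term uses DINO features; here both extractors are passed in
# as callables so the sketch stays self-contained.

def mse(a, b):
    """Mean squared error between two flat sequences of floats."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def perceptual_loss(pred, target, feat_fn):
    """Distance between two images in a frozen network's feature space."""
    return mse(feat_fn(pred), feat_fn(target))

def pixelgen_style_loss(pred, target, local_feats, global_feats,
                        w_pix=1.0, w_local=1.0, w_global=1.0):
    """Combine pixel regression with LPIPS-like and DINO-like terms.

    The weights are illustrative defaults, not values from the paper.
    """
    return (w_pix * mse(pred, target)
            + w_local * perceptual_loss(pred, target, local_feats)    # local patterns
            + w_global * perceptual_loss(pred, target, global_feats)) # global semantics
```

In practice `pred` would be the model's clean-image prediction at a diffusion step and the two extractors would be frozen pretrained networks; the loss is zero only when prediction and target agree in pixel space and in both feature spaces.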

