Neural Compression

Neural compression leverages deep learning to achieve higher compression ratios than traditional codecs, aiming to minimize storage and transmission costs while preserving data fidelity. Current research focuses on efficient neural architectures, including autoencoders, diffusion models, and normalizing flows, typically combined with quantization and entropy coding techniques such as vector quantization and arithmetic coding, to compress diverse data types including images, videos, 3D models, and even atmospheric data. The field is significant because it promises substantial gains in data efficiency across applications ranging from high-fidelity video conferencing to large-scale data storage and processing in scientific domains. Ongoing work on faster and more robust neural compression algorithms is driving progress in both theoretical understanding and practical deployment.
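
The dominant recipe pairs a learned analysis/synthesis transform with quantization and an entropy model, trained end-to-end on a rate-distortion objective. The sketch below illustrates this pipeline in PyTorch; the layer sizes, the additive-uniform-noise quantization proxy, the factorized Gaussian prior, and names such as CompressionAutoencoder and rate_distortion_loss are illustrative assumptions rather than any specific published model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CompressionAutoencoder(nn.Module):
    """Transform-coding autoencoder: encode -> quantize -> entropy model -> decode."""

    def __init__(self, channels: int = 128):
        super().__init__()
        # Analysis transform: image -> latent representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 5, stride=2, padding=2),
        )
        # Synthesis transform: quantized latent -> reconstructed image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 5, stride=2,
                               padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, channels, 5, stride=2,
                               padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 5, stride=2,
                               padding=2, output_padding=1),
        )
        # Per-channel scale of a factorized Gaussian prior over the latent
        # (an assumed stand-in for a learned entropy model).
        self.log_scale = nn.Parameter(torch.zeros(channels))

    def forward(self, x):
        y = self.encoder(x)
        if self.training:
            # Additive uniform noise approximates rounding differentiably.
            y_hat = y + torch.empty_like(y).uniform_(-0.5, 0.5)
        else:
            # Hard rounding, as would feed an arithmetic coder at test time.
            y_hat = torch.round(y)
        x_hat = self.decoder(y_hat)

        # Rate estimate: probability mass of each quantized latent under the
        # prior, taken over the unit-width quantization bin via the CDF.
        scale = self.log_scale.exp().view(1, -1, 1, 1)
        prior = torch.distributions.Normal(torch.zeros_like(scale), scale)
        likelihood = (prior.cdf(y_hat + 0.5) - prior.cdf(y_hat - 0.5)).clamp(min=1e-9)
        bits = -torch.log2(likelihood).sum()
        bpp = bits / (x.shape[0] * x.shape[2] * x.shape[3])  # bits per pixel
        return x_hat, bpp


def rate_distortion_loss(x, x_hat, bpp, lam=0.01):
    # Standard rate-distortion objective: R + lambda * D.
    return bpp + lam * F.mse_loss(x_hat, x)


if __name__ == "__main__":
    model = CompressionAutoencoder()
    images = torch.rand(2, 3, 64, 64)  # toy batch of images in [0, 1]
    recon, bpp = model(images)
    loss = rate_distortion_loss(images, recon, bpp)
    loss.backward()
    print(recon.shape, float(bpp))
```

At inference time, the rounded latents would be passed to an arithmetic coder driven by the same prior; the uniform-noise relaxation exists only to keep the rate term differentiable during training.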

Papers