Smooth Representation

Smooth representation learning in machine learning aims to produce continuous, differentiable representations of data, improving model generalization, robustness, and uncertainty quantification. Current research applies this principle across model architectures, including implicit neural representations (INRs), graph neural networks (GNNs), and pre-trained language models (PLMs), often through techniques such as Jacobian and Hessian regularization or k-nearest-neighbor smoothing. These methods address architecture-specific challenges, such as over-smoothing in GNNs, where excessive smoothing collapses node representations into indistinguishable vectors, and the discreteness of text inputs in NLP, and they have improved performance on tasks ranging from pose estimation and 3D reconstruction to natural language processing. The resulting smoother representations enhance model accuracy, efficiency, and interpretability across diverse applications.
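
As a concrete illustration of one technique named above, the sketch below implements a Jacobian-norm smoothness penalty in PyTorch. This is a minimal sketch rather than any particular paper's method: the function name `jacobian_penalty`, the toy model, and the penalty weight 0.01 are illustrative assumptions. It relies on the identity E_v[||Jᵀv||²] = ||J||²_F for v ~ N(0, I), so each Monte Carlo sample costs only one vector-Jacobian (backward) pass.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def jacobian_penalty(model, x, n_proj=1):
    """Monte Carlo estimate of E_x[||J(x)||_F^2] for the input-output
    Jacobian J, using E_v[||J^T v||^2] = ||J||_F^2 with v ~ N(0, I)."""
    x = x.detach().requires_grad_(True)
    y = model(x)
    penalty = x.new_zeros(())
    for _ in range(n_proj):
        v = torch.randn_like(y)
        # Vector-Jacobian product J^T v via a single backward pass;
        # create_graph=True lets the penalty itself be differentiated.
        (jv,) = torch.autograd.grad(y, x, grad_outputs=v, create_graph=True)
        penalty = penalty + jv.pow(2).flatten(1).sum(1).mean()
    return penalty / n_proj

# Toy usage: task loss plus a weighted smoothness penalty.
model = nn.Sequential(nn.Linear(16, 64), nn.Tanh(), nn.Linear(64, 10))
x, target = torch.randn(8, 16), torch.randint(0, 10, (8,))
loss = F.cross_entropy(model(x), target) + 0.01 * jacobian_penalty(model, x)
loss.backward()
```

Penalizing the Jacobian norm directly bounds how fast the model's output can change under small input perturbations, which is the sense of "smoothness" the summary describes.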

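For the k-nearest-neighbor smoothing also mentioned above, a similarly hedged sketch replaces each embedding with a convex combination of itself and the mean of its nearest neighbors under cosine similarity. The name `knn_smooth` and the 0.5 mixing weight are assumptions for illustration, not a library API.

```python
import torch
import torch.nn.functional as F

def knn_smooth(embeddings, k=5):
    """Smooth each row of `embeddings` toward the mean of its k nearest
    neighbors under cosine similarity (self-match excluded)."""
    z = F.normalize(embeddings, dim=1)
    idx = (z @ z.t()).topk(k + 1, dim=1).indices[:, 1:]  # drop self-match
    # Convex combination keeps part of the original representation.
    return 0.5 * embeddings + 0.5 * embeddings[idx].mean(dim=1)

smoothed = knn_smooth(torch.randn(100, 32), k=5)
```
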
Papers