Paper ID: 2408.09583

Convolutional Conditional Neural Processes

Wessel P. Bruinsma

Neural processes are a family of models which use neural networks to directly parametrise a map from data sets to predictions. Directly parametrising this map enables the use of expressive neural networks in small-data problems where neural networks would traditionally overfit. Neural processes can produce well-calibrated uncertainties, effectively deal with missing data, and are simple to train. These properties make this family of models appealing for a breadth of application areas, such as healthcare or environmental sciences. This thesis advances neural processes in three ways. First, we propose convolutional neural processes (ConvNPs). ConvNPs improve the data efficiency of neural processes by building in a symmetry called translation equivariance. ConvNPs rely on convolutional neural networks rather than multi-layer perceptrons. Second, we propose Gaussian neural processes (GNPs). GNPs directly parametrise dependencies in the predictions of a neural process. Current approaches to modelling dependencies in the predictions rely on a latent variable, which requires approximate inference and thereby undermines the simplicity of the approach. Third, we propose autoregressive conditional neural processes (AR CNPs). AR CNPs train a neural process without any modifications to the model or training procedure and, at test time, roll out the model in an autoregressive fashion. AR CNPs equip the neural process framework with a new knob where modelling complexity and computational expense at training time can be traded for computational expense at test time. In addition to methodological advancements, this thesis also proposes a software abstraction that enables a compositional approach to implementing neural processes. This approach allows the user to rapidly explore the space of neural process models by putting together elementary building blocks in different ways.
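The autoregressive rollout described in the abstract can be illustrated with a minimal sketch. This is not the thesis's implementation; `cnp_predict` stands in for any trained CNP that, given a context set and target inputs, returns marginal predictive means and variances. Targets are predicted one at a time, and each sample is fed back into the context, which is what induces dependencies in the joint predictive.

```python
import numpy as np

def ar_rollout(cnp_predict, x_ctx, y_ctx, x_tgt, rng):
    """Autoregressive rollout of a trained CNP.

    `cnp_predict(x_ctx, y_ctx, x_tgt)` is a hypothetical callable assumed to
    return the marginal predictive mean and variance at the target inputs.
    """
    x_ctx, y_ctx = list(x_ctx), list(y_ctx)
    samples = []
    for x in x_tgt:
        # Predict the next target point conditioned on the current context.
        mean, var = cnp_predict(np.array(x_ctx), np.array(y_ctx), np.array([x]))
        # Sample from the marginal predictive at this point.
        y = rng.normal(mean[0], np.sqrt(var[0]))
        samples.append(y)
        # Append the sample to the context so later predictions depend on it.
        x_ctx.append(x)
        y_ctx.append(y)
    return np.array(samples)

# Example usage with a trained model `model` (hypothetical):
# sample = ar_rollout(model, x_context, y_context, x_target, np.random.default_rng(0))
```

Because only the test-time procedure changes, the model and training objective are exactly those of a standard CNP; the extra cost is one forward pass per target point.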

Submitted: Aug 18, 2024