Learned Function

Learned function research focuses on developing methods that let neural networks and other machine learning models accurately approximate complex, often high-dimensional functions and generalize beyond the training data. Current efforts concentrate on improving model architectures (such as Transformers and Kolmogorov-Arnold Networks), incorporating prior knowledge about function properties (e.g., invertibility, Lipschitz continuity), and developing efficient algorithms for learning functions over varied data structures (matrices, point clouds, graphs). Progress here matters both for improving the accuracy and efficiency of existing machine learning models and for enabling new capabilities in areas like visual question answering, drug discovery, and policy analysis.
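
As a concrete illustration of incorporating a prior on function properties, the sketch below fits a 1-D target with a small MLP while capping each layer's spectral norm, which bounds the network's Lipschitz constant. This is a minimal, self-contained NumPy example under assumed hyperparameters; it is not the method of any particular paper listed below.

```python
import numpy as np

# Minimal sketch: learn a 1-D function with a one-hidden-layer MLP while
# enforcing a Lipschitz prior by projecting each weight matrix so its
# spectral norm stays below a chosen cap. Target, sizes, and learning
# rate are illustrative assumptions.

rng = np.random.default_rng(0)

f = lambda x: np.sin(3.0 * x)            # assumed target function
x = rng.uniform(-2.0, 2.0, size=(256, 1))
y = f(x)

hidden = 64
W1 = rng.normal(0, 0.5, size=(hidden, 1)); b1 = np.zeros((hidden, 1))
W2 = rng.normal(0, 0.5, size=(1, hidden)); b2 = np.zeros((1, 1))

lip_limit = 4.0   # per-layer spectral-norm cap (tanh itself is 1-Lipschitz)
lr = 1e-2

def project(W, limit):
    """Rescale W so its largest singular value is at most `limit`."""
    s = np.linalg.norm(W, 2)
    return W if s <= limit else W * (limit / s)

for step in range(2000):
    # Forward pass: y_hat = W2 tanh(W1 x + b1) + b2
    h = np.tanh(W1 @ x.T + b1)           # (hidden, N)
    y_hat = W2 @ h + b2                  # (1, N)
    err = y_hat - y.T
    loss = np.mean(err ** 2)

    # Manual gradients of the mean-squared error.
    n = x.shape[0]
    dW2 = (2.0 / n) * err @ h.T
    db2 = (2.0 / n) * err.sum(axis=1, keepdims=True)
    dh = (W2.T @ err) * (1.0 - h ** 2)   # backprop through tanh
    dW1 = (2.0 / n) * dh @ x
    db1 = (2.0 / n) * dh.sum(axis=1, keepdims=True)

    # Gradient step followed by projection onto the spectral-norm ball.
    W1 = project(W1 - lr * dW1, lip_limit)
    W2 = project(W2 - lr * dW2, lip_limit)
    b1 -= lr * db1
    b2 -= lr * db2

    if step % 500 == 0:
        print(f"step {step:4d}  mse {loss:.4f}")
```

The projection step is one simple way to encode a smoothness prior; spectral normalization layers in standard deep learning frameworks achieve a similar effect during training.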

Papers