Function Space
Function space research studies spaces whose elements are functions rather than vectors or scalars, with the goal of understanding how functions are represented and manipulated in machine learning and scientific computing. Current work focuses on characterizing the geometry and properties of the function spaces generated by neural networks, particularly deep ReLU networks and attention mechanisms, and on developing algorithms for learning and approximating functions within these spaces, including Bayesian methods and neural operators. By enabling richer representations and learning algorithms, this work has significant implications for the efficiency and accuracy of machine learning models, particularly in scientific data analysis, generative modeling, and reinforcement learning.
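To make the function-space viewpoint concrete, the following is a minimal, illustrative sketch (not taken from any specific paper cited here): a one-hidden-layer ReLU network with randomly drawn Gaussian weights defines one element of the network's function space, a continuous piecewise-linear function, and repeated draws approximate the prior over functions induced by a prior over parameters, the basic object in Bayesian function-space methods. The helper name `sample_relu_function` and all parameter choices are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sample_relu_function(width, rng, weight_scale=1.0):
    """Draw one function from the space parameterized by a one-hidden-layer
    ReLU network whose weights are sampled from a Gaussian prior.

    The returned callable maps a 1-D array of inputs to outputs and is a
    continuous piecewise-linear function on the real line -- a single point
    in the network's function space.
    """
    w1 = rng.normal(0.0, weight_scale, size=width)                     # input-to-hidden weights
    b1 = rng.normal(0.0, weight_scale, size=width)                     # hidden biases
    w2 = rng.normal(0.0, weight_scale / np.sqrt(width), size=width)    # hidden-to-output weights
    return lambda x: relu(np.outer(x, w1) + b1) @ w2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    xs = np.linspace(-3.0, 3.0, 200)
    # Each weight draw is a different function; together the draws
    # approximate the induced prior over functions (the function-space
    # view of a Bayesian neural network).
    for i in range(5):
        f = sample_relu_function(width=64, rng=rng)
        ys = f(xs)
        print(f"sample {i}: mean={ys.mean():+.3f}, std={ys.std():.3f}")
```

Studying the geometry of this function space (e.g., which functions are reachable, how smoothness and norm behave as width and depth grow) is the kind of question the characterization work described above addresses.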