Simple Function
The study of functions, encompassing their representation, optimization, and application, is a central theme across many scientific fields. Current research focuses on developing novel activation functions for neural networks, improving the efficiency and generalizability of function calling in large language models, and applying Bayesian optimization and graph neural networks to optimize functions over complex spaces, including functions defined on graphs or involving high-dimensional data. These advances have significant implications for applications ranging from materials discovery and building classification to improving the performance of AI agents and enhancing the interpretability of complex systems.
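As a concrete illustration of the activation-function line of work (a minimal sketch, not drawn from any of the papers listed below), the Swish activation multiplies its input by a sigmoid gate; its slope parameter beta is one example of a quantity that can be made learnable:

```python
import math

def sigmoid(x):
    """Standard logistic sigmoid."""
    return 1.0 / (1.0 + math.exp(-x))

def swish(x, beta=1.0):
    """Swish activation: x * sigmoid(beta * x).

    For beta -> infinity it approaches ReLU; for beta = 0 it is
    the linear map x / 2. Treating beta as a trainable parameter
    is one simple route to 'novel' activation functions.
    """
    return x * sigmoid(beta * x)
```

For example, `swish(0.0)` is exactly 0, and for large positive inputs the function approaches the identity, mirroring ReLU-like behavior while remaining smooth.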
Papers
Pretrained transformer efficiently learns low-dimensional target functions in-context
Kazusato Oko, Yujin Song, Taiji Suzuki, Denny Wu
Training on test proteins improves fitness, structure, and function prediction
Anton Bushuiev, Roman Bushuiev, Nikola Zadorozhny, Raman Samusevich, Hannes Stärk, Jiri Sedlar, Tomáš Pluskal, Josef Sivic