Extension Study
Extension studies across fields focus on enhancing existing models and methods to improve performance, efficiency, or applicability. Current research emphasizes flexible and adaptable architectures, such as mixture-of-experts models and inductive graph-based learning, to handle diverse data and tasks, often incorporating techniques like diffusion models and transformers. These advances improve the scalability and robustness of machine learning models across domains ranging from image processing and natural language processing to robotics and materials science, leading to more efficient and effective solutions.
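The mixture-of-experts idea mentioned above can be illustrated with a minimal sketch: a gating network scores each input against a small pool of expert sub-networks and combines the top-scoring experts' outputs. The expert count, hidden sizes, and top-k routing below are illustrative assumptions and are not drawn from any of the listed papers.

```python
# Minimal sketch of a mixture-of-experts (MoE) layer (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoELayer(nn.Module):
    def __init__(self, dim: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # Gating network: scores each input against every expert.
        self.gate = nn.Linear(dim, num_experts)
        # Experts: independent feed-forward blocks of identical shape.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim). Route each input to its top-k experts and combine
        # their outputs weighted by the renormalized gate scores.
        scores = F.softmax(self.gate(x), dim=-1)            # (batch, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)  # (batch, top_k)
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out


if __name__ == "__main__":
    layer = MoELayer(dim=32)
    y = layer(torch.randn(8, 32))
    print(y.shape)  # torch.Size([8, 32])
```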
Papers
Complexity of Neural Network Training and ETR: Extensions with Effectively Continuous Functions
Teemu Hankala, Miika Hannula, Juha Kontinen, Jonni Virtema
Flexible and Inherently Comprehensible Knowledge Representation for Data-Efficient Learning and Trustworthy Human-Machine Teaming in Manufacturing Environments
Vedran Galetić, Alistair Nottle