Practical Method
Practical methods in machine learning and related fields currently focus on improving the efficiency, accuracy, and generalizability of existing algorithms and models. Research emphasizes developing faster solvers for optimization problems (e.g., parallel-in-time methods and novel optimizers such as the generalized Newton's method), enhancing model robustness through techniques such as low-rank approximations and prompt portfolios, and creating more reliable uncertainty quantification methods. These advances are crucial for deploying machine learning models in resource-constrained environments and for building more trustworthy and explainable AI systems across diverse applications.
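As a concrete illustration of one technique named above, the sketch below builds a low-rank approximation of a dense weight matrix via truncated SVD; it is a minimal Python/NumPy example with assumed matrix size and rank, not the method of any listed paper.

    import numpy as np

    def low_rank_approximation(W, rank):
        # Keep only the top `rank` singular components of W (truncated SVD).
        U, S, Vt = np.linalg.svd(W, full_matrices=False)
        return U[:, :rank] @ np.diag(S[:rank]) @ Vt[:rank, :]

    # Illustrative sizes: compress a 512x512 matrix to rank 32 and measure the error.
    rng = np.random.default_rng(0)
    W = rng.standard_normal((512, 512))
    W_approx = low_rank_approximation(W, rank=32)
    rel_error = np.linalg.norm(W - W_approx) / np.linalg.norm(W)
    print(f"relative Frobenius error at rank 32: {rel_error:.3f}")

Storing the two rank-32 factors instead of the full matrix cuts the parameter count from 512*512 to 2*512*32, the kind of saving that matters for deployment in resource-constrained environments.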
Papers
Representation Topology Divergence: A Method for Comparing Neural Network Representations
Serguei Barannikov, Ilya Trofimov, Nikita Balabin, Evgeny Burnaev
Deep-learning-based upscaling method for geologic models via theory-guided convolutional neural network
Nanzhe Wang, Qinzhuo Liao, Haibin Chang, Dongxiao Zhang