Practical Method
Current research on practical methods in machine learning and related fields focuses on improving the efficiency, accuracy, and generalizability of existing algorithms and models. Emphasis falls on faster solvers for optimization problems (e.g., parallel-in-time methods and novel optimizers such as the generalized Newton's method), on model robustness via techniques such as low-rank approximations and prompt portfolios, and on more reliable uncertainty quantification. These advances are crucial for deploying machine learning models in resource-constrained environments and for building more trustworthy, explainable AI systems across diverse applications. A small illustrative sketch of one of these techniques follows below.
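To make one of the named techniques concrete, the sketch below shows a generic low-rank approximation of a dense weight matrix via truncated SVD. It is a minimal illustration only; the function name low_rank_approx, the rank k=32, and the random example matrix are assumptions for demonstration and are not taken from any of the listed papers.

import numpy as np

def low_rank_approx(W: np.ndarray, k: int) -> np.ndarray:
    """Return the best rank-k approximation of W in Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Keep only the k largest singular values and their singular vectors.
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))   # stand-in for a dense model weight matrix
W_k = low_rank_approx(W, k=32)        # compressed, rank-32 surrogate
# Relative reconstruction error; storage drops from 512*512 to 2*512*32 + 32 values.
print(np.linalg.norm(W - W_k) / np.linalg.norm(W))

Truncated SVD is the standard baseline for this kind of compression; published methods typically refine it (e.g., by choosing ranks per layer or fine-tuning afterward), which is beyond this sketch.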
Papers
BGE Landmark Embedding: A Chunking-Free Embedding Method For Retrieval Augmented Long-Context Large Language Models
Kun Luo, Zheng Liu, Shitao Xiao, Kang Liu
MatPlotAgent: Method and Evaluation for LLM-Based Agentic Scientific Data Visualization
Zhiyu Yang, Zihan Zhou, Shuo Wang, Xin Cong, Xu Han, Yukun Yan, Zhenghao Liu, Zhixing Tan, Pengyuan Liu, Dong Yu, Zhiyuan Liu, Xiaodong Shi, Maosong Sun