Practical Method
Practical methods in machine learning and related fields currently focus on improving the efficiency, accuracy, and generalizability of existing algorithms and models. Research emphasizes developing faster solvers for optimization problems (e.g., parallel-in-time methods and novel optimizers such as the generalized Newton's method), improving model efficiency and robustness through techniques such as low-rank approximations and prompt portfolios, and creating more reliable uncertainty quantification methods. These advances matter for deploying machine learning models in resource-constrained environments and for building more trustworthy, explainable AI systems across diverse applications.
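As a concrete point of reference for one of the techniques mentioned above, the sketch below shows low-rank approximation of a weight matrix via truncated SVD. It is a minimal, generic illustration and is not taken from any of the papers listed in this section; the function name low_rank_approx and the example sizes are chosen here for demonstration only.

```python
import numpy as np

def low_rank_approx(W: np.ndarray, rank: int) -> np.ndarray:
    """Best rank-`rank` approximation of W in Frobenius norm
    (Eckart-Young), obtained by truncating the SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# Hypothetical example: compress a 512x512 matrix to rank 32,
# cutting stored parameters from 512*512 to roughly 2*512*32.
rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))
W_approx = low_rank_approx(W, rank=32)
rel_err = np.linalg.norm(W - W_approx) / np.linalg.norm(W)
print(f"relative Frobenius error at rank 32: {rel_err:.3f}")
```

The same idea underlies many parameter-efficient and compression methods: replacing a dense matrix with two thin factors trades a controlled approximation error for substantially fewer parameters and faster matrix-vector products.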
Papers
FORCE: Dataset and Method for Intuitive Physics Guided Human-object Interaction
Xiaohan Zhang, Bharat Lal Bhatnagar, Sebastian Starke, Ilya Petrov, Vladimir Guzov, Helisa Dhamo, Eduardo Pérez-Pellitero, Gerard Pons-Moll
OSTAF: A One-Shot Tuning Method for Improved Attribute-Focused T2I Personalization
Ye Wang, Zili Yi, Rui Ma
BGE Landmark Embedding: A Chunking-Free Embedding Method For Retrieval Augmented Long-Context Large Language Models
Kun Luo, Zheng Liu, Shitao Xiao, Kang Liu
MatPlotAgent: Method and Evaluation for LLM-Based Agentic Scientific Data Visualization
Zhiyu Yang, Zihan Zhou, Shuo Wang, Xin Cong, Xu Han, Yukun Yan, Zhenghao Liu, Zhixing Tan, Pengyuan Liu, Dong Yu, Zhiyuan Liu, Xiaodong Shi, Maosong Sun