Practical Method
Practical methods in machine learning and related fields currently focus on improving the efficiency, accuracy, and generalizability of existing algorithms and models. Research emphasizes faster solvers for optimization problems (e.g., parallel-in-time methods and novel optimizers such as the generalized Newton's method), greater model robustness through techniques such as low-rank approximations and prompt portfolios, and more reliable uncertainty quantification. These advances are crucial for deploying machine learning models in resource-constrained environments and for building more trustworthy, explainable AI systems across diverse applications.
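As a concrete illustration of one technique mentioned above, a low-rank approximation replaces a large matrix with the product of thinner factors, trading a small amount of accuracy for a large reduction in parameters. The sketch below uses a truncated SVD on a randomly generated matrix; the matrix shape and rank are illustrative assumptions, not values from any of the listed papers.

```python
import numpy as np

# Illustrative example only: compress a 64x64 matrix W to rank k via
# truncated SVD, a standard low-rank approximation technique.
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))

k = 8  # assumed target rank for this sketch
U, s, Vt = np.linalg.svd(W, full_matrices=False)
# Best rank-k approximation in Frobenius norm (Eckart-Young theorem).
W_k = U[:, :k] * s[:k] @ Vt[:k, :]

# Parameter count drops from 64*64 to k*(64 + 64 + 1)
# (two thin factors plus the k retained singular values).
full_params = W.size
low_rank_params = k * (W.shape[0] + W.shape[1] + 1)
print(full_params, low_rank_params)
```

In a resource-constrained deployment, the two thin factors would be stored and multiplied on the fly instead of materializing the full matrix.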
Papers
A Unified Framework for Iris Anti-Spoofing: Introducing IrisGeneral Dataset and Masked-MoE Method
Hang Zou, Chenxi Du, Ajian Liu, Yuan Zhang, Jing Liu, Mingchuan Yang, Jun Wan, Hui Zhang
Revisiting Reciprocal Recommender Systems: Metrics, Formulation, and Method
Chen Yang, Sunhao Dai, Yupeng Hou, Wayne Xin Zhao, Jun Xu, Yang Song, Hengshu Zhu