Recent Advances
Recent advances across machine learning subfields are reshaping a wide range of scientific and engineering domains. Current research emphasizes model efficiency and interpretability in applications such as robotic process automation, protein structure prediction, and communication systems, often building on large language models (LLMs) and deep learning architectures. These efforts are driving progress in natural language processing, medical image analysis, and computational fluid dynamics, yielding more accurate, efficient, and reliable systems. The resulting advances hold significant potential for improving healthcare, optimizing industrial processes, and accelerating scientific discovery.
Papers
Recent advances in interpretable machine learning using structure-based protein representations
Luiz Felipe Vecchietti, Minji Lee, Begench Hangeldiyev, Hyunkyu Jung, Hahnbeom Park, Tae-Kyun Kim, Meeyoung Cha, Ho Min Kim
Joint Source-Channel Coding: Fundamentals and Recent Progress in Practical Designs
Deniz Gündüz, Michèle A. Wigger, Tze-Yang Tung, Ping Zhang, Yong Xiao
Recent Advances in Non-convex Smoothness Conditions and Applicability to Deep Linear Neural Networks
Vivak Patel, Christian Varner
A Survey on Moral Foundation Theory and Pre-Trained Language Models: Current Advances and Challenges
Lorenzo Zangari, Candida M. Greco, Davide Picca, Andrea Tagarelli
Recent Advancement of Emotion Cognition in Large Language Models
Yuyan Chen, Yanghua Xiao