Representation Learning
Representation learning aims to create meaningful, efficient representations of data that capture underlying structure and facilitate downstream tasks such as classification, prediction, and control. Current research focuses on robust and generalizable representations, often built with techniques like contrastive learning, transformers, and mixture-of-experts models, and tackles challenges such as disentanglement, noisy or sparse data, and efficiency in multi-task and continual-learning settings. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommendation systems to medical image analysis and causal inference.
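To make the contrastive-learning idea mentioned above concrete, here is a minimal NumPy sketch of an InfoNCE-style loss: two "views" of the same item form a positive pair, and every other pairing in the batch acts as a negative. This is an illustrative toy, not the method of any listed paper; the function name and the toy data are assumptions for the example.

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.1):
    """InfoNCE contrastive loss: pull matched pairs (z_a[i], z_b[i])
    together while pushing apart all mismatched pairings in the batch."""
    # L2-normalize embeddings so the dot product is cosine similarity.
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = z_a @ z_b.T / temperature  # (N, N) similarity matrix
    # Row-wise log-softmax; diagonal entries are the positive pairs.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy usage (hypothetical data): two slightly perturbed views of 4 items.
rng = np.random.default_rng(0)
anchors = rng.normal(size=(4, 8))
positives = anchors + 0.01 * rng.normal(size=(4, 8))
loss = info_nce_loss(anchors, positives)
```

A low loss here means each embedding is closer to its own perturbed view than to the other items, which is the training signal contrastive methods use to shape the representation space.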
Papers
Competing Mutual Information Constraints with Stochastic Competition-based Activations for Learning Diversified Representations
Konstantinos P. Panousis, Anastasios Antoniadis, Sotirios Chatzis
Comparison of Representation Learning Techniques for Tracking in time resolved 3D Ultrasound
Daniel Wulff, Jannis Hagenah, Floris Ernst
Reproducing BowNet: Learning Representations by Predicting Bags of Visual Words
Harry Nguyen, Stone Yun, Hisham Mohammad
Learning Target-aware Representation for Visual Tracking via Informative Interactions
Mingzhe Guo, Zhipeng Zhang, Heng Fan, Liping Jing, Yilin Lyu, Bing Li, Weiming Hu
Spatio-Temporal Graph Representation Learning for Fraudster Group Detection
Saeedreza Shehnepoor, Roberto Togneri, Wei Liu, Mohammed Bennamoun