Representation Learning
Representation learning aims to create meaningful and efficient data representations that capture underlying structure and facilitate downstream tasks such as classification, prediction, and control. Current research focuses on developing robust and generalizable representations, often using techniques like contrastive learning, transformers, and mixture-of-experts models. Open challenges include disentanglement, handling noisy or sparse data, and improving efficiency in multi-task and continual learning settings. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommendation systems to medical image analysis and causal inference.
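To make the contrastive learning mentioned above concrete, here is a minimal sketch of an InfoNCE-style loss (the objective behind methods like SimCLR), written in plain NumPy. The function name, batch size, and temperature value are illustrative assumptions, not from any paper listed below: each row of `z1` and `z2` is the embedding of two augmented views of the same sample, and the loss treats the matching row as the positive in a softmax classification over the batch.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Minimal InfoNCE (contrastive) loss between two batches of views.

    z1, z2: (batch, dim) embeddings of two views; row i of z1 and
    row i of z2 form the positive pair, all other rows are negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    # Pairwise similarity matrix, scaled by temperature.
    logits = z1 @ z2.T / temperature
    # Row-wise log-softmax; positives sit on the diagonal.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    idx = np.arange(len(z1))
    return -log_probs[idx, idx].mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = info_nce_loss(z, z)                    # positives match
mismatched = info_nce_loss(z, np.roll(z, 1, axis=0))  # positives shuffled
```

As expected for a contrastive objective, `aligned` comes out lower than `mismatched`: the loss rewards embeddings where each sample is more similar to its own second view than to any other sample in the batch. The temperature controls how sharply hard negatives are penalized.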
Papers
Towards Generalizable Trajectory Prediction Using Dual-Level Representation Learning And Adaptive Prompting
Kaouther Messaoud, Matthieu Cord, Alexandre Alahi
GaussianVideo: Efficient Video Representation via Hierarchical Gaussian Splatting
Andrew Bond, Jui-Hsien Wang, Long Mai, Erkut Erdem, Aykut Erdem
Quantum-inspired Embeddings Projection and Similarity Metrics for Representation Learning
Ivan Kankeu, Stefan Gerd Fritsch, Gunnar Schönhoff, Elie Mounzer, Paul Lukowicz, Maximilian Kiefer-Emmanouilidis
Semise: Semi-supervised learning for severity representation in medical image
Dung T. Tran, Hung Vu, Anh Tran, Hieu Pham, Hong Nguyen, Phong Nguyen
SLAM: Towards Efficient Multilingual Reasoning via Selective Language Alignment
Yuchun Fan, Yongyu Mu, Yilin Wang, Lei Huang, Junhao Ruan, Bei Li, Tong Xiao, Shujian Huang, Xiaocheng Feng, Jingbo Zhu
Deep Learning within Tabular Data: Foundations, Challenges, Advances and Future Directions
Weijieying Ren, Tianxiang Zhao, Yuqing Huang, Vasant Honavar
Information-Maximized Soft Variable Discretization for Self-Supervised Image Representation Learning
Chuang Niu, Wenjun Xia, Hongming Shan, Ge Wang
ProjectedEx: Enhancing Generation in Explainable AI for Prostate Cancer
Xuyin Qi, Zeyu Zhang, Aaron Berliano Handoko, Huazhan Zheng, Mingxi Chen, Ta Duc Huy, Vu Minh Hieu Phan, Lei Zhang, Linqi Cheng, Shiyu Jiang, Zhiwei Zhang, Zhibin Liao, Yang Zhao, Minh-Son To
Information Subtraction: Learning Representations for Conditional Entropy
Keng Hou Leong, Yuxuan Xiu, Wai Kin (Victor) Chan
Transformer-Based Contrastive Meta-Learning For Low-Resource Generalizable Activity Recognition
Junyao Wang, Mohammad Abdullah Al Faruque
Bird Vocalization Embedding Extraction Using Self-Supervised Disentangled Representation Learning
Runwu Shi, Katsutoshi Itoyama, Kazuhiro Nakadai
Hawkes based Representation Learning for Reasoning over Scale-free Community-structured Temporal Knowledge Graphs
Yuwei Du, Xinyue Liu, Wenxin Liang, Linlin Zong, Xianchao Zhang