Individual Representation
Individual representation in machine learning concerns learning data representations that capture the information essential for downstream tasks, thereby improving model performance and robustness. Current research emphasizes novel representation-learning methods built on architectures such as variational autoencoders, transformers, and graph neural networks, often combined with contrastive or self-supervised learning to handle diverse data types and to address challenges such as non-IID data and co-occurrence effects. These advances are central to applications including image generation, object detection, user modeling, and scientific discovery, where accurate and efficient models depend on the quality of the learned representations. Learning representations that are both robust and generalizable remains a key open challenge driving ongoing research.
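As a concrete illustration of one technique mentioned above, the sketch below shows a minimal contrastive representation-learning objective (an NT-Xent / InfoNCE-style loss) in PyTorch. It is a generic example, not the method of any paper listed below; the function name, temperature value, tensor shapes, and toy usage are illustrative assumptions.

```python
# Minimal sketch of a contrastive (NT-Xent / InfoNCE-style) loss.
# Assumptions: z1 and z2 are (batch, dim) embeddings of two augmented
# views of the same examples; positives are (z1[i], z2[i]), and all
# other items in the combined 2*batch set serve as negatives.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    batch = z1.size(0)
    # Concatenate both views and L2-normalize so dot products are cosines.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)        # (2B, D)
    sim = z @ z.t() / temperature                             # (2B, 2B) logits
    # Exclude self-similarity from each row's softmax.
    mask = torch.eye(2 * batch, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))
    # The positive for row i is row i+B (and vice versa).
    targets = torch.cat([torch.arange(batch, 2 * batch),
                         torch.arange(0, batch)]).to(z.device)
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    # Toy usage: random tensors standing in for encoder outputs.
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    print(nt_xent_loss(z1, z2).item())
```

In practice the two views would come from an encoder applied to different augmentations of the same input, and the encoder would be trained to minimize this loss so that matching views map to nearby points in the representation space.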
Papers
Enhancing CTR Prediction in Recommendation Domain with Search Query Representation
Yuening Wang, Man Chen, Yaochen Hu, Wei Guo, Yingxue Zhang, Huifeng Guo, Yong Liu, Mark Coates
GPRec: Bi-level User Modeling for Deep Recommenders
Yejing Wang, Dong Xu, Xiangyu Zhao, Zhiren Mao, Peng Xiang, Ling Yan, Yao Hu, Zijian Zhang, Xuetao Wei, Qidong Liu
The Representation of Meaningful Precision, and Accuracy
A Mani
On Representation of 3D Rotation in the Context of Deep Learning
Viktória Pravdová, Lukáš Gajdošech, Hassan Ali, Viktor Kocur
Animate-X: Universal Character Image Animation with Enhanced Motion Representation
Shuai Tan, Biao Gong, Xiang Wang, Shiwei Zhang, Dandan Zheng, Ruobing Zheng, Kecheng Zheng, Jingdong Chen, Ming Yang
ChartKG: A Knowledge-Graph-Based Representation for Chart Images
Zhiguang Zhou, Haoxuan Wang, Zhengqing Zhao, Fengling Zheng, Yongheng Wang, Wei Chen, Yong Wang
Universal scaling laws in quantum-probabilistic machine learning by tensor network towards interpreting representation and generalization powers
Sheng-Chen Bai, Shi-Ju Ran