Continuous Representation
Continuous representation in machine learning aims to model data as continuous signals rather than as discrete points, enabling more nuanced and efficient processing. Current research focuses on developing and applying continuous representations within various neural network architectures, including implicit neural representations (INRs), neural fields, and graph neural networks, often incorporating techniques such as vector quantization and spherical harmonics. By improving robustness, generalization, and interpretability over traditional discrete methods, this approach enhances performance in diverse applications such as medical image analysis, geometry processing, and dynamical systems modeling. The resulting gains in accuracy and efficiency have significant implications across numerous scientific fields and practical applications.
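To make the core idea concrete, below is a minimal sketch of an implicit neural representation in PyTorch: a small coordinate-based MLP is fit to a handful of discrete samples, after which the signal can be queried at arbitrary (off-grid) positions. The `CoordinateMLP` class, the 1-D sine target, and the training setup are illustrative assumptions, not the method of any paper listed here.

```python
# Minimal sketch of an implicit neural representation (INR):
# a small MLP maps a continuous coordinate to a signal value,
# so the fitted signal can be evaluated anywhere, not only at grid points.
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """Maps a 1-D coordinate in [0, 1] to a scalar signal value (illustrative)."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Discrete observations of an underlying continuous signal (hypothetical data).
xs = torch.linspace(0.0, 1.0, 32).unsqueeze(-1)   # 32 grid coordinates
ys = torch.sin(2.0 * torch.pi * xs)               # sampled signal values

model = CoordinateMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fit the network to the discrete samples.
for step in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(xs), ys)
    loss.backward()
    opt.step()

# The learned representation is continuous: query between the original samples.
query = torch.tensor([[0.123], [0.5]])
with torch.no_grad():
    print(model(query))
```

The design choice illustrated here is the key difference from a discrete representation: instead of storing values on a fixed grid, the parameters of the network define the signal everywhere in its domain, which is what enables resolution-independent queries in applications such as neural fields and geometry processing.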
Papers
Causal Graph ODE: Continuous Treatment Effect Modeling in Multi-agent Dynamical Systems
Zijie Huang, Jeehyun Hwang, Junkai Zhang, Jinwoo Baik, Weitong Zhang, Dominik Wodarz, Yizhou Sun, Quanquan Gu, Wei Wang
Theoretically Achieving Continuous Representation of Oriented Bounding Boxes
Zi-Kai Xiao, Guo-Ye Yang, Xue Yang, Tai-Jiang Mu, Junchi Yan, Shi-min Hu