Compositional Representation
Compositional representation learning aims to enable artificial systems to understand and generate complex data by decomposing it into meaningful, reusable parts, mirroring human cognitive abilities. Current research focuses on models that learn these representations in an unsupervised or weakly supervised manner, employing techniques such as autoencoders, variational autoencoders, diffusion models, and architectures with built-in inductive biases such as object slots or relational bottlenecks. This work is important for advancing artificial intelligence, particularly in image, audio, and robotics applications, where it promises improved generalization, interpretability, and efficiency in tasks ranging from medical image segmentation to music generation and robotic manipulation.
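To make the "object slots" inductive bias mentioned above concrete, the sketch below shows a simplified version of Slot Attention (Locatello et al., 2020), one widely used instantiation of that idea: a fixed number of slot vectors iteratively compete, via attention normalized over the slot axis, to explain different parts of the input, yielding one representation per inferred component. This is a minimal illustrative sketch; the dimensions, hyperparameters, and PyTorch framing are assumptions, and it is not the method of any specific paper listed below.

```python
import torch
import torch.nn as nn

class SlotAttention(nn.Module):
    """Simplified Slot Attention: K slots compete for input features via
    attention softmaxed over the slot axis, so each slot tends to bind
    to a distinct part of the input (a compositional decomposition)."""
    def __init__(self, num_slots: int, dim: int, iters: int = 3, eps: float = 1e-8):
        super().__init__()
        self.num_slots, self.iters, self.eps = num_slots, iters, eps
        self.scale = dim ** -0.5
        # Learned Gaussian from which the initial slots are sampled.
        self.slots_mu = nn.Parameter(torch.randn(1, 1, dim))
        self.slots_logsigma = nn.Parameter(torch.zeros(1, 1, dim))
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.gru = nn.GRUCell(dim, dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 2 * dim), nn.ReLU(), nn.Linear(2 * dim, dim))
        self.norm_inputs = nn.LayerNorm(dim)
        self.norm_slots = nn.LayerNorm(dim)
        self.norm_mlp = nn.LayerNorm(dim)

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        # inputs: (batch, num_inputs, dim), e.g. flattened CNN feature maps.
        b, n, d = inputs.shape
        inputs = self.norm_inputs(inputs)
        k, v = self.to_k(inputs), self.to_v(inputs)
        sigma = self.slots_logsigma.exp()
        slots = self.slots_mu + sigma * torch.randn(b, self.num_slots, d, device=inputs.device)
        for _ in range(self.iters):
            slots_prev = slots
            q = self.to_q(self.norm_slots(slots))
            # Softmax over the *slot* axis (dim=1) makes slots compete for inputs.
            attn = (torch.einsum('bkd,bnd->bkn', q, k) * self.scale).softmax(dim=1) + self.eps
            attn = attn / attn.sum(dim=-1, keepdim=True)  # weighted mean per slot
            updates = torch.einsum('bkn,bnd->bkd', attn, v)
            slots = self.gru(updates.reshape(-1, d),
                             slots_prev.reshape(-1, d)).reshape(b, self.num_slots, d)
            slots = slots + self.mlp(self.norm_mlp(slots))
        return slots  # (batch, num_slots, dim): one vector per inferred part

# Usage (illustrative shapes): bind 4 slots to 64 feature vectors of width 32.
features = torch.randn(2, 64, 32)
slots = SlotAttention(num_slots=4, dim=32)(features)
print(slots.shape)  # torch.Size([2, 4, 32])
```

The key design choice is the axis of the softmax: normalizing attention over slots rather than over inputs forces an exclusive, parts-based assignment, which is what gives the representation its compositional character.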
Papers
H4D: Human 4D Modeling by Learning Neural Compositional Representation
Boyan Jiang, Yinda Zhang, Xingkui Wei, Xiangyang Xue, Yanwei Fu
Integer Factorization with Compositional Distributed Representations
Denis Kleyko, Connor Bybee, Christopher J. Kymn, Bruno A. Olshausen, Amir Khosrowshahi, Dmitri E. Nikonov, Friedrich T. Sommer, E. Paxon Frady