Masked Modeling
Masked modeling is a self-supervised learning technique that trains models to reconstruct missing portions of input data, fostering robust representation learning across modalities. Current research applies this approach to diverse data types, including time series, point clouds, images, and text, often employing transformer-based architectures or adapting the approach to specific data structures, for example via sparse convolutions. The technique's significance lies in its ability to improve model performance on downstream tasks when labeled data is limited, with impact in fields ranging from weather forecasting and medical image analysis to high-energy physics and sensor fault detection. The resulting data-efficient and generalizable models are proving valuable across numerous scientific and engineering applications.
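The core objective described above can be sketched in a few lines: hide random portions of the input, predict them from the visible remainder, and compute the loss only on the hidden positions. The following is a minimal NumPy sketch using a toy linear decoder trained by gradient descent; it illustrates the masked-reconstruction objective itself, not any particular paper's architecture (all names and hyperparameters here are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: each sample's features are noisy copies of one latent value,
# so masked entries are predictable from the visible ones.
n_samples, n_features = 512, 8
latent = rng.normal(size=(n_samples, 1))
X = latent + 0.1 * rng.normal(size=(n_samples, n_features))

mask_ratio, lr = 0.5, 0.1
W = np.zeros((n_features, n_features))  # linear "decoder" (illustrative)

for step in range(500):
    mask = rng.random(X.shape) < mask_ratio   # True = entry is hidden
    X_masked = np.where(mask, 0.0, X)         # zero out hidden entries
    pred = X_masked @ W                       # reconstruct all entries
    err = (pred - X) * mask                   # loss only on masked positions
    grad = X_masked.T @ err / n_samples       # gradient of masked MSE
    W -= lr * grad

# Evaluate: reconstruction error on freshly masked entries should be far
# below the variance of the data (the error of always predicting zero).
mask = rng.random(X.shape) < mask_ratio
pred = np.where(mask, 0.0, X) @ W
mse = float(np.mean((pred - X)[mask] ** 2))
baseline = float(np.mean(X[mask] ** 2))
```

In practice the linear decoder is replaced by a deep network (e.g., a transformer encoder-decoder), and the masked positions are filled with a learned mask token rather than zeros, but the loss-on-masked-positions structure is the same.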
Papers
Masked Event Modeling: Self-Supervised Pretraining for Event Cameras
Simon Klenk, David Bonello, Lukas Koestler, Nikita Araslanov, Daniel Cremers
MM-3DScene: 3D Scene Understanding by Customizing Masked Modeling with Informative-Preserved Reconstruction and Self-Distilled Consistency
Mingye Xu, Mutian Xu, Tong He, Wanli Ouyang, Yali Wang, Xiaoguang Han, Yu Qiao