Data Modality
Data modality research explores how to integrate and analyze information from diverse sources, such as text, images, audio, and sensor data, to improve the performance and capabilities of machine learning models. Current work focuses on efficient multimodal fusion techniques, often built on transformer architectures and contrastive learning, that address challenges such as missing modalities and inconsistencies between data sources. The field matters because it enables more robust and accurate models for applications ranging from healthcare diagnostics and personalized medicine to industrial control system security and financial forecasting, mirroring the human ability to combine multiple sensory inputs when making decisions.
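To make the fusion-plus-contrastive pattern concrete, here is a minimal PyTorch sketch, not the method of any paper listed below: features from two modalities are projected into a shared space, tagged with learned modality embeddings, fused by a shared transformer encoder, and aligned with an InfoNCE-style contrastive loss. All class names, dimensions, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of transformer-based multimodal fusion with contrastive
# alignment. Dimensions and module names are illustrative, not from any
# specific paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMultimodalFusion(nn.Module):
    def __init__(self, text_dim=300, image_dim=512, d_model=256,
                 n_heads=4, n_layers=2):
        super().__init__()
        # Per-modality projections into a shared embedding space.
        self.text_proj = nn.Linear(text_dim, d_model)
        self.image_proj = nn.Linear(image_dim, d_model)
        # Learned modality-type embeddings tell the transformer which
        # tokens came from which source.
        self.modality_emb = nn.Embedding(2, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, text_tokens, image_tokens):
        # text_tokens: (B, T_t, text_dim); image_tokens: (B, T_i, image_dim)
        t = self.text_proj(text_tokens) + self.modality_emb.weight[0]
        v = self.image_proj(image_tokens) + self.modality_emb.weight[1]
        # One shared encoder attends across the concatenated token streams.
        fused = self.fusion(torch.cat([t, v], dim=1))
        # Mean-pool each modality's fused tokens into one vector apiece.
        t_len = text_tokens.size(1)
        return fused[:, :t_len].mean(dim=1), fused[:, t_len:].mean(dim=1)

def contrastive_loss(z_a, z_b, temperature=0.07):
    # InfoNCE: matching (text, image) pairs in a batch are positives,
    # every other pairing in the batch serves as a negative.
    z_a, z_b = F.normalize(z_a, dim=-1), F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature
    targets = torch.arange(z_a.size(0))
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

# Usage with random stand-in features:
model = SimpleMultimodalFusion()
text = torch.randn(8, 20, 300)   # batch of 8, 20 text tokens each
image = torch.randn(8, 49, 512)  # batch of 8, 49 image patches each
z_text, z_image = model(text, image)
loss = contrastive_loss(z_text, z_image)
loss.backward()
```

The modality-type embedding is one common way to let a single encoder handle heterogeneous token streams; the papers below explore more sophisticated variants of this idea across many modalities.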
Papers
A Methodological and Structural Review of Hand Gesture Recognition Across Diverse Data Modalities
Jungpil Shin, Abu Saleh Musa Miah, Md. Humaun Kabir, Md. Abdur Rahim, Abdullah Al Shiam
SAMSA: Efficient Transformer for Many Data Modalities
Minh Lenhat, Viet Anh Nguyen, Khoa Nguyen, Duong Duc Hieu, Dao Huu Hung, Truong Son Hy