Multi-Modal Distillation

Multi-modal distillation transfers knowledge from large, complex multi-modal teacher models to smaller, more efficient student models, boosting the students' performance while reducing computational demands. Current research emphasizes strategies for effective knowledge transfer across modalities (e.g., vision, language, LiDAR), exploring techniques such as joint token and logit alignment, competitive distillation with bidirectional feedback, and apprentice-friendly expert models that minimize the teacher-student domain gap. The approach matters for deploying advanced multi-modal models in resource-constrained environments and for improving camera-only systems by leveraging information from other modalities, ultimately advancing applications in robotics and computer vision.
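
To make the transfer mechanics concrete, below is a minimal sketch of the common joint logit-and-token-alignment recipe, assuming a PyTorch setup in which a frozen multi-modal teacher supplies class logits and token features and a smaller (e.g., camera-only) student is trained to match them. The function name, loss weights, and tensor shapes are illustrative assumptions, not taken from any particular paper.

```python
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits,
                      student_tokens, teacher_tokens,
                      labels, temperature=2.0, alpha=0.5, beta=0.1):
    """Joint logit + token alignment loss (illustrative).

    - Logit alignment: KL divergence between temperature-softened
      teacher and student class distributions.
    - Token alignment: MSE between student and teacher token features
      (assumed already projected to the same dimensionality).
    - A standard cross-entropy term keeps the student grounded in labels.
    """
    # Soft-label term, scaled by T^2 to keep gradient magnitudes comparable.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # Feature/token alignment term.
    token_align = F.mse_loss(student_tokens, teacher_tokens)

    # Supervised term on ground-truth labels.
    ce = F.cross_entropy(student_logits, labels)

    return alpha * kd + beta * token_align + (1.0 - alpha) * ce


if __name__ == "__main__":
    # Toy shapes: batch of 4, 16 tokens, 128-dim features, 10 classes.
    student_logits = torch.randn(4, 10, requires_grad=True)
    teacher_logits = torch.randn(4, 10)
    student_tokens = torch.randn(4, 16, 128, requires_grad=True)
    teacher_tokens = torch.randn(4, 16, 128)
    labels = torch.randint(0, 10, (4,))

    loss = distillation_loss(student_logits, teacher_logits,
                             student_tokens, teacher_tokens, labels)
    loss.backward()
    print(float(loss))
```

In practice the teacher is kept frozen and only the student (plus any projection layer mapping its tokens into the teacher's feature space) receives gradients; variants such as competitive or bidirectional distillation modify which direction the alignment terms flow, but the loss structure above is the common starting point.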

Papers