Paper ID: 2409.06754

Scaling Law Hypothesis for Multimodal Models

Qingyun Sun, Zhen Guo

We propose a scaling law hypothesis for multimodal models that process text, audio, images, and video within a shared token and embedding space. Our framework predicts model performance based on modality-specific compression and tokenization efficiency, extending established scaling laws from text-based decoder models to mixed-modality systems. We explore whether leveraging more training data across multiple modalities can reduce the size of a multimodal model, enabling efficient deployment on resource-constrained devices.
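To illustrate the general form such a hypothesis might take, one can extend a Chinchilla-style loss formula by converting each modality's raw tokens into effective text-equivalent tokens. The specific symbols below, in particular the per-modality efficiency factor e_m, are our own illustrative assumptions and are not taken from the paper:

L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{\left( \sum_{m} e_m D_m \right)^{\beta}}

Here N is the parameter count, D_m the number of raw tokens contributed by modality m, e_m a compression and tokenization efficiency factor relative to text, and E, A, B, \alpha, \beta fitted constants. Under this kind of formulation, a modality with higher e_m contributes more effective training signal per token, which is one way additional multimodal data could substitute for model size.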

Submitted: Sep 10, 2024