Super Network Training
Super network training is a technique in neural architecture search (NAS) that trains a single large, highly parameterized "super-network" whose weight-sharing sub-networks span a vast space of candidate architectures. Research focuses on making this training efficient, often through low-rank fine-tuning and novel sampling strategies that mitigate gradient conflicts between sub-networks, particularly for vision transformers (ViTs) and large language models (LLMs). Because every sub-network inherits its weights from the shared super-network, candidate architectures can be extracted and evaluated without being trained from scratch; this enables the discovery of smaller, specialized sub-networks optimized for specific hardware constraints and tasks, improving performance and reducing resource consumption in applications such as mobile device deployment and assistive technologies.
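As a concrete illustration, the sketch below trains a tiny weight-sharing super-network in PyTorch: a two-layer MLP whose hidden width is elastic, with one sub-network width sampled uniformly at random per step (single-path sampling). The names (ElasticLinear, SuperNet), the width choices, and the synthetic data are illustrative assumptions, not drawn from any specific paper.

```python
import random

import torch
import torch.nn as nn
import torch.nn.functional as F


class ElasticLinear(nn.Module):
    """Linear layer whose output width can be shrunk at run time.

    Every sampled width slices into the same weight tensor, so training
    any sub-network updates the shared super-network parameters.
    """

    def __init__(self, in_features: int, max_out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(max_out_features, in_features) * 0.02)
        self.bias = nn.Parameter(torch.zeros(max_out_features))

    def forward(self, x: torch.Tensor, out_features: int) -> torch.Tensor:
        # Use only the first `out_features` rows: weight sharing across widths.
        return F.linear(x, self.weight[:out_features], self.bias[:out_features])


class SuperNet(nn.Module):
    """Two-layer MLP super-network with an elastic hidden width."""

    def __init__(self, in_dim=32, hidden_choices=(16, 32, 64, 128), out_dim=10):
        super().__init__()
        self.hidden_choices = hidden_choices
        self.fc1 = ElasticLinear(in_dim, max(hidden_choices))
        self.fc2 = nn.Linear(max(hidden_choices), out_dim)

    def forward(self, x: torch.Tensor, hidden: int) -> torch.Tensor:
        h = torch.relu(self.fc1(x, hidden))
        # Slice the second layer's input columns to match the sampled width.
        return F.linear(h, self.fc2.weight[:, :hidden], self.fc2.bias)


net = SuperNet()
opt = torch.optim.SGD(net.parameters(), lr=0.1)

for step in range(100):
    x = torch.randn(64, 32)                     # synthetic inputs
    y = torch.randint(0, 10, (64,))             # synthetic labels
    hidden = random.choice(net.hidden_choices)  # single-path sub-network sampling
    loss = F.cross_entropy(net(x, hidden), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After training, each width in hidden_choices defines a deployable sub-network that reuses the shared weights directly. In practice, sampling schemes such as the sandwich rule (always co-training the largest and smallest sub-networks alongside randomly sampled ones) are commonly used to reduce the gradient conflicts mentioned above.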