High Pruning Regime
High pruning regimes aim to drastically reduce the size and computational cost of deep learning models while incurring little or no accuracy loss, and they are a major focus of current research. Work in this area develops efficient algorithms, such as combinatorial optimization and variational methods, that strategically remove individual parameters or entire structures (e.g., neurons, attention heads, or tiles) in a single one-shot pass after training. These techniques are being explored across architectures, including convolutional neural networks (CNNs), graph convolutional networks (GCNs), and large language models, with particular emphasis on preserving topological consistency in GCNs and mitigating the accuracy degradation that becomes severe at high sparsity levels. Such advances are crucial for deploying large models on resource-constrained devices and for improving the scalability of privacy-preserving training techniques.
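
To make the one-shot post-training setting concrete, the sketch below applies global magnitude pruning in PyTorch at a high sparsity level. This is only an illustrative assumption of the general workflow, not any of the specific combinatorial or variational methods mentioned above; the toy model, the 90% sparsity target, and the layer selection are all hypothetical.

```python
# Minimal sketch of one-shot post-training magnitude pruning in PyTorch.
# Assumptions: a toy MLP stands in for a pretrained model, and global
# L1-magnitude pruning stands in for the more sophisticated methods
# (combinatorial optimization, variational approaches) discussed above.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Hypothetical pretrained model stand-in.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Collect the weight tensors of all linear layers for global pruning.
params_to_prune = [
    (m, "weight") for m in model.modules() if isinstance(m, nn.Linear)
]

# One-shot global pruning: zero out the 90% of weights with the smallest
# absolute value across all selected layers (a "high pruning regime").
prune.global_unstructured(
    params_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.9,
)

# Fold the pruning masks into the weights so the sparsity is permanent.
for module, name in params_to_prune:
    prune.remove(module, name)

# Report the resulting per-layer sparsity.
for i, (module, _) in enumerate(params_to_prune):
    w = module.weight
    sparsity = float((w == 0).sum()) / w.numel()
    print(f"linear layer {i}: {sparsity:.1%} of weights pruned")
```

Because the pruning here is unstructured, the accuracy impact at 90% sparsity can be large; structured variants (removing whole neurons or attention heads) and the accuracy-recovery strategies surveyed above address exactly this degradation.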