Potential Scalability
Research on scalability in machine learning develops algorithms and architectures that handle massive datasets and complex models efficiently, addressing the limits existing methods hit as data grows. Current work emphasizes distributed training for graph neural networks (GNNs), efficient negative sampling strategies for extreme classification, and optimized algorithms for tasks such as recommendation and causal discovery, often employing novel architectures like Mamba and leveraging hardware acceleration (e.g., FPGAs and GPUs). These advances are crucial for applying powerful machine learning models to real-world problems involving vast amounts of data, with impact across scientific computing, personalized medicine, environmental monitoring, and industrial automation.
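To make one of these techniques concrete, below is a minimal sketch of uniform negative sampling for extreme classification in PyTorch: instead of computing a softmax over all labels, each example's positive label is scored against a small random sample of negatives, which is what makes million-label output layers tractable. All names and sizes here (num_labels, embed_dim, num_neg, sampled_loss) are illustrative assumptions for this sketch and are not drawn from any of the listed papers.

```python
# Illustrative sketch of uniform negative sampling for extreme
# classification; sizes and names are assumptions, not from the papers.
import torch
import torch.nn.functional as F

num_labels, embed_dim, num_neg = 100_000, 64, 50

# One embedding row per label; the full softmax over num_labels is
# what negative sampling lets us avoid.
label_emb = torch.nn.Embedding(num_labels, embed_dim)

def sampled_loss(query, pos_label):
    """Score each positive label against num_neg uniformly sampled
    negatives rather than the entire label set."""
    neg_labels = torch.randint(0, num_labels, (query.size(0), num_neg))
    pos = (query * label_emb(pos_label)).sum(-1, keepdim=True)      # (B, 1)
    neg = torch.einsum('bd,bnd->bn', query, label_emb(neg_labels))  # (B, num_neg)
    logits = torch.cat([pos, neg], dim=1)  # positive sits at index 0
    target = torch.zeros(query.size(0), dtype=torch.long)
    return F.cross_entropy(logits, target)

# Usage: one training step on random data.
query = torch.randn(32, embed_dim)
pos_label = torch.randint(0, num_labels, (32,))
loss = sampled_loss(query, pos_label)
loss.backward()
```

The design choice is the usual one for extreme classification: per step, the cost drops from O(num_labels) to O(num_neg), at the price of a biased gradient estimate; more refined samplers (e.g., frequency-based or in-batch negatives) trade sampling cost against that bias.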
Papers
On the Scalability of GNNs for Molecular Graphs
Maciej Sypetkowski, Frederik Wenkel, Farimah Poursafaei, Nia Dickson, Karush Suri, Philip Fradkin, Dominique Beaini
Quantum-inspired Techniques in Tensor Networks for Industrial Contexts
Alejandro Mata Ali, Iñigo Perez Delgado, Aitor Moreno Fdez. de Leceta
CATGNN: Cost-Efficient and Scalable Distributed Training for Graph Neural Networks
Xin Huang, Weipeng Zhuo, Minh Phu Vuong, Shiju Li, Jongryool Kim, Bradley Rees, Chul-Ho Lee
Towards Scalable & Efficient Interaction-Aware Planning in Autonomous Vehicles using Knowledge Distillation
Piyush Gupta, David Isele, Sangjae Bae