Trade-Offs
Trade-offs in computational and machine learning settings involve balancing competing objectives, most commonly accuracy against efficiency (energy consumption, computational cost, or latency). Current research focuses on optimizing these trade-offs across diverse applications, using techniques such as ensemble learning, low-rank decomposition of large language models, and alternative neural network architectures such as spiking neural networks. Understanding and mitigating these trade-offs is crucial for developing sustainable and efficient AI systems, improving performance in resource-constrained applications, and advancing the broader field of machine learning.
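To make one of these trade-offs concrete, the sketch below (a hypothetical NumPy example, not taken from any of the listed papers) applies the low-rank decomposition idea to a single weight matrix: truncated SVD yields the best rank-r approximation in Frobenius norm, so choosing r directly trades parameter count against reconstruction error.

import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))  # hypothetical dense layer weight
# Truncated SVD: keeping the top-r singular triples gives the best
# rank-r approximation of W in Frobenius norm.
U, s, Vt = np.linalg.svd(W, full_matrices=False)

for rank in (16, 64, 256):
    A = U[:, :rank] * s[:rank]  # (1024, rank): left factor scaled by singular values
    B = Vt[:rank, :]            # (rank, 1024): right factor
    params = A.size + B.size    # parameters stored after decomposition
    rel_err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
    print(f"rank={rank:4d}  params kept: {params / W.size:6.1%}  "
          f"rel. error: {rel_err:.3f}")

Note that a random Gaussian matrix has a flat spectrum, so the error here stays high at low rank; trained weight matrices typically have faster-decaying spectra, which is what makes low-rank compression of large language models attractive in practice.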
Papers
Statistical-Computational Trade-offs for Greedy Recursive Partitioning Estimators
Yan Shuo Tan, Jason M. Klusowski, Krishnakumar Balasubramanian
PhoneLM: an Efficient and Capable Small Language Model Family through Principled Pre-training
Rongjie Yi, Xiang Li, Weikai Xie, Zhenyan Lu, Chenghua Wang, Ao Zhou, Shangguang Wang, Xiwen Zhang, Mengwei Xu