Input Sparsity

Input sparsity research focuses on efficiently processing large datasets by exploiting the inherent sparsity (the presence of many zero or negligible values) in the data. Current efforts concentrate on algorithms that run in input-sparsity time, meaning the computational cost scales linearly with the number of non-zero entries rather than with the full data size, for tasks such as tensor low-rank approximation and the regularized regression problems arising in large language models and video question answering. These algorithms rely on techniques such as adaptive input selection, leverage score sampling, and approximate Newton methods. The resulting speedups promise significant improvements in computational efficiency and scalability for machine learning applications, particularly those involving high-dimensional data.
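
To make the notion of input-sparsity time concrete, here is a minimal sketch using a CountSketch transform, a classic input-sparsity-time technique from randomized numerical linear algebra (not the method of any specific paper listed below). Applying the sketch touches each stored entry of the matrix exactly once, so reducing an overdetermined least-squares problem costs O(nnz(A)). All names, sizes, and parameters in the example are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp

def countsketch(A, b, m, rng):
    """Apply a CountSketch matrix S (m x n) to A and b in O(nnz(A)) time.

    Each row i of A is hashed to one bucket h(i) with a random sign s(i),
    so S has exactly one nonzero per column and S @ A visits every stored
    entry of A exactly once.
    """
    n = A.shape[0]
    h = rng.integers(0, m, size=n)        # bucket assignment per row
    s = rng.choice([-1.0, 1.0], size=n)   # random sign per row
    S = sp.csr_matrix((s, (h, np.arange(n))), shape=(m, n))
    return S @ A, S @ b

# Illustrative problem sizes (assumed, not from any cited paper).
rng = np.random.default_rng(0)
n, d = 100_000, 50
A = sp.random(n, d, density=1e-2, random_state=0, format="csr")
x_true = rng.standard_normal(d)
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Sketch 100k rows down to 5k, then solve the small dense problem.
SA, Sb = countsketch(A, b, m=5000, rng=rng)
x_hat, *_ = np.linalg.lstsq(SA.toarray(), Sb, rcond=None)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Because each column of S has a single nonzero, applying it is one streaming pass over the stored entries; leverage score sampling, mentioned above, achieves comparable guarantees by instead sampling rows with probability proportional to their statistical leverage.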

Papers