Large Matrix
Large matrix computations are central to many scientific and machine learning applications, but their sheer size and the cost of data movement pose significant computational challenges. Current research focuses on faster algorithms for operations such as diagonalization and inversion, often drawing on randomized numerical linear algebra, compressed sensing, and deep learning to improve efficiency and scalability. These advances accelerate scientific computing, machine learning model training, and large-scale data analysis, making increasingly complex problems tractable. Efficient algorithms for structured matrices and operations performed directly on compressed data are particularly active areas of investigation.
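To make the randomized flavor of this work concrete, here is a minimal sketch of a randomized range finder for low-rank approximation (in the style of Halko, Martinsson, and Tropp), written with NumPy. The matrix sizes, oversampling parameter, and function name are illustrative assumptions, not taken from any paper listed below.

```python
# Illustrative randomized low-rank approximation; sizes and
# oversampling are arbitrary choices, not from the cited papers.
import numpy as np

def randomized_svd(A, rank, oversample=10, seed=0):
    """Approximate the top-`rank` SVD of A via a Gaussian sketch."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    k = min(rank + oversample, n)
    Omega = rng.standard_normal((n, k))   # random test matrix
    Y = A @ Omega                         # m x k sample of A's range
    Q, _ = np.linalg.qr(Y)                # orthonormal basis for that range
    B = Q.T @ A                           # small k x n projected problem
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_small
    return U[:, :rank], s[:rank], Vt[:rank, :]

# Usage: approximate a large, numerically low-rank matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 50)) @ rng.standard_normal((50, 1500))
U, s, Vt = randomized_svd(A, rank=50)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
print(f"relative error: {err:.2e}")       # small, since A has rank 50
```

The key cost saving is that the expensive factorization is applied only to the small sketched matrix B, while the large matrix A is touched just twice via matrix multiplies.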
Papers
What Operations can be Performed Directly on Compressed Arrays, and with What Error?
Tripti Agarwal, Harvey Dam, Dorra Ben Khalifa, Matthieu Martel, P. Sadayappan, Ganesh Gopalakrishnan
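As a toy illustration of the compressed-domain question this paper raises (emphatically not the paper's method, which targets real compressed array formats), the sketch below treats float64-to-float16 truncation as a stand-in lossy compressor, performs an addition directly on the compressed data, and measures the resulting error.

```python
# Toy compressed-domain arithmetic: float16 truncation stands in for
# a lossy compressor. This is an assumption for illustration only.
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(1_000_000)
b = rng.standard_normal(1_000_000)

a_c = a.astype(np.float16)                # "compressed" (4x smaller) copies
b_c = b.astype(np.float16)

exact = a + b                             # operate on full-precision data
compressed = (a_c + b_c).astype(np.float64)  # operate in the compressed domain

rel_err = np.linalg.norm(exact - compressed) / np.linalg.norm(exact)
print(f"relative error of compressed-domain add: {rel_err:.2e}")
```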
Recent and Upcoming Developments in Randomized Numerical Linear Algebra for Machine Learning
Michał Dereziński, Michael W. Mahoney
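One workhorse technique in randomized numerical linear algebra is sketch-and-solve for overdetermined least squares. The following is a minimal, illustrative version using a dense Gaussian sketch; practical implementations prefer faster structured or sparse sketches, and all sizes here are arbitrary assumptions.

```python
# Illustrative sketch-and-solve least squares; a dense Gaussian sketch
# is used for clarity, and the problem sizes are arbitrary.
import numpy as np

def sketched_lstsq(A, b, sketch_rows, seed=0):
    """Solve min ||Ax - b|| approximately via a Gaussian sketch S."""
    rng = np.random.default_rng(seed)
    m = A.shape[0]
    # S compresses m rows to sketch_rows; scaling keeps norms unbiased.
    S = rng.standard_normal((sketch_rows, m)) / np.sqrt(sketch_rows)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((10_000, 50))
b = rng.standard_normal(10_000)

x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
x_sketch = sketched_lstsq(A, b, sketch_rows=500)
ratio = np.linalg.norm(A @ x_sketch - b) / np.linalg.norm(A @ x_exact - b)
print(f"sketched residual / optimal residual: {ratio:.4f}")  # close to 1
```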