Determinant Maximization
Determinant maximization seeks a subset of vectors that maximizes the volume of the parallelepiped they span, a problem with applications in fields such as machine learning and signal processing. Current research emphasizes efficient algorithms, including greedy approaches and projected Newton-like methods, with particular attention to coreset construction for scalability and to biologically plausible neural network architectures for blind source separation. These advances improve the efficiency and applicability of determinant maximization in high-dimensional settings, with impact on areas ranging from data analysis to quantum physics simulations.
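To make the greedy approach mentioned above concrete, the following is a minimal sketch of a standard greedy volume-maximization heuristic (pivoted Gram-Schmidt style), written with NumPy. The function name, parameters, and example data are illustrative assumptions, not taken from any specific paper listed on this page.

```python
import numpy as np

def greedy_detmax(V, k):
    """Greedily select k columns of V (shape d x n) whose Gram matrix has
    (approximately) maximal determinant, i.e. whose parallelepiped has
    large volume. Illustrative sketch, not a reference implementation."""
    d, n = V.shape
    selected = []
    # Residuals of all columns w.r.t. the span of the selected set.
    R = V.astype(float).copy()
    for _ in range(min(k, n)):
        norms = np.linalg.norm(R, axis=0)
        norms[selected] = -np.inf          # never re-pick a column
        j = int(np.argmax(norms))
        if norms[j] <= 1e-12:              # remaining vectors are (near) dependent
            break
        selected.append(j)
        # Project all residuals orthogonally to the chosen direction, so the
        # next norms equal distances to the current span; each greedy step
        # multiplies the squared volume by norms[j] ** 2.
        u = R[:, j] / norms[j]
        R -= np.outer(u, u @ R)
    return selected

# Usage: pick 3 of 10 random vectors in R^5 and report the spanned volume.
rng = np.random.default_rng(0)
V = rng.standard_normal((5, 10))
S = greedy_detmax(V, 3)
G = V[:, S].T @ V[:, S]                    # Gram matrix of the chosen vectors
print(S, np.sqrt(np.linalg.det(G)))        # selected indices and volume
```

Each iteration picks the vector farthest from the span of the current selection, which is exactly the factor by which the spanned volume grows; this is the usual rationale for greedy heuristics in determinant maximization.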