Computational Hardness
Computational hardness studies the inherent difficulty of computational problems, with a focus on establishing lower bounds on the time or other resources any algorithm must expend to solve them. Current research examines hardness in several machine learning settings, including the interpretability of trained models, the efficiency of training particular neural network architectures (such as those with additive structure), and the difficulty of inverting generative models. These results inform algorithm design by guiding the development of more efficient methods and by delimiting what is computationally feasible in areas such as AI and cryptography.
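As a rough illustration of the kind of cost such hardness results formalize, the sketch below inverts a toy generative map by exhaustive search. The model here (a single sign-activated linear layer over binary latent codes) is an assumption chosen for brevity, not a construction from the listed papers; the point is only that exact inversion by enumeration scales as 2^n in the latent dimension.

```python
# Toy sketch (illustrative assumption, not the papers' construction):
# brute-force inversion of a generative map g(z) = sign(W z) over z in {0,1}^n.
# The candidate space has 2^n codes, so exhaustive inversion is exponential
# in the latent dimension -- the worst-case behavior that hardness-of-inversion
# results pin down with lower bounds.

import itertools
import numpy as np

def make_generator(n_latent, n_out, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n_out, n_latent))
    def g(z):
        return np.sign(W @ z)
    return g

def brute_force_invert(g, target, n_latent):
    """Enumerate all 2**n_latent binary codes; return one that maps to `target`, if any."""
    for bits in itertools.product([0.0, 1.0], repeat=n_latent):
        z = np.array(bits)
        if np.array_equal(g(z), target):
            return z
    return None

if __name__ == "__main__":
    n_latent, n_out = 12, 20              # 2**12 = 4096 candidate latent codes
    g = make_generator(n_latent, n_out)
    z_true = np.array([1.0, 0.0] * (n_latent // 2))
    x = g(z_true)                          # observed output to invert
    z_hat = brute_force_invert(g, x, n_latent)
    print("recovered a preimage:", z_hat is not None)
```

Doubling the latent dimension squares the search space, which is why worst-case lower bounds of this kind matter even when heuristic inverters work well on typical inputs.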