Computational Hardness

Computational hardness studies the inherent difficulty of solving computational problems, focusing on establishing lower bounds on the time or resources any algorithm must expend. Current research investigates this in various machine learning contexts, including the interpretability of models, the efficiency of training certain neural network architectures (such as those with additive structure), and the difficulty of inverting generative models. These results inform algorithm design, guiding the development of more efficient methods and delineating theoretical limits on what is computationally feasible in areas such as AI and cryptography.
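To make the flavor of such hardness results concrete: absent exploitable structure, inverting a map from seeds to outputs can require searching the entire seed space, whose size grows exponentially with seed length. The sketch below uses a toy stand-in for a generative model (`toy_generator` and its constants are purely illustrative assumptions, not any real model) and counts how many candidate seeds a brute-force inverter examines.

```python
from itertools import product

def toy_generator(seed_bits):
    # Hypothetical toy "generative model": a fixed nonlinear map from an
    # n-bit seed to an integer output (illustration only, not a real model).
    x = 0
    for b in seed_bits:
        x = (x * 31 + b * 17 + 7) % 1009
    return x

def invert_by_search(target, n):
    """Brute-force inversion: try up to 2**n seeds until one matches.

    Returns the recovered seed (or None) and the number of seeds tried;
    in the worst case the count is 2**n, i.e. exponential in n.
    """
    tried = 0
    for seed in product([0, 1], repeat=n):
        tried += 1
        if toy_generator(seed) == target:
            return seed, tried
    return None, tried

# Demo: hide a 12-bit seed, then recover some preimage by exhaustive search.
n = 12
secret = (1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1)
target = toy_generator(secret)
recovered, tried = invert_by_search(target, n)
print(recovered is not None, tried <= 2 ** n)
```

Hardness lower bounds of the kind surveyed above aim to show that, for suitable problem families, no algorithm can do fundamentally better than this exponential search.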

Papers