Hardness Results
Hardness results in machine learning and related fields characterize the computational complexity of learning problems, identifying inherent limitations and guiding the development of more efficient algorithms. Current research focuses on characterizing hardness across learning paradigms, including neural networks (e.g., how architecture and data distribution affect learnability), reinforcement learning (the difficulty of policy evaluation and optimization under different assumptions), and knowledge graph embedding (the challenges of negative sample generation). These findings advance theoretical understanding and inform the design of algorithms that are both efficient and robust, particularly in resource-constrained settings or with complex, high-dimensional data.
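One capacity measure that recurs in such hardness analyses (and in the first paper listed below) is Rademacher complexity. As a minimal, hedged illustration of the underlying definition, the sketch below Monte Carlo estimates the empirical Rademacher complexity of a finite hypothesis class given its predictions on a sample; the function name and setup are illustrative assumptions, not taken from any of the listed papers.

```python
import numpy as np

def empirical_rademacher(predictions, n_trials=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity
    R_S(H) = E_sigma[ sup_{h in H} (1/n) sum_i sigma_i * h(x_i) ]
    for a *finite* hypothesis class (illustrative sketch only).

    predictions: array of shape (num_hypotheses, n), where row j holds
                 h_j(x_1), ..., h_j(x_n).
    """
    rng = np.random.default_rng(seed)
    _, n = predictions.shape
    total = 0.0
    for _ in range(n_trials):
        # Draw i.i.d. Rademacher signs sigma_i in {-1, +1}.
        sigma = rng.choice([-1.0, 1.0], size=n)
        # Supremum over hypotheses of the average correlation with the signs.
        total += np.max(predictions @ sigma) / n
    return total / n_trials

# Toy example: the two constant classifiers {+1, -1} on n = 100 points.
# The true value is E|mean(sigma)| ~ sqrt(2/(pi*n)) ~ 0.08 here.
preds = np.vstack([np.ones(100), -np.ones(100)])
est = empirical_rademacher(preds)
```

Larger, richer hypothesis classes drive the supremum (and hence the estimate) up, which is exactly why Rademacher complexity appears in generalization and hardness bounds.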
Papers
On the Hardness of Robustness Transfer: A Perspective from Rademacher Complexity over Symmetric Difference Hypothesis Space
Yuyang Deng, Nidham Gazagnadou, Junyuan Hong, Mehrdad Mahdavi, Lingjuan Lyu
Testing Stationarity Concepts for ReLU Networks: Hardness, Regularity, and Robust Algorithms
Lai Tian, Anthony Man-Cho So