Random Duality Theory
Random Duality Theory (RDT) is a mathematical framework for analyzing the capacity and performance of high-dimensional random systems, particularly in machine learning contexts. Current research applies RDT to precisely characterize the performance of a range of models, including linear regression, treelike neural networks with diverse activation functions, and spherical perceptrons, often yielding closed-form expressions for key quantities such as prediction risk and network capacity. These analyses provide rigorous theoretical bounds and insight into model behavior, clarifying the capabilities and limitations of such models in tasks like causal inference and low-rank recovery. The resulting precise characterizations inform model design and the interpretation of performance in practical applications.
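As an illustration of the kind of closed-form capacity result such analyses target, the sketch below numerically evaluates the classical Gardner-type formula for the storage capacity of the spherical perceptron at margin kappa, alpha_c(kappa) = ( integral_{-kappa}^{infinity} (z + kappa)^2 phi(z) dz )^{-1} with phi the standard normal density, a result that RDT-based arguments are known to recover. The function name and the use of SciPy quadrature are illustrative assumptions, not taken from any specific paper.

```python
import numpy as np
from scipy.integrate import quad

def perceptron_capacity(kappa):
    """Critical load alpha_c(kappa) of the spherical perceptron
    (Gardner-type formula; recovered by RDT-based analyses)."""
    # Standard normal density phi(z)
    phi = lambda z: np.exp(-z**2 / 2) / np.sqrt(2 * np.pi)
    # 1 / alpha_c(kappa) = integral over z >= -kappa of (z + kappa)^2 phi(z)
    integral, _ = quad(lambda z: (z + kappa)**2 * phi(z), -kappa, np.inf)
    return 1.0 / integral

# Zero-margin check: alpha_c(0) = 2, the classical (Cover) capacity
print(perceptron_capacity(0.0))  # ~2.0
```

At zero margin the integral equals 1/2, so the critical load is 2 constraints per dimension; increasing kappa shrinks the feasible region on the sphere and lowers the capacity accordingly.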