Large Sample

Large sample theory investigates the behavior of statistical estimators and algorithms as the sample size grows, establishing asymptotic properties such as consistency, convergence rates, and limiting distributions. Current research applies this framework to methods including gradient boosting, stochastic gradient algorithms, and kernel regression, often employing techniques such as multiply robust estimation to improve reliability. This work aims to provide rigorous theoretical foundations for practical applications, leading to better tuning strategies, more accurate performance predictions, and a deeper understanding of model behavior in high-dimensional settings. Ultimately, these advances improve the reliability and efficiency of machine learning methods across scientific and engineering domains.
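
As a concrete illustration of the central large-sample phenomenon, the sketch below empirically checks that the sample mean's error shrinks at the classic O(1/√n) rate as the sample size grows. This is a minimal, self-contained example using only NumPy; the exponential distribution, replication count, and sample sizes are arbitrary choices for illustration, not drawn from any particular paper surveyed here.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Classic large-sample fact: for i.i.d. data with finite variance sigma^2,
# the sample mean is consistent and its error shrinks at rate O(1/sqrt(n)),
# so sqrt(n) * E|mean_hat - mean| should stabilize near sigma * sqrt(2/pi).
true_mean = 2.0  # exponential with scale 2 => mean 2, standard deviation 2
reps = 500       # Monte Carlo replications per sample size

for n in [100, 1_000, 10_000]:
    # Draw `reps` independent samples of size n and compute each sample mean.
    samples = rng.exponential(scale=true_mean, size=(reps, n))
    errors = np.abs(samples.mean(axis=1) - true_mean)
    print(f"n={n:>6}  mean |error| = {errors.mean():.4f}  "
          f"sqrt(n) * mean |error| = {np.sqrt(n) * errors.mean():.3f}")
```

Running this shows the raw error shrinking while the √n-scaled error settles near σ√(2/π) ≈ 1.60 for this distribution, which is the kind of stabilizing behavior that asymptotic analyses of more complex estimators aim to characterize.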

Papers