Unbounded Variance
Unbounded variance, the condition in which a distribution has no finite second moment, poses significant challenges for statistical modeling and machine learning. Current research focuses on robust algorithms and estimators for this regime, particularly within Bayesian neural networks (employing α-stable processes and conditionally Gaussian representations) and stochastic optimization methods (such as Stochastic Gradient Descent and Mirror Descent). Handling unbounded variance is crucial for the reliability and applicability of these methods in domains such as online control, federated learning, and regression with heavy-tailed data, where the traditional finite-variance assumption fails.
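As a concrete illustration, the following NumPy sketch (the Pareto tail index of 1.5 and the clipping threshold are arbitrary illustrative choices, not taken from the papers below) first shows that the empirical variance of heavy-tailed samples never stabilizes as the sample size grows, and then sketches gradient norm clipping, one common robustification of SGD against heavy-tailed gradient noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# With tail index alpha < 2, Pareto samples have no finite second
# moment, so the running sample variance never settles as n grows.
# (alpha = 1.5 is an illustrative choice, not from any cited paper.)
alpha = 1.5
for n in (10**3, 10**5, 10**7):
    heavy = rng.pareto(alpha, size=n)   # heavy-tailed: E[X^2] is infinite
    light = rng.normal(size=n)          # light-tailed: E[X^2] = 1
    print(f"n={n:>8}  heavy-tail var={heavy.var():>14.1f}  "
          f"gaussian var={light.var():.3f}")

# Hypothetical clipped-SGD step: cap the gradient norm at a threshold
# tau before updating, so one extreme sample cannot blow up the step.
def clipped_sgd_step(theta, grad, lr=0.01, tau=5.0):
    norm = np.linalg.norm(grad)
    if norm > tau:
        grad = grad * (tau / norm)      # rescale gradient to norm tau
    return theta - lr * grad
```

Capping the update norm bounds the influence of any single extreme gradient sample, which is one common way to restore convergence guarantees that would otherwise require a finite-variance assumption on the gradient noise.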
Papers
October 21, 2024
October 2, 2024
August 5, 2024
May 24, 2024
May 6, 2024
February 15, 2024
December 5, 2023
November 2, 2023
May 18, 2023
February 2, 2023
October 3, 2022
August 5, 2022
February 23, 2022
February 19, 2022