Near-Optimality
Near-optimality research develops algorithms and models whose solutions come provably close to the theoretical optimum, addressing limitations in computational efficiency and generalization. Current efforts concentrate on improving established methods such as stochastic gradient descent and evolution strategies, particularly in high-dimensional and noisy settings, as well as on exploring Bayesian frameworks and newer approaches such as multi-fidelity best-arm identification. These advances have significant implications for machine learning, optimization, and control systems, where they enable more efficient and robust solutions to complex problems.
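To make the stochastic-gradient setting concrete, the following is a minimal sketch, not drawn from any of the listed papers: averaged SGD on a noisy one-dimensional quadratic. The objective f(x) = (x - 2)^2, the 1/sqrt(t) step size, the noise level, and the helper name noisy_grad are all illustrative assumptions; the point is only that iterate averaging drives the suboptimality gap near zero despite noisy gradients.

```python
import numpy as np

# Illustrative sketch: averaged SGD on the noisy quadratic
# f(x) = (x - 2)^2, whose optimum is x* = 2. With a decaying step
# size, the Polyak-Ruppert average of the iterates lands near the
# optimum even though each gradient estimate is corrupted by noise.

rng = np.random.default_rng(0)

def noisy_grad(x, noise_std=1.0):
    """Unbiased stochastic gradient of f(x) = (x - 2)^2."""
    return 2.0 * (x - 2.0) + noise_std * rng.standard_normal()

x, avg = 0.0, 0.0
n_steps = 10_000
for t in range(1, n_steps + 1):
    x -= noisy_grad(x) / t**0.5     # step size ~ 1/sqrt(t)
    avg += (x - avg) / t            # running Polyak-Ruppert average

gap = (avg - 2.0) ** 2              # suboptimality f(avg) - f(x*)
print(f"averaged iterate: {avg:.3f}, suboptimality gap: {gap:.2e}")
```

The averaging step is the design choice that matters here: the raw iterate keeps fluctuating with the gradient noise, while the running average smooths those fluctuations and attains a near-optimal value, which is the flavor of guarantee the papers below establish rigorously in far more general settings.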
Papers
Private Stochastic Convex Optimization with Heavy Tails: Near-Optimality from Simple Reductions
Hilal Asi, Daogao Liu, Kevin Tian
Contextual Dynamic Pricing: Algorithms, Optimality, and Local Differential Privacy Constraints
Zifeng Zhao, Feiyu Jiang, Yi Yu
Optimality of Matrix Mechanism on $\ell_p^p$-metric
Jingcheng Liu, Jalaj Upadhyay, Zongrui Zou