DR-Submodular Optimization
DR-submodular optimization focuses on maximizing functions that exhibit the diminishing-returns (DR) property (for differentiable functions, a gradient that is entry-wise non-increasing as the input grows), a structure that makes many non-convex problems in machine learning and operations research amenable to approximation. Current research emphasizes efficient algorithms, particularly projection-free Frank-Wolfe methods and boosted gradient ascent, across a range of settings: online learning (with full-information, semi-bandit, and bandit feedback), adversarial scenarios, and decentralized optimization. These advances improve approximation guarantees and convergence rates over prior techniques for problems constrained by general or down-closed convex sets. The resulting algorithms find applications in diverse fields such as resource allocation, influence maximization, and sensor placement.
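As a concrete illustration of the projection-free approach, the sketch below implements a Frank-Wolfe-style continuous greedy loop for a monotone DR-submodular objective over a down-closed polytope. The probabilistic-coverage objective, the cardinality-style constraint, and all function names here are illustrative assumptions, not drawn from any specific paper; the point is the structure of the method: at each step, a linear maximization oracle replaces a projection, and the iterate moves a small step 1/K toward the oracle's output, which keeps the final point feasible.

```python
import numpy as np

def f(x, w):
    # Probabilistic coverage objective: monotone DR-submodular on [0,1]^n
    # (assumed example; any monotone DR-submodular f would do).
    return 1.0 - np.prod(1.0 - w * x)

def grad_f(x, w):
    # d f / d x_i = w_i * prod_{j != i} (1 - w_j x_j); safe since w < 1.
    prod = np.prod(1.0 - w * x)
    return w * prod / (1.0 - w * x)

def frank_wolfe_continuous_greedy(w, b, K=100):
    """Maximize f over the down-closed set {v in [0,1]^n : sum(v) <= b}."""
    n = len(w)
    x = np.zeros(n)
    for _ in range(K):
        g = grad_f(x, w)
        # Linear maximization oracle (no projection needed): put mass 1
        # on the b coordinates with the largest positive gradient.
        v = np.zeros(n)
        top = np.argsort(-g)[:b]
        v[top[g[top] > 0]] = 1.0
        x += v / K  # averaging K feasible directions keeps x feasible
    return x
```

For instance, with coverage weights `w = [0.3, 0.5, 0.2, 0.4]` and budget `b = 2`, the loop concentrates mass on the two highest-weight coordinates. In the monotone setting this style of continuous greedy is known to achieve a (1 - 1/e) approximation guarantee, which is the kind of bound the algorithms surveyed above refine under weaker feedback and constraint models.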