Markovian Sampling

Research on Markovian sampling examines the challenges and opportunities of using data generated by a Markov chain in machine learning algorithms, with a particular focus on improving the efficiency and convergence guarantees of stochastic approximation methods. Current work analyzes the impact of Markovian sampling on algorithms such as temporal difference learning, actor-critic methods, and stochastic gradient descent, often employing continuous normalizing flows or operator-valued stochastic gradient descent for model training. This research advances both the theoretical understanding and the practical application of reinforcement learning, federated learning, and other settings where data is inherently sequential or generated by a stochastic process, leading to more efficient and robust algorithms.
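
To make the setting concrete, the sketch below shows a stochastic approximation update (TD(0) value estimation) driven by consecutive states of a Markov chain, so successive samples are correlated rather than i.i.d. The specific chain, rewards, step size, and discount factor are illustrative assumptions, not drawn from any particular paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state Markov chain (transition matrix P) with fixed per-state rewards.
P = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])
rewards = np.array([1.0, 0.0, -1.0])
gamma = 0.9      # discount factor (assumed)
alpha = 0.05     # constant step size (assumed)
V = np.zeros(3)  # value-function estimates

# Generate a single trajectory: each sample depends on the previous state,
# which is exactly the Markovian (non-i.i.d.) sampling regime studied here.
state = 0
for t in range(20_000):
    next_state = rng.choice(3, p=P[state])
    # TD(0) update using the correlated pair (state, next_state).
    td_error = rewards[state] + gamma * V[next_state] - V[state]
    V[state] += alpha * td_error
    state = next_state

print("TD(0) value estimates under Markovian sampling:", V)
```

The bias and slower mixing introduced by such correlated samples, compared with i.i.d. draws from the chain's stationary distribution, is what the convergence analyses referenced above aim to quantify.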

Papers