Model-Free
Model-free methods in machine learning and control learn optimal policies or decision rules directly from data, without explicit knowledge of the underlying system dynamics. Current research focuses on developing efficient model-free reinforcement learning algorithms, including those employing neural networks, normalizing flows, and advanced optimization techniques such as damped Newton methods, to address challenges in domains like robotics, process control, and anomaly detection. These approaches are significant because they offer greater flexibility and robustness than model-based methods when the system is complex or unknown, and they can improve performance and reduce computational cost in diverse applications.
Papers
Curriculum Learning and Imitation Learning for Model-free Control on Financial Time-series
Woosung Koh, Insu Choi, Yuntae Jang, Gimin Kang, Woo Chang Kim
A model-free approach to fingertip slip and disturbance detection for grasp stability inference
Dounia Kitouni, Mahdi Khoramshahi, Veronique Perdereau
Model-Free Source Seeking by a Novel Single-Integrator with Attenuating Oscillations and Better Convergence Rate: Robotic Experiments
Shivam Bajpai, Ahmed A. Elgohary, Sameh A. Eisa
TWIST: Teacher-Student World Model Distillation for Efficient Sim-to-Real Transfer
Jun Yamada, Marc Rigter, Jack Collins, Ingmar Posner