Simulation Benchmark
Simulation benchmarks are increasingly used to evaluate algorithms and models across diverse fields, providing standardized testing environments and datasets for objective performance comparison. Current research focuses on realistic simulations for robotics (including manipulation, fetching, and human-robot interaction), causal inference in biological systems, and AI applications in manufacturing and retail, often employing reinforcement learning, deep learning, and contextual bandit algorithms. These benchmarks enable rigorous evaluation of novel methods, accelerate progress in AI and related fields, and ultimately support the development of more robust and reliable systems for real-world deployment.
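To illustrate the kind of evaluation loop such benchmarks standardize, the sketch below builds a toy simulated environment and compares a learning contextual-bandit policy against a random baseline under identical conditions. It is a minimal illustration, not taken from any of the papers listed on this page: the environment, the linear reward model, and all names and dimensions are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated environment (all parameters are illustrative):
# contexts are 5-dim feature vectors; each of 3 actions has a hidden
# linear reward function observed with Gaussian noise.
N_ACTIONS, DIM, STEPS = 3, 5, 10_000
true_weights = rng.normal(size=(N_ACTIONS, DIM))

def sample_context():
    return rng.normal(size=DIM)

def sample_reward(context, action):
    # Expected reward is linear in the context, plus observation noise.
    return float(true_weights[action] @ context + rng.normal(scale=0.1))

class RandomPolicy:
    """Non-learning baseline: uniform action selection."""
    def act(self, ctx):
        return int(rng.integers(N_ACTIONS))
    def update(self, ctx, action, reward):
        pass

class EpsGreedyLinear:
    """Epsilon-greedy contextual bandit with per-action ridge regression."""
    def __init__(self, eps=0.1):
        self.eps = eps
        self.A = [np.eye(DIM) for _ in range(N_ACTIONS)]    # Gram matrices
        self.b = [np.zeros(DIM) for _ in range(N_ACTIONS)]  # reward statistics
    def act(self, ctx):
        if rng.random() < self.eps:
            return int(rng.integers(N_ACTIONS))  # explore
        scores = [np.linalg.solve(self.A[a], self.b[a]) @ ctx
                  for a in range(N_ACTIONS)]
        return int(np.argmax(scores))            # exploit current estimates
    def update(self, ctx, action, reward):
        self.A[action] += np.outer(ctx, ctx)
        self.b[action] += reward * ctx

def run(policy, steps=STEPS):
    """Roll a policy through the simulated environment; return mean reward."""
    total = 0.0
    for _ in range(steps):
        ctx = sample_context()
        a = policy.act(ctx)
        r = sample_reward(ctx, a)
        policy.update(ctx, a, r)
        total += r
    return total / steps

print(f"random baseline: {run(RandomPolicy()):+.3f}")
print(f"eps-greedy     : {run(EpsGreedyLinear()):+.3f}")
```

Because both policies face the same simulated reward process with a fixed random seed, the difference in mean reward is attributable to the algorithm rather than to environment variation, which is the core property a simulation benchmark provides.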