Benchmark Platform
Benchmark platforms across scientific domains aim to provide standardized evaluations of models and algorithms, enabling fair comparisons and driving research progress. Current work focuses on building comprehensive benchmarks for diverse areas, including natural language processing, computer vision, robotics, and healthcare, and on evaluating novel model classes such as large language models. These platforms advance their fields by facilitating reproducible research, exposing the limitations of existing methods, and ultimately supporting more robust and reliable systems with real-world applications. The resulting insights inform the development of improved algorithms and contribute to a more rigorous and transparent scientific process.
Papers
Improved statistical benchmarking of digital pathology models using pairwise frames evaluation
Ylaine Gerardin, John Shamshoian, Judy Shen, Nhat Le, Jamie Prezioso, John Abel, Isaac Finberg, Daniel Borders, Raymond Biju, Michael Nercessian, Vaed Prasad, Joseph Lee, Spencer Wyman, Sid Gupta, Abigail Emerson, Bahar Rahsepar, Darpan Sanghavi, Ryan Leung, Limin Yu, Archit Khosla, Amaro Taylor-Weiner
On the Detectability of ChatGPT Content: Benchmarking, Methodology, and Evaluation through the Lens of Academic Writing
Zeyan Liu, Zijun Yao, Fengjun Li, Bo Luo