Large-Scale Benchmark Datasets
Large-scale benchmark datasets are crucial for advancing computer vision, robotics, and natural language processing, as they provide the substantial, high-quality training data needed to develop and evaluate robust machine learning models. Current research focuses on creating such datasets for diverse applications, including damage assessment from aerial imagery, scene understanding in remote sensing, high-dynamic-range video reconstruction, and multimodal intent recognition in conversations. The availability of these datasets significantly accelerates the development of more accurate and efficient algorithms, ultimately improving real-world applications such as disaster response, autonomous systems, and human-computer interaction.
Papers
A High-Quality and Large-Scale Dataset for English-Vietnamese Speech Translation
Linh The Nguyen, Nguyen Luong Tran, Long Doan, Manh Luong, Dat Quoc Nguyen
MetaGraspNet: A Large-Scale Benchmark Dataset for Scene-Aware Ambidextrous Bin Picking via Physics-based Metaverse Synthesis
Maximilian Gilles, Yuhao Chen, Tim Robin Winter, E. Zhixuan Zeng, Alexander Wong