Related Task
Research on related tasks aims to improve both the efficiency and the effectiveness of machine learning models across diverse applications. Current efforts concentrate on novel algorithms and architectures, such as structured sparsity for multi-task learning and knowledge distillation for end-to-end models, to address challenges including data scarcity, computational cost, and limited generalization. These advances strengthen performance on tasks in natural language processing, computer vision, and robotics, yielding more robust and efficient AI systems, with practical implications for fields ranging from healthcare and finance to manufacturing and environmental monitoring.
Papers
Enhancing Textbook Question Answering Task with Large Language Models and Retrieval Augmented Generation
Hessa Abdulrahman Alawwad, Areej Alhothali, Usman Naseem, Ali Alkhathlan, Amani Jamal
VlogQA: Task, Dataset, and Baseline Models for Vietnamese Spoken-Based Machine Reading Comprehension
Thinh Phuoc Ngo, Khoa Tran Anh Dang, Son T. Luu, Kiet Van Nguyen, Ngan Luu-Thuy Nguyen
Task Oriented Dialogue as a Catalyst for Self-Supervised Automatic Speech Recognition
David M. Chan, Shalini Ghosh, Hitesh Tulsiani, Ariya Rastrow, Björn Hoffmeister
Shayona@SMM4H23: COVID-19 Self diagnosis classification using BERT and LightGBM models
Rushi Chavda, Darshan Makwana, Vraj Patel, Anupam Shukla