New Task
Research on "new tasks" in machine learning focuses on developing and evaluating models capable of handling diverse and complex data modalities and problem types. Current efforts concentrate on improving multimodal embedding models (e.g., using contrastive learning and transformer architectures), addressing challenges in long-context processing and few-shot learning, and creating benchmarks for evaluating model performance across various domains (e.g., legal, medical, financial). This work is significant because it pushes the boundaries of AI capabilities, enabling more robust and adaptable systems with applications ranging from improved medical diagnosis to more efficient industrial processes.
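Since the summary highlights contrastive learning as the dominant approach for multimodal embedding models, a minimal sketch may help make that concrete. The example below implements a CLIP-style symmetric InfoNCE objective with NumPy; the function names, the toy embeddings, and the temperature value are illustrative assumptions, not taken from any of the listed papers.

```python
import numpy as np

def info_nce_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    Row i of img_emb and row i of txt_emb form a positive pair;
    every other row in the batch serves as an in-batch negative.
    (Illustrative sketch, not code from the listed papers.)
    """
    # L2-normalize so the dot product equals cosine similarity
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)

    logits = img @ txt.T / temperature   # (batch, batch) similarity matrix
    labels = np.arange(len(logits))      # positives lie on the diagonal

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)  # stabilize the softmax
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # Average the image->text and text->image directions
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))

# Toy usage: perfectly aligned pairs should score lower than random pairs
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
loss_aligned = info_nce_loss(emb, emb)
loss_random = info_nce_loss(emb, rng.normal(size=(4, 8)))
```

Pulling negatives from the same batch is what lets this objective scale: larger batches give harder negatives for free, which is one reason contrastive training pairs well with the transformer encoders mentioned above.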
Papers
Simultaneous Localization and Affordance Prediction for Tasks in Egocentric Video
Zachary Chavis, Hyun Soo Park, Stephen J. Guy
PLANTS: A Novel Problem and Dataset for Summarization of Planning-Like (PL) Tasks
Vishal Pallagani, Biplav Srivastava, Nitin Gupta
MO-EMT-NAS: Multi-Objective Continuous Transfer of Architectural Knowledge Between Tasks from Different Datasets
Peng Liao, XiLu Wang, Yaochu Jin, WenLi Du
4M-21: An Any-to-Any Vision Model for Tens of Tasks and Modalities
Roman Bachmann, Oğuzhan Fatih Kar, David Mizrahi, Ali Garjani, Mingfei Gao, David Griffiths, Jiaming Hu, Afshin Dehghan, Amir Zamir
Sharing Matters: Analysing Neurons Across Languages and Tasks in LLMs
Weixuan Wang, Barry Haddow, Minghao Wu, Wei Peng, Alexandra Birch