Large Relevance Improvement
Research on large relevance improvement focuses on boosting the accuracy and efficiency of existing systems by optimizing established models and algorithms rather than building new ones from scratch. Current efforts concentrate on refining architectures such as transformers and convolutional neural networks, using techniques like parameter-efficient fine-tuning, dynamic loss weighting, and ensemble learning to improve accuracy, training stability, and generalization. These advances have implications across computer vision, natural language processing, reinforcement learning, and scientific computing, yielding more robust and effective solutions in applications ranging from autonomous vehicles to biomedical image analysis.
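To make one of the named techniques concrete: dynamic loss weighting rebalances multiple training objectives on the fly so that no single task's loss dominates the combined objective. A minimal sketch of one simple scheme (inverse-magnitude weighting; the function names and the scheme itself are illustrative, not taken from any paper listed below):

```python
def dynamic_loss_weights(losses, eps=1e-8):
    """Weight each task loss by the inverse of its current magnitude,
    normalized to sum to 1, so larger losses are down-weighted.
    This is one simple dynamic-weighting heuristic among many."""
    inverse = [1.0 / (loss + eps) for loss in losses]
    total = sum(inverse)
    return [w / total for w in inverse]

def combined_loss(losses):
    """Combine per-task losses with the dynamically computed weights."""
    weights = dynamic_loss_weights(losses)
    return sum(w * loss for w, loss in zip(weights, losses))

# Example: a task with loss 4.0 receives a quarter of the weight
# given to a task with loss 1.0, pulling the objective toward balance.
print(combined_loss([1.0, 4.0]))  # 0.8*1.0 + 0.2*4.0 = 1.6
```

In practice the weights would be recomputed each training step (or smoothed over steps), and more sophisticated schemes learn the weights jointly with the model, e.g. via per-task uncertainty parameters.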
Papers
Ontology-Aware RAG for Improved Question-Answering in Cybersecurity Education
Chengshuai Zhao, Garima Agrawal, Tharindu Kumarage, Zhen Tan, Yuli Deng, Ying-Chih Chen, Huan Liu
Rate-In: Information-Driven Adaptive Dropout Rates for Improved Inference-Time Uncertainty Estimation
Tal Zeevi, Ravid Shwartz-Ziv, Yann LeCun, Lawrence H. Staib, John A. Onofrey
Scaling Inference-Time Search with Vision Value Model for Improved Visual Comprehension
Xiyao Wang, Zhengyuan Yang, Linjie Li, Hongjin Lu, Yuancheng Xu, Chung-Ching Lin, Kevin Lin, Furong Huang, Lijuan Wang
IRisPath: Enhancing Off-Road Navigation with Robust IR-RGB Fusion for Improved Day and Night Traversability
Saksham Sharma, Akshit Raizada, Suresh Sundaram