Visual Understanding
Visual understanding research aims to enable computers to interpret and reason about images and videos as humans do, covering tasks from object recognition and scene description to complex visual reasoning. Current work relies heavily on large vision-language models, often built on vision transformers and improved through techniques such as chain-of-thought prompting and visual instruction tuning. The field is central to advancing artificial intelligence, with applications ranging from robotics and autonomous driving to medical image analysis and accessibility tools for visually impaired people.
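The chain-of-thought prompting mentioned above can be sketched in a few lines. This is a minimal, hedged illustration: `query_vlm` is a hypothetical placeholder for a real vision-language model call (an API client or local model), and only the prompt construction is shown.

```python
# Sketch of chain-of-thought prompting for a vision-language model.
# `query_vlm` is a hypothetical stand-in, not a real library call.

def build_cot_prompt(question: str) -> str:
    """Wrap a visual question with a step-by-step reasoning cue."""
    return (
        f"Question about the attached image: {question}\n"
        "Let's think step by step before giving the final answer."
    )

def query_vlm(image_path: str, prompt: str) -> str:
    """Placeholder for a real vision-language model call (assumption)."""
    raise NotImplementedError("Swap in an actual VLM client here.")

prompt = build_cot_prompt("How many traffic lights are visible?")
print(prompt)
```

The only substantive idea is the appended reasoning cue, which in practice nudges large models to emit intermediate reasoning before the answer; everything else is scaffolding around whichever model you actually use.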
Papers
ClassifyViStA: WCE Classification with Visual Understanding through Segmentation and Attention
S. Balasubramanian, Ammu Abhishek, Yedu Krishna, Darshan Gera
Unveiling Visual Perception in Language Models: An Attention Head Analysis Approach
Jing Bi, Junjia Guo, Yunlong Tang, Lianggong Bruce Wen, Zhang Liu, Chenliang Xu