Vision-Based
Vision-based research applies computer vision and machine learning to interpret visual data for a wide range of applications. Current efforts concentrate on improving the accuracy and robustness of vision systems, particularly with deep learning architectures such as convolutional neural networks and transformers, often incorporating self-supervised learning and vision-language models to improve performance and generalization. The field underpins advances in autonomous driving, robotics, precision agriculture, and healthcare, enabling more efficient and intelligent systems across diverse sectors. Building large, high-quality datasets and rigorous evaluation metrics is also a key area of ongoing research.
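As a minimal illustration of the convolution operation at the heart of the CNN architectures mentioned above, the sketch below implements a single 2D "valid" convolution (strictly, a cross-correlation, as most deep learning libraries compute it) in pure Python. The `conv2d` helper and the edge-detector example are illustrative assumptions, not drawn from any of the listed papers; real systems use optimized tensor libraries.

```python
# Toy sketch of the core CNN operation: slide a small kernel over an
# image and record the dot product at each position ("valid" padding,
# stride 1). Pure Python for clarity only.

def conv2d(image, kernel):
    """Return the feature map of `kernel` cross-correlated with `image`,
    both given as lists of lists of numbers."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge detector applied to an image whose right half is bright:
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [
    [-1, 1],
    [-1, 1],
]
feature_map = conv2d(image, kernel)
# → [[0.0, 2.0, 0.0], [0.0, 2.0, 0.0]]: the response peaks at the 0→1 edge.
```

In a trained CNN, many such kernels are learned from data rather than hand-designed, and the resulting feature maps are passed through nonlinearities and stacked into deeper layers.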
Papers
Flex: End-to-End Text-Instructed Visual Navigation with Foundation Models
Makram Chahine, Alex Quach, Alaa Maalouf, Tsun-Hsuan Wang, Daniela Rus
Risk Assessment for Autonomous Landing in Urban Environments using Semantic Segmentation
Jesús Alejandro Loera-Ponce, Diego A. Mercado-Ravell, Israel Becerra-Durán, Luis Manuel Valentin-Coronado
Optical Lens Attack on Deep Learning Based Monocular Depth Estimation
Ce Zhou (1), Qiben Yan (1), Daniel Kent (1), Guangjing Wang (1), Ziqi Zhang (2), Hayder Radha (1) ((1) Michigan State University, (2) Peking University)
A vision-based framework for human behavior understanding in industrial assembly lines
Konstantinos Papoutsakis, Nikolaos Bakalos, Konstantinos Fragkoulis, Athena Zacharia, Georgia Kapetadimitri, Maria Pateraki
HVT: A Comprehensive Vision Framework for Learning in Non-Euclidean Space
Jacob Fein-Ashley, Ethan Feng, Minh Pham