Event Description
Event description research focuses on accurately representing and understanding events across data modalities, including video, sensor streams (e.g., event cameras, LiDAR), and text. Current work emphasizes robust models, often neural networks such as transformers and convolutional networks, together with algorithms such as Kalman filters and Hawkes processes, to fuse heterogeneous data sources and improve event localization, reconstruction, and reasoning. This research advances computer vision, robotics, and natural language processing, with applications ranging from autonomous driving and anomaly detection to high-energy physics and medical diagnosis. Building large, diverse datasets is also a key focus, enabling more accurate and generalizable models.
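To make the sensor-fusion idea concrete, here is a minimal sketch (not any specific paper's method) of a scalar Kalman measurement update, used to blend two noisy readings of the same quantity from different sensors; all names and the example values are illustrative.

```python
def kalman_update(x, p, z, r):
    """One scalar Kalman measurement update.

    x: prior estimate, p: prior variance,
    z: new measurement, r: measurement noise variance.
    Returns the fused (posterior) estimate and variance.
    """
    k = p / (p + r)          # Kalman gain: weight given to the new measurement
    x_new = x + k * (z - x)  # blend prior estimate with measurement
    p_new = (1 - k) * p      # fused estimate has lower variance than the prior
    return x_new, p_new

# Hypothetical example: fuse two noisy position readings
# (say, LiDAR then camera) into one estimate.
x, p = 0.0, 1e6                        # uninformative prior
x, p = kalman_update(x, p, 10.2, 0.5)  # first sensor reading
x, p = kalman_update(x, p, 9.8, 0.5)   # second sensor reading
```

With equally noisy sensors, the fused estimate lands between the two readings and its variance drops below either sensor's noise variance, which is the core property that makes Kalman-style fusion attractive for multi-sensor event localization.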
Papers
Generating event descriptions under syntactic and semantic constraints
Angela Cao, Faye Holt, Jonas Chan, Stephanie Richter, Lelia Glass, Aaron Steven White
A Novel Task-Driven Method with Evolvable Interactive Agents Using Event Trees for Enhanced Emergency Decision Support
Xingyu Xiao, Peng Chen, Ben Qi, Jingang Liang, Jiejuan Tong, Haitao Wang
Thinking Fast and Laterally: Multi-Agentic Approach for Reasoning about Uncertain Emerging Events
Stefan Dernbach, Alejandro Michel, Khushbu Agarwal, Christopher Brissette, Geetika Gupta, Sutanay Choudhury
EventSplat: 3D Gaussian Splatting from Moving Event Cameras for Real-time Rendering
Toshiya Yura, Ashkan Mirzaei, Igor Gilitschenski