End-to-End
"End-to-end" systems aim to streamline complex processes by integrating multiple stages into a single, unified model, eliminating the need for intermediate steps and potentially improving efficiency and performance. Current research focuses on applying this approach across diverse fields, utilizing architectures like transformers, reinforcement learning, and spiking neural networks to tackle challenges in autonomous driving, robotics, speech processing, and natural language processing. This approach offers significant potential for improving the accuracy, speed, and robustness of various applications, while also simplifying development and deployment.
Papers
EPro-PnP: Generalized End-to-End Probabilistic Perspective-n-Points for Monocular Object Pose Estimation
Hansheng Chen, Wei Tian, Pichao Wang, Fan Wang, Lu Xiong, Hao Li
Dense Distinct Query for End-to-End Object Detection
Shilong Zhang, Xinjiang Wang, Jiaqi Wang, Jiangmiao Pang, Chengqi Lyu, Wenwei Zhang, Ping Luo, Kai Chen
RegFormer: An Efficient Projection-Aware Transformer Network for Large-Scale Point Cloud Registration
Jiuming Liu, Guangming Wang, Zhe Liu, Chaokang Jiang, Marc Pollefeys, Hesheng Wang
End-to-End Integration of Speech Separation and Voice Activity Detection for Low-Latency Diarization of Telephone Conversations
Giovanni Morrone, Samuele Cornell, Luca Serafini, Enrico Zovato, Alessio Brutti, Stefano Squartini
LEAPS: End-to-End One-Step Person Search With Learnable Proposals
Zhiqiang Dong, Jiale Cao, Rao Muhammad Anwer, Jin Xie, Fahad Khan, Yanwei Pang
Efficient Multi-stage Inference on Tabular Data
Daniel S Johnson, Igor L Markov
One-to-Few Label Assignment for End-to-End Dense Detection
Shuai Li, Minghan Li, Ruihuang Li, Chenhang He, Lei Zhang
Towards End-to-End Generative Modeling of Long Videos with Memory-Efficient Bidirectional Transformers
Jaehoon Yoo, Semin Kim, Doyup Lee, Chiheon Kim, Seunghoon Hong
Knowledge Distillation from Multiple Foundation Models for End-to-End Speech Recognition
Xiaoyu Yang, Qiujia Li, Chao Zhang, Philip C. Woodland