Deep Network
Deep networks are artificial neural networks with many layers that learn intricate patterns from data by approximating complex functions. Current research focuses on improving their efficiency (e.g., through dataset distillation and novel activation functions), enhancing their interpretability (e.g., via re-label distillation and analysis of input-space mode connectivity), and addressing challenges such as noisy labels and domain shift. These advances are crucial for expanding the applicability of deep networks across diverse fields, from financial modeling and medical image analysis to time series classification and natural language processing, while improving their reliability and trustworthiness.
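To make "multiple layers approximating a complex function" concrete, here is a minimal sketch of a deep network's forward pass in plain NumPy. The layer sizes, ReLU activation choice, and helper names are illustrative assumptions, not drawn from any of the papers listed below.

```python
import numpy as np

def init_layers(sizes, seed=0):
    """Create one (weight, bias) pair per layer transition.

    sizes: e.g. [4, 16, 16, 2] gives two hidden layers (assumed sizes).
    """
    rng = np.random.default_rng(seed)
    return [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x, layers):
    """Stack of affine maps with ReLU nonlinearities; final layer linear."""
    for w, b in layers[:-1]:
        x = np.maximum(0.0, x @ w + b)  # ReLU keeps the composition nonlinear
    w, b = layers[-1]
    return x @ w + b

layers = init_layers([4, 16, 16, 2])  # 4 inputs -> 2 hidden layers -> 2 outputs
y = forward(np.ones((3, 4)), layers)  # batch of 3 examples
print(y.shape)  # (3, 2)
```

Each hidden layer composes a simple nonlinear transformation; stacking several is what lets the network approximate functions no single layer could represent.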
Papers
Sparse Interaction Additive Networks via Feature Interaction Detection and Sparse Selection
James Enouen, Yan Liu
Neural Collapse with Normalized Features: A Geometric Analysis over the Riemannian Manifold
Can Yaras, Peng Wang, Zhihui Zhu, Laura Balzano, Qing Qu
Lazy vs hasty: linearization in deep networks impacts learning schedule based on example difficulty
Thomas George, Guillaume Lajoie, Aristide Baratin
Restructurable Activation Networks
Kartikeya Bhardwaj, James Ward, Caleb Tung, Dibakar Gope, Lingchuan Meng, Igor Fedorov, Alex Chalfin, Paul Whatmough, Danny Loh
An Evolutionary, Gradient-Free, Query-Efficient, Black-Box Algorithm for Generating Adversarial Instances in Deep Networks
Raz Lapid, Zvika Haramaty, Moshe Sipper
How does the degree of novelty impacts semi-supervised representation learning for novel class retrieval?
Quentin Leroy, Olivier Buisson, Alexis Joly