Deep Network
Deep networks are artificial neural networks with multiple layers that learn intricate patterns from data by approximating complex functions. Current research focuses on improving their efficiency (e.g., through dataset distillation and novel activation functions), enhancing their interpretability (e.g., via re-label distillation and analysis of input space mode connectivity), and addressing challenges such as noisy labels and domain shifts. These advances are crucial for extending deep networks across diverse fields, from financial modeling and medical image analysis to time series classification and natural language processing, while simultaneously increasing their reliability and trustworthiness.
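The core idea above, that a deep network is a stack of layers composing into a flexible function approximator, can be sketched minimally. This is an illustrative NumPy example, not any specific architecture from the papers listed below; the layer sizes and the ReLU activation are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Elementwise nonlinearity; without it the stack collapses to one linear map.
    return np.maximum(0.0, x)

def init_layers(sizes):
    # One (weight, bias) pair per layer; small random weights (illustrative).
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    # Compose affine maps with nonlinearities on all but the final layer.
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = relu(x)
    return x

# Input dim 4, two hidden layers of width 16, scalar output.
layers = init_layers([4, 16, 16, 1])
y = forward(layers, rng.normal(size=(8, 4)))
print(y.shape)  # (8, 1)
```

Training such a network (by gradient descent on a loss) is what lets the composed layers fit the "intricate patterns" the summary refers to; the forward pass above only shows the function class.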
Papers
Beyond Transfer Learning: Co-finetuning for Action Localisation
Anurag Arnab, Xuehan Xiong, Alexey Gritsenko, Rob Romijnders, Josip Djolonga, Mostafa Dehghani, Chen Sun, Mario Lučić, Cordelia Schmid
Complementing Brightness Constancy with Deep Networks for Optical Flow Prediction
Vincent Le Guen, Clément Rambour, Nicolas Thome
Combining Deep Learning with Good Old-Fashioned Machine Learning
Moshe Sipper
Transfer Learning via Test-Time Neural Networks Aggregation
Bruno Casella, Alessio Barbaro Chisari, Sebastiano Battiato, Mario Valerio Giuffrida
Guillotine Regularization: Why removing layers is needed to improve generalization in Self-Supervised Learning
Florian Bordes, Randall Balestriero, Quentin Garrido, Adrien Bardes, Pascal Vincent