Deep Network
Deep networks are artificial neural networks with many stacked layers that learn intricate patterns from data by approximating complex functions. Current research focuses on improving their efficiency (e.g., through dataset distillation and novel activation functions), enhancing their interpretability (e.g., via re-label distillation and analysis of input-space mode connectivity), and addressing challenges such as noisy labels and domain shift. These advances are crucial for extending deep networks to diverse fields, from financial modeling and medical image analysis to time series classification and natural language processing, while also making them more reliable and trustworthy.
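To make the "multiple layers approximating a complex function" idea concrete, here is a minimal sketch of a deep network in PyTorch. It is purely illustrative and assumes the torch package is available; the layer widths, depth, and single-output head are arbitrary choices and are not taken from any of the papers listed below.

```python
# Minimal sketch of a deep (multi-layer) network, assuming PyTorch is installed.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 64),   # input features -> first hidden layer
    nn.ReLU(),           # non-linearity between layers lets the stack approximate complex functions
    nn.Linear(64, 64),   # second hidden layer
    nn.ReLU(),
    nn.Linear(64, 1),    # output layer (e.g., a single regression target)
)

x = torch.randn(8, 16)   # a batch of 8 random 16-dimensional inputs
y = model(x)             # forward pass; y has shape (8, 1)
print(y.shape)
```

Stacking several linear layers with non-linear activations in between is what distinguishes a deep network from a single-layer (shallow) model; without the activations, the composition would collapse to one linear map.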
Papers
Attacking deep networks with surrogate-based adversarial black-box methods is easy
Nicholas A. Lord, Romain Mueller, Luca Bertinetto
Deep vanishing point detection: Geometric priors make dataset variations vanish
Yancong Lin, Ruben Wiersma, Silvia L. Pintea, Klaus Hildebrandt, Elmar Eisemann, Jan C. van Gemert
Learning to Generate Synthetic Training Data using Gradient Matching and Implicit Differentiation
Dmitry Medvedev, Alexander D'yakonov
Extracting associations and meanings of objects depicted in artworks through bi-modal deep networks
Gregory Kell, Ryan-Rhys Griffiths, Anthony Bourached, David G. Stork
DKMA-ULD: Domain Knowledge augmented Multi-head Attention based Robust Universal Lesion Detection
Manu Sheoran, Meghal Dani, Monika Sharma, Lovekesh Vig