Convolutional Neural Network
Convolutional Neural Networks (CNNs) are a class of deep learning models designed to process grid-like data, and they excel at image analysis and related tasks. Current research focuses on improving CNN efficiency and robustness, exploring efficient designs such as EfficientNet, transformer-based alternatives such as the Swin Transformer, and state-space approaches such as Mamba to address limitations in computational cost and long-range dependency modeling. This active field of research has significant implications across diverse applications, including medical image analysis (e.g., cancer detection, Alzheimer's diagnosis), damage assessment, and art forgery detection, demonstrating the power of CNNs for automating complex visual tasks.
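The grid-structured processing at the heart of a CNN is the discrete convolution (implemented in most frameworks as cross-correlation): a small kernel of shared weights slides over the input, so the same feature detector is applied at every spatial position. A minimal NumPy sketch of this operation (the function name and example values are illustrative, not from any of the papers below):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation: slide the kernel over the image,
    reusing the same weights at every position (weight sharing)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1  # output height ("valid" padding)
    ow = image.shape[1] - kw + 1  # output width
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Elementwise product of kernel and local patch, then sum.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector applied to an image with a step edge
# between the second and third columns.
image = np.array([
    [0., 0., 1., 1.],
    [0., 0., 1., 1.],
    [0., 0., 1., 1.],
    [0., 0., 1., 1.],
])
kernel = np.array([
    [-1., 0., 1.],
    [-1., 0., 1.],
    [-1., 0., 1.],
])
response = conv2d(image, kernel)  # strong response where the edge lies
```

Stacking such convolutions with nonlinearities and pooling yields the deep feature hierarchies that the efficiency- and architecture-focused papers below build on.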
Papers
TabConv: Low-Computation CNN Inference via Table Lookups
Neelesh Gupta, Narayanan Kannan, Pengmiao Zhang, Viktor Prasanna
Exploring Quantization and Mapping Synergy in Hardware-Aware Deep Neural Network Accelerators
Jan Klhufek, Miroslav Safar, Vojtech Mrazek, Zdenek Vasicek, Lukas Sekanina
Comparative Analysis of Image Enhancement Techniques for Brain Tumor Segmentation: Contrast, Histogram, and Hybrid Approaches
Shoffan Saifullah, Andri Pranolo, Rafał Dreżewski
HSViT: Horizontally Scalable Vision Transformer
Chenhao Xu, Chang-Tsun Li, Chee Peng Lim, Douglas Creighton
Equivariant graph convolutional neural networks for the representation of homogenized anisotropic microstructural mechanical response
Ravi Patel, Cosmin Safta, Reese E. Jones
Enhancing Breast Cancer Diagnosis in Mammography: Evaluation and Integration of Convolutional Neural Networks and Explainable AI
Maryam Ahmed, Tooba Bibi, Rizwan Ahmed Khan, Sidra Nasir
On the Efficiency of Convolutional Neural Networks
Andrew Lavin
InsectMamba: Insect Pest Classification with State Space Model
Qianning Wang, Chenglin Wang, Zhixin Lai, Yucheng Zhou
HAPNet: Toward Superior RGB-Thermal Scene Parsing via Hybrid, Asymmetric, and Progressive Heterogeneous Feature Fusion
Jiahang Li, Peng Yun, Qijun Chen, Rui Fan
ChangeMamba: Remote Sensing Change Detection With Spatiotemporal State Space Model
Hongruixuan Chen, Jian Song, Chengxi Han, Junshi Xia, Naoto Yokoya
DeiT-LT Distillation Strikes Back for Vision Transformer Training on Long-Tailed Datasets
Harsh Rangwani, Pradipto Mondal, Mayank Mishra, Ashish Ramayee Asokan, R. Venkatesh Babu
The SaTML '24 CNN Interpretability Competition: New Innovations for Concept-Level Interpretability
Stephen Casper, Jieun Yun, Joonhyuk Baek, Yeseong Jung, Minhwan Kim, Kiwan Kwon, Saerom Park, Hayden Moore, David Shriver, Marissa Connor, Keltin Grimes, Angus Nicolson, Arush Tagade, Jessica Rumbelow, Hieu Minh Nguyen, Dylan Hadfield-Menell
Learning in Convolutional Neural Networks Accelerated by Transfer Entropy
Adrian Moldovan, Angel Caţaron, Răzvan Andonie