Layer Representation

Layer representation in deep learning focuses on understanding and leveraging the information encoded at different layers of a neural network, with the aim of improving model efficiency, accuracy, and interpretability. Current research examines how layer representations vary across model architectures (e.g., transformers, convolutional networks) and tasks, investigating phenomena such as neural collapse and the role of specific layers in contextualization and concept acquisition. This work is important both for advancing fundamental understanding of deep learning mechanisms and for practical applications, including efficient inference, improved generalization, and robust out-of-distribution detection.
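To make the notion of "information encoded at different layers" concrete, here is a minimal sketch of extracting per-layer representations from a toy feedforward network. This is an illustrative assumption, not code from any of the papers below: the network, its ReLU nonlinearity, and the `layer_representations` helper are all hypothetical, but the same pattern (capturing each intermediate activation during a forward pass) underlies layer-wise analyses such as probing or neural-collapse measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_representations(x, weights):
    """Run a forward pass and return the representation produced by each layer.

    x: (batch, d_in) input batch.
    weights: list of (d_in, d_out) weight matrices (hypothetical toy network,
    linear layers followed by ReLU).
    """
    reps = []
    h = x
    for W in weights:
        h = np.maximum(h @ W, 0.0)  # ReLU activation
        reps.append(h)              # capture this layer's representation
    return reps

# Toy 3-layer network applied to a batch of 4 inputs.
weights = [
    rng.standard_normal((8, 16)),
    rng.standard_normal((16, 16)),
    rng.standard_normal((16, 4)),
]
x = rng.standard_normal((4, 8))
reps = layer_representations(x, weights)
print([r.shape for r in reps])  # one representation matrix per layer
```

In practice one would register forward hooks on a trained model (e.g. in PyTorch) rather than reimplement the forward pass, but the captured quantities are the same: one activation tensor per layer, which can then be probed, compared across layers, or used for out-of-distribution scoring.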

Papers