Layer Similarity
Layer similarity research seeks to understand and quantify how the representations learned at different layers of a deep neural network (DNN) relate to one another, particularly in transformer and convolutional architectures. Current work measures these relationships with metrics such as cosine similarity, centered kernel alignment (CKA), and graph-based approaches, comparing layers across models and training strategies, with applications in model compression, efficient training, and federated learning. These analyses shed light on model behavior, for instance by revealing which layers learn redundant representations, and thereby enable more efficient training (e.g., through knowledge distillation or multi-exit architectures) as well as a better understanding of DNN internal representations for interpretability and robustness.
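
As a concrete illustration of one of the metrics named above, the sketch below computes linear CKA between the activation matrices of two layers and builds the layer-by-layer similarity matrix commonly plotted in this line of work. The function name `linear_cka`, the NumPy-based implementation, and the `activations` variable are illustrative assumptions, not a reference implementation from any particular paper.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear centered kernel alignment (CKA) between two layers.

    X: (n_samples, d1) activations from one layer
    Y: (n_samples, d2) activations from another layer
    Returns a similarity score in [0, 1], where 1 indicates
    maximally similar representations.
    """
    # Center each feature dimension over the sample axis.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)

    # HSIC-based formulation for the linear kernel:
    # CKA = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = (np.linalg.norm(X.T @ X, ord="fro")
                   * np.linalg.norm(Y.T @ Y, ord="fro"))
    return float(numerator / denominator)

# Hypothetical usage: `activations` stands in for per-layer
# activation matrices collected on the same batch of inputs;
# random data is used here purely to keep the sketch runnable.
rng = np.random.default_rng(0)
activations = [rng.standard_normal((256, 64)) for _ in range(4)]
sim = np.array([[linear_cka(a, b) for b in activations]
                for a in activations])
print(np.round(sim, 2))  # diagonal entries are 1.0
```

Unlike raw cosine similarity between flattened activations, linear CKA is invariant to orthogonal transformations and isotropic scaling of the features, which is one reason it is widely used for cross-layer and cross-model comparisons.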