Intra Class

Intra-class research focuses on improving how data points of the same class are represented in machine learning models, aiming for greater intra-class compactness and clearer separation from other classes (inter-class separability). Current work emphasizes loss-function modifications (e.g., margin losses or centroid-based constraints), contrastive learning methods, and attention mechanisms integrated into architectures such as vision transformers and autoencoders. These advances improve accuracy and robustness in challenging settings such as few-shot learning, class-incremental learning, and domain adaptation, yielding more effective models across diverse applications.
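One common centroid-based constraint is a center-loss-style penalty: the mean squared distance between each feature vector and the centroid of its class, which, when minimized alongside a classification loss, pulls same-class features together. A minimal NumPy sketch (the function name, toy features, and centroids are illustrative, not from any specific paper):

```python
import numpy as np

def center_loss(features, labels, centers):
    """Mean half squared distance between each feature and its class centroid.

    Minimizing this term increases intra-class compactness: features of
    the same class are pulled toward a shared centroid.
    """
    # centers[labels] selects, per sample, the centroid of that sample's class
    diffs = features - centers[labels]
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))

# Toy example: two classes in a 2-D feature space (hypothetical data)
features = np.array([[1.0, 1.0], [1.2, 0.8], [-1.0, -1.0], [-0.8, -1.2]])
labels = np.array([0, 0, 1, 1])
centers = np.array([[1.1, 0.9], [-0.9, -1.1]])  # one centroid per class

loss = center_loss(features, labels, centers)
```

In practice this penalty is added to a softmax or margin loss with a weighting coefficient, and the centroids are updated during training rather than fixed as here.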

Papers