Feature-Wise
Feature-wise research explores how individual features within data contribute to model performance and interpretability across diverse machine learning tasks. Current efforts focus on methods for feature selection, extraction, and fusion, employing techniques such as sparse autoencoders, attention mechanisms, and graph convolutional networks to make better use of features and to improve model accuracy and explainability. This work matters for model efficiency, robustness, and trustworthiness, with applications ranging from medical image analysis and malware detection to natural language processing and financial forecasting.
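To make the idea of feature selection concrete, here is a minimal, hypothetical sketch (not taken from any of the papers below) of one of the simplest feature-wise methods: dropping features whose variance falls below a threshold. All function and variable names are illustrative.

```python
def variance(column):
    """Population variance of a sequence of numbers."""
    mean = sum(column) / len(column)
    return sum((x - mean) ** 2 for x in column) / len(column)

def select_features(rows, threshold=0.0):
    """Return indices of feature columns whose variance exceeds `threshold`.

    `rows` is a list of equal-length feature vectors.
    """
    columns = list(zip(*rows))  # transpose: one tuple per feature
    return [i for i, col in enumerate(columns) if variance(col) > threshold]

# Example: the middle feature is constant across samples, so it is dropped.
data = [
    [1.0, 5.0, 0.2],
    [2.0, 5.0, 0.9],
    [3.0, 5.0, 0.4],
]
print(select_features(data))  # → [0, 2]
```

Real systems typically go well beyond this, scoring features by mutual information with the target or learning them end to end, but the same principle applies: rank features by a usefulness criterion and keep only those that pass.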
Papers
Which Features are Learnt by Contrastive Learning? On the Role of Simplicity Bias in Class Collapse and Feature Suppression
Yihao Xue, Siddharth Joshi, Eric Gan, Pin-Yu Chen, Baharan Mirzasoleiman
Sharpness-Aware Minimization Leads to Low-Rank Features
Maksym Andriushchenko, Dara Bahri, Hossein Mobahi, Nicolas Flammarion
LFTK: Handcrafted Features in Computational Linguistics
Bruce W. Lee, Jason Hyung-Jong Lee
A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence
Junyi Zhang, Charles Herrmann, Junhwa Hur, Luisa Polania Cabrera, Varun Jampani, Deqing Sun, Ming-Hsuan Yang
Error Feedback Shines when Features are Rare
Peter Richtárik, Elnur Gasanov, Konstantin Burlachenko