Redundant Representation
Research on redundant representation in machine learning aims to mitigate the inefficiencies that arise when a system carries multiple overlapping features or model components. Current work explores ways to reduce this redundancy, including optimized indexing for approximate nearest neighbor search, ensemble pruning via liquid-democracy-inspired algorithms, and masked quantization for autoregressive image generation. These efforts seek to lower computational cost and improve performance by concentrating on the most informative features, yielding faster and more accurate models across applications such as image processing and natural language processing.
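As a concrete illustration of pruning overlapping features, the sketch below drops any feature that is near-duplicated (by absolute Pearson correlation) by a feature already kept. This is a minimal, generic example of correlation-based redundancy filtering, not the method of any specific paper above; the function name `prune_redundant_features` and the 0.95 threshold are illustrative assumptions.

```python
import numpy as np

def prune_redundant_features(X, threshold=0.95):
    """Return indices of columns of X that are not near-duplicates
    (|Pearson correlation| >= threshold) of an already-kept column."""
    # Feature-by-feature correlation matrix (rowvar=False treats columns as variables).
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(corr.shape[0]):
        # Keep column j only if it is not strongly correlated with any kept column.
        if all(corr[j, k] < threshold for k in keep):
            keep.append(j)
    return keep

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=(200, 3))
    # Columns 3-5 are noisy copies of columns 0-2, i.e. redundant features.
    X = np.hstack([base, base + 0.01 * rng.normal(size=base.shape)])
    print("retained feature indices:", prune_redundant_features(X))  # expected: [0, 1, 2]
```

A greedy first-come-first-kept scan like this is the simplest policy; more elaborate schemes instead rank features by informativeness before filtering, in the spirit of the redundancy-reduction methods surveyed above.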