Representation Alignment
Representation alignment focuses on aligning the internal representations of different systems, such as humans and AI models, to improve understanding, trust, and collaboration. Current research explores this through various methods, including aligning feature maps in deep neural networks, optimizing large language models based on human feedback, and developing metrics to quantify representational similarity across modalities (e.g., EEG signals and language). This work is crucial for enhancing AI trustworthiness, improving the efficiency of AI training, and enabling more effective human-AI interaction across diverse applications, from autonomous driving to personalized medicine.
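To make the idea of a representational-similarity metric concrete, the sketch below implements linear Centered Kernel Alignment (CKA), one commonly used measure for comparing representations from two different systems. The EEG-feature and language-embedding arrays are placeholder random data chosen only for illustration, not taken from any specific study.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment (CKA) between two representation
    matrices X (n_samples x d1) and Y (n_samples x d2), where rows are
    responses to the same stimuli. Returns a similarity score in [0, 1]."""
    # Center each feature dimension
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)

    # Linear CKA: ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = (np.linalg.norm(X.T @ X, ord="fro")
                   * np.linalg.norm(Y.T @ Y, ord="fro"))
    return numerator / denominator

# Hypothetical example: compare EEG-derived features with language-model
# embeddings for the same 200 stimuli (both arrays are synthetic).
rng = np.random.default_rng(0)
eeg_features = rng.normal(size=(200, 64))      # e.g., 64 EEG components
text_embeddings = rng.normal(size=(200, 768))  # e.g., 768-dim LM embeddings
print(f"Linear CKA: {linear_cka(eeg_features, text_embeddings):.3f}")
```

A metric of this kind lets researchers ask whether two systems organize the same stimuli in structurally similar ways, independent of the dimensionality of their internal representations.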