Representation Alignment
Representation alignment studies how to bring the internal representations of different systems, such as humans and AI models, into correspondence in order to improve understanding, trust, and collaboration. Current research approaches this in several ways, including aligning feature maps in deep neural networks, optimizing large language models with human feedback, and developing metrics that quantify representational similarity across modalities (e.g., between EEG signals and language model embeddings). This work matters for enhancing AI trustworthiness, improving the efficiency of AI training, and enabling more effective human-AI interaction across diverse applications, from autonomous driving to personalized medicine.
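To make the idea of a representational-similarity metric concrete, below is a minimal sketch of linear Centered Kernel Alignment (CKA), one widely used way to compare two sets of representations of the same stimuli. It is offered as an illustrative example, not the method of any particular paper listed here; the variable names and array shapes are assumptions for the sketch.

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear Centered Kernel Alignment between two representations.

    X: (n_samples, d1) activations from one system (e.g., a network layer).
    Y: (n_samples, d2) activations from another system (e.g., EEG features),
       recorded for the same n_samples stimuli.
    Returns a similarity score in [0, 1]; higher means the two representations
    agree up to rotation and isotropic scaling.
    """
    # Center each feature dimension across samples.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)

    # Squared Frobenius norm of the cross-covariance, normalized by the
    # self-covariance norms of each representation.
    cross = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)

if __name__ == "__main__":
    # Toy usage: compare a representation with a linear readout of itself
    # (hypothetical data, for illustration only).
    rng = np.random.default_rng(0)
    acts_model = rng.standard_normal((200, 64))
    acts_other = acts_model @ rng.standard_normal((64, 32))
    print(f"CKA: {linear_cka(acts_model, acts_other):.3f}")  # expect a high score
```

Metrics of this kind give a single scalar that can be tracked while aligning feature maps across networks or across modalities, which is what makes them useful as both evaluation tools and training signals.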