Negative Example
Negative examples, instances of what a model *should not* generate or predict, are crucial for training effective machine learning models, particularly in natural language processing. Current research focuses on improving how negative examples are selected, generated, and used, exploring techniques such as contrastive learning, targeted negative training, and dynamic hard negative mining across a variety of model architectures. These advances aim to improve model performance, reduce bias, and make training more efficient in diverse applications, including language modeling, knowledge graph embedding, and question answering.
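To make two of the techniques named above concrete, here is a minimal, self-contained sketch of a contrastive (InfoNCE-style) loss over negative examples and a simple hard-negative mining step. This is an illustrative NumPy implementation under generic assumptions, not the method of any particular paper; the function names and the choice of cosine similarity are our own.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Contrastive (InfoNCE) loss: pull the anchor toward the positive
    and push it away from the negative examples.

    The positive sits at index 0 of the logits; the loss is the
    softmax cross-entropy of classifying it correctly among the negatives.
    """
    logits = np.array([cosine(anchor, positive)] +
                      [cosine(anchor, n) for n in negatives]) / temperature
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

def mine_hard_negatives(anchor, candidates, k=2):
    """Hard-negative mining: keep the k candidates most similar to the
    anchor, since near-miss negatives give the strongest training signal."""
    sims = np.array([cosine(anchor, c) for c in candidates])
    order = np.argsort(sims)[::-1]
    return [candidates[i] for i in order[:k]]
```

The loss grows as negatives become more similar to the anchor, which is why mining hard (near-miss) negatives, rather than sampling them uniformly, tends to speed up training.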