Context Detection
Context detection research focuses on identifying information presented outside its original context, a prevalent problem in misinformation and in large language model (LLM) safety. Current efforts concentrate on building robust multimodal models, often based on transformer architectures and on techniques such as contrastive learning and logic regularization, to improve both accuracy and interpretability. This work is crucial for curbing the spread of misinformation and for strengthening the trustworthiness and safety of LLMs by addressing their ability to infer and exploit knowledge implicitly present in their training data.
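To illustrate the contrastive-learning angle mentioned above: a common setup scores an image-caption pair by the cosine similarity of their embeddings from a contrastively trained encoder, and flags low-similarity pairs as potentially out of context. The sketch below uses toy hand-written embeddings and an arbitrary threshold in place of real encoder outputs; none of these values come from the papers summarized here.

```python
import math


def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


def is_out_of_context(image_emb, caption_emb, threshold=0.5):
    # A contrastively trained encoder places matching image/caption
    # pairs close together in embedding space, so low similarity is
    # evidence that the caption is presented out of its original
    # context. The threshold here is an illustrative placeholder.
    return cosine_similarity(image_emb, caption_emb) < threshold


# Toy embeddings standing in for encoder outputs (hypothetical values).
matched = ([0.9, 0.1, 0.2], [0.8, 0.2, 0.1])
mismatched = ([0.9, 0.1, 0.2], [-0.1, 0.9, 0.3])
print(is_out_of_context(*matched))     # high similarity -> False
print(is_out_of_context(*mismatched))  # low similarity  -> True
```

In practice the embeddings would come from a multimodal model such as a CLIP-style dual encoder, and the decision threshold would be tuned on labeled in-context/out-of-context pairs rather than fixed by hand.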
Papers
November 6, 2024
July 18, 2024
June 20, 2024
June 11, 2024
June 7, 2024
May 18, 2024
January 29, 2024
January 22, 2024
September 1, 2023