Attribution Mask

Attribution masks are computational tools used to understand and explain the decision-making processes of complex models, particularly large language models and deep neural networks. Current research focuses on improving the accuracy and efficiency of attribution methods and on addressing issues such as bias, noise, and scalability, often employing influence functions, watermarking, and gradient-based approaches within architectures including transformers and graph convolutional networks. This work is crucial for enhancing model transparency, protecting intellectual property in AI-generated content, and improving the reliability and trustworthiness of AI systems across diverse applications.
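
To make the gradient-based approach concrete, below is a minimal sketch of how an attribution mask can be derived from input gradients (a simple saliency-style method). The toy model, the `gradient_attribution_mask` function name, and the input shape are illustrative assumptions, not taken from any specific paper in this collection.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for any differentiable model (illustrative only).
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
model.eval()

def gradient_attribution_mask(model, x, target_class):
    """Compute a per-feature attribution mask from input gradients (saliency)."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]   # logit of the class being explained
    score.backward()                    # d(score) / d(input)
    mask = x.grad.abs().squeeze(0)      # sensitivity magnitude per input feature
    return mask / (mask.max() + 1e-12)  # normalise to [0, 1]

# Example usage: explain class 2 for a random input.
x = torch.randn(1, 16)
mask = gradient_attribution_mask(model, x, target_class=2)
print(mask)
```

Many of the methods surveyed here refine this basic idea, for example by smoothing or debiasing the gradients, or by replacing raw gradients with influence-function or path-integrated estimates, to reduce the noise and bias a plain saliency mask exhibits.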

Papers