Neural Network Output
Neural network output analysis focuses on understanding and improving the reliability, interpretability, and usability of predictions generated by deep learning models. Current research emphasizes uncertainty quantification, constraint satisfaction in model outputs, and methods for improving the faithfulness and robustness of predictions, often using techniques such as gradient descent, simulated annealing, and Bayesian inference across architectures ranging from convolutional to spiking neural networks. These advances are crucial for deploying neural networks in high-stakes applications such as aerospace and healthcare, where confidence in predictions and explainability of decisions are paramount.
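As one illustration of the uncertainty-quantification theme, the sketch below uses Monte Carlo dropout, a common Bayesian-flavored approximation, to attach a confidence signal to a classifier's outputs. The model, layer sizes, and sample count are illustrative assumptions, not drawn from any specific work surveyed here.

```python
# A minimal sketch of Monte Carlo dropout for output uncertainty, assuming a
# PyTorch classifier; all module and parameter names below are illustrative.
import torch
import torch.nn as nn

class SmallClassifier(nn.Module):
    """Toy feed-forward classifier with a dropout layer."""
    def __init__(self, in_dim=16, hidden=64, n_classes=3, p_drop=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    """Estimate predictive mean and per-class std from stochastic forward passes."""
    model.train()  # keep dropout stochastic at inference time
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )  # shape: (n_samples, batch, n_classes)
    return probs.mean(dim=0), probs.std(dim=0)

model = SmallClassifier()
x = torch.randn(8, 16)               # a batch of 8 illustrative inputs
mean_probs, std_probs = mc_dropout_predict(model, x)
print(mean_probs.argmax(dim=-1))     # predicted classes
print(std_probs.max(dim=-1).values)  # rough per-sample uncertainty signal
```

In practice, a high spread across the sampled forward passes flags predictions that may warrant abstention or human review, which is the kind of output-level reliability signal this line of research targets.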