Neural Network Output
Neural network output analysis focuses on understanding and improving the reliability, interpretability, and usability of predictions generated by deep learning models. Current research emphasizes uncertainty quantification, constraint satisfaction in outputs, and methods for improving the faithfulness and robustness of predictions. These methods often employ techniques such as gradient descent, simulated annealing, and Bayesian approaches, applied across architectures including convolutional and spiking neural networks. Such advances are crucial for deploying neural networks in high-stakes applications like aerospace and healthcare, where confidence in predictions and explainability of decisions are paramount.
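One widely used uncertainty-quantification approach is the deep ensemble: run several independently trained models on the same input and treat the spread of their predictions as a confidence signal. The sketch below is purely illustrative and not taken from any cited paper; the `ensemble_predict` helper and the toy linear "models" are hypothetical stand-ins for trained networks.

```python
import statistics

def ensemble_predict(models, x):
    """Return the ensemble mean prediction and its standard deviation.

    The standard deviation across ensemble members serves as a simple
    (epistemic) uncertainty estimate: large spread = low confidence.
    """
    preds = [m(x) for m in models]
    mean = statistics.fmean(preds)
    std = statistics.stdev(preds)
    return mean, std

# Toy ensemble: three scalar "models" with slightly different weights,
# standing in for independently trained networks.
models = [lambda x, w=w: w * x for w in (0.9, 1.0, 1.1)]

mean, std = ensemble_predict(models, 2.0)
# mean is the point prediction; std flags how much the members disagree.
```

In a real deployment, a large `std` relative to `mean` would be a cue to defer the decision to a human or a fallback system, which is exactly the behavior high-stakes domains require.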
Papers
Eleven papers, dated December 19, 2021 through November 7, 2022.