Nondeterministic Stack
"Stacking," in machine learning, refers to ensemble methods that sequentially build models, using the outputs of earlier models to inform the training of subsequent ones. Current research focuses on leveraging stacking for improved training efficiency in deep neural networks, enhancing the reasoning capabilities of language models, and boosting the performance of various prediction tasks, including mutagenicity prediction and load forecasting. These advancements are significant because stacking offers a powerful technique for improving model accuracy and efficiency across diverse applications, from drug discovery to resource management.
Papers
[14 paper entries, dated March 16, 2022 through September 27, 2024.]