Local Loss
Local loss methods in deep learning aim to improve training efficiency and biological plausibility by decentralizing the learning process: smaller, localized parts of the network are each optimized against their own objective rather than a single global loss function. Current research explores several approaches, including forward-forward algorithms (which avoid backpropagation entirely), distributed personalized learning schemes, and hybrid methods that combine local and global loss functions, often within novel architectures such as LocalMixer. These techniques offer potential advantages in resource-constrained settings such as federated learning, and may lead to more robust, biologically inspired training paradigms for neural networks.
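The core mechanic can be illustrated with a minimal NumPy sketch: a network split into two blocks, each paired with its own small linear head and trained only on that head's local loss, with the second block consuming a detached copy of the first block's output so no gradient crosses the block boundary. The toy task, layer sizes, and all variable names here are illustrative assumptions, not taken from any of the papers above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative): 2D points; the label is whether the two
# coordinates share a sign -- an XOR-like task that needs a hidden layer.
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

def init(n_in, n_out):
    return rng.normal(scale=0.5, size=(n_in, n_out)), np.zeros(n_out)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# Two blocks; each has a hidden layer plus its own local linear head.
W1, b1 = init(2, 16);  H1, c1 = init(16, 1)
W2, b2 = init(16, 16); H2, c2 = init(16, 1)

lr, losses = 0.5, []
for _ in range(300):
    # Block 1: forward pass, local BCE loss; gradients stay in the block.
    a1 = np.maximum(0.0, X @ W1 + b1)
    p1 = sigmoid(a1 @ H1 + c1)
    g1 = (p1 - y) / len(X)                  # dBCE/dlogits for block 1's head
    da1 = (g1 @ H1.T) * (a1 > 0)
    W1 -= lr * (X.T @ da1); b1 -= lr * da1.sum(0)
    H1 -= lr * (a1.T @ g1); c1 -= lr * g1.sum(0)

    # Block 2: consumes a *detached* copy of block 1's activations, so no
    # gradient ever propagates back across the block boundary.
    x2 = a1.copy()
    a2 = np.maximum(0.0, x2 @ W2 + b2)
    p2 = sigmoid(a2 @ H2 + c2)
    g2 = (p2 - y) / len(X)
    da2 = (g2 @ H2.T) * (a2 > 0)
    W2 -= lr * (x2.T @ da2); b2 -= lr * da2.sum(0)
    H2 -= lr * (a2.T @ g2); c2 -= lr * g2.sum(0)

    # Track block 2's local loss (clipped to keep log finite).
    p2c = np.clip(p2, 1e-7, 1 - 1e-7)
    losses.append(float(-(y * np.log(p2c) + (1 - y) * np.log(1 - p2c)).mean()))

acc = float(((p2 > 0.5) == (y > 0.5)).mean())
```

Because each block's update uses only its own head's error signal, the blocks could in principle be trained on separate devices that exchange activations but never gradients, which is the property that makes such schemes attractive for federated and resource-constrained settings.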