Restricted Boltzmann Machine
Restricted Boltzmann Machines (RBMs) are probabilistic neural networks for unsupervised learning that model the probability distribution of their input data. An RBM consists of a layer of visible units and a layer of hidden units connected by symmetric weights, with no connections within a layer. Current research focuses on improving training efficiency through novel weight-initialization methods and divergence-based learning objectives beyond the traditional Kullback-Leibler divergence, as well as on making RBM outputs more interpretable and applying them in fields such as quantum chemistry and anomaly detection. Their simple architecture and ability to model complex data make RBMs valuable tools for data generation, clustering, and feature extraction, contributing to both theoretical understanding and practical advances in machine learning.
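To make the training loop concrete, below is a minimal sketch of a binary-binary RBM trained with one-step contrastive divergence (CD-1), the standard stochastic approximation to Kullback-Leibler divergence learning. The class and the toy data are illustrative assumptions, not drawn from the papers listed here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal binary RBM trained with one-step contrastive divergence (CD-1)."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))  # weights
        self.b = np.zeros(n_visible)  # visible biases
        self.c = np.zeros(n_hidden)   # hidden biases
        self.lr = lr

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.c)  # P(h=1 | v)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.b)  # P(v=1 | h)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_step(self, v0):
        # Positive phase: clamp the data, sample the hidden units.
        ph0, h0 = self.sample_h(v0)
        # Negative phase: one Gibbs step back to a reconstruction.
        pv1, v1 = self.sample_v(h0)
        ph1, _ = self.sample_h(v1)
        # CD-1 gradient approximation: <v h>_data - <v h>_reconstruction.
        batch = v0.shape[0]
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / batch
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (ph0 - ph1).mean(axis=0)
        return np.mean((v0 - pv1) ** 2)  # reconstruction error as a rough monitor

# Toy usage: learn a distribution concentrated on two 6-bit prototype patterns.
patterns = np.array([[1, 1, 1, 0, 0, 0],
                     [0, 0, 0, 1, 1, 1]], dtype=float)
data = patterns[rng.integers(0, 2, size=64)]
rbm = RBM(n_visible=6, n_hidden=4)
for _ in range(200):
    err = rbm.cd1_step(data)
print("final reconstruction error:", err)
```

The reconstruction error printed here is only a training heuristic; CD-1 does not follow the exact KL gradient, which is one motivation for the alternative divergence objectives studied in the papers below.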
Papers
Dataset-Free Weight-Initialization on Restricted Boltzmann Machine
Muneki Yasuda, Ryosuke Maeno, Chako Takahashi
Ratio Divergence Learning Using Target Energy in Restricted Boltzmann Machines: Beyond Kullback-Leibler Divergence Learning
Yuichi Ishida, Yuma Ichikawa, Aki Dote, Toshiyuki Miyazawa, Koji Hukushima