Sequence Prior
Sequence priors are probabilistic models that encode assumptions about the structure or regularity of sequential data, with the goal of making learning algorithms more efficient and robust. Current research develops and applies these priors across diverse fields, including reinforcement learning, video analysis, and molecular structure determination, often using autoregressive models and information-theoretic objectives to encode the assumed structure. This work has produced concrete gains, such as faster learning in reinforcement learning and higher-resolution cryo-EM reconstructions, demonstrating the broad applicability of sequence priors across scientific domains.
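As a loose illustration of the idea, the sketch below shows one simple way an autoregressive sequence prior can be fit to data and used to score new sequences: a first-order Markov model estimated by counting transitions. The class and method names are hypothetical and not taken from any of the cited papers; this is a minimal example of the general technique, not an implementation of a specific method.

```python
# Minimal sketch of an autoregressive sequence prior over discrete tokens:
# a first-order Markov model p(x_1..x_T) = p(x_1) * prod_t p(x_t | x_{t-1}),
# fit by counting with additive smoothing. Names here are illustrative only.
import numpy as np

class MarkovSequencePrior:
    def __init__(self, vocab_size, smoothing=1.0):
        self.vocab_size = vocab_size
        # Additive (Laplace) smoothing so unseen transitions keep nonzero mass.
        self.init_counts = np.full(vocab_size, smoothing)
        self.trans_counts = np.full((vocab_size, vocab_size), smoothing)

    def fit(self, sequences):
        # Accumulate initial-state and transition counts from training sequences.
        for seq in sequences:
            self.init_counts[seq[0]] += 1
            for prev, cur in zip(seq[:-1], seq[1:]):
                self.trans_counts[prev, cur] += 1
        return self

    def log_prob(self, seq):
        # Normalize counts into probabilities, then sum log-probabilities
        # along the sequence (the autoregressive factorization).
        init_p = self.init_counts / self.init_counts.sum()
        trans_p = self.trans_counts / self.trans_counts.sum(axis=1, keepdims=True)
        lp = np.log(init_p[seq[0]])
        for prev, cur in zip(seq[:-1], seq[1:]):
            lp += np.log(trans_p[prev, cur])
        return lp

# Usage: the fitted prior assigns higher log-probability to sequences that
# match the assumed regularity, so log_prob can serve as a regularizer or
# bonus term in a downstream learning objective.
prior = MarkovSequencePrior(vocab_size=3).fit([[0, 0, 1, 1, 2], [0, 1, 1, 2, 2]])
print(prior.log_prob([0, 0, 1]))   # relatively likely under the fitted prior
print(prior.log_prob([2, 0, 2]))   # less likely: contains rare transitions
```

In practice, the simple count-based model above would typically be replaced by a learned autoregressive network, but the interface stays the same: the prior exposes a log-probability over sequences that can be combined with a task objective.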