Meta Model
Meta-modeling builds higher-level models that represent whole classes of systems or tasks rather than individual instances. Current research improves meta-model performance through probabilistic frameworks, recurrent patching for long sequences, and transformer architectures, with applications in system identification and LLM behavior analysis. These techniques target efficient, accurate prediction in settings with limited data or computationally expensive simulations, and they support better model interpretability and more reliable evaluation metrics for generation tasks. The broader impact spans diverse fields, from accelerating virtual prototyping to mitigating risks from large language model hallucinations.
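The core idea of replacing a computationally expensive simulation with a cheap higher-level model can be illustrated with a minimal sketch. This is not taken from any of the listed papers: the `expensive_simulation` function and the polynomial surrogate are hypothetical stand-ins, chosen only to show the pattern of sampling an expensive model sparsely and then querying a cheap fitted approximation.

```python
import math

def expensive_simulation(x):
    # Hypothetical stand-in: in practice this could be a physics solver
    # that takes minutes per call.
    return math.sin(3 * x) + 0.5 * x

def fit_polynomial_surrogate(xs, ys, degree=4):
    # Least-squares polynomial fit via the normal equations
    # (A^T A) c = A^T y, where A is the Vandermonde matrix of xs.
    n = degree + 1
    ata = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(ata[r][col]))
        ata[col], ata[pivot] = ata[pivot], ata[col]
        aty[col], aty[pivot] = aty[pivot], aty[col]
        for row in range(col + 1, n):
            f = ata[row][col] / ata[col][col]
            for k in range(col, n):
                ata[row][k] -= f * ata[col][k]
            aty[row] -= f * aty[col]
    # Back substitution.
    coeffs = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = aty[row] - sum(ata[row][k] * coeffs[k] for k in range(row + 1, n))
        coeffs[row] = s / ata[row][row]
    return lambda x: sum(c * x ** i for i, c in enumerate(coeffs))

# Sample the expensive model at a handful of points only...
train_x = [i / 10 for i in range(11)]
train_y = [expensive_simulation(x) for x in train_x]
# ...then query the cheap surrogate as often as needed.
surrogate = fit_polynomial_surrogate(train_x, train_y)

max_err = max(abs(surrogate(x) - expensive_simulation(x)) for x in train_x)
```

Real meta-models in the surveyed literature use far richer function classes (probabilistic models, transformers), but the economics are the same: a fixed budget of expensive evaluations buys an approximation that is then essentially free to query.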
Papers
Meta-Models: An Architecture for Decoding LLM Behaviors Through Interpreted Embeddings and Natural Language
Anthony Costarelli, Mat Allen, Severin Field
MetaMetrics: Calibrating Metrics For Generation Tasks Using Human Preferences
Genta Indra Winata, David Anugraha, Lucky Susanto, Garry Kuwanto, Derry Tanti Wijaya