Structured Output
Structured output in machine learning concerns generating outputs that conform to predefined formats and schemas, which improves the reliability and usability of AI systems. Current research focuses on enabling large language models (LLMs) to produce structured outputs such as JSON, code, or tables, often using techniques like retrieval-augmented generation (RAG) and constrained decoding to improve accuracy and efficiency. This line of work matters for deploying LLMs in real-world applications that require precise, machine-readable results, and it addresses challenges such as hallucination and bias that undermine the trustworthiness of AI systems.
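To make the constrained-decoding idea concrete, here is a minimal, self-contained sketch of how a decoder can be restricted to schema-conforming JSON: at each step, candidate tokens that would push the partial output outside the allowed grammar are masked before the next token is chosen. The vocabulary, the stub scoring function, and the tiny "grammar" (an explicit list of valid outputs standing in for a JSON-schema-derived automaton) are all hypothetical placeholders, not any specific library's API.

# A minimal sketch of constrained decoding for structured (JSON) output.
# The "model" is a stub returning random scores; in practice these would be
# next-token logits from an LLM. All names here are illustrative assumptions.
import random

VOCAB = ['{', '}', '"answer"', ':', ' ', '"yes"', '"no"', '"maybe"', 'hello']

# Tiny stand-in for a grammar derived from a JSON schema: the complete
# strings the decoder is allowed to produce.
VALID_OUTPUTS = ['{"answer": "yes"}', '{"answer": "no"}']

def is_valid_prefix(text: str) -> bool:
    """True if `text` can still be extended into a schema-conforming output."""
    return any(target.startswith(text) for target in VALID_OUTPUTS)

def stub_logits(prefix: str) -> dict:
    """Placeholder for an LLM's next-token scores (random here)."""
    return {tok: random.random() for tok in VOCAB}

def constrained_decode(max_steps: int = 20) -> str:
    out = ""
    for _ in range(max_steps):
        scores = stub_logits(out)
        # Mask tokens that would break the grammar, then pick greedily.
        allowed = {t: s for t, s in scores.items() if is_valid_prefix(out + t)}
        if not allowed:
            break
        out += max(allowed, key=allowed.get)
        if out in VALID_OUTPUTS:  # stop once a complete object is formed
            break
    return out

if __name__ == "__main__":
    print(constrained_decode())  # e.g. {"answer": "yes"}

Production systems apply the same masking idea to real token logits, typically compiling a JSON schema or grammar into an automaton that determines which tokens are admissible at each decoding step.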
Papers
LongWriter: Unleashing 10,000+ Word Generation from Long Context LLMs
Yushi Bai, Jiajie Zhang, Xin Lv, Linzhi Zheng, Siqi Zhu, Lei Hou, Yuxiao Dong, Jie Tang, Juanzi Li
Automated Software Tool for Compressing Optical Images with Required Output Quality
Sergey Krivenko, Alexander Zemliachenko, Vladimir Lukin, Alexander Zelensky
Feature learning in finite-width Bayesian deep linear networks with multiple outputs and convolutional layers
Federico Bassetti, Marco Gherardi, Alessandro Ingrosso, Mauro Pastore, Pietro Rotondo
LLM as a Scorer: The Impact of Output Order on Dialogue Evaluation
Yi-Pei Chen, KuanChao Chu, Hideki Nakayama