NL2VIS Benchmark
NL2VIS research focuses on automatically generating data visualizations from natural language descriptions, bridging the gap between human-readable queries and visual data analysis. Current efforts concentrate on building robust, accurate models, often leveraging large language models (LLMs) and techniques such as retrieval-augmented generation to improve performance across diverse datasets and to handle variation in phrasing. Comprehensive benchmarks such as VisEval and nvBench are crucial for evaluating and advancing these models, ultimately making data exploration more intuitive and accessible to a wider range of users.
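To make the task format concrete, the sketch below maps a natural-language query plus column names to a Vega-Lite-style chart specification. This is a minimal illustration only: the keyword heuristic in `nl2vis` is a hypothetical stand-in for the model (systems evaluated on nvBench or VisEval generate the specification with an LLM), and the function and column names are invented for the example.

```python
# Minimal NL2VIS task sketch: NL query + column names -> chart spec.
# The keyword heuristic is a hypothetical stand-in for an LLM; it only
# illustrates the input/output shape of the NL2VIS task.

def nl2vis(query: str, x_field: str, y_field: str) -> dict:
    q = query.lower()
    if "trend" in q or "over time" in q:
        mark = "line"          # temporal queries -> line chart
    elif "compare" in q or "distribution" in q:
        mark = "bar"           # comparisons/distributions -> bar chart
    else:
        mark = "bar"           # default fallback
    # Vega-Lite-style specification (mark + encodings)
    return {
        "mark": mark,
        "encoding": {
            "x": {"field": x_field},
            "y": {"field": y_field, "aggregate": "sum"},
        },
    }

spec = nl2vis("Show the sales trend over time", "month", "sales")
print(spec["mark"])  # line
```

A real system would replace the heuristic with a model call and validate the returned specification against the target grammar, which is essentially what the benchmarks above measure.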