Paper ID: 2411.11768
AdaptLIL: A Gaze-Adaptive Visualization for Ontology Mapping
Nicholas Chow, Bo Fu
This paper showcases AdaptLIL, a real-time adaptive link-indented list ontology mapping visualization that uses eye gaze as its primary input source. By combining real-time systems, deep learning, and web application development, the system uniquely tailors graphical overlays (adaptations) to the pairwise mappings of link-indented list ontology visualizations for individual users, based solely on their eye gaze.
Submitted: Nov 18, 2024
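
The listing gives no implementation details, but the described pipeline (streamed gaze input, a learned per-user decision, and overlay adaptations on mapping rows in a web front end) can be illustrated with a minimal TypeScript sketch. All names, types, and thresholds below (GazeSample, MappingRow, needsAdaptation, the dwell-time cutoff) are assumptions for illustration, not the authors' actual API; the simple threshold merely stands in for the paper's deep-learning model.

```typescript
// Hypothetical sketch of a gaze-adaptive overlay loop for a link-indented
// list visualization. Names and thresholds are illustrative assumptions.

interface GazeSample {
  x: number;         // screen x coordinate in pixels
  y: number;         // screen y coordinate in pixels
  timestamp: number; // milliseconds since the stream started
}

interface MappingRow {
  id: string;      // DOM id of the row showing one pairwise mapping
  bounds: DOMRect; // on-screen bounding box of that row
  dwellMs: number; // accumulated gaze dwell time on the row
}

// Accumulate dwell time per mapping row from one gaze sample.
function accumulateDwell(rows: MappingRow[], sample: GazeSample, frameMs: number): void {
  for (const row of rows) {
    const inside =
      sample.x >= row.bounds.left && sample.x <= row.bounds.right &&
      sample.y >= row.bounds.top && sample.y <= row.bounds.bottom;
    if (inside) row.dwellMs += frameMs;
  }
}

// Stand-in for the trained model: a dwell-time threshold decides whether
// this user currently needs an adaptation on this mapping.
function needsAdaptation(row: MappingRow): boolean {
  const DWELL_THRESHOLD_MS = 1500; // assumed value for illustration
  return row.dwellMs > DWELL_THRESHOLD_MS;
}

// Apply or remove a highlight overlay on each mapping row in the DOM.
function updateOverlays(rows: MappingRow[]): void {
  for (const row of rows) {
    const el = document.getElementById(row.id);
    if (!el) continue;
    el.classList.toggle("gaze-highlight", needsAdaptation(row));
  }
}
```

In a deployment along the lines the abstract describes, accumulateDwell and updateOverlays would run on every frame of the eye-tracker stream, and the threshold check would be replaced by inference from the per-user deep-learning model.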