Multimodal Games
Multimodal games draw on diverse data sources such as video, audio, and text to create richer, more engaging gaming experiences. Current research focuses on models that understand and generate game commentary from multimodal inputs, on improving AI agent performance through multimodal instruction tuning, and on building immersive virtual reality experiences from multi-sensor data. This work matters for advancing AI's ability to understand complex, dynamic situations, for improving game design and accessibility, and for creating novel educational tools.
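To make the modeling idea concrete, here is a minimal sketch of late fusion over video, audio, and text features, the kind of architecture a game-commentary model might use. Every name, dimension, and design choice below is an illustrative assumption rather than a specific published system.

```python
# Minimal late-fusion sketch: project each modality into a shared space,
# concatenate, and score a vocabulary for next-token commentary prediction.
# All dimensions and module names are hypothetical.
import torch
import torch.nn as nn

class LateFusionCommentator(nn.Module):
    def __init__(self, video_dim=512, audio_dim=128, text_dim=256,
                 hidden=256, vocab_size=10000):
        super().__init__()
        self.video_proj = nn.Linear(video_dim, hidden)
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.text_proj = nn.Linear(text_dim, hidden)
        self.fuse = nn.Sequential(nn.Linear(3 * hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, video_feat, audio_feat, text_feat):
        # Each input: (batch, modality_dim) pooled clip-level features.
        fused = torch.cat([
            self.video_proj(video_feat),
            self.audio_proj(audio_feat),
            self.text_proj(text_feat),
        ], dim=-1)
        return self.head(self.fuse(fused))  # (batch, vocab_size) logits

# Usage with random stand-in features for a 2-sample batch.
model = LateFusionCommentator()
logits = model(torch.randn(2, 512), torch.randn(2, 128), torch.randn(2, 256))
print(logits.shape)  # torch.Size([2, 10000])
```

Late fusion is only one option; many systems instead interleave modality tokens in a single transformer sequence, but concatenation of pooled features is the simplest baseline to state.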