Physics-Based Metaverse Synthesis
Physics-based metaverse synthesis focuses on creating highly detailed virtual environments for applications such as robotics, healthcare, and autonomous driving, using physics engines to simulate realistic object interactions and behaviors. Current research emphasizes efficient data generation and compression techniques, often employing deep learning models (e.g., generative models, graph convolutional networks) for tasks such as 3D human pose estimation, scene reconstruction, and resource allocation in multi-user environments. This approach addresses the limitations of real-world data acquisition by generating large, diverse datasets for training AI models, ultimately improving the accuracy and efficiency of metaverse applications across various domains.
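To make the data-generation idea above concrete, here is a minimal, hypothetical Python sketch of physics-based synthetic dataset creation. It is not taken from any of the listed papers: the functions `simulate_trajectory` and `generate_dataset` are invented for illustration, and a real pipeline would use a full physics engine (rigid-body dynamics, contacts, rendering) rather than this simple point-mass integrator.

```python
import numpy as np

def simulate_trajectory(initial_pos, initial_vel, dt=0.01, steps=200,
                        gravity=np.array([0.0, 0.0, -9.81]), restitution=0.6):
    """Integrate a point mass under gravity with a simple ground bounce.

    Returns an array of shape (steps, 3) with positions over time; each
    trajectory is one synthetic sample in a physics-generated training set.
    (Illustrative stand-in for a full physics engine.)
    """
    pos = np.asarray(initial_pos, dtype=float)
    vel = np.asarray(initial_vel, dtype=float)
    trajectory = np.empty((steps, 3))
    for t in range(steps):
        # Semi-implicit Euler step: update velocity first, then position.
        vel = vel + gravity * dt
        pos = pos + vel * dt
        # Crude ground-plane collision: clamp height and reflect the
        # vertical velocity with a damping (restitution) factor.
        if pos[2] < 0.0:
            pos[2] = 0.0
            vel[2] = -vel[2] * restitution
        trajectory[t] = pos
    return trajectory

def generate_dataset(num_samples=1000, rng=None):
    """Sample random launch conditions and simulate each one.

    The result is a (num_samples, steps, 3) tensor of synthetic trajectories,
    a stand-in for the large, diverse datasets mentioned in the summary.
    """
    rng = rng or np.random.default_rng(0)
    samples = []
    for _ in range(num_samples):
        pos = rng.uniform([-1.0, -1.0, 0.5], [1.0, 1.0, 2.0])
        vel = rng.uniform([-2.0, -2.0, 0.0], [2.0, 2.0, 4.0])
        samples.append(simulate_trajectory(pos, vel))
    return np.stack(samples)

if __name__ == "__main__":
    data = generate_dataset(num_samples=10)
    print(data.shape)  # (10, 200, 3)
```

Under these assumptions, the generated tensor could feed a downstream model (e.g., a pose or trajectory predictor) exactly as real captured data would, which is the point of physics-based synthesis.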
Papers
ResLearn: Transformer-based Residual Learning for Metaverse Network Traffic Prediction
Yoga Suhas Kuruba Manjunath, Mathew Szymanowski, Austin Wissborn, Mushu Li, Lian Zhao, Xiao-Ping Zhang
Discern-XR: An Online Classifier for Metaverse Network Traffic
Yoga Suhas Kuruba Manjunath, Austin Wissborn, Mathew Szymanowski, Mushu Li, Lian Zhao, Xiao-Ping Zhang