Music Performance
Music performance research currently explores the intersection of human artistry and technological augmentation, focusing on improving both the creation and the experience of musical works. Key areas include developing AI models, such as variational autoencoders and deep convolutional neural networks, to analyze and generate musical performances, often incorporating multimodal data (audio, video, MIDI) and leveraging transfer learning for efficient model training. This research aims to enhance musical expression through tools that offer fine-grained control over synthesis parameters, support score following and alignment, and explain AI-driven decisions within a performance context. Ultimately, these advances seek to empower musicians with innovative tools and to deepen our understanding of musical interpretation and perception.
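To make the score following and alignment task concrete, the sketch below aligns a performed note sequence to its score with dynamic time warping (DTW), a classic technique underlying many score-following systems. It is a minimal illustration, not any specific system from the research above; the pitch sequences and the absolute-difference cost function are hypothetical examples.

```python
def dtw_align(score, performance, cost=lambda a, b: abs(a - b)):
    """Return the total alignment cost and the warping path between
    two sequences (e.g. MIDI pitch or onset-time sequences)."""
    n, m = len(score), len(performance)
    INF = float("inf")
    # D[i][j] = minimal cost of aligning score[:i] with performance[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = cost(score[i - 1], performance[j - 1]) + min(
                D[i - 1][j],      # score note skipped in the performance
                D[i][j - 1],      # extra performed note
                D[i - 1][j - 1],  # match
            )
    # Backtrack through D to recover the alignment path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
        if step == D[i - 1][j - 1]:
            i, j = i - 1, j - 1
        elif step == D[i - 1][j]:
            i -= 1
        else:
            j -= 1
    return D[n][m], list(reversed(path))

# MIDI pitches of a short score and a performance with one wrong note.
score = [60, 62, 64, 65, 67]
performance = [60, 62, 63, 65, 67]
total_cost, path = dtw_align(score, performance)
```

In a real-time score follower the same idea is applied incrementally over incoming audio or MIDI features rather than to complete sequences, but the cost matrix and backtracking shown here are the core of the offline alignment step.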