Vector-Valued
Vector-valued data, in which multiple correlated outputs or features must be modeled together, is a central theme in modern machine learning, with research focusing on algorithms and models that handle the inherent complexity of such data. Current efforts concentrate on adapting established techniques like decision trees, Q-learning, and Gaussian processes to vector-valued settings, alongside exploring novel architectures such as vector-valued neural networks and capsule networks. These advances are crucial for diverse applications, including multi-objective optimization, multi-task learning, and the analysis of complex systems involving many interdependent variables.
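As a minimal illustration of the vector-valued setting described above (not the method of any paper listed below), multi-output linear regression fits all output dimensions jointly rather than as separate scalar regressions. The sketch below uses NumPy's least-squares solver, which accepts a matrix right-hand side; the data and dimensions are made up for the example.

```python
import numpy as np

# Hypothetical example: least squares with a vector-valued target.
rng = np.random.default_rng(0)

n, d, m = 200, 5, 3                    # samples, input features, output dims
X = rng.normal(size=(n, d))            # inputs
W_true = rng.normal(size=(d, m))       # ground-truth weights (d -> m outputs)
Y = X @ W_true + 0.01 * rng.normal(size=(n, m))  # m correlated outputs

# np.linalg.lstsq solves for all m output columns simultaneously,
# sharing the factorization of X across the outputs.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

print(W_hat.shape)  # one weight matrix covering every output dimension
```

Solving the outputs in one call is the simplest form of the shared computation that multi-task and multi-output models exploit more aggressively.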
Papers
Variation Spaces for Multi-Output Neural Networks: Insights on Multi-Task Learning and Network Compression
Joseph Shenouda, Rahul Parhi, Kangwook Lee, Robert D. Nowak
Small Total-Cost Constraints in Contextual Bandits with Knapsacks, with Application to Fairness
Evgenii Chzhen, Christophe Giraud, Zhen Li, Gilles Stoltz