GNN Framework
Graph Neural Networks (GNNs) are a class of deep learning models designed to analyze graph-structured data, learning representations of nodes and edges for tasks such as node classification and link prediction. Current research focuses on improving GNN efficiency (e.g., through hardware-aware architecture search and optimized computation on GPUs), mitigating bias and improving fairness in predictions, and enhancing expressiveness and generalization by incorporating techniques such as high-frequency information and multi-module architectures. These advances matter because they let GNNs tackle increasingly complex real-world problems across diverse domains, including social networks, healthcare, and combinatorial optimization, while also improving scalability and reliability.
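To make the core idea concrete, below is a minimal sketch (assuming a PyTorch environment) of the message-passing step that underlies most GNN layers: each node aggregates its neighbors' features and transforms the result with a learned linear map. The class name SimpleGCNLayer and the toy graph are illustrative only and are not taken from any of the papers listed here.

```python
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One graph-convolution layer: each node averages its neighbors'
    features (plus its own via a self-loop) and applies a linear map."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Add self-loops so each node keeps its own features.
        adj_hat = adj + torch.eye(adj.size(0))
        # Row-normalize so aggregation is a mean over the neighborhood.
        deg = adj_hat.sum(dim=1, keepdim=True)
        agg = (adj_hat / deg) @ x
        return torch.relu(self.linear(agg))

# Toy graph: 4 nodes with 3-dim features, undirected edges 0-1, 1-2, 2-3.
x = torch.randn(4, 3)
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 0.]])
layer = SimpleGCNLayer(in_dim=3, out_dim=8)
print(layer(x, adj).shape)  # torch.Size([4, 8])
```

Stacking such layers lets information propagate over multi-hop neighborhoods; the research directions above (hardware-aware architecture search, fairness-aware training, richer spectral information) all modify or extend this basic aggregation-and-transform pattern.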
Papers
Disentangling, Amplifying, and Debiasing: Learning Disentangled Representations for Fair Graph Neural Networks
Yeon-Chang Lee, Hojung Shin, Sang-Wook Kim
HGNAS: Hardware-Aware Graph Neural Architecture Search for Edge Devices
Ao Zhou, Jianlei Yang, Yingjie Qi, Tong Qiao, Yumeng Shi, Cenlin Duan, Weisheng Zhao, Chunming Hu