Kolmogorov-Arnold Networks
Kolmogorov-Arnold Networks (KANs) are a neural network architecture that places learnable activation functions on edges rather than nodes, offering a potential alternative to traditional Multi-Layer Perceptrons (MLPs). Current research focuses on improving KAN efficiency and accuracy through variants such as SincKANs and equivariant KANs (EKANs), and on applying them to diverse fields including image processing, function approximation, and solving partial differential equations. The significance of KANs lies in their potential for enhanced interpretability and performance on specific tasks, although comparisons with MLPs show that the advantage varies with the application and dataset characteristics.
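The edge-based design can be made concrete with a minimal sketch: in a KAN layer, every edge (i, j) carries its own learnable 1-D function, and each output node simply sums its incoming edge activations. The sketch below is illustrative only; it parameterizes the edge functions with a Gaussian radial-basis expansion for simplicity, whereas the original KAN formulation uses B-splines with a residual base function. All class and parameter names here are hypothetical.

```python
import numpy as np

class KANLayer:
    """Illustrative Kolmogorov-Arnold layer: each edge (i, j) has its
    own learnable 1-D function phi_{j,i}, here parameterized by a small
    Gaussian radial-basis expansion (an assumption for this sketch;
    actual KAN implementations typically use B-splines)."""

    def __init__(self, in_dim, out_dim, n_basis=8, rng=None):
        rng = rng or np.random.default_rng(0)
        # Shared grid of basis centers on [-1, 1].
        self.centers = np.linspace(-1.0, 1.0, n_basis)
        self.width = 2.0 / (n_basis - 1)
        # One coefficient vector per edge: shape (out_dim, in_dim, n_basis).
        # These coefficients are the trainable parameters of the layer.
        self.coef = rng.normal(scale=0.1, size=(out_dim, in_dim, n_basis))

    def __call__(self, x):
        # x: (batch, in_dim) -> basis values: (batch, in_dim, n_basis)
        z = (x[..., None] - self.centers) / self.width
        basis = np.exp(-0.5 * z**2)
        # phi[b, o, i] = sum_k coef[o, i, k] * basis[b, i, k]
        # i.e. evaluate each edge's learned 1-D function on its input.
        phi = np.einsum('oik,bik->boi', self.coef, basis)
        # Output node j just sums its incoming edges: no fixed nodewise
        # nonlinearity, in contrast to an MLP.
        return phi.sum(axis=-1)  # (batch, out_dim)

layer = KANLayer(in_dim=3, out_dim=2)
x = np.random.default_rng(1).uniform(-1, 1, size=(4, 3))
out = layer(x)
print(out.shape)  # (4, 2)
```

The contrast with an MLP is visible in `__call__`: an MLP applies one fixed activation at each node after a linear map, while here the nonlinearity lives on every edge and the nodes only sum.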
Papers
U-KAN Makes Strong Backbone for Medical Image Segmentation and Generation
Chenxin Li, Xinyu Liu, Wuyang Li, Cheng Wang, Hengyu Liu, Yixuan Yuan
A comprehensive and FAIR comparison between MLP and KAN representations for differential equations and operator networks
Khemraj Shukla, Juan Diego Toscano, Zhicheng Wang, Zongren Zou, George Em Karniadakis
Leveraging KANs For Enhanced Deep Koopman Operator Discovery
George Nehma, Madhur Tiwari
Kolmogorov-Arnold Networks for Time Series: Bridging Predictive Power and Interpretability
Kunpeng Xu, Lifei Chen, Shengrui Wang
A Temporal Kolmogorov-Arnold Transformer for Time Series Forecasting
Remi Genet, Hugo Inzirillo
ReLU-KAN: New Kolmogorov-Arnold Networks that Only Need Matrix Addition, Dot Multiplication, and ReLU
Qi Qiu, Tao Zhu, Helin Gong, Liming Chen, Huansheng Ning