Dimensional Approximation

Dimensional approximation focuses on efficiently representing high-dimensional data or functions with lower-dimensional surrogates, improving computational tractability and interpretability. Current research emphasizes novel neural architectures such as Kolmogorov-Arnold networks, together with techniques like path signatures and hypernetworks, alongside more classical approaches such as principal component analysis and spline quasi-interpolation. These advances are crucial for tackling challenges in diverse fields, including scientific computing (e.g., solving partial differential equations), financial modeling (e.g., predicting Volterra processes), and machine learning (e.g., improving classifier robustness).
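As a concrete illustration of the classical side, the sketch below shows principal component analysis via SVD: data that nominally lives in 50 dimensions but is generated from 3 latent factors is compressed to a 3-dimensional representation that retains nearly all of its variance. The synthetic data, dimensions, and noise level are illustrative assumptions, not taken from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-dimensional data: 200 samples in 50 dimensions,
# generated from only 3 latent factors plus small noise (illustrative).
latent = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 50))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 50))

# PCA via SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

# Fraction of total variance explained by each component.
explained = S**2 / np.sum(S**2)

# Project onto the top-3 principal directions: the
# lower-dimensional representation of the data.
k = 3
Z = Xc @ Vt[:k].T

print(Z.shape)                         # (200, 3)
print(explained[:k].sum() > 0.99)      # True: 3 components suffice
```

The same compress-then-reconstruct idea underlies the neural approaches mentioned above; they replace the linear projection with learned nonlinear maps.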

Papers