Tabular Data
Tabular data, ubiquitous across application domains, poses distinct challenges for machine learning because of its structured format and mixed data types. Current research focuses on improving model performance through techniques such as self-supervised learning (e.g., JEPA), generative models (e.g., GANs, VAEs, diffusion models) for data augmentation and synthesis, and the integration of large language models (LLMs) for feature extraction and data generation. These advances aim to overcome limitations of established methods such as gradient boosted decision trees, and to improve accuracy, efficiency, and robustness in applications ranging from medical diagnosis to anomaly detection and scientific simulation.
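For context, here is a minimal sketch of the kind of gradient boosted decision tree baseline referred to above, applied to mixed-type (numeric plus categorical) tabular data. The dataset, column names, and pipeline are illustrative assumptions, not taken from any of the listed papers; the sketch uses scikit-learn's HistGradientBoostingClassifier with an OrdinalEncoder for the categorical column.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OrdinalEncoder

# Synthetic mixed-type table for illustration only (hypothetical columns).
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age": rng.integers(18, 90, size=n),            # numeric feature
    "blood_pressure": rng.normal(120, 15, size=n),  # numeric feature
    "smoker": rng.choice(["yes", "no"], size=n),    # categorical feature
})
# Synthetic binary label derived from the features, purely for the demo.
y = ((df["age"] > 55) & (df["smoker"] == "yes")).astype(int)

X_train, X_test, y_train, y_test = train_test_split(df, y, random_state=0)

# Ordinal-encode the categorical column, pass the numeric columns through,
# then tell the boosted trees which output column is categorical (index 0,
# since the encoded "smoker" column comes first in the transformed array).
preprocess = ColumnTransformer(
    [("cat", OrdinalEncoder(), ["smoker"])],
    remainder="passthrough",
)
model = make_pipeline(
    preprocess,
    HistGradientBoostingClassifier(categorical_features=[0], random_state=0),
)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```

Baselines of this kind are strong on tabular data precisely because they handle heterogeneous feature types with little preprocessing, which is why the papers below benchmark their methods against them.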
Papers
PORTAL: Scalable Tabular Foundation Models via Content-Specific Tokenization
Marco Spinaci, Marek Polewczyk, Johannes Hoffart, Markus C. Kohler, Sam Thelin, Tassilo Klein
TabSeq: A Framework for Deep Learning on Tabular Data via Sequential Ordering
Al Zadid Sultan Bin Habib, Kesheng Wang, Mary-Anne Hartley, Gianfranco Doretto, Donald A. Adjeroh
Targeted synthetic data generation for tabular data via hardness characterization
Tommaso Ferracci, Leonie Tabea Goldmann, Anton Hinel, Francesco Sanna Passino
ERASMO: Leveraging Large Language Models for Enhanced Clustering Segmentation
Fillipe dos Santos Silva, Gabriel Kenzo Kakimoto, Julio Cesar dos Reis, Marcelo S. Reis