Supervised Baseline
Supervised baselines serve as crucial benchmarks in machine learning, providing a standard against which new methods, often unsupervised or semi-supervised, are compared. Current research focuses both on strengthening these baselines and on developing alternatives that require less labeled data, for example by incorporating rough annotations, employing deep image priors, or adopting efficient architectures such as SlowFast networks and LinearGNNs. These advances matter because they address the limitations of fully supervised approaches, particularly in data-scarce scenarios, yielding more efficient and robust models across diverse applications including image processing, natural language processing, and medical imaging.
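To make the role of a baseline concrete, below is a minimal sketch of what a supervised baseline typically looks like in practice. It is illustrative only, assuming scikit-learn and its built-in digits dataset rather than the setup of any paper listed here: a simple fully supervised classifier is trained on complete labels, and its test score becomes the reference that label-efficient methods must beat.

```python
# Minimal sketch of a supervised baseline (illustrative; not taken from
# any of the papers below). A simple classifier trained on fully labeled
# data yields the benchmark score for comparing label-efficient methods.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Fully labeled dataset: the setting a supervised baseline assumes.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A deliberately simple model: the point of a baseline is a fixed,
# reproducible reference score, not state-of-the-art performance.
baseline = LogisticRegression(max_iter=1000)
baseline.fit(X_train, y_train)

print(f"Supervised baseline accuracy: "
      f"{accuracy_score(y_test, baseline.predict(X_test)):.3f}")
```

A semi-supervised or unsupervised method evaluated on the same split would then be reported relative to this accuracy, ideally while consuming far fewer labels.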
Papers
Cross-table Synthetic Tabular Data Detection
G. Charbel N. Kindji (LACODAM), Lina Maria Rojas-Barahona, Elisa Fromont (LACODAM), Tanguy Urvoy
ChatDiT: A Training-Free Baseline for Task-Agnostic Free-Form Chatting with Diffusion Transformers
Lianghua Huang, Wei Wang, Zhi-Fan Wu, Yupeng Shi, Chen Liang, Tong Shen, Han Zhang, Huanzhang Dou, Yu Liu, Jingren Zhou