Random Transformation
Random transformations are widely studied as a way to enhance the robustness and capabilities of machine learning models, particularly deep learning architectures such as transformers. Current research applies them to purifying training data against poisoning attacks, making large vision-language models more reliable by mitigating hallucinations, and accelerating the convergence of optimization algorithms such as Adam and RMSProp. These techniques promise better generalization, reliability, and efficiency across applications ranging from image processing and natural language processing to anomaly detection.
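The common thread is sampling a simple transformation at random and applying it to the input, either during training (for purification-style defenses) or at inference (for hallucination mitigation, as in RITUAL below). The following Python sketch illustrates the basic mechanism with torchvision; the specific transform pool, parameters, and file names are illustrative assumptions, not the configuration used in any of the listed papers.

import random
from PIL import Image
import torchvision.transforms as T

# Illustrative pool of random image transformations. The exact set and
# parameters are assumptions for this sketch, not taken from the papers.
TRANSFORM_POOL = [
    T.RandomHorizontalFlip(p=1.0),
    T.RandomVerticalFlip(p=1.0),
    T.RandomRotation(degrees=15),
    T.ColorJitter(brightness=0.3, contrast=0.3),
    T.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
]

def random_transform(image: Image.Image) -> Image.Image:
    """Apply one transformation sampled uniformly from the pool."""
    transform = random.choice(TRANSFORM_POOL)
    return transform(image)

if __name__ == "__main__":
    # "example.jpg" is a placeholder input image.
    img = Image.open("example.jpg").convert("RGB")
    augmented = random_transform(img)
    augmented.save("example_transformed.jpg")

In inference-time uses, a model is typically run on both the original and the randomly transformed input, and the outputs are combined or contrasted; in train-time defenses, the randomness helps disrupt poisoned patterns embedded in the data.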
Papers
PureGen: Universal Data Purification for Train-Time Poison Defense via Generative Model Dynamics
Sunay Bhat, Jeffrey Jiang, Omead Pooladzandi, Alexander Branch, Gregory Pottie
RITUAL: Random Image Transformations as a Universal Anti-hallucination Lever in LVLMs
Sangmin Woo, Jaehyuk Jang, Donguk Kim, Yubin Choi, Changick Kim