Trading Devil
"Trading Devil" research investigates the vulnerabilities and limitations of machine learning models, with a focus on mitigating adversarial attacks and improving robustness and fairness. Current work emphasizes novel algorithms and architectures, such as diffusion models and transformer networks, to address backdoor attacks, data poisoning, and bias in domains ranging from medical image analysis to natural language processing. This research is crucial for improving the reliability and trustworthiness of AI systems, particularly in sensitive applications where security and fairness are paramount.
Papers
SyncTalk: The Devil is in the Synchronization for Talking Head Synthesis
Ziqiao Peng, Wentao Hu, Yue Shi, Xiangyu Zhu, Xiaomei Zhang, Hao Zhao, Jun He, Hongyan Liu, Zhaoxin Fan
The devil is in the fine-grained details: Evaluating open-vocabulary object detectors for fine-grained understanding
Lorenzo Bianchi, Fabio Carrara, Nicola Messina, Claudio Gennaro, Fabrizio Falchi
The Devil is in the Data: Learning Fair Graph Neural Networks via Partial Knowledge Distillation
Yuchang Zhu, Jintang Li, Liang Chen, Zibin Zheng
The Devil in the Details: Simple and Effective Optical Flow Synthetic Data Generation
Kwon Byung-Ki, Kim Sung-Bin, Tae-Hyun Oh
The Devil is in the Errors: Leveraging Large Language Models for Fine-grained Machine Translation Evaluation
Patrick Fernandes, Daniel Deutsch, Mara Finkelstein, Parker Riley, André F. T. Martins, Graham Neubig, Ankush Garg, Jonathan H. Clark, Markus Freitag, Orhan Firat