Machine Learning Models
Machine learning models are systems that learn from data to make predictions or decisions without being explicitly programmed for each task. Current research emphasizes improving model accuracy, interpretability, and robustness, focusing on architectures such as deep neural networks, decision-tree ensembles, and transformer models, as well as on decentralized learning and techniques for mitigating biases and vulnerabilities. These advances matter for a wide range of applications, from optimizing resource management (e.g., smart irrigation) to improving healthcare diagnostics and strengthening the security and trustworthiness of AI systems.
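To make the accuracy/interpretability trade-off mentioned above concrete, here is a minimal, hedged sketch comparing a transparent linear baseline with a decision-tree ensemble on synthetic data. The library (scikit-learn), the synthetic dataset, and all hyperparameters are illustrative assumptions and are not taken from any of the papers listed below.

# Illustrative sketch only: transparent baseline vs. tree ensemble.
# Assumes scikit-learn is installed; data and settings are arbitrary.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic binary classification task stands in for real application data.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

models = {
    "logistic regression (transparent baseline)": LogisticRegression(max_iter=1000),
    "random forest (decision-tree ensemble)": RandomForestClassifier(
        n_estimators=200, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: test accuracy = {acc:.3f}")

On a simple task like this, the transparent model is often competitive with the ensemble, which is the kind of observation motivating work such as "Stop overkilling simple tasks with black-box models and use transparent models instead" below.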
Papers
Machine Learning for Synthetic Data Generation: A Review
Yingzhou Lu, Minjie Shen, Huazheng Wang, Xiao Wang, Capucine van Rechem, Tianfan Fu, Wenqi Wei
Revisit the Algorithm Selection Problem for TSP with Spatial Information Enhanced Graph Neural Networks
Ya Song, Laurens Bliek, Yingqian Zhang
Participatory Personalization in Classification
Hailey Joren, Chirag Nagpal, Katherine Heller, Berk Ustun
Stop overkilling simple tasks with black-box models and use transparent models instead
Matteo Rizzo, Matteo Marcuzzo, Alessandro Zangari, Andrea Gasparetto, Andrea Albarelli
Multipath agents for modular multitask ML systems
Andrea Gesmundo
A Scalable and Efficient Iterative Method for Copying Machine Learning Classifiers
Nahuel Statuto, Irene Unceta, Jordi Nin, Oriol Pujol
Vertical Federated Learning: Taxonomies, Threats, and Prospects
Qun Li, Chandra Thapa, Lawrence Ong, Yifeng Zheng, Hua Ma, Seyit A. Camtepe, Anmin Fu, Yansong Gao
Example-Based Explainable AI and its Application for Remote Sensing Image Classification
Shin-nosuke Ishikawa, Masato Todo, Masato Taki, Yasunobu Uchiyama, Kazunari Matsunaga, Peihsuan Lin, Taiki Ogihara, Masao Yasui
Partitioning Distributed Compute Jobs with Reinforcement Learning and Graph Neural Networks
Christopher W. F. Parsonson, Zacharaya Shabka, Alessandro Ottino, Georgios Zervas
An investigation of challenges encountered when specifying training data and runtime monitors for safety critical ML applications
Hans-Martin Heyn, Eric Knauss, Iswarya Malleswaran, Shruthi Dinakaran
Demystifying Disagreement-on-the-Line in High Dimensions
Donghwan Lee, Behrad Moniri, Xinmeng Huang, Edgar Dobriban, Hamed Hassani
Fairness and Accuracy under Domain Generalization
Thai-Hoang Pham, Xueru Zhang, Ping Zhang
MOSAIC, a comparison framework for machine learning models
Mattéo Papin, Yann Beaujeault-Taudière, Frédéric Magniette
Investigating Feature and Model Importance in Android Malware Detection: An Implemented Survey and Experimental Comparison of ML-Based Methods
Ali Muzaffar, Hani Ragab Hassen, Hind Zantout, Michael A Lones
ChatGPT or Human? Detect and Explain. Explaining Decisions of Machine Learning Model for Detecting Short ChatGPT-generated Text
Sandra Mitrović, Davide Andreoletti, Omran Ayoub
Robust Meta Learning for Image based tasks
Penghao Jiang, Xin Ke, ZiFeng Wang, Chunxi Li
Bagging Provides Assumption-free Stability
Jake A. Soloff, Rina Foygel Barber, Rebecca Willett