Shield Machine
In recent research, "shield" refers to a broad class of AI-based systems designed to improve the safety, security, and robustness of other systems across a range of applications. Current work develops shield mechanisms built on large language models (LLMs) and other machine learning techniques to address challenges such as content moderation, copyright infringement in text generation, adversarial attacks on models, and environmental impact mitigation in data centers. These efforts aim to make AI systems more reliable and trustworthy, with impact on fields ranging from cybersecurity and supply chain management to environmental sustainability and smart city development. The overarching goal is more resilient and responsible AI systems.
Papers
Robust image classification with multi-modal large language models
Francesco Villani, Igor Maljkovic, Dario Lazzaro, Angelo Sotgiu, Antonio Emanuele Cinà, Fabio Roli
A General Safety Framework for Autonomous Manipulation in Human Environments
Jakob Thumm, Julian Balletshofer, Leonardo Maglanoc, Luis Muschal, Matthias Althoff