Black Box
"Black box" refers to systems whose internal workings are opaque, hindering understanding and analysis. Current research focuses on methods to analyze and mitigate the limitations of black-box models, particularly deep neural networks, across diverse applications such as code generation, robot design, and autonomous systems. Key approaches include building surrogate models, applying novel optimization techniques, and designing explainable AI (XAI) methods to improve interpretability and trustworthiness. This work is crucial for ensuring the safety, reliability, and fairness of increasingly prevalent AI systems.
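The surrogate-model approach mentioned above can be sketched in a few lines: treat a function as opaque (query-only), sample it, fit a cheap approximation, and analyze the approximation in place of the original. The `black_box` function and the polynomial degree below are illustrative assumptions, not taken from any of the listed papers.

```python
import numpy as np

# Hypothetical black box: we may only query it, never inspect its internals.
def black_box(x):
    return x ** 2 + np.sin(x)

# Step 1: sample the black box at a small number of query points.
xs = np.linspace(-2.0, 2.0, 15)
ys = black_box(xs)

# Step 2: fit a cheap, transparent surrogate (here, a degree-4 polynomial).
coeffs = np.polyfit(xs, ys, deg=4)
surrogate = np.poly1d(coeffs)

# Step 3: analyze/optimize the surrogate instead of the expensive black box.
grid = np.linspace(-2.0, 2.0, 1001)
x_min = grid[np.argmin(surrogate(grid))]
print(f"surrogate estimates a minimum near x = {x_min:.2f}")
```

Real surrogate-based methods (e.g. Bayesian optimization with Gaussian-process surrogates, or LIME-style local explanations) follow the same query-fit-analyze loop, with more principled choices of samples and surrogate family.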
Papers
Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing
Adrian Höhl, Ivica Obadic, Miguel Ángel Fernández Torres, Hiba Najjar, Dario Oliveira, Zeynep Akata, Andreas Dengel, Xiao Xiang Zhu
ARL2: Aligning Retrievers for Black-box Large Language Models via Self-guided Adaptive Relevance Labeling
Lingxi Zhang, Yue Yu, Kuan Wang, Chao Zhang
M4GT-Bench: Evaluation Benchmark for Black-Box Machine-Generated Text Detection
Yuxia Wang, Jonibek Mansurov, Petar Ivanov, Jinyan Su, Artem Shelmanov, Akim Tsvigun, Osama Mohanned Afzal, Tarek Mahmoud, Giovanni Puccetti, Thomas Arnold, Alham Fikri Aji, Nizar Habash, Iryna Gurevych, Preslav Nakov
Trust Regions for Explanations via Black-Box Probabilistic Certification
Amit Dhurandhar, Swagatam Haldar, Dennis Wei, Karthikeyan Natesan Ramamurthy
Embracing the black box: Heading towards foundation models for causal discovery from time series data
Gideon Stein, Maha Shadaydeh, Joachim Denzler
Ten Words Only Still Help: Improving Black-Box AI-Generated Text Detection via Proxy-Guided Efficient Re-Sampling
Yuhui Shi, Qiang Sheng, Juan Cao, Hao Mi, Beizhe Hu, Danding Wang
Under manipulations, are some AI models harder to audit?
Augustin Godinot, Gilles Tredan, Erwan Le Merrer, Camilla Penzo, François Taïani