BRIO Tool
BRIO is a suite of tools for analyzing and mitigating bias and unfairness in AI systems, with a focus on model-agnostic bias detection and fairness risk evaluation. Current research applies BRIO to assess fairness in specific domains such as credit scoring, and explores different model architectures (e.g., neural networks, transformers) and algorithms (e.g., random features, Bayesian regularization) to improve accuracy and efficiency. BRIO matters because it provides quantitative methods for identifying and addressing societal biases embedded in AI models, supporting more ethical, fair, and trustworthy AI development and deployment.
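
To make the idea of a quantitative, model-agnostic bias check concrete, the sketch below computes two simple group-fairness statistics on toy credit-scoring decisions: a demographic parity difference and a KL divergence between group outcome distributions. The function names, the toy data, and the choice of metrics are assumptions for illustration only; they are not BRIO's actual API or implementation.

    # Illustrative sketch only: a minimal, model-agnostic bias check.
    # Function names, metrics, and data are assumptions, not BRIO's API.
    from collections import Counter
    from math import log

    def demographic_parity_difference(outcomes, groups, positive=1):
        """Largest gap in positive-outcome rates across groups."""
        rates = {}
        for g in set(groups):
            members = [o for o, gg in zip(outcomes, groups) if gg == g]
            rates[g] = sum(1 for o in members if o == positive) / len(members)
        vals = sorted(rates.values())
        return vals[-1] - vals[0]

    def kl_divergence(p_counts, q_counts, smoothing=1e-9):
        """KL divergence between two empirical outcome distributions."""
        keys = set(p_counts) | set(q_counts)
        p_total = sum(p_counts.values()) + smoothing * len(keys)
        q_total = sum(q_counts.values()) + smoothing * len(keys)
        div = 0.0
        for k in keys:
            p = (p_counts.get(k, 0) + smoothing) / p_total
            q = (q_counts.get(k, 0) + smoothing) / q_total
            div += p * log(p / q)
        return div

    # Toy credit-scoring example: model decisions (1 = approve) for two groups.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

    dpd = demographic_parity_difference(decisions, groups)
    group_a = Counter(d for d, g in zip(decisions, groups) if g == "A")
    group_b = Counter(d for d, g in zip(decisions, groups) if g == "B")
    print(f"demographic parity difference: {dpd:.2f}")
    print(f"KL divergence between group outcomes: {kl_divergence(group_a, group_b):.3f}")

Both statistics depend only on model inputs and outputs, which is what makes such checks model-agnostic: the same code can score a logistic regression, a transformer, or any other classifier.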
Papers
ToolSword: Unveiling Safety Issues of Large Language Models in Tool Learning Across Three Stages
Junjie Ye, Sixian Li, Guanyu Li, Caishuang Huang, Songyang Gao, Yilong Wu, Qi Zhang, Tao Gui, Xuanjing Huang
DataDreamer: A Tool for Synthetic Data Generation and Reproducible LLM Workflows
Ajay Patel, Colin Raffel, Chris Callison-Burch
Combining shape and contour features to improve tool wear monitoring in milling processes
M. T. García-Ordás, E. Alegre-Gutiérrez, V. González-Castro, R. Alaiz-Rodríguez
Tool wear monitoring using an online, automatic and low cost system based on local texture
M. T. García-Ordás, E. Alegre-Gutiérrez, R. Alaiz-Rodríguez, V. González-Castro