Post-Processing
Post-processing refines the initial outputs of models or systems to improve their accuracy, reliability, and fairness. Current research leverages deep learning architectures, such as U-Nets and transformers, alongside statistical methods such as quantile regression forests and scoring rule minimization. Applications range from sharpening probabilistic weather forecasts and restoring the quality of audio and image data to mitigating bias in machine learning models and improving the output of speech recognition systems. Together, these advances yield more accurate, reliable, and equitable outcomes across diverse scientific and practical domains.
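To make the statistical side of this concrete, the sketch below illustrates one common flavour of forecast post-processing: fitting a Gaussian predictive distribution whose parameters depend on ensemble statistics by minimizing a threshold-weighted CRPS, so that errors on extreme values (here, high wind speeds) count more. This is a minimal illustration in the spirit of the weighted scoring-rule approach mentioned above and in the first paper below, not an implementation of any listed paper; the toy data, the threshold, and all variable names are illustrative assumptions.

```python
"""Minimal sketch: EMOS-style post-processing fitted with a threshold-weighted CRPS.

All data, parameters, and the 15 m/s threshold are toy assumptions for illustration.
"""
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# --- toy training data (assumed): ensemble mean/spread and verifying observations ---
n = 500
ens_mean = rng.gamma(shape=4.0, scale=2.5, size=n)      # raw ensemble-mean wind speed (m/s)
ens_spread = 0.5 + 0.3 * rng.random(n)                  # raw ensemble spread
obs = ens_mean + rng.normal(0.0, 1.5, size=n)           # "observed" wind speed

THRESHOLD = 15.0                                        # emphasise speeds above this value
eps = rng.standard_normal((n, 200))                     # fixed draws -> deterministic objective


def tw_crps(params):
    """Sample-based threshold-weighted CRPS for a Gaussian predictive distribution.

    Uses the chaining representation v(z) = max(z, THRESHOLD):
        twCRPS(F, y) = E|v(X) - v(y)| - 0.5 * E|v(X) - v(X')|,
    which corresponds to the indicator weight w(z) = 1{z >= THRESHOLD}.
    """
    a, b, c, d = params
    mu = a + b * ens_mean                               # post-processed mean
    sigma = np.exp(c + d * np.log(ens_spread))          # positive link for the spread
    x = mu[:, None] + sigma[:, None] * eps              # samples from N(mu, sigma^2)

    v_x = np.maximum(x, THRESHOLD)
    v_y = np.maximum(obs, THRESHOLD)[:, None]
    term1 = np.mean(np.abs(v_x - v_y), axis=1)
    term2 = 0.5 * np.mean(np.abs(v_x[:, :100] - v_x[:, 100:]), axis=1)
    return np.mean(term1 - term2)


result = minimize(tw_crps, x0=np.array([0.0, 1.0, 0.0, 1.0]), method="Nelder-Mead")
print("fitted (a, b, c, d):", result.x)
```

Fixing the standard-normal draws `eps` once keeps the Monte Carlo objective smooth and deterministic, so a derivative-free optimizer converges cleanly; in practice one would use a closed-form weighted CRPS or a gradient-based fit on real ensemble data.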
Papers
Improving probabilistic forecasts of extreme wind speeds by training statistical post-processing models with weighted scoring rules
Jakob Benjamin Wessel, Christopher A. T. Ferro, Gavin R. Evans, Frank Kwasniok
Regression under demographic parity constraints via unlabeled post-processing
Evgenii Chzhen, Mohamed Hebiri, Gayane Taturyan
SEL-CIE: Knowledge-Guided Self-Supervised Learning Framework for CIE-XYZ Reconstruction from Non-Linear sRGB Images
Shir Barzel, Moshe Salhov, Ofir Lindenbaum, Amir Averbuch
Refining Coded Image in Human Vision Layer Using CNN-Based Post-Processing
Takahiro Shindo, Yui Tatsumi, Taiju Watanabe, Hiroshi Watanabe