Multimodal Biometrics
Multimodal biometrics aims to enhance the accuracy and robustness of biometric authentication by combining data from multiple sources, such as face, iris, fingerprint, gait, and voice. Current research emphasizes improved fusion techniques, in particular handling missing data through imputation and developing adaptive fusion strategies that account for varying sample quality and environmental conditions. The field is significant for security applications such as access control and surveillance, where it enables more reliable and resilient identification systems.
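As a concrete illustration of score-level fusion, the sketch below combines per-modality match scores with a quality-weighted sum and imputes a missing modality's score using the mean of the observed scores. It is a minimal, hypothetical example: the function name, the example weights, and the mean-imputation strategy are assumptions for illustration, not the method of any paper listed here.

```python
import numpy as np

def fuse_scores(scores, weights):
    """Weighted-sum score-level fusion (illustrative sketch).

    scores  : dict mapping modality name -> match score in [0, 1],
              with None marking a missing modality.
    weights : dict mapping modality name -> fusion weight,
              e.g. derived from an estimate of sample quality.
    Missing scores are imputed with the mean of the observed ones,
    a deliberately simple stand-in for the imputation methods
    discussed above.
    """
    observed = [s for s in scores.values() if s is not None]
    if not observed:
        raise ValueError("No modality produced a score")
    mean_score = float(np.mean(observed))

    fused, total_weight = 0.0, 0.0
    for modality, score in scores.items():
        s = mean_score if score is None else score
        w = weights.get(modality, 1.0)
        fused += w * s
        total_weight += w
    return fused / total_weight


# Example: the voice sample was not captured, so its score is imputed.
scores = {"face": 0.82, "fingerprint": 0.91, "voice": None}
weights = {"face": 0.5, "fingerprint": 1.0, "voice": 0.3}  # hypothetical quality weights
print(f"fused score: {fuse_scores(scores, weights):.3f}")
```

In practice the weights would come from a learned quality-dependent model and the missing scores from a more principled estimator, directions explored in the benchmark paper listed below.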
Papers
The Multiscenario Multienvironment BioSecure Multimodal Database (BMDB)
Javier Ortega-Garcia, Julian Fierrez, Fernando Alonso-Fernandez, Javier Galbally, Manuel R Freire, Joaquin Gonzalez-Rodriguez, Carmen Garcia-Mateo, Jose-Luis Alba-Castro, Elisardo Gonzalez-Agulla, Enrique Otero-Muras, Sonia Garcia-Salicetti, Lorene Allano, Bao Ly-Van, Bernadette Dorizzi, Josef Kittler, Thirimachos Bourlai, Norman Poh, Farzin Deravi, Ming NR Ng, Michael Fairhurst, Jean Hennebert, Andreas Humm, Massimo Tistarelli, Linda Brodo, Jonas Richiardi, Andrzej Drygajlo, Harald Ganster, Federico M Sukno, Sri-Kaushik Pavani, Alejandro Frangi, Lale Akarun, Arman Savran
Benchmarking Quality-Dependent and Cost-Sensitive Score-Level Multimodal Biometric Fusion Algorithms
Norman Poh, Thirimachos Bourlai, Josef Kittler, Lorene Allano, Fernando Alonso-Fernandez, Onkar Ambekar, John Baker, Bernadette Dorizzi, Omolara Fatukasi, Julian Fierrez, Harald Ganster, Javier Ortega-Garcia, Donald Maurer, Albert Ali Salah, Tobias Scheidat, Claus Vielhauer