Paper ID: 2202.05331

Describing images focused on cognitive and visual details for visually impaired people: An approach to generating inclusive paragraphs

Daniel Louzada Fernandes, Marcos Henrique Fonseca Ribeiro, Fabio Ribeiro Cerqueira, Michel Melo Silva

Several services for people with visual disabilities have emerged recently due to achievements in Assistive Technology and Artificial Intelligence. Despite the growing availability of assistive systems, there is a lack of services that support specific tasks, such as understanding the image context presented in online content, e.g., webinars. Image captioning techniques and their variants are limited as Assistive Technologies because the descriptions they generate do not match the needs of visually impaired people. We propose an approach for generating context descriptions of webinar images that combines a dense captioning technique with a set of filters, to fit the captions to our domain, and a language model for abstractive summarization. The results demonstrate that, by combining image analysis methods and neural language models, we can produce descriptions with higher interpretability that focus on the information most relevant to this group of people.
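The pipeline outlined in the abstract (dense captions, domain filtering, abstractive summarization) could look roughly like the minimal sketch below. It is not the authors' implementation: the region captions are stubbed in place of a real dense captioning model, the keyword filter is a hypothetical stand-in for the paper's domain filters, and a generic Hugging Face summarization model is assumed for the abstractive step.

from transformers import pipeline

# Stub: in practice these captions would come from a dense captioning model
# (e.g., a DenseCap-style network) run on a webinar frame.
region_captions = [
    "a man wearing a blue shirt",
    "a laptop on a wooden desk",
    "a slide projected on a screen",
    "a window in the background",
    "a microphone in front of the speaker",
]

# Hypothetical domain filter: keep only captions mentioning webinar-relevant objects.
RELEVANT_TERMS = {"laptop", "slide", "screen", "microphone", "speaker", "man", "woman", "shirt"}

def is_relevant(caption: str) -> bool:
    return any(term in caption.lower() for term in RELEVANT_TERMS)

filtered = [c for c in region_captions if is_relevant(c)]

# Abstractive summarization of the concatenated, filtered captions
# into a single descriptive paragraph.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
text = ". ".join(filtered) + "."
summary = summarizer(text, max_length=60, min_length=15, do_sample=False)[0]["summary_text"]
print(summary)

Any pretrained abstractive summarizer could stand in for the language model here; the model name above is only an illustrative choice.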

Submitted: Feb 10, 2022