Paper ID: 2204.02448
Predicting and Explaining Mobile UI Tappability with Vision Modeling and Saliency Analysis
Eldon Schoop, Xin Zhou, Gang Li, Zhourong Chen, Björn Hartmann, Yang Li
We use a deep-learning-based approach to predict whether a selected element in a mobile UI screenshot will be perceived by users as tappable, using only pixels rather than the view hierarchies required by prior work. To help designers better understand model predictions and to provide more actionable design feedback than predictions alone, we additionally use ML interpretability techniques to help explain the output of our model. We use XRAI to highlight the areas in the input screenshot that most strongly influence the tappability prediction for the selected region, and use k-Nearest Neighbors to present the most similar mobile UIs from the dataset with opposing influences on tappability perception.
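The k-Nearest Neighbors retrieval described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the embedding vectors, label encoding, and the helper name `nearest_opposing_examples` are all hypothetical, and in practice the embeddings would come from an intermediate layer of the tappability model.

```python
import numpy as np

def nearest_opposing_examples(query_emb, embs, labels, query_label, k=3):
    """Return indices of the k nearest dataset UIs whose predicted
    tappability label differs from the query's (hypothetical helper)."""
    candidates = np.where(labels != query_label)[0]  # opposing-label UIs only
    dists = np.linalg.norm(embs[candidates] - query_emb, axis=1)
    return candidates[np.argsort(dists)[:k]]

# Toy data: 6 UI embeddings, half predicted tappable (1), half not (0).
rng = np.random.default_rng(0)
embs = rng.normal(size=(6, 4))
labels = np.array([1, 1, 1, 0, 0, 0])

# For a query predicted tappable, retrieve the 2 nearest non-tappable UIs.
idx = nearest_opposing_examples(embs[0], embs, labels, query_label=1, k=2)
```

Restricting the search to examples with the opposing label is what turns plain similarity retrieval into a contrastive explanation: the designer sees UIs that look alike but flip the model's tappability judgment.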
Submitted: Apr 5, 2022