Paper ID: 2208.00465

Vector-Based Data Improves Left-Right Eye-Tracking Classifier Performance After a Covariate Distributional Shift

Brian Xiang, Abdelrahman Abdelmonsef

The main challenges of using electroencephalogram (EEG) signals to make eye-tracking (ET) predictions are the differences in distributional patterns between benchmark data and real-world data and the noise resulting from the unintended interference of brain signals from multiple sources. Increasing the robustness of machine learning models in predicting eye-tracking position from EEG data is therefore essential for both research and consumer use. In medical research, using more complex data collection methods to test for simpler tasks has been explored as a way to address this very issue. In this study, we propose a fine-grain data approach to EEG-ET data collection in order to create more robust benchmarks. We train machine learning models on both coarse-grain and fine-grain data and compare their accuracies when tested on data with similar and different distributional patterns, in order to determine how susceptible EEG-ET benchmarks are to differences in data distribution. We apply a covariate distributional shift to test for this susceptibility. Results showed that models trained on fine-grain, vector-based data were less susceptible to distributional shifts than models trained on coarse-grain, binary-classified data.
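The abstract does not include implementation details, but the evaluation protocol it describes can be sketched as follows: train one model on coarse-grain (binary left/right) labels and one on fine-grain (vector-based) gaze targets, then compare their left/right accuracy on held-out data drawn from the training distribution versus a covariate-shifted distribution. The sketch below is a minimal, hypothetical illustration, not the authors' code; the synthetic features, the linear models, the shift magnitude, and the thresholding of the vector output into left/right are all assumptions made for illustration.

```python
# Hypothetical sketch of the evaluation protocol described in the abstract.
# All modeling choices here are illustrative assumptions, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression, Ridge

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Synthetic EEG-like features; horizontal gaze position depends on them.
    A covariate shift moves the feature distribution while P(label | features)
    stays the same."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 16))
    w = np.linspace(-1, 1, 16)
    gaze_x = X @ w + rng.normal(scale=0.5, size=n)   # fine-grain, vector-based target
    left_right = (gaze_x > 0).astype(int)            # coarse-grain, binary target
    return X, gaze_x, left_right

# Train both models on the unshifted ("benchmark") distribution.
X_tr, gaze_tr, lr_tr = make_data(2000, shift=0.0)
coarse_model = LogisticRegression(max_iter=1000).fit(X_tr, lr_tr)  # binary classifier
fine_model = Ridge(alpha=1.0).fit(X_tr, gaze_tr)                   # vector-based regressor

# Evaluate left/right accuracy in-distribution and under a covariate shift.
for shift in (0.0, 1.0):
    X_te, gaze_te, lr_te = make_data(2000, shift=shift)
    acc_coarse = (coarse_model.predict(X_te) == lr_te).mean()
    acc_fine = ((fine_model.predict(X_te) > 0).astype(int) == lr_te).mean()
    print(f"shift={shift:.1f}  coarse-grain acc={acc_coarse:.3f}  fine-grain acc={acc_fine:.3f}")
```

In this toy setup the fine-grain model's continuous prediction is thresholded at zero so that both models can be scored on the same left/right task before and after the shift, mirroring the comparison the abstract describes.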

Submitted: Jul 31, 2022