By combining eye-tracking technology with a brain-based selection indicator, a human's intended point of action can be extracted from the scene, capturing the sophisticated human ability to identify points in complex scenes at the speed of thought. This project involves the development and evaluation of this eye-brain interface (EBI) system using commercial off-the-shelf eye-tracking hardware (which maps gaze location and duration) and EEG systems, combined with custom algorithms and processing that detect selection triggers from brain-based signals and fuse all sensor outputs in real time.
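The core fusion step can be sketched as follows: when the EEG pipeline reports a selection trigger, the system looks up the gaze sample nearest in time to that trigger and treats its scene coordinates as the intended point of action. This is a minimal illustrative sketch, not the project's actual implementation; the data structures, the `select_point` function, and the 100 ms matching window are all assumptions introduced here for illustration.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    t: float  # timestamp in seconds
    x: float  # gaze x in scene coordinates
    y: float  # gaze y in scene coordinates

# Hypothetical fusion helper: match an EEG-detected trigger time to the
# temporally nearest gaze sample; reject matches outside max_lag seconds.
def select_point(gaze_log, trigger_t, max_lag=0.1):
    best = min(gaze_log, key=lambda s: abs(s.t - trigger_t), default=None)
    if best is None or abs(best.t - trigger_t) > max_lag:
        return None  # no gaze sample close enough to the trigger
    return (best.x, best.y)

# Example: a short gaze log and a trigger detected at t = 1.02 s
log = [GazeSample(1.00, 320, 240),
       GazeSample(1.03, 322, 241),
       GazeSample(1.40, 500, 100)]
print(select_point(log, 1.02))  # nearest sample is the one at t = 1.03
```

In a real-time system the gaze log would be a bounded buffer and both streams would need clock synchronization, but the nearest-in-time matching shown here is the essential fusion idea.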
Such a system will allow a human-robot team to accomplish a task together with no overt action by the human, with response times and cognitive-workload demands that rival current state-of-the-art systems, including eye-mouse and eye-keyboard interfaces. The data generated by this system can also be used to train machine-vision models to autonomously mimic these human capabilities.