Motor disability can be caused by amyotrophic lateral sclerosis (ALS), multiple sclerosis (MS), cerebral palsy, spinal cord injury, traumatic brain injury, stroke, muscular dystrophy, and other serious illnesses. In particular, the speech of ALS patients degenerates and eventually becomes incomprehensible, which significantly limits their quality of life. As a patient's motor and speech capabilities evolve at different stages of the disease, their needs for assistive communication keep changing. There is a lack of research-driven technical development that ensures all patients have access to effective communication and can participate to their greatest potential, by taking full advantage of each individual's remaining communication capabilities to maximize human-computer interaction.
The goal of the EyeCanDo project is to develop a lightweight, cost-effective, multi-modal assistive communication app, EyeCanDo, running on an iPad (or iPhone) with an optional consumer-grade wireless EEG headset. The app takes full advantage of an ALS patient's available capabilities, including eye gaze, facial expressions, and brain-computer interfaces (BCI), to optimize communication and improve quality of life.
EyeCanDo provides an augmented reality (AR) based approach for gaze and facial expression detection, a Bayesian inference method that synergizes information across modalities for improved accuracy and efficiency, novel gesture typing driven by gaze, and advanced deep learning approaches for target selection and BCI P300 signal detection.
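To illustrate the kind of multi-modal fusion described above, the following is a minimal, hypothetical sketch (not EyeCanDo's actual model) of Bayesian combination of evidence from two modalities. It assumes a gaze detector and a facial-expression detector each produce per-target likelihoods over a set of on-screen candidates, and that the two observations are conditionally independent given the intended target, so they can be fused by Bayes' rule (a naive-Bayes combination). The function and variable names here are illustrative only.

```python
def fuse_posteriors(prior, gaze_likelihood, face_likelihood):
    """Combine per-target likelihoods from two independent modalities
    (e.g., gaze and facial expression) with a prior via Bayes' rule.

    All arguments are lists of the same length, one entry per candidate
    target. Returns a normalized posterior over the targets.
    """
    # Under conditional independence, the joint likelihood is the product
    # of the per-modality likelihoods; multiply by the prior and normalize.
    unnormalized = [p * g * f
                    for p, g, f in zip(prior, gaze_likelihood, face_likelihood)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Three on-screen targets with a uniform prior: gaze strongly favors
# target 1, and the facial-expression evidence mildly agrees.
prior = [1 / 3, 1 / 3, 1 / 3]
gaze = [0.1, 0.8, 0.1]
face = [0.3, 0.5, 0.2]
posterior = fuse_posteriors(prior, gaze, face)
```

Even when each single modality is noisy, agreement between modalities concentrates the posterior on the intended target, which is the accuracy and efficiency benefit the fusion step aims for.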