Umut Orhan, Andrew Fowler, Marzieh Haghighi, Mohammad Moghadamfalahi, Asieh Ahani
RSVP Keyboard™ is an electroencephalography (EEG)-based brain-computer interface (BCI) typing system designed for people with locked-in syndrome (LIS). It is a novel BCI system that aims to create accessibility options even for people with the most severe speech and physical impairments. RSVP Keyboard™ introduces two main features to the BCI field. First, for visual stimulation it utilizes rapid serial visual presentation (RSVP), which does not require precise gaze control. Second, RSVP Keyboard™ uses context information from a statistical language model in an efficient manner, consequently speeding up typing considerably.
Millions of people in the world have complex communication needs due to motor and speech impairments. Assistive technologies relying on body or eye movements, voice, or eye gaze may not be suitable for those with the most severe speech and physical impairments. Therefore, brain-computer interfaces have emerged as an alternative access method, and possibly the only means of communication for people with locked-in syndrome.
Symbolic communication is one critical feature that distinguishes humans from other mammals, and having a means to communicate is a basic right for everyone. However, because of their complex speech and physical impairments, individuals with locked-in syndrome cannot rely on most currently available assistive technologies, which require control of some physiological input mechanism. In this project, we aim to design a portable, fast brain-computer interface typing system that can be used for daily communication even by individuals with the highest degree of speech and motor disability.
Scalp electroencephalography (EEG) is a non-invasive recording technique that measures the electrical activity of the brain. EEG is becoming increasingly popular for measuring brain activity due to its portability, temporal resolution, and low cost. These advantages led us, like most BCI designers, to build our system on EEG; our design uses gTec's commercially available EEG acquisition system.

Most existing EEG-based BCIs employ visual stimuli that require precise eye gaze control and are consequently unusable by people who cannot control their gaze. To alleviate this problem, we employ rapid serial visual presentation (RSVP), a presentation methodology that uses only a fixed location on the screen. Hence, even a person without the ability to control their eye gaze may be able to attend to RSVP. The RSVP Keyboard™ detects the event-related potential (ERP) elicited when a user sees a desired symbol appear in a series of symbols. The most prominent component of this ERP is the P300, a positive voltage deflection related to surprise or event recognition that is maximal over centroparietal regions.

EEG also has a very low signal-to-noise ratio, which makes designing accurate typing systems very difficult. To increase accuracy, most BCI designers have sacrificed typing speed by repeating each stimulus multiple times; this drawback considerably reduces the usability of such systems. To address this challenge, we integrate a statistical language model that uses contextual information (in this case, previously typed symbols and letters). Our design tightly fuses the language model directly into the P300 classification decision process. Incorporating a language model at such an early stage of decision making is unprecedented within BCI design.
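The tight fusion described above can be sketched as a Bayesian update in which the language model supplies a prior over the alphabet and the P300 classifier supplies a likelihood. The following Python sketch (the actual system is implemented in MATLAB) uses a toy alphabet, made-up bigram counts, and illustrative probabilities; the plain posterior-proportional-to-likelihood-times-prior rule is an assumption about the form of the algorithm, not the published method.

```python
# Illustrative fusion of a character language-model prior with P300
# classifier scores. Alphabet, counts, and probabilities are toy values.

ALPHABET = ["A", "B", "C", "_"]  # toy alphabet; "_" stands for space

def lm_prior(context, bigram_counts):
    """Character-bigram prior P(next | last character), add-one smoothed."""
    last = context[-1] if context else "_"
    counts = bigram_counts.get(last, {})
    total = sum(counts.values()) + len(ALPHABET)
    return {c: (counts.get(c, 0) + 1) / total for c in ALPHABET}

def fuse(eeg_likelihoods, prior):
    """Posterior proportional to likelihood x prior, renormalized."""
    post = {c: eeg_likelihoods[c] * prior[c] for c in ALPHABET}
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

# Toy data: the EEG evidence mildly favors "B", but the typing context
# ("C" was just typed) strongly favors "A"; fusion overturns the EEG vote.
bigrams = {"C": {"A": 8, "B": 1, "C": 1, "_": 1}}
likelihood = {"A": 0.30, "B": 0.40, "C": 0.15, "_": 0.15}

posterior = fuse(likelihood, lm_prior("C", bigrams))
best = max(posterior, key=posterior.get)  # -> "A"
```

Because the prior enters before any symbol is committed, a strong context can reduce the number of stimulus repetitions needed, which is the mechanism behind the speed-up claimed above.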
RSVP Keyboard™ hardware consists of three main components.
- EEG cap and electrodes: g.GAMMAcap with g.butterfly active electrodes from gTec. This prototype utilizes 16 EEG sensors.
- Biosignal amplifier: g.USBAmp from gTec.
- Laptop computer: Acquires signals from the biosignal amplifier, displays stimuli, applies signal processing, estimates context information, and runs the machine learning components.
The software is implemented in MATLAB and consists of the following elements.
- Data acquisition: EEG signals and triggers coming from presentation are acquired in real time.
- Signal processing: Signals are band-pass filtered and partitioned into trials using the triggers. Dimensionality reduction and feature extraction are applied to each trial, and probability densities are estimated for the resulting features.
- Language modeling: Character-based n-gram models, trained on a text corpus, generate context-based probabilities over the whole alphabet.
- Decision making: The context-based probabilities and the EEG features are fused probabilistically. The decision-making process returns a probability mass for each symbol, which is also used to choose the next action, e.g., committing to a symbol or setting the next presentation sequence.
- Presentation: Shows the stimuli and feedback.
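As an illustration of the signal-processing element above, the following Python sketch band-pass filters multichannel EEG and partitions it into stimulus-locked trials at the trigger times. The sampling rate, band edges, filter order, and window length are illustrative assumptions, and the sketch is not the MATLAB implementation used in the system.

```python
# Minimal sketch of the signal-processing stage: band-pass filtering raw
# EEG and cutting fixed-length trials after each stimulus trigger.
# All numeric parameters are assumed for illustration.

import numpy as np
from scipy.signal import butter, filtfilt

FS = 256            # sampling rate in Hz (assumed)
BAND = (0.5, 30.0)  # band-pass edges in Hz (assumed)

def bandpass(eeg, fs=FS, band=BAND, order=4):
    """Zero-phase band-pass filter applied along the time axis."""
    b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def epoch(eeg, trigger_samples, fs=FS, window_s=0.5):
    """Cut a fixed-length trial starting at each trigger sample."""
    n = int(window_s * fs)
    return np.stack([eeg[:, t:t + n] for t in trigger_samples])

# Usage on synthetic data: 16 channels, 4 s of noise, three triggers.
raw = np.random.randn(16, 4 * FS)
trials = epoch(bandpass(raw), trigger_samples=[100, 400, 700])
# trials.shape == (3, 16, 128): trials x channels x samples
```

Each trial would then be reduced in dimension and scored by the classifier, as described in the list above.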
RSVP Keyboard™ opens up an accessibility route for expression to people who have no other way to communicate. It brings two unique design features to the BCI field. First, it utilizes RSVP, a gaze-independent stimulus presentation method. Second, it employs a tight fusion of EEG and context information through joint decision making between a language model and the P300 classifier. The system was successfully tested not only with participants without disabilities in our lab, but also with people who have complex communication needs because of incomplete LIS. Cost currently remains a barrier to making this novel BCI available to most people; the most expensive component of RSVP Keyboard™ is the biosignal amplifier. However, cheaper EEG acquisition systems have started to become available on the market. We are currently examining the feasibility of incorporating these new EEG acquisition systems into the RSVP Keyboard™ in order to considerably decrease the cost and, consequently, increase the marketability of this novel accessible typing system.
The RSVP Keyboard™ is a collaborative project between REKNEW Projects and CSLU at OHSU, and CSL at NEU. We would like to thank Dr. Deniz Erdogmus and Dr. Murat Akcakaya from Northeastern University for their guidance; Dr. Melanie Fried-Oken, Betts Peters and Aimee Mooney from OHSU (Oregon Health and Science University) for sharing their experience about assistive technologies and testing the system on people with locked-in syndrome; Dr. Brian Roark from CSLU at OHSU for providing advice for the language model; Dr. Barry Oken from OHSU for sharing his experience on EEG; Meghan Miller for helping out with experiments. This work is supported by a grant from the National Institutes of Health/NIDCD #1RO1DC009834-01 and by the National Science Foundation IIS-0914808.