eyeReader Interface

eyeReader: EEG Brain Computer Interface for Turning eBook Pages (Washington University in St. Louis)

One of the authors wearing the Emotiv EEG headset, reading an eBook on a laptop.

Jenny Liu, Jason Dunkley, Will Ransohoff, Jasmine Kwasa, Matt Everett, David Welshon, Mason Mahoney (Washington University in St. Louis)


Approximately 900,000 Americans have little to no control over their hands, which makes daily tasks difficult and decreases quality of life. Our goal is to improve quality of life with a brain computer interface that allows subjects to read an electronic book (eBook) using affordable, commercially available components. Concentrating on a blinking square induces steady-state visually evoked potential (SSVEP) signals in the brain’s occipital lobe at the same frequency as the stimulus. EEG signals from the occipital lobe are acquired with the Emotiv EPOC headset while the subject concentrates on one of two squares blinking at different frequencies. Frequency-domain features detect which blinking square the subject is concentrating on, and the resulting control signal is used to turn eBook pages.




In the US, motor neuron diseases (MNDs) such as multiple sclerosis (MS) and amyotrophic lateral sclerosis (ALS) affect 7 out of every 100,000 people (MNDA, 2013). Roughly 900,000 Americans have little or no control over their hands (Bureau, 2005). Such individuals may have difficulty interacting with their environment. Many individuals with advanced-stage MNDs cannot use existing assistive-living technologies that require coordinated muscle control, such as voice recognition, sip-and-puff, and tongue interfaces. As a result, these individuals experience reduced quality of life due to a decreased ability to interact with their environments and act independently. Hiring an aide may be cost-prohibitive, and in an assisted-living environment, the aide may only have a limited amount of time for each resident. Moreover, noninvasive interfaces that require minimal voluntary motion are expensive, have a bulky physical setup, and are still in the development phase.

We propose an assistive technology, the eyeReader, which is a completely hands-free eBook reader that the user controls with a noninvasive EEG-based brain computer interface system. We expect that the eyeReader will improve the quality of life of MND patients by providing entertainment and the satisfaction of performing the task of turning pages independently. The eyeReader allows the user to read comfortably from a laptop or tablet. On the screen, an eBook page is flanked by two rectangles that blink at controlled rates. When the user wants to turn the page forward or backward, they concentrate on one of the blinking rectangles (blinkers). Concentrating on a blinking light produces steady-state visually evoked potentials (SSVEPs) in the brain’s occipital lobe. SSVEPs range between 3.5Hz and 75Hz and are well-characterized, robust signals with a relatively high signal-to-noise ratio (Vialatte, Maurice, Dauwels, & Cichocki, 2010). These brain signals are accessible via EEG. We used an inexpensive, easy-to-use EEG headset available to consumers to record from the occipital lobe. Depending on the user’s choice to focus on one of the two blinkers, the eBook page turns forward or backward. The ability for individuals with advanced MNDs to navigate a computer interface with their brain signals will improve quality of life.

Our goal is to enable people affected by severe loss of voluntary motion due to motor neuron diseases and other conditions, by providing an easy-to-use and inexpensive eBook interface.



The eyeReader allows users to read an eBook via an EEG headset and a laptop or tablet (Figure 1). Our system has four components: the interface, signal acquisition, signal processing, and calibration.


Figure 1 - Block diagram of data flow through the eyeReader system. EEG signals are acquired with the Emotiv EPOC (TM) headset at a 128Hz sample rate and sent to a laptop for processing. Signal processing, performed in C++, consists of signal feature selection, detection, and generation of the control signal. The control signal flips the eBook page, which provides visual feedback for the user.


Interface

Concentrating on a continuously flashing rectangle (blinker) induces SSVEP signals in the brain’s occipital lobe at the same frequency as the stimulus. Our user interface has two windows flashing at 7Hz and 10Hz. When the software detects that the user is concentrating on either window, the eBook pages turn forward or backward, respectively. If neither frequency is detected, the eBook stays at the current page. When the user sees the page turn correctly, the visual feedback improves the user’s ability to concentrate on the stimuli and increases future user accuracy (Middendorf, McMillan, Calhoun, & Jones, 2000).
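This page-turn logic can be sketched in C++. The sketch below is our illustration, not the eyeReader source: the 7Hz/10Hz frequencies come from the text, while the names, the tolerance value, and the enum are illustrative choices.

```cpp
#include <cmath>

// Possible outcomes of one detection cycle.
enum class PageAction { None, Forward, Backward };

// Map a detected SSVEP frequency (in Hz) to a page action. The tolerance
// absorbs small deviations between the detected peak and the blinker rate.
PageAction decideAction(double detectedHz, double toleranceHz = 0.5) {
    const double kForwardHz  = 7.0;   // blinker that turns the page forward
    const double kBackwardHz = 10.0;  // blinker that turns the page backward
    if (std::fabs(detectedHz - kForwardHz) <= toleranceHz)
        return PageAction::Forward;
    if (std::fabs(detectedHz - kBackwardHz) <= toleranceHz)
        return PageAction::Backward;
    return PageAction::None;  // neither frequency detected: stay on the page
}
```

If neither blinker frequency is detected within the tolerance, the result is `PageAction::None` and the book stays on the current page, matching the behavior described above.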


Figure 2 - (Left) The eyeReader interface: an eBook page flanked by two flashing lights (blinkers). (Right) A user wearing the Emotiv EEG headset concentrates on one of the two blinkers to navigate the eBook.

Signal Acquisition

The Emotiv EPOC headset collects EEG data from 14 electrodes and wirelessly outputs a signal digitized at 128 samples/second to the laptop. Our software uses EEG data from only two electrodes, O1 and O2, which correspond to the left and right occipital regions (Emotiv, 2013). The headset’s O1 and O2 electrodes can be easily adjusted to sit right above the occipital protuberance.

Signal Processing

Data collection and processing are performed in C++ by a module named EEGLogger (Figure 3). EEGLogger fills a ten-second first-in-first-out buffer with data from the Emotiv headset. Every second, EEGLogger updates its buffer, performs signal averaging (Figure 4A and B), and computes the Fourier transform (FFTW library) on the full ten seconds available. Since the SSVEP signal has the same frequency as the stimulus, EEGLogger looks for the largest peak in the vicinity of the stimulus frequency and records its magnitude. If there is no distinct peak, EEGLogger takes the magnitude at the stimulus frequency itself. The magnitude at the stimulus frequency is compared to the magnitudes at two nearby frequencies, one slightly above and one slightly below, to calculate two frequency-domain features (Figure 4C). The software detects that the user is concentrating on a stimulus when both frequency-domain features exceed predetermined thresholds.
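The detection rule can be illustrated with a short sketch. This is not the EEGLogger source: it computes a single DFT bin directly rather than using FFTW, and while the 128Hz sample rate is from the text, the neighbor offset and threshold values are assumptions.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

const double kPi = 3.14159265358979323846;

// Magnitude of a single DFT bin evaluated at an arbitrary frequency.
double binMagnitude(const std::vector<double>& x, double freqHz, double fs) {
    double re = 0.0, im = 0.0;
    for (std::size_t n = 0; n < x.size(); ++n) {
        double phase = 2.0 * kPi * freqHz * static_cast<double>(n) / fs;
        re += x[n] * std::cos(phase);
        im -= x[n] * std::sin(phase);
    }
    return std::sqrt(re * re + im * im);
}

// Two frequency-domain features: the magnitude at the stimulus frequency
// relative to one neighbor just below and one just above it. Both ratios
// must exceed a predetermined threshold for a detection.
bool detectSSVEP(const std::vector<double>& x, double stimHz,
                 double fs = 128.0, double offsetHz = 1.0,
                 double threshold = 2.0) {
    double peak  = binMagnitude(x, stimHz, fs);
    double below = binMagnitude(x, stimHz - offsetHz, fs);
    double above = binMagnitude(x, stimHz + offsetHz, fs);
    return peak > threshold * below && peak > threshold * above;
}
```

For example, ten seconds of a clean 10Hz sinusoid sampled at 128Hz triggers a detection at 10Hz, while frequencies near (but not at) the tone do not.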

Figure 3 - Data flow diagram for the EEGLogger module.


Figure 4 - (A) Signal averaging improves the SNR of the EEG signal, revealing a previously hidden peak at 16Hz. (B) The 'sliding window' signal averaging scheme allows more averages to be taken from a given sample. This technique introduces an artifact frequency at 1.6Hz, but it is outside our frequency range of interest. (C) Frequency-domain features must exceed thresholds to detect an SSVEP signal.
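The 'sliding window' averaging of Figure 4B can be sketched as follows. This is our illustration, not the project code, and the window length and stride are assumptions; note that a stride of 80 samples at 128 samples/second repeats every 0.625 seconds, i.e. at 1.6Hz, which would match the artifact frequency mentioned in the caption.

```cpp
#include <cstddef>
#include <vector>

// Sliding-window averaging: overlapping windows are cut from the buffer at
// a fixed stride and averaged sample-by-sample. Averaging attenuates noise
// that is uncorrelated across windows, improving SNR, but the windowing
// repeats at fs/stride Hz and can introduce an artifact there.
std::vector<double> slidingWindowAverage(const std::vector<double>& buffer,
                                         std::size_t windowLen,
                                         std::size_t stride) {
    std::vector<double> avg(windowLen, 0.0);
    std::size_t count = 0;
    for (std::size_t start = 0; start + windowLen <= buffer.size();
         start += stride) {
        for (std::size_t i = 0; i < windowLen; ++i)
            avg[i] += buffer[start + i];
        ++count;
    }
    if (count > 0)
        for (double& v : avg)
            v /= static_cast<double>(count);
    return avg;
}
```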


Calibration

To improve detection accuracy, a training interface uses audio commands to guide the user to look at the blinkers or at the eBook page in order to set the detection algorithm’s thresholds.
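A minimal sketch of what such a calibration might compute, assuming the training run records the detection feature while the user focuses on a blinker and while they simply read the page, and then places the threshold midway between the two means. The midpoint rule is our illustrative choice, not documented behavior.

```cpp
#include <numeric>
#include <vector>

// Arithmetic mean of a set of feature values.
double mean(const std::vector<double>& v) {
    return std::accumulate(v.begin(), v.end(), 0.0)
           / static_cast<double>(v.size());
}

// Place the detection threshold halfway between the average feature value
// recorded while the user focused on a blinker and the average recorded
// while the user read the page.
double chooseThreshold(const std::vector<double>& focusedFeatures,
                       const std::vector<double>& readingFeatures) {
    return 0.5 * (mean(focusedFeatures) + mean(readingFeatures));
}
```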



Results

The eyeReader was tested on two healthy subjects to validate our method for detecting which of two blinkers in the visual field the subject was concentrating on.

  1. Brain signals over the occipital lobe acquired using the commercially available Emotiv EPOC (TM) headset have distinct peaks in the frequency domain corresponding to the stimulus frequency (Figure 5 and Figure 6).
  2. Our detection algorithm successfully uses SSVEP signals to interface with an eBook reader (video above).
  3. Accuracy may be improved by customizing the stimulus frequencies to the user (Figure 7A). JD was tested using 7.2Hz and 16Hz, while JL was tested using 12.8Hz and 16Hz.
  4. Subject accuracy may improve with training (Figure 7).

Figure 5 – (A) Positions of the Emotiv EPOC (TM) electrodes; we used the O1 and O2 electrodes. (B) Frequency-domain analysis shows distinct peaks at 10Hz and 7.2Hz when subject JD looks at one of the two stimuli and no peaks when the subject looks at neither stimulus. (C) Brain signals with and without stimuli can be separated using peak magnitudes in the frequency domain (data from JL). As shown, the brain signals from the subject focusing on the 16Hz stimulus and on no stimulus occupy distinct locations on the plot, indicating very good separability. The 16Hz and 12.8Hz stimuli also show very good separability. The 12.8Hz stimulus and no stimulus show medium separability due to large variations in the height of the 12.8Hz peak.

Concentrating on one of the two blinkers produces brain signals at the stimulus frequency, as evidenced by the peaks in Figure 5(B). Distinct peaks in the frequency domain are used to detect which blinker the subject is concentrating on (Figure 5C). In the study, the subject was screened with a variety of stimulus frequencies; Figure 7 shows three of ten frequencies tested on JL. The 8Hz stimulus was not used due to low accuracy. The subject was randomly directed to look at one of the two blinkers using recorded verbal commands. The detection accuracies (listed in Table 1) range from 80% to 99% and average 86.2%. At least one control signal can be detected every 10 seconds, for an information transfer rate of approximately 5.2 bits/minute as calculated following Kronegg et al. (Kronegg, 2005). We expect the detection accuracy to improve with better signal detection algorithms and subject training, because JL was able to achieve 99% detection accuracy at 16Hz after training for 8 sessions over a month (Figure 7).
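The 5.2 bits/minute figure can be reproduced with the standard information-transfer-rate formula that Kronegg et al. analyze, assuming N = 3 possible outcomes per 10-second selection (turn forward, turn backward, or no command) at the average accuracy of 86.2%; the N = 3 reading is our assumption.

```cpp
#include <cmath>

// Wolpaw-style information transfer rate: with N possible selections,
// accuracy P, and one selection every T seconds,
//   bits/selection = log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1)).
double bitsPerMinute(int n, double p, double secondsPerSelection) {
    double bitsPerSelection = std::log2(static_cast<double>(n))
                            + p * std::log2(p)
                            + (1.0 - p) * std::log2((1.0 - p) / (n - 1));
    return bitsPerSelection * (60.0 / secondsPerSelection);
}
```

With n = 3, p = 0.862, and one selection per 10 seconds, this evaluates to roughly 5.2 bits/minute.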


Table 1 - Summary of data collected from JD and JL.

Figure 6 - (A) Raw EEG signal from the O1 electrode; the last ten seconds is highlighted in magenta and analyzed in the frequency domain. (B) The frequency domain shows a strong peak at 10Hz (black circle) and no peak at 7.2Hz (red circle). (C) The frequency detected by the algorithm over time (blue lines) follows the frequency the user was verbally commanded to look at (green line).

Figure 7 - ROC curves (A) before training and (B) after eight training sessions spread over four months, each about two hours long.



Discussion

SSVEP-based brain computer interfaces such as the eyeReader can provide a wealth of control signals for assistive-living technologies. Since the individual only needs to concentrate on one of several blinkers in the visual field, individuals with advanced motor neuron diseases and other conditions that restrict voluntary muscular control may benefit from this noninvasive interface, especially if they lack sufficient muscle control for speech recognition, sip-and-puff, and chin/tongue lever-based technologies. Specifically, the eyeReader is a system built from commercially available components that allows users with limited hand control to read eBooks. The user can read for up to twelve hours straight, which improves quality of life by providing a source of entertainment and the satisfaction of performing a task independently. The eyeReader can be used by anyone capable of concentrating on one of two blinkers in the visual field, providing an easy-to-use, inexpensive interface for individuals with advanced-stage MNDs.

For individuals who already own a laptop or computer, the cost of the Emotiv EEG Headset is $299 retail. This is comparable to the $200-$700 cost of switch-based interfaces we found online.


Table 2 - The device is relatively inexpensive if the user already owns a computer.

Current interfaces for patients with motor neuron diseases (MNDs) and quadriplegia use a variety of control signals derived from EOG (Mark Sagar, 2007), voice recognition (Anschutz, Luther-Krug, Seel, & Jones, 2012), sip-and-puff (Koerlin, 2011), or tongue and chin position (Caltenco, Struijk, & Breidegard, 2010). SSVEP-based brain computer interfaces only require the user to concentrate on a blinker in the visual field, so this technology is uniquely suited for patients who are physically unable to operate those devices. While SSVEP-based computer interfaces have been implemented in various research contexts (Zhu, 2010), the eyeReader uses the Emotiv EPOC EEG headset as an inexpensive and accessible platform.

Based on the recommendation of a physical therapist with two patients who have MS, we will make improvements to the prototype. We plan to create a mount for the laptop to interface with the patients’ beds. Moreover, now that we have a reasonable detection algorithm with simple commands, our next goal is to use this control signal to accept and terminate a phone call (the first item on a patient’s ‘wish list’) using a web-based telephone service such as Google Voice and a smartphone. Afterwards, we will work on a physical interface that opens and closes the patient’s room door. The eyeReader is a first step towards expanding the choice of interfaces available to individuals with advanced MNDs to improve quality of life.


Acknowledgments

We would like to thank Professor Arye Nehorai for providing the BCI student group with workspace. Our graduate student advisers, Elad Gilboa and Chuck Holmes, and Professors Ed Richter and Robert Morley gave us valuable advice about signal processing. Special thanks go to the two test subjects who helped us gather data. We thank physical therapist Arlene Goldberg for providing guidance on applications. This work was supported by the McKelvey Undergraduate Research Fellowship, the Washington University Electrical and Systems Engineering Department, and the Washington University Engineering Project Review Board. We also thank Emotiv Systems for donating a headset.


References

Anschutz, J. R., Luther-Krug, M. V., Seel, R. T., & Jones, M. L. (2012, Nov 20). Patent No. US8314691 B2. US.

Arnaud Delorme, A. L. (2012, April 10). Patent No. US8155736 B2. US.

Bureau, U. C. (2005). Prevalence of Disability Among Individuals 15 Years and Older by Specific Measures of Disability. Retrieved from http://www.census.gov/hhes/www/disability/sipp/disab05/d05tb1.pdf

Caltenco, H. A., Struijk, L. N., & Breidegard, B. (2010). TongueWise: Tongue-computer interface software for people with tetraplegia. 32nd Annual International Conference of the IEEE EMBS. Buenos Aires.

Collura, T. F. (2007, September 11). Patent No. US7269456 B2. US.

Emotiv. (2013). EEG Features. Retrieved April 5, 2013, from http://www.emotiv.com/eeg/features.php

Koerlin, J. M. (2011, Jan 26). Patent No. EP1774944 B1. Europe.

Kronegg, J. S. (2005). Information-transfer rate modeling of EEG-based synchronized brain-computer interfaces.

Mark Sagar, R. S. (2007, June 20). Patent No. EP1797537 A2. Europe.

Middendorf, M., McMillan, G., Calhoun, G., & Jones, K. S. (2000). Brain-computer interfaces based on the steady-state visual-evoked response. IEEE Transactions on Rehabilitation Engineering, 8(2), 211-214.

MNDA. (2013). About MND. Retrieved April 5, 2013, from http://www.mndassociation.org/what-is-mnd/Brief+guide+to+MND

Vialatte, F.-B., Maurice, M., Dauwels, J., & Cichocki, A. (2010). Steady-state visually evoked potentials: focus on essential paradigms and future perspectives. Progress in Neurobiology, 90, 418-438.

Zhu, D. (2010). A Survey of Stimulation Methods Used in SSVEP-Based BCIs. Computational Intelligence and Neuroscience, 1-12.

