Signal Processing to Drive Human-Computer Interaction: EEG and eye-controlled interfaces
Institute for Web Science and Technologies, Koblenz, Germany
The evolution of eye tracking and brain-computer interfaces has given a new perspective on the control channels that can be used for interacting with computer applications. In this book, leading researchers show how these technologies can serve as control channels, combining signal processing algorithms with interface adaptations to drive a human-computer interface. Topics covered include: a comprehensive overview of eye-mind interaction, incorporating algorithm and interface developments; modeling the (dis)abilities of people with motor impairment, their computer use requirements, and their expectations of assistive interfaces; and signal processing aspects, including acquisition, preprocessing, enhancement, feature extraction, and classification of eye gaze, EEG (steady-state visual evoked potentials, motor imagery, and error-related potentials), and near-infrared spectroscopy (NIRS) signals. Finally, the book presents a comprehensive set of guidelines, with examples, for conducting evaluations to assess the usability, performance, and feasibility of multimodal interfaces that combine eye gaze and EEG-based interaction algorithms. The contributors are researchers, engineers, clinical experts, and industry practitioners who have collaborated on these topics, providing an interdisciplinary perspective on the underlying challenges of eye and mind interaction and outlining future directions in the field.
Inspec keywords: electroencephalography; human computer interaction; multimedia computing; medical signal processing; patient rehabilitation; visual evoked potentials; gaze tracking; learning (artificial intelligence); brain-computer interfaces; home computing; handicapped aids
Other keywords: graph signal processing; home environment; patient rehabilitation; eye-controlled interfaces; steady-state visual evoked potentials; persuasive design; signal processing; EEG-based BCIs; sensorimotor rhythms; motor impairment; machine learning; user models; eye tracking; multimedia interface; EEG controlled interfaces; NIRS signals; patient communication; motor imagery; human-computer interaction; brain-computer interfaces
Subjects: Knowledge engineering techniques; Electrodiagnostics and other electrical measurement techniques; User interfaces; Digital signal processing; Multimedia; Signal processing and detection; Home computing; Monographs and collections; Computer assistance for persons with handicaps; General and management topics; Bioelectric signals; General electrical engineering topics; Electrical activity in neurophysiological processes; Interactive-input devices; Aids for the handicapped; Biology and medical computing
- Book DOI: 10.1049/PBCE129E
- ISBN: 9781785619199
- e-ISBN: 9781785619205
- Page count: 309
- Format: PDF
Front Matter
(1)
1 Introduction
pp. 1–6 (6)
A radically new perspective on natural computer interaction has gained momentum, aiming to deliver technology that allows people to operate computer applications using their eyes and mind. This interaction uses eye movements, captured through eye-tracking devices that follow the user's gaze, and brain electrical signals, captured through electroencephalography (EEG) devices that reflect the brain's state. The fundamental assumption underlying this new perspective is that the signals (i.e. eye movements and brain electrical signals) obtained from people with motor impairment who are in good mental health can reliably drive the interface controls of a computer application.
Part I. Reviewing existing literature on the benefits of BCIs, studying the computer use requirements and modeling the (dis)abilities of people with motor impairment
2 The added value of EEG-based BCIs for communication and rehabilitation of people with motor impairment
pp. 9–32 (24)
The development of novel BCIs raises new hopes for the communication and control, as well as the motor rehabilitation, of people with motor impairment. However, the majority of published works are essentially proof-of-concept studies with no clinical evidence of daily use by people with motor impairment. Research interest in BCI systems is expected to increase, and BCI design and development will most probably continue to bring benefits to the daily lives of people with motor impairment. Moreover, to address the need for extensive training in self-regulation of SMR, and considering the effect of motivation on BCI control performance, more enjoyable solutions such as virtual reality or gaming/painting could be used. These approaches re-enable patients to be creatively active and consequently promote feelings of happiness, self-esteem, and well-being, as well as better quality of life. Finally, as the goal of future studies should be to demonstrate a long-term beneficial impact of BCI technology on functional recovery and motor rehabilitation, extensive randomized controlled trials are required.
3 Brain–computer interfaces in a home environment for patients with motor impairment—the MAMEM use case
pp. 33–47 (15)
Individuals with motor disabilities are marginalized in a digitized world, unable to keep up with the rest of society and given little opportunity for social inclusion. Specially designed electronic devices are required to enable patients to bypass the loss of hand motor dexterity that renders computer use impossible. MAMEM's ultimate goal is to deliver technology that enables people with motor disabilities to operate a computer using interface channels controlled through eye movements and mental commands. Three groups of 10 patients with motor disabilities each were recruited to try the MAMEM platform at home: patients diagnosed with high spinal cord injury, patients with Parkinson's disease, and patients with neuromuscular diseases. Patients kept the MAMEM platform - including a built-in monitoring mechanism - at home for one month. Some participants used the platform extensively, participating in social networks, while others used it far less. In general, patients with motor disabilities perceived the platform as a useful and satisfactory assistive device that enabled computer use and digital social activities.
4 Persuasive design principles and user models for people with motor disabilities
pp. 49–79 (31)
When developing effective assistive technology, it is crucial to focus on how acceptance and continued use of the technology can be optimized, considering the (complexity of the) user and his or her situation. This chapter therefore describes methods for creating user models and shows how these were applied to the user groups (patients with spinal cord injury, Parkinson's disease, and neuromuscular disorders) of a newly developed assistive technology (AT). The user models include user characteristics such as demographics, relevant medical information, computer interaction behaviour, and attitudes towards novel assistive devices. Next, the chapter describes persuasive strategies to improve user acceptance and continued use of AT, specifically aimed at motivating individuals with disabilities to learn to operate the AT and to use it, in order to increase their social participation. The chapter also shows how empirical research has tested the effectiveness of the proposed persuasive and personalized (i.e., incorporating user model knowledge) design elements, and how the implications of these findings were used to improve the persuasive design requirements of the AT. In sum, this chapter shows how persuasive, personalized design principles implemented in the AT improve user acceptance (evaluations) and continued use (performance).
Part II. Algorithms and interfaces for interaction control through eyes and mind
5 Eye tracking for interaction: adapting multimedia interfaces
pp. 83–116 (34)
This chapter describes how eye tracking can be used for interaction. The term eye tracking refers to the process of tracking the movement of the eyes in relation to the head in order to estimate the direction of eye gaze. The gaze direction can be related to the absolute head position and the geometry of the scene, such that a point-of-regard (POR) may be estimated. In the following, we call the sequence of POR estimations the gaze signal, and a single estimation a gaze sample. In Section 5.1, we provide a basic description of the eye anatomy that is required to understand the technologies behind eye tracking and their limitations. Moreover, we discuss popular technologies for performing eye tracking and explain how to process gaze signals for real-time interaction. In Section 5.2, we describe the unique challenges of eye tracking for interaction, as we use the eyes primarily for perception and risk overloading them with interaction. In Section 5.3, we survey graphical interfaces for multimedia access that have been adapted to work effectively with eye-controlled interaction. After discussing the state of the art in eye-controlled multimedia interfaces, we outline in Section 5.4 how the contextualized integration of gaze signals might proceed in order to provide richer interaction with eye tracking.
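The chapter's own gaze-processing pipeline is not reproduced in this abstract. As a rough illustration of how a raw gaze signal is typically segmented for interaction, the sketch below implements a simple dispersion-threshold (I-DT style) fixation detector; the thresholds and the normalized-coordinate convention are assumptions for the example, not values from the book.

```python
import numpy as np

def detect_fixations(gaze, timestamps, max_dispersion=0.05, min_duration=0.1):
    """Dispersion-threshold (I-DT style) fixation detection.
    gaze: (N, 2) array of normalized screen coordinates;
    timestamps: (N,) array in seconds.
    Returns a list of (onset_time, duration, centroid) tuples."""
    fixations = []
    n = len(gaze)
    start = 0
    while start < n:
        end = start + 1
        # Grow the window while horizontal + vertical spread stays small.
        while end < n:
            window = gaze[start:end + 1]
            if np.ptp(window[:, 0]) + np.ptp(window[:, 1]) > max_dispersion:
                break
            end += 1
        duration = timestamps[end - 1] - timestamps[start]
        if duration >= min_duration:
            centroid = gaze[start:end].mean(axis=0)
            fixations.append((timestamps[start], duration, centroid))
            start = end  # continue after the fixation
        else:
            start += 1   # too short: shift the window by one sample
    return fixations
```

For interaction, each fixation centroid would then be mapped to the interface element under it, which is the point where gaze becomes a pointing signal.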
6 Eye tracking for interaction: evaluation methods
pp. 117–144 (28)
Eye tracking as a hands-free input method can be a significant addition to the lives of people with a motor disability. With this motivation in mind, research in eye-controlled interaction has so far focused on several aspects of interpreting eye tracking as input for pointing, typing, and interface interaction methods. In this chapter, we review and elaborate on the evaluation methods used in gaze interaction research, so that readers can familiarize themselves with the procedures and metrics needed to assess a novel gaze interaction method or interface.
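The chapter's exact metric set is not listed in this abstract, but one quantity commonly reported when evaluating gaze pointing is Fitts' law throughput. A minimal sketch using the Shannon formulation of the index of difficulty:

```python
import math

def fitts_throughput(trials):
    """Mean throughput (bits/s) over pointing trials, using the
    Shannon formulation ID = log2(D/W + 1).
    Each trial is a (distance, target_width, movement_time_seconds) tuple,
    with distance and width in the same units (e.g. pixels)."""
    rates = []
    for distance, width, movement_time in trials:
        index_of_difficulty = math.log2(distance / width + 1)  # bits
        rates.append(index_of_difficulty / movement_time)      # bits/s
    return sum(rates) / len(rates)
```

For example, a 700 px movement to a 100 px target completed in 1.5 s has ID = log2(8) = 3 bits and thus a throughput of 2 bits/s.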
7 Machine-learning techniques for EEG data
pp. 145–168 (24)
In this chapter, we present an introductory overview of machine-learning techniques that can be used to recognize mental states from electroencephalogram (EEG) signals in brain-computer interfaces (BCIs). More particularly, we discuss how to extract relevant and robust information from noisy EEG signals. Due to the spatial properties of the EEG acquisition modality, learning robust spatial filters is a crucial step in the analysis of EEG signals: optimal spatial filters help us extract relevant and robust features, considerably aiding the subsequent recognition of mental states. A few classification algorithms are then presented for mapping this information to a mental state. Particular care is given to algorithms and techniques related to steady-state visual evoked potential (SSVEP) and sensorimotor rhythm (SMR) BCI systems. The overall objective of this chapter is to provide the reader with practical knowledge about how to analyze EEG signals.
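To make the idea of learned spatial filters concrete, here is a minimal sketch of common spatial patterns (CSP) with the standard log-variance features. This is an illustrative implementation for two-class SMR data, not the code from the chapter; the trial shapes and trace normalization are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Common spatial patterns for two-class EEG.
    trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)
    cov_a, cov_b = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem: cov_a w = lambda (cov_a + cov_b) w.
    # Extreme eigenvalues give filters that maximize variance for one
    # class while minimizing it for the other.
    eigvals, eigvecs = eigh(cov_a, cov_a + cov_b)
    order = np.argsort(eigvals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks].T  # (2 * n_pairs, n_channels)

def log_var_features(trials, filters):
    """Standard CSP features: log of the normalized variance per filter."""
    feats = []
    for x in trials:
        var = (filters @ x).var(axis=1)
        feats.append(np.log(var / var.sum()))
    return np.array(feats)
```

The resulting low-dimensional feature vectors are what a simple classifier (e.g. LDA) would then map to a mental state.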
8 BCIs using steady-state visual-evoked potentials
pp. 169–183 (15)
Brain-computer interfaces (BCIs) have been gaining momentum in making human-computer interaction more natural, especially for people with neuromuscular disabilities. Among the existing solutions, systems relying on electroencephalograms (EEGs) occupy the most prominent place due to their noninvasiveness. However, the process of translating EEG signals into computer commands is far from trivial, since it requires the optimization of many different parameters that need to be tuned jointly. In this chapter, we focus on the category of EEG-based BCIs that rely on steady-state visual evoked potentials (SSVEPs) and perform a comparative evaluation of the most promising algorithms in the literature. We also describe four novel approaches that improve the accuracy of the interaction under different operational contexts.
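The comparative evaluation itself is in the chapter; as background, the canonical correlation analysis (CCA) detector that many SSVEP methods build on can be sketched as follows. The sampling rate, harmonic count, and candidate frequencies here are made up for the example.

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the columns of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)  # orthonormal basis of each column space
    qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

def ssvep_detect(eeg, fs, candidate_freqs, n_harmonics=2):
    """Pick the stimulation frequency whose sine/cosine reference set
    correlates best with an EEG segment of shape (n_samples, n_channels)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in candidate_freqs:
        refs = []
        for h in range(1, n_harmonics + 1):
            refs.append(np.sin(2 * np.pi * h * f * t))
            refs.append(np.cos(2 * np.pi * h * f * t))
        scores.append(max_canonical_corr(eeg, np.column_stack(refs)))
    return candidate_freqs[int(np.argmax(scores))]
```

Each candidate flicker frequency gets a correlation score against the multichannel segment, and the highest-scoring frequency is taken as the attended stimulus.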
9 BCIs using motor imagery and sensorimotor rhythms
pp. 185–210 (26)
Motor imagery (MI) brain-computer interfaces (BCIs) are considered the most prominent paradigm of endogenous BCIs, as they comply with the requirements of asynchronous implementations. Because MI BCIs can be operated via the imagined movement of one limb (e.g., the left hand), after a training period the user can harness such an interface without the aid of external cues, which makes them ideal for self-paced implementations. MI BCIs have been employed in several cases as a means of both communication restoration and neurorehabilitation. Neuromuscular disease (NMD), although rarely studied within the context of MI BCIs, presents significant interest, mainly due to the disease's progressive nature and its impact on each patient's brain reorganization.
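The chapter's methods are not detailed in this abstract. As a rough illustration, the mu-rhythm suppression (event-related desynchronization, ERD) that MI BCIs typically detect can be quantified with a simple band-power estimate; the band limits and sampling rate below are assumptions for the example.

```python
import numpy as np
from scipy.signal import welch

def mu_band_power(trial, fs, band=(8.0, 13.0)):
    """Average mu-band power per channel for one trial of shape
    (n_channels, n_samples), estimated with Welch's method."""
    freqs, psd = welch(trial, fs=fs, nperseg=min(trial.shape[-1], int(fs)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, mask].mean(axis=1)

def erd_percent(rest_power, task_power):
    """Event-related desynchronization relative to rest, in percent.
    Negative values indicate mu suppression during imagined movement."""
    return 100.0 * (task_power - rest_power) / rest_power
```

A strongly negative ERD over the contralateral sensorimotor channels is the signature an MI classifier looks for.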
10 Graph signal processing analysis of NIRS signals for brain–computer interfaces
pp. 211–227 (17)
Graph signal processing (GSP) is an emerging field in signal processing that aims at analyzing high-dimensional signals using graphs. GSP analysis takes into account the signals' inner graphical structure and extends traditional signal processing techniques to the graph-network domain. In this chapter, we present a GSP analysis framework for the implementation of brain-computer interfaces (BCIs) based on functional near-infrared spectroscopy (NIRS) signals. First, a GSP approach for feature extraction is presented, based on the graph Fourier transform (GFT), which captures the spatial information of the NIRS signals. Applied to a publicly available dataset of NIRS recordings during a mental arithmetic task, this feature extraction method achieves classification rates of up to 92.52%, higher than those of two state-of-the-art feature extraction methodologies. Moreover, to better demonstrate the spatial distribution of the NIRS information and to quantify how smooth the NIRS signals are across the channel montage, we present a GSP analysis approach based on the Dirichlet energy of NIRS signals over a graph. The application of the proposed measure on the same NIRS dataset further reveals the spatial characteristics of the NIRS data and the efficiency of this GSP approach in capturing them. The Dirichlet energy-based approach also yields high classification rates, above 97%, when used to extract features from NIRS signals. In sum, the presented methods show the efficacy of GSP-based analysis of NIRS signals for BCI applications and pave the way for more robust and efficient implementations.
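The exact graph construction over the NIRS montage is described in the chapter; the two core operations it builds on - the graph Fourier transform and the Dirichlet energy - can be sketched generically for any adjacency matrix as follows.

```python
import numpy as np

def graph_fourier_transform(W, x):
    """GFT of a channel signal x on a graph with adjacency matrix W:
    projection onto eigenvectors of the combinatorial Laplacian L = D - W."""
    L = np.diag(W.sum(axis=1)) - W
    eigvals, eigvecs = np.linalg.eigh(L)  # ascending "graph frequencies"
    return eigvals, eigvecs.T @ x         # spectrum and GFT coefficients

def dirichlet_energy(W, x):
    """Quadratic form x^T L x: zero for constant signals, large for
    signals that vary sharply across strongly connected channels."""
    L = np.diag(W.sum(axis=1)) - W
    return float(x @ L @ x)
```

On a three-node path graph, for instance, a constant signal has zero energy while the signal (0, 1, 2) has energy 2, reflecting its variation along the two edges.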
Part III. Multimodal prototype interfaces that can be operated through eyes and mind
11 Error-aware BCIs
pp. 231–259 (29)
The ability to recognize and correct erroneous actions is an integral part of human nature. Numerous neuroscientific studies have investigated the ability of the human brain to recognize errors. The distinct neuronal responses produced by the human brain during the perception of an erroneous action are referred to as error-related potentials (ErrPs). Although research in brain-computer interfaces (BCIs) has achieved significant improvement in detecting users' intentions over the last years, in a real-world setting the interpretation of brain commands remains an error-prone procedure leading to inaccurate interactions. Even for multimodal interaction schemes, the attained performance is far from optimal. As a means to overcome these limitations - apart from developing more sophisticated machine-learning techniques or adding further modalities - scientists have also exploited the users' ability to perceive errors. During the rapid growth of BCI/human-machine interaction (HMI) technology over the last years, ErrPs have been widely used to enhance existing BCI applications, serving as a passive correction mechanism towards a more user-friendly environment. The principal idea is that a BCI system may incorporate, as feedback, the user's judgement about its function and use this feedback to correct its current output. In this chapter, we discuss the potential and applications of ErrPs for developing hybrid BCI systems that emphasize reliability and user experience by introducing the so-called error awareness.
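The chapter's classifiers are far more involved, but the control-loop idea of a passive correction mechanism can be reduced to a toy sketch. Here `errp_detector` is a placeholder standing in for a trained ErrP classifier applied to the post-feedback EEG epoch.

```python
class ErrorAwareSpeller:
    """Toy error-aware typing loop: every decoded command is applied,
    then undone if an ErrP classifier flags the feedback epoch."""

    def __init__(self, errp_detector):
        # errp_detector: callable returning True if an ErrP is detected
        # in the EEG recorded while the user watches the feedback.
        self.errp_detector = errp_detector
        self.text = []

    def apply_command(self, letter, feedback_eeg):
        self.text.append(letter)
        if self.errp_detector(feedback_eeg):
            self.text.pop()  # passive auto-correction of the last action
            return False
        return True
```

The key point is that the correction requires no explicit action from the user: the system observes the user's own perception of the error and reverts its output accordingly.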
12 Multimodal BCIs—the hands-free Tetris paradigm
pp. 261–276 (16)
In this chapter, we explore ways to integrate three natural input modalities, i.e. vision, brain commands, and stress levels, into a single visceral experience that allows simultaneous control of various interface options. In this direction, we present MM-Tetris, a multimodal reinvention of the popular Tetris game, modified to be controlled with the user's eye movements, mental commands, and bio-measurements. MM-Tetris is intended for use by motor-impaired people who are unable to operate computing devices through the regular controllers (i.e. mouse and keyboard). In the proposed version of the game, eye movements and mental commands work in a complementary fashion, facilitating two different controls: the horizontal movement of the tiles (i.e. tetriminos) through the coordinates of the gaze, and tile rotation through the detection of sensorimotor rhythm (SMR) signals. Additionally, bio-measurements provide the stress levels of the player, which in turn determine the speed of the tiles' drop. In this way, the three modalities smoothly collaborate to facilitate playing a game like Tetris. Eventually, the design of the game provides a natural, gamified interface for user training in generating more discriminative SMR signals for better detection of imagined movements.
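The game's actual control mapping is only outlined in this abstract; a hypothetical fusion step consistent with that description might look like the following, where the column grid, the stress scaling, and the speed formula are all invented for illustration.

```python
def fuse_tetris_controls(gaze_x, screen_width, n_columns,
                         smr_rotate, stress_level,
                         base_drop_interval=1.0):
    """Map the three modalities to game actions: gaze x-coordinate to a
    target column, an SMR detection to a rotation flag, and a stress
    level in [0, 1] to the drop interval (more stress -> faster drop)."""
    column = min(int(gaze_x / screen_width * n_columns), n_columns - 1)
    drop_interval = base_drop_interval * (1.0 - 0.5 * stress_level)
    return column, bool(smr_rotate), drop_interval
```

Each modality thus owns one independent control dimension, which is what lets the three signals operate simultaneously without competing for the same action.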
13 Conclusions
pp. 277–280 (4)
Interaction with computer applications is usually performed using conventional input devices such as a mouse or keyboard. However, people lacking fine motor skills are often unable to use these devices, which limits their ability to interact with computer applications and thus excludes them from the digital information spaces that help us stay connected with family, friends, and colleagues. The evolution of eye-tracking systems and brain-computer interfaces (BCIs) has given a new perspective on the control channels that can be used for interacting with computer applications. This book presented a study of end-user characteristics and their needs for such control channels, and showed how these needs can be fulfilled with eye tracking and BCI interaction using signal processing algorithms and interface adaptations. The contributors of the various chapters are researchers, engineers, clinical experts, and industry practitioners who collaborated in the context of the 3-year research and innovation action MAMEM - Multimedia Authoring and Management using your Eyes and Mind. Hence, the book covers the underlying challenges of eye and mind interaction and possible solutions, identifying future directions to encourage researchers around the world.
Back Matter
(1)