Decoding music perception and imagination using deep-learning techniques

Deep learning is a subfield of machine learning that has recently gained substantial popularity in domains such as computer vision, automatic speech recognition, natural language processing, and bioinformatics. Deep-learning techniques can learn complex feature representations from raw signals and thus also have the potential to improve signal processing in the context of brain-computer interfaces (BCIs). However, they typically require large amounts of training data, far more than can usually be collected with reasonable effort when working with brain activity recordings of any kind. To still leverage the power of deep-learning techniques with limited available data, special care must be taken when designing the BCI task, defining the structure of the deep model, and choosing the training method. This chapter presents example approaches for the specific scenario of music-based brain-computer interaction through electroencephalography (EEG), in the hope that they will prove valuable in other settings as well. We explain important decisions in the design of the BCI task and their impact on the models and training techniques that can be used. Furthermore, we present and compare various pretraining techniques that aim to improve the signal-to-noise ratio. Finally, we discuss approaches to interpreting the trained models.
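The pretraining idea sketched in the abstract (and covered by the chapter's sections on representation learning, e.g. the cross-trial encoder) can be illustrated with a toy example: encode one trial and reconstruct a *different* trial of the same stimulus, so that only stimulus-related, noise-invariant structure can reduce the loss. The sketch below is a hedged illustration only; the synthetic data, shapes, linear model, and learning rate are all assumptions for demonstration and not the chapter's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "EEG" trials: each stimulus has a hidden template, and every
# trial is that template plus heavy per-trial noise (shapes illustrative).
n_stimuli, n_trials, n_features, n_latent = 4, 20, 32, 4
templates = rng.normal(size=(n_stimuli, n_features))
trials = templates[:, None, :] + 2.0 * rng.normal(
    size=(n_stimuli, n_trials, n_features)
)

# Linear cross-trial encoder/decoder pair trained with plain SGD.
W_enc = rng.normal(scale=0.1, size=(n_features, n_latent))
W_dec = rng.normal(scale=0.1, size=(n_latent, n_features))
lr = 1e-3
epoch_losses = []

for epoch in range(200):
    total = 0.0
    for s in range(n_stimuli):
        # Pick two distinct trials of the same stimulus: input and target.
        i, j = rng.choice(n_trials, size=2, replace=False)
        x, target = trials[s, i], trials[s, j]
        z = x @ W_enc              # latent code of the input trial
        x_hat = z @ W_dec          # reconstruction of the *other* trial
        err = x_hat - target
        total += 0.5 * float(err @ err)
        # Gradients of 0.5 * ||x_hat - target||^2, computed before updating.
        grad_dec = np.outer(z, err)
        grad_enc = np.outer(x, err @ W_dec.T)
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    epoch_losses.append(total / n_stimuli)
```

Because the target trial's noise is independent of the input trial, the encoder cannot profit from memorizing noise; the loss can only fall by capturing the shared stimulus template, which is the signal-to-noise motivation behind cross-trial pretraining.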

Chapter Contents:

  • Abstract
  • 13.1 Introduction and motivation
  • 13.1.1 Evidence from research on auditory perception and imagination
  • 13.1.2 Existing auditory and music-based BCIs
  • 13.2 Deep learning for EEG analysis–the state of the art
  • 13.2.1 Challenges
  • 13.2.2 Deep learning applied to EEG analysis
  • 13.2.3 Custom solutions developed for EEG analysis
  • 13.2.4 The need for open science
  • 13.2.5 Summary
  • 13.3 Experimental design
  • 13.3.1 Stimulus selection
  • 13.3.2 Equipment and procedure
  • 13.3.3 Preprocessing
  • 13.4 Representation learning techniques for pre-training
  • 13.4.1 Basic auto-encoder
  • 13.4.2 Cross-trial encoder
  • 13.4.3 Hydra-net cross-trial encoder
  • 13.4.4 Similarity-constraint encoder
  • 13.4.5 Siamese networks and triplet networks
  • 13.5 Interpreting trained models
  • 13.6 Conclusions
  • References

Inspec keywords: medical signal processing; feature extraction; brain-computer interfaces; learning (artificial intelligence); computer vision; natural language processing; music; speech recognition; electroencephalography

Subjects: Knowledge engineering techniques; Computer vision and image processing techniques; Humanities computing; Biology and medical computing; Natural language processing; Speech recognition and synthesis; Optical, image and video signal processing; Speech processing techniques
