Publication | Open Access
Deep learning with convolutional neural networks for EEG decoding and visualization
Citations: 3.2K
References: 105
Year: 2017
Abstract

Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end‐to‐end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end‐to‐end EEG analysis, but a better understanding of how to design and train ConvNets for end‐to‐end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task‐related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG‐based brain mapping. Hum Brain Mapp 38:5391–5420, 2017. © 2017 Wiley Periodicals, Inc.
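Two of the ingredients the abstract credits for the performance boost are exponential linear units and the cropped training strategy. The sketch below is a minimal illustration of both ideas in NumPy, not the authors' implementation; the function names, the crop length, and the stride are assumptions chosen for the example.

```python
import numpy as np

def elu(x, alpha=1.0):
    # Exponential linear unit: identity for x > 0,
    # alpha * (exp(x) - 1) for x <= 0 (smooth, saturating negative part).
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def crop_trial(trial, crop_len, stride):
    # Cropped training: slide a fixed-length window across one EEG trial
    # (channels x time samples) to generate many overlapping training
    # examples from a single labeled trial.
    n_channels, n_samples = trial.shape
    starts = range(0, n_samples - crop_len + 1, stride)
    return np.stack([trial[:, s:s + crop_len] for s in starts])

# Example: one simulated trial with 22 channels and 1000 time samples;
# 500-sample crops shifted by 100 samples yield 6 overlapping crops.
trial = np.random.randn(22, 1000)
crops = crop_trial(trial, crop_len=500, stride=100)
print(crops.shape)  # (6, 22, 500)
```

Each crop inherits the label of its parent trial, so cropping multiplies the number of training examples the network sees, which is one of the reasons the paper reports it helps deep ConvNets reach FBCSP-level accuracy.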