Multi-view learning for benign epilepsy with centrotemporal spikes



Benign epilepsy with centrotemporal spikes (BECT) is one of the most common epilepsies affecting children. In recent years, a growing number of studies have shown that magnetic resonance imaging (MRI) and functional magnetic resonance imaging (fMRI) are promising techniques for distinguishing BECT patients from healthy controls. However, existing works suffer from two limitations. On the one hand, they have paid more attention to characterising the brain changes between BECT patients and healthy controls than to developing machine learning methods that can recognize BECT patients. On the other hand, most existing approaches extract hand-crafted features from MRI or fMRI, which cannot achieve the desired performance owing to the limited representative capacity of the features used. To address these issues, we propose a novel classification method that fuses the predictions of three different views: a hand-crafted features view, an MRI view, and an fMRI view. The final result is obtained by passing those predictions through a fusion neural network. The basic idea of our method is that multiple views can provide complementary information and thus boost classification performance. Extensive experiments show that the proposed multi-view method is remarkably superior to single-view methods.
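The late-fusion idea described in the abstract can be sketched as follows. This is an illustrative toy example, not the authors' implementation: the three per-view predictors are replaced by hypothetical probability outputs, and the fusion network is a minimal one-hidden-layer MLP with random (untrained) weights, whereas in practice the fusion weights would be learned from labelled data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical per-view predictions for a batch of 4 subjects:
# each view outputs a probability that the subject is a BECT patient.
p_handcrafted = rng.random(4)  # hand-crafted features view
p_mri = rng.random(4)          # MRI view
p_fmri = rng.random(4)         # fMRI view

# Stack the three view-level predictions as the fusion network's input.
x = np.stack([p_handcrafted, p_mri, p_fmri], axis=1)  # shape (4, 3)

# A tiny one-hidden-layer fusion MLP; the weights here are random
# placeholders, whereas the paper's fusion network would be trained.
W1 = rng.normal(size=(3, 8))
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))
b2 = np.zeros(1)

h = np.maximum(0.0, x @ W1 + b1)      # ReLU hidden layer
fused = sigmoid(h @ W2 + b2).ravel()  # fused per-subject BECT probability

print(fused.shape)  # (4,)
```

The fusion step is what lets complementary views correct one another: a subject scored ambiguously by the hand-crafted view can still be classified confidently if the MRI and fMRI views agree.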
