
Proceedings of the 34th Chinese Control Conference

July 28-30, 2015, Hangzhou, China

Deep Learning EEG Response Representation for Brain Computer Interface

LIU Jingwei1,2, CHENG Yin1,2, ZHANG Weidong1,2
1. Department of Automation, Shanghai Jiao Tong University, Shanghai 200240, PRC
2. Key Laboratory of System Control and Information Processing, Ministry of Education, Shanghai 200240, PRC
E-mail: wdzhang@sjtu.edu.cn

Abstract: In this paper, multi-scale deep convolutional neural networks are introduced to deal with the representation of imagined motor Electroencephalography (EEG) signals. We propose to learn a set of high-level feature representations through a deep learning algorithm, referred to as Deep Motor Features (DeepMF), for brain computer interfaces (BCI) with imagined motor tasks. As the extracted DeepMF are dissimilar for different tasks and alike for the same task, it is convenient to tell the diverse EEG signals of imagined motor tasks apart. Our approach achieves 100% accuracy for 4-class imagined motor EEG signal classification on the Project BCI - EEG motor activity dataset. Moreover, thanks to the highly abstract features DeepMF learned, only 4.125 seconds of training trials are needed, compared with the 8.75 seconds of trials the conventional BLDA algorithm demands to achieve the same accuracy; accordingly, the BCI response time and the required training trials are almost halved. Experiments are provided to illustrate the effectiveness of the proposed design approach.
Key Words: deep learning, electroencephalography (EEG), brain computer interface (BCI), convolutional neural networks (CNNs)

1 Introduction

The Brain Computer Interface (BCI), also referred to as a brain machine interface (BMI), is a hardware and software system. This system mainly aims at making it possible for disabled users to interact with other people, to control artificial devices, or to communicate with their surroundings without the participation of peripheral nerves and muscles[2]. In recent years, some low-cost BCI devices accompanied by an Application Programming Interface (API) have become available[3]; with this kind of interface, people with severe motor disabilities can improve their quality of life and greatly reduce the workload of intensive care. A BCI is an artificial intelligence system that uses control signals collected from electroencephalographic (EEG) activity[25]. An EEG signal is a multi-channel random time-series with tremendous non-stationarity and nonlinearity[5], and the law of EEG signals is extremely complex[4]; owing to the self-regulating mechanism of the brain, shallow features cannot serve as an effective basis for precise analysis. Accordingly, the key point in utilizing EEG signals more effectively is to learn a set of higher-level feature representations of EEG.

Disparate brain activities result in diverse EEG signals. Many EEG feature extraction methods are based on dimension reduction approaches, e.g. the PCA and ICA algorithms. Lin et al.[12] projected the input data onto the k-dimensional eigenspace of k eigenvectors in order to reduce the space dimensionality. Boye et al.[13] identified the artifactual components in EEG signals and reconstructed the signals without those components through PCA. However, PCA does not always guarantee a good classification, since the best discriminating components may not lie in the largest principal components[14]. Chiappa et al.[15] modified ICA to classify the EEG signals and remove ocular artifacts in BCI systems. Nevertheless, the artifact suppression may also corrupt the power spectrum of the underlying neural activity[16].

Some previous studies have further reduced the similarities and enlarged the differences between classes through the Common Spatial Pattern (CSP). Ramoser et al.[17] projected multichannel EEG signals into a subspace, aiming to make the subsequent classification more effective, and this was extended to multiclass BCIs. Since the discriminative information provided differs between electrodes, the performance of CSP is affected by the spatial resolution[25].

A few artificial intelligence algorithms have been applied to the feature extraction of EEG signals, e.g. the Genetic Algorithm (GA). Dal et al.[19] extracted an optimal set of relevant features for BCI through an automatic GA method. However, such a search may converge towards a local suboptimum before exploring the entire space.

Time-frequency based approaches have also been introduced to estimate EEG signals (e.g. AutoRegressive (AR) components). AR models the EEG signals as the output of a linear time filter driven by a random signal[25]. Jiang et al.[1] applied a multivariate adaptive AR (MVAAR) model for the classification of motor imagery. However, AR performs poorly when the signal is not stationary[18].

*This paper is partly supported by the National Science Foundation of China (61025016, 61473183, 61034008, 61221003), Program of Shanghai Subject Chief Scientist (14XD1402400), and SJTU M&E Joint Research Foundation (YG2013MS04).

Fig. 1: DeepMF with multi-scale CNNs. (Architecture diagram: four convolutional layers interleaved with three pooling layers, a 100-unit DeepMF layer, and a softmax output layer.)

Deep Learning is a class of overwhelmingly superior machine learning algorithms whose motivation is to simulate

the deep structure and mechanism of the human brain, and it is considered to be an exceedingly significant breakthrough technology of recent years[6]. With greater depth, deep neural networks obtain more abstract features of the data, and they are producing remarkable advances in various areas. In face verification, the human-level performance (97.53%) on the Wild benchmark has been surpassed by deep neural networks (99.15%)[7]. In speech recognition, deep learning is also becoming a mainstream technology at industrial scale[8]. Owing to the difficulty of the intensely high feature dimensionality spanning EEG channels, frequency, and time[9], the application of deep learning in EEG-based BCI is still rare[10].

In this paper, an effective approach is proposed to learn high-level features of EEG signals with deep convolutional neural networks (Fig.1). It can be seen that deep learning provides more promising tools to deal with EEG feature extraction. Thanks to the deep structure and large capacity of deep neural networks, higher-level features of motor imagery EEG signals can be learned through hierarchical nonlinear mappings. The learned features, referred to as Deep Motor Features (DeepMF), aim at the classification of EEG data based on motor imagery. An illustration of our EEG feature extraction process is shown in Fig.2. From bottom to top, the ConvNets take massive EEG time-series as input; the low-level features are extracted in the bottom layers, and these features serve as input for the next ConvNet. The higher-level features are gradually formed through the ConvNets from bottom to top, and highly compact 100-dimensional DeepMF vectors are acquired in the top layer.

Fig. 2: Feature extracting process. (19-channel EEG signals feed the multi-scale ConvNets, which output the DeepMF.)

Fig. 3: Visualization of DeepMF for random trials. (a) Task 1. (b) Task 2. (c) Task 3. (d) Task 4.

2 Materials and preprocessing

The EEG data used in this study is from the Project BCI - EEG motor activity data set, Brain Computer Interface research at NUST Pakistan[11]. The subject is a right-handed male without known medical conditions. The EEG consists of actual and imagined random movements of the left and right hand, recorded with eyes closed to reduce the large-amplitude outliers in the EEG[5]. The recording was done at 500 Hz using a Neurofax EEG System, which uses a daisy-chain montage. The data was exported with a common reference using Eemagine EEG; AC lines in this country work at 50 Hz. The EEG electrodes were placed in accordance with the International 10-20 system. The subject was asked to perform the following movements, and the corresponding EEG signals of the subject were recorded.
(i) Imagined left hand backward movement (Fig.3(a)).
(ii) Imagined left hand forward movement (Fig.3(b)).
(iii) Imagined right hand backward movement (Fig.3(c)).
(iv) Imagined right hand forward movement (Fig.3(d)).
Before the deep convolutional neural networks are introduced to the EEG signals, several preprocessing operations were applied in the order stated below.
(i) Electrode selection. The positions of the 19 electrodes are FP1, FP2, F3, F4, C3, C4, P3, P4, O1, O2, F7, F8, T3, T4, T5, T6, Fz, Cz, and Pz. The reference electrode Cz was placed on the top of the central skull.
(ii) Referencing. The average signal from the Cz electrode was used for referencing.
(iii) Single trial extraction. Single trials of 50 sample points were extracted from the original data.
(iv) Scaling. The samples from each electrode were normalized as z-scores.
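The trial extraction and z-score scaling steps above can be sketched as follows. This is an illustrative NumPy sketch, not the authors' code; the synthetic `raw` recording and the helper names `extract_trials` and `zscore_per_electrode` are assumptions for demonstration.

```python
import numpy as np

def extract_trials(raw, trial_len=50):
    """Cut a (channels, samples) recording into non-overlapping trials of trial_len points."""
    n_trials = raw.shape[1] // trial_len
    clipped = raw[:, :n_trials * trial_len]
    return clipped.reshape(raw.shape[0], n_trials, trial_len).transpose(1, 0, 2)

def zscore_per_electrode(trial):
    """Normalize each electrode's 50 samples within a trial to zero mean, unit variance."""
    mean = trial.mean(axis=1, keepdims=True)
    std = trial.std(axis=1, keepdims=True)
    return (trial - mean) / (std + 1e-8)

rng = np.random.default_rng(0)
raw = rng.normal(size=(19, 500))        # placeholder: 19 electrodes, 1 s at 500 Hz
trials = extract_trials(raw)            # (10, 19, 50): ten 50-sample trials
normalized = zscore_per_electrode(trials[0])
```

Each trial then matches the 19-channel, 50-sample input shape used by the networks below.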
To characterize EEG signals from different phases, DeepMF vectors are extracted from various regions of the EEG time-series. Considering the multi-channel character of ConvNets, the features can be learned from the combination of all the EEG channels. We constrain the DeepMF to be markedly smaller than the size of the input signals, which is crucial to obtain highly abstract and compact features. Since the learned DeepMF vectors are diverse among different motor imagery activities but similar to each other within the same activity, they make it much easier to classify the motor imagery EEG signals. The DeepMF of new users can be easily generated from the very same deep convolutional neural networks. Consequently, the BCI system based on DeepMF can be conveniently shared by the various users in need.

Our approach achieves 100% accuracy for 4-class motor imagery EEG signal classification on the Project BCI - EEG motor activity dataset[11] within 4.125 seconds of training trials. Additionally, as the number of electrodes and the depth of the networks increase, the classification performance steadily improves; the multi-scale structure is also crucial to learning more effective features.

3 Learning DeepMF for EEG representation

3.1 Analysis of EEG characteristics

EEG signals are easily recorded through electrodes placed on the scalp to measure the electric brain activity caused by the flow of electric currents during synaptic excitations of the dendrites in the neurons[20]. However, the signals are recorded across the scalp, skull, and many other organ layers; together with their inherent features, the EEG signals are intensely complex, and some critical properties must be considered.
(i) Noise and outliers: EEG signals have a poor signal-to-noise ratio, so the extracted features are inevitably noisy and contain outliers.
(ii) High dimensionality: The features are generally extracted from multiple channels and from several time

segments before being concatenated into feature vectors[21].
(iii) Non-stationarity: EEG signals may rapidly vary over time; consequently, the extracted features are non-stationary.
(iv) Randomness: Many factors in the environment may influence the EEG signals; to avoid factors that occur by chance, the real features of the EEG need to be extracted despite various adverse effects.
(v) Nonlinearity: Owing to the self-regulating mechanism of the brain, linear methods cannot serve as an effective approach for precise analysis, and a higher-level feature extraction approach is required.

3.2 Configuration of Deep ConvNets

Our deep convolutional neural networks contain four convolutional layers and three max-pooling layers to extract features layer by layer, and two fully-connected layers are applied to generate the DeepMF. A softmax output layer is used to identify the different imagined motor tasks. For each single trial, the 50 sample points of the 19 channels are resized to a 19 × 1 × 50 patch (Fig.4). Fig. 1 shows the detailed architecture of the deep convolutional neural networks, which takes one single trial of 19 × 1 × 50 input and predicts over the four imagined motor tasks; the details of the parameters are listed in Tab. 1.

Fig. 4: The input EEG data resizing. (A 19-channel, 50-sample trial is reshaped into a 19 × 1 × 50 patch.)

The convolutional operation is expressed as

y^j = \mathrm{Activation}^{(s)}\Big( b^j + \sum_i k^{ij} * x^i \Big),
\mathrm{Activation}^{(s)}(x) =
\begin{cases}
\mathrm{ReLU}(x) = \max(0, x), & s = 0 \\
\tanh(x) = \dfrac{\exp(x) - \exp(-x)}{\exp(x) + \exp(-x)}, & s = 1
\end{cases}    (1)

where x^i and y^j are the i-th input map and the j-th output map, k^{ij} is the convolution kernel between the i-th input map and the j-th output map, and * denotes the convolutional computation. We alternate the activation function between the hyperbolic tangent function and the rectified linear unit (ReLU), which has been shown to be closer to biological behaviour and to have better fitting abilities[22]. When s = 1, we initialized the biases to 0 and the weights W_{ij} at each layer with the following commonly used heuristic[24]:

W_{ij} \sim U\left[ -\frac{1}{\sqrt{n}}, \frac{1}{\sqrt{n}} \right]    (2)

where U[-a, a] is the uniform distribution on the interval (-a, a) and n is the size of the previous layer.

The max-pooling operation is expressed as

y^i_{j,k} = \max_{0 \le m, n < s} x^i_{js+m,\, ks+n}    (3)

where s stands for the pooling size; accordingly, the i-th output map y^i pools over an s × s non-overlapping area of the i-th input map x^i.

The DeepMF layer is fully connected to both the 3rd pooling layer and the 4th convolutional layer. The ConvNets are able to learn multi-scale features through this double fully-connected structure[23]. This is crucial for learning more effective features, since the design provides different scales of receptive fields to the last softmax layer for identification. We will show the performance gain of using such a layer-skipping structure in the Experiments section.
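Equations (1)-(3) can be sketched in plain NumPy as follows. This is an illustrative re-implementation under stated assumptions (valid 1-D cross-correlation along the sample axis, non-overlapping pooling, and the first-layer shapes of Tab. 1), not the authors' Theano code; the helper names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_weights(shape, n_prev):
    # Heuristic of Eq. (2): uniform on (-1/sqrt(n), 1/sqrt(n)), n = size of previous layer
    bound = 1.0 / np.sqrt(n_prev)
    return rng.uniform(-bound, bound, size=shape)

def conv_layer(x, kernels, biases, activation="relu"):
    # Eq. (1): y^j = Activation(b^j + sum_i k^{ij} * x^i), sliding along the sample axis
    _, length = x.shape
    n_out, _, ksize = kernels.shape
    out_len = length - ksize + 1
    y = np.zeros((n_out, out_len))
    for j in range(n_out):
        for t in range(out_len):
            y[j, t] = biases[j] + np.sum(kernels[j] * x[:, t:t + ksize])
    return np.maximum(0.0, y) if activation == "relu" else np.tanh(y)

def max_pool(x, s):
    # Eq. (3): non-overlapping max pooling of size s along the sample axis
    n_maps, length = x.shape
    return x[:, :length // s * s].reshape(n_maps, length // s, s).max(axis=2)

x = rng.normal(size=(19, 50))                  # one trial: 19 channels, 50 samples
k = init_weights((20, 19, 3), n_prev=19 * 3)   # first conv layer of Tab. 1
y = conv_layer(x, k, np.zeros(20))             # -> (20, 48), matching Tab. 1
p = max_pool(y, 2)                             # -> (20, 24)
```

The output shapes (20, 48) after convolution and (20, 24) after pooling reproduce the first two rows of Tab. 1 for a single trial.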
The DeepMF layer is expressed as

\mathrm{DeepMF} = \mathrm{Activation}(A_{\mathrm{DeepMF}} + B_{\mathrm{DeepMF}}),
A_{\mathrm{DeepMF}} = W_{\mathrm{DeepMF}} \begin{bmatrix} \mathrm{flatten}(X_{\mathrm{pool}}) \\ \mathrm{flatten}(X_{\mathrm{conv}}) \end{bmatrix}    (4)

where X_pool and X_conv are the outputs of the 3rd pooling layer and the 4th convolutional layer. All the data are in the form of 4D tensors, whose dimensions stand for (batchsize, channelsize, patchsize, patchsize); the flatten operation resizes a 4D tensor into a 2D matrix of shape (batchsize, channelsize × patchsize × patchsize). W_DeepMF and B_DeepMF are the weight matrix and bias item of the DeepMF layer.

The feature numbers gradually reduce along the data flow until the DeepMF layer; in this layer, the highly abstract features that represent the imagined motor activities are formed. With these high-level features, the imagined motor tasks can be identified more conveniently; furthermore, the BCI system benefits from these high-level features both in speed and in a tremendous decline in memory consumption.

The last layer of the deep convolutional neural networks is a 4-way softmax predicting the probability distribution over the corresponding 4 imagined motor tasks. The output of the last layer is expressed as

y_j = \frac{\exp\left( \sum_{i=1}^{len} x_i w_{i,j} + b_j \right)}{\sum_{j'=1}^{n} \exp\left( \sum_{i=1}^{len} x_i w_{i,j'} + b_{j'} \right)}    (5)

where x_i is one element of the DeepMF processed by neuron j, the variable len is the length of the DeepMF, b_j is the bias item, and y_j is the output probability for task j.

Table 1: Configuration of multi-scale deep CNNs
Layer | Layer type    | Kernel shape   | Output shape
0     | Input         | -              | [5, 19, 1, 50]
1     | Convolutional | [20, 19, 1, 3] | [5, 20, 1, 48]
2     | Pooling       | [1, 2]         | [5, 20, 1, 24]
3     | Convolutional | [40, 20, 1, 3] | [5, 40, 1, 22]
4     | Pooling       | [1, 2]         | [5, 40, 1, 11]
5     | Convolutional | [60, 40, 1, 3] | [5, 60, 1, 9]
6     | Pooling       | [1, 3]         | [5, 60, 1, 3]
7     | Convolutional | [80, 60, 1, 3] | [5, 80, 1, 1]
8     | DeepMF        | -              | [100]
9     | Softmax       | -              | [4]
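Equations (4) and (5) amount to flattening and concatenating two activation tensors, applying one fully-connected ReLU layer, and feeding a softmax. The NumPy sketch below is illustrative only: it uses the single-trial shapes of Tab. 1 with randomly drawn placeholder activations and hypothetical names, not the authors' Theano implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    # Eq. (5): probability distribution over the four imagined motor tasks
    e = np.exp(z - z.max())
    return e / e.sum()

def deepmf_layer(x_pool, x_conv, W, b):
    # Eq. (4): the DeepMF layer is fully connected to BOTH the 3rd pooling
    # layer and the 4th convolutional layer (flattened and concatenated),
    # giving the softmax classifier two scales of receptive field.
    multi_scale = np.concatenate([x_pool.ravel(), x_conv.ravel()])
    return np.maximum(0.0, W @ multi_scale + b)     # ReLU activation

# Shapes from Tab. 1 for one trial (batch dimension dropped), random placeholders:
x_pool = rng.normal(size=(60, 1, 3))                # output of pooling layer 3
x_conv = rng.normal(size=(80, 1, 1))                # output of conv layer 4
n_in = 60 * 3 + 80                                  # 260 multi-scale inputs
W = rng.uniform(-1 / np.sqrt(n_in), 1 / np.sqrt(n_in), size=(100, n_in))
feature = deepmf_layer(x_pool, x_conv, W, np.zeros(100))   # 100-dim DeepMF
W_soft = rng.normal(size=(4, 100)) * 0.01
probs = softmax(W_soft @ feature)                   # 4-way task probabilities
```

Dropping either input of the concatenation degenerates this into the single-scale structure compared in the Experiments section.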

Fig. 5: DeepMF for random trials in two situations. (a) Two dissimilar random trials for task 1. (b) Random trials for task 3 and task 4.

Fig. 6: The first two PCA dimensions of DeepMF. (Scatter plot of the four task clusters, Dimension 1 vs. Dimension 2, with tasks shown in different colors.)
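A 2-D PCA view like Fig. 6 can be produced for any set of feature vectors with a small SVD-based projection. The sketch below is illustrative: the random per-task offsets stand in for learned DeepMF vectors, which are not available in this excerpt.

```python
import numpy as np

def pca_2d(features):
    # Project feature vectors onto their first two principal components,
    # as used in Fig. 6 to visualize DeepMF cluster separation.
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

rng = np.random.default_rng(0)
# Placeholder "DeepMF": 4 tasks x 5 trials of 100-dim vectors with per-task offsets
offsets = rng.normal(scale=5.0, size=(4, 100))
features = np.vstack([offsets[t] + rng.normal(size=(5, 100)) for t in range(4)])
coords = pca_2d(features)            # (20, 2) points, one scatter point per trial
```

Well-separated clusters in such a plot indicate that the features discriminate the four tasks, which is the property Fig. 6 illustrates.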


The loss can be computed for the stochastic gradient descent (SGD) algorithm as

\mathrm{cost}_i = -\,\mathrm{mean}\big( \log(\mathrm{Prediction})_{\mathrm{label}_i} \big),
\mathrm{Prediction} = \mathrm{softmax}(x_i w_{\mathrm{soft}} + b_{\mathrm{soft}})    (6)

where w_soft and b_soft are the weight and bias item of the softmax layer, label_i stands for the imagined motor task label of the i-th trial, and x_i is the DeepMF to be identified.

Fig. 7: The convergence curve of multi-scale. (Test accuracy vs. the number of input minibatches for ten runs, err.1 to err.10.)
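The cost of Eq. (6) is the mean negative log-likelihood of the correct labels under the softmax predictions. A minimal NumPy sketch, assuming a minibatch of 5 DeepMF vectors as in Tab. 1 (the gradient-update step of SGD itself is not shown):

```python
import numpy as np

def nll_cost(deepmf_batch, labels, W_soft, b_soft):
    # Eq. (6): mean negative log-likelihood of the correct task labels
    # under the softmax predictions, minimized by SGD.
    logits = deepmf_batch @ W_soft + b_soft               # (batch, 4)
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

rng = np.random.default_rng(0)
batch = rng.normal(size=(5, 100))      # minibatch of 5 DeepMF vectors
labels = np.array([0, 1, 2, 3, 0])
W_soft = np.zeros((100, 4))            # zero weights -> uniform predictions
cost = nll_cost(batch, labels, W_soft, np.zeros(4))   # = log(4), chance level
```

Before training, a 4-class softmax with zero weights predicts uniformly, so the cost starts at log 4 ≈ 1.386 and decreases as SGD fits the task labels.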
4 Experiments
Our deep convolutional neural networks were developed in Python with the Theano deep learning framework, which is tightly integrated with NumPy; accordingly, it works efficiently for exceedingly large-scale, computationally intensive scientific tasks. The EEG data used in this series of experiments is from the Project BCI - EEG motor activity data set, Brain Computer Interface research at NUST Pakistan[11]. 140 dissimilar EEG trials are randomly extracted from each imagined motor task, so the trials add up to 560; after the shuffle process, the first 200 trials act as the training dataset for DeepMF learning, the first 100 trials act as the validation dataset for debugging, and the last 360 trials act as the test dataset for the accuracy test. We bundled 5 trials as a minibatch for input, and one epoch stands for inputting the whole training dataset for one round. 50 sample points are contained in each trial, and each sample point is composed of 19 channels of EEG signals.

Fig.3 shows the visualization of DeepMF for each imagined motor task on random trials. Each image corresponds to an imagined motor task in a random trial from the first minibatch, and the bottom bar of each image stands for its DeepMF. The extracted features are different between diverse tasks, but they are much alike within the same task even for different random trials. Fig.5 shows the DeepMF for random trials in two situations: in the first, both dissimilar trials are for task 1, and the DeepMF extracted from them are very similar; in the second, in contrast, the DeepMF extracted from random trials for task 3 and task 4 are quite different. The feature values are non-negative since they are processed by ReLUs; the brighter squares indicate higher activation values.

Fig.6 shows the first two PCA dimensions of the DeepMF learned from random trials of diverse tasks in the first minibatch. The EEG signals of the four imagined motor tasks are indicated by different colors, and the corresponding clusters are very wide apart, making them convenient to distinguish. Fig. 7 presents the accuracy on the test dataset, evaluated during learning. It can be inferred that convergence is rapidly achieved within 50 input minibatches through our deep convolutional neural networks with multi-scale structure.

We compared the performance of directly connecting the output of the third pooling layer to the hidden layer. Fig.11 shows the structure of the CNNs with single-scale, and Tab. 2 shows the corresponding details of the parameters. In this case, the single stage is not able to provide diverse scales of receptive fields to the softmax classifier behind it. This configuration reduces the ability of the networks to learn more effective features through the information that skips a layer. Fig.8 shows the convergence curve of accuracy on the test dataset through the single-scale networks; the jittering of the curve is greatly strengthened, and the bouts of turbulence in accuracy appear much more frequently than with the multi-scale structure. Furthermore, it is not easy for the single-scale structure to be thoroughly stabilized around convergence within 50 input minibatches.

In the configuration of convolutional neural networks, depth also plays an important role in better accuracy performance. For contrast, we also built a shallow CNN to examine the effect of depth on the convergence. Fig. 9 shows the structure of the shallow CNN, and Tab. 3 shows the details of its configuration; only two convolutional and two pooling layers are contained in the network. From Fig. 10, it can be seen that, thanks to the reduction of parameters, the shallowing of the CNN causes rapid convergence at the start, but the features extracted by the shallow structure may lead to overfitting, and the corresponding shallow features are not effective enough to represent the essential character of imagined mo-

tor EEG signals.

Fig.12 shows the comparison of performance of the above convolutional neural networks and the conventional BLDA algorithm[2]. It can be seen that the overall demand of the CNNs for training trials is intensely lower than that of BLDA, although BLDA achieved a classification accuracy of 100% at the end. For the multi-scale deep neural networks, an accuracy of 100% was achieved after 4.125 seconds of training trials, while the conventional BLDA takes twice the trials, 8.75 seconds, to achieve convergence.

Fig. 8: The convergence curve of single-scale. (Test accuracy vs. the number of input minibatches for ten runs, err.1 to err.10.)

Fig. 9: Shallow convolutional networks structure. (Two convolutional layers with two pooling layers, a 100-unit hidden layer, and a softmax output layer.)

Fig. 11: CNNs with single-scale structure. (Four convolutional layers with three pooling layers; only the last layer feeds the 100-unit hidden layer.)

Fig. 12: Method comparison of accuracy vs. time. (Accuracy of Multi.scale, Single.scale, Shallow.net, and BLDA against training-trial time in seconds.)

Table 2: Configuration of single-scale deep CNNs
Layer | Layer type    | Kernel shape   | Output shape
0     | Input         | -              | [5, 19, 1, 50]
1     | Convolutional | [20, 19, 1, 3] | [5, 20, 1, 48]
2     | Pooling       | [1, 2]         | [5, 20, 1, 24]
3     | Convolutional | [40, 20, 1, 3] | [5, 40, 1, 22]
4     | Pooling       | [1, 2]         | [5, 40, 1, 11]
5     | Convolutional | [60, 40, 1, 3] | [5, 60, 1, 9]
6     | Pooling       | [1, 3]         | [5, 60, 1, 3]
7     | Hidden        | -              | [100]
8     | Softmax       | -              | [4]
Table 3: Configuration of shallow CNNs
Layer | Layer type    | Kernel shape   | Output shape
0     | Input         | -              | [100, 19, 45, 45]
1     | Convolutional | [20, 19, 4, 4] | [100, 20, 42, 42]
2     | Pooling       | [2, 2]         | [100, 20, 21, 21]
3     | Convolutional | [40, 20, 2, 2] | [100, 40, 20, 20]
4     | Pooling       | [2, 2]         | [100, 40, 10, 10]
5     | Hidden        | -              | [100]
6     | Softmax       | -              | [4]

Fig. 10: The convergence curve of the shallow net. (Test accuracy vs. the number of input minibatches for ten runs, err.1 to err.10.)

5 Conclusion

In this paper, effective high-level features of EEG representation for brain computer interfaces based on multi-scale deep convolutional neural networks are proposed. The features extracted from EEG signals boost the accuracy of imagined motor task identification to 100%, while requiring only 4.125 seconds of training trials, compared with the 8.75 seconds the conventional BLDA demands to achieve the same accuracy. We also evaluated the performance of the depth and scale configuration of the learning architecture; our results showed better convergence performance for the multi-scale deep convolutional neural networks. Since the BCI response time is correlated with the duration of the training trials, the proposed DeepMF can significantly reduce the time delay for BCI applications, and accordingly a better user experience will be achieved for BCI users.

References

[1] W. Jiang, G.Z. Xu, L. Wang and H.Y. Zhang, Feature extraction of brain-computer interface based on improved multivariate adaptive autoregressive models, in Biomedical Engineering and Informatics (BMEI), 2010 3rd International Conference on, 2010: 895-898.
[2] U. Hoffmann, J-M. Vesin, T. Ebrahimi and K. Diserens, An efficient P300-based brain-computer interface for disabled subjects, in Journal of Neuroscience Methods, 167(1): 115-125, 2008.
[3] M. Duvinage, T. Castermans and T. Dutoit, A P300-based quantitative comparison between the Emotiv Epoc headset and a medical EEG device, in Biomedical Engineering, 765, 2012.
[4] T. Mutanen, H. Maki and R. Ilmoniemi, The effect of stimulus parameters on TMS-EEG muscle artifacts, in Brain Stimulation, 6(3): 371-376, 2013.
[5] J. Hu, C.S. Wang, M. Wu, Y.X. Du, Y. He and J.H. She, Removal of EOG and EMG artifacts from EEG using combination of functional link neural network and adaptive neural fuzzy inference system, in Neurocomputing, 151(1): 278-287, 2015.
[6] Y. Bengio, Scaling up deep learning, in Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD'14, 2014: 1966.
[7] Y. Sun, Y.H. Chen, X.G. Wang and X.O. Tang, Deep learning face representation by joint identification-verification, in Advances in Neural Information Processing Systems 27, accepted.
[8] L. Deng, J.Y. Li, and J.T. Huang et al., Recent advances in deep learning for speech research at Microsoft, in Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, May 2013: 8604-8608.
[9] K.J. Friston, Characterizing functional asymmetries with brain mapping, in The Asymmetrical Brain, 161-186, 2003.
[10] A. Xiu, D.P. Kuang and X.J. Guo, A deep learning method for classification of EEG data based on motor imagery, in Intelligent Computing in Bioinformatics, accepted.
[11] H. Piroska and S. Janos, Specific movement detection in EEG signal using time-frequency analysis, in Complexity and Intelligence of the Artificial and Natural Complex Systems, Medical Applications of the Complex Systems, Biomedical Computing, 2008. CANS '08. First International Conference on, Nov. 2008: 209-215.
[12] C.J. Lin and M.H. Hsieh, Classification of mental task from EEG data using neural networks based on particle swarm optimization, in Neurocomputing, 72(4): 1121-1130, 2009.
[13] A.T. Boye, U.Q. Ulrik and M. Billinger, Identification of movement-related cortical potentials with optimized spatial filtering and principal component analysis, in Biomedical Signal Processing and Control, 3(4): 300-304, 2008.
[14] D.J. McFarland, C.W. Anderson and K. Muller, BCI meeting 2005 - workshop on BCI signal processing: feature extraction and translation, in IEEE Transactions on Neural Systems and Rehabilitation Engineering, 14(2): 135, 2006.
[15] S. Chiappa and D. Barber, EEG classification using generative independent component analysis, in Neurocomputing, 69(7): 769-777, 2006.
[16] G.L. Wallstrom, R.E. Kass and A. Miller, Automatic correction of ocular artifacts in the EEG: a comparison of regression-based and component-based methods, in International Journal of Psychophysiology, 53(2): 105-119, 2004.
[17] H. Ramoser, J. Muller-Gerking and G. Pfurtscheller, Optimal spatial filtering of single trial EEG during imagined hand movement, in Rehabilitation Engineering, IEEE Transactions on, 8(4): 441-446, 2000.
[18] G. Florian and G. Pfurtscheller, Dynamic spectral analysis of event-related EEG data, in Electroencephalography and Clinical Neurophysiology, 95(5): 393-396, 1995.
[19] B. Dal Seno, M. Matteucci and L. Mainardi, A genetic algorithm for automatic feature extraction in P300 detection, in Neural Networks, 2008. IJCNN 2008 (IEEE World Congress on Computational Intelligence). IEEE International Joint Conference on, 2008: 3145-3152.
[20] S. Baillet, J.C. Mosher and R.M. Leahy, Electromagnetic brain mapping, in Signal Processing Magazine, IEEE, 18(6): 14-30, 2001.
[21] A. Rakotomamonjy, V. Guigue and G. Mallet et al., Ensemble of SVMs for improving brain computer interface P300 speller performances, in Artificial Neural Networks: Biological Inspirations - ICANN 2005, 3696: 45-50, 2005.
[22] V. Nair and G.E. Hinton, Rectified linear units improve restricted Boltzmann machines, in Proceedings of the 27th International Conference on Machine Learning (ICML-10), 2010: 807-814.
[23] P. Sermanet and Y. LeCun, Traffic sign recognition with multi-scale convolutional networks, in Neural Networks (IJCNN), The 2011 International Joint Conference on, 2011: 2809-2813.
[24] X. Glorot and Y. Bengio, Understanding the difficulty of training deep feedforward neural networks, in International Conference on Artificial Intelligence and Statistics, 2010: 249-256.
[25] L.F. Nicolas-Alonso and J. Gomez-Gil, Brain computer interfaces, a review, in Sensors, 12(2): 1211-1279, 2012.

