This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TMI.2016.2538465, IEEE Transactions on Medical Imaging
I. INTRODUCTION
Gliomas are the brain tumors with the highest mortality
rate and prevalence [1]. These neoplasms can be graded into
Low Grade Gliomas (LGG) and High Grade Gliomas (HGG),
with the former being less aggressive and infiltrative than the
latter [1], [2]. Even under treatment, patients do not survive
on average more than 14 months after diagnosis [3]. Current
treatments include surgery, chemotherapy, radiotherapy, or a
combination of them [4]. MRI is especially useful to assess
gliomas in clinical practice, since it is possible to acquire MRI
sequences providing complementary information [1].
Copyright (c) 2010 IEEE. Personal use of this material is permitted.
However, permission to use this material for any other purposes must be
obtained from the IEEE by sending a request to pubs-permissions@ieee.org.
S. Pereira, A. Pinto and C. A. Silva are with CMEMS-UMinho Research
Unit, University of Minho, Campus Azurem, Guimaraes, Portugal (e-mail:
id5692@alunos.uminho.pt (S. Pereira), csilva@dei.uminho.pt (C. Silva))
S. Pereira and V. Alves are with Centro Algoritmi, Universidade do Minho,
Braga, Portugal (e-mail: valves@di.uminho.pt (V. Alves))
0278-0062 (c) 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
more recently, Random Forests (RF) [14]–[21] were successfully applied to brain tumor segmentation. RFs became widely used due to their natural ability to handle multi-class problems and large feature vectors. A variety of features were proposed in the literature: encoding context [15], [16], [21], first-order and fractal-based texture [14], [15], [18], [21], [22], gradients [14], [15], brain symmetry [14], [15], [19], and physical properties [19]. Some authors also developed other ways of applying supervised classifiers. Tustison et al.
[19] developed a two-stage segmentation framework based on
RFs, using the output of the first classifier to improve a second
stage of segmentation. Geremia et al. [20] proposed a Spatially
Adaptive RF for hierarchical segmentation, going from coarser
to finer scales. Meier et al. [23] used a semi-supervised RF to
train a subject-specific classifier for post-operative brain tumor
segmentation.
Other methods, known as Deep Learning, deal with representation learning by automatically learning a hierarchy of increasingly complex features directly from data [24]. Thus, the focus shifts to designing architectures instead of developing hand-crafted features, which may require specialized knowledge [25]. CNNs have been used to win several object recognition [26], [27] and biological image segmentation [28] challenges.
Since a CNN operates over patches using kernels, it has the
advantages of taking context into account and being used with
raw data. In the field of brain tumor segmentation, recent proposals also investigate the use of CNNs [29]–[35]. Zikic et al. [29] used a shallow CNN with two convolutional layers separated by max-pooling with stride 3, followed by one fully-connected (FC) layer and a softmax layer. Urban et al. [30] evaluated the use of 3D filters, although the majority of authors opted for 2D filters [31]–[35]. 3D filters can take advantage of the 3D nature of the images, but they increase the
computational load. Some proposals evaluated two-pathway
networks to allow one of the branches to receive bigger patches
than the other, thus having a larger context view over the image
[31], [32]. In addition to their two-pathway network, Havaei
et al. [32] built a cascade of two networks and performed a
two-stage training, by training with balanced classes and then
refining it with proportions near the originals. Lyksborg et al. [33] used a binary CNN to identify the complete tumor. Then, a cellular automaton smooths the segmentation before a multi-class CNN discriminates the tumor sub-regions. Rao et al.
[34] extracted patches in each plane of each voxel and trained
a CNN in each MRI sequence; the outputs of the last FC
layer with softmax of each CNN are concatenated and used
to train a RF classifier. Dvořák and Menze [35] divided the brain tumor segmentation task into binary sub-tasks
and proposed structured predictions using a CNN as learning
method. Patches of labels are clustered into a dictionary of
label patches, and the CNN must predict the membership of
the input to each of the clusters.
In this paper, inspired by the groundbreaking work of Simonyan and Zisserman [36] on deep CNNs, we investigate the
potential of using deep architectures with small convolutional
kernels for segmentation of gliomas in MRI images. Simonyan
and Zisserman proposed the use of small 3×3 kernels to obtain deeper CNNs. With smaller kernels we can stack more convolutional layers, while keeping the same effective receptive field as bigger kernels and using fewer weights.
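The arithmetic behind this argument is easy to verify (a standard receptive-field and weight-count calculation, not code from the paper):

```python
def receptive_field(num_layers, kernel=3):
    """Effective receptive field of `num_layers` stacked stride-1 convolutions."""
    rf = 1
    for _ in range(num_layers):
        rf += kernel - 1  # each layer extends the field by (kernel - 1)
    return rf

def n_weights(kernels, channels):
    """Weights in a stack of square kernels with `channels` in/out channels each."""
    return sum(k * k * channels * channels for k in kernels)

# Two 3x3 layers see the same 5x5 region as a single 5x5 kernel, three see 7x7:
assert receptive_field(2) == 5 and receptive_field(3) == 7

# ...but with fewer weights (here for 64 channels in and out):
print(n_weights([3, 3], 64), "<", n_weights([5], 64))  # 73728 < 102400
```

Two stacked 3×3 layers also interleave an extra non-linearity, which is part of why deeper stacks of small kernels are more expressive than one large kernel.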
A. Pre-processing
MRI images are altered by bias field distortion, which causes the intensity of the same tissues to vary across the image. To correct it, we applied the N4ITK method [6].
However, this is not enough to ensure that the intensity
distribution of a tissue type is in a similar intensity scale
across different subjects for the same MRI sequence, which
is an explicit or implicit assumption in most segmentation
methods [37]. In fact, intensities can vary even when images of the same patient are acquired with the same scanner at different time points, or in the presence of a pathology [7], [38]. So, to make the contrast and intensity ranges more similar across patients and acquisitions, we apply the intensity normalization
method proposed by Nyúl et al. [7] on each sequence. In this intensity normalization method, a set of intensity landmarks IL = {p_c1, i_p10, i_p20, ..., i_p90, p_c2} is learned for each sequence from the training set. p_c1 and p_c2 are chosen for each MRI sequence as described in [38], and i_pl represents the intensity at the l-th percentile. After training, the intensity normalization is accomplished by linearly transforming the original intensities between two consecutive landmarks into the corresponding learned landmarks. In this way, the histogram of each sequence becomes more similar across subjects.
After normalizing the MRI images, we compute the mean intensity value and standard deviation across all training patches extracted for each sequence. Then, we normalize the patches of each sequence to have zero mean and unit variance¹.

¹ The mean and standard deviation computed on the training patches are used to normalize the testing patches.
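The two normalization steps can be sketched as follows (an illustration with hypothetical landmark values, not the authors' implementation; the Nyúl method maps intensities piecewise-linearly between learned landmarks):

```python
import numpy as np

def nyul_normalize(image, subject_landmarks, learned_landmarks):
    """Piecewise-linear mapping of image intensities: the subject's own
    landmark percentiles are sent onto the landmarks learned from the
    training set; np.interp interpolates linearly between consecutive
    landmark pairs (values outside the range are clamped to the ends)."""
    return np.interp(image, subject_landmarks, learned_landmarks)

# Hypothetical learned landmarks for one MRI sequence (increasing values):
learned = np.array([0., 20., 35., 48., 60., 71., 82., 90., 95., 99., 100.])
percentiles = [1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 99]  # p_c1 ... p_c2

img = np.random.default_rng(0).uniform(0, 400, size=(33, 33))
subject = np.percentile(img, percentiles)       # this subject's landmarks
normalized = nyul_normalize(img, subject, learned)

# Patch normalization: zero mean, unit variance, using TRAINING statistics
# (here the single "training" image stands in for all training patches).
train_mean, train_std = normalized.mean(), normalized.std()
patch = (normalized - train_mean) / train_std
```

At test time the same `train_mean` and `train_std` are reused, so testing patches are scaled consistently with the training distribution.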
HGG
Layer   Type        Filter size   Stride   # filters / FC units   Output
Input   -           -             -        -                       4×33×33
1       Conv.       3×3           1×1      64                      64×33×33
2       Conv.       3×3           1×1      64                      64×33×33
3       Conv.       3×3           1×1      64                      64×33×33
4       Max-pool.   3×3           2×2      -                       64×16×16
5       Conv.       3×3           1×1      128                     128×16×16
6       Conv.       3×3           1×1      128                     128×16×16
7       Conv.       3×3           1×1      128                     128×16×16
8       Max-pool.   3×3           2×2      -                       6272 (128×7×7)
9       FC          -             -        256                     256
10      FC          -             -        256                     256
11      FC          -             -        5                       5
LGG
Layer   Type        Filter size   Stride   # filters / FC units   Output
Input   -           -             -        -                       4×33×33
1       Conv.       3×3           1×1      64                      64×33×33
2       Conv.       3×3           1×1      64                      64×33×33
3       Max-pool.   3×3           2×2      -                       64×16×16
4       Conv.       3×3           1×1      128                     128×16×16
5       Conv.       3×3           1×1      128                     128×16×16
6       Max-pool.   3×3           2×2      -                       6272 (128×7×7)
7       FC          -             -        256                     256
8       FC          -             -        256                     256
9       FC          -             -        5                       5
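The feature-map sizes of the HGG network can be checked with simple arithmetic (a sketch assuming zero-padded 3×3 convolutions that preserve the 33×33 patch size and 3×3 max-pooling with stride 2, consistent with the sizes listed):

```python
def conv_out(size, kernel=3, stride=1, pad=1):
    """Spatial output size of a convolution layer."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel=3, stride=2):
    """Spatial output size of a max-pooling layer."""
    return (size - kernel) // stride + 1

size, channels = 33, 4            # input patch: 4 MRI sequences, 33x33 voxels
for n_filters in (64, 64, 64):    # layers 1-3: padded 3x3 convolutions
    size, channels = conv_out(size), n_filters
size = pool_out(size)             # layer 4: 33x33 -> 16x16
for n_filters in (128, 128, 128): # layers 5-7
    size, channels = conv_out(size), n_filters
size = pool_out(size)             # layer 8: 16x16 -> 7x7

flat = channels * size * size     # flattened input to the first FC layer
print(channels, size, flat)       # 128 7 6272
```

The flattened size of 6272 matches the FC input in the tables, which supports the padding assumption.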
Hyperparameter                          Value
Initialization    weights               Xavier
                  bias                  0.1
Leaky ReLU                              0.333
Dropout           p (HGG)               0.1
                  p (LGG)               0.5
Training          epochs (HGG)          20
                  epochs (LGG)          25
                  momentum              0.9
                  learning rate         initial 0.003, final 0.00003
                  batch size            128
Post-processing   VOL (HGG)             10000
                  VOL (LGG)             3000
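The post-processing entries VOL HGG and VOL LGG suggest a volumetric constraint: connected tumor components smaller than a threshold are discarded. A minimal sketch of such a constraint (our illustration on a 2D slice with a simple 4-connectivity flood fill; the connectivity and implementation are assumptions, not the authors' code):

```python
import numpy as np
from collections import deque

def remove_small_components(mask, min_volume):
    """Zero out connected components (4-connectivity) smaller than min_volume."""
    mask = mask.astype(bool)
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    h, w = mask.shape
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                # Breadth-first flood fill to collect one component.
                queue, comp = deque([(i, j)]), [(i, j)]
                seen[i, j] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                            comp.append((ny, nx))
                if len(comp) >= min_volume:  # keep only large components
                    for y, x in comp:
                        out[y, x] = True
    return out

seg = np.zeros((8, 8), dtype=bool)
seg[0:4, 0:4] = True   # plausible tumor component, volume 16
seg[6, 6] = True       # spurious single voxel
cleaned = remove_small_components(seg, min_volume=5)
print(cleaned.sum())   # 16: the isolated voxel was removed
```

In practice a 3D labeling routine (e.g. connected-component labeling over the whole volume) would be used with the grade-specific threshold.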
DSC = 2TP / (FP + 2TP + FN),   (4)

PPV = TP / (TP + FP),   (5)

Sensitivity = TP / (TP + FN).   (6)

4 http://www.dei.uminho.pt/pessoas/csilva/bratscnn/
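Metrics (4)-(6) reduce to counts of true positives, false positives and false negatives between binary masks; a minimal sketch (our illustration, not the evaluation platform's code):

```python
import numpy as np

def dsc_ppv_sensitivity(pred, ref):
    """Compute DSC (4), PPV (5) and Sensitivity (6) from binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    tp = np.logical_and(pred, ref).sum()    # true positives
    fp = np.logical_and(pred, ~ref).sum()   # false positives
    fn = np.logical_and(~pred, ref).sum()   # false negatives
    dsc = 2 * tp / (fp + 2 * tp + fn)
    ppv = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    return dsc, ppv, sensitivity

ref = np.array([1, 1, 1, 0, 0, 0], dtype=bool)   # reference segmentation
pred = np.array([1, 1, 0, 1, 0, 0], dtype=bool)  # prediction: 1 miss, 1 false hit
print(dsc_ppv_sensitivity(pred, ref))  # (0.666..., 0.666..., 0.666...)
```

In the BRATS evaluation these counts are accumulated per tumor region (complete, core, enhancing) before computing the ratios.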
TABLE IV: Study of key components of the proposed method. In each test, only the referred component was modified in the Proposed method. Results in bold represent metrics with p-value < 0.05, computed with the two-sided paired Wilcoxon signed-rank test, when comparing each modified component with the Proposed method in each grade, or combination of grades; underlined results represent the highest metric for each region in each grade, or combination of grades.
Leaderboard
                                             DSC                 PPV                 Sensitivity
Method                            Grade      Comp. Core  Enh.    Comp. Core  Enh.    Comp. Core  Enh.
Proposed                          HGG        0.88  0.76  0.73    0.91  0.90  0.72    0.86  0.74  0.81
                                  LGG        0.65  0.53  0.00    0.54  0.42  0.00    0.86  0.86  0.00
                                  Combined   0.84  0.72  0.62    0.85  0.82  0.60    0.86  0.76  0.68
Using pre-processing as in [19]   HGG        0.87  0.74  0.71    0.89  0.92  0.73    0.86  0.69  0.75
                                  LGG        0.34  0.33  0.00    0.29  0.29  0.00    0.63  0.44  0.00
                                  Combined   0.78  0.67  0.60    0.79  0.82  0.61    0.82  0.65  0.63
Sampling training samples         HGG        0.88  0.76  0.73    0.91  0.90  0.72    0.86  0.74  0.81
from HGG into LGG                 LGG        0.46  0.34  0.00    0.37  0.27  0.00    0.71  0.63  0.00
                                  Combined   0.81  0.69  0.62    0.82  0.80  0.60    0.84  0.72  0.68
Using no rotations                HGG        0.87  0.77  0.73    0.86  0.83  0.70    0.89  0.78  0.83
                                  LGG        0.47  0.31  0.00    0.39  0.25  0.00    0.68  0.66  0.00
                                  Combined   0.80  0.69  0.61    0.78  0.74  0.59    0.85  0.76  0.70
Random rotations (5.625°)         HGG        0.87  0.77  0.74    0.92  0.89  0.76    0.84  0.76  0.79
                                  LGG        0.62  0.49  0.00    0.49  0.38  0.00    0.91  0.87  0.00
                                  Combined   0.83  0.72  0.62    0.85  0.81  0.64    0.85  0.78  0.66
Using ReLU                        HGG        0.87  0.77  0.73    0.87  0.88  0.69    0.89  0.77  0.86
                                  LGG        0.53  0.47  0.00    0.40  0.37  0.00    0.86  0.89  0.00
                                  Combined   0.82  0.72  0.61    0.79  0.80  0.58    0.88  0.79  0.72
Large/small kernels 1             HGG        0.85  0.74  0.72    0.92  0.87  0.73    0.81  0.71  0.77
                                  LGG        0.52  0.36  0.00    0.42  0.27  0.00    0.84  0.71  0.00
                                  Combined   0.80  0.68  0.60    0.84  0.77  0.61    0.81  0.71  0.65
Large/small kernels 2             HGG        0.85  0.74  0.72    0.92  0.91  0.78    0.81  0.71  0.77
                                  LGG        0.52  0.34  0.00    0.42  0.26  0.00    0.85  0.71  0.00
                                  Combined   0.79  0.67  0.60    0.84  0.81  0.66    0.81  0.71  0.64
Coronal patches                   HGG        0.86  0.75  0.72    0.88  0.83  0.74    0.86  0.74  0.76
                                  LGG        0.59  0.44  0.00    0.46  0.34  0.00    0.92  0.86  0.00
                                  Combined   0.82  0.70  0.61    0.81  0.75  0.62    0.87  0.76  0.64
Sagittal patches                  HGG        0.86  0.75  0.71    0.86  0.79  0.70    0.87  0.78  0.79
                                  LGG        0.45  0.32  0.00    0.36  0.26  0.00    0.87  0.70  0.00
                                  Combined   0.79  0.68  0.60    0.78  0.70  0.59    0.87  0.76  0.66
Challenge (HGG)
                                  DSC                 PPV                 Sensitivity
Method                            Comp. Core  Enh.    Comp. Core  Enh.    Comp. Core  Enh.
Proposed                          0.88  0.83  0.77    0.88  0.87  0.74    0.89  0.83  0.81
Using pre-processing as in [19]   0.80  0.78  0.73    0.75  0.86  0.71    0.92  0.74  0.77
Using no rotations                0.85  0.79  0.74    0.81  0.78  0.70    0.91  0.86  0.82
Random rotations (5.625°)         0.88  0.82  0.76    0.90  0.84  0.76    0.86  0.84  0.78
Using ReLU                        0.86  0.81  0.74    0.82  0.80  0.66    0.90  0.85  0.86
Large kernels/shallow arch. 1     0.87  0.81  0.75    0.90  0.89  0.76    0.84  0.78  0.76
Large kernels/shallow arch. 2     0.87  0.81  0.75    0.90  0.88  0.75    0.84  0.78  0.76
Coronal patches                   0.85  0.81  0.74    0.81  0.85  0.75    0.90  0.79  0.75
Sagittal patches                  0.84  0.76  0.73    0.78  0.73  0.67    0.92  0.85  0.82
Fig. 2: Boxplot for each of the experiments in Table IV in the Leaderboard data set. The boxplot for the experiment of sampling training samples from HGG into LGG is not shown, given the reduced number of LGG subjects (4 of the 25 subjects in the Leaderboard data set). The diamond marks the mean.
Fig. 3: Boxplot for each of the experiments in Table IV in the Challenge data set. The diamond marks the mean.
Fig. 4: Examples of segmentations obtained with cross-validation, showing the effect of each component of the proposed method. The first row shows an HGG and the bottom row an LGG. Each color represents a tumor class: green - edema, blue - necrosis, yellow - non-enhancing tumor, and red - enhancing tumor.
TABLE V: Results in the Leaderboard and Challenge data sets of BRATS 2013. The relative rank refers to the combination of
the ranking in each metric for the referred class, while the position is the global ranking, as provided by the online evaluation
platform [47].
Leaderboard
                        DSC                 PPV                 Sensitivity         Relative Rank          Position
Method                  Comp. Core  Enh.    Comp. Core  Enh.    Comp. Core  Enh.    Comp.  Core   Enh.
Proposed                0.84  0.72  0.62    0.85  0.82  0.60    0.86  0.76  0.68     3.67   3.33   1.67    1
Kwon et al. [11]        0.86  0.79  0.59    0.88  0.84  0.60    0.86  0.81  0.63     3.33   1.67   5.00    2
Zhao et al.8 [5]        0.83  0.73  0.55    0.77  0.67  0.46    0.94  0.89  0.78     4.67   4.00   9.33    3
agnm19                  0.83  0.71  0.54    0.85  0.73  0.59    0.84  0.82  0.58     6.00   4.33  10.33    4
havam29                 0.82  0.69  0.56    0.83  0.77  0.62    0.83  0.69  0.58     7.67   7.00   8.00    5
Urban et al.9 [30]      0.70  0.57  0.54    0.65  0.55  0.52    0.87  0.67  0.60    14.00  18.67  12.33   17
Havaei et al.10 [32]    0.84  0.71  0.57    0.88  0.79  0.54    0.84  0.72  0.68      -      -      -      -
Davy et al. [31]        0.72  0.63  0.56    0.69  0.64  0.50    0.82  0.68  0.68      -      -      -      -
Challenge
                        DSC                 PPV                 Sensitivity         Relative Rank          Position
Method                  Comp. Core  Enh.    Comp. Core  Enh.    Comp. Core  Enh.    Comp.  Core   Enh.
Proposed                0.88  0.83  0.77    0.88  0.87  0.74    0.89  0.83  0.81     7.00   3.33   5.33    1
Kwon et al. [11], [52]  0.88  0.83  0.72    0.92  0.90  0.74    0.84  0.78  0.72     9.33   5.00  13.00    2
Tustison et al. [19]    0.87  0.78  0.74    0.85  0.74  0.69    0.89  0.88  0.83    10.33  11.67   9.00    3
havam29                 0.88  0.78  0.73    0.89  0.79  0.68    0.87  0.79  0.80     8.33  10.67  13.33    4
al-ss19                 0.87  0.78  0.70    0.89  0.83  0.75    0.86  0.78  0.70     9.67   8.67  14.67    5
Urban et al.8 [30]      0.86  0.75  0.73    0.82  0.75  0.79    0.92  0.79  0.70    11.67  16.00  11.67   12
Havaei et al. [32]      0.88  0.79  0.73    0.89  0.79  0.68    0.87  0.79  0.80      -      -      -      -
Davy et al. [31]        0.85  0.74  0.68    0.85  0.74  0.62    0.85  0.78  0.77      -      -      -      -
V. CONCLUSIONS
In summary, we propose a novel CNN-based method for segmentation of brain tumors in MRI images. We start with a pre-processing stage consisting of bias field correction, intensity normalization, and patch normalization. After that, during training, the number of training patches is artificially augmented by rotating the training patches, and samples of HGG are used to augment the number of patches of the rare LGG classes. The CNN is built over convolutional layers with small 3×3 kernels to allow deeper architectures.
In designing our method, we address the heterogeneity
caused by multi-site multi-scanner acquisitions of MRI images
using intensity normalization as proposed by Nyul et al. We
show that this is important in achieving a good segmentation.
Brain tumors are highly variable in their spatial localization
and structural composition, so we have investigated the use of
data augmentation to cope with such variability. We studied
augmenting our training data set by rotating the patches.
ACKNOWLEDGMENT
The authors would like to thank the Anonymous Reviewers for their questions and suggestions, which helped to improve this document. This work is supported by FCT with the reference project UID/EEA/04436/2013, and by FEDER funds through the COMPETE 2020 Programa Operacional Competitividade e Internacionalização (POCI) with the reference project POCI-01-0145-FEDER-006941. Sérgio Pereira was supported by a scholarship from the Fundação para a Ciência e a Tecnologia (FCT), Portugal (scholarship number PD/BD/105803/2014).
Brain tumor image data used in this article were obtained from the MICCAI 2013 and 2015 Challenges on Multimodal Brain Tumor Segmentation. The challenge database contains fully anonymized images from the Cancer Imaging Atlas Archive and the BRATS 2012 challenge.
REFERENCES
[1] S. Bauer et al., "A survey of MRI-based medical image analysis for brain tumor studies," Physics in Medicine and Biology, vol. 58, no. 13, pp. 97–129, 2013.
[2] D. N. Louis et al., "The 2007 WHO classification of tumours of the central nervous system," Acta Neuropathologica, vol. 114, no. 2, pp. 97–109, 2007.
[3] E. G. Van Meir et al., "Exciting new advances in neuro-oncology: The avenue to a cure for malignant glioma," CA: A Cancer Journal for Clinicians, vol. 60, no. 3, pp. 166–193, 2010.
[4] G. Tabatabai et al., "Molecular diagnostics of gliomas: the clinical perspective," Acta Neuropathologica, vol. 120, no. 5, pp. 585–592, 2010.
[5] B. Menze et al., "The multimodal brain tumor image segmentation benchmark (BRATS)," IEEE Transactions on Medical Imaging, vol. 34, no. 10, pp. 1993–2024, 2015.
[6] N. J. Tustison et al., "N4ITK: improved N3 bias correction," IEEE Transactions on Medical Imaging, vol. 29, no. 6, pp. 1310–1320, 2010.
[7] L. G. Nyúl, J. K. Udupa, and X. Zhang, "New variants of a method of MRI scale standardization," IEEE Transactions on Medical Imaging, vol. 19, no. 2, pp. 143–150, 2000.
[31] A. Davy et al., "Brain tumor segmentation with deep neural networks," MICCAI Multimodal Brain Tumor Segmentation Challenge (BraTS), pp. 31–35, 2014.
[32] M. Havaei et al., "Brain tumor segmentation with deep neural networks," arXiv:1505.03540v1, 2015. [Online]. Available: http://arxiv.org/abs/1505.03540
[33] M. Lyksborg et al., "An ensemble of 2D convolutional neural networks for tumor segmentation," in Image Analysis. Springer, 2015, pp. 201–211.
[34] V. Rao, M. Sharifi, and A. Jaiswal, "Brain tumor segmentation with deep learning," MICCAI Multimodal Brain Tumor Segmentation Challenge (BraTS), pp. 56–59, 2015.
[35] P. Dvořák and B. Menze, "Structured prediction with convolutional neural networks for multimodal brain tumor segmentation," MICCAI Multimodal Brain Tumor Segmentation Challenge (BraTS), pp. 13–24, 2015.
[36] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556v6, 2014. [Online]. Available: http://arxiv.org/abs/1409.1556
[37] M. Shah et al., "Evaluating intensity normalization on MRIs of human brain with multiple sclerosis," Medical Image Analysis, vol. 15, no. 2, pp. 267–282, 2011.
[38] L. Nyúl and J. Udupa, "On standardizing the MR image intensity scale," Magnetic Resonance in Medicine, vol. 42, no. 6, pp. 1072–1081, 1999.
[39] Y. LeCun et al., "Backpropagation applied to handwritten zip code recognition," Neural Computation, vol. 1, no. 4, pp. 541–551, 1989.
[40] Y. LeCun et al., "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[41] X. Glorot and Y. Bengio, "Understanding the difficulty of training deep feedforward neural networks," in International Conference on Artificial Intelligence and Statistics, 2010, pp. 249–256.
[42] K. Jarrett et al., "What is the best multi-stage architecture for object recognition?" in Computer Vision, 2009 IEEE 12th International Conference on. IEEE, 2009, pp. 2146–2153.
[43] A. L. Maas, A. Y. Hannun, and A. Y. Ng, "Rectifier nonlinearities improve neural network acoustic models," in Proc. ICML, vol. 30, 2013.
[44] N. Srivastava et al., "Dropout: A simple way to prevent neural networks from overfitting," The Journal of Machine Learning Research, vol. 15, no. 1, pp. 1929–1958, 2014.
[45] G. E. Hinton et al., "Improving neural networks by preventing co-adaptation of feature detectors," arXiv preprint arXiv:1207.0580v1, 2012. [Online]. Available: http://arxiv.org/abs/1207.0580
[46] M. Kistler et al., "The virtual skeleton database: An open access repository for biomedical research and collaboration," Journal of Medical Internet Research, vol. 15, no. 11, Nov 2013.
[47] VirtualSkeleton, "BRATS 2013," 2013. [Online]. Available: https://www.virtualskeleton.ch/BRATS/Start2013 [Accessed: September 30, 2015]
[48] F. Bastien et al., "Theano: new features and speed improvements," Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
[49] J. Bergstra et al., "Theano: a CPU and GPU math expression compiler," in Proceedings of the Python for Scientific Computing Conference (SciPy), Jun. 2010.
[50] S. Dieleman et al., "Lasagne: First release." Aug. 2015. [Online]. Available: http://dx.doi.org/10.5281/zenodo.27878
[51] L. R. Dice, "Measures of the amount of ecologic association between species," Ecology, vol. 26, no. 3, pp. 297–302, 1945.
[52] D. Kwon et al., "Multimodal brain tumor image segmentation using GLISTR," in MICCAI Multimodal Brain Tumor Segmentation Challenge (BraTS), 2014, pp. 18–19.