

ORIGINAL ARTICLE 

Year : 2017 | Volume : 3 | Issue : 2 | Page : 76-85

Level set framework of multiatlas label fusion with applications to magnetic resonance imaging segmentation of brain region of interests and cardiac left ventricles
Zhaoxuan Gong^{1}, Zhentai Lu^{2}, Dazhe Zhao^{1}, Shuai Wang^{3}, Yu Liu^{4}, Yihua Song^{5}, Kai Xuan^{3}, Wenjun Tan^{1}, Chunming Li^{3}
^{1} Department of Computer Science, Key Laboratory of Medical Image Computing of Ministry of Education, Northeastern University, Shenyang, China ^{2} Department of Computer Science, Key Laboratory for Medical Imaging of Southern Medical University, Shenyang, China ^{3} Department of Electronic Engineering, School of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu, China ^{4} Department of Computer Science, College of Electronic Science and Engineering, Jilin University, Changchun, China ^{5} Department of Computer Science, School of Information Technology, Nanjing University of Chinese Medicine, Nanjing, China
Date of Web Publication: 18-Sep-2017
Correspondence Address: Chunming Li, School of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu, China
Source of Support: None, Conflict of Interest: None
DOI: 10.4103/digm.digm_23_17
Background and Objectives: This paper evaluates the performance of a variational level set method for label fusion, which combines a label fusion term, an image data term, and a length regularization term to automatically label objects of interest in biomedical images. This paper extends our preliminary work presented in a conference paper; here, we mainly focus on validating the variational level set method. Subjects and Methods: Label fusion is achieved by combining three terms: a label fusion term, an image data term, and a regularization term. The curve evolution derived from the energy minimization is driven by the three terms simultaneously to achieve optimal label fusion. Each label obtained from a nonlinear registration method is represented by a level set function whose zero level contour encloses the labeled region. Lu et al. employed the level set formulation only for hippocampus segmentation. Results: Our method is compared with majority voting (MV), local weighted voting (LWV), and Simultaneous Truth and Performance Level Estimation (STAPLE). The method is evaluated on the MICCAI 2012 Multi-Atlas Labeling Challenge and the MICCAI 2012 ventricle segmentation challenge. The mean Dice metric, computed using different atlases, is 0.85 for the hippocampus, 0.77 for the amygdala, 0.87 for the caudate, 0.78 for the pallidum, 0.89 for the putamen, 0.91 for the thalamus, and 0.78 for the cardiac left ventricle. Conclusions: Experimental results demonstrate that our method is robust to parameter settings and outperforms MV, LWV, and STAPLE. The image data term plays a key role in improving segmentation accuracy. Our method can obtain satisfactory results with fewer atlases.
Keywords: Label fusion, level set, multiatlas, registration, segmentation
How to cite this article: Gong Z, Lu Z, Zhao D, Wang S, Liu Y, Song Y, Xuan K, Tan W, Li C. Level set framework of multiatlas label fusion with applications to magnetic resonance imaging segmentation of brain region of interests and cardiac left ventricles. Digit Med 2017;3:76-85
How to cite this URL: Gong Z, Lu Z, Zhao D, Wang S, Liu Y, Song Y, Xuan K, Tan W, Li C. Level set framework of multiatlas label fusion with applications to magnetic resonance imaging segmentation of brain region of interests and cardiac left ventricles. Digit Med [serial online] 2017 [cited 2021 Dec 8];3:76-85. Available from: http://www.digitmedicine.com/text.asp?2017/3/2/76/215029
Introduction   
Accurate segmentation of a region of interest (ROI) is an important task in neuroimaging but remains quite challenging due to the complex shape of the ROI, the low contrast with its neighboring structures, and the quality of the original images. Manual masking is generally considered the gold standard but is difficult and time-consuming. Hence, it is important to develop automatic segmentation methods for modern medical science.
Recently, multi-atlas approaches have received increasing attention due to their ability to segment complex tissues.^{[1],[2],[3],[4],[5],[6],[7]} In the past few years, multi-atlas segmentation (MAS) has developed rapidly, and it has many advantages over single-atlas-based segmentation methods.^{[8],[9],[10],[11],[12]} Heckemann et al.^{[13]} proposed a multi-atlas approach using different atlas selection strategies for bee brains. In their method, the volumes of some brain compartments can be computed by an atlas-based segmentation model. They applied nonrigid registration algorithms to create a target atlas of the bee brain. The method is also suitable for segmenting human bones from computed tomography images, and the authors intended to apply it to more applications. Heckemann et al.^{[14]} proposed an atlas-based segmentation method combining label propagation and decision fusion. In their method, brain labels are first obtained from a free-form registration method; brain tissues are then obtained by label fusion.
In MAS, multiple atlases are registered to the target, and the resulting voxel-wise label conflicts are resolved using label fusion.^{[15]} Several multi-atlas-based methods have been introduced to segment brain tissues in clinical magnetic resonance imaging (MRI) studies. For example, Eugenio Iglesias et al.^{[16]} proposed a MAS method that integrates a generative probabilistic model. In their model, registration and label fusion are performed simultaneously; hence, the atlases and the target image are not restricted to the same modality. Del Re et al.^{[17]} proposed a multi-atlas method for brain segmentation in which atlases more similar to the target image are assigned greater weights; brain tissues are then segmented by label fusion. Ma et al.^{[18]} applied a multi-atlas-based framework to mouse brain MRI: a nonrigid B-spline method is used for registration, and Simultaneous Truth and Performance Level Estimation (STAPLE) is applied for label fusion. Gholipour et al.^{[19]} proposed a novel multi-atlas-based segmentation method that builds a probabilistic framework for label fusion by considering multiple shapes together with intensity and local spatial information. However, although using more atlases can improve segmentation accuracy, the above multi-atlas-based methods are computationally expensive due to the nonlinear registrations.
Meanwhile, several methods have been proposed for segmenting the left ventricle (LV) in image sequences. Liu et al.^{[20]} proposed a level set framework for segmentation of the left and right ventricles. They extended the distance regularized level set evolution model to a two-phase level set formulation so that the endocardium and epicardium can be extracted simultaneously. Bouzidi et al.^{[21]} proposed a new semiautomated segmentation method for the LV in cardiac magnetic resonance (MR) images, in which segmentation is achieved through a deformable model driven by a new external energy. Yang et al.^{[22]} proposed a circular shape-constrained fuzzy C-means (FCM) clustering algorithm for LV segmentation: an initial LV extraction is obtained with the FCM method from the image intensity information, and a circular shape function is then incorporated into the FCM model to obtain an accurate segmentation.
In this paper, we present a level set segmentation framework that combines three energy terms. Our variational framework is similar to some active contour models:^{[13],[23],[24]} instead of treating each voxel independently, as in traditional label fusion methods,^{[13],[25],[26],[27],[28],[29]} the object as a whole is considered as a variable of an energy functional to be minimized. Our work is an extension of Lu et al.,^{[30]} who applied the level set formulation to hippocampus segmentation. We mainly focus on applying the method to brain ROI segmentation and ventricle segmentation. We also evaluate the influence of the image data term and the number of atlases on our level set formulation and compare against other fusion methods in terms of the Dice coefficient (DC). [Figure 1] shows the workflow of our method.
Subjects and Methods   
The segmentation of ROIs can be obtained using labels warped from multiple atlases by registration algorithms; we then look for an optimal labeling as the desired segmentation of the ROI by label fusion. To improve segmentation accuracy, a desirable label fusion should take advantage of the image information of the ROI to be segmented. From this perspective, we propose a variational framework for label fusion, in which the ROI as a whole is considered as a variable of an energy functional to be minimized.^{[1]} Let Ω denote the target image domain. An ROI is defined by a label image L, a binary map such that L(x) = 1 for x in the label region and L(x) = 0 otherwise. For each label image L, we let the level set function ϕ take negative values inside the label region (where L(x) = 1) and positive values outside it (where L(x) = 0). Therefore, the zero level contour of the level set function ϕ, denoted by C, can be viewed as the boundary of the ROI labeled by L. Given the warped labels L_{i}, i = 1,…,n, from multiple atlases and registration algorithms, our key idea is to find an optimal label L by using a label fusion method under the following proposed variational framework.
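The representation above, a binary label map turned into a level set function whose zero contour is the region boundary, can be sketched with a signed distance transform. The following is a minimal NumPy/SciPy illustration of that sign convention; the function name `label_to_level_set` is ours, not from the paper:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def label_to_level_set(label):
    """Signed distance map: negative inside the label region (L(x) = 1),
    positive outside, so the zero level contour is the region boundary."""
    label = np.asarray(label, dtype=bool)
    inside = distance_transform_edt(label)    # > 0 at voxels inside the region
    outside = distance_transform_edt(~label)  # > 0 at voxels outside the region
    return outside - inside

# toy 2-D label: a 4x4 square inside an 8x8 image
L = np.zeros((8, 8))
L[2:6, 2:6] = 1
phi = label_to_level_set(L)
```

In practice each warped label L_{i} would be converted this way to obtain the level set functions ϕ_{i} used in the fusion term.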
General framework
The energy functional consists of three terms: a label fusion term, an image data term, and a length regularization term, and it is minimized by means of level set evolution. We propose the following level set formulation for label fusion. The energy E of a level set function ϕ is defined as

E(ϕ) = αF(ϕ; ϕ_{1},…,ϕ_{n}) + βD(ϕ) + ηL(ϕ), (1)

where F(ϕ; ϕ_{1},…,ϕ_{n}) is the label fusion term, ϕ_{1},…,ϕ_{n} are the level set functions associated with the warped labels, D(ϕ) is the image data fitting term, and L(ϕ) is the regularization term; α, β, and η are the corresponding coefficients. We choose the following specific definitions of the three energy terms to provide a special case of our general framework in Eq. (1).
Label fusion term and length regularization term
The label fusion term F(ϕ; ϕ_{1},…,ϕ_{n}) is defined by

F(ϕ; ϕ_{1},…,ϕ_{n}) = Σ_{i=1}^{n} w_{i} ∫_{Ω} (H(ϕ(x)) − H(ϕ_{i}(x)))^{2} dx, (2)

where H is the Heaviside function and w_{i} can be set as a constant coefficient or a spatially varying weighting function. By minimizing the energy F, the contour of ϕ is forced to be close to the zero level contours of the level set functions ϕ_{1},…,ϕ_{n}, which are the boundaries of the ROIs given by the warped labels L_{1},…,L_{n}.
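As a numerical illustration, the fusion term is a weighted sum of squared differences between (smoothed) Heaviside functions, summed over voxels; it vanishes when ϕ agrees with all warped-label level sets and grows as they disagree. A sketch under that assumption (names `heaviside` and `fusion_energy` are ours):

```python
import numpy as np

def heaviside(phi, eps=1.0):
    # smooth Heaviside approximation H_eps(x) = (1/2)[1 + (2/pi) arctan(x/eps)]
    return 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))

def fusion_energy(phi, phis, weights):
    """Discrete version of F(phi; phi_1..phi_n) = sum_i w_i * sum_x (H(phi) - H(phi_i))^2;
    minimal when the zero contour of phi agrees with the warped-label contours."""
    H = heaviside(phi)
    return sum(w * np.sum((H - heaviside(p)) ** 2) for w, p in zip(weights, phis))

phi = np.linspace(-3, 3, 100)                      # 1-D toy level set function
same = fusion_energy(phi, [phi, phi], [0.05, 0.05])       # perfect agreement
shifted = fusion_energy(phi, [phi - 1, phi + 1], [0.05, 0.05])  # disagreement
```

With identical level sets the energy is exactly zero; shifting the warped labels makes it strictly positive, which is what drives the contour toward consensus.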
The regularization term in our variational framework is defined as

L(ϕ) = ∫_{Ω} |∇H(ϕ(x))| dx = ∫_{Ω} δ(ϕ(x))|∇ϕ(x)| dx, (3)

which computes the arc length of the zero level contour of ϕ. This regularization term ensures the regularity of the boundary of the ROI, which is important for label fusion.
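The gradient flow of this length term moves the contour by its mean curvature, div(∇ϕ/|∇ϕ|), which smooths the boundary. A minimal 2-D sketch of that quantity (NumPy; the function name `curvature` is ours):

```python
import numpy as np

def curvature(phi, tiny=1e-8):
    """div(grad(phi)/|grad(phi)|): curvature of the level contours of phi,
    the term that the length regularization contributes to the evolution."""
    gy, gx = np.gradient(phi)                    # gradients along rows, columns
    norm = np.sqrt(gx ** 2 + gy ** 2) + tiny     # avoid division by zero
    return np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)

# sanity check: level contours of a circular signed distance map at radius r
# have curvature 1/r
y, x = np.mgrid[0:101, 0:101].astype(float)
phi = np.sqrt((x - 50) ** 2 + (y - 50) ** 2) - 20.0
k = curvature(phi)
```

Evaluated at a point 25 voxels from the center, the numerical curvature is close to 1/25, matching the analytic value.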
Image data term and its impact on the proposed model
Li et al.^{[31]} proposed a region-scalable fitting (RSF) energy that exploits intensity information in local regions at a controllable scale. The RSF term can be used as the image data term in our model.
The image data fitting term D(ϕ) can be explicitly expressed as

D(ϕ) = Σ_{i=1}^{2} ∫_{Ω} (∫_{Ω} K_{σ}(x − y)|I(y) − f_{i}(x)|^{2} M_{i}(ϕ(y)) dy) dx, (4)

where M_{1}(ϕ) = H(ϕ) and M_{2}(ϕ) = 1 − H(ϕ), I(x) denotes the image intensity, and f_{1}(x) and f_{2}(x) are two functions that approximate the local image intensities on the two sides of the contour. K_{σ} is a Gaussian kernel defined by K_{σ}(u) = (1/a)e^{−|u|^{2}/(2σ^{2})}, where a is a normalization factor such that ∫K_{σ}(u)du = 1 and σ > 0 is a scale parameter. The Dirac delta function δ and the Heaviside function H are approximated by the smooth functions δ_{ε} and H_{ε}, defined by H_{ε}(x) = (1/2)[1 + (2/π)arctan(x/ε)] and δ_{ε}(x) = H′_{ε}(x) = (1/π)·ε/(ε^{2} + x^{2}).
Keeping the level set function ϕ fixed and minimizing the functional D in Eq. (4) with respect to the functions f_{1} and f_{2}, we obtain

f_{i}(x) = [K_{σ} ∗ (M_{i}(ϕ)I)](x) / [K_{σ} ∗ M_{i}(ϕ)](x), i = 1, 2, (5)

where ∗ denotes convolution; that is, f_{i}(x) is a weighted average of the image intensities in a Gaussian neighborhood of x on one side of the contour.
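The convolution ratio above can be computed efficiently with Gaussian filtering. The following is a sketch of that computation, not the authors' MATLAB implementation; the function name `local_fits` is ours:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_fits(I, phi, sigma=1.0, eps=1.0):
    """f_i = K_sigma * (M_i(phi) I) / K_sigma * M_i(phi), with M_1 = H_eps(phi)
    and M_2 = 1 - H_eps(phi); the Gaussian convolutions are gaussian_filter calls."""
    H = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))  # smooth Heaviside
    tiny = 1e-10                                            # avoid division by zero
    f1 = gaussian_filter(H * I, sigma) / (gaussian_filter(H, sigma) + tiny)
    f2 = gaussian_filter((1 - H) * I, sigma) / (gaussian_filter(1 - H, sigma) + tiny)
    return f1, f2

# toy 1-D image: intensity 2 where phi > 0, intensity 0 where phi < 0
phi = np.arange(20, dtype=float) - 10.0
I = np.where(phi > 0, 2.0, 0.0)
f1, f2 = local_fits(I, phi)
```

Away from the contour, f_{1} approaches the local mean on the ϕ > 0 side (2.0 here) and f_{2} approaches the local mean on the ϕ < 0 side (0.0 here), which is what makes the RSF term robust to intensity inhomogeneity.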
The image data term plays a key role in the proposed model, since intensity inhomogeneities often occur in MR images and in the target object. It also performs well on images with weak object boundaries. In most cases, the proposed variational framework obtains better segmentation results than energy combinations that do not use the image data term. The effect of the image data term is not obvious only in rare instances, for example, for the pallidum.
All experiments are performed in MATLAB R2012b on a computer with a 3.20 GHz Intel Core i5 CPU and 8 GB of RAM. We verified the impact of each energy term of our method. Unless otherwise specified, the following parameters are fixed in this paper: α = 1, w_{i} = 0.05 for i = 1,…,n, β = 1, η = 0.255, and σ = 1.
We also assess the quantitative performance of our method using the average DC, measured as

DC = 2|C_{seg} ∩ C_{truth}| / (|C_{seg}| + |C_{truth}|), (6)

where C_{seg} and C_{truth} are the segmented regions from the achieved and ground-truth segmentations, respectively.
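The Dice coefficient on binary masks is a one-liner; this sketch (function name `dice` is ours) matches the definition above:

```python
import numpy as np

def dice(seg, truth):
    """DC = 2 |C_seg intersect C_truth| / (|C_seg| + |C_truth|) on binary masks."""
    seg, truth = np.asarray(seg, dtype=bool), np.asarray(truth, dtype=bool)
    return 2.0 * np.logical_and(seg, truth).sum() / (seg.sum() + truth.sum())

a = np.zeros((10, 10)); a[2:8, 2:8] = 1   # 36 voxels
b = np.zeros((10, 10)); b[4:10, 4:10] = 1 # 36 voxels, overlapping a on a 4x4 block
```

For these two squares the overlap is 16 voxels, so DC = 2·16/(36+36) ≈ 0.44; identical masks give DC = 1.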
Results   
Brain region of interest segmentation
To illustrate the effectiveness of our model, we compared our method, combined with different registration approaches, with other label fusion methods on the data set from the MICCAI 2012 Multi-Atlas Labeling Challenge. Fifteen T1 MR images with manually segmented ROI labels are provided as ground truth, and we tested the different methods on them. Multi-atlas registration methods are increasingly integrated into numerous medical image computing approaches. Image registration is performed by DRAMMS^{[32]} and ANTS^{[33]} between each pair of atlas reference image and test image. The challenge website provides the warped ground-truth labels from ANTS registration. To compare our fusion term with other label fusion strategies, we used both DRAMMS and ANTS registration to obtain labels for the entire data set.
[Figure 2] shows a three-dimensional visualization of six brain ROIs (pallidum, hippocampus, amygdala, caudate, putamen, and thalamus) segmented by our method, together with the warped segmentations. The warped segmentations in [Figure 2]b are obtained by DRAMMS registration and those in [Figure 2]c by ANTS registration; these results are taken as labels for our fusion term. The results of our method, shown in [Figure 2]d, give regular and accurate boundaries of the segmented structures.  Figure 2: Results of manual segmentation and different methods for six brain ROIs: pallidum, hippocampus, amygdala, caudate, putamen, thalamus. (a) Manual segmentations; (b) warped segmentations using DRAMMS registration; (c) warped segmentations using ANTS registration; (d) the final segmentations of our fusion algorithm
[Figure 3] shows the influence of the image data term on our method for six brain ROIs. Three atlases are used for this experiment. We select two energy combinations to validate the performance of the image data term. From [Figure 3], we can see that the energy with the image data term yields better results than the energy without it in most cases (hippocampus, caudate, thalamus, amygdala, and putamen). However, the effect of the image data term is not obvious for the pallidum, for example. The length regularization term helps to maintain the regularity of the contour. The parameter η is set to 0.025 for this experiment, and the parameter β is optimized by evaluating a range of values.  Figure 3: Validation of the energy terms of the proposed model. Two energy combinations (with and without the image data term) are compared with respect to the Dice coefficient
We further performed segmentation with combinations of different registration and label fusion strategies. [Figure 4] shows the results in terms of Dice overlap produced by our method and other fusion methods. The performance of our method is compared against the widely used majority voting (MV)^{[34]} and local weighted voting (LWV)^{[35]} methods and against STAPLE^{[36]} for label fusion. DRAMMS and ANTS are used for registration. MV has the drawback that the weights or performance estimates are the same for all voxels of the segmentation. LWV does not consider the intensity similarity and the registration error between target and atlases. STAPLE uses an expectation-maximization algorithm to accomplish the fusion procedure and outperforms MV and LWV. We tested our method on six brain ROIs and obtained the highest DC values. The mean Dice metric was 0.77 for the amygdala, 0.87 for the caudate, 0.85 for the hippocampus, 0.78 for the pallidum, 0.89 for the putamen, and 0.91 for the thalamus.  Figure 4: Quantitative comparison of our method and multi-atlas fusion methods
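For reference, the MV baseline mentioned above simply assigns each voxel the label proposed by the majority of atlases, with no spatial or intensity weighting. A minimal sketch for binary labels (the function name `majority_vote` is ours; ties are broken toward background here, one of several possible conventions):

```python
import numpy as np

def majority_vote(labels):
    """Baseline MV fusion: a voxel is foreground when more than half of the
    warped atlas labels mark it as foreground."""
    stack = np.stack([np.asarray(l, dtype=int) for l in labels])  # (n_atlases, ...)
    return (stack.sum(axis=0) > len(labels) / 2).astype(int)

# three toy warped labels over four voxels
l1 = np.array([1, 1, 0, 0])
l2 = np.array([1, 0, 0, 1])
l3 = np.array([1, 1, 0, 0])
fused = majority_vote([l1, l2, l3])
```

Because every voxel is treated independently and all atlases count equally, MV ignores image intensities entirely, which is exactly the limitation the variational formulation addresses.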
[Figure 5] shows the effect of the size of the atlas subset on segmentation accuracy. It can be seen that segmentation performance grows rapidly as the number of atlases increases and then gradually plateaus. Using only three atlases, the mean Dice metric of our method reaches 0.9 for the thalamus, 0.78 for the pallidum, 0.89 for the putamen, 0.85 for the hippocampus, 0.87 for the caudate, and 0.79 for the amygdala. Fifteen data sets are used for this experiment, and each data set has 15 atlases. The experimental results demonstrate that the number of atlases has little influence on our model: our fusion strategy can achieve high DC values with small numbers of atlases, which is important for clinical use.  Figure 5: Segmentation accuracy for six brain regions of interest as a function of the number of atlases used by our method: (a) thalamus; (b) pallidum; (c) putamen; (d) hippocampus; (e) caudate; (f) amygdala
Ventricle segmentation
[Figure 6] shows the segmentation results of our method and other fusion methods on 20 images from the MICCAI 2012 ventricle segmentation challenge. The optimal parameters for this experiment are α = 1, w_{i} = 0.05 for i = 1,…,n, β = 1, η = 0.255, and σ = 1. [Figure 6]c-e show warped segmentation results obtained from ANTS registration; these warped segmentations are used in the fusion term of our model, and [Figure 6]f shows the fusion results of our method. From [Figure 6], it can be seen that the boundaries of some warped segmentation results skip parts of the ventricle and are finally attracted to non-ventricle regions. The object boundaries in our results are extracted more accurately and with smoother contours.  Figure 6: Comparison of the majority voting fusion, the local weighted voting model, and our method on cardiac left ventricle segmentation. (a) Original image; (b) manual labeling; (c-e) warped segmentation results by ANTS; (f) the final segmentation of our fusion algorithm
[Figure 7] compares our method with the other fusion methods in terms of DC. Five atlases, obtained from ANTS registration, are used for each target image. We focus on comparing our label fusion method with MV, LWV, and STAPLE. From [Figure 7], it can be seen that MV obtains the lowest result and that there is no significant difference between LWV and STAPLE. Overall, our method outperforms MV, LWV, and STAPLE.  Figure 7: The performance in terms of the Dice coefficient produced by each method for cardiac left ventricle segmentation
Discussion   
In this paper, we evaluated the performance of a variational level set method for label fusion, with labels obtained from nonlinear registration methods. The image data term has an intrinsic capability to distinguish the desired object from its background. The length regularization term is necessary to maintain the regularity of the contour. The fusion term forces the level set contour to stay close to the labels, making our method suitable for automatic applications. The experimental results indicate that our method obtains robust segmentation results and outperforms other fusion methods in terms of Dice value and the number of atlases required. One contribution of this work is the finding that the image data term has a great impact on segmentation performance; however, it also decreases the segmentation accuracy for some brain ROIs, such as the pallidum. A second contribution is that our method is robust to the number of atlases: it can obtain accurate results with 3-5 atlases. We expect that the proposed method will find its utility in further applications in MRI segmentation, as well as in other areas where level set methods have been and could be applied.
Conclusion   
Multi-atlas label fusion is a vital image segmentation strategy that is increasingly popular in medical imaging. In this work, we validated a variational level set formulation of label fusion on brain ROI and cardiac LV segmentation, showing that it outperforms MV, LWV, and STAPLE while requiring only a few atlases.
Financial support and sponsorship
This research was partly supported by the National Natural Science Foundation of China (NSFC) key project No. 11531005, NSFC under Grant No. 61302012, the Fundamental Research Funds for the Central Universities under Grant N161604006, and the China Scholarship Council.
Conflicts of interest
There are no conflicts of interest.
References   
1. Khan AR, Cherbuin N, Wen W, Anstey KJ, Sachdev P, Beg MF. Optimal weights for local multi-atlas fusion using supervised learning and dynamic information (SuperDyn): Validation on hippocampus segmentation. Neuroimage 2011;56:1263-9.
2. Wang H, Suh JW, Das SR, Pluta JB, Craige C, Yushkevich PA. Multi-atlas segmentation with joint label fusion. IEEE Trans Pattern Anal Mach Intell 2013;35:611-23.
3. van Rikxoort EM, Isgum I, Arzhaeva Y, Staring M, Klein S, Viergever MA, et al. Adaptive local multi-atlas segmentation: Application to the heart and the caudate nucleus. Med Image Anal 2010;14:39-49.
4. Coupé P, Manjón JV, Fonov V, Pruessner J, Robles M, Collins DL. Patch-based segmentation using expert priors: Application to hippocampus and ventricle segmentation. Neuroimage 2011;54:940-54.
5. Sabuncu MR, Yeo BT, Van Leemput K, Fischl B, Golland P. A generative model for image segmentation based on label fusion. IEEE Trans Med Imaging 2010;29:1714-29.
6. Wu G, Wang Q, Zhang D, Nie F, Huang H, Shen D. A generative probability model of joint label fusion for multi-atlas based brain segmentation. Med Image Anal 2014;18:881-90.
7. Doshi J, Erus G, Ou Y, Resnick SM, Gur RC, Gur RE, et al. MUSE: MUlti-atlas region Segmentation utilizing Ensembles of registration algorithms and parameters, and locally optimal atlas selection. Neuroimage 2016;127:186-95.
8. Ashburner J, Friston KJ. Unified segmentation. Neuroimage 2005;26:839-51.
9. Pohl KM, Fisher J, Grimson WE, Kikinis R, Wells WM. A Bayesian model for joint segmentation and registration. Neuroimage 2006;31:228-39.
10. Yeo B, Sabuncu M, Desikan R, Fischl B, Golland P. Effects of registration regularization and atlas sharpness on segmentation accuracy. Med Image Anal 2008;12:603-15.
11. Van Leemput K, Bakkour A, Benner T, Wiggins G, Wald LL, Augustinack J, et al. Automated segmentation of hippocampal subfields from ultra-high resolution in vivo MRI. Hippocampus 2009;19:549-57.
12. Rohlfing T, Brandt R, Menzel R, Maurer CR. Evaluation of atlas selection strategies for atlas-based image segmentation with application to confocal microscopy images of bee brains. Neuroimage 2004;21:1428-42.
13. Heckemann RA, Hajnal JV, Aljabar P, Rueckert D, Hammers A. Automatic anatomical brain MRI segmentation combining label propagation and decision fusion. Neuroimage 2006;33:115-26.
14. Svarer C, Madsen K, Hasselbalch SG, Pinborg LH, Haugbøl S, Frøkjaer VG, et al. MR-based automatic delineation of volumes of interest in human brain PET images using probability maps. Neuroimage 2005;24:969-79.
15. Asman AJ, Bryan FW, Smith SA, Reich DS, Landman BA. Groupwise multi-atlas segmentation of the spinal cord's internal structure. Med Image Anal 2014;18:460-71.
16. Eugenio Iglesias J, Rory Sabuncu M, Van Leemput K. A unified framework for cross-modality multi-atlas segmentation of brain MRI. Med Image Anal 2013;17:1181-91.
17. Del Re EC, Gao Y, Eckbo R, Petryshen TL, Blokland G, Seidman LJ, et al. A new MRI masking technique based on multi-atlas brain segmentation in controls and schizophrenia: A rapid and viable alternative to manual masking. Med Image Anal 2016;26:28-36.
18. Ma D, Cardoso MJ, Modat M, Powell N, Wells J, Holmes H, et al. Automatic structural parcellation of mouse brain MRI using multi-atlas label fusion. PLoS One 2014;9:e86576.
19. Gholipour A, Akhondi-Asl A, Estroff JA, Warfield SK. Multi-atlas multi-shape segmentation of fetal brain MRI for volumetric and morphometric analysis of ventriculomegaly. Neuroimage 2012;60:1819-31.
20. Liu Y, Captur G, Moon JC, Guo S, Yang X, Zhang S, et al. Distance regularized two level sets for segmentation of left and right ventricles from cine-MRI. Magn Reson Imaging 2016;34:699-706.
21. Bouzidi S, Emilien A, Benois-Pineau J, Quesson B, Amar CB, Desbarats P. Segmentation of left ventricle on dynamic MRI sequences for blood flow cancellation in thermotherapy. Int Conf Image Process Theory 2017;62:4349.
22. Yang X, Song Q, Su Y. Automatic segmentation of left ventricle cavity from short-axis cardiac magnetic resonance images. Med Biol Eng Comput 2017;1:115.
23. Jorge Cardoso M, Leung K, Modat M, Keihaninejad S, Cash D, Barnes J, et al. STEPS: Similarity and truth estimation for propagated segmentations and its application to hippocampal segmentation and brain parcellation. Med Image Anal 2013;17:671-84.
24. Isgum I, Staring M, Rutten A, Prokop M, Viergever MA, van Ginneken B. Multi-atlas-based segmentation with local decision fusion. IEEE Trans Med Imaging 2009;28:1000-10.
25. Chan TF, Vese LA. Active contours without edges. IEEE Trans Image Process 2001;10:266-77.
26. Zhang K, Zhang L, Song H, Zhou W. Active contours with selective local or global segmentation: A new formulation and level set method. Image Vis Comput 2010;28:668-76.
27. Caselles V, Kimmel R, Sapiro G. Geodesic active contours. Int J Comput Vis 1997;22:61-79.
28. Li C, Xu C, Gui C, Fox MD. Level set evolution without re-initialization: A new variational formulation. IEEE Conference on Computer Vision and Pattern Recognition 2005;1:430-6.
29. Nowinski WL, Raphel JK, Nguyen BT. Atlas-Based Identification of Cortical Sulci. In: SPIE 2707. Springer; 1996. p. 64-74.
30. Lu Z, Li C, Chen W, Davatzikos C. Level Set Formulation of Label Fusion for Multi-Atlas Segmentation. MICCAI Challenge Workshop SATA; 2013.
31. Li C, Kao CY, Gore JC, Ding Z. Minimization of region-scalable fitting energy for image segmentation. IEEE Trans Image Process 2008;17:1940-9.
32. Ou Y, Sotiras A, Paragios N, Davatzikos C. DRAMMS: Deformable registration via attribute matching and mutual-saliency weighting. Med Image Anal 2011;15:622-39.
33. Avants BB, Tustison NJ, Song G, Cook PA, Klein A, Gee JC. A reproducible evaluation of ANTs similarity metric performance in brain image registration. Neuroimage 2011;54:2033-44.
34. Kittler J. Combining classifiers: A theoretical framework. Pattern Anal Appl 1998;1:18-27.
35. Kuncheva LI. Combining Pattern Classifiers: Methods and Algorithms. New York: Wiley; 2004.
36. Warfield SK, Zou KH, Wells WM. Simultaneous truth and performance level estimation (STAPLE): An algorithm for the validation of image segmentation. IEEE Trans Med Imaging 2004;23:903-21.
