INTRODUCTION
Automatic image processing by machine includes image segmentation (i.e., dividing an image into different types of regions or classes), object recognition, edge detection, etc. All of these tasks build on the segmentation of a picture, so image segmentation is one of the most important image problems. In addition, noise removal and noise reduction are always important classical image problems. In this paper, we address both segmentation and noise reduction with a probabilistic approach. There are many mathematical and statistical methods for image problems; this paper works with the Gaussian mixture model (GMM) as a general Gaussian distribution, the EM algorithm and Bayes' rule. A Bayesian framework usually brings many difficulties because the posterior probability has a complex form, so Markov Chain Monte Carlo algorithms or variational methods with heavy computation must be used to find the MAP estimate. These methods have worked well over the last decades [1,5]. In this paper, a new numerical EM-MAP algorithm based on Bayes' rule is constructed. We put Bernardo's theory of reference analysis [2] into practice for image segmentation: in reference analysis, a sequence of priors and posteriors is built which converges to a posterior probability called the reference posterior probability. We use this idea and modify the EM algorithm accordingly for our image segmentation. After finding the reference posterior probability, MAP estimation and pixel labeling readily produce the segmented image.
This paper is organized as follows. It first reviews the GMM and its properties. Then we introduce the EM-MAP algorithm for learning the parameters of a given image used as training data. The choice of initial values for the EM algorithm is discussed next; the EM-MAP algorithm cannot converge without suitable starting points, but this initial information can be extracted from the histogram of the image. Finally, we show some experiments with simulated images.
GAUSSIAN MIXTURE MODEL
In the Gaussian mixture model (GMM), the gray level of each pixel is distributed as a mixture of k Gaussian densities,

f(x) = \sum_{i=1}^{k} p_i\, N(x \mid \mu_i, \sigma_i^2),   (1)

\sum_{i=1}^{k} p_i = 1,   (2)

where \mu_i and \sigma_i are the mean and standard deviation of class i. For a given image X, the lattice data are the values of the pixels and the GMM is our pixel-based model. The parameters are \theta = (p_1, \ldots, p_{k-1}, \mu_1, \ldots, \mu_k, \sigma_1^2, \ldots, \sigma_k^2), and we can guess the number of regions k in the GMM from the histogram of the lattice data. This will be shown in the experiments.
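As a concrete illustration (our own sketch, not part of the original formulation), the mixture density (1) can be evaluated for an array of gray levels in a few lines of NumPy; the function and variable names are ours.

```python
import numpy as np

def gmm_density(x, p, mu, sigma2):
    """Evaluate the mixture density f(x) of equation (1) at pixel values x."""
    p, mu, sigma2 = (np.asarray(a, dtype=float) for a in (p, mu, sigma2))
    x = np.asarray(x, dtype=float)[..., None]            # broadcast against the k classes
    normals = np.exp(-(x - mu) ** 2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)
    return normals @ p                                   # sum_i p_i N(x | mu_i, sigma_i^2)
```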
EM-MAP ALGORITHM
There is a popular EM algorithm for the GMM in several papers [3,6]. We modify it into the following algorithm, which we call the EM-MAP algorithm.
1. Input: the observed image as a vector x = (x_1, \ldots, x_n) and the label set \{1, 2, \ldots, k\}.
2. Initialize: p_i^0, \mu_i^0, \sigma_i^{2,0} for i = 1, 2, \ldots, k.
3. (E-step) Compute the discrete posterior probabilities

p_{ij}^{r+1} = \Pr^{r+1}(i \mid x_j) = \frac{p_i^r\, N(x_j \mid \mu_i^r, \sigma_i^{2,r})}{f(x_j)}.   (3)
4. (M-step)
p_i^{r+1} = \frac{1}{n} \sum_{j=1}^{n} p_{ij}^{r+1},   (4)

\mu_i^{r+1} = \frac{\sum_{j=1}^{n} p_{ij}^{r+1}\, x_j}{n\, p_i^{r+1}},   (5)

\sigma_i^{2,\,r+1} = \frac{\sum_{j=1}^{n} p_{ij}^{r+1}\, (x_j - \mu_i^{r+1})^2}{n\, p_i^{r+1}}.   (6)
5. Iterate steps 3 and 4 until a specified error is reached, i.e., \sum_i (\theta_i^{r+1} - \theta_i^r)^2 < \varepsilon^2.
6. Compute the pixel labels

pl_j = \mathrm{ArgMax}_i\; p_{ij}^{final}, \qquad j = 1, 2, \ldots, n.   (7)
7. Construct the labeled image corresponding to each pixel of the true image.
This EM-MAP algorithm is a pixel-labeling method in which the labeled image shows each segment or object by a different type of label. Note that formula (3) is Bayes' rule: p_i^r is the discrete prior probability at stage r and p_{ij}^{r+1} is the discrete posterior probability at the next stage. In this algorithm we build a sequence of priors and then posteriors until convergence is reached. The labeled image is chosen by the MAP of the final posterior.
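To make the steps concrete, here is a minimal NumPy sketch of the algorithm above, i.e., of equations (3)-(7); the function names and the tolerance are our own choices, and the stopping rule uses the norm of the parameter change as in step 5.

```python
import numpy as np

def em_map(x, p, mu, sigma2, tol=1e-4, max_iter=100):
    """EM-MAP sketch: iterate (3)-(6) until the parameters settle, label by (7).

    x is the flattened image; p, mu, sigma2 are the histogram-based initial values.
    """
    x = np.asarray(x, dtype=float)[:, None]                  # (n, 1) against k classes
    p, mu, sigma2 = (np.asarray(a, dtype=float) for a in (p, mu, sigma2))
    for _ in range(max_iter):
        # E-step, equation (3): Bayes' rule; each row of num sums to f(x_j)
        num = p * np.exp(-(x - mu) ** 2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)
        post = num / num.sum(axis=1, keepdims=True)          # p_ij^{r+1}
        # M-step, equations (4)-(6)
        p_new = post.mean(axis=0)
        mu_new = (post * x).sum(axis=0) / (len(x) * p_new)
        sigma2_new = (post * (x - mu_new) ** 2).sum(axis=0) / (len(x) * p_new)
        # step 5: stop once the parameter change is below the specified error
        err = np.linalg.norm(np.concatenate([p_new - p, mu_new - mu, sigma2_new - sigma2]))
        p, mu, sigma2 = p_new, mu_new, sigma2_new
        if err < tol:
            break
    # step 6, equation (7): MAP pixel labels from the final posterior
    return post.argmax(axis=1), p, mu, sigma2
```

Reshaping the returned labels back to the image shape then gives the labeled image of step 7.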
A practical way to guess the prior of the parameters is to draw the histogram of the observed image. Not only does the image histogram give us the four quantities above (k, p^0, \mu^0, \sigma^0) for use as initial values in the EM algorithm, but this extracted information usually has maximum entropy as well. This claim is illustrated in the experiments. Moreover, the final posterior probability reaches a stable entropy. We also compute the number of misclassifications in the results, which shows how well our algorithm performs.
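A minimal sketch of this histogram-based initialization and of the entropy used to monitor convergence follows; reading the peak gray levels off the histogram remains a manual step as in the paper, and the common initial bandwidth is our assumption.

```python
import numpy as np

def histogram_init(x, modes, bandwidth=10.0):
    """Initial GMM parameters from histogram peaks read off by eye."""
    x = np.asarray(x, dtype=float).ravel()
    modes = np.asarray(modes, dtype=float)
    labels = np.abs(x[:, None] - modes).argmin(axis=1)        # nearest histogram peak
    p0 = np.bincount(labels, minlength=len(modes)) / x.size   # empirical probabilities
    return p0, modes, np.full(len(modes), bandwidth ** 2)

def entropy(p):
    """Shannon entropy (natural log) of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
```

For the first experiment below, entropy(p0) for p0 = (0.0320, 0.1344, 0.0576, 0.7760) indeed gives about 0.7411 with the natural logarithm.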
EXPERIMENTS
In the first example, we make three boxes in an image and add white noise to it. The observed image and its histogram are shown in figures 1 and 2.
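For reproducibility, here is a sketch of how such a test image might be generated; the image size, box positions and sizes, and the noise level are our assumptions, with gray levels chosen to match the initial means reported below.

```python
import numpy as np

rng = np.random.default_rng(0)
img = np.full((25, 50), 40.0)     # background gray level
img[5:15, 5:15] = 75              # box 1
img[5:15, 20:28] = 210            # box 2
img[5:15, 35:42] = 220            # box 3
noisy = img + rng.normal(scale=10.0, size=img.shape)   # additive white noise
```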
[Figure 1: the observed image with three boxes and its histogram. Figure 2: the segmented image and the entropy at each iteration.]
The information extracted from the histogram is k = 4 and p^0 = (0.0320, 0.1344, 0.0576, 0.7760) as empirical probabilities with entropy 0.7411, with \mu^0 = (40, 75, 210, 220) and \sigma^0 = (100, 100, 100, 100). The stopping time occurs when the L2 norm of the absolute error becomes very small. After running EM-MAP, we needed ten iterations (figure 2), and the entropy of each iteration approaches a stable, maximal value. In the segmented image, blue = 1, cyan = 2, yellow = 3 and red = 4. The misclassification rate is 0.0008, i.e., only a single pixel is labeled red instead of yellow. In this example, pixel labeling and noise removal are both done well.
In the second example, several different shapes such as a circle, a triangle and a rectangle are considered. The observed image and its histogram are shown in figure 3. To run EM-MAP, we again take initial values from the histogram of this image.
[Figure 3: the observed image with several shapes and its histogram. Figure 4: the segmented image and the entropy at each iteration.]
We choose k = 3 and p^0 = (0.08, 0.34, 0.58) as empirical probabilities (relative frequencies), with \mu^0 = (38, 63, 88), \sigma^0 = (12, 12, 12) and a norm of error less than 0.01. In figure 4 we obtain 3 classes: blue = 1, green = 2 and red = 3. There are only 25 misclassified pixels, a rate of 0.005. If we compute the entropy of the posterior probability at each stage of the EM-MAP iteration, this entropy decreases until it reaches a stable form.
The third example is more complex: the number of components is large and, in addition, the image contains dependent noise, which makes noise reduction more difficult. The true image and the observed image are shown in figure 5. Again, the initial information can be found by drawing the histogram of the observed image.
[Figure 5: the true image and the observed image with dependent noise.]
The histogram suggests k = 10, p^0 = (0.0977, 0.1377, 0.0625, 0.0693, 0.0361, 0.0361, 0.1182, 0.1904, 0.1572, 0.0947), \mu^0 = (1, 2, 2.5, 3.5, 4.5, 5.5, 6, 7, 7.5, 8.5) and \sigma^0 = (0.5, 0.5, \ldots, 0.5).
The results after 20 iterations are shown in figure 6. Figure 7 shows the results after 50 iterations, where the entropy has reached a stable value.
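The misclassification figures quoted in these experiments amount to a one-line comparison against the ground-truth labels; this helper, and its assumption that the estimated label indices have already been matched to the true ones (e.g., by sorting classes by their estimated means), are ours.

```python
import numpy as np

def misclassification_rate(true_labels, est_labels):
    """Fraction of wrongly labeled pixels, e.g., 1/1250 = 0.0008 in example 1."""
    return float(np.mean(np.ravel(true_labels) != np.ravel(est_labels)))
```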
CONCLUSIONS
In this paper we have constructed a new numerical EM-GMM-MAP algorithm for image segmentation and noise reduction. The paper puts Bernardo's idea of a sequence of priors and posteriors in reference analysis into practice. We took the known EM-GMM algorithm and added a numerical MAP estimation. We also suggested taking the initial values from the histogram of the image, which leads to the convergence of the EM-MAP method; after convergence of our algorithm, the entropy is stable. The EM algorithm is a first-order iterative algorithm [3], so convergence was slow, and we applied convergence acceleration such as the Steffensen algorithm to obtain second-order convergence. However, we later noted that in the EM-MAP method the number of classes reduces to the real number of classes in the image. Finally, the EM algorithm is a linear iterative method, so our method is suitable for simple images. It is important to note that "for segmentation of real images, the results depend critically on the features and feature models used" [4]; that is not the focus of this paper.
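As a sketch of the kind of acceleration mentioned above, Aitken's delta-squared extrapolation (the scalar core of Steffensen's method) can be applied componentwise to three successive EM parameter vectors; the paper does not detail its exact scheme, so this is only an illustration under our own assumptions.

```python
import numpy as np

def aitken_accelerate(theta0, theta1, theta2, eps=1e-12):
    """Componentwise Aitken delta-squared extrapolation of a fixed-point iteration."""
    theta0, theta1, theta2 = (np.asarray(t, dtype=float) for t in (theta0, theta1, theta2))
    d1 = theta1 - theta0
    d2 = theta2 - 2.0 * theta1 + theta0
    out = theta2.copy()
    ok = np.abs(d2) > eps            # skip components that have already settled
    out[ok] = theta0[ok] - d1[ok] ** 2 / d2[ok]
    return out
```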
ACKNOWLEDGMENTS
Many thanks to Prof. Mohammad-Djafari for his excellent ideas.
REFERENCES
1. C. Andrieu, N. de Freitas, A. Doucet and M. I. Jordan, "An Introduction to MCMC for Machine Learning," Machine Learning, 2003, 50, pp. 5-43.
2. J. M. Bernardo and A. F. M. Smith, Bayesian Theory, John Wiley & Sons, 2000.
3. L. Xu and M. I. Jordan, "On Convergence Properties of the EM Algorithm for Gaussian Mixtures," Neural Computation, 1996, 8, pp. 129-151.
4. M. A. T. Figueiredo, "Bayesian Image Segmentation Using Gaussian Field Priors," EMMCVPR 2005, LNCS 3757, pp. 74-89.
5. M. I. Jordan, Z. Ghahramani, T. S. Jaakkola and L. K. Saul, "An Introduction to Variational Methods for Graphical Models," Machine Learning, 1999, 37, pp. 183-233.
6. R. Farnoosh and B. Zarpak, "Image Restoration with Gaussian Mixture Models," WSEAS Transactions on Mathematics, 2004, 4, 3, pp. 773-777.