
PROJECT REPORT, JUNE 2012

IMAGE DENOISING AND COMPRESSION USING ADAPTIVE WAVELET THRESHOLDING

CHAPTER 1 INTRODUCTION
An image is often corrupted by noise during its acquisition or transmission. The goal of denoising is to remove the noise while retaining as much as possible of the important signal features. Traditionally, this is achieved by linear processing such as Wiener filtering. From a historical point of view, wavelet analysis is a new method, though its mathematical roots date back to the work of Joseph Fourier in the nineteenth century. Fourier laid the foundations with his theories of frequency analysis, which proved to be enormously important and influential. The attention of researchers gradually turned from frequency-based analysis to scale-based analysis when it started to become clear that an approach measuring average fluctuations at different scales might prove less sensitive to noise. The first recorded mention of what we now call a wavelet seems to be in 1909, in a thesis by Alfred Haar. In the late nineteen-eighties, when Daubechies and Mallat first explored and popularized the ideas of wavelet transforms, skeptics described this new field as contributing additional useful tools to a growing toolbox of transforms. The inquiring skeptic may be reluctant to accept claims based on asymptotic theory without looking at real-world evidence. Fortunately, there is an increasing amount of literature addressing these concerns, which helps us appraise the utility of wavelet shrinkage more realistically. Wavelet denoising attempts to remove the noise present in the signal while preserving the signal characteristics, regardless of its frequency content. It involves three steps: a linear forward wavelet transform, a nonlinear thresholding step, and a linear inverse wavelet transform. Wavelet denoising must not be confused with smoothing; smoothing only removes the high frequencies and retains the lower ones. Wavelet shrinkage is a nonlinear process, and this is what distinguishes it from purely linear denoising techniques such as least squares. As will be explained later, wavelet shrinkage depends heavily on the choice of a thresholding parameter, and this choice determines, to a great extent, the efficacy of denoising. Researchers have developed various techniques for choosing denoising parameters, and so far there is no universally best threshold determination technique. A more precise explanation of the wavelet denoising procedure can be given as follows. Assume that the observed data is X(t) = S(t) + N(t), where S(t) is the uncorrupted signal and N(t) is additive noise. Let W(·) and W⁻¹(·) denote the forward and inverse wavelet transforms. Let D(Y, λ) denote the


denoising operator with threshold λ. We intend to denoise X(t) to recover Ŝ(t) as an estimate of S(t). The procedure can be summarized in three steps:

Y = W(X)
Z = D(Y, λ)
Ŝ = W⁻¹(Z)

Here D(Y, λ) is the thresholding operator and λ is the threshold. Thresholding is a simple nonlinear technique which operates on one wavelet coefficient at a time. In its most basic form, each coefficient is compared against the threshold: if the coefficient is smaller than the threshold it is set to zero; otherwise it is kept or modified. Replacing the small noisy coefficients by zero and taking the inverse wavelet transform of the result may lead to a reconstruction with the essential signal characteristics and with less noise. The aim of this project is to study various thresholding techniques such as SureShrink, VisuShrink and BayesShrink and determine the best one for image denoising. In the course of the project, we also aimed to use wavelet denoising as a means of compression, and we were able to implement a compression technique based on a unified denoising and compression principle. Two main issues regarding image denoising were addressed in this project. Firstly, an adaptive threshold for wavelet thresholding of images was proposed, based on Generalized Gaussian Distribution (GGD) modeling of the subband coefficients, and test results showed excellent performance. Secondly, a coder was designed specifically for simultaneous compression and denoising.
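To make the three-step procedure concrete, the following sketch applies it to a 1-D signal using the PyWavelets package; the 'db4' wavelet, the decomposition level and the fixed threshold value are illustrative choices only, not the adaptive thresholds developed later in this report.

```python
import numpy as np
import pywt

def denoise_1d(x, wavelet='db4', level=4, threshold=0.5):
    """Three-step wavelet denoising: Y = W(X), Z = D(Y, lambda), S_hat = W^-1(Z)."""
    # Forward wavelet transform
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Nonlinear thresholding of the detail coefficients (approximation kept intact)
    denoised = [coeffs[0]] + [pywt.threshold(c, threshold, mode='soft')
                              for c in coeffs[1:]]
    # Inverse wavelet transform
    return pywt.waverec(denoised, wavelet)

# Example: a clean sine corrupted by additive Gaussian noise
t = np.linspace(0.0, 1.0, 2048)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * np.random.randn(t.size)
estimate = denoise_1d(noisy)
```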



CHAPTER 2 LITERATURE SURVEY


A vast literature has emerged recently on signal denoising using nonlinear techniques, in the setting of additive white Gaussian noise. The seminal work of Donoho and Johnstone on signal denoising via wavelet thresholding or shrinkage ([13],[14],[15],[16]) has shown that various wavelet thresholding schemes for denoising have near-optimal properties in the minimax sense and perform well in simulation studies of one-dimensional curve estimation. Thresholding has been shown to have better rates of convergence than linear methods for approximating functions in Besov spaces ([13],[14]). Thresholding is a nonlinear technique, yet it is very simple because it operates on one wavelet coefficient at a time. Alternative approaches to nonlinear wavelet-based denoising can be found in, for example, [1],[4],[8],[9],[10],[12],[18],[19],[24],[27],[28],[29],[32],[33],[35], and the references therein. Both theoretical and experimental results indicate that our choice of shrinkage parameters yields uniformly better results than Donoho and Johnstone's VisuShrink procedure; an example suggests, however, that Donoho and Johnstone's (1994, 1995, 1996) SureShrink method, which uses a different shrinkage parameter for each dyadic level, achieves a lower error. On a seemingly unrelated front, lossy compression has been proposed for denoising in several works [6],[5],[21],[25],[28]. Concerns regarding the compression rate were explicitly addressed. This is important because any practical coder must assume a limited resource (such as bits) at its disposal for representing the data. Building on these works, this project explains why

compression (via coefficient quantization) is appropriate for filtering noise from a signal, by making the connection that quantization of transform coefficients approximates the operation of wavelet thresholding for denoising. That is, denoising is mainly due to the zero-zone, and the full precision of the thresholded coefficients is of secondary importance. The method of quantization is guided by a criterion similar to Rissanen's minimum description length (MDL) principle. An important issue is the threshold value of the zero-zone (and of wavelet thresholding). For a natural image, it has been observed that its subband coefficients can be well modeled by a Laplacian distribution. With this assumption, we derive a threshold which is easy to compute and is intuitive. Experiments show that the proposed threshold performs close to


optimal thresholding. Other works [4],[12],[13],[14],[15],[16] have also addressed the connection between compression and denoising, especially with nonlinear algorithms such as wavelet thresholding, in a mathematical framework. However, these latter works were not concerned with quantization and bitrates: compression results from a reduced number of nonzero wavelet coefficients, not from an explicit design of a coder. The intuition behind using lossy compression for denoising may be explained as follows. A signal typically has structural correlations that a good coder can exploit to yield a concise representation. White noise, however, does not have structural redundancies and thus is not easily compressible. Hence, a good compression method can provide a suitable model for distinguishing between signal and noise. The discussion will be restricted to wavelet-based coders, though these insights can be extended to other transform-domain coders as well. A concrete connection between lossy compression and denoising can easily be seen when one examines the similarity between thresholding and quantization, the latter of which is a necessary step in a practical lossy coder. The emergence of wavelets has led to a convergence of linear expansion methods used in signal processing and applied mathematics; in particular, subband coding methods and their associated filters are closely related to wavelet constructions. The quantization of wavelet coefficients with a zero-zone is an approximation to the thresholding function. Thus, provided that the quantization outside of the zero-zone does not introduce significant distortion, it follows that wavelet-based lossy compression achieves denoising. With this connection in mind, this project is about wavelet thresholding for image denoising and also for lossy compression. The threshold choice aids the lossy coder in choosing its zero-zone, and the resulting coder achieves simultaneous denoising and compression if such a property is desired. The theoretical formalization of filtering additive Gaussian noise (of zero mean and standard deviation σ) via thresholding of wavelet coefficients was pioneered by Donoho and Johnstone [14]. A wavelet coefficient is compared to a given threshold and is set to zero if its magnitude is less than the threshold; otherwise, it is kept or modified (depending on the thresholding rule). The threshold acts as an oracle which distinguishes between the insignificant coefficients likely due to noise, and the significant coefficients consisting of important signal structures. Thresholding rules are especially effective for signals with sparse or near-sparse


representations where only a small subset of the coefficients represents all or most of the signal energy. Thresholding essentially creates a region around zero where the coefficients are considered negligible. Outside of this region, the thresholded coefficients are kept to full precision (that is, without quantization). The most well-known thresholding methods of Donoho and Johnstone are VisuShrink [14] and SureShrink [15]. These threshold choices enjoy asymptotic minimax optimalities over function spaces such as Besov spaces. For image denoising, however, VisuShrink is known to yield overly smoothed images. This is because its threshold choice, the so-called universal threshold T = σ√(2 ln M) (where σ² is the noise

variance), can be unwarrantedly large due to its dependence on the number of samples M, which is more than 10⁵ for a typical test image of size 512×512. SureShrink uses a hybrid of the universal threshold and the SURE threshold, derived from minimizing Stein's unbiased risk estimator [30], and has been shown to perform well. SureShrink will be the main comparison to the method proposed here, and, as will be seen later in this report, our proposed threshold often yields better results. Since the works of Donoho and Johnstone, there has been much research on finding thresholds for nonparametric estimation in statistics. However, few are specifically tailored for images. In this project, we propose a framework and a near-optimal threshold in this framework, more suitable for image denoising. This approach can be formally described as Bayesian, but this only describes our mathematical formulation, not our philosophy. The formulation is grounded on the empirical observation that the wavelet coefficients in a subband of a natural image can be summarized adequately by a Generalized Gaussian Distribution (GGD). This observation is well accepted in the image processing community (for example, see [20],[22],[23],[29],[34],[36]) and is used for state-of-the-art image coding [20],[22],[36]. It follows from this observation that the average MSE (in a subband) can be approximated by the corresponding Bayesian squared error risk with the GGD. That is, a sum is approximated by an integral. We emphasize that this is an analytical approximation and our framework is broader than assuming wavelet coefficients are draws from a GGD. The goal is to find the soft threshold that minimizes this Bayesian risk, and we call our method BayesShrink. The proposed Bayesian risk minimization is subband-dependent. Given that the signal is generalized Gaussian distributed and the noise is Gaussian, via numerical calculation a


nearly optimal threshold for soft-thresholding is found to be T_B = σ²/σ_X (where σ² is the noise variance and σ_X is the signal standard deviation). This threshold gives a risk within 5% of the minimal risk over a broad range of parameters in the GGD family. To make this threshold data-driven, the parameters σ² and σ_X are estimated from the observed data, one set for each subband. To achieve simultaneous denoising and compression, the nonzero thresholded wavelet coefficients need to be quantized. A uniform quantizer with centroid reconstruction is used on the GGD. The design parameters of the coder, such as the number of quantization levels and the binwidths, are decided based on a criterion derived from Rissanen's minimum description length (MDL) principle [26]. While achieving mean-squared-error performance comparable with other popular thresholding schemes, the MDL procedure tends to keep far fewer coefficients. From this property, we demonstrate that our method is an excellent tool for simultaneous denoising and compression. This criterion balances the tradeoff between the compression rate and the distortion, and yields a nice interpretation of operating at a fixed slope on the rate-distortion curve.



CHAPTER 3 INTRODUCTION TO IMAGE DENOISING

3.1 THRESHOLDING

3.1.1 Motivation for wavelet thresholding
The plot of wavelet coefficients in Fig.3.2 suggests that small coefficients are dominated by noise, while coefficients with a large absolute value carry more signal information than noise. Replacing the noisy coefficients (small coefficients below a certain threshold value) by zero and taking an inverse wavelet transform may lead to a reconstruction that has less noise. Stated more precisely, this thresholding idea is based on the following assumptions:
- The decorrelating property of a wavelet transform creates a sparse signal: most untouched coefficients are zero or close to zero.
- Noise is spread out equally along all coefficients.
- The noise level is not too high, so that we can distinguish the signal wavelet coefficients from the noisy ones.

Fig.3.1 A noisy signal in time domain



Fig 3.2 The same signal in the wavelet domain.

As it turns out, this method is indeed effective: thresholding is a simple and efficient method for noise reduction. Further, inserting zeros creates more sparsity in the wavelet domain, and here we see a link between wavelet denoising and compression, which has been described in the literature.

3.1.2 Hard and soft thresholding
Hard and soft thresholding with threshold λ are defined as follows. The hard thresholding operator is

D(U, λ) = U for all |U| > λ, and 0 otherwise.

The soft thresholding operator, on the other hand, is

D(U, λ) = sgn(U)·max(0, |U| − λ).

Hard thresholding is a "keep or kill" procedure and is more intuitively appealing; its transfer function is shown in Fig 3.3. The alternative, soft thresholding (whose transfer function is shown in Fig 3.4), shrinks the coefficients above the threshold in absolute value. While at first sight hard thresholding may seem more natural, the continuity of soft thresholding has some advantages.
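Written directly from the two definitions above, a minimal NumPy sketch of the operators (PyWavelets offers an equivalent built-in, pywt.threshold):

```python
import numpy as np

def hard_threshold(u, lam):
    """D(U, lambda) = U if |U| > lambda, 0 otherwise (keep or kill)."""
    return np.where(np.abs(u) > lam, u, 0.0)

def soft_threshold(u, lam):
    """D(U, lambda) = sgn(U) * max(0, |U| - lambda) (shrink toward zero)."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

u = np.array([-3.0, -0.5, 0.2, 1.4, 4.0])
print(hard_threshold(u, 1.0))  # -> [-3.   0.   0.   1.4  4. ]
print(soft_threshold(u, 1.0))  # -> [-2.  -0.   0.   0.4  3. ]
```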



Fig 3.3 Hard thresholding

Fig 3.4 Soft thresholding

Soft thresholding makes algorithms mathematically more tractable. Moreover, hard thresholding does not even work with some algorithms. Sometimes, pure noise coefficients may pass the hard threshold and appear as annoying blips in the output; soft thresholding shrinks these false structures.

3.1.3 Threshold determination
As one may observe, threshold determination is an important question in denoising. A small threshold may yield a result close to the input, but the result may still be noisy. A large threshold, on the other hand, produces a signal with a large number of zero coefficients. This leads to a smooth signal. Paying too much attention to smoothness, however, destroys details, and in image processing it may cause blur and artifacts.

3.1.4 Comparison with the universal threshold
The threshold λ_univ = σ√(2 ln N) (N being the signal length and σ² the noise variance) is well known in the wavelet literature as the universal threshold. It is the optimal threshold in the


asymptotic sense, and it minimizes the L2-norm cost between the function and its soft-thresholded version. In our case N = 2048 and σ = 1, therefore theoretically

λ_univ = √(2 ln 2048) ≈ 3.905
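The value quoted above can be checked directly; a one-line computation assuming the natural logarithm in λ_univ = σ√(2 ln N):

```python
import math

sigma, N = 1.0, 2048
lam_univ = sigma * math.sqrt(2.0 * math.log(N))
print(lam_univ)  # approximately 3.905
```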

It seems that the universal threshold alone is not sufficient to determine a good threshold. However, it is useful for obtaining a starting value when nothing is known of the signal condition. One can surmise that the universal threshold may give a better estimate for the soft threshold when the number of samples is larger (since the threshold is optimal in the asymptotic sense).

3.2 EDGE PRESERVING DENOISING
Denoising is a fundamental step in many image processing tasks. Linear methods have been very popular for their simplicity and speed, but their usage is limited since they tend to blur images. Nonlinear methods are more time consuming, but in general they perform much better. There are different nonlinear denoising methods. The key idea is to perform anisotropic diffusion, as opposed to the isotropic diffusion done by linear methods. Nonlinear methods behave differently depending on the image content: close to edges they diffuse along the edges but not across them, while in smooth areas they perform standard isotropic diffusion. Thus nonlinear methods remove noise while simultaneously preserving edges.

3.3 NOISE MODELS
The principal sources of noise in digital images arise during image acquisition and/or transmission. The performance of imaging sensors is affected by a variety of factors, such as environmental conditions during image acquisition, and by the quality of the sensing elements themselves. For instance, in acquiring images with a CCD camera, light levels and sensor temperature are major factors affecting the amount of noise in the resulting image. Images are corrupted during transmission principally due to interference in the channel used for transmission. For example, an image transmitted over a wireless network might be corrupted as a result of lightning or other atmospheric disturbances.



3.3.1 Spatial and Frequency Properties of Noise
Relevant to our discussion are parameters that define the spatial characteristics of noise, and whether the noise is correlated with the image. Frequency properties refer to the frequency content of noise in the Fourier sense. For example, when the Fourier spectrum of noise is constant, the noise is usually called white noise. This terminology is a carryover from the physical properties of white light, which contains nearly all frequencies in the visible spectrum in equal proportions. It is not difficult to show that the Fourier spectrum of a function containing all frequencies in equal proportions is a constant.

3.3.2 Some Important Noise Probability Density Functions
Based on the assumptions in the previous section, the spatial noise descriptor with which we shall be concerned is the statistical behavior of the intensity values in the noise component. These may be considered random variables, characterized by a probability density function (PDF). The following are among the most common PDFs found in image processing applications.

Gaussian noise
Because of its mathematical tractability in both the spatial and frequency domains, Gaussian (also called normal) noise models are used frequently in practice. In fact, this tractability is so convenient that it often results in Gaussian models being used in situations in which they are marginally applicable at best. The PDF of a Gaussian random variable z is given by

p(z) = (1/(√(2π)·σ))·exp[−(z − μ)²/(2σ²)]    Eqn. 3.1

where z represents intensity, μ is the mean (average) value of z, and σ is its standard deviation. The standard deviation squared, σ², is called the variance of z.

Rayleigh noise
The PDF of Rayleigh noise is given by

p(z) = (2/b)·(z − a)·exp[−(z − a)²/b] for z ≥ a, and p(z) = 0 for z < a    Eqn. 3.2

The mean and variance of this density are given by

μ = a + √(πb/4)    Eqn. 3.3
σ² = b(4 − π)/4    Eqn. 3.4

Erlang (gamma) noise
The PDF of Erlang noise is given by

p(z) = [a^b·z^(b−1)/(b − 1)!]·exp(−az) for z ≥ 0, and p(z) = 0 for z < 0    Eqn. 3.5

where the parameters are such that a > 0, b is a positive integer, and "!" indicates factorial. The mean and variance of this density are μ = b/a and σ² = b/a².

Exponential noise
The PDF of exponential noise is given by

p(z) = a·exp(−az) for z ≥ 0, and p(z) = 0 for z < 0    Eqn. 3.6

where a > 0. The mean and variance of this density function are μ = 1/a and σ² = 1/a². Note that this PDF is a special case of the Erlang PDF, with b = 1.

Uniform noise
The PDF of uniform noise is given by

p(z) = 1/(b − a) for a ≤ z ≤ b, and p(z) = 0 otherwise    Eqn. 3.7

The mean and variance are given by μ = (a + b)/2 and σ² = (b − a)²/12.

Impulse (salt-and-pepper) noise
The PDF of (bipolar) impulse noise is given by

p(z) = Pa for z = a, Pb for z = b, and 0 otherwise    Eqn. 3.8

If b > a, intensity b will appear as a light dot in the image; conversely, level a will appear as a dark dot. If either Pa or Pb is zero, the impulse noise is called unipolar. If neither probability is zero, and especially if they are approximately equal, the impulse noise values will resemble salt-and-pepper noise. Data dropout and spike noise are also terms used to refer to this type of noise. We use the terms impulse and salt-and-pepper noise interchangeably.

3.4 NOISE IN NATURAL COLOR PHOTOS
With the surging popularity of digital cameras, digital photography is rapidly replacing traditional photography as the medium of choice for virtually all but a few devoted professionals. In digital photography, post-processing is an integral part of obtaining better images, even for casual picture takers. Post-processing is especially important for people who are willing to go beyond point-and-shoot, and one of the key steps in image processing is denoising. All digital cameras today take color photos. (Some cameras allow for black-and-white images, but these are converted from color images in-camera.) Noise is present in virtually all digital photos, and there are several sources for it. When light (photons) strikes the image sensor, electrons are produced. These "photoelectrons" give rise to analog signals, which are then converted into digital pixels by an analog-to-digital (A/D) converter. The random nature of photons striking the image sensor is an important source of noise. This type of noise, known as photon shot noise, is roughly proportional to the square root of the signal level as a result of the central limit theorem. Thus the lower the signal, the higher the noise becomes relative to the signal. As a result, noise in color images can be very pronounced in images shot under low-light conditions, because the signals must be amplified more. In general, the noise level is very low for photos shot outdoors using low ISO (ISO 100 or less). But with most consumer compact cameras noise becomes visible at ISO 200, and it becomes unacceptable at ISO 400 or higher. With more advanced and expensive digital SLRs noise remains low even at


ISO 400, and becomes unacceptable only at ISO 1600 or higher. Noise is in general much worse under artificial lighting, especially under fluorescent lighting. One of the important characteristics of digital noise is that it is not uniform across all channels. Very often noise is concentrated in the blue channel, while the green and the red channels are relatively clean. For photos taken under artificial lighting (without a flash), the blue channel can be so noisy that it is often unrecognizable.

Fig.3.5 The original natural color image without artificial noise.

Another significant source of noise is the so-called leakage current. Semiconductor image sensors work by converting energy from photons into electrical energy, in the form of a current or voltage signal. Unfortunately, thermal energy present in the semiconductor can also generate an electrical signal that is indistinguishable from the optical signal. As temperature increases, so does the leakage current in the circuit. The effects of leakage current are most apparent in long exposures, in which the light signal is very low.


Fig.3.6 A zoom-in (upper-left) of the color image in Fig.3.5 and its RGB channels. Color noise is more evident. The red (upper-right) and green (lower-left) channels are much cleaner than the blue (lower-right) channel.

Modeling noise in digital color photos can be a difficult task. The photon shot noise is clearly signal dependent and thus not uniform from pixel to pixel. Nearly all digital cameras today use the so-called Bayer pattern in their photo sensors, where half of the pixels are used to capture the green channel and the other half are divided evenly to capture the red and the blue channels. These partial data are then interpolated to complete the RGB channels of a color photo. So unless we access the raw data (most consumer digital cameras do not offer this feature), it is clear that noise is not independent from pixel to pixel in any channel. Most digital photos are in JPEG format, which degrades images through quantization and artifacts. Furthermore, all cameras employ proprietary in-camera sharpening, denoising and anti-aliasing. These factors combine to make effective modeling of noise, at least in images taken by consumer cameras and exported in JPEG format, virtually impossible. For this


reason, any noise model assuming independent and identically distributed noise from pixel to pixel can be unrealistic.

3.5 PEAK SIGNAL-TO-NOISE RATIO
The phrase peak signal-to-noise ratio, often abbreviated PSNR, is an engineering term for the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects the fidelity of its representation. Because many signals have a very wide dynamic range, PSNR is usually expressed in terms of the logarithmic decibel scale. The PSNR is most commonly used as a measure of the quality of reconstruction of lossy compression codecs (e.g., for image compression). The signal in this case is the original data, and the noise is the error introduced by compression. When comparing compression codecs it is used as an approximation to human perception of reconstruction quality; therefore in some cases one reconstruction may appear to be closer to the original than another, even though it has a lower PSNR (a higher PSNR would normally indicate that the reconstruction is of higher quality). One has to be extremely careful with the range of validity of this metric; it is only conclusively valid when it is used to compare results from the same codec (or codec type) and the same content. It is most easily defined via the mean squared error (MSE), which for two m×n monochrome images I and K, where one of the images is considered a noisy approximation of the other, is defined as:

MSE = (1/(mn))·Σ_{i=0}^{m−1} Σ_{j=0}^{n−1} [I(i,j) − K(i,j)]²

The PSNR is defined as:

PSNR = 10·log10(MAX_I²/MSE) = 20·log10(MAX_I/√MSE)

Here, MAX_I is the maximum possible pixel value of the image. When the pixels are represented using 8 bits per sample, this is 255. More generally, when samples are represented using linear PCM with B bits per sample, MAX_I is 2^B − 1. For color images with three RGB values per pixel, the definition of PSNR is the same except that the MSE is the sum over all squared value differences


divided by image size and by three. Alternately, for color images the image is converted to a different color space and PSNR is reported against each channel of that color space, e.g., YCbCr or HSL. Typical values for the PSNR in lossy image and video compression are between 30 and 50 dB, where higher is better. Acceptable values for wireless transmission quality loss are considered to be about 20 dB to 25 dB. When the two images are identical, the MSE will be zero.
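A short sketch of the MSE/PSNR computation for 8-bit monochrome images, following the definitions above (the images used here are synthetic placeholders):

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference image and a degraded version."""
    ref = reference.astype(np.float64)
    tst = test.astype(np.float64)
    mse = np.mean((ref - tst) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Example: synthetic 512x512 image with additive Gaussian noise (sigma = 20)
img = np.random.randint(0, 256, (512, 512)).astype(np.float64)
noisy = np.clip(img + 20.0 * np.random.randn(*img.shape), 0, 255)
print(psnr(img, noisy))
```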

Fig.3.7 Image at different PSNR values: Q=90 (PSNR 45.53 dB) and Q=30 (PSNR 36.81 dB).



CHAPTER 4 IMAGE DENOISING AND COMPRESSION


A signal typically has structural correlations that a good coder can exploit to yield a concise representation. White noise, however, does not have structural redundancies and thus is not easily compressible. Hence, a good compression method can provide a suitable model for distinguishing between signal and noise. The discussion will be restricted to wavelet-based coders, though these insights can be extended to other transform-domain coders as well. A concrete connection between lossy compression and denoising can easily be seen when one examines the similarity between thresholding and quantization, the latter of which is a necessary step in a practical lossy coder. That is, the quantization of wavelet coefficients with a zero-zone is an approximation to the thresholding function (see Fig.4.1). Thus, provided that the quantization outside of the zero-zone does not introduce significant distortion, it follows that wavelet-based lossy compression achieves denoising.
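To make the analogy concrete, the sketch below implements a uniform quantizer with a zero-zone of half-width T: coefficients inside the zero-zone are set to zero (the denoising step), and the rest are reconstructed at bin midpoints. The bin width and the midpoint reconstruction are illustrative simplifications; the coder described later uses centroid reconstruction on the GGD.

```python
import numpy as np

def zero_zone_quantize(coeffs, T, delta):
    """Uniform quantization with a dead zone of half-width T around zero."""
    c = np.asarray(coeffs, dtype=float)
    out = np.zeros_like(c)            # everything inside the zero-zone maps to 0
    keep = np.abs(c) > T
    # bin index beyond the zero-zone, then midpoint reconstruction
    k = np.floor((np.abs(c[keep]) - T) / delta)
    out[keep] = np.sign(c[keep]) * (T + (k + 0.5) * delta)
    return out

print(zero_zone_quantize([-2.3, -0.4, 0.1, 0.9, 3.7], T=0.5, delta=1.0))
# -> [-2.  0.  0.  1.  4.]
```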

Fig.4.1 The thresholding function can be approximated by quantization with a zero-zone.

A wavelet coefficient is compared to a given threshold and is set to zero if its magnitude is less than the threshold; otherwise, it is kept or modified (depending on the thresholding rule). The threshold acts as an oracle which distinguishes between the insignificant coefficients likely due to noise, and the significant coefficients consisting of important signal structures. Thresholding rules are especially effective for signals with sparse or near-sparse representations where only a small subset of the coefficients represents all or most of the signal energy.


Thresholding essentially creates a region around zero where the coefficients are considered negligible.

4.1 THE WAVELET TRANSFORM
The wavelet transform provides a time-frequency representation of a signal. (Other transforms give this information too, such as the short-time Fourier transform, Wigner distributions, etc.) Often a particular spectral component occurring at some instant is of special interest, and in these cases it is very beneficial to know the time intervals in which these spectral components occur. For example, in EEGs the latency of an event-related potential is of particular interest (an event-related potential is the response of the brain to a specific stimulus such as a flash of light; the latency of this response is the amount of time elapsed between the onset of the stimulus and the response). The wavelet transform is capable of providing the time and frequency information simultaneously, hence giving a time-frequency representation of the signal. The WT was developed as an alternative to the short-time Fourier transform (STFT), to overcome some resolution-related problems of the STFT. In brief, we pass the time-domain signal through various high-pass and low-pass filters, which filter out either the high-frequency or the low-frequency portions of the signal. This procedure is repeated, each time removing some portion of the signal corresponding to some frequencies. Here is how this works: suppose we have a signal which has frequencies up to 1000 Hz. In the first stage we split the signal into two parts by passing it through a high-pass and a low-pass filter (the filters should satisfy certain conditions, the so-called admissibility condition), which results in two different versions of the same signal: the portion of the signal corresponding to 0-500 Hz (low-pass portion), and 500-1000 Hz (high-pass portion). Then we take either portion (usually the low-pass portion), or both, and do the same thing again. This operation is called decomposition. Assuming that we have taken the low-pass portion, we now have three sets of data, each corresponding to the same signal at frequencies 0-250 Hz, 250-500 Hz, and 500-1000 Hz. We then take the low-pass portion again and pass it through the low- and high-pass filters; we now have four sets of signals corresponding


to 0-125 Hz, 125-250 Hz, 250-500 Hz, and 500-1000 Hz. We continue like this until we have decomposed the signal to a pre-defined level. We then have a set of signals which actually represent the same signal, but all corresponding to different frequency bands. We know which signal corresponds to which frequency band, and if we put all of them together and plot them on a 3-D graph, we will have time on one axis, frequency on the second, and amplitude on the third axis. The uncertainty principle, originally formulated by Heisenberg, states that the momentum and the position of a moving particle cannot both be known exactly at the same time. This applies to our subject as follows: the frequency and time information of a signal at some point in the time-frequency plane cannot be known exactly. In other words, we cannot know what spectral component exists at any given time instant. The best we can do is to investigate what spectral components exist in any given interval of time. This is a problem of resolution, and it is the main reason why researchers switched from the STFT to the WT.

4.1.1 Discrete wavelet transform
The foundations of the DWT go back to 1976, when Croiser, Esteban, and Galand devised a technique to decompose discrete-time signals. Crochiere, Weber, and Flanagan did similar work on coding of speech signals in the same year. They named their analysis scheme subband coding. In 1983, Burt defined a technique very similar to subband coding and named it pyramidal coding, which is also known as multiresolution analysis. Later, in 1989, Vetterli and Le Gall made some improvements to the subband coding scheme, removing the existing redundancy in the pyramidal coding scheme. Subband coding is explained below.

Subband coding and multiresolution
The main idea is the same as in the CWT: a time-scale representation of a digital signal is obtained using digital filtering techniques. Recall that the CWT is a correlation between a wavelet at different scales and the signal, with the scale (or the frequency) being used as a measure of similarity. The continuous wavelet transform is computed by changing the scale of the analysis window, shifting the window in time, multiplying by the signal, and integrating over all times. In the discrete case, filters of different cutoff frequencies are used to analyze the signal at different scales. The signal is passed through a series of high-pass filters to analyze the high frequencies, and it is passed through a series of low-pass filters to analyze the low frequencies. The resolution of the signal, which is a measure of the amount of detail information in the signal,


is changed by the filtering operations, and the scale is changed by upsampling and downsampling (subsampling) operations. Subsampling a signal corresponds to reducing the sampling rate, or removing some of the samples of the signal. For example, subsampling by two refers to dropping every other sample of the signal. Subsampling by a factor n reduces the number of samples in the signal n times. Upsampling a signal corresponds to increasing the sampling rate of a signal by adding new samples to the signal. For example, upsampling by two refers to adding a new sample, usually a zero or an interpolated value, between every two samples of the signal. Upsampling a signal by a factor of n increases the number of samples in the signal by a factor of n. The procedure starts with passing the signal (sequence) through a half-band digital low-pass filter with impulse response h[n]. Filtering a signal corresponds to the mathematical operation of convolution of the signal with the impulse response of the filter. The convolution operation in discrete time is defined as follows:

x[n] * h[n] = Σ_{k=−∞}^{∞} x[k]·h[n − k]    Eqn. 4.1

A half-band low-pass filter removes all frequencies that are above half of the highest frequency in the signal. For example, if a signal has a maximum component of 1000 Hz, then half-band low-pass filtering removes all the frequencies above 500 Hz. The unit of frequency is of particular importance at this point. In discrete signals, frequency is expressed in terms of radians. Accordingly, the sampling frequency of the signal is equal to 2π radians in terms of radial frequency. Therefore, the highest frequency component that exists in a signal will be π radians, if the signal is sampled at the Nyquist rate (which is twice the maximum frequency that exists in the signal); that is, the Nyquist rate corresponds to π rad/s in the discrete frequency domain. Therefore, using Hz is not appropriate for discrete signals. However, Hz is used whenever it is needed to clarify a discussion, since it is very common to think of frequency in terms of Hz. It should always be remembered that the unit of frequency for discrete-time signals is radians. After passing the signal through a half-band low-pass filter, half of the samples can be eliminated according to Nyquist's rule, since the signal now has a highest frequency of π/2


radians instead of π radians. Simply discarding every other sample subsamples the signal by two, and the signal will then have half the number of points. The scale of the signal is now doubled. Note that the low-pass filtering removes the high-frequency information but leaves the scale unchanged; only the subsampling process changes the scale. Resolution, on the other hand, is related to the amount of information in the signal, and therefore it is affected by the filtering operations. Half-band low-pass filtering removes half of the frequencies, which can be interpreted as losing half of the information. Therefore, the resolution is halved after the filtering operation. Note, however, that the subsampling operation after filtering does not affect the resolution, since removing half of the spectral components from the signal makes half the number of samples redundant anyway: half the samples can be discarded without any loss of information. In summary, the low-pass filtering halves the resolution but leaves the scale unchanged; the signal is then subsampled by 2, since half of the samples are redundant, and this doubles the scale. Having said that, we now look at how the DWT is actually computed. The DWT analyzes the signal at different frequency bands with different resolutions by decomposing the signal into a coarse approximation and detail information. The DWT employs two sets of functions, called scaling functions and wavelet functions, which are associated with low-pass and high-pass filters, respectively. The decomposition of the signal into different frequency bands is simply obtained by successive high-pass and low-pass filtering of the time-domain signal. The original signal x[n] is first passed through a half-band high-pass filter g[n] and a low-pass filter h[n]. After the filtering, half of the samples can be eliminated according to Nyquist's rule, since the signal now has a highest frequency of π/2 radians instead of π. The signal can therefore be subsampled by 2, simply by discarding every other sample. This constitutes one level of decomposition and can mathematically be expressed as follows:

y_high[k] = Σ_n x[n]·g[2k − n]    Eqn. 4.2
y_low[k]  = Σ_n x[n]·h[2k − n]    Eqn. 4.3

where y_high[k] and y_low[k] are the outputs of the high-pass and low-pass filters, respectively, after subsampling by 2. Fig.4.2 illustrates this procedure, where x[n] is the original signal to be decomposed, and h[n] and g[n] are the low-pass and high-pass filters, respectively.
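A sketch of one decomposition level (Eqns. 4.2 and 4.3), using the two-tap Haar pair as a stand-in for the half-band low-pass/high-pass filters; any orthogonal wavelet filter pair could be substituted, and the downsampling phase chosen here is one common convention.

```python
import numpy as np

# Haar analysis filters: a simple example of a half-band low-pass / high-pass pair
h = np.array([1.0, 1.0]) / np.sqrt(2.0)   # low-pass h[n]
g = np.array([1.0, -1.0]) / np.sqrt(2.0)  # high-pass g[n]

def analysis_level(x):
    """One subband decomposition level: filter with h and g, then downsample by 2."""
    y_low = np.convolve(x, h)[1::2]   # approximation (low-pass) coefficients
    y_high = np.convolve(x, g)[1::2]  # detail (high-pass) coefficients
    return y_low, y_high

x = np.random.randn(512)
a, d = analysis_level(x)
print(a.shape, d.shape)  # each subband holds roughly half the samples
```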


Fig.4.2 The subband coding algorithm

As an example, suppose that the original signal x[n] has 512 sample points, spanning a frequency band of zero to π rad/s. At the first decomposition level, the signal is passed through the high-pass and low-pass filters, followed by subsampling by 2. The output of the high-pass filter has 256 points (hence half the time resolution), but it only spans the frequencies π/2 to π rad/s (hence double the frequency resolution). These 256 samples constitute the first level of DWT coefficients. The output of the low-pass filter also has 256 samples, but it spans the other half of the frequency band, from 0 to π/2 rad/s. This signal is then passed through the same


low-pass and high-pass filters for further decomposition. The output of the second low-pass filter followed by subsampling has 128 samples spanning a frequency band of 0 to π/4 rad/s, and the output of the second high-pass filter followed by subsampling has 128 samples spanning a frequency band of π/4 to π/2 rad/s. The second high-pass filtered signal constitutes the second level of DWT coefficients. This signal has half the time resolution, but twice the frequency resolution, of the first-level signal. In other words, time resolution has decreased by a factor of 4, and frequency resolution has increased by a factor of 4, compared to the original signal. The low-pass filter output is then filtered once again for further decomposition. This process continues until two samples are left. For this specific example there would be 8 levels of decomposition, each having half the number of samples of the previous level. The DWT of the original signal is then obtained by concatenating all the coefficients, starting from the last level of decomposition (the remaining two samples, in this case). The DWT will then have the same number of coefficients as the original signal. The frequencies that are most prominent in the original signal will appear as high amplitudes in the region of the DWT signal that includes those particular frequencies. The difference of this transform from the Fourier transform is that the time localization of these frequencies is not lost. However, the time localization has a resolution that depends on the level at which they appear. If the main information of the signal lies in the high frequencies, as happens most often, the time localization of these frequencies will be more precise, since they are characterized by a larger number of samples. If the main information lies only at very low frequencies, the time localization will not be very precise, since few samples are used to express the signal at these frequencies. This procedure in effect offers good time resolution at high frequencies, and good frequency resolution at low frequencies. Most practical signals encountered are of this type. The frequency bands that are not very prominent in the original signal will have very low amplitudes, and that part of the DWT signal can be discarded without any major loss of information, allowing data reduction.



Fig.4.3 1-D wavelet decomposition

The wavelet decomposition of an image is done as follows. In the first level of decomposition, the image is split into four subbands, namely the HH, HL, LH and LL subbands. The HH subband gives the diagonal details of the image, the HL subband gives the horizontal features, and the LH subband represents the vertical structures. The LL subband is the low-resolution residual consisting of low-frequency components, and it is this subband which is further split at higher levels of decomposition.
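The four subbands of one 2-D decomposition level can be obtained with PyWavelets as sketched below; the 'haar' wavelet and the random test image are placeholder choices. The returned cA array is the LL low-resolution residual, and the detail tuple holds the horizontal, vertical and diagonal detail subbands.

```python
import numpy as np
import pywt

image = np.random.rand(512, 512)   # placeholder for a test image such as Lena

# One decomposition level: LL approximation plus three detail subbands
cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')
print(cA.shape, cH.shape, cV.shape, cD.shape)   # each is 256 x 256

# Further levels split only the low-resolution residual
cA2, details2 = pywt.dwt2(cA, 'haar')
```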

Fig.4.4 Subbands of 2D orthogonal wavelet transform



Fig.4.5 Original image used for demonstrating the 2-D wavelet transform.

Fig.4.6 A one-level (K =1), 2-D wavelet transform using the symmetric wavelet transform with the 9/7 Daubechies coefficients (the high-frequency bands have been enhanced to show detail).



4.2 WAVELET THRESHOLDING AND THRESHOLD SELECTION
Let the signal be {f_ij, i, j = 1, …, N}, where N is some integer power of 2. It has been corrupted by additive noise, and one observes

g_ij = f_ij + ε_ij,  i, j = 1, …, N    Eqn. 4.4

where the {ε_ij} are independent and identically distributed (iid) as normal N(0, σ²) and independent of {f_ij}. The goal is to remove the noise, or denoise {g_ij}, and obtain an estimate {f̂_ij} of {f_ij} which minimizes the mean squared error

MSE(f̂) = (1/N²)·Σ_{i,j=1}^{N} (f̂_ij − f_ij)²    Eqn. 4.5

We can use the same principles of thresholding and shrinkage to achieve denoising as in 1-D signals. The problem again boils down to finding an optimal threshold such that the mean squared error between the signal and its estimate is minimized. The different denoising methods we investigate differ only in the selection of the threshold. The basic procedure remains the same:
- Calculate the DWT of the image.
- Threshold the wavelet coefficients (the threshold may be universal or subband adaptive).
- Compute the IDWT to get the denoised estimate.

Soft thresholding is used for all the algorithms for the following reasons: soft thresholding has been shown to achieve near-minimax rates over a large number of Besov spaces, and it is also found to yield visually more pleasing images, whereas hard thresholding is found to introduce artifacts in the recovered images. We now study three thresholding techniques, VisuShrink, SureShrink and BayesShrink, and investigate their performance for denoising various standard images.
4.2.2 VisuShrink
VisuShrink performs thresholding by applying the universal threshold proposed by Donoho and Johnstone. This threshold is given by T_UNIV = σ√(2 ln M), where σ² is the noise variance and M is the number of pixels in the image. It is proved that the maximum of any M values iid as N(0, σ²) will be smaller than the universal threshold with high probability, with the probability approaching 1



as M increases. Thus, with high probability, a pure noise signal is estimated as being identically zero. This is because the universal threshold (UT) is derived under the constraint that, with high probability, the estimate should be at least as smooth as the signal. So the UT tends to be high for large values of M, killing many signal coefficients along with the noise. Thus, the threshold does not adapt well to discontinuities in the signal.

4.2.3 SureShrink
What is SURE? Let μ = (μ_i : i = 1, …, d) be a length-d vector, and let x = {x_i} (with x_i distributed as N(μ_i, 1)) be multivariate normal observations with mean vector μ. Let μ̂ = μ̂(x) be a fixed estimate of μ based on the observations. SURE (Stein's Unbiased Risk Estimator) is a method for estimating the loss ‖μ̂ − μ‖² in an unbiased fashion. For the soft-threshold estimator with threshold t,

SURE(t; x) = d − 2·#{i : |x_i| ≤ t} + Σ_i min(|x_i|, t)²    Eqn. 4.6

For an observed vector x (in our problem, x is the set of noisy wavelet coefficients in a subband), we want to find the threshold t_S that minimizes SURE(t; x), i.e.

t_S = argmin_t SURE(t; x)    Eqn. 4.7

A sketch of this search is given below.
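The sketch evaluates the risk estimate at the sorted |x_i| values, as described next; the coefficients are assumed to have been scaled to unit noise variance so that the N(μ_i, 1) model above applies, and the hybrid rule that SureShrink actually uses (falling back to the universal threshold for very sparse subbands) is omitted.

```python
import numpy as np

def sure_threshold(x):
    """Return t minimizing SURE(t; x) = d - 2*#{|x_i| <= t} + sum_i min(|x_i|, t)^2."""
    a = np.sort(np.abs(x))
    d = a.size
    cum = np.cumsum(a ** 2)
    k = np.arange(1, d + 1)
    # For t = a[k-1]: sum_i min(|x_i|, t)^2 = cum[k-1] + (d - k) * a[k-1]^2
    risks = d - 2.0 * k + cum + (d - k) * a ** 2
    return a[np.argmin(risks)]

# Example: mostly noise plus a few large "signal" coefficients
coeffs = np.random.randn(1024)
coeffs[-24:] += 5.0
print(sure_threshold(coeffs))
```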

The above optimization problem is computationally straightforward. Without loss of generality, we can reorder x in order of increasing |x_i|. Then, on intervals of t that lie between two values of |x_i|, SURE(t) is strictly increasing. Therefore the minimizing value t_S is one of the data values |x_i|. There are only d such values, and the threshold can be obtained using O(d log d) computations. In our experiments the 'db4' wavelet was used with 4 levels of decomposition. The results are much better than VisuShrink: the sharp features of the image are retained and the MSE is considerably lower. This is because SureShrink is subband adaptive: a separate threshold is computed for each detail subband.

4.2.4 BayesShrink
In BayesShrink we determine the threshold for each subband assuming a Generalized Gaussian Distribution (GGD). The GGD is given by

GG_{σ_X,β}(x) = C(σ_X, β)·exp{−[α(σ_X, β)·|x|]^β}, where α(σ_X, β) = σ_X⁻¹·[Γ(3/β)/Γ(1/β)]^{1/2}    Eqn. 4.8

C(σ_X, β) = β·α(σ_X, β)/(2·Γ(1/β))    Eqn. 4.9

Γ(t) = ∫₀^∞ e^(−u)·u^(t−1) du    Eqn. 4.10

The parameter σ_X is the standard deviation and β is the shape parameter. It has been observed that with a shape parameter β ranging from 0.5 to 1, this distribution can describe the coefficients in a subband for a large set of natural images. Assuming such a distribution for the wavelet coefficients, we empirically estimate β and σ_X for each subband and try to find the threshold T which minimizes the Bayesian risk, i.e., the expected value of the mean squared error,

r(T) = E(X̂ − X)² = E_X E_{Y|X}(X̂ − X)²    Eqn. 4.11

where X̂ = η_T(Y), Y|X ~ N(X, σ²) and X ~ GG_{σ_X,β}. The optimal threshold T* is given by

T*(σ_X, β) = argmin_T r(T)    Eqn. 4.12

This is a function of the parameters σ_X and β. Since there is no closed-form solution for T*, numerical calculation is used to find its value. It is observed that the threshold value

T_B(σ_X) = σ²/σ_X    Eqn. 4.13

is very close to T*. The estimated threshold T_B = σ²/σ_X is not only nearly optimal but also has intuitive appeal. The normalized threshold T_B/σ is inversely proportional to σ_X, the standard deviation of X, and proportional to σ, the noise standard deviation. When σ/σ_X << 1, the signal is much stronger than the noise, and T_B/σ is chosen to be small in order to preserve most of the signal and remove some of the noise; when σ/σ_X >> 1, the noise dominates and the

normalized threshold is chosen to be large to remove the noise which has overwhelmed the signal. Thus, this threshold choice adapts to both the signal and the noise characteristics as reflected in the parameters σ and σ_X.



Parameter estimation to determine the threshold
The GGD parameters σ_X and β need to be estimated to compute T_B(σ_X). The noise variance σ² is estimated from the finest diagonal subband HH1 by the robust median estimator

σ̂ = Median(|Y_ij|)/0.6745,  Y_ij ∈ subband HH1    Eqn. 4.14

The parameter β does not explicitly enter the expression for T_B(σ_X). Therefore it suffices to estimate directly the signal standard deviation σ_X. The observation model is Y = X + V, with X and V independent of each other, hence

σ_Y² = σ_X² + σ²    Eqn. 4.15

where σ_Y² is the variance of Y. Since Y is modelled as zero-mean, σ_Y² can be found empirically by

σ̂_Y² = (1/n²)·Σ_{i,j=1}^{n} Y_ij²    Eqn. 4.16

where n × n is the size of the subband under consideration. Thus

T̂_B(σ̂_X) = σ̂²/σ̂_X, where σ̂_X = √(max(σ̂_Y² − σ̂², 0))    Eqn. 4.17

To summarize, BayesShrink performs soft thresholding with the data-driven, subband-dependent threshold T̂_B(σ̂_X) = σ̂²/σ̂_X.
4.3 MDL PRINCIPLE FOR COMPRESSION-BASED DENOISING
Recall that our hypothesis is that compression achieves denoising because the zero-zone in the quantization step (typical in compression methods) corresponds to thresholding in denoising. For the purpose of compression, after using the adaptive threshold T̂_B for the zero-zone, there still remain the questions of how to quantize the coefficients outside of the zero-zone and how to code them. Fig. 4.9 illustrates the block diagram of the compression method. It shows that the coder needs to decide on the design parameters m and Δ (the number of quantization bins and the binwidth, respectively), in addition to the zero-zone threshold.



Fig.4.9 Block diagram for compression-based denoising.

Compression is the process of reducing the amount of data required to represent a given quantity of information. Data are the means by which information is conveyed: various amounts of data can be used to represent the same amount of information, and representations that contain irrelevant and repeated information are said to contain redundant data. Quantization means digitizing the amplitude values. Denoising is achieved in the wavelet transform domain by lossy compression, which involves the design of the parameters T, m and Δ, relating to the zero-zone width, the number of quantization levels, and the quantization binwidth, respectively. The choice of these parameters is discussed next. When compressing a signal, two important objectives are to be kept in mind. On the one hand, the distortion between the compressed signal and the original should be kept low; on the other hand, the description of the compressed signal should use as few bits as possible to code. Typically, these two objectives are conflicting, so a suitable criterion is needed to reach a compromise. Rissanen's MDL principle allows a tradeoff between these two objectives. Our MDL procedures achieve comparable MSE performance while keeping far fewer (nonzero) coefficients. All the MDL procedures and their comparative counterparts in this report are based on the assumption that the wavelet coefficients from a given subband are a simple random sample from some distribution. Within the MDL paradigm, more elaborate dependence structures (both within and between subbands) could be incorporated. According to the MDL


principle, given a sequence of observations, the best model is the one that yields the shortest description length for describing the data using the model, where the description length can be interpreted as the number of bits needed for encoding. This description can be accomplished by a two-part code: one part to describe the model and the other the description of the data using the model. In the original thresholding scheme, the thresholded coefficients are then inverse transformed. In this work, to show that quantization approximates thresholding, there is an additional step of quantizing the thresholded coefficients before the inverse transform. More precisely, given the set of observations Y, we wish to find a model X̂ to describe it. The MDL principle chooses the X̂ which minimizes the two-part code length

L(Y, X̂) = L(Y|X̂) + L(X̂)    Eqn. 4.18

where L(Y|X̂) is the code length for Y based on X̂, and L(X̂) is the code length for X̂. In Saito's simultaneous compression and denoising method for a length-M one-dimensional signal, the hard-threshold function η_T(Y) was used to generate the models, where the number of nonzero coefficients to retain is determined by minimizing the MDL criterion. The first term L(Y|X̂) is the idealized code length with the normal distribution, and the second term L(X̂) is taken to be (3/2)·K·log2 M, of which K·log2 M bits are needed to indicate the location of each nonzero coefficient (assuming a uniform indexing) and (1/2)·K·log2 M bits are used to store the coefficient values. Although compression has been achieved in the sense that fewer nonzero coefficients are kept, this does not address the quantization step necessary in a practical compression setting. In the following, an MDL-based quantization (MDLQ) criterion is developed by minimizing L(Y, X̂) with the restriction that X̂ belongs to the set of quantized signals. This MDLQ compression with BayesShrink zero-zone selection is applied to each subband independently. The steps discussed are summarized as follows:
- Estimate the noise variance σ² and the GGD standard deviation σ_X.
- Calculate the threshold T̂_B and soft-threshold the wavelet coefficients.
- To quantize the nonzero coefficients, minimize the MDLQ criterion over m and Δ and find the corresponding quantized coefficients, which form the compressed, denoised estimate of X.

The coarsest subband LLJ is quantized differently in that it is not thresholded, and its quantization assumes the uniform distribution. The coefficients of LLJ are essentially local averages of the image and are not characterized by a distribution with a peak at zero, so the uniform distribution is used for generality. With the mean subtracted, the uniform distribution is assumed to be symmetric about zero. Every quantization bin (including the zero-zone) is of width δ, and the reconstruction values are the midpoints of the intervals.

4.4 IMAGE DENOISING ALGORITHM

This section describes the image denoising algorithm, which achieves near-optimal soft thresholding in the wavelet domain for recovering the original signal from the noisy one. The algorithm is simple to implement and computationally efficient. It has the following steps (a MATLAB sketch is given after the list):

1. Perform a multiscale decomposition of the image corrupted by Gaussian noise using the wavelet transform.
2. Estimate the noise variance σ².
3. For each level, compute the scale parameter.
4. For each subband (except the low-pass residual):
   a) Compute the standard deviation σy.
   b) Compute the threshold TN.
   c) Apply soft thresholding to the noisy coefficients.
5. Invert the multiscale decomposition to reconstruct the denoised image f̂.
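The sketch below follows these steps for a single noisy image g, using the robust median estimate of the noise standard deviation and the BayesShrink threshold σ̂²/σ̂X in each detail subband. The number of levels, the wavelet name and the variable names are illustrative (and the per-level scale parameter of step 3 is omitted); the full interactive program used in the project is listed in the Appendix.

levels = 3; wname = 'db4';
[C, S] = wavedec2(double(g), levels, wname);            % step 1: multiscale decomposition
nHH    = prod(S(end-1, :));                             % size of the finest diagonal subband
sigmahat = median(abs(C(end-nHH+1:end))) / 0.6745;      % step 2: noise standard deviation
Cd = C;
st = prod(S(1, :)) + 1;                                 % skip the low-pass residual
for j = 2:size(S, 1) - 1
    for b = 1:3                                         % H, V and D subbands at this level
        n    = prod(S(j, :));
        w    = C(st:st+n-1);
        sigx = sqrt(max(mean(w.^2) - sigmahat^2, 0));   % step 4a: signal standard deviation
        T    = sigmahat^2 / max(sigx, eps);             % step 4b: BayesShrink threshold
        Cd(st:st+n-1) = sign(w) .* max(abs(w) - T, 0);  % step 4c: soft thresholding
        st = st + n;
    end
end
fhat = waverec2(Cd, S, wname);                          % step 5: reconstruct the denoised image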

CHAPTER 5 RESULTS AND DISCUSSIONS


5.1 COMPARISON OF PERFORMANCE OF VARIOUS IMAGE DENOISING METHODS

5.1.1 Image denoising using various methods


Image denoising at σ = 20

Image denoising at σ = 25

Image denoising at σ = 35

Image denoising at σ = 40

Fig. 5.1 Image denoising at various σ values

5.2 PSNR VALUES OF VARIOUS METHODS FOR VARIOUS TEST IMAGES

LENA                  σ = 20     σ = 25     σ = 35     σ = 40
Soft thresholding     30.924     30.1202    29.6335    28.8147
Hard thresholding     33.8907    32.4858    30.5089    30.2401
Bayes shrink          38.4678    36.7674    34.3431    33.4924
Visu shrink           24.7025    24.5026    24.0487    23.7769

Table 5.1 PSNR values (dB) of the Lena image for various thresholding methods

BARBARA               σ = 20     σ = 25     σ = 35     σ = 40
Soft thresholding     22.81      21.21      21.25      21.02
Hard thresholding     23.5       22.81      21.92      21.53
Bayes shrink          25.5       24.89      23.90      23.41
Visu shrink           19.15      18.45      17.1421    16.52

Table 5.2 PSNR values (dB) of the Barbara image for various thresholding methods

5.3 CONVERSION OF THE VISUSHRINK MATLAB FUNCTION TO A C PROGRAM

5.3.1 Procedure

Use MATLAB Coder in MATLAB. Typing >> coder at the command window opens the tool and creates a new MATLAB Coder project.

Fig.5.2 MATLAB Coder project window

A MATLAB Coder window opens, in which the files to be converted to C can be added.

Fig.5.3 MATLAB Coder option to add files

C code can then be generated using the Build option, which also provides settings for selecting the hardware implementation details.
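For reference, roughly the same generation can be scripted from the command line instead of the project GUI; the entry-point name and the 63-sample input size below are taken from the generated listing in the Appendix and are illustrative only.

cfg = coder.config('lib');        % generate a standalone C library rather than a MEX file
codegen VisuThresh -args {zeros(63,1)} -config cfg -report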

Fig.5.4 MATLAB CODER option to generate C program

Fig.5.5 Code generation report

5.4 IMPLEMENTATION OF THE VISU SHRINK METHOD ON THE DSK6713 DSP KIT

The C program for the Visu shrink thresholding method was created and built in Code Composer Studio. The generated .out file was then loaded onto the DSK6713 DSP kit. The address locations for the input and output are shown below, where y is the input and x is the output.

Fig.5.6 Address locations of input and output

CHAPTER 6 CONCLUSION
A simple subband-adaptive threshold has been proposed in this project to address the issue of recovering an image from its noisy counterpart. It is based on generalized Gaussian distribution modeling of the subband coefficients. The proposed BayesShrink threshold specifies the zero-zone of the quantization step of the coder, and this zero-zone is the main agent in the coder that removes the noise. The image denoising algorithm uses soft thresholding to provide smoothness and better edge preservation at the same time. In this project, several threshold-calculation methods have been compared, and the analysis shows that the BayesShrink method gives better PSNR values than the other methods. C code for the Visu shrink thresholding method has been generated and built in Code Composer Studio.

REFERENCES
[1] F. Abramovich, T. Sapatinas, and B. W. Silverman, "Wavelet thresholding via a Bayesian approach," J. R. Statist. Soc., ser. B, vol. 60, pp. 725-749, 1998.
[2] M. Antonini, M. Barlaud, P. Mathieu, and I. Daubechies, "Image coding using wavelet transform," IEEE Trans. Image Processing, vol. 1, no. 2, pp. 205-220, 1992.
[3] J. Buckheit, S. Chen, D. Donoho, I. Johnstone, and J. Scargle, WaveLab Toolkit, http://www-stat.stanford.edu:80/~wavelab/.
[4] A. Chambolle, R. A. DeVore, N. Lee, and B. J. Lucier, "Nonlinear wavelet image processing: Variational problems, compression, and noise removal through wavelet shrinkage," IEEE Trans. Image Processing, vol. 7, pp. 319-335, 1998.
[5] S. G. Chang, B. Yu, and M. Vetterli, "Bridging compression to wavelet thresholding as a denoising method," in Proc. Conf. Information Sciences and Systems, Baltimore, MD, Mar. 1997, pp. 568-573.
[6] H. Chipman, E. Kolaczyk, and R. McCulloch, "Adaptive Bayesian wavelet shrinkage," J. Amer. Statist. Assoc., vol. 92, no. 440, pp. 1413-1421, 1997.
[7] M. Clyde, G. Parmigiani, and B. Vidakovic, "Multiple shrinkage and subset selection in wavelets," Biometrika, vol. 85, pp. 391-402, 1998.
[8] M. S. Crouse, R. D. Nowak, and R. G. Baraniuk, "Wavelet-based statistical signal processing using hidden Markov models," IEEE Trans. Signal Processing, vol. 46, pp. 886-902, Apr. 1998.
[9] I. Daubechies, Ten Lectures on Wavelets, vol. 61 of CBMS-NSF Regional Conference Series in Applied Mathematics. Philadelphia, PA: SIAM, 1992.
[10] R. A. DeVore and B. J. Lucier, "Fast wavelet techniques for near-optimal image processing," in IEEE Military Communications Conf. Rec., San Diego, CA, Oct. 11-14, 1992, vol. 3, pp. 1129-1135.
[11] D. L. Donoho, "De-noising by soft-thresholding," IEEE Trans. Inform. Theory, vol. 41, pp. 613-627, May 1995.
[12] D. L. Donoho and I. M. Johnstone, "Ideal spatial adaptation via wavelet shrinkage," Biometrika, vol. 81, pp. 425-455, 1994.

[13] "Adapting to unknown smoothness via wavelet shrinkage," J. Amer. Statist. Assoc., vol. 90, no. 432, pp. 1200-1224, Dec. 1995.
[14] "Wavelet shrinkage: Asymptopia?," J. R. Statist. Soc., ser. B, vol. 57, no. 2, pp. 301-369, 1995.
[15] M. Hansen and B. Yu, "Wavelet thresholding via MDL: Simultaneous denoising and compression," 1999.
[16] M. Jansen, M. Malfait, and A. Bultheel, "Generalized cross validation for wavelet thresholding," Signal Process., vol. 56, pp. 33-44, Jan. 1997.
[17] I. M. Johnstone and B. W. Silverman, "Wavelet threshold estimators for data with correlated noise," J. R. Statist. Soc., vol. 59, 1997.
[18] R. L. Joshi, V. J. Crump, and T. R. Fisher, "Image subband coding using arithmetic and trellis coded quantization," IEEE Trans. Circuits Syst. Video Technol., vol. 5, pp. 515-523, Dec. 1995.
[19] J. Liu and P. Moulin, "Complexity-regularized image denoising," in Proc. IEEE Int. Conf. Image Processing, vol. 2, Oct. 1997, pp. 370-373.
[20] S. M. LoPresto, K. Ramchandran, and M. T. Orchard, "Image coding based on mixture modeling of wavelet coefficients and a fast estimation-quantization framework," in Proc. Data Compression Conf., Snowbird, UT, Mar. 1997, pp. 221-230.
[21] S. Mallat, "A theory for multiresolution signal decomposition: The wavelet representation," IEEE Trans. Pattern Anal. Machine Intell., vol. 11, pp. 674-693, July 1989.
[22] G. Nason, "Choice of the threshold parameter in wavelet function estimation," in Wavelets and Statistics, A. Antoniadis and G. Oppenheim, Eds. Berlin, Germany: Springer-Verlag, 1995.
[23] B. K. Natarajan, "Filtering random noise from deterministic signals via data compression," IEEE Trans. Signal Processing, vol. 43, pp. 2595-2605, Nov. 1995.
[24] J. Rissanen, Stochastic Complexity in Statistical Inquiry. Singapore: World Scientific, 1989.
[25] F. Ruggeri and B. Vidakovic, "A Bayesian decision theoretic approach to wavelet thresholding," Statist. Sinica, vol. 9, no. 1, pp. 183-197, 1999.

[26] N. Saito, "Simultaneous noise suppression and signal compression using a library of orthonormal bases and the minimum description length criterion," in Wavelets in Geophysics, E. Foufoula-Georgiou and P. Kumar, Eds. New York: Academic, 1994, pp. 299-324.
[27] E. Simoncelli and E. Adelson, "Noise removal via Bayesian wavelet coring," in Proc. IEEE Int. Conf. Image Processing, vol. 1, Sept. 1996, pp. 379-382.
[28] C. M. Stein, "Estimation of the mean of a multivariate normal distribution," Ann. Statist., vol. 9, no. 6, pp. 1135-1151, 1981.
[29] M. Vetterli and J. Kovacevic, Wavelets and Subband Coding. Englewood Cliffs, NJ: Prentice-Hall, 1995.
[30] B. Vidakovic, "Nonlinear wavelet shrinkage with Bayes rules and Bayes factors," J. Amer. Statist. Assoc., vol. 93, no. 441, pp. 173-179, 1998.
[31] Y. Wang, "Function estimation via wavelet shrinkage for long-memory data," Ann. Statist., vol. 24, pp. 466-484, 1996.
[32] P. H. Westerink, J. Biemond, and D. E. Boekee, "An optimal bit allocation algorithm for sub-band coding," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, Dallas, TX, Apr. 1987, pp. 1378-1381.
[33] N. Weyrich and G. T. Warhola, "De-noising using wavelets and cross-validation," Dept. of Mathematics and Statistics, Air Force Inst. of Tech., AFIT/ENC, OH, Tech. Rep. AFIT/EN/TR/94-01, 1994.
[34] Y. Yoo, A. Ortega, and B. Yu, "Image subband coding using context-based classification and adaptive quantization," IEEE Trans. Image Processing, vol. 8, pp. 1702-1715, Dec. 1999.
[35] "Image denoising via lossy compression and wavelet thresholding," in Proc. IEEE Int. Conf. Image Processing, vol. 1, Santa Barbara, CA, Nov. 1997, pp. 604-607.
[36] D. L. Donoho, "De-noising by soft thresholding," IEEE Trans. Inform. Theory, vol. 43, pp. 933-936, 1993.

APPENDIX
Matlab program for image denoising

clear; clc; clear all; close all;
display('select the image');
display(' 1:lena.png');
display(' 2:barbara.png');
display(' 3:boat.png');
display(' 4:shuttle.jpg');
ss1 = input('enter your choice: ');
switch ss1
    case 1
        f = imread('lena.png');
    case 2
        f = imread('barbara.jpg');
    case 3
        f = imread('boat.png');
    case 4
        f = imread('index.jpg');
end
s = double(f);
sigma = input('GAUSSIAN NOISE STANDARD DEVIATION =');  % Gaussian noise standard deviation
In = randn(size(s))*sigma;                             % White Gaussian noise
g = s + In;                                            % noisy image
g = uint8(g);
x = g;
% find default values (see ddencmp).
[thr,sorh,keepapp] = ddencmp('den','wv',x);
display('');
display('select wavelet');
display('enter 1 for haar wavelet');
display('enter 2 for db2 wavelet');
display('enter 3 for db4 wavelet');
display('enter 4 for sym wavelet');
display('enter 5 for sym wavelet');
display('enter 6 for bior wavelet');
display('enter 7 for bior wavelet');
display('enter 8 for mexh wavelet');
display('enter 9 for coif wavelet');
display('enter 10 for meyr wavelet');
display('enter 11 for morl wavelet');
display('enter 12 for rbio wavelet');
display('press any key to quit');
display('');
ww = input('enter your choice: ');
switch ww
    case 1
        wv = 'haar';
    case 2
        wv = 'db2';
    case 3
        wv = 'db4';
    case 4
        wv = 'sym2';
    case 5
        wv = 'sym4';
    case 6
        wv = 'bior1.1';
    case 7
        wv = 'bior6.8';
    case 8
        wv = 'mexh';
    case 9
        wv = 'coif5';
    case 10
        wv = 'dmey';
    case 11
        wv = 'mor1';
    case 12
        wv = 'jpeg9.7';
    otherwise
        quit;
end
display('');
display('enter 1 for soft thresholding');
display('enter 2 for hard thresholding');
display('enter 3 for bayes soft thresholding');
display('enter 4 for sure shrink');
display('enter 5 for wiener filtering');
sorh = input('sorh: ');
display('enter the level of decomposition');
level = input(' enter level 1 or 2 or 3 etc: ');
switch sorh
    case 1
        sorh = 's';
        xd = wdencmp('gbl',x,wv,level,thr,sorh,keepapp);
    case 2
        sorh = 'h';
        xd = wdencmp('gbl',x,wv,level,thr,sorh,keepapp);
    case 3
        sig = 20;
        V = (sig/256)^2;
        npic = g;
        filtertype = wv;
        levels = level;
        % Doing the wavelet decomposition
        [C,S] = wavedec2(npic,levels,filtertype);
        st = (S(1,1)^2)+1;
        bayesC = [C(1:st-1),zeros(1,length(st:1:length(C)))];
        var = length(C)-S(size(S,1)-1,1)^2+1;
        % Calculating sigmahat
        sigmahat = median(abs(C(var:length(C))))/0.6745;
        for jj = 2:size(S,1)-1
            % for the H detail coefficients
            coefh = C(st:st+S(jj,1)^2-1);
            thr = bayes(coefh,sigmahat);
            bayesC(st:st+S(jj,1)^2-1) = sthresh(coefh,thr);
            st = st+S(jj,1)^2;
            % for the V detail coefficients
            coefv = C(st:st+S(jj,1)^2-1);
            thr = bayes(coefv,sigmahat);
            bayesC(st:st+S(jj,1)^2-1) = sthresh(coefv,thr);
            st = st+S(jj,1)^2;
            % for the Diag detail coefficients
            coefd = C(st:st+S(jj,1)^2-1);
            thr = bayes(coefd,sigmahat);
            bayesC(st:st+S(jj,1)^2-1) = sthresh(coefd,thr);
            st = st+S(jj,1)^2;
        end
        bayespic = waverec2(bayesC,S,filtertype);
        xd = bayespic;
        figure, imagesc(uint8(bayespic)); colormap(gray);
    case 4
        xd = VisuThresh(g);
    case 5
        [n m] = size(f);
        xd = wiener2(g,[m n]);
end
subplot(2,2,1), imshow(f); title('original image');
subplot(2,2,2), imshow(g);
title('noisy image');
subplot(2,2,3), xd = uint8(xd); imshow(xd); title('denoised image');
subplot(2,2,4), sub = f-xd; sub = abs(1.2*sub); imshow(im2uint8(sub)); title('difference image');
ff = im2double(f); xdd = im2double(xd);
display(' '); display(' ');
psnr_value = WPSNR(ff,xdd)
display(' '); display(' ');
mse = compare11(ff,xdd)

function to perform soft thresholding

function op = sthresh(X,T);
ind = find(abs(X) <= T);
ind1 = find(abs(X) > T);
X(ind) = 0;
X(ind1) = sign(X(ind1)).*(abs(X(ind1))-T);
op = X;

Function to calculate Threshold for BayesShrink

function threshold = bayes(X,sigmahat)
len = length(X);
sigmay2 = sum(X.^2)/len;
sigmax = sqrt(max(sigmay2-sigmahat^2,0));
if sigmax == 0
    threshold = max(abs(X));
else
    threshold = sigmahat^2/sigmax;
end

Function for visu thresholding method

function [x] = VisuThresh(y,type)
if nargin < 2, type = 'Soft'; end
thr = sqrt(2*log(length(y)));
if strcmp(type,'Hard'),
    x = HardThresh(y,thr);
else
    x = SoftThresh(y,thr);
end

Function for PSNR

function f = WPSNR(A,B,varargin)
if A == B
    error('Images are identical: PSNR has infinite value')
end
max2_A = max(max(A));
max2_B = max(max(B));
min2_A = min(min(A));
min2_B = min(min(B));
if max2_A > 1 | max2_B > 1 | min2_A < 0 | min2_B < 0
    error('input matrices must have values in the interval [0,1]')
end
e = A - B;
if nargin < 3
    fc = csf;            % filter coefficients of CSF
else
    fc = varargin{1};
end
ew = filter2(fc,e);      % filtering error with CSF
decibels = 20*log10(1/(sqrt(mean(mean(ew.^2)))));
f = decibels;

function fc = csf()
Fmat = csfmat;
fc = fsamp2(Fmat);

function Sa = csffun(u,v)
sigma = 2;
f = sqrt(u.*u+v.*v);
w = 2*pi*f/60;
Sw = 1.5*exp(-sigma^2*w^2/2)-exp(-2*sigma^2*w^2/2);
sita = atan(v./(u+eps));
bita = 8;
f0 = 11.13;
w0 = 2*pi*f0/60;
Ow = (1 + exp(bita*(w-w0)) * (cos(2*sita))^4) / (1+exp(bita*(w-w0)));
Sa = Sw * Ow;

function Fmat = csfmat()
min_f = -20;
max_f = 20;
step_f = 1;
u = min_f:step_f:max_f;
v = min_f:step_f:max_f;
n = length(u);
Z = zeros(n);
for i = 1:n
    for j = 1:n
        Z(i,j) = csffun(u(i),v(j));   % calling function csffun
    end
end
Fmat = Z;

C code for visu shrink method


#include "rt_nonfinite.h"
#include "VisuThresh.h"
#include "VisuThresh_initialize.h"
#include "rt_nonfinite.h"
#include "VisuThresh.h"
#include "rtGetInf.h"
#define NumBitsPerChar 8U
#include "rtGetNaN.h"
#include "rt_nonfinite.h"
#include "rtGetNaN.h"
#include "rtGetInf.h"
#include "rt_nonfinite.h"
#include "VisuThresh.h"
#include "VisuThresh_terminate.h"

real_T rtInf;
real_T rtMinusInf;
real_T rtNaN;
real32_T rtInfF;
real32_T rtMinusInfF;
real32_T rtNaNF;

void main(void)
{
  real_T x[63];
  real_T value = 1;
  size_t realSize = 1;
  VisuThresh_initialize();
  VisuThresh(x);
  rtGetInf();
  rtGetInfF();
  rtGetMinusInf();
  rtGetMinusInfF();
  rt_InitInfAndNaN(realSize);
  rtIsInf(value);
  rtIsInfF(value);
  rtIsNaN(value);
  rtIsNaNF(value);
  VisuThresh_terminate();
}

void VisuThresh(real_T x[63])
{
  int32_T k;
  static const real_T dv0[63] = { 18.0, 8.5, 2.0, -6.5, -6.0, 34.0, -17.5, -9.0,
    -9.5, 0.0, -7.0, 9.0, 24.5, -6.0, -8.0, 12.0, 13.5, 9.0, 15.5, 28.5, 7.0,
    16.0, 0.0, 6.0, -12.5, -6.0, 37.5, -44.0, 6.0, 13.0, -3.0, -16.0, 23.5,
    24.5, -15.5, -18.0, 17.5, 17.0, 16.0, 12.5, 19.0, -28.0, 22.5, 8.5, -15.0,
    0.0, 3.0, 0.0, -37.5, -4.0, 15.0, 9.0, -6.5, -6.0, 37.0, -30.5, -8.5, -8.5,
    -1.0, 6.0, 21.5, 28.5, 3.0 };
  real_T b_x;
  real_T y[63];
  real_T res[63];   /* added: 'res' is used below but its declaration was missing in the printed listing */
  static const real_T b_y[63] = { 18.0, 8.5, 2.0, -6.5, -6.0, 34.0, -17.5, -9.0,
    -9.5, 0.0, -7.0, 9.0, 24.5, -6.0, -8.0, 12.0, 13.5, 9.0, 15.5, 28.5, 7.0,
    16.0, 0.0, 6.0, -12.5, -6.0, 37.5, -44.0, 6.0, 13.0, -3.0, -16.0, 23.5,
    24.5, -15.5, -18.0, 17.5, 17.0, 16.0, 12.5, 19.0, -28.0, 22.5, 8.5, -15.0,
    0.0, 3.0, 0.0, -37.5, -4.0, 15.0, 9.0, -6.5, -6.0, 37.0, -30.5, -8.5, -8.5,
    -1.0, 6.0, 21.5, 28.5, 3.0 };
  for (k = 0; k < 63; k++) {
    b_x = fabs(dv0[k]) - 2.09629414793641;
    y[k] = fabs(b_x);
    x[k] = b_y[k];
    res[k] = b_x;
  }
  for (k = 0; k < 63; k++) {
    b_x = x[k];
    if (x[k] > 0.0) {
      b_x = 1.0;
    } else if (x[k] < 0.0) {
      b_x = -1.0;
    } else {
      if (x[k] == 0.0) {
        b_x = 0.0;
      }
    }
    x[k] = b_x;
  }
  for (k = 0; k < 63; k++) {
    x[k] *= (res[k] + y[k]) / 2.0;
  }
}

void VisuThresh_initialize(void)
{
  rt_InitInfAndNaN(8U);
}

real_T rtGetInf(void)
{
  size_t bitsPerReal = sizeof(real_T) * (NumBitsPerChar);
  real_T inf = 0.0;
  if (bitsPerReal == 32U) {
    inf = rtGetInfF();
  } else {
    uint16_T one = 1U;
    enum {
      LittleEndian,
      BigEndian
    } machByteOrder = (*((uint8_T *) &one) == 1U) ? LittleEndian : BigEndian;
    switch (machByteOrder) {
      case LittleEndian: {
        union {
          LittleEndianIEEEDouble bitVal;
          real_T fltVal;
        } tmpVal;
        tmpVal.bitVal.words.wordH = 0x7FF00000U;
        tmpVal.bitVal.words.wordL = 0x00000000U;
        inf = tmpVal.fltVal;
        break;
      }
      case BigEndian: {
        union {
          BigEndianIEEEDouble bitVal;
          real_T fltVal;
        } tmpVal;
        tmpVal.bitVal.words.wordH = 0x7FF00000U;
        tmpVal.bitVal.words.wordL = 0x00000000U;
        inf = tmpVal.fltVal;
        break;
      }
    }
  }
  return inf;
}

real32_T rtGetInfF(void)
{
  IEEESingle infF;
  infF.wordL.wordLuint = 0x7F800000U;
  return infF.wordL.wordLreal;
}

real_T rtGetMinusInf(void)
{
  size_t bitsPerReal = sizeof(real_T) * (NumBitsPerChar);
  real_T minf = 0.0;
  if (bitsPerReal == 32U) {
    minf = rtGetMinusInfF();
  } else {
    uint16_T one = 1U;
    enum {
      LittleEndian,
      BigEndian
    } machByteOrder = (*((uint8_T *) &one) == 1U) ? LittleEndian : BigEndian;
    switch (machByteOrder) {
      case LittleEndian: {
        union {
          LittleEndianIEEEDouble bitVal;
          real_T fltVal;
        } tmpVal;
        tmpVal.bitVal.words.wordH = 0xFFF00000U;
        tmpVal.bitVal.words.wordL = 0x00000000U;
        minf = tmpVal.fltVal;
        break;
      }
      case BigEndian: {
        union {
          BigEndianIEEEDouble bitVal;
          real_T fltVal;
        } tmpVal;
        tmpVal.bitVal.words.wordH = 0xFFF00000U;
        tmpVal.bitVal.words.wordL = 0x00000000U;
        minf = tmpVal.fltVal;
        break;
      }
    }
  }
  return minf;
}
real32_T rtGetMinusInfF(void)
{
  IEEESingle minfF;
  minfF.wordL.wordLuint = 0xFF800000U;
  return minfF.wordL.wordLreal;
}
/* End of code generation (rtGetInf.c) */

void rt_InitInfAndNaN(size_t realSize)
{
  (void) (realSize);
  rtInf = rtGetInf();
  rtInfF = rtGetInfF();
  rtMinusInf = rtGetMinusInf();
  rtMinusInfF = rtGetMinusInfF();
}

boolean_T rtIsInf(real_T value)
{
  return ((value == rtInf || value == rtMinusInf) ? 1U : 0U);
}

boolean_T rtIsInfF(real32_T value)
{
  return (((value) == rtInfF || (value) == rtMinusInfF) ? 1U : 0U);
}

boolean_T rtIsNaN(real_T value)
{
#if defined(_MSC_VER) && (_MSC_VER <= 1200)
  return _isnan(value) ? TRUE : FALSE;
#else
  return (value != value) ? 1U : 0U;
#endif
}

boolean_T rtIsNaNF(real32_T value)
{
#if defined(_MSC_VER) && (_MSC_VER <= 1200)
  return _isnan((real_T)value) ? true : false;
#else
  return (value != value) ? 1U : 0U;
#endif
}

void VisuThresh_terminate(void)
{
  /* (no terminate code required) */
}

Header files

rt_nonfinite.h

#ifndef __RT_NONFINITE_H__
#define __RT_NONFINITE_H__
#if defined(_MSC_VER) && (_MSC_VER <= 1200)
#include <float.h>
#endif
#include <stddef.h>
#include "rtwtypes.h"
extern real_T rtInf;
extern real_T rtMinusInf;
extern real_T rtNaN;
extern real32_T rtInfF;
extern real32_T rtMinusInfF;
extern real32_T rtNaNF;
extern void rt_InitInfAndNaN(size_t realSize);
extern boolean_T rtIsInf(real_T value);
extern boolean_T rtIsInfF(real32_T value);
extern boolean_T rtIsNaN(real_T value);
extern boolean_T rtIsNaNF(real32_T value);

typedef struct {
  struct {
    uint32_T wordH;
    uint32_T wordL;
  } words;
} BigEndianIEEEDouble;

typedef struct {
  struct {
    uint32_T wordL;
    uint32_T wordH;
  } words;
} LittleEndianIEEEDouble;

typedef struct {
  union {
    real32_T wordLreal;
    uint32_T wordLuint;
  } wordL;
} IEEESingle;
#endif
/* End of code generation (rt_nonfinite.h) */
rtGetInf.h

#ifndef __RTGETINF_H__
#define __RTGETINF_H__
#include <stddef.h>
#include "rtwtypes.h"
#include "rt_nonfinite.h"
extern real_T rtGetInf(void);
extern real32_T rtGetInfF(void);
extern real_T rtGetMinusInf(void);
extern real32_T rtGetMinusInfF(void);
#endif

rtGetNaN.h

#ifndef __RTGETNAN_H__
#define __RTGETNAN_H__
#include <stddef.h>
#include "rtwtypes.h"
#include "rt_nonfinite.h"
extern real_T rtGetNaN(void);
extern real32_T rtGetNaNF(void);
#endif

rtwtypes.h

#ifndef __RTWTYPES_H__
#define __RTWTYPES_H__
#ifndef TRUE
# define TRUE (1U)
#endif
#ifndef FALSE
# define FALSE (0U)
#endif
#ifndef __TMWTYPES__
#define __TMWTYPES__
#include <limits.h>
typedef signed char int8_T;
typedef unsigned char uint8_T;
typedef short int16_T;
typedef unsigned short uint16_T;
typedef int int32_T;
typedef unsigned int uint32_T;
typedef float real32_T;
typedef double real64_T;
typedef double real_T;
typedef double time_T;
typedef unsigned char boolean_T;
typedef int int_T;
typedef unsigned int uint_T;
typedef unsigned long ulong_T;
typedef char char_T;
typedef char_T byte_T;
#define CREAL_T
typedef struct { real32_T re; real32_T im; } creal32_T;
typedef struct { real64_T re; real64_T im; } creal64_T;
typedef struct { real_T re; real_T im; } creal_T;
typedef struct { int8_T re; int8_T im; } cint8_T;
typedef struct { uint8_T re; uint8_T im; } cuint8_T;
typedef struct { int16_T re; int16_T im; } cint16_T;
typedef struct { uint16_T re; uint16_T im; } cuint16_T;
typedef struct { int32_T re; int32_T im; } cint32_T;
typedef struct { uint32_T re; uint32_T im; } cuint32_T;
#define MAX_int8_T ((int8_T)(127))
#define MIN_int8_T ((int8_T)(-128))
#define MAX_uint8_T  ((uint8_T)(255))
#define MIN_uint8_T  ((uint8_T)(0))
#define MAX_int16_T  ((int16_T)(32767))
#define MIN_int16_T  ((int16_T)(-32768))
#define MAX_uint16_T ((uint16_T)(65535))
#define MIN_uint16_T ((uint16_T)(0))
#define MAX_int32_T  ((int32_T)(2147483647))
#define MIN_int32_T  ((int32_T)(-2147483647-1))
#define MAX_uint32_T ((uint32_T)(0xFFFFFFFFU))
#define MIN_uint32_T ((uint32_T)(0))

/* Logical type definitions */
#if !defined(__cplusplus) && !defined(__true_false_are_keywords)
# ifndef false
#  define false (0U)
# endif
# ifndef true
#  define true (1U)
# endif
#if ((SCHAR_MIN + 1) != -SCHAR_MAX)
#error "This code must be compiled using a 2's complement representation for signed integer values"
#endif
#define TMW_NAME_LENGTH_MAX 64
#endif
#endif
#endif  /* added: one closing #endif for the include guard appears to have been dropped in the printed listing */

sign.h
#ifndef __SIGN_H__
#define __SIGN_H__
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "rtwtypes.h"
#include "VisuThresh_types.h"
extern void b_sign(real_T x[4086]);
#endif
VisuThresh.h

#ifndef __VISUTHRESH_H__
#define __VISUTHRESH_H__
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "rtwtypes.h"
#include "VisuThresh_types.h"
extern void VisuThresh(real_T x[4086]);
#endif

VisuThresh_initialize.h

#ifndef __VISUTHRESH_INITIALIZE_H__
#define __VISUTHRESH_INITIALIZE_H__
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "rtwtypes.h"
#include "VisuThresh_types.h"
extern void VisuThresh_initialize(void);
#endif

VisuThresh_terminate.h

#ifndef __VISUTHRESH_TERMINATE_H__


#define __VISUTHRESH_TERMINATE_H__
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include "rtwtypes.h"
#include "VisuThresh_types.h"
extern void VisuThresh_terminate(void);
#endif

VisuThresh_types.h

#ifndef __VISUTHRESH_TYPES_H__
#define __VISUTHRESH_TYPES_H__
#endif
