
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 15, NO. 11, NOVEMBER 2005

A Robust Error Concealment Technique Using Data Hiding for Image and Video Transmission Over Lossy Channels
Chowdary B. Adsumilli, Student Member, IEEE, Mylène C. Q. Farias, Member, IEEE,
Sanjit K. Mitra, Life Fellow, IEEE, and Marco Carli, Senior Member, IEEE

Abstract—A robust error concealment scheme using data hiding which aims at achieving high perceptual quality of images and video at the end-user despite channel losses is proposed. The scheme involves embedding a low-resolution version of each image or video frame into itself using spread-spectrum watermarking, extracting the embedded watermark from the received video frame, and using it as a reference for reconstruction of the parent image or frame, thus detecting and concealing the transmission errors. Dithering techniques have been used to obtain a binary watermark from the low-resolution version of the image/video frame. Multiple copies of the dithered watermark are embedded in frequencies in a specific range to make it more robust to channel errors. It is shown experimentally that, based on the frequency selection and scaling factor variation, a high-quality watermark can be extracted from a low-quality lossy received image/video frame. Furthermore, the proposed technique is compared to its two-part variant where the low-resolution version is encoded and transmitted as side information instead of embedding it. Simulation results show that the proposed concealment technique using data hiding outperforms existing approaches in improving the perceptual quality, especially in the case of higher loss probabilities.

Index Terms—Data hiding, digital half-toning, error concealment, image/video transmission, wireless video.

I. INTRODUCTION

THE transmission of images and video over wired and/or wireless channels introduces multiple losses into the transmitted data that manifest themselves as various types of artifacts. These artifacts degrade the quality of the received image/video as they vary rapidly during the course of transmission based on the channel conditions. Therefore, there is a need for a good error concealment technique that can detect and correct (or conceal) these errors and display a good quality image/video regardless of the channel conditions [1]. In the case of video transmission over wireless channels, adaptive error control that combines approaches at both the transmitter and the receiver has proven to be more effective [2]. Error concealment methods use spatial and temporal information to recognize that an error has occurred. Once an error is detected, the received video stream is adjusted with an attempt to recover the original data. A number of error concealment techniques have been proposed in the literature that use either statistical methods to detect and correct errors (these are usually computationally intensive) or depend on certain critical information from the transmitter, like resynchronization markers, to detect these transmission errors.

In this paper, a new robust error concealment algorithm which uses watermarking is proposed. Watermarking is usually used to introduce some redundancy into the transmitted data with little increase in its bit rate [3].

The basic idea of the proposed algorithm is as follows. A frame of a video is wavelet transformed and the low–low approximation coefficients (usually second- or third-level) are then embedded in the frame itself during MPEG encoding. The embedded data can then be error protected unequally such that the packets carrying the mark signal are given higher protection against channel errors. At the receiver, the mark is extracted from the decoded frame. The channel-corrupted information of the frame is then reconstructed using the embedded mark signal. Specific areas lost through transmission are selected from the reconstructed mark signal and replaced in the original frame, thus enhancing its perceptual quality.¹ This contribution provides modified algorithms for implementation in video transmission and high-detail color extensions in the case of wired and wireless transmission scenarios. Also, a comparison of the proposed technique to its two-part variant (where the low-resolution child image is encoded and transmitted as side information) is provided along with an extensive analysis of its performance.

The paper is organized as follows. Section II briefly points out the previous work in error concealment where similar/related approaches have been addressed. Section III describes the proposed error concealment/reconstruction algorithm using data hiding. Section IV gives a detailed explanation of the multiple scenarios under which the algorithm can be modified to make it more efficient. The simulations done using the proposed technique and the obtained results are presented in

Manuscript received September 3, 2003; revised March 6, 2004. This work was supported in part by the National Science Foundation under Grant CCR-0105404, the University of California under a MICRO grant, Philips Research Laboratories, and Microsoft Corporation. This paper was recommended by Associate Editor L. Onural.

C. B. Adsumilli, M. C. Q. Farias, and S. K. Mitra are with the Department of Electrical and Computer Engineering, University of California, Santa Barbara, CA 93106-9560 USA (e-mail: chowdary@ece.ucsb.edu; mylene@ece.ucsb.edu; mitra@ece.ucsb.edu).

M. Carli is with the Department of Applied Electronics, University of Rome Tre, Rome 00146, Italy (e-mail: carli@uniroma3.it).

Digital Object Identifier 10.1109/TCSVT.2005.856933

¹Preliminary results of the performance of the algorithm's implementation for wired transmission of images have been partially reported [4].




Fig. 1. Block diagram of the embedding algorithm.

Section V. In Section VI, conclusions are drawn based on the results, and suggestions for further work are discussed.

II. PREVIOUS WORK

The use of data hiding as an error control tool was first introduced by Liu and Li [5]. They extracted the important information in an image, like the dc components of each 8 x 8 block, and embedded it into the host image. The work that followed built on Liu and Li's approach: certain key features were extracted from the image, and these features were encoded and data hidden in the original image either as a resilience tool or for concealment [6]–[8].

Watermarking of error correcting codes was introduced by Lee and Won [9]. Here, the parities generated by conventional error control codes were used as the watermarking sequence. A region of interest (ROI)-based coded bit stream embedding was employed by Wang and Ji, where the ROI DCT bit stream is embedded into the region-of-background wavelet coefficients [10]. This technique gives better results when perception-based encoding is employed.

The concept was extended to video coding by Bartolini et al. [11]. However, they used data hiding as a tool to increase the syntax-based error detection rate in H.263 but not for recovering or correcting lost data. Munadi et al. extended the concept of key feature extraction and embedding to interframe coding [12]. In their scheme, the most important feature is embedded into the prediction error of the current frame. However, the effects on motion vectors and the loss of motion-compensated errors were not addressed. Yilmaz and Alatan proposed embedding a combination of edge-oriented information, block bit-length, and parity bits in intraframes [13]. They use a minimally robust technique of even–odd signaling of DCT coefficients for embedding.

The problems with existing techniques are as follows.

1) Only one or a few selected key features are used for embedding. These features may not necessarily follow the loss characteristics of the channel employed.

2) They use a transform domain, often the DCT, to encode the data that needs to be embedded. However, if losses occur on the dc coefficient or on the first few ac coefficients, the loss to the extracted reference would be significant and may therefore lead to a reduction in concealment performance.

3) Almost all of the techniques use fragile or semifragile data-embedding schemes which are more susceptible to attacks.

Our proposed technique avoids two of the three problems by embedding a half-toned version of the whole reference image (instead of encoding its transform coefficients). This way, loss or errors in the data will have smaller and local effects on the reconstructed video. A possible solution to the third problem will be addressed in a future work.

A set of concealment techniques that do not use data hiding while giving similar high levels of performance have been proposed in the literature. Block-based deterministic interpolation models are used for reconstruction of missing blocks in either the spatial domain [14]–[16] (simplified edge-extraction imposition for obtaining the directional interpolation was considered in [14] and [15], while projection onto convex sets was considered in [16]) or the spectral domain [17] (where lost DCT coefficients are estimated based on spatial smoothing constraints). Li and Orchard provided a good review of these techniques and proposed a set of block-based sequential recovery techniques [18]. These work well in simplified loss scenarios where successfully received data are assumed to be reconstructed loss-free. This is often not the case. A comparison of these techniques with the proposed technique is provided in Section V.

III. ERROR CONCEALMENT USING DATA HIDING

The proposed scheme can be divided into an embedding part and a retrieval part. It should be noted that the proposed technique does not overload the communication channel by requiring feedback or any retransmission of damaged blocks.

A. Embedding Part

The data hiding technique used here is a modified version of Cox's watermarking algorithm [19]. Due to the limited embedding capacity of the algorithm, it is practically not feasible to embed the whole frame into itself [20]. In this work, therefore, the discrete wavelet transform (DWT) and dithering techniques have been used to reduce the amount of data to be embedded such that the algorithm embeds maximum information while still catering to the feasibility issues. Wavelets have several properties that make them good candidates for this application. The important ones relevant to this algorithm are: 1) the approximation coefficients provide a good low-resolution estimate of the image, while minimizing the aliasing artifacts resulting from the reduction in resolution, and 2) the wavelet coefficients are localized, such that a corruption of a coefficient through channel errors has only a local effect on the image. Dithering techniques make it possible to generate binary images which look very similar to gray-level images. The technique employed here is the Floyd–Steinberg error diffusion dithering algorithm [21], [22]. The DWT approximation coefficients are half-toned before being embedded.

The block diagram of the embedding algorithm is shown in Fig. 1. The operation of the embedding part can be described as follows.

Fig. 2. Block diagram of the retrieval algorithm.

The two-dimensional (2-D) DWT of the frame is first computed. A second-level DWT is performed again on the approximation coefficients to obtain an image that is 1/16th the size of the original frame. A half-toned image, the marker, is then generated from the reduced size image. One marker is used for each frame. After the marker is generated, each pixel of the marker is repeated four times in a 2 x 2 matrix format. This repetition operation allows the decoder to recover the marker from the data in a more robust fashion.

Mathematically, the reduced size image generated for the kth frame (of size M x N) can be represented as A_k, where A_k is of size M/4 x N/4. The half-toning operation is performed on A_k using the Floyd–Steinberg diffusion kernel

    D = (1/16) [ 0  0  0
                 0  *  7
                 3  5  1 ]    (1)

where the center entry * marks the current pixel position and D is typically applied to each 3 x 3 neighborhood of the reduced size image. The resulting marker is denoted as M_k. Each pixel of M_k is then repeated in a 2 x 2 matrix format to form M'_k. Note that M'_k is of size M/2 x N/2.
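As a concrete illustration of this marker-generation step, a minimal sketch using NumPy and PyWavelets is given below; the function names, the normalization of the approximation band to [0, 1], and the boundary handling are illustrative choices rather than details specified in the paper.

```python
import numpy as np
import pywt

def make_marker(frame, levels=2):
    """Two-level Haar DWT followed by Floyd-Steinberg error diffusion
    of the approximation (low-low) coefficients."""
    approx = frame.astype(float)
    for _ in range(levels):
        approx, _ = pywt.dwt2(approx, 'haar')   # keep only the LL band
    approx = approx / approx.max()              # normalize to [0, 1]

    # Floyd-Steinberg error diffusion: threshold each pixel and push the
    # quantization error to the right, lower-left, lower, and lower-right
    # neighbors with weights 7/16, 3/16, 5/16, and 1/16, as in (1).
    h, w = approx.shape
    work = approx.copy()
    marker = np.zeros((h, w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            new = 1.0 if work[i, j] >= 0.5 else 0.0
            err = work[i, j] - new
            marker[i, j] = int(new)
            if j + 1 < w:
                work[i, j + 1] += err * 7 / 16
            if i + 1 < h and j > 0:
                work[i + 1, j - 1] += err * 3 / 16
            if i + 1 < h:
                work[i + 1, j] += err * 5 / 16
            if i + 1 < h and j + 1 < w:
                work[i + 1, j + 1] += err * 1 / 16
    return marker

def expand_marker(marker):
    """Repeat each marker pixel in a 2 x 2 block before embedding."""
    return np.kron(marker, np.ones((2, 2), dtype=marker.dtype))
```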
A zero-mean, unit-variance pseudonoise image is then randomly generated with a Gaussian distribution and a known seed. A unique pseudonoise image P_k of size M/2 x N/2 is generated for each frame of the video. For a generic kth frame of a video sequence, the final watermark W_k is obtained by multiplying M'_k with the pseudonoise image

    W_k = M'_k ⊙ P_k    (2)

where ⊙ represents element-by-element multiplication. Note that W_k is also of size M/2 x N/2.

The computed DCT coefficients of the luminance channel of the frame are denoted as F_k. The watermark is then scaled by a factor α and added to a set of coefficients in F_k starting at the initial frequencies (u_0, v_0). The resulting coefficient image Y_k is given by

    Y_k(u_0 + m, v_0 + n) = F_k(u_0 + m, v_0 + n) + α W_k(m, n)    (3)

where (m, n) corresponds to the pixel location in the spatial domain and (u_0 + m, v_0 + n) to the coefficient location in the DCT domain. Here, W_k(·,·), F_k(·,·), and Y_k(·,·) represent the individual component values of the matrices W_k, F_k, and Y_k, respectively. Note that 0 ≤ m < M/2 and 0 ≤ n < N/2. Y_k is then inverse transformed, encoded, and transmitted.
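The following sketch illustrates (2)-(3) for a single-channel luminance frame using the SciPy 2-D DCT. The {0, 1} marker is mapped to {-1, +1} so that the sign detector of the retrieval part is well posed, a single pseudonoise matrix is reused for all copies for brevity (the paper uses an independent matrix per copy), and the start_freqs values are placeholders rather than the frequencies actually used in the experiments.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_marker(frame, marker2x, alpha, seed, start_freqs=((64, 64),)):
    """Spread-spectrum embedding of the repeated binary marker into
    mid-frequency DCT coefficients of the luminance frame."""
    rng = np.random.default_rng(seed)
    pn = rng.standard_normal(marker2x.shape)       # zero-mean, unit-variance noise
    watermark = (2.0 * marker2x - 1.0) * pn        # element-wise product, cf. (2)

    coeffs = dctn(frame.astype(float), norm='ortho')
    h, w = watermark.shape
    for (u0, v0) in start_freqs:                   # one scaled copy per initial frequency
        coeffs[u0:u0 + h, v0:v0 + w] += alpha * watermark   # cf. (3)
    return idctn(coeffs, norm='ortho')             # watermarked frame
```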
In the proposed method, the final watermark is added only to the mid-frequency DCT coefficients. The range of frequencies where the watermark is inserted is strongly dependent on the application. For the purpose of delivering a high-quality video through a channel, the mid-frequencies are a good choice. Inserting the watermark in the low frequencies would cause visible artifacts in the image, while inserting it in the high frequencies would make it more prone to channel-induced defects. Also, multiple copies of the marker can be inserted sequentially with various initial frequencies to make the watermark more robust to channel errors. In this case, the multiple copies are generated using independent randomly generated pseudonoise matrices.

B. Retrieval Part

The block diagram of the retrieval technique is shown in Fig. 2. The DCT coefficients of the luminance channel of the received frame R_k, denoted by G_k, are computed as

    G_k = DCT(R_k)    (4)

where DCT(·) represents the 2-D DCT operation.

These coefficients are then multiplied by the corresponding pseudonoise image P_k. The pseudonoise image generated is the same as that at the transmitter side. It is tacitly assumed that the receiver knows the seed for generating the pseudonoise image and the initial frequencies (u_0, v_0) where the mark was inserted. An issue of concern with this assumption is that it might lead to possible synchronization problems when severe channel errors cause loss of frames. This can in turn be handled by embedding the frame order number (similar to the sequence number in packet transmission) into the frame itself. The receiver-side pseudonoise generator algorithm can be driven by the recovered value while the missing frames can be detected using the missing frame order numbers.

The result of the multiplication, denoted as Z_k, is averaged over the four pixels (2 x 2 matrix form) and the binary marker is extracted by taking the sign of this average

    S_k(i, j) = (1/4) Σ_{p=0}^{1} Σ_{q=0}^{1} Z_k(2i + p, 2j + q)    (5)

    M̂_k(i, j) = 1 if S_k(i, j) > 0, and 0 otherwise    (6)

where Z_k(·,·), S_k(·,·), and M̂_k(·,·) are the individual component values of the matrices Z_k and S_k and of the extracted marker M̂_k, respectively. Note here that the values of S_k greater than 0 are assigned a value of 1 and those that are equal to or less than 0 are assigned a value of 0 to make the resulting image binary. Also note that, while the size of Z_k is M/2 x N/2, the size of M̂_k is M/4 x N/4.
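A matching sketch of the retrieval side, under the same assumptions (known seed, known initial frequency, single copy), is shown below; marker_shape is the size of the unrepeated marker and, like the other parameter defaults, is an illustrative value.

```python
import numpy as np
from scipy.fft import dctn

def extract_marker(received, seed, start_freq=(64, 64), marker_shape=(64, 64)):
    """Correlate the received DCT coefficients with the regenerated
    pseudonoise, average each 2 x 2 block, and threshold the sign,
    following (4)-(6)."""
    u0, v0 = start_freq
    mh, mw = marker_shape
    rng = np.random.default_rng(seed)              # same seed as the transmitter
    pn = rng.standard_normal((2 * mh, 2 * mw))

    coeffs = dctn(received.astype(float), norm='ortho')        # cf. (4)
    z = coeffs[u0:u0 + 2 * mh, v0:v0 + 2 * mw] * pn             # multiply by PN image
    s = z.reshape(mh, 2, mw, 2).mean(axis=(1, 3))               # 2 x 2 average, cf. (5)
    return (s > 0).astype(np.uint8)                             # sign decision, cf. (6)
```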

Fig. 3. Block diagram of the lossy wired transmission model.

It has been shown that this approach enables a fairly large amount of hidden data to be embedded without significantly affecting the perceptual quality of the encoded image [4]. Once the binary marker is extracted, the reduced size image is obtained by inverse half-toning the watermark.

Although a number of algorithms have been proposed for inverse half-toning [23]–[25], the inverse half-toning algorithm using wavelets proposed by Xiong et al. [23] is employed here because of its performance and ease of operation. This process primarily involves edge extraction from the high-frequency components and edge-preserving noise removal of the low-frequency components of the wavelet coefficients. A discrete dyadic wavelet transform using the "Haar" wavelet (as a perfect reconstruction filter) without sampling rate conversion is employed to re-obtain the processed smooth marker. The whole operation can be represented as

    Â_k = H^{-1}(M̂_k)    (7)

where H^{-1}(·) denotes the inverse half-toning operation and Â_k is the extracted approximation of A_k.

A 2-D inverse DWT is performed on this smooth marker to obtain an intermediate resolution image B_k. The values of Â_k form the approximation coefficients. The high-frequency coefficients are assumed to be 0 while computing the inverse DWT

    B_k = DWT^{-1}(Â_k, 0, 0, 0)    (8)

Note that B_k is of size M/2 x N/2. It is then up-sampled by a factor of two and passed through a low-pass interpolation filter to obtain an M x N image. The resulting image is compared with the current received frame to detect and conceal the corrupted blocks by substituting the appropriate data.
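The sketch below illustrates this reconstruction-and-substitution step: the recovered approximation is inverse transformed with zeroed detail bands, up-sampled to full size, and copied into the lost areas. The nearest-neighbor up-sampling with a 3 x 3 box filter is a simple stand-in for the low-pass interpolation filter, and the lost_blocks list of block origins is an assumed input rather than part of the paper's interface.

```python
import numpy as np
import pywt

def conceal(received, approx_hat, lost_blocks, block=16, scale=1.0):
    """Rebuild a full-size reference from the recovered approximation
    coefficients and substitute the lost blocks in the received frame."""
    zeros = np.zeros_like(approx_hat)
    half = pywt.idwt2((approx_hat, (zeros, zeros, zeros)), 'haar')  # cf. (8)

    # 2x nearest-neighbor up-sampling followed by a 3 x 3 box filter as a
    # simple low-pass interpolation stage.
    up = np.kron(half, np.ones((2, 2)))
    pad = np.pad(up, 1, mode='edge')
    ref = sum(pad[i:i + up.shape[0], j:j + up.shape[1]]
              for i in range(3) for j in range(3)) / 9.0

    out = received.astype(float).copy()
    for (r, c) in lost_blocks:                      # replace only the lost areas
        out[r:r + block, c:c + block] = scale * ref[r:r + block, c:c + block]
    return out
```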
delay is created in the transmission according to predefined la-
The criterion for substituting the loss areas is different for
tency values. The packets are then randomly dropped in accor-
images and video. The substituted areas in images are identi-
dance with a known probabilistic distribution (Gaussian in this
fied by packet-size blocks of lost data, while, in case of video,
case) which has a preset (controllable) mean and variance. The
these are located using motion vectors and motion compensated
remainder of the packets are forwarded to the destination of a
error residual (MCER) values. An implementation issue here is
point-to-point network.
that the marker needs to be scaled before the appropriate areas
However, for wireless cases, the model in Fig. 3 does not
are substituted. The scaling can be either done throughout the
work due to the following reasons: 1) wireless channel has un-
image or only in the localized areas where the frame experi-
predictable variation with time; 2) co- and cross-channel inter-
enced packet losses. A global scaling constant is used when the
ferences are not accounted for; and 3) fading and power losses
image is globally scaled. In case of local scaling, the different
are not considered in the current model. Apart from these, the
local scaling factors are used based on the intensities of the sur-
model used in case of wired transmission is adapted for network
rounding areas. Both approaches are explored and the results
traffic characteristics of the Internet, typically like network con-
presented in Section V.
gestion, which is quite unlike the case in wireless transmission.
A link layer modeling instead of network layer is adopted
IV. CASE STUDIES WITH IMPLEMENTATION SCENARIOS
for wireless channel transmission scenarios. A simple point-to-
In this section, three different extensions where the algorithm point, two-state Markov chain, i.e., the Gilbert–Elliot model as
can be extended with minor modifications are presented and dis- shown in Fig. 4, has been adopted for wireless transmission sce-
cussed. The extensions to the regular implementation are: 1) ex- narios [26]. The two states in this model can be considered as
tension from wired to wireless transmission; 2) extension from the good state and bad state or the receive state and the loss

This means that the Markov chain is in the good state if the packet is received in time without any errors and is in the bad state if the packet is lost during transmission due to latency or bandwidth limitations of the channel.

The parameters p and q are called the transition probabilities of the Markov chain between the good and the bad states. The transition matrix of this two-state Markov chain can be represented as

    T = [ 1-p    p
           q    1-q ]    (9)

where p is the probability of moving from the good state to the bad state and q is the probability of moving from the bad state back to the good state. The values of p and q are quite far apart, with more emphasis on the good state; for a typical wireless channel, q and p would be around 0.999 and 0.001, respectively. Note here that each row of T sums to one. Such a model is followed here for the wireless channel simulations, and the results are presented in Section V.
Fig. 5. Block diagram of the error diffusion algorithm. D is the Floyd–Steinberg kernel and T(·,·) is the threshold operator.

Fig. 6. Block diagram of the generalized error-diffusion algorithm. h is the hysteresis value and A is a low-pass filter.

B. Gray Scale Versus Color

The algorithm can be extended to work in the case of color image/video transmission. In this case, differences can be seen not only in the algorithm implementation, but in the results as well. The variation in results and their analysis are presented in Section V, while the implementation changes are discussed here.

The half-toning technique used for gray-scale image processing is a simple feedback-based loop as shown in Fig. 5. Here, D is the Floyd–Steinberg kernel given in (1) and T(·,·) is a threshold operation. The errors in each sample, i.e., the differences between the input and the thresholded output, are "diffused" back into the neighboring samples using a feedback loop. This implementation gives a fairly good binary estimate of the original gray-scale image.

For the case of color images/videos, we use color dithering [27]. Here, the color space is divided into four subspaces: cyan (C), magenta (M), yellow (Y), and black (K). The original RGB color image is converted into a CMYK image using a transformation given by

    C = 1 - R - K,  M = 1 - G - K,  Y = 1 - B - K    (10)

where K = min(1 - R, 1 - G, 1 - B) and 0 ≤ R, G, B ≤ 1. The black values K are initially calculated using the old C, M, and Y values, and then the new C, M, and Y values are obtained by modifying the old ones using these K values. The error diffusion shown in Fig. 6 is then applied to the CMYK images individually to obtain a color half-toned image.

The block diagram of the generalized error diffusion algorithm used for color image half-toning, shown in Fig. 6, is quite similar to the Floyd–Steinberg dithering technique in Fig. 5 except that it has another positive feedback loop [28]. The sample created by the threshold operation goes through a scaled low-pass filter A with a predefined hysteresis value h. Since the sample values after the threshold process are sharper in the CMYK space, this low-pass filtering operation is required. A is chosen to be a 3 x 3 low-pass filter to simplify implementation with respect to the 3 x 3 Floyd–Steinberg kernel D, and h is chosen to be 0.5 in our implementation.

After the error-diffusion process, the image is converted back to RGB color space by an approximate inverse of the transformation defined in (10). The conversion can be represented as

    R = 1 - min(1, C + K),  G = 1 - min(1, M + K),  B = 1 - min(1, Y + K)    (11)
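A sketch of the color-space split and its approximate inverse, in the form of (10)-(11), is given below; it assumes RGB values normalized to [0, 1] and is only one plausible reading of the reconstructed equations.

```python
import numpy as np

def rgb_to_cmyk(rgb):
    """RGB -> CMYK split used before color error diffusion: K is taken
    from the old C, M, Y values (1-R, 1-G, 1-B) and subtracted out."""
    c, m, y = 1.0 - rgb[..., 0], 1.0 - rgb[..., 1], 1.0 - rgb[..., 2]
    k = np.minimum(np.minimum(c, m), y)
    return np.stack([c - k, m - k, y - k, k], axis=-1)

def cmyk_to_rgb(cmyk):
    """Approximate inverse of the transformation, in the spirit of (11)."""
    c, m, y, k = (cmyk[..., i] for i in range(4))
    return np.stack([1.0 - np.minimum(1.0, c + k),
                     1.0 - np.minimum(1.0, m + k),
                     1.0 - np.minimum(1.0, y + k)], axis=-1)
```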
practically not viable [even though it gives very high peak SNR
(PSNR) values after reconstruction] because it increases the pre-
and post-processing complexity of the video transceivers.
(10) A tradeoff issue that needs to be considered is a reduction in
the pre- and post-processing complexity against the loss of some
where and . The black values efficiency of the error concealment that the algorithm provides.
are initially calculated using old , , and values, and 2Implementation in H.263 is much more straightforward based on the fact
then the new , , and values are obtained by modifying that the algorithm can be extended to work for YUV (4:2:2) and (4:2:0) color
the old ones using these values. The error diffusion shown video frames.

Fig. 7. (a) Original kth frame of the Table Tennis sequence. (b) Original (k+1)th frame. (c) Motion vectors between the above frames. (d) Motion vectors when the watermark is embedded in only the kth frame. (e) Motion vectors when the watermark is embedded in both frames. (f) Difference in motion vectors for (c) and (e).

We can achieve this tradeoff in two ways. In the first method, we exploit the motion vector information of the video [29]. It can be seen from Fig. 7 that embedding the watermark in consecutive frames does not cause noticeable differences to the motion vectors. Previous work also suggests that any small effects that this process creates can be used to enhance the error concealment at the receiver by coding and transmitting these differences [30]. Hence, we can embed the low-resolution versions of the I-frames inside themselves with little change in the motion vector information of the P- and the B-frames, provided that the P-frames are embedded in themselves.

The effects of the algorithm on motion vectors can be seen in Fig. 7. The original kth and (k+1)th frames of the Table Tennis video are shown in Fig. 7(a) and (b), respectively. Fig. 7(c) shows the original motion vectors between these two frames, and Fig. 7(d) shows the effect of watermarking on the motion vectors when the watermark is embedded only in the kth frame.

Note that these effects are seen in the areas (the mid-frequency range of the frame) where the watermark is embedded, specifically in the area of the table in this case. Fig. 7(e) shows the effects of watermarking on motion vectors when the mark is embedded in both frames. This figure shows that embedding in both frames removes the effects of data hiding on motion vectors. Fig. 7(f) shows the difference between Fig. 7(c) and (e).

However, this algorithm can only conceal the spatial errors introduced in the transmission while the temporal errors continue to be present. These latter errors in the video can be easily reduced by a simple modification of the proposed technique. Instead of embedding the I-frames in themselves, we embed the low-resolution version of the subsequent P-frame in the current I-frame. This not only enhances the error concealment at the receiver for the current and subsequent P- and B-frames, but also preserves the motion information by giving a reference for the prediction. For enhanced error concealment in P- and B-frames, their motion vectors can also be embedded along with the low-resolution versions of the frames. A disadvantage of this modification would be that it does not take care of the loss of information in the I-frame itself.

In the second method to achieve the processing-efficiency tradeoff, we can embed the non-I-frames in the motion vectors themselves. Previous work suggests that an arbitrary one-dimensional (1-D) signal can be embedded into the motion vectors of a video and recovered at the receiver without much loss [31]. This can be adapted to the proposed algorithm such that the watermark can now be embedded into the motion vectors and recovered without much loss. To reduce the coding overhead in this case, we can embed every alternate P-frame (instead of every P-frame) in the motion vectors that correspond to the frame. These techniques have not yet been explored and are under investigation.

TABLE I. PERFORMANCE OF THE PROPOSED ALGORITHM. PSNR (IN dB) FOR A FIXED MEAN LOSS OF 15% AND VARIANCE OF 2.5%.

V. EXPERIMENTAL RESULTS AND ANALYSIS

The algorithm proposed in Section III with the extensions proposed in Section IV is implemented. For wired channel transmission, conventional user datagram protocol (UDP) with packet loss is considered, whereas a simulated point-to-point lossy link layer model is used for wireless cases. The following assumptions are made for simplicity with regard to the implementation of the algorithm: 1) the binary loss probability of the channel is assumed to be constant for a given network bandwidth; 2) the source transmission rate is assumed to be less than the maximum channel bandwidth; 3) no retransmissions occur; and 4) bit errors over successfully received packets are negligible.

The Haar wavelet is used for calculating the two-level approximate wavelet coefficients. For wired scenarios, the compressed output stream is vectorized and multiplied with a vector that is randomly generated using a Gaussian distribution. About 1000 varying packet loss simulations have been generated independently for each transmission, and their statistical average is taken to obtain the probability loss vector.

In the case of wireless transmission, the channel and its losses are simulated using the ns-2 simulator [32]. The power levels for the wireless transmission are kept almost constant. A constant fading is assumed for each of the packet transmissions in the individual channels of a multichannel scenario. It should be noted here that a standard CDMA2000 system [33] is followed due to its efficiency of operation in the link layer when compared to other wireless systems.

Table I summarizes the results of the experiment for wired and wireless transmissions. In the case of color images, the composite PSNR (CPSNR) is calculated at the receiver. The images/video frames that are implemented using the wired transmission simulations are marked accordingly in the table. Note that the percentage improvement due to localized error concealment in the case of wireless transmission is much more pronounced than in the case of wired transmission. For each image/video frame, the PSNR of the received image, the PSNR of the error-concealed image, and the PSNR of the locally scaled error-concealed image are noted.

When the received frame is error concealed using the proposed algorithm, the scaling operation performed on the concealed data blocks is fixed. Instead, if localized interpolation is performed on the reconstructed blocks by considering the neighboring pixels that were correctly received, an improvement in the visual quality of the restored frame is observed (as can be seen in Table I).
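One plausible realization of this localized scaling idea is sketched below: each substituted block is rescaled so that its mean matches the mean of the correctly received pixels in a one-pixel ring around it. The ring width, the block size, and the function name are illustrative; the paper's exact local interpolation is not spelled out at this level of detail.

```python
import numpy as np

def local_scale_block(frame, ref, r, c, block=16):
    """Scale a substituted block so its mean matches the surrounding
    correctly received pixels (frame and ref are float arrays)."""
    h, w = frame.shape
    r0, r1 = max(r - 1, 0), min(r + block + 1, h)
    c0, c1 = max(c - 1, 0), min(c + block + 1, w)
    ring = frame[r0:r1, c0:c1].astype(float)
    ring[(r - r0):(r - r0 + block), (c - c0):(c - c0 + block)] = np.nan  # drop the block itself
    gain = np.nanmean(ring) / (ref[r:r + block, c:c + block].mean() + 1e-6)
    frame[r:r + block, c:c + block] = gain * ref[r:r + block, c:c + block]
    return frame
```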
A sample result for the wired transmission case is shown in Fig. 8 for the Cameraman image with the following parameter values: α = 36, mean loss probability 0.15, and loss variance 2.5%. The received frame had a PSNR of 19.8271 dB and the error-concealed image had a PSNR of 27.6852 dB. These frames, along with the original frame, are shown in Fig. 8(a)–(c). Fig. 8(d) has been obtained by localized scaling error concealment. Here, the error-concealed image is locally scaled by using a localization kernel of size 8 x 8 or 16 x 16 (based on the size of the lost data area). The PSNR obtained by performing this localized scaling operation on the error-concealed image is 28.9047 dB. However, better results in terms of PSNR are expected if the kernel size is varied.

In Fig. 9, a sample result for the wireless transmission simulation is presented for the Sail image with the following parameter values: α = 5, average loss probability 0.12, and loss variance 5%. The received image had a PSNR of 19.3003 dB (the value of 19.4923 that is listed in the table is the CPSNR, i.e., it is obtained for the color image).

Fig. 8. (a) Original frame. (b) Received frame with mean loss probability = 0.15, variance = 2.5%; PSNR = 19.8271. (c) Error-concealed frame; α = 36; PSNR = 27.6852. (d) Localized scaling error-concealed frame; PSNR = 28.9047.

The original, received, and error-concealed (PSNR = 28.1702) images are shown in Fig. 9(a)–(c), respectively, and the localized error-concealed image obtained with an 8 x 8 kernel (PSNR = 29.9137) is shown in Fig. 9(d).

A sample set of 25 iterations of the error concealment and localized concealment algorithms on the Sail image (due to the randomness of the loss in wireless scenarios) is represented in Fig. 10 with variations in packet loss probability. As seen from these simulations, the quality of the error-concealed image is much better (approximately a 6–8-dB improvement) in all cases. The localized error concealment algorithm achieved higher quality still (approximately 2–3 dB over the error concealment algorithm). Note that the improvements in the PSNR values increased with increasing loss probabilities. The mean and standard deviations of these curves are shown in Fig. 11. Note that the variation of the standard deviations was higher at lower packet losses and lower at higher packet losses.

Since four copies of the marker are embedded in the image/video frame, it can be extracted with high quality even at higher loss probabilities of 0.4–0.6. The variation of the watermark quality with the loss probability for the same sample set of 25 simulations is plotted in Fig. 12. The mean and the standard deviations of the extracted watermark quality are plotted in Fig. 13. Note that, while the mean decreased, the standard deviations increased nonlinearly with increasing packet loss probabilities. This suggests that the variation in the decrease in extracted watermark quality is nonlinear, i.e., the decrease in quality varies less at lower loss probabilities and changes more at higher loss probabilities even with high robustness. It can be shown that finding an optimum value of the scaling parameter might compensate for this nonlinear variation.

Fig. 9. (a) Original image. (b) Received image with mean loss probability of 0.12 and variance 5% (PSNR = 19.3003). (c) Error-corrected image using the data hiding algorithm with α = 5 (PSNR = 28.1702). (d) Localized error-corrected image with kernel size 8 x 8 (PSNR = 29.9137).

Fig. 10. Comparison of the quality of the received frame versus the error-concealed (EC) and the localized error-concealed frames with variation in loss probability for a sample set of 25 simulations.

Fig. 11. Mean and standard deviation of the curves in Fig. 10.

Fig. 12. Variation of the quality of the extracted watermark with increasing packet loss probabilities for the same sample set as in Fig. 10.

Fig. 13. Mean and standard deviation of the extracted watermark quality in Fig. 12.

A. Scaling Parameter Variation

The value of the scaling parameter α is changed to vary the strength of the embedded watermark. The parameter in this case defines the robustness of the watermark with respect to channel errors. As α increases, it not only enhances the quality of the received marker, but also makes the embedded watermark more visible, thus reducing the quality of the received image/frame. Hence, it is important to choose an optimum value of α such that there is a balance between the quality of the received image/video frame and the amount of information embedded, so that the quality of the received watermark remains high.

Fig. 14. Results with different values of α. (a) α = 1. (b) α = 10 (extreme case).

The results of the experiment with α variation can be seen in Fig. 14. The Sail image in Fig. 14(a) is embedded with an α value of 1, while that in Fig. 14(b) is embedded with an α value of 10 as the extreme case. The embedded signal content in Fig. 14(b) increased multifold, thus greatly decreasing the frame quality.

The extracted watermark from the received image, however, has quality variations that follow a different trend. Fig. 15 shows the variation of the quality of the extracted watermark for a packet loss probability of 0.09.

Fig. 15. Result of watermark quality variation with α for a sample packet loss probability of 0.09. The plot also shows the convergence of watermark quality for higher values of α.

Fig. 16. Comparison of extracted watermark quality with degrading frame quality after watermark insertion.

Note that the quality converges to a maximum value (32.11 dB in the case of the Sail image). The value of α at which this maximum is achieved, denoted here as α_max, can be defined as the smallest α after which there is no substantial increase in quality for increasing α. The value of α_max for the Sail image is 4.74.

Since increasing α would decrease the quality of the received image/video frame, choosing α ≤ α_max would be a start in finding the optimum α. However, even at α = α_max, we are not guaranteed a high quality of the received image. Here, the optimum is found graphically by plotting the quality variations of the received image and the extracted watermark against α and finding the point of intersection of these two curves, based on the assumption that the transmitter has a good estimate of the channel losses (this can be achieved either by using feedback from the receiver or by expecting the channel to follow a particular probabilistic distribution that closely approximates the real-time channel losses). Fig. 16 shows this plot for the Sail image for a mean packet loss probability of 0.09. As expected, the optimum value of α (the point of intersection, 4.19) lies below α_max. Also, since this value is close to α_max, the nonlinear marker quality fluctuations seen in Fig. 13 are reduced if not eliminated.
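The graphical search described above can be automated by locating the crossing of the two sampled quality curves, as in the sketch below; the curves are assumed to be measured on a common grid of α values, which is an assumption of this illustration rather than a detail from the paper.

```python
import numpy as np

def optimum_alpha(alphas, frame_psnr, mark_psnr):
    """Return the alpha at which the (decreasing) frame-quality curve
    crosses the (increasing) extracted-watermark-quality curve."""
    diff = np.asarray(frame_psnr, float) - np.asarray(mark_psnr, float)
    idx = np.where(np.diff(np.sign(diff)) != 0)[0]     # bracketing sample pair
    if idx.size == 0:
        return None                                    # no crossing on this grid
    i = idx[0]
    t = diff[i] / (diff[i] - diff[i + 1])              # linear interpolation
    return alphas[i] + t * (alphas[i + 1] - alphas[i])
```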
B. Effects on Chrominance Components

For the case of color images/video, losses are simulated both in the RGB color space and in the YCbCr color space. In the latter case, even though losses that affect all three channels are considered, those that affect only the luminance channel are given higher priority because most of the embedded information in the YCbCr (YUV in the case of H.263) color space is concentrated in the luminance channel.

The results of the algorithm implementation on the Sail color image are shown in Fig. 17. The original color image is shown in Fig. 17(a), while Fig. 17(b) represents the received image with errors due to the wireless channel occurring in all three channels of the YCbCr color space (PSNR = 19.4923). The error-concealed image (PSNR = 26.5968) and the localized scaled error-concealed image (PSNR = 29.6880) are shown in Fig. 17(c) and (d), respectively.

Here, the embedded color dithered watermark is in the RGB color space and all three channels are embedded into the luminance channel of the image. The watermark can also be converted into the YCbCr color space before it is embedded. This way, the implementation of the algorithm at the receiver is much simpler. Besides, it helps in the compression of the data to be embedded. For example, instead of embedding the entire color information, we can subsample the chrominance components and embed the 4:2:2 or 4:2:0 watermark in the luminance channel of the original image/video frame. This proves to be effective for codecs like H.263 where such subsampling is usually adopted.

The extracted watermark quality difference between embedding the RGB channels of the watermark and embedding the YCbCr channels is very small. This can be seen from Fig. 18. The original color watermark, the extracted color watermark from the received image (mean loss probability 0.15, loss variance 3%) when the RGB channels of the watermark are embedded with α = 5, and that when the YCbCr channels are embedded with α = 5 are shown in Fig. 18(a)–(c), respectively. In fact, the simplicity in the implementation and the compression are the only reasons that make embedding subsampled chrominance components more appealing.

The quality of the extracted color watermark can be further improved if different α values are used to embed each of the Y, Cb, and Cr channels independently. Fig. 18(d) shows the extracted color watermark when the α values for the luminance and the chrominance components are 3.75, 4.5, and 5, respectively. In this case, a reasonable improvement is seen in the quality of the extracted watermark when compared to Fig. 18(b) and (c). The values of α_Y, α_Cb, and α_Cr are randomly chosen here and may not be optimum for each individual channel's reconstruction. However, better results are expected if such optimization is performed.

Fig. 18. (a) Original watermark color image. (b) Received watermark for a mean loss probability of 0.15 and variance 3% when the RGB channels are embedded with α = 5. (c) Extracted watermark for the same loss when the YCbCr channels (4:2:0) are embedded with α = 5. (d) Extracted watermark when the YCbCr (4:2:0) channels are embedded with α_Y = 3.75, α_Cb = 4.5, and α_Cr = 5. Note that (b)–(d) are inverse half-toned images.

TABLE II. PERFORMANCE COMPARISON IN PSNR (dB).

Fig. 17. (a) Original color image. (b) Received image with mean loss probability of 0.15 and variance 3% with loss occurring in all three channels (PSNR = 19.4923). (c) Error-concealed image using the data hiding algorithm (PSNR = 26.5968). (d) Localized error-corrected image (PSNR = 29.6880).

C. Comparison With Other Techniques

Table II shows the comparison of the proposed technique and the techniques presented in [14]–[18] (obtained from [18]). Note that the proposed algorithm gives an improvement of 1–3 dB over the existing error concealment techniques.

An alternate method to achieve a high level of error concealment performance is to transmit the low-resolution version of the image/video frame as encoded side information. The encoded bits of the low-resolution image would be generated in the same manner as proposed. They would comprise the side information and be transmitted through the same channel instead of being embedded in the source. This approach yields results that are comparable to the proposed technique. However, there are three problems with this approach, which are given as follows.

• Side information can be appended to the encoded video, but this addition incurs higher transmission bandwidth. If, however, the compression or encryption of the video is increased to accommodate the side information without increasing the bandwidth, there would be an increase in computational complexity. Data hiding provides a seamless alternative to achieve this tradeoff [34].

• Since the effects of the wireless channel are approximated by independent or burst errors from packet losses, there would be significant loss incurred in the low-resolution reference. Therefore, recovering the lost data of the high-resolution image would be very difficult if either the areas where the losses occur match or higher losses occur in the zoomed-up low-resolution image.

• More often than not, the compressed mark-embedded image requires more bits to be encoded than the compressed original due to the higher entropy of the data hiding process. However, the increase in the number of bits is very small. For example, a Hockey video frame of size 240 x 320 was 7899 bytes after compression, while its marker was 1085 bytes. The mark-embedded image was 8015 bytes after compression. This implies that the transmission of side information bits would require 969 bytes more (and therefore higher bandwidth) than the encoded data-hidden signal.

Simulations have been carried out for the case of transmitting the low-resolution image as a data-hidden watermark (ECDH) and as encoded side channel information (ECSI) in a wireless scenario.

Fig. 19. PSNR-versus-loss-rate curves for the techniques of ECDH and ECSI for the Sail image. (a) PSNR of the reconstructed image. (b) PSNR of the retrieved low-resolution images.

Fig. 19(a) gives the PSNR-versus-loss-rate curves of the two techniques. It can be seen that ECDH performs better at higher loss rates (up to a 3–4-dB improvement), while the performance of both techniques is comparable at lower loss rates (up to about 0.2). Note that, at very low packet loss probabilities (below 0.04), ECSI tends to give a better performance. This may be due to the fact that the end result of ECDH suffers from minor data hiding defects, which are rarely visible. Fig. 19(b) shows the variation in the PSNR values of the received low-resolution reference images.

VI. CONCLUDING REMARKS

The data-hiding technique employed for error concealment is efficient in protecting the low–low wavelet coefficients of the frame from transmission errors by embedding multiple scaled copies of these coefficients in the frame itself. A localized scaling error concealment technique is implemented for improving the perceptual quality of the video frames affected by packet losses. Simulation results with analysis have been presented for various loss rates. Wired-to-wireless, gray-scale-to-color, and image-to-video extensions to the proposed algorithm are presented. Possible future work lies in the video processing extension. It includes exploiting motion vector information to embed the watermark of the subsequent P-frame in the current I-frame. Another future direction would be to identify the areas in the image/video frame where there is little or no motion and embed the watermark in those areas. This might prove to be useful since viewers often tend to focus on the areas that contain high motion.

Another possible future direction would be to devise a watermarking technique that is robust for this application. Spread spectrum watermarking was considered here as a proof of concept due to its ease of implementation in the DCT domain. However, it is not ideal for this application due to its large spreading gain and higher entropy. Quantization-based watermarking may be an alternative. A robust watermarking technique that is targeted at error control applications still needs to be developed.

ACKNOWLEDGMENT

The authors would like to thank the reviewers for their valuable and insightful suggestions at various stages of the paper revisions.

REFERENCES

[1] Y. Wang and Q.-F. Zhu, "Error control and concealment for video communications," Proc. IEEE, vol. 86, no. 5, pp. 974–997, May 1998.
[2] S. B. Wicker, Error Control Systems for Digital Communication and Storage. Englewood Cliffs, NJ: Prentice-Hall, 1995.
[3] T. Kalker, "Considerations on watermarking security," in Proc. IEEE Workshop Multimedia Signal Processing, Cannes, France, Oct. 2001, pp. 201–206.
[4] C. B. Adsumilli, M. C. Q. Farias, M. Carli, and S. K. Mitra, "A hybrid constrained unequal error protection and data hiding scheme for packet video transmission," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, vol. 5, Hong Kong, Apr. 2003, pp. 680–683.
[5] Y. Liu and Y. Li, "Error concealment for digital images using data hiding," presented at the 9th DSP Workshop, Hunt, TX, Oct. 2000.
[6] Y. Shao, L. Zhang, G. Wu, and X. Lin, "Reconstruction of missing blocks in image transmission by using self-embedding," in Proc. Int. Symp. Intelligent Multimedia, Video and Speech Processing, Hong Kong, May 2001, pp. 535–538.
[7] P. Yin, B. Liu, and H. H. Yu, "Error concealment using data hiding," in Proc. IEEE Int. Conf. Acoustics, Speech, and Signal Processing, vol. 3, May 2001, pp. 1453–1456.
[8] C.-S. Lu, "Wireless multimedia error resilience via a data hiding technique," in Proc. Int. Workshop Multimedia Signal Processing, Virgin Islands, Dec. 2002, pp. 316–319.
[9] J. Lee and C. S. Won, "A watermarking sequence using parities of error control coding for image authentication and correction," IEEE Trans. Consum. Electron., vol. 46, no. 2, pp. 313–317, May 2000.
[10] J. Wang and L. Ji, "A region and data hiding based error concealment scheme for images," IEEE Trans. Consum. Electron., vol. 47, no. 2, pp. 257–262, May 2001.
[11] F. Bartolini, A. Manetti, A. Piva, and M. Barni, "A data hiding approach for correcting errors in H.263 video transmitted over a noisy channel," in Proc. IEEE Workshop Multimedia Signal Processing, Oct. 2001, pp. 65–70.
[12] K. Munadi, M. Kurosaki, and H. Kiya, "Error concealment using digital watermarking technique for interframe video coding," in Proc. Int. Tech. Conf. Circuits/Systems, Computers, and Communications, Jul. 2002, pp. 599–602.

[13] A. Yilmaz and A. A. Alatan, "Error concealment of video sequences by data hiding," in Proc. Int. Conf. Image Processing, vol. 3, Sept. 2003, pp. 679–682.
[14] P. Salama, N. B. Shroff, E. J. Coyle, and E. J. Delp, "Error concealment techniques for encoded video streams," in Proc. Int. Conf. Image Processing, vol. 1, Oct. 1995, pp. 9–12.
[15] J. W. Park and S. U. Lee, "Recovery of corrupted image data based on the NURBS interpolation," IEEE Trans. Circuits Syst. Video Technol., vol. 9, no. 10, pp. 1003–1008, Oct. 1999.
[16] H. Sun and W. Kwok, "Concealment of damaged block transform coded images using projection onto convex sets," IEEE Trans. Image Process., vol. 4, no. 4, pp. 470–477, Apr. 1995.
[17] Y. Wang and Q. Zhu, "Signal loss recovery in DCT-based image and video codecs," in Proc. SPIE Conf. Visual Communications and Image Processing, vol. 1605, Nov. 1991, pp. 667–678.
[18] X. Li and M. T. Orchard, "Novel sequential error-concealment techniques using orientation adaptive interpolation," IEEE Trans. Circuits Syst. Video Technol., vol. 12, no. 10, pp. 857–864, Oct. 2002.
[19] I. Cox, J. Kilian, F. Leighton, and T. Shamoon, "Secure spread spectrum watermarking for multimedia," IEEE Trans. Image Process., vol. 6, no. 12, pp. 1673–1687, Dec. 1997.
[20] M. D. Swanson, M. Kobayashi, and A. H. Tewfik, "Multimedia data-embedding and watermarking technologies," Proc. IEEE, vol. 86, no. 6, pp. 1064–1087, Jun. 1998.
[21] J. Buchanan and L. Streit, "Threshold-diffuse hybrid halftoning methods," in Proc. Western Computer Graphics Symp. (SKIGRAPH), 1997, pp. 79–90.
[22] R. W. Floyd and L. Steinberg, "An adaptive algorithm for spatial gray-scale," Proc. Soc. Information Display, vol. 17, no. 2, pp. 75–78, Apr. 1976.
[23] Z. Xiong, M. T. Orchard, and K. Ramchandran, "Inverse halftoning using wavelets," IEEE Trans. Image Process., vol. 8, no. 10, pp. 1479–1483, Oct. 1999.
[24] M. Mese and P. P. Vaidyanathan, "Look up table (LUT) inverse halftoning," IEEE Trans. Image Process., vol. 10, no. 10, pp. 1566–1578, Oct. 2001.
[25] M.-Y. Shen and C.-C. Kuo, "A robust nonlinear filtering approach to inverse halftoning," J. Vis. Commun. Image Representat., vol. 12, pp. 84–95, Mar. 2001.
[26] J. J. Lemmon, "Wireless link statistical bit error model," National Telecommunications and Information Administration (NTIA), U.S. Dept. of Commerce, Tech. Rep. 02-394, 2002.
[27] D. L. Lau and G. R. Arce, Modern Digital Halftoning. New York: Marcel Dekker, 2001.
[28] H. R. Kang, Color Technology for Electronic Imaging Devices. Bellingham, WA: SPIE Opt. Eng. Press, 1997.
[29] F. Dufaux and F. Moscheni, "Motion estimation techniques for digital TV: a review and a new contribution," Proc. IEEE, vol. 83, no. 6, pp. 858–876, Jun. 1995.
[30] A. M. Tekalp, Digital Video Processing. Englewood Cliffs, NJ: Prentice-Hall, 1995.
[31] K. A. Peker, A. A. Alatan, and A. N. Akansu, "Low-level motion activity features for semantic characterization of video," in Proc. IEEE Int. Conf. Multimedia and Expo., New York, Aug. 2000, pp. 801–804.
[32] The Network Simulator, 1995. [Online]. Available: http://www.isi.edu/nsnam/ns/
[33] S. Dennet, The CDMA2000 ITU-R RTT Candidate Submission (0.18), 1998.
[34] M. Wu, "Multimedia data hiding," Ph.D. dissertation, Dept. Elect. Eng., Princeton Univ., Princeton, NJ, Jun. 2001.

Chowdary B. Adsumilli (S'95) received the B.Tech. degree in electronics and communications engineering from Jawaharlal Nehru Technological University, Hyderabad, India, in 1999 and the M.S. degree in electrical and computer engineering from the University of Wisconsin, Madison, in 2001. He is currently working toward the Ph.D. degree at the Image Processing Laboratory in Electrical and Computer Engineering, University of California, Santa Barbara.
His research interests include image and video processing, multimedia communications, error resilience and concealment, watermarking, video quality assessment, and wireless video.

Mylène C. Q. Farias (M'02) received the B.Sc. degree in electrical engineering from Universidade Federal de Pernambuco (UFPE), Pernambuco, Brazil, in 1995, the M.Sc. degree in electrical engineering from the Universidade Estadual de Campinas (UNICAMP), Sao Paulo, Brazil, in 1998, and the Ph.D. degree in electrical and computer engineering from the University of California, Santa Barbara (UCSB), in 2004.
She is currently a Post-Doctoral Researcher with the Departments of Electrical and Computer Engineering and Psychology, UCSB, under the supervision of Prof. S. K. Mitra and Prof. J. M. Foley, respectively. Her current interests include video quality assessment, video processing, multimedia, watermarking, and information theory.

Sanjit K. Mitra (S'59–M'63–SM'69–F'74–LF'01) received the B.Sc. (Hons.) degree in physics from Utkal University, Cuttack, India, in 1953, the M.Sc. (Tech.) degree in radio physics and electronics from Calcutta University, Calcutta, India, in 1956, and the M.S. and Ph.D. degrees in electrical engineering from the University of California, Berkeley, in 1960 and 1962, respectively.
He has been a Professor of electrical and computer engineering with the University of California, Santa Barbara, since 1977, where he served as Chairman of the Department from July 1979 to June 1982. He has published over 600 papers in signal and image processing and 12 books and holds five patents.
Dr. Mitra is a Fellow of the AAAS and SPIE and a member of EURASIP. He has served the IEEE in various capacities, including service as the President of the IEEE Circuits and Systems Society in 1986 and as a Member-at-Large of the Board of Governors of the IEEE Signal Processing Society from 1996 to 1999. He is a member of the U.S. National Academy of Engineering, an Academician of the Academy of Finland, a member of the Norwegian Academy of Technological Sciences, a foreign member of the Croatian Academy of Sciences and Arts, and a foreign member of the Academy of Engineering of Mexico. He was the recipient of the 1973 F. E. Terman Award and the 1985 AT&T Foundation Award of the American Society of Engineering Education, the 1989 Education Award and the 2000 Mac Van Valkenburg Society Award of the IEEE Circuits and Systems Society, the 1996 Technical Achievement Award and the 2001 Society Award of the IEEE Signal Processing Society, the IEEE Millennium Medal in 2000, the McGraw-Hill/Jacob Millman Award of the IEEE Education Society in 2001, and the 2002 Technical Achievement Award of the European Association for Signal Processing (EURASIP). He was the co-recipient of the 2000 Blumlein–Browne–Willans Premium of the Institution of Electrical Engineers (London) and the 2001 IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY Best Paper Award. He has been awarded honorary doctorate degrees from the Tampere University of Technology, Tampere, Finland, and the "Politehnica" University of Bucharest, Bucharest, Romania.

Marco Carli (SM'05) received the Laurea degree in telecommunication engineering from the University of Roma, "La Sapienza," Rome, Italy, in 1996.
He was with Datamat-Systems Engineering Company until 1997. He is now an Associate Researcher with the University of Rome, "Roma Tre," Rome, where he has also been a Lecturer for the graduate courses on telecommunication systems and multimedia communications. Since August 2000, he has been a Visiting Research Associate with the Image Processing Laboratory, University of California, Santa Barbara. His research interests are in the area of digital signal and image processing and in multimedia communications.
Dr. Carli is a member of the Technical Committees of several IEEE conferences and a reviewer for several IEEE journals.
