
Delivering Quality of Experience in Multimedia Networks

Harold Batteram, Gerard Damm, Amit Mukhopadhyay, Laurent Philippart, Rhodo Odysseos, and Carlos Urrutia-Valdés

Next-generation multimedia networks need to deliver applications with a high quality of experience (QoE) for users. Many network elements provide the building blocks for service delivery, and element managers provide performance data for specific network elements. However, this discrete measurement data is not sufficient to assess the overall end user experience with particular applications. In today’s competitive world of multimedia applications, it is imperative for service providers to differentiate themselves in delivering service level agreements with certainty; otherwise they run the risk of customer churn. While QoE for well-established services like voice and Internet access is well understood, the same cannot be said about newer multimedia services. In this paper, we propose parameters for measuring the QoE for newer services. We propose and justify parameter values for satisfactory end user experience and show how standard measurement data can be collected from various network elements and processed to derive end user QoE. © 2010 Alcatel-Lucent.

Introduction
Service providers are looking at new applications to compensate for declining revenues from traditional voice and flat-rate high-speed data services. These are often multimedia applications involving text, voice, pictures, video clips, hyperlinks, and mixed media, and can be provided by either the service provider (SP) or an Internet-based application provider (as shown in Figure 1). In today’s highly competitive environment, users have the option of choosing from a plethora of service providers such as wireless, wireline, and cable operators. Therefore, it is not enough to simply make the services available to users; service providers must deliver those services in such a way that users fully enjoy a rich experience at a reasonable price.

In addition, the transition of network traffic from primarily voice to primarily data is a well-known phenomenon in the telecommunication industry. What is more interesting is that data traffic is no longer just a best-effort type of traffic. Many applications which run as data traffic require a high quality of service to meet user needs and expectations. Consequently, many standards bodies and industry organizations such as the International Telecommunication Union (ITU), 3rd Generation Partnership Project (3GPP*), and Broadband Forum (formerly the Digital Subscriber Line [DSL] Forum) have come up with various classifications of services and associated quality of service (QoS) parameters. However, none of these

Bell Labs Technical Journal 15(1), 175–194 (2010) © 2010 Alcatel-Lucent. Published by Wiley Periodicals, Inc.
Published online in Wiley InterScience (www.interscience.wiley.com) • DOI: 10.1002/bltj.20431
Panel 1. Abbreviations, Acronyms, and Terms

3GPP—3rd Generation Partnership Project
CDMA—Code division multiple access
CLI—Command line interface
DSL—Digital subscriber line
DSLAM—Digital subscriber line access multiplexer
EMS—Element management system
EPG—Electronic program guide
GSM—Global System for Mobile Communications
HD—High definition
IETF—Internet Engineering Task Force
IM—Instant messaging
IMS—IP Multimedia Subsystem
IP—Internet Protocol
IPTV—Internet Protocol television
ITU—International Telecommunication Union
ITU-T—ITU Telecommunication Standardization Sector
KPI—Key performance indicator
KQI—Key quality indicator
MDI—Media delivery index
MOS—Mean opinion score
NE—Network element
NGN—Next-generation networks
NMS—Network management system
NOC—Network operations center
OTT—Over-the-top
PoC—Push-to-talk over cellular
QoE—Quality of experience
QoS—Quality of service
RCS—Rich communications suite
RTP—Real Time Transport Protocol
SAP—Service access point
SBC—Session border controller
SD—Standard definition
SLA—Service level agreement
SMS—Short message service
SP—Service provider
SQM—Service quality management
STB—Set-top box
TM Forum—TeleManagement Forum
TV—Television
VoD—Video on demand
VoIP—Voice over Internet Protocol

metrics alone clearly captures end user quality of experience (QoE), as the measurements deal only with network elements.

In this paper, QoE is defined as the measure of how well a system or an application meets the user’s expectations [5, 12]. This concept is different from quality of service, which focuses on measuring performance from a network perspective. For instance, QoE focuses on user-perceived effects, such as degradation in voice or video quality, whereas QoS focuses on network effects such as end-to-end delays or jitter. Of course, QoE is directly related to QoS, but the challenge for a service provider is to have the right set of tools and processes to map the QoS at the network level to the QoE at the user and session levels, and to have the ability to control it.

Another important point to note is that measurements in individual nodes may indicate acceptable QoS, but end users may still be experiencing unacceptable QoE. Assume, for example, that there is an edge router with an inadequately engineered voice over Internet Protocol (VoIP) buffer size; also assume that packet overflow is monitored at the aggregate queue level. The VoIP buffer may be overflowing quite frequently, but because the VoIP traffic volume is relatively low compared to other types of traffic and because other buffers do not overflow, aggregate buffer statistics may still appear to be satisfactory (acceptable QoS). This, however, is no consolation for the VoIP subscriber, who is consistently experiencing service interruptions (poor QoE because of dropped packets).

Figure 1. End-to-end architecture. (Application servers reachable through the access network, the SP core network, and the Internet. SP—Service provider.)
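To make the buffer example concrete, the following sketch (our own illustration; all counter values are hypothetical) shows how an aggregate drop rate can look acceptable while the VoIP queue alone suffers heavy loss:

```python
# Illustrative sketch: why aggregate queue statistics can mask poor
# per-class QoE. All counter values below are hypothetical.

def drop_rate(dropped: int, offered: int) -> float:
    """Fraction of offered packets that were dropped."""
    return dropped / offered if offered else 0.0

# Hypothetical per-queue counters on an edge router port.
queues = {
    "voip":        {"offered": 10_000,    "dropped": 800},    # 8% loss: poor QoE
    "best_effort": {"offered": 2_000_000, "dropped": 1_000},  # 0.05% loss
}

total_offered = sum(q["offered"] for q in queues.values())
total_dropped = sum(q["dropped"] for q in queues.values())

# Aggregate statistic (what the node reports) vs. per-class reality.
aggregate = drop_rate(total_dropped, total_offered)
voip = drop_rate(queues["voip"]["dropped"], queues["voip"]["offered"])

print(f"aggregate loss: {aggregate:.4%}")  # under 0.1%: looks "acceptable"
print(f"VoIP loss:      {voip:.4%}")       # 8%: reveals the poor QoE
```

Monitoring only the aggregate would report acceptable QoS while every VoIP subscriber behind this port experiences dropped packets.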



Service providers have traditionally focused on determining and managing QoS, not QoE. The most common and time-tested means for measuring QoS is the use of a performance management system that extracts measurement data from network elements or element management systems (EMS) to assess the performance of various network elements across the network. However, as noted in the previous paragraph, this method does not guarantee acceptable QoE estimation for individual applications, sessions, or users. Several approaches have emerged over the past few years to measure application performance. The focus of this paper is to go a step further and explore the end user experience. The key is not only to measure QoE but also to manage it effectively for the greatest impact on the operator’s balance sheet. Ideally speaking, QoE issues should be prioritized based on their relative impact on potential revenue, as it is often impractical to address all the problems at one time.

We begin by providing details on application performance measurement techniques; this section lays the foundation for the rest of the discussion in the paper. The next section provides an overview of standards, and of the standards gaps that exist for current methodologies. We follow that with details on a generic approach we propose to cope with the ever-increasing complexity of new applications. This discussion is followed by a section which provides some examples of key quality indicators/key performance indicators (KQIs/KPIs) which lead to a way to measure QoE. In the final section, we discuss further work needed in this area.

Application Performance Measurement Techniques
There are primarily three techniques prevalent in the market today for measuring application performance: 1) using test packets, 2) using probes in network elements and user equipment, and 3) correlating measurements from several network elements. This section provides a brief discussion of these techniques, with the greatest focus on the third technique, since it is a very complex method but has very few drawbacks otherwise. It may be noted, however, that any combination of these techniques may be used in a particular performance measurement tool.

Performance Measurement Using Test Packets
As a general method, test packets are sent from management systems, and performance metrics such as delay, jitter, and packet loss are measured along the way. The results of these measurements are used as a proxy for the performance of real traffic. This method is also often used for troubleshooting. While this is a very simple technique, care needs to be taken when interpreting the results. It is not desirable to send test packets during busy hours, since this will unnecessarily load the network with management traffic. On the other hand, unless testing is performed during peak usage hours, the measurements will not truly reflect user experience at the most important time.

Performance Measurement Using Probes
In this method, probes in the form of software agents or network appliances are deployed on network elements and user devices (in the software agent case). Measurements based on these probes provide a very accurate status of the devices at any time. Furthermore, in the case of software agents, true user experience can be measured unobtrusively, since measurements are obtained directly from user devices. The main drawback of this technique is that it does not scale for large networks. While it is very useful for fault isolation and root cause analysis, this technique cannot be used for monitoring large networks with millions of user devices and network elements.

Performance Management Using Network Measurements
In this method, measurements from various network elements and their EMSs are collected and processed at a central repository, as illustrated in Figure 2. The repository also collects data from various network management systems (NMS), e.g., configuration, inventory, subscriber, or fault management systems. Intelligent software in the central repository correlates and analyzes these different sets of data. It can also apply a rules engine to monitor certain events or provide diagnostics. The rules engine may, alternatively, trigger some further data collection to probe deeper into potential problems. Managing QoE starts with well-chosen ongoing and ad hoc measurements.



Figure 2. Use of standard measurement for QoE determination. (Data collected from element management systems [EMS], network elements [NE], and operations support systems [OSS] flows into a central repository for KPI/KQI aggregation, analysis, rules-based monitoring, and automated diagnostics, with results presented to an administrator. KPI—Key performance indicator; KQI—Key quality indicator.)

These measurements form the basis for all the analyses needed to determine the level of QoE.

To get a better understanding of the process, let us look at a simplified example application for Internet Protocol television (IPTV) in which a human error caused a degradation of QoE. Figure 3 depicts the assumed architecture. Here is the sequence of events around the problem and its resolution:
1. On a permanent basis, the following KPIs are collected: set-top box (STB) retries per channel; digital subscriber line access multiplexer (DSLAM) uplink and downlink port loss and bandwidth; edge and core network router and switch port loss and bandwidth; and headend-level monitoring of each channel.
2. Over time these KPIs are aggregated and archived.
3. An operator makes a mistake using a command line interface (CLI) command and misconfigures a bandwidth profile on a router service access point (SAP). This restricts the bandwidth allowed on that SAP but keeps it at a high enough value to allow a significant amount of traffic through.
4. STBs downstream of that router port identify missing data and begin sending retry requests.
5. The KPI threshold for STB retry requests is crossed and alarms are generated (A1).
6. The KPI threshold for DSLAMs and switches is not crossed and no alarms are triggered (since no traffic is dropped).
7. The KPI threshold-crossing rules for the misconfigured SAP may trigger intermittent alarms (A2), based on port loss.
8. The KPI threshold-crossing rules for headend-level monitoring do not raise an alarm.
9. The alarms appear on the administrator dashboards and in the reports.
10. The logical correlation engine recognizes that the A1 and A2 alarms are related by causality.



Figure 3. Simplified IPTV architecture. (An STB, DSLAM, switch, and router deliver TV content. A QoE management server collects measurements [step 1] and performs KQI/KPI aggregation and analysis [step 2]. An operator CLI command misconfigures the router [step 3], STBs send retry requests due to missing data [step 4], STB and router alerts A1 and A2 are raised [steps 5 and 7], the correlation engine identifies the probable cause and sends a report [steps 9–13], and the administrator corrects the problem [step 14].)

(A2’s probable cause is a parent of A1’s probable cause in the causality graph used for root cause analysis.)
11. The spatial correlation engine, using its knowledge of the topology, recognizes that the A2 alarms are issued by a node which sits on the network paths of all the A1 alarms.
12. These engines in tandem establish that the A2 alarms are a probable root cause of the A1 alarms, simplifying the administrator’s view (since correlated A1 notifications are collapsed under A2) and directing the troubleshooting operations towards the suspect SAP by putting the events in context (e.g., which services have a problem, the severity of the problem, which alarms are raised, which rules are triggered, and which customers [STBs] are impacted).
13. Bandwidth misconfiguration may be listed as a possible cause of the A2 alarms, providing even more assistance to the troubleshooting process.
14. The network operations center (NOC), when treating these alarms, will rapidly inspect the suspected SAP, quickly discover that the bandwidth profile was misconfigured, and rectify the problem. (The whole process from detection to resolution could happen within minutes, before customers even start to complain, call their customer care centers, or open tickets on their portal.)
15. If the bandwidth error had been barely detectable (i.e., causing only rare problems), the problem would still be discovered through the historical correlation engine.
16. Rather than STB retries (a network-level KPI), an end user QoE estimation could also be used as input for the correlation engines.

Understanding network services is necessary, but measuring complete end-to-end performance is impractical.
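The collapsing of A1 alarms under A2 in steps 10 through 12 can be sketched as follows. This is a toy illustration of the idea, with a hypothetical causality graph and topology; a real engine would work over much richer models:

```python
# Toy sketch of logical + spatial alarm correlation: A1 alarms are
# collapsed under an A2 alarm when their probable causes are related in a
# causality graph AND the A2 source sits on the path of every A1 source.
# The graph, topology, and alarm contents are hypothetical.

# Causality graph: child probable cause -> parent probable cause.
CAUSALITY_PARENT = {"stb_retries": "sap_port_loss"}

# Topology: downstream path (toward the core) for each edge node.
PATH_TO_CORE = {
    "stb1": ["dslam1", "switch1", "router1"],
    "stb2": ["dslam1", "switch1", "router1"],
}

def correlate(a1_alarms, a2_alarm):
    """Collapse A1 alarms under A2 if causally and spatially related."""
    # Logical check: A2's cause is the parent of the A1 cause.
    causal = all(
        CAUSALITY_PARENT.get(a["cause"]) == a2_alarm["cause"]
        for a in a1_alarms
    )
    # Spatial check: the A2 node lies on every A1 node's path.
    spatial = all(
        a2_alarm["node"] in PATH_TO_CORE[a["node"]] for a in a1_alarms
    )
    if causal and spatial:
        return {"root": a2_alarm, "collapsed": a1_alarms}
    return None

a1s = [{"node": "stb1", "cause": "stb_retries"},
       {"node": "stb2", "cause": "stb_retries"}]
a2 = {"node": "router1", "cause": "sap_port_loss"}

result = correlate(a1s, a2)
print(result["root"]["node"], len(result["collapsed"]))  # router1 2
```

The administrator then sees one root alarm on the suspect router instead of a flood of per-STB alarms.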



Figure 4. Network-centric versus customer-centric view: from network management to service quality management. (A network-centric view counts, e.g., 120 dropped calls per hour; a customer-centric view identifies, e.g., that 80 of 100 dropped calls were corporate calls. Quality of service is network centric: packet loss, jitter, delay, throughput, and so on. Quality of experience is customer centric: mean opinion score [MOS], call completion rate, percentage of ineffective call attempts, call setup time, and so on.)

The trade-off involves having the topology intelligence to reconstruct end-to-end performance data from local performance data and a sampling of the actual measurements.

Performance Measurement to QoE
It is not enough to measure performance and derive KPIs/KQIs. One has to take a step beyond: determine QoE and manage it effectively. This requires a paradigm shift from a network-centric to a customer-centric view, as shown in Figure 4.

Managing QoE is essential. It is one thing to monitor and measure it, but another to maximize it (to increase profit). Maximizing QoE requires early detection of potential QoE-affecting problems and solving them as soon as possible, before they reach the customer complaint stage. It also means being able to prioritize problems, so as to first tackle those with the highest impact in terms of potential revenue: a problem affecting 1,000 customers is more important than a problem affecting only 10; a problem affecting five gold customers may or may not be deemed more important than a problem affecting 10 bronze customers, depending on the weights and the decision rules.

Standards Activities
The concept of QoE has been a subject of study in various standards organizations. In this section we present a survey of the relevant standards. While they provide guidelines on the measurement data, defining the KPIs/KQIs based on the measurements is generally left to the operators.

ITU-T
Within the ITU Telecommunication Standardization Sector (ITU-T), Study Group 12 is the lead group responsible for QoS and QoE. Some of its main recommendations include the following:
• ITU-T G.1010 provides guidelines with regard to key QoS factors impacting the user. It focuses on delay, delay variation, and information loss, and gives performance targets for various applications (e.g., conversational voice, audio streaming, Web browsing, and others) that would meet user expectations. For instance, short message service (SMS) is classified as a low-priority transaction service, and therefore it is argued that tens of seconds of delivery delay are acceptable. Again, the purpose of this recommendation is to serve as a guide and not to set forth definitive requirements, since actual target values are left to the operator to decide.
• ITU-T G.1030 provides a model for estimating the performance of data applications over Internet Protocol (IP) networks.



The model consists of three steps: 1) network performance assessment, 2) application performance assessment, and 3) perceptual performance assessment. The last step is the one which introduces the idea of user experience (perception). It can be viewed as an “opinion model” similar to the e-model defined in [8], which maps end user experience from the network layer up to the application layer. The recommendation includes a model for Web browsing, but other applications are left for further study.
• ITU-T G.1070 provides an opinion model for computing video telephony QoE based on a series of speech and video parameters. The model consists of three functions: 1) video quality estimation, 2) speech quality estimation, and 3) multimedia quality integration. The outputs are multimedia quality, video quality influenced by speech quality, and speech quality influenced by video quality; however, the model is based on very specific terminals and environments. An extension to accommodate other conditions is a topic for further study.
Other recommendations, such as [9], define a series of Real Time Transport Protocol (RTP) statistics that can be collected from the network element (NE) to compute performance metrics (e.g., packet loss, delay, jitter, and failures). Yet others, such as [2] and [3], recommend test methods for assessing audio and video quality.

Broadband Forum
The Broadband Forum has also taken on the task of defining QoE and its relationship to QoS [12], although its target is triple play applications (i.e., video [both broadcast and on-demand], audio, and best-effort Internet data). Other applications are left for future work. Its definition of QoE is consistent with that of the ITU-T in that QoE is viewed as a measure of overall performance from the user’s perspective, whereas QoS is a measure of network performance. The goal is to provide a clear relationship between the two so that, given a set of QoS measurements, one could estimate the QoE for a user, and likewise, given a target QoE, one could calculate the required network performance. In general, this is a good first step, since the Broadband Forum provides a complete methodology and some specific requirements in terms of delay, bandwidth, and other such network parameters. However, it is not clear how QoE would be affected if such requirements are not met, so a complete mapping is a subject for future research.

TeleManagement Forum
The TeleManagement Forum (TM Forum) looks at QoE from the service level agreement (SLA) management perspective, as would be expected. The TM Forum defines KQIs and KPIs as measurements of perceived quality rather than network performance [13–15], which is consistent with the ITU-T and Broadband Forum views on QoE. KQIs are constructed from KPIs, and KPIs are derived from network performance measurements. For instance, an example KQI is the “percentage of sessions that experience delay of X or above,” and a corresponding KPI is the session start-up delay, which is derived from network performance measurements.

In summary, the goal of all these standards organizations is to provide clear definitions of QoE and QoS and to establish the relationship between the two so that service providers can measure and manage QoE. In many respects this has been achieved for triple play applications, but other applications are still left for future work.

The Challenge With NGN Applications
While QoE estimation algorithms for voice quality are well understood, and video quality estimation methods are becoming increasingly practical, methods for other applications are far less mature. QoE estimation for new NGN applications will require a per-application analysis in which the main properties that can impact user-perceived quality are determined. Next-generation services may be provided either by an SP or by an over-the-top (OTT) content provider or application provider. While an SP is in full control of the access network and thus may have the ability to provide the desired QoE, OTT players are in a much different position. In today’s world, OTT players often allow the user to download client software which helps optimize user experience.
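The TM Forum-style derivation of a KQI from a KPI described earlier can be sketched in a few lines. The function name, the threshold X, and the sample data below are our own hypothetical illustration:

```python
# Sketch of deriving a KQI ("percentage of sessions with start-up delay
# of X or above") from a per-session KPI (session start-up delay).
# Function name, threshold, and sample data are hypothetical.

def kqi_sessions_delayed(startup_delays_ms, x_ms: float) -> float:
    """Percentage of sessions whose start-up delay is >= x_ms."""
    delayed = sum(1 for d in startup_delays_ms if d >= x_ms)
    return 100.0 * delayed / len(startup_delays_ms)

# Hypothetical per-session start-up delay KPI samples (milliseconds),
# as collected from network performance measurements.
delays = [120, 250, 900, 1500, 300, 2100, 180, 400, 95, 1100]

print(kqi_sessions_delayed(delays, x_ms=1000.0))  # 30.0 (percent)
```

The KPI comes straight from network measurements; the KQI is a customer-facing statistic over those measurements, which an SLA can then bound (e.g., "no more than 5 percent of sessions delayed").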



However, as bandwidth demand grows and applications need to interact with each other to provide a rich user experience, a simple client-based approach is unlikely to be successful. Service providers are not likely to be satisfied with the role of bit-pipe provider and certainly would like a share of the OTT revenue. However, it is impractical for them to mirror within their own infrastructure the thousands of applications available on the Internet today, or the exponential expansion of applications we can envision in the future.

In direct competition, service providers have a theoretical ability to provide poor QoE to users accessing OTT applications (even though there are legal debates currently raging on this subject) while providing better QoE for the same applications delivered through their own network infrastructure. However, there exists a happy middle ground: OTT players can negotiate business arrangements with service providers for specific QoE for their applications; this can be a win-win situation for both. Service providers can leverage their QoE infrastructure in several ways: 1) provide QoE to differentiate the high quality content they deliver, 2) provide QoE for content/application providers that are business partners, 3) act as mediator for different application providers to develop higher value services, and 4) personalize content on behalf of partner application/content providers.

Service providers have a number of fundamental advantages over OTT players in providing QoE for applications. For example, they have control over the broadband access network and maintain basic demographic information about their subscribers. In addition, wireless operators have full knowledge of user equipment capabilities. OTT players have the basic advantage that they can spread the cost of developing applications or content over a much larger user base, can draw advertisements and sponsorships from a much larger set of interested parties, and spend very little on physical infrastructure. A marriage of these two forces can provide a very powerful business solution.

While IP Multimedia Subsystem (IMS) infrastructure is slowly gaining momentum, and applications over this infrastructure are now taking off, it is also true that numerous “point solutions” are available on the Internet which do not necessarily depend on an IMS infrastructure. Finding the best way to complement the capabilities of service providers and OTT players is a topic that is likely to be debated over the next several years. However, one trend that has clearly emerged in the industry over the past year is support for the rich communications suite (RCS), which has been championed by the Global System for Mobile Communications (GSM) Association and equally supported by code division multiple access (CDMA) operators in the wireless world. Wireline operators are also embracing the concept with apparent eagerness. A likely scenario is one in which service providers develop the applications for supporting RCS, while OTT players take advantage of the underlying infrastructure to provide all other services. To illustrate this point, the following three services form the cornerstones of RCS:
• Enhanced address book. Provides presence and user equipment capability indications.
• Rich call. Allows users to exchange different types of content (e.g., pictures and video) during a call.
• Rich message. Multimedia messaging with an enhanced user interface.

In addition, service providers often have location information for their subscribers. Suppose an OTT player wants to develop a powerful marketing and sales application. As soon as a user is within a certain distance of a particular store (leveraging presence and location information), a rich message may be sent. The message is customized based on the subscriber’s demographic data and is tailored for the device being used at that time (leveraging information from the enhanced address book). If the subscriber is interested, he may click a link to call a salesperson and view pictures or video clips of merchandise during the call for a rich user experience. The important point to note is that the OTT player does not need to build the infrastructure needed for the service and can focus on application development and revenue collection, as long as there is a revenue-sharing or other business arrangement with the service provider. The service provider, on the other hand, must guarantee adequate QoE to make the OTT player’s business a success.



One of the main challenges with next-generation network (NGN) services such as the one just described is modeling the service in such a way that meaningful QoE measurements can be extracted. After all, QoE is very subjective but must somehow quantitatively reflect the end user’s satisfaction with the overall service in a meaningful way, such that it allows the operator to take appropriate actions based on those values. Service QoE typically includes multiple aspects of the service; for example, service availability, usability, and media quality factors all contribute to the overall service QoE. While calculation models to estimate voice quality using a mean opinion score (MOS) are well studied and validated with numerous human test subjects, the same level of confidence will be difficult to obtain for most other services. Deriving a single MOS-like score for a particular service is quite complicated when the service consists of multiple contributing factors that influence the overall user-perceived quality. Unless the service QoE is carefully studied for each contributing factor, with feedback from sufficiently large groups of actual users, the absolute value of a QoE score has limited meaning. Furthermore, a service that is composed of multiple applications is quite complicated to analyze.

Absolute Versus Relative QoE Measurements
Analyzing a service and understanding the various contributing factors that influence overall QoE can, however, be a very effective way to measure changes in QoE. In other words, QoE measurements can be meaningful when variations over time can be correlated to changes in the contributing factors [16]. These changes may be caused by changes in the network, for example, in bandwidth allocation, equipment capacity, or other service delivery components. It is therefore important to understand not only the key service QoE contributing factors from an end user perspective, but also the relationship between these factors and the individual service components in the end-to-end service delivery chain. This allows degrading QoE values to be traced back to their root cause and, equally important, allows verification of whether optimizations in a service architecture component actually result in measurable QoE improvements.

Traditionally, service quality management (SQM) systems focus on measuring the performance characteristics of individual service components. Figure 5 illustrates traditional metrics collection. Metrics such as voice MOS scores are measured, statistically aggregated, and compared with predefined threshold values. Operators can define target KQI values that indicate the overall performance level of voice quality. For example, a KQI target may state that the average voice MOS score during a measurement period of one week must be above 3.5 for 99.5 percent of all calls. Traditional service quality would be expressed in terms of the number of calls that meet a designated service quality level; a small percentage would be allowed to fall below this level. An example might be that 99.5 percent of calls must achieve a setup delay of less than 1.0 second. When threshold values are exceeded, an alarm can be generated.

While such measurements are clearly valuable, for example, in verifying SLAs, the per-session relationship between QoE contributing factors, such as call setup delay and voice quality within the same voice service session, is not preserved. For the end user’s perception of the service quality, both factors are important. Either excessive call setup delay or poor voice quality will result in a poor overall QoE evaluation from the end user. If both occur within the same session, the total effect can even be amplified, since the end user is likely to be extremely dissatisfied if he experiences both an excessive call setup delay and poor voice quality within the same call. Consequently, performance and perceived service quality require different measurement approaches, in which the relationship between the various factors that influence the overall service QoE is combined and preserved per service session, as illustrated in Figure 6.

Now consider the same QoE analysis process for a “rich call.” On top of the traditional MOS for the basic voice service, we need to evaluate additional attributes related to picture and video quality, not just as standalone services but also as a mixed-media service. Finding the main factors that influence user-perceived service quality, their interdependencies, and their relationship with service delivery components is a challenging task. This process involves a number of steps, which are discussed in the sections following.
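The per-session approach argued for above can be sketched as follows. This is our own illustration; the classification labels are hypothetical, while the two threshold values follow the examples in the text (setup delay under 1.0 second, MOS above 3.5):

```python
# Sketch of per-session QoE evaluation that preserves the relationship
# between contributing factors: each session is judged on setup delay AND
# voice MOS together, so a session that fails both is flagged as worse
# than one that fails either alone. Labels are hypothetical; thresholds
# follow the examples in the text.

SETUP_DELAY_MAX_S = 1.0   # example target: setup delay under 1.0 second
MOS_MIN = 3.5             # example target: voice MOS above 3.5

def session_qoe(setup_delay_s: float, mos: float) -> str:
    """Classify one session using both factors jointly."""
    bad = []
    if setup_delay_s > SETUP_DELAY_MAX_S:
        bad.append("setup_delay")
    if mos < MOS_MIN:
        bad.append("voice_quality")
    if len(bad) == 2:
        return "unacceptable"   # both factors bad in the same session
    return "degraded" if bad else "good"

# Hypothetical sessions: (setup delay in seconds, voice MOS).
sessions = [(0.4, 4.1), (1.8, 4.0), (0.6, 2.9), (2.2, 2.5)]
labels = [session_qoe(d, m) for d, m in sessions]
print(labels)  # ['good', 'degraded', 'degraded', 'unacceptable']
```

Aggregating each factor separately would report two delay violations and two MOS violations, but would lose the fact that one session suffered both, which is exactly the per-session relationship the text says must be preserved.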



Figure 5. Traditional performance measurements focus on SLA values. (KQIs for a service session are grouped into availability, usability, and quality categories; individual KQI statistics and category statistics feed reports, dashboards, and alarms. Statistics per KPI/KQI group and/or category are measured, aggregated, and reported, but the per-service-session QoE relationship is not preserved. KPI—Key performance indicator; KQI—Key quality indicator; SLA—Service level agreement.)

Service Session Demarcation Points

Service QoE measurement requires that the usage of a service is measurable between clear points of demarcation, i.e., the start and end times of a service session. QoE metrics can only be related to user experience within the context of session boundaries.

A VoIP service has clear session demarcation points. The session starts when the user picks up the phone and ends when the phone is put down. For other services, these boundaries are not always clear. For example, consider live broadcast TV as part of an IPTV service. A user can be watching live TV for many hours per day and will be zapping through various channels during that period. What are the session boundaries in this case? The media quality may be different per channel. One channel may be delivered in high definition (HD) quality while others are standard definition (SD) quality. Clearly the user expectation when watching an HD channel is different from watching an SD channel. Hence, it is reasonable to partition the sessions into channel viewing time. But the channel change time is also a factor that needs to be taken into account. Excessive channel change time is a well-known complaint from IPTV subscribers. The channel change could be considered part of the session, or, alternatively, channel changing can be considered a separate service where each session consists of one or more channel changes within a reasonable time interval. As NGN services become more complex, defining practical session boundaries will not be a trivial task. However, the reliability of any service QoE measurements also depends on the precise definition of the service session demarcation points.

Service Session Decomposition

The next challenge is to decompose the service session into separate, measurable service elements that can contribute to the service QoE. Each of the elements must



[Figure: as in Figure 5, service session KQIs are grouped into availability, usability, and quality categories with individual KQI and category statistics; in addition, per-session service QoE values feed the reports, dashboards, and alarms.]
• Statistics per KPI/KQI group and/or categories are measured, aggregated, and reported as before
• In addition, per-session QoE measurements are calculated, aggregated, and reported
• The additional reported measurements reflect per-service session QoE statistics

KPI—Key performance indicator
KQI—Key quality indicator
QoE—Quality of experience

Figure 6.
Service-oriented QoE measurements.

be directly perceivable by the end user. These elements may be sequential steps of the service, or they may occur concurrently. For example, a simple VoIP call can be decomposed into the following sequential elements:
• Service start. The user picks up the phone and waits for dial tone.
• Connection setup time. The user dials the number and waits for ringing notification.
• Voice quality. The MOS estimate of the conversation.
• Call disconnect time. The user ends the call and waits for confirmation (the network has to free up resources so the user needs to wait until another call can be placed).

The service elements can be grouped into categories (e.g., service availability, service usability, and media quality). In the example above, the connection setup and call disconnect time could be grouped into the service usability category. For example, ITU-T G.1000 [4] defines a category matrix to facilitate identification of communications QoS criteria. The TM Forum also defines several useful service categories in [13]. Depending on the service aspects the operator wishes to measure and monitor, a selection of the most applicable categories can be made.

Now consider the same analysis process for a "rich call":
• Service start. Similar to VoIP above but depends upon the starting media.
• Connection setup time. Not quite similar; we have to consider new media setup time every time the user uses a different media.
• Voice quality. This metric needs to be replaced by a combined measure of voice, picture, and video quality. Each media session may have its own measurements, and the overall "rich call" QoE has to be defined as an aggregation of the individual voice, picture, and video session related QoE.
• Call disconnect time. Again similar to VoIP but may depend upon the ending media.
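The VoIP decomposition above can be expressed as a small data model. This is an illustrative sketch only: the element names, event names, and the example event log are our own assumptions, not drawn from any standard.

```python
from dataclasses import dataclass

@dataclass
class ServiceElement:
    name: str
    category: str   # e.g., "availability", "usability", "quality"
    value: float    # a duration in seconds, or a MOS estimate

def decompose_voip_session(events):
    """Derive the sequential VoIP elements from timestamped events.
    `events` maps a (hypothetical) event name to epoch seconds,
    except `mos_estimate`, which is the conversation MOS."""
    return [
        ServiceElement("service_start", "availability",
                       events["dial_tone"] - events["off_hook"]),
        ServiceElement("connection_setup", "usability",
                       events["ringing"] - events["digits_done"]),
        ServiceElement("voice_quality", "quality",
                       events["mos_estimate"]),
        ServiceElement("call_disconnect", "usability",
                       events["resources_freed"] - events["on_hook"]),
    ]

elements = decompose_voip_session({
    "off_hook": 0.0, "dial_tone": 0.4, "digits_done": 5.0,
    "ringing": 7.5, "mos_estimate": 4.1,
    "on_hook": 65.0, "resources_freed": 65.8,
})

# Group elements into the categories discussed above.
by_category = {}
for e in elements:
    by_category.setdefault(e.category, []).append(e.name)
```

The same structure extends to a "rich call" by adding one element per media setup and one quality element per media session.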



Each session QoE assessment element will have attributes such as measurement units and maximum, minimum, and average values. The contribution of each element to the overall QoE will be different and needs to be normalized. Operators may also assign a different weight or importance to a particular factor. We recommend that both raw measured values and weight or normalization factors be registered so that these factors can be modified without losing the original data. Figure 7 shows a generic approach for modeling service QoE measurements. The KQIs of each service element can be weighted according to operator-defined criteria, to emphasize the relative importance of the measurement, then normalized and grouped into a category. Categories can be combined into an overall QoE indicator, which can be used for high-level system monitoring, reporting, and trend analysis. Exceeding threshold limits can trigger an alert to the operator and diagnostic or root cause analysis processes similar to traditional performance monitoring systems.

Service Architecture Decomposition

The service QoE model has a session-oriented, end user perspective. Service usage is decomposed into measurable service elements that contribute to the overall service QoE. Now the relationship between the functional service elements and the architectural components of the service should be analyzed. For example, in an IMS VoIP application, call setup delay can be measured at various points in the service architecture—in the end user device, at the session border controller (SBC), or at other IMS network elements. A "rich call" will have many more such components. Each of these elements can also be the root cause for an excessive call setup delay value due to congestion, equipment failure, or other factors. When a poor QoE value is measured, the contributing factor(s) must be traced back to the probable cause. Figure 7 illustrates the relationship between service-specific, user-perceivable KQI elements and root cause, performance-related KPIs as measured in the network and related service equipment. Note that this relationship does not necessarily mean that the service-specific KQIs can be derived or calculated from the underlying KPIs, rather that the network and equipment KPIs represent the sources of probable cause. Hence, the relationship must be understood between service elements noticeable by the end user

[Figure: overall end user service satisfaction is expressed as a service quality of experience indicator; weight factors combine the key contributing categories (service availability, service usability, media quality), which group the end-to-end service key quality indicators (KQI1 through KQIn); contributing root cause KPIs (KPI1 through KPIn) are the underlying equipment/network metrics.]

KPI—Key performance indicator
KQI—Key quality indicator

Figure 7.
Service quality of experience model.
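The aggregation of Figure 7 can be sketched as follows. The KQI names, normalization endpoints, and weights below are illustrative, and, as recommended above, the raw measured values are stored separately from the tunable normalization and weight factors so the latter can be changed without losing the original data.

```python
def normalize(value, worst, best):
    """Map a raw KQI value onto a 0..1 scale (1 = best experience)."""
    score = (value - worst) / (best - worst)
    return max(0.0, min(1.0, score))

raw_kqis = {                    # raw measured values, always retained
    "call_setup_delay_s": 2.0,
    "mos": 3.8,
}
norm_rules = {                  # operator-tunable, stored separately
    "call_setup_delay_s": (10.0, 0.0),  # 10 s worst, 0 s best
    "mos": (1.0, 4.5),                  # MOS scale endpoints
}
category_of = {"call_setup_delay_s": "usability", "mos": "quality"}
weights = {"usability": 0.4, "quality": 0.6}   # operator-defined

# Normalize each KQI and group the scores per category.
categories = {}
for name, value in raw_kqis.items():
    worst, best = norm_rules[name]
    categories.setdefault(category_of[name], []).append(
        normalize(value, worst, best))

# Combine category averages into the overall QoE indicator.
qoe = sum(weights[c] * (sum(v) / len(v)) for c, v in categories.items())
```

A threshold check on `qoe` would then trigger the operator alert and root cause analysis described above.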



and the service architecture components responsible for the service function. This relationship can be used to design a QoE measurement architecture.

QoE Measurement Architecture

The QoE measurement architecture defines the measurement points and required measurement equipment or interfaces in the service architecture. The QoE architecture not only defines what information needs to be extracted and at which points, but also the set of related measurement parameters that need to be obtained and grouped in such a way that the relationship with the service session is preserved. For example, if the service architecture decomposition as explained in the previous section identified the set of network elements that could potentially contribute to significant call setup delays, performance measurements on those elements can be included and grouped as a set of measurement parameters that are collected and stored during the service session. Maintaining these related performance indicators across multiple network elements as a group of measurement parameters associated with a particular service session will help to identify the root cause when the service QoE value is below a certain threshold. The challenge is to find a practical balance between the often large amount of available performance data and the minimum information required for effective root cause analysis.

Session Context and Environment Variables

User expectations can significantly depend on the context or environment in which the service is used. For example, a user may be watching an HD quality movie during a video-on-demand (VoD) service session on a television (TV) which does not support the full resolution quality of the movie. If the user is aware of the limitations of his television set, his expectations of the video quality will be modest. However, if the same user decides to invest in an expensive high resolution TV set to take full advantage of the HD VoD service, his expectations will be quite different. If the delivered video quality does not match the new expectations, the same service will result in a much lower QoE opinion from that user. The service delivery has not changed, but the environment in which the service is used has. It is, therefore, important to understand which environment variables can impact the service QoE and, when possible, to obtain information about these variables per user. Knowledge of user equipment (say from the enhanced address book) will allow the service provider to track and analyze QoE with greater accuracy.

Another complicating factor is that a user will change expectations over time. For example, voice quality in early mobile phone services was not at the same level as it is today, yet early adopters were quite satisfied with the service. However, as technology evolves, user expectations will quickly adapt accordingly.

Example Applications and Corresponding KQI/KPIs

In this section, we present example KQIs and KPIs for the following applications: VoIP, IPTV, Web browsing, video streaming, push-to-talk over cellular (PoC), and instant messaging (IM) with presence. The intention is not to give an exhaustive but rather a representative list.

As stated in the previous section, KQIs and KPIs need to be defined from a user's perceived quality perspective. Furthermore, they can also be defined from a "session" level perspective as well as from an aggregate "network" perspective. With that in mind, for the purposes of this paper we categorize KQIs into the following three classes:
1. Service availability. A measure of whether the user can utilize the desired application.
2. Service responsiveness (also called service accessibility [6]). A measure of the time taken from a specific user's request to its response, or the time it takes for packets to arrive at a destination.
3. Service quality (also called service integrity [6]). A measure of information loss, which can be caused by packet losses, by bit errors, or by degradations introduced due to media coding and decoding.

VoIP

Table I shows some common KQIs and KPIs defined for VoIP. The KQIs can be computed on a session level such as the "percentage of service downtime experienced by sessions from customer X" or at a network-aggregate level such as the "percentage of service downtime experienced for all PSTN calls."
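The session-scoped grouping of measurement parameters described in the QoE measurement architecture section can be sketched as follows. The network element names, KPI names, and limit values are illustrative assumptions, not part of any standard.

```python
from collections import defaultdict

# KPI samples from different network elements, tagged with the
# session they belong to, so the stored group can later point at
# probable causes for a poor per-session QoE value.
session_kpis = defaultdict(list)

def record(session_id, element, kpi, value):
    session_kpis[session_id].append((element, kpi, value))

def probable_causes(session_id, limits):
    """Return the KPI samples of this session that violate a limit."""
    return [(el, kpi, v) for el, kpi, v in session_kpis[session_id]
            if kpi in limits and v > limits[kpi]]

record("sess-1", "sbc", "setup_delay_ms", 850)
record("sess-1", "access_node", "packet_loss_pct", 0.1)
record("sess-1", "core_router", "jitter_ms", 42)

# When sess-1's QoE falls below threshold, query its stored group.
causes = probable_causes("sess-1", {"setup_delay_ms": 500,
                                    "jitter_ms": 30})
```

Only a small, pre-selected set of KPIs per session would be kept in practice, reflecting the balance discussed above between data volume and root cause coverage.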



Table I. VoIP KQI/KPI examples.

Category KQI KPI


Service availability % of service downtime Call setup success ratio
Service responsiveness Call setup delay Setup delay
Service quality MOS score Delay, jitter, packet loss
KPI—Key performance indicator MOS—Mean opinion score
KQI—Key quality indicator VoIP—Voice over Internet Protocol
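For the MOS-score KQI in Table I, ITU-T G.107 [8] defines the E-model. The sketch below follows its general shape only: an R factor is degraded by delay and loss impairments and then mapped to an estimated MOS with the standard R-to-MOS conversion. The impairment coefficients are simplified illustrations, not the full G.107 computation.

```python
def r_to_mos(r):
    """Standard E-model R-to-MOS conversion (ITU-T G.107)."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

def estimate_mos(one_way_delay_ms, loss_pct):
    r = 93.2                         # default base R factor
    r -= 0.024 * one_way_delay_ms    # illustrative delay penalty
    if one_way_delay_ms > 177.3:     # extra penalty beyond ~177 ms
        r -= 0.11 * (one_way_delay_ms - 177.3)
    r -= 2.5 * loss_pct              # illustrative loss penalty
    return round(r_to_mos(r), 2)
```

With no impairments this yields roughly the familiar 4.4 ceiling of narrowband telephony; higher delay or loss pushes the estimate down toward 1.0.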

Table II. IPTV KQI/KPI examples.

Category | KQI | KPI
Service availability | % of service downtime | Registration success ratio, session setup success ratio
Service responsiveness | Session start delay | Setup delay
Service responsiveness | TV channel change delay | Control signal delay
Service responsiveness | TV control delay | Control signal delay
Service responsiveness | Digital video recorder control delay | Control signal delay
Service quality | Video MOS score (MOSv) | Delay, jitter, image element loss
Service quality | Audio MOS score (MOSa) | Delay, jitter, packet loss
Service quality | Combined A/V quality (MOSav), audio+video synchronization (lip synch) | Delay, jitter, packet loss

A/V—Audio/visual
IPTV—Internet Protocol television
KPI—Key performance indicator
KQI—Key quality indicator
MOS—Mean opinion score
TV—Television
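Among the KQIs proposed for IP-delivered video is the IETF media delivery index (MDI) [17], which combines a delay factor (DF) and a media loss rate (MLR). The sketch below is a simplified illustration assuming a constant nominal media rate and a virtual buffer that starts empty; it is not a complete RFC 4445 implementation.

```python
def media_delivery_index(arrivals, media_rate_bps, lost_packets,
                         interval_s):
    """arrivals: time-sorted list of (arrival_time_s, payload_bytes).
    Returns (DF in milliseconds, MLR in lost packets per second)."""
    drain_bps = media_rate_bps / 8.0      # drain rate in bytes/s
    t0 = arrivals[0][0]
    received = 0.0
    vb_min = vb_max = 0.0                 # virtual buffer extremes
    for t, size in arrivals:
        drained = (t - t0) * drain_bps
        vb_min = min(vb_min, received - drained)  # before packet
        received += size
        vb_max = max(vb_max, received - drained)  # after packet
    df_s = (vb_max - vb_min) / drain_bps
    mlr = lost_packets / interval_s
    return df_s * 1000.0, mlr

# Three 1316-byte packets on a nominal 1 Mb/s stream, 2 packets lost
# over a 1 s interval (illustrative numbers).
df_ms, mlr = media_delivery_index(
    [(0.0, 1316), (0.01, 1316), (0.05, 1316)], 1_000_000, 2, 1.0)
```

DF approximates the de-jitter buffering (in media time) a receiver would need, while MLR captures sustained loss; both are typically reported together as DF:MLR.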

Appropriate target values can then be assigned depending on the operator's business needs. For example, <1% service downtime for enterprise VoIP sessions or 3 seconds average delay for intra-network calls.

IPTV

IP video service quality metrics, with a primary focus on the need for correlation of viewer perception with network performance and operation, have been analyzed in [1]. IPTV QoE is not only determined by video quality, but also depends on other factors such as zapping (or channel change) time, electronic program guide (EPG) accuracy, and responsiveness to pause, resume, fast forward, fast rewind, record, and stop commands issued by the user. The TM Forum video over IP application note [16] published recommendations for service responsiveness related KPIs.

An example set of KQIs and KPIs for IPTV is shown in Table II. As with the VoIP example, these KQIs can be defined at a session level or at a network-aggregate level. Typically delays should be in the milliseconds for all the control signals.

With respect to the video, audio, and A/V MOS score KQIs in this table, the ITU-T presents an objective model in [7] that can be used to compare video quality, but a complete MOS model based on performance measurements is still not defined. Other KQIs based on the Internet Engineering Task Force (IETF) media delivery index (MDI) [17] have been proposed as well.

Web Browsing

Estimating the user perceived QoE of a Web browsing session can be very challenging due to the variety of Web applications as well as the complexity of the systems and components involved. An important



Table III. Web browsing KQI/KPI examples.

Category KQI KPI


Service availability % of service downtime Session setup success ratio
Service responsiveness Response time between request and response End-to-end delay
Service quality N/A (see note) N/A
Note: TCP will attempt to correct all errors—if BER or packet loss is high, it will cause added delays in the transmission or the connection will fail.
Thus both effects are included in the other two KQIs.

BER—Bit error rate KQI—Key quality indicator


KPI—Key performance indicator TCP—Transmission Control Protocol
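The response-time targets from [10] shown in Table III can be encoded as a simple rating function. The label names are our own, not taken from [10].

```python
def responsiveness_rating(seconds):
    """Classify a Web response time per the thresholds of [10]."""
    if seconds < 2:
        return "preferred"      # feels immediate enough
    if seconds < 4:
        return "acceptable"
    if seconds <= 10:
        return "tolerable"      # ~10 s is the attention limit
    return "failed"             # user focus is likely lost
```

Aggregating these ratings over many page loads would give a session- or network-level service responsiveness KQI.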

Table IV. Video streaming KQI/KPI examples.

Category | KQI | KPI
Service availability | % of service downtime | Session setup success ratio
Service responsiveness | Session start delay | Setup delay
Service responsiveness | Pause delay | Control signal delay
Service responsiveness | Fast forward/rewind delay | Control signal delay
Service quality | Video MOS score (MOSv) | Blockiness, jerkiness, blurriness
Service quality | Audio MOS score (MOSa) | Delay, jitter, packet loss
Service quality | Combined A/V quality (MOSav), audio+video synchronization (lip synch) | Delay, jitter, packet loss

A/V—Audio/visual
KPI—Key performance indicator
KQI—Key quality indicator
MOS—Mean opinion score

attribute of Web browsing in general is the responsiveness to a user action. Users easily tend to become impatient if they cannot access a Web site quickly enough or if the Web page is not responding fast enough to user actions such as pressing a button or entering a search query.

Table III presents example KQIs/KPIs for Web browsing. Example target values for the response time are based on [10]. A response time of <2 seconds is preferred, <4 seconds is acceptable, and 10 seconds is the maximum, as 10 seconds is about the limit for keeping the user's attention focused on the dialog.

Video Streaming

Objective models have been proposed to measure video quality, but like those for IPTV above, a complete MOS model is still not defined. We again refer the reader to [7]. In some cases (e.g., a paid live Webcast) delays could be in the milliseconds, but in other cases like free user-generated videos expectations are lower so delays could range in the seconds. Table IV shows examples of video streaming KQIs and KPIs.

Push-to-Talk Over Cellular

Table V details examples of push-to-talk over cellular (PoC) KQIs and KPIs. Within the table, "talk burst confirm delay" refers to the time required for the signaling messages to flow back and forth in the network from the moment the PoC button is pushed to the playing of the chirp by the user device. "Volley latency" refers to the time it takes to gain floor control. Open Mobile Alliance PoC requirements [11] state that the talk burst confirm delay should typically be less than 2 seconds. Volley latency from the end user's perspective should be imperceptible so a few hundreds of milliseconds are usually acceptable.



Table V. Push-to-talk over cellular KQI/KPI examples.

Category | KQI | KPI
Service availability | % of service downtime | Session setup success ratio
Service responsiveness | Talk burst confirm delay | Setup delay
Service responsiveness | Volley latency | Control signal delay
Service quality | MOS score | Delay, jitter, packet loss

KPI—Key performance indicator
KQI—Key quality indicator
MOS—Mean opinion score

Table VI. IM with presence KQI/KPI examples.

Category | KQI | KPI
Service availability | % of service downtime | Session setup success ratio
Service responsiveness | Message delay | Session setup delay, transfer delay
Service responsiveness | Status change delay | Control signal delay
Service quality | See note | See note

Note: IM most likely should have similar values for the response times as Web browsing, so a response time of <2 seconds is preferred. Presence update delays could be less stringent and take a few minutes.

IM—Instant messaging
KPI—Key performance indicator
KQI—Key quality indicator
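The "% of service downtime" KQI that recurs across Tables I through VI can be computed from recorded outage intervals within an observation window. This is one of several possible definitions (an assumption on our part), and the function name is illustrative.

```python
def downtime_percentage(outages, window_s):
    """Percent of the observation window the service was down.
    outages: list of non-overlapping (start_s, end_s) intervals
    that fall within the window of `window_s` seconds."""
    down = sum(end - start for start, end in outages)
    return 100.0 * down / window_s

# Two outages (60 s and 30 s) within a one-hour window.
pct = downtime_percentage([(100, 160), (400, 430)], 3600)
```

Computed per customer this gives the session-level form of the KQI; computed over all sessions it gives the network-aggregate form.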

Instant Messaging With Presence

Table VI presents example KQIs and KPIs for instant messaging with presence. IM most likely should have similar values for the response times to Web browsing, so a response time of <2 seconds is preferred. Presence update delays could be less stringent and take a few minutes.

Conclusion

In this paper, we reviewed various techniques that can lead to an estimation of QoE. We believe that the use of network measurements is a technique that, although more complex than others, can lead to a better estimation and resolution of QoE issues. We also discussed standard definitions of various measurements and presented proposed values of KPIs/KQIs. Finally, we investigated some of the challenges of next-generation applications and provided a framework for addressing them.

The proposed framework is a starting point to deal with the ever-increasing complexity of QoE issues. For example, even well-established peer-to-peer applications like Skype* or BitTorrent* use very sophisticated mechanisms to deliver their services. As long as the service provider is satisfied with the role of bit-pipe provider and there are no guarantees around quality of service, there is no problem. However, if there is indeed an expectation of QoE triggered by "net neutrality" or other similar regulatory issues, the problem cannot simply be wished away. In addition, the demand for higher QoE is expected to increase when amateur videos on YouTube* are things of the past, and professional quality videos on Hulu* are the norm. Let us also be aware that user equipment (particularly in the wireless industry) is moving from a pure media consumption device to a media creation and consumption device (picture/video editing, multimedia messaging), further highlighting the need for QoS for a wide range of applications. All these issues require further study.

Another item for further study, and also a matter of increasing importance, is the burgeoning world of



cloud computing. The user interface, processing, and storage all may be at different physical sites connected to each other via ultra-high-speed connections. There are at least two aspects of QoE in this environment. First, the end users pay the computing application provider based on usage and/or subscription (so that they don't have to build and maintain their own computing infrastructure). Consequently, there is an expectation of QoE for the service provided by the computing resource provider. The cloud computing provider, in turn, has to depend upon the backbone connection provider to deliver end user service with the right QoE. Since the cloud computing service provider tries to minimize idle resources, the highest degree of QoE must be provided by the interconnection provider to facilitate inter-server communication. For economic reasons, it is not practical to provide highly reliable point-to-point links among servers located around the globe. A well-defined framework and methodology are going to be necessary in the near future to find the perfect balance between a high degree of QoE and reasonable economics.

Acknowledgements

The authors would like to thank Jayant Desphande from Bell Labs for his recommendations on speech quality estimation systems and Bilgehan Erman from Bell Labs for his recommendations on video quality.

*Trademarks

3GPP is a trademark of the European Telecommunications Standards Institute.
BitTorrent is a registered trademark of BitTorrent, Inc.
Hulu is a trademark of Hulu, LLC.
Skype is a trademark of Skype Technologies, S.A.
Wi-Fi is a registered trademark of the Wi-Fi Alliance Corporation.
WiMAX is a registered trademark of the WiMAX Forum.
YouTube is a registered trademark of Google, Inc.

References

[1] B. Erman and E. P. Matthews, "Analysis and Realization of IPTV Service Quality," Bell Labs Tech. J., 12:4 (2008), 195–212.
[2] International Telecommunication Union, Telecommunication Standardization Sector, "Subjective Audiovisual Quality Assessment Methods for Multimedia Applications," ITU-T P.911, Dec. 1998, <http://www.itu.int>.
[3] International Telecommunication Union, Telecommunication Standardization Sector, "Interactive Test Methods for Audiovisual Communications," ITU-T P.920, May 2000, <http://www.itu.int>.
[4] International Telecommunication Union, Telecommunication Standardization Sector, "Communications Quality of Service: A Framework and Definitions," ITU-T G.1000, Nov. 2001, <http://www.itu.int>.
[5] International Telecommunication Union, Telecommunication Standardization Sector, "End-User Multimedia QoS Categories," ITU-T G.1010, Nov. 2001, <http://www.itu.int>.
[6] International Telecommunication Union, Telecommunication Standardization Sector, "Quality of Telecommunication Services: Concepts, Models, Objectives and Dependability Planning—Use of Quality of Service Objectives for Planning of Telecommunication Networks, Framework of a Service Level Agreement," ITU-T E.860, June 2002, <http://www.itu.int>.
[7] International Telecommunication Union, Telecommunication Standardization Sector, "Objective Perceptual Video Quality Measurement Techniques for Digital Cable Television in the Presence of a Full Reference," ITU-T J.144, Mar. 2004, <http://www.itu.int>.
[8] International Telecommunication Union, Telecommunication Standardization Sector, "The E-Model, a Computational Model for Use in Transmission Planning," ITU-T G.107, Mar. 2005, <http://www.itu.int>.
[9] International Telecommunication Union, Telecommunication Standardization Sector, "Gateway Control Protocol: Version 3," ITU-T H.248.1, Sept. 2005, <http://www.itu.int>.
[10] J. Nielsen, "Response Times: The Three Important Limits," 1994, <http://www.useit.com/papers/responsetime.html>.
[11] Open Mobile Alliance, "Push to Talk Over Cellular Requirements," Approved Version 1.0, OMA-RD-PoC-V1_0-20060609-A, June 9, 2006, <http://www.openmobilealliance.org>.
[12] T. Rahrer, R. Fiandra, and S. Wright (eds.), "Triple-Play Services Quality of Experience (QoE) Requirements," DSL Forum TR-126, Dec. 13, 2006.
[13] TeleManagement Forum, SLA Management Handbook: Volume 2, Concepts and Principles, Rel. 2.5, GB 917-2, TM Forum, Morristown, NJ, July 2005.



[14] TeleManagement Forum, "Best Practice: Voice Over IP SLA Management—Application Note to the SLA Management Handbook," TM Forum GB 934, Rel. 2.0, v1.10, Apr. 2008.
[15] TeleManagement Forum, "Best Practice: Video Over IP SLA Management—Application Note to the SLA Management Handbook," TM Forum GB 938, Rel. 2.0, v1.10, Oct. 2008.
[16] TeleManagement Forum, "Technical Report: Holistic e2e SQM Framework," TM Forum TR 149, v0.5c, Jan. 2009.
[17] J. Welch and J. Clark, "A Proposed Media Delivery Index (MDI)," IETF RFC 4445, Apr. 2006, <http://www.ietf.org/rfc/rfc4445.txt>.

(Manuscript approved November 2009)

HAROLD BATTERAM is a distinguished member of technical staff with the Alcatel-Lucent Bell Labs Network Modeling and Optimization group in Hilversum, the Netherlands. He has participated in several Dutch national and European collaborative research projects including the European 5th and 6th framework projects. His current work focuses on quality of experience for multimedia applications, IPTV modeling, and real-time sensitive network applications. He has several patents pending in the areas of context aware applications, network latency equalization, IPTV, and Session Initiation Protocol (SIP) signaling. His current research interests are next-generation network applications and multi-party, multimedia real time applications. Mr. Batteram holds a B.S. degree in electrical and computer engineering from the Hogere Technische School (HTS) in Hilversum.

GERARD DAMM is product manager for the 8920 Service Quality Manager (SQM) at Alcatel-Lucent in Cascais, Portugal. His current focus is on end-to-end service quality assurance. He was previously with Bell Labs and the Alcatel Research and Innovation department. He has experience in modeling, simulation, network and service management, IP, carrier-grade Ethernet, multicast, IPTV, virtual private networks (VPNs), schedulers, optical burst switching, and network processors, with several papers and patents in these areas. He is the editor of the TM Forum SLA Management Handbook (GB917 v3). He holds a Ph.D. in computer science from Paris University, France, and is a member of the Alcatel-Lucent Technical Academy.

AMIT MUKHOPADHYAY is a distinguished member of technical staff in the Alcatel-Lucent Bell Labs Network Planning, Performance and Economic Analysis Center in Murray Hill, New Jersey. He holds a B.Tech. from the Indian Institute of Technology, Kharagpur, and a Ph.D. in operations research from the University of Texas, Dallas. His current work focuses on 3G and 4G wireless technologies; the convergence of wireless, wireline, and cable services; as well as the evolution of next-generation technologies. Dr. Mukhopadhyay's research interests include architecture, performance, and cost optimization for voice, data, and video as well as converged networks for various access technologies including wireless, cable, DSL, wireless fidelity (Wi-Fi*), and WiMAX*. He is a senior member of IEEE and a member of the Alcatel-Lucent Technical Academy and has several publications in refereed journals.

RHODO ODYSSEOS is a product marketing manager and business development associate in Alcatel-Lucent SAI in Cascais, Portugal. Her area of expertise is operations support systems (OSS) and business support systems (BSS). For over 15 years, she has been instrumental in consultation and development for OSS/BSS for service providers worldwide. Most recently she has been involved in consulting with customers to support the design, implementation, integration, and management of OSS/BSS solutions for next-generation networks and IP-based services. Ms. Odysseos holds a master of science degree in computer science and engineering from Czech Technical University in Prague.

LAURENT PHILIPPART is a product line manager at Alcatel-Lucent in Cascais, Portugal and has over 15 years experience in OSS. He has a telecommunications engineering background and holds an M.Sc. in satellite communications and spacecraft technology. He is the leader of the TeleManagement Forum service level agreement management team and served as editor of its IPTV Application Note. He has been involved in a number of large projects, including consulting and implementation for performance and service quality management systems for service providers worldwide and has specialized in voice and audio/video quality assessment for IP-based services.



CARLOS URRUTIA-VALDÉS is a research engineer in the
Network Modeling and Optimization group
at Alcatel-Lucent Bell Labs in Murray Hill,
New Jersey. His current work focuses on the
modeling and analysis of multimedia
applications, IMS, wireless technologies, and
home networks. He has published several papers on 3G
networks and on IMS. He has been awarded a patent in
the area of wireless backhaul optimization and has
patents pending in the areas of next-generation
applications and IMS optimization. His current research
interests are home networking, application and traffic
modeling, and the end-to-end design of wireless and
wireline networks. Mr. Urrutia-Valdés holds a B.S.
degree in electrical engineering from Florida
International University, Miami, and an M.S. degree in
computer engineering from the University of Southern
California, Los Angeles. ◆

