Xtera Communications, Inc.
Nu-Wave DWDM Transport System
Ideas in a different light.

Putting Bandwidth to Work

Introduction

Bandwidth in long haul optical systems is no longer a precious commodity, yet most users and
carriers still treat it as such. High bandwidth applications, from video broadcasting to SANs, are
being squeezed into narrow pipes, causing considerable degradation in quality and reliability at a
cost which, in many cases, is higher than the cost of raw bandwidth. This paper examines some
of these applications and a curious dichotomy: bandwidth is withheld from the network because
conventional wisdom says the demand does not exist, while at the same time bandwidth hungry
applications are starved because not enough bandwidth is available at a reasonable price. The
paper will show that by deploying Xtera's Nu-Wave DWDM system, which delivers up to 240
channels of 10 Gb/s or 960 channels of 2.5 Gb/s over distances of up to 3000 km before
regeneration is required, and at a cost far lower than that of other commercially available
DWDM systems, carriers can offer superior service quality while reducing overall cost through
the wholesale elimination of lower order transmission, switching and routing equipment.

Background

Until very recently, traffic in the long haul network consisted almost exclusively of massive
numbers of relatively small data streams. It wasn't so long ago that networks carried almost
nothing but voice, and even large corporate users who leased their own lines carried mostly
large amounts of voice for themselves. Early data services like Frame Relay were built on the
voice infrastructure (using, for example, fractional T1s). The pipes that carried this traffic were
always considerably larger than each of the individual data streams, and aggregation took place
in channel banks, Digital Loop Carriers, Digital Cross-connect Systems and Add/Drop
Multiplexers. Local aggregation packed the individual data streams as efficiently as possible;
linear aggregation then added each location's aggregated traffic to incoming upstream traffic
and sent it downstream.

The advent of IP and ATM changed the nature of the traffic carried in long haul networks, but
not the fact that traffic is made up of many small data streams. Large IP routers and ATM
switches are designed to route and manage millions of connections. Although the relationship
between traffic and bandwidth capacity is radically different for well-behaved, highly predictable
synchronously multiplexed data streams than for the peaks and valleys of packetized data, from a
bandwidth perspective IP routers are nothing more than very efficient aggregators, performing
both local and linear aggregation of millions of small data streams.

Bandwidth growth in the core of the long haul network has always followed the growth of
aggregated traffic. Traffic, in turn, has always been heavily aggregated because long haul
bandwidth was always assumed to be more expensive than the cost of aggregation.


Large users

Aggregation has a dual purpose: the first is to make sure that all bandwidth is used efficiently,
and the second is up-conversion. Until recently it was a matter of course that long haul pipes
were orders of magnitude bigger in capacity than the services they carried. Although, with the
advent of DWDM, the total traffic carrying capacity of a single fiber is still much larger than any
one service, many users now have service requirements that are not far below the capacity of a
single channel. Most channels in modern DWDM equipment run at 10 Gb/s, and for the first
time ever, any move beyond the current line rate will be driven by users' requirements rather
than by the economies of higher line rates. In the past a 4x improvement in line rate was
usually achieved at a 2.5x increase in cost. Going from 10 Gb/s to 40 Gb/s is not likely to follow
that rule of thumb, hindered by impairments such as Polarization Mode Dispersion and
Chromatic Dispersion.
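
To make that rule of thumb concrete, here is a minimal sketch of the cost-per-bit arithmetic (the 4x and 2.5x figures are the historical rule quoted above; the percentage is simply derived from them):

```python
# The historical "4x the line rate at 2.5x the cost" rule of thumb.

rate_multiplier = 4.0   # e.g. 2.5 Gb/s -> 10 Gb/s
cost_multiplier = 2.5   # historical cost increase for that step

relative_cost_per_bit = cost_multiplier / rate_multiplier
print(f"Cost per bit at the higher rate: {relative_cost_per_bit:.1%} of the lower rate")
# -> 62.5%: each line-rate generation cut cost per bit by roughly a third.
# The text argues the 10 -> 40 Gb/s step breaks this rule because of PMD
# and chromatic dispersion penalties.
```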

Some applications that are now coming into their own have rather large bandwidth appetites.
Uncompressed High Definition Television generates data streams at 1.5 Gb/s. SANs and Grid
Computing require (or, in today's network, perhaps it is better to say desire) single user
connectivity of up to 10 Gb/s. A closer look at each of these applications reveals the loss in
quality and reliability, and the monetary cost, incurred by the way each is transported today.

Digital Video

Standard, uncompressed digital video has a data rate of 270 Mb/s; uncompressed High Definition
video runs at close to 1.5 Gb/s. Let's examine, as an example, the live broadcast of a sports event
in High Definition.


In the stadium there can be as many as 10 High Definition cameras and a number of storage and
graphics devices. Each one of these produces a High Definition feed to a production truck, which
in turn produces the main body of the program. This, in essence, is the first stage of aggregation,
necessitated by the fact that it has always been very difficult to transport one composite feed, let
alone individual feeds from cameras and announcers' booths. The feed from the stadium is then
sent to a regional or national centralized production facility where it is combined with feeds from
studios in other locations and feeds from other stadiums. The resulting program feed is then sent
out for distribution to the network affiliates.

Each time the HDTV signal is transported it is down-converted (compressed) to 270 Mb/s. It is
then transported on a concatenated SONET or SDH channel or over ATM, usually passing
through many SONET or SDH Add/Drop Multiplexers, Digital Cross-connect Systems and/or
ATM switches and optical repeaters. In the regional or national production facility the signal is
up-converted to the original 1.5 Gb/s HDTV signal for further processing and production. It is
then down-converted again and transported for distribution.
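
The rates quoted above imply a substantial amount of lossy compression on every transport leg; a minimal sketch of the ratio, using only the figures from the text:

```python
# Compression implied by down-converting an HDTV feed for transport,
# using the bit rates quoted in the text.

hd_rate_mbps = 1500        # uncompressed HDTV, ~1.5 Gb/s
transport_rate_mbps = 270  # the SD-rate channel it is squeezed into

ratio = hd_rate_mbps / transport_rate_mbps
print(f"Compression per transport leg: roughly {ratio:.1f}:1")
# ~5.6:1, applied on every leg: compress, transport, decompress for
# production, then compress again for distribution. Each cycle discards
# picture information.
```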

Each time an HDTV signal is compressed it loses some of its visual quality. Each time it passes
through a multiplexing stage, a repeater, a Digital Cross-connect System or an ATM switch the
signal is compromised through a build-up of jitter and latency. Reliability is also adversely
affected because each and every set of hardware and software in the transport path is a potential
failure point. All of these degradations are expected and reluctantly accepted by broadcasters
because they are accustomed to living in a bandwidth-constrained world.

By mapping uncompressed 1.5 Gb/s HDTV signals directly into 2.5 Gb/s Ultra Long Haul
channels, the need for down-converters, SONET/SDH multiplexers, Digital Cross-connect
Systems and ATM switches disappears, along with the impairments and failure potential they
introduce; instead, a clean, uncompromised HDTV signal is delivered at a cost lower than that of
a compressed signal. Taking it one step further, the economies of Nu-Wave are such that for
many events individual camera feeds can be brought into the regional or national production
facility, saving the cost of deploying remote production trucks and allowing better use of
centralized production facilities.
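
As a rough illustration of those economies, consider how many individual uncompressed feeds a single fiber pair could carry if each 1.5 Gb/s camera feed is mapped onto its own 2.5 Gb/s channel (the 960-channel figure and the 10-camera event are both taken from this paper; the division is the only addition):

```python
# Uncompressed HD camera feeds per fiber pair, one feed per 2.5 Gb/s channel.

channels_per_fiber_pair = 960  # 2.5 Gb/s channels, per the text
cameras_per_event = 10         # the stadium example used earlier

events = channels_per_fiber_pair // cameras_per_event
print(f"Simultaneous 10-camera events per fiber pair: {events}")
# Even one channel per camera leaves enormous headroom, which is what
# makes centralized remote production economically plausible.
```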

SANs and Grid Computing

Storage Area Networks (SANs) started out as a means of connecting storage devices within a
campus or within close proximity. They then spread out across metropolitan areas and, because
of an increased emphasis on disaster recovery and probably just as much as a result of mergers
between very large national financial institutions, SANs have now gone wide area. The
bandwidth requirements between storage devices in a large SAN can be considerable and are
now reaching 1–10 Gb/s. Grid Computing applications, although fewer in number than SANs,
tend to be even heavier users of bandwidth, generating virtually constant, very high bit rate data
streams. Because of the cost of long haul transport it is generally accepted that wide area SANs
and Grid Computing applications will operate over IP networks, but this brings with it its own
host of problems.


First, the cost of adding 1–10 Gb/s to a router is not trivial. Very high bit rate router ports are
expensive and, because the SAN or Grid Computing traffic will add a significant load to each
and every router it passes through, expensive capacity upgrades may be required all along the
transport path.

Second, each router adds a considerable amount of latency. Because of the way TCP, the
protocol that controls and manages IP traffic, works, high latency will severely curtail the
transmission speed on a long link. At the transmitting end of a TCP/IP connection, TCP waits
for acknowledgement from the receiving end that the packets it sent have arrived without error.
If that acknowledgement does not arrive before all the packets in the transmit buffer have been
sent, transmission is suspended. On long links, where the round trip delay may measure in the
tens of milliseconds, latency becomes the single parameter that determines transmission speed,
no matter what the capacity of the transmission link is.
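
A minimal sketch of that ceiling, assuming the classic 64 KB maximum TCP window (i.e. no window scaling; the RTT values simply illustrate the "tens of milliseconds" long haul case):

```python
# TCP can keep at most one window of unacknowledged data in flight per
# round trip, so throughput <= window_size / RTT, whatever the link rate.

WINDOW_BYTES = 65_535  # maximum TCP window without window scaling

for rtt_ms in (1, 10, 40, 80):
    throughput_mbps = (WINDOW_BYTES * 8) / (rtt_ms / 1000) / 1e6
    print(f"RTT {rtt_ms:3d} ms -> at most {throughput_mbps:7.1f} Mb/s")

# At a 40 ms round trip the ceiling is about 13 Mb/s. On a 10 Gb/s link
# that leaves more than 99% of the capacity idle, no matter how fast the
# routers are.
```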

Third, once the SAN traffic is mixed with the other IP traffic in the network, it will have to fight
for priority. This is not an issue on an underutilized link, but when network traffic is heavy the
SAN traffic, because it is such a heavy user, will be the first to be affected. There are many
expensive Quality of Service (QoS) schemes available to alleviate this problem, in effect trying
to reserve a fixed slice of bandwidth out of a shared medium when the application calls for it. In
SAN applications, and even more so in Grid Computing applications, that may be nearly always.

Fourth, sharing a network means others are sharing the network with the SAN, and with that
come serious security concerns and the cost of isolating the SAN from other users.

Finally, the reliability and robustness of a SAN connection over a shared IP network will be
determined almost exclusively by router and networking hardware and software.


Providing a low cost, fixed 2.5 or 10 Gb/s connection between SAN locations or between Grid
Computing sites will:
- Eliminate the cost of expensive, high speed backbone routers
- Lower the latency between sites to the floor set by the speed of light (see the sketch below)
- Offer the highest possible QoS
- Offer the highest possible level of security
- Offer the highest possible reliability
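
A minimal sketch of that latency floor, assuming light in silica fiber propagates at roughly c/1.47, about 204 km per millisecond (the 3000 km distance is the Nu-Wave reach figure quoted in this paper):

```python
# Propagation delay over a direct optical path: the physical floor that
# a dedicated wavelength approaches, with no router queuing on top.

FIBER_KM_PER_MS = 204  # approx. speed of light in silica fiber

distance_km = 3000
one_way_ms = distance_km / FIBER_KM_PER_MS
print(f"One-way propagation over {distance_km} km: {one_way_ms:.1f} ms")
print(f"Round trip: {2 * one_way_ms:.1f} ms")
# ~14.7 ms one way, ~29.4 ms round trip.
```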

Summary

Xtera's Nu-Wave DWDM system can provide previously unattainable amounts of high quality
bandwidth. With 240 10 Gb/s channels or 960 2.5 Gb/s channels per fiber pair, and with
transponders that cost only a fraction of a single router blade, a carrier can offer high
performance, highly reliable fixed connections at a cost lower than that of a vastly inferior
shared connection. The wholesale elimination of aggregation, switching and routing equipment
for high bandwidth users will also drastically reduce operating expenses.

500 W. Bethany Drive, Suite 100
Allen, TX 75013
1-866-GO-XTERA (1-866-469-8372)
www.xtera.com
