
White Paper

Gigabit
Ethernet and ATM
A Technology Perspective
Bursty, high-bandwidth applications are driving the need for similarly high-bandwidth campus backbone infrastructures. Today, there are two choices for the high-speed campus backbone: ATM or Gigabit Ethernet. For many reasons, business and technical, Gigabit Ethernet is selected as the technology of choice. This paper briefly presents, from a technical perspective, why Gigabit Ethernet is favored for most enterprise LANs.


In the past, most campuses used shared-media backbones — such as 16/32 Mbps Token-Ring and 100 Mbps FDDI — that are only slightly faster than the LANs and end stations they interconnect. This caused severe congestion in campus backbones that interconnect a number of access LANs.

A high-capacity, high-performance, and highly resilient backbone is needed: one that can be scaled as end stations grow in number or demand more bandwidth. Also needed is the ability to support differentiated service levels (Quality of Service, or QoS), so that high-priority, time-sensitive, and mission-critical applications can share the same network infrastructure as those that require only best-effort service.



Until recently, Asynchronous Transfer Mode (ATM) was the only switching technology able to deliver high capacity and scalable bandwidth, with the promise of end-to-end Quality of Service. ATM offered seamless integration from the desktop, across the campus, and over the metropolitan/wide area network. It was thought that users would massively deploy connection-oriented, cell-based ATM to the desktop to enable new native ATM applications to leverage ATM's rich functionality (such as QoS). However, this did not come to pass. The Internet Protocol (IP), aided and abetted by the exploding growth of the Internet, rode roughshod over ATM deployment and marched relentlessly to world dominance.

When no other gigabit technology existed, ATM provided much needed relief as a high-bandwidth backbone to interconnect numerous connectionless, frame-based campus LANs. But with the massive proliferation of IP applications, new native ATM applications did not appear. Even 25 Mbps and 155 Mbps ATM to the desktop did not appeal to the vast majority of users, because of their complexity, small bandwidth increase, and high costs when compared with the very simple and inexpensive 100 Mbps Fast Ethernet.

On the other hand, Fast Ethernet, with its auto-sensing, auto-negotiation capabilities, integrated seamlessly with the millions of installed 10 Mbps Ethernet clients and servers. Although relatively simple and elegant in concept, the actual implementation of ATM is complicated by a multitude of protocol standards and specifications (for instance, LAN Emulation, Private Network Node Interface, and Multiprotocol over ATM). This additional complexity is required in order to adapt ATM to the connectionless, frame-based world of the campus LAN.

Meanwhile, the very successful Fast Ethernet experience spurred the development of Gigabit Ethernet standards. Within two years of their conception (June 1996), Gigabit Ethernet over fiber (1000BASE-X) and copper (1000BASE-T) standards were approved, developed, and in operation. Gigabit Ethernet not only provides a massive scaling of bandwidth to 1000 Mbps (1 Gbps), but also shares a natural affinity with the vast installed base of Ethernet and Fast Ethernet campus LANs running IP applications.

Enhanced by additional protocols already common to Ethernet (such as IEEE 802.1Q Virtual LAN tagging, IEEE 802.1p prioritization, IETF Differentiated Services, and Common Open Policy Services), Gigabit Ethernet is now able to provide the differential qualities of service that previously only ATM could provide. One key difference with Gigabit Ethernet is that additional functionality can be incrementally added in a non-disruptive way as required, compared with the rather revolutionary approach of ATM. Further developments in bandwidth and distance scalability will see 10 Gbps Ethernet over local (10G-BASE-T) and wide area (10G-BASE-WX) networks. Thus, the promise of end-to-end seamless integration, once only the province of ATM, will be possible with Ethernet and all its derivations.

Today, there are two technology choices for the high-speed campus backbone: ATM and Gigabit Ethernet. While both seek to provide high bandwidth and differentiated QoS within enterprise LANs, these are very different technologies. Which is a "better" technology is no longer a subject of heated industry debate — Gigabit Ethernet is an appropriate choice for most campus backbones. Many business users have chosen Gigabit Ethernet as the backbone technology for their campus networks. An Infonetics Research survey (March 1999) records that 91 percent of respondents believe that Gigabit Ethernet is suitable for LAN backbone connection, compared with 66 percent for ATM. ATM continues to be a good option where its unique, rich, and complex functionality can be exploited by its deployment, most commonly in metropolitan and wide area networks. Whether Gigabit Ethernet or ATM is deployed as the campus backbone technology of choice, the ultimate decision is one of economics and sound business sense, rather than pure technical considerations.

The next two sections provide a brief description of each technology.

Asynchronous Transfer Mode (ATM)

Asynchronous Transfer Mode (ATM) has been used as a campus backbone technology since its introduction in the early 1990s. ATM is specifically designed to transport multiple traffic types — data, voice and video, real-time or non-real-time — with inherent QoS for each traffic category.

To enable this and other capabilities, additional functions and protocols are added to the basic ATM technology. Private Network Node Interface (PNNI) provides OSPF-like functions to signal and route QoS requests through a hierarchical ATM network. Multiprotocol over ATM (MPOA) allows the establishment of short-cut routes between communicating end systems on different subnets, bypassing the performance bottlenecks of intervening routers. There have been, and continue to be, enhancements in the areas of physical connectivity, bandwidth scalability, signaling, routing and addressing, security, and management.

While rich in features, this functionality has come with a fairly heavy price tag in complexity and cost. To provide backbone connectivity for today's legacy access networks, ATM — a connection-oriented technology — has to emulate capabilities inherently available in the predominantly connectionless Ethernet LANs, including broadcast, multicast, and unicast transmissions. ATM must also manipulate the predominantly frame-based traffic on these LANs, segmenting all frames into cells prior to transport, and then reassembling cells into frames prior to final delivery. Many of the complexity and interoperability issues are the result of this LAN Emulation, as well as the need to provide resiliency in these emulated LANs. There are many components required to make this workable; these include the LAN Emulation Configuration Server(s), LAN Emulation Servers, Broadcast and Unknown Servers, Selective Multicast Servers, the Server Cache Synchronization Protocol, the LAN Emulation User Network Interface, the LAN Emulation Network-Network Interface, and a multitude of additional protocols, signaling controls, and connections (point-to-point, point-to-multipoint, multipoint-to-point, and multipoint-to-multipoint).

Until recently, ATM was the only technology able to promise the benefits of QoS from the desktop, across the LAN and campus, and right across the world. However, the deployment of ATM to the desktop, or even in the campus backbone LANs, has not been as widespread as predicted. Nor have there been many native applications available or able to benefit from the inherent QoS capabilities provided by an end-to-end ATM solution. Thus, the benefits of end-to-end QoS have been more imagined than realized.

Gigabit Ethernet as the campus backbone technology of choice is now surpassing ATM. This is due to the complexity and the much higher pricing of ATM components such as network interface cards, switches, system software, management software, troubleshooting tools, and staff skill sets. There are also interoperability issues, and a lack of suitable exploiters of ATM technology.

Gigabit Ethernet

Today, Gigabit Ethernet is a very viable and attractive solution as a campus backbone LAN infrastructure. Although relatively new, Gigabit Ethernet is derived from a simple technology and from a large, well-tested Ethernet and Fast Ethernet installed base. Since its introduction, Gigabit Ethernet has been vigorously adopted as a campus backbone technology, with possible use as a high-capacity connection for high-performance servers and workstations to the backbone switches.

The main reason for this success is that Gigabit Ethernet provides the functionality that meets today's immediate needs at an affordable price, without undue complexity and cost. Gigabit Ethernet is complemented by a superset of functions and capabilities that can be added as needed, with the promise of further functional enhancements and bandwidth scalability (for example, IEEE 802.3ad Link Aggregation, and 10 Gbps Ethernet) in the near future. Thus, Gigabit Ethernet provides a simple scaling-up in bandwidth from the 10/100 Mbps Ethernet and Fast Ethernet LANs that are already massively deployed. Simply put, Gigabit Ethernet is Ethernet, but 100 times faster!

Since Gigabit Ethernet uses the same frame format as today's legacy installed LANs, it does not need the segmentation and reassembly function that ATM requires to provide cell-to-frame and frame-to-cell transitions. As a connectionless technology, Gigabit Ethernet does not require the added complexity of signaling and control protocols and connections that ATM requires. Finally, because QoS-capable desktops are not readily available, Gigabit Ethernet is at no disadvantage relative to ATM in providing QoS. New methods have been developed to incrementally deliver QoS and other needed capabilities, and these lend themselves to much more pragmatic and cost-effective adoption and deployment.

To complement the high-bandwidth capacity of Gigabit Ethernet as a campus backbone technology, higher-layer functions and protocols are available, or are being defined by standards bodies such as the Institute of Electrical and Electronics Engineers (IEEE) and the Internet Engineering Task Force (IETF). Many of these capabilities recognize the desire for convergence upon the ubiquitous Internet Protocol (IP). IP applications and transport protocols are being enhanced or developed to address the needs of high-speed, multimedia networking that benefit Gigabit Ethernet. The Differentiated Services (DiffServ) standard provides differential QoS that can be deployed from the Ethernet and Fast Ethernet desktops across the Gigabit Ethernet campus backbones. The use of IEEE 802.1Q VLAN Tagging and 802.1p User Priority settings allows different traffic types to be accorded the appropriate forwarding priority and service.

When combined with policy-enabled networks, DiffServ provides powerful, secure, and flexible QoS capabilities for Gigabit Ethernet campus LANs by using protocols such as Common Open Policy Services (COPS), Lightweight Directory Access Protocol (LDAP), Dynamic Host Configuration Protocol (DHCP), and Domain Name System (DNS). Further developments, such as the Resource Reservation Protocol, multicasting, real-time multimedia, audio and video transport, and IP telephony, will add functionality to a Gigabit Ethernet campus, using a gradual and manageable approach when users need these functions.

There are major technical differences between Gigabit Ethernet and ATM. A companion white paper, Gigabit Ethernet and ATM: A Business Perspective, provides a comparative view of the two technologies from a managerial perspective.

Technological Aspects

Aspects of a technology are important because they must meet some minimum requirements to be acceptable to users. Value-added capabilities will be used where desirable or affordable. If these additional capabilities are not used, whether for reasons of complexity or lack of "exploiters" of those capabilities, then users are paying for them for no reason (a common example is that many of the advanced features of a VCR are rarely exploited by most users). If features are too expensive, relative to the benefits that can be derived, then the technology is not likely to find widespread acceptance. Technology choices are ultimately business decisions.

The fundamental requirements for LAN campus networks are very much different from those of the WAN. It is thus necessary to identify the minimum requirements of a network, as well as the value-added capabilities that are "nice to have".

In the sections that follow, various terms are used with the following meanings:

• "Ethernet" is used to refer to all variations of the Ethernet technology: traditional 10 Mbps Ethernet, 100 Mbps Fast Ethernet, and 1000 Mbps Gigabit Ethernet.

• "Frame" and "packet" are used interchangeably, although this is not absolutely correct from a technical purist's point of view.

Quality of Service

Until recently, Quality of Service (QoS) was a key differentiator between ATM and Gigabit Ethernet. ATM was the only technology that promised QoS for voice, video, and data traffic. The Internet Engineering Task Force (IETF) and various vendors have since developed protocol specifications and standards that enhance the frame-switched world with QoS and QoS-like capabilities. These efforts are accelerating and, in certain cases, have evolved for use in both the ATM and frame-based worlds.

The difference between ATM and Gigabit Ethernet in the delivery of QoS is that ATM is connection-oriented, whereas Ethernet is connectionless. With ATM, QoS is requested via signaling before communication can begin. The connection is only accepted if it is without detriment to existing connections (especially for reserved-bandwidth applications). Network resources are then reserved as required, and the accepted QoS service is guaranteed to be delivered "end-to-end." By contrast, QoS for Ethernet is mainly delivered hop-by-hop, with standards in progress for signaling, connection admission control, and resource reservation.
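The admission decision just described (accept a new connection only if it cannot harm existing ones) can be illustrated with a minimal model. This is a sketch only; the class, method names, and capacity figures are hypothetical and are not part of any ATM signaling stack.

```python
# Illustrative sketch of connection admission control (CAC): a new
# connection is accepted only if its requested bandwidth fits within
# the capacity left over by already-admitted connections.

class Link:
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.reserved = 0.0          # bandwidth held by admitted connections

    def admit(self, requested_mbps):
        """Accept the request only if existing reservations are unharmed."""
        if self.reserved + requested_mbps <= self.capacity:
            self.reserved += requested_mbps   # resources reserved end-to-end
            return True
        return False                 # rejected: would jeopardize existing QoS

link = Link(capacity_mbps=155.0)     # e.g. a 155 Mbps trunk
print(link.admit(100.0))  # True  -> bandwidth reserved
print(link.admit(60.0))   # False -> would exceed capacity, call rejected
```

By contrast, a hop-by-hop scheme makes no such reservation in advance; each node simply gives marked frames preferential treatment as they arrive.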



ATM QoS

From its inception, ATM has been designed with QoS for voice, video and data applications. Each of these has different timing bounds, delay and delay-variation (jitter) sensitivities, and bandwidth requirements.

In ATM, QoS has very specific meanings that are the subject of ATM Forum and other standards specifications. Defined at the ATM layer (OSI Layer 2), the service architecture provides five categories of service that relate traffic characteristics and QoS requirements to network behavior:

• CBR: Constant Bit Rate, for applications that are sensitive to delay and delay variation, and need a fixed but continuously available amount of bandwidth for the duration of a connection. The amount of bandwidth required is characterized by the Peak Cell Rate. An example of this is circuit emulation.

• rt-VBR: Real-time Variable Bit Rate, for applications that need varying amounts of bandwidth with tightly regulated delay and delay variation, and whose traffic is bursty in nature. The amount of bandwidth is characterized by the Peak Cell Rate and Sustainable Cell Rate; burstiness is defined by the Maximum Burst Size. Example applications include real-time voice and video conferencing.

• nrt-VBR: Non-real-time Variable Bit Rate, for applications with similar needs to rt-VBR, requiring low cell loss and varying amounts of bandwidth, but with no critical delay and delay variation requirements. Example applications include non-real-time voice and video.

• ABR: Available Bit Rate, for applications requiring low cell loss and guaranteed minimum and maximum bandwidths, with no critical delay or delay variation requirements. The minimum and maximum bandwidths are characterized by the Minimum Cell Rate and Peak Cell Rate respectively.

• UBR: Unspecified Bit Rate, for applications that can use the network on a best-effort basis, with no service guarantees for cell loss, delay, or delay variation. Example applications are e-mail and file transfer.

Depending on the QoS requested, ATM provides a specific level of service. At one extreme, ATM provides a best-effort service for the lowest QoS (UBR), with no bandwidth reserved for the traffic. At the other extreme, ATM provides a guaranteed level of service for the higher-QoS (that is, CBR and VBR) traffic. Between these extremes, ABR is able to use whatever bandwidth is available, with proper traffic management and controls.

Because ATM is connection-oriented, requests for a particular QoS, admission control, and resource allocation are an integral part of the call signaling and connection setup process. The call is admitted and the connection established between communicating end systems only if the resources exist to meet the requested QoS without jeopardizing services to already established connections. Once established, traffic from the end systems is policed and shaped for conformance with the agreed traffic contract. Flow and congestion are managed in order to ensure the proper QoS delivery.

Gigabit Ethernet QoS

One simple strategy for solving the backbone congestion problem is to over-provision bandwidth in the backbone. This is especially attractive if the initial investment is relatively inexpensive and the ongoing maintenance is virtually 'costless' during its operational life. Gigabit Ethernet is an enabler of just such a strategy in the LAN. Gigabit Ethernet, and soon 10-Gigabit Ethernet, will provide all the bandwidth that is ever needed for many application types, eliminating the need for complex QoS schemes in many environments. However, some applications are bursty in nature and will consume all available bandwidth, to the detriment of other applications that may have time-critical requirements. The solution is to provide a priority mechanism that ensures bandwidth, buffer space, and processor power are allocated to the different types of traffic.

With Gigabit Ethernet, QoS has a broader interpretation than with ATM. But it is just as able — albeit with different mechanisms — to meet the requirements of voice, video and data applications. In general, Ethernet QoS is delivered at a higher layer of the OSI model. Frames are typically classified individually by a filtering scheme. Different priorities are assigned to each class of traffic, either explicitly by means of priority bit settings in the frame header, or implicitly in the priority level of the queue or VLAN to which they are assigned. Resources are then provided in a preferentially prioritized (unequal or unfair) way to service the queues. In this manner, QoS is delivered by providing differential services to the differentiated traffic through this mechanism of classification, priority setting, prioritized queue assignment, and prioritized queue servicing. (For further information on QoS in frame-switched networks, see WP3510-A/5-99, a Nortel Networks white paper available on the Web at www.nortelnetworks.com.)

Differentiated Services

Chief among the mechanisms available for Ethernet QoS is Differentiated Services (DiffServ). The IETF DiffServ Working Group proposed DiffServ as a simple means to provide scalable differentiated services in an IP network. DiffServ redefines the IP Precedence/Type of Service field in the IPv4 header and the Traffic Class field in the IPv6 header as the new DS Field (see Figure 1). An IP packet's DS Field is then marked with a specific bit pattern, so the packet will receive the desired differentiated service (that is, the desired forwarding priority), also known as a per-hop behavior (PHB), at each network node along the path from source to destination.

To provide a common use and interpretation of the possible DSCP bit patterns, RFC 2474 and RFC 2475 define the architecture, format, and general use of these bits within the DSCP Field. These definitions are required in order to guarantee the consistency of expected service when a packet crosses from one network's administrative domain to another, or for multi-vendor interoperability. The Working Group also standardized the following specific per-hop behaviors and recommended bit patterns (also known as code points or DSCPs) of the DS Field for each PHB:

• Expedited Forwarding (EF-PHB), sometimes described as Premium Service, uses a DSCP of b'101110'. The EF-PHB provides the equivalent service of a low-loss, low-latency, low-jitter, assured-bandwidth point-to-point connection (a virtual leased line). EF-PHB frames are assigned to a high-priority queue where the arrival rate of frames at a node is shaped to be always less than the configured departure rate at that node.

• Assured Forwarding (AF-PHB) uses 12 DSCPs to identify four forwarding classes, each with three levels of drop precedence (12 PHBs). Frames are assigned by the user to the different classes and drop precedences depending on the desired degree of assured — but not guaranteed — delivery. When allocated resources (buffers and bandwidth) are insufficient to meet demand, frames with the high drop precedence are discarded first. If resources are still restricted, medium-precedence frames are discarded next, and low-precedence frames are dropped only in the most extreme lack-of-resource conditions.

• A recommended Default PHB with a DSCP of b'000000' (six zeros) that equates to today's best-effort service when no explicit DS marking exists.

In essence, DiffServ operates as follows:

• Each frame entering a network is analyzed and classified to determine the appropriate service desired by the application.

• Once classified, the frame is marked in the DS field with the assigned DSCP value to indicate the appropriate PHB. Within the core of the network, frames are forwarded according to the PHB indicated.

• Analysis, classification, marking, policing, and shaping operations need only be carried out at the host or network boundary node. Intervening nodes need only examine the short, fixed-length DS Field to determine the appropriate PHB to be given to the frame. This architecture is the key to DiffServ scalability. In contrast, other models such as RSVP/Integrated Services are severely limited by signaling, application flow, and forwarding state maintenance at each and every node along the path.

Figure 1: Differentiated Services Field (RFC 2474).

Byte 1:     IP Version (bits 1-4) | IP Header Length (bits 5-8)
Byte 2:     Differentiated Services Code Point (DSCP) (bits 1-6) | Currently Unused (bits 7-8)
Bytes 3-20: (Remainder of IP Header)
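The DSCP values quoted above can be composed with simple bit arithmetic. The sketch below uses invented helper names; the bit patterns themselves are the standardized ones described in the text (EF = b'101110', AF classes with three drop precedences, and the all-zeros Default).

```python
# Sketch of the DSCP bit patterns described above.
# Helper names are illustrative, not from any standard library.

EF_DSCP = 0b101110        # Expedited Forwarding ("Premium Service") = 46
DEFAULT_DSCP = 0b000000   # Default PHB, today's best-effort service

def af_dscp(forwarding_class, drop_precedence):
    """Assured Forwarding code point: 3-bit class (1-4), 2-bit drop
    precedence (1-3), trailing 0 bit -> DSCP = class*8 + precedence*2."""
    assert 1 <= forwarding_class <= 4 and 1 <= drop_precedence <= 3
    return (forwarding_class << 3) | (drop_precedence << 1)

def ds_byte(dscp):
    """Place the 6-bit DSCP in the upper bits of the redefined ToS byte;
    the low 2 bits are 'currently unused' per Figure 1."""
    return (dscp & 0x3F) << 2

print(af_dscp(1, 1))     # 10  (AF class 1, low drop precedence)
print(af_dscp(4, 3))     # 38  (AF class 4, high drop precedence)
print(ds_byte(EF_DSCP))  # 184 (0b10111000)
```

A boundary node would write `ds_byte(...)` into the second header byte shown in Figure 1; interior nodes only read those six bits back to select the PHB.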



• Policies govern how frames are marked and traffic conditioned upon entry to the network; they also govern the allocation of network resources to the traffic streams, and how the traffic is forwarded within that network.

DiffServ allows nodes that are not DS-capable, or even DS-aware, to continue to use the network in the same way as they have previously, by simply using the Default PHB, which is best-effort forwarding. Thus, without requiring end-to-end deployment, DiffServ provides Gigabit Ethernet with a powerful, yet simple and scalable, means to provide differential QoS services to support various types of application traffic.

Common Open Policy Services

To enable a Policy Based Networking capability, the Common Open Policy Services (COPS) protocol can be used to complement DiffServ-capable devices. COPS provides an architecture and a request-response protocol for communicating admission control requests, policy-based decisions, and policy information between a network policy server and the set of clients it serves.

With Gigabit Ethernet, the switches at the network ingress may act as COPS clients. COPS clients examine frames as they enter the network, communicate with a central COPS server to decide if the traffic should be admitted to the network, and enforce the policies. These policies include any QoS forwarding treatment to be applied during transport. Once this is determined, the DiffServ-capable Gigabit Ethernet switches can mark the frames using the selected DSCP bit pattern, apply the appropriate PHB, and forward the frames to the next node. The next node need only examine the DiffServ markings to apply the appropriate PHB. Thus, frames are forwarded hop-by-hop through a Gigabit Ethernet campus with the desired QoS.

In Nortel Networks' Passport* Campus Solution, COPS will be used by Optivity* Policy Services (the COPS server) and the Passport Enterprise and Routing Switches (COPS clients) to communicate QoS policies defined at the policy server to the switches for enforcement (see Figure 2).

Figure 2: Passport Campus Solution and Optivity Policy Services. [The diagram shows the policy server (Optivity Policy Services and Management) communicating filter and queuing rules via Common Open Policy Services to Passport 8000 and Passport 1000 Routing Switches and a Passport 700 Server Switch. An end station can set the 802.1p or DSCP field; the Routing Switch validates the marking against the policy server, sets/resets the DSCP using Express Classification, and shapes and forwards the classified frames; the Server Switch ensures the most appropriate server in the server farm is used, depending on loads and response times.]

Connection-oriented vs. Connectionless

ATM is a connection-oriented protocol. Most enterprise LAN networks are connectionless Ethernet networks, whether Ethernet, Fast Ethernet or Gigabit Ethernet.

Note: Because of Ethernet's predominance, it greatly simplifies the discussion to not refer to the comparatively sparse Token-Ring technology; this avoids complicating the comparison with qualifications for Token-Ring LANs and ELANs, Route Descriptors instead of MAC addresses as LAN destinations, and so forth.

An ATM network may be used as a high-speed backbone to connect Ethernet LAN switches and end stations together. However, a connection-oriented ATM backbone requires ATM Forum LAN Emulation (LANE) protocols to emulate the operation of connectionless legacy LANs. In contrast with simple Gigabit Ethernet backbones, much of the complexity of ATM backbones arises from the need for LANE.

ATM LAN Emulation v1

LANE version 1 was approved in January 1995. Whereas a Gigabit Ethernet backbone is very simple to implement, each ATM emulated LAN (ELAN) needs several logical components and protocols that add to ATM's complexity. These components are:

• LAN Emulation Configuration Server(s) (LECS) to, among other duties, provide configuration data to an end system and assign it to an ELAN (although the same LECS may serve more than one ELAN).

• Only one LAN Emulation Server (LES) per ELAN, to resolve 6-byte LAN MAC addresses to 20-byte ATM addresses and vice versa.



Figure 3: LAN Emulation v1 Connections and Functions.

Connection Name           Uni- or Bi-directional   Point-to-point or Point-to-multipoint   Used for communication
Configuration Direct VCC  Bi-directional           Point-to-point                          Between an LECS and an LEC
Control Direct VCC        Bi-directional           Point-to-point                          Between an LES and its LECs**
Control Distribute VCC    Uni-directional          Point-to-multipoint                     From an LES to its LECs
Multicast Send VCC        Bi-directional           Point-to-point                          Between a BUS and an LEC
Multicast Forward VCC     Uni-directional          Point-to-multipoint                     From a BUS to its LECs
Data Direct VCC           Bi-directional           Point-to-point                          Between an LEC and another LEC

**Note: There is a difference between LECS with an uppercase "S" (meaning LAN Emulation Configuration Server) and LECs with a lowercase "s" (meaning LAN Emulation Clients, or more than one LEC) at the end of the acronym.

• Only one Broadcast and Unknown Server (BUS) per ELAN, to forward broadcast frames, multicast frames, and frames for destinations whose LAN or ATM address is as yet unknown.

• One or more LAN Emulation Clients (LEC) to represent the end systems. This is further complicated by whether the end system is a LAN switch to which other Ethernet end stations are attached, or whether it is an ATM-directly-attached end station. A LAN switch requires a proxy LEC, whereas an ATM-attached end station requires a non-proxy LEC.

Collectively, the LECS, LES, and BUS are known as the LAN Emulation Services. Each LEC (proxy or non-proxy) communicates with the LAN Emulation Services using different virtual channel connections (VCCs) and LAN Emulation User Network Interface (LUNI) protocols. Figure 3 shows the VCCs used in LANE v1.

Some VCCs are mandatory — once established, they must be maintained if the LEC is to participate in the ELAN. Other VCCs are optional — they may or may not be established and, if established, they may or may not be released thereafter. Unintended release of a required VCC may trigger the setup process. In certain circumstances, this can lead to instability in the network.

The most critical components of the LAN Emulation Service are the LES and BUS, without which an ELAN cannot function. Because each ELAN can only be served by a single LES and BUS, these components need to be backed up by other LESs and BUSs to prevent any single point of failure from stopping communication between the possibly hundreds or even thousands of end stations attached to an ELAN. In addition, the single LES or BUS represents a potential performance bottleneck.

Thus, it became necessary for the LAN Emulation Service components to be replicated for redundancy and the elimination of single points of failure, and distributed for performance.

ATM LAN Emulation v2

To enable communication between the redundant and distributed LAN Emulation Service components, as well as other functional enhancements, LANE v1 was re-specified as LANE v2; it now comprises two separate protocols:

• LUNI: LAN Emulation User Network Interface (approved July 1997)

• LNNI: LAN Emulation Network-Network Interface (approved February 1999).

LUNI, among other enhancements, added the Selective Multicast Server (SMS) to provide a more efficient means of forwarding multicast traffic, which was previously performed by the BUS. The SMS thus offloads much of the multicast processing from the BUS, allowing the BUS to focus more on the forwarding of broadcast traffic and traffic with yet-to-be-resolved LAN destinations. LNNI provides for the exchange of configuration, status, control coordination, and database synchronization between redundant and distributed components of the LAN Emulation Service.

However, each improvement adds new complexity. Additional protocols are required, and additional VCCs need to be established, maintained, and monitored for communication between the new LAN Emulation Service components and LECs. For example, all LESs serving an ELAN communicate control messages to each other through a full mesh of Control Coordinate VCCs. These LESs must also synchronize their LAN-ATM address databases, using the Server Cache Synchronization Protocol (SCSP — RFC 2334), across the Cache Synchronization VCC. Similarly, all BUSs serving an ELAN must be fully connected by a mesh of Multicast Forward VCCs used to forward data.
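The cost of these full meshes grows quadratically with the number of redundant servers. A quick sketch of the count, assuming bidirectional point-to-point connections such as the Control Coordinate VCCs between LESs:

```python
# Sketch: VCCs needed to fully mesh n redundant servers.
# A full mesh of bidirectional point-to-point connections
# needs n*(n-1)/2 of them; each added server adds n-1 more.

def full_mesh_vccs(n):
    return n * (n - 1) // 2

for n in (2, 4, 8):
    print(n, "LESs ->", full_mesh_vccs(n), "Control Coordinate VCCs")
# 2 LESs -> 1, 4 LESs -> 6, 8 LESs -> 28
```

Each of these connections must be established, monitored, and torn down, which is part of the operational overhead the text describes.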
Figure 4: LAN Emulation v2 Additional Connections and/or Functions.

Connection Name                  Uni- or Bi-directional   Point-to-point or Point-to-multipoint   Used for communication
LECS Synchronization VCC         Bi-directional           Point-to-point                          Between LECSs
Configuration Direct VCC         Bi-directional           Point-to-point                          Between an LECS and an LEC, LES or BUS
Control Coordinate VCC           Bi-directional           Point-to-point                          Between LESs
Cache Synchronization VCC        Bi-directional           Point-to-point                          Between an LES and its SMSs
Default Multicast Send VCC       Bi-directional           Point-to-point                          Between a BUS and an LEC (as in v1)
Default Multicast Forward VCC    Uni-directional          Point-to-multipoint                     From a BUS to its LECs and other BUSs
Selective Multicast Send VCC     Bi-directional           Point-to-point                          Between an SMS and an LEC
Selective Multicast Forward VCC  Uni-directional          Point-to-multipoint                     From an SMS to its LECs

Unicast traffic from a sending LEC is initially forwarded to a receiving LEC via the BUS. When a Data Direct VCC has been established between the two LECs, the unicast traffic is then forwarded via the direct path. During the switchover from the initial to the direct path, it is possible for frames to be delivered out of order. To prevent this, LANE requires the sending LEC either to implement the Flush protocol or to delay transmission, at some latency cost.

The forwarding of multicast traffic from an LEC depends on the availability of an SMS:
• If an SMS is not available, the LEC establishes the Default Multicast Send VCC to the BUS that, in turn, will add the LEC as a leaf to its Default Multicast Forward VCC. The BUS is then used for the forwarding of multicast traffic.
• If an SMS is available, the LEC can establish, in addition to the Default Multicast Send VCC to the BUS, a Selective Multicast Send VCC to the SMS. In this case, the BUS will add the LEC as a leaf to its Default Multicast Forward VCC and the SMS will add the LEC as a leaf to its Selective Multicast Forward VCC. The BUS is then used initially to forward multicast traffic until the multicast destination is resolved to an ATM address, at which time the SMS is used. The SMS also synchronizes its LAN-ATM multicast address database with its LES using SCSP across Cache Synchronization VCCs.

Figure 4 shows the additional connections required by LANE v2.

This multitude of control and coordination connections, as well as the exchange of control frames, consumes memory, processing power, and bandwidth, just so that a Data Direct VCC can finally be established for persistent communication between two end systems. The complexity can be seen in Figure 5.

Figure 5: Complexity of ATM LAN Emulation.
[Diagram: LECSs, LESs, BUSs, SMSs, and LECs interconnected by the numbered VCCs listed below.]
1 Configuration Direct VCC
2 Control Direct VCC
3 Control Distribute VCC
4 Default Multicast Send VCC
5 Default Multicast Forward VCC
6 Data Direct VCC
7 Selective Multicast Send VCC
8 Selective Multicast Forward VCC
9 Cache Sync-only VCC **
10 Control Coordinate-only VCC **
11 LECS Sync VCC
** may be combined into one dual-function VCC between two neighbor LESs


AAL-5 Encapsulation
In addition to the complexity of connections and protocols, the data carried over LANE uses ATM Adaptation Layer-5 (AAL-5) encapsulation, which adds overhead to the Ethernet frame. The Ethernet frame is stripped of its Frame Check Sequence (FCS); the remaining fields are copied to the payload portion of the CPCS-PDU, a 2-byte LANE header (LEH) is added to the front, and an 8-byte trailer is added at the end. Up to 47 pad bytes may be added, to produce a CPCS-PDU that is a multiple of 48, the size of an ATM cell payload.

The CPCS-PDU also has to be segmented into 53-byte ATM cells before being transmitted onto the network. At the receiving end, the 53-byte ATM cells have to be decapsulated and reassembled into the original Ethernet frame.

Figure 6 shows the CPCS-PDU that is used to transport Ethernet frames over LANE.

Figure 6: AAL-5 CPCS-PDU.

Field:  LEH  CPCS-PDU Payload  Pad   | CPCS-PDU Trailer: CPCS-UU  CPI  Length  CRC
Bytes:  2    1-65535           0-47  |                   1        1    2       4

Gigabit Ethernet LAN
In contrast, a Gigabit Ethernet LAN backbone does not have the complexity and overhead of control functions, data encapsulation and decapsulation, segmentation and reassembly, and control and data connections required by an ATM backbone.

As originally intended, at least for initial deployment in the LAN environment, Gigabit Ethernet uses full-duplex transmission between switches, or between a switch and a server in a server farm — in other words, in the LAN backbone. Full-duplex Gigabit Ethernet is much simpler, and does not suffer from the complexities and deficiencies of half-duplex Gigabit Ethernet, which uses the CSMA/CD protocol, Carrier Extension, and frame bursting.

Frame Format (Full-Duplex)
Full-duplex Gigabit Ethernet uses the same frame format as Ethernet and Fast Ethernet, with a minimum frame length of 64 bytes and a maximum of 1518 bytes (including the FCS but excluding the Preamble/SFD). If the data portion is less than 46 bytes, pad bytes are added to produce a minimum frame size of 64 bytes.

Figure 7 shows the same frame format for Ethernet, Fast Ethernet and full-duplex Gigabit Ethernet that enables the seamless integration of Gigabit Ethernet campus backbones with the Ethernet and Fast Ethernet desktops and servers they interconnect.

Figure 7: Full-Duplex Gigabit Ethernet Frame Format (no Carrier Extension).

Field:  Preamble/SFD  Destination Address  Source Address  Length/Type  Data + Pad   FCS
Bytes:  8             6                    6               2            46 to 1500   4
(64 bytes min to 1518 bytes max, excluding Preamble/SFD)
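The AAL-5 arithmetic above (strip the FCS, prepend the 2-byte LEH, pad so that payload plus 8-byte trailer fills whole 48-byte cell payloads, then carry in 53-byte cells) can be checked with a short sketch; `lane_cells` is a hypothetical helper name:

```python
# Sketch of AAL-5/LANE encapsulation arithmetic (field sizes per Figure 6).

LEH, TRAILER, CELL_PAYLOAD, CELL = 2, 8, 48, 53

def lane_cells(frame_len_with_fcs):
    """Return (pad bytes, cell count, total bytes on the wire)."""
    payload = LEH + (frame_len_with_fcs - 4)   # strip the 4-byte FCS
    total = payload + TRAILER
    pad = (-total) % CELL_PAYLOAD              # 0..47 pad bytes
    cells = (total + pad) // CELL_PAYLOAD
    return pad, cells, cells * CELL

print(lane_cells(1518))  # maximum-size frame -> (12, 32, 1696)
print(lane_cells(64))    # minimum-size frame -> (26, 2, 106)
```

The two printed cases match Figure 9 later in the paper: a 1518-byte frame needs 12 pad bytes and 32 cells (1696 bytes), and a 64-byte frame needs 2 cells (106 bytes).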



Frame Format (Half-Duplex)
Because of the greatly increased transmission speed and the need to support practical network distances, half-duplex Gigabit Ethernet requires the use of the Carrier Extension. The Carrier Extension provides a minimum transmission length of 512 bytes. This allows collisions to be detected without increasing the minimum frame length of 64 bytes; thus, no changes are required to higher layer software, such as network interface card (NIC) drivers and protocol stacks.

With half-duplex transmission, if the data portion is less than 46 bytes, pad bytes are added in the Pad field to increase the minimum (non-extended) frame to 64 bytes. In addition, bytes are added in the Carrier Extension field so that a minimum of 512 bytes for transmission is generated. For example, with 46 bytes of data, no bytes are needed in the Pad field, and 448 bytes are added to the Carrier Extension field. On the other hand, with 494 or more (up to 1500) bytes of data, no pad or Carrier Extension is needed.

Figure 8: Half-Duplex Gigabit Ethernet Frame Format (with Carrier Extension).

Field:  Preamble/SFD  Destination Address  Source Address  Length/Type  Data + Pad  FCS  Carrier Extension
Bytes:  8             6                    6               2            46 to 493   4    448 to 1
(64 bytes min non-extended; 512 bytes min transmission)

“Goodput” Efficiency
With full-duplex Gigabit Ethernet, the good throughput (“goodput”) in a predominantly 64-byte frame size environment, where no Carrier Extension is needed, is calculated as follows (where SFD = start frame delimiter, and IFG = interframe gap):

64 bytes (frame) / [64 bytes (frame) + 8 bytes (SFD) + 12 bytes (IFG)] = approx. 76%

This goodput translates to a forwarding rate of 1.488 million packets per second (Mpps), known as the wirespeed rate. With Carrier Extension, the resulting goodput is very much reduced:

64 bytes (frame) / [512 bytes (frame with CE) + 8 bytes (SFD) + 12 bytes (IFG)] = approx. 12%

In ATM and Gigabit Ethernet comparisons, this 12 percent figure is sometimes quoted as evidence of Gigabit Ethernet’s inefficiency. However, this calculation is only applicable to half-duplex (as opposed to full-duplex) Gigabit Ethernet. In the backbone and server-farm connections, the vast majority (if not all) of the Gigabit Ethernet deployed will be full-duplex.

Mapping Ethernet Frames into ATM LANE Cells
As mentioned previously, using ATM LAN Emulation as the campus backbone for Ethernet desktops requires AAL-5 encapsulation and subsequent segmentation and reassembly.

Figure 9 shows a maximum-sized 1518-byte Ethernet frame mapped into a CPCS-PDU and segmented into 32 53-byte ATM cells, using AAL-5; this translates into a goodput efficiency of:

1514 bytes (frame without FCS) / [32 ATM cells x 53 bytes per ATM cell] = approx. 89%

For a minimum size 64-byte Ethernet frame, two ATM cells will be required; this translates into a goodput efficiency of:

60 bytes (frame without FCS) / [2 ATM cells x 53 bytes per ATM cell] = approx. 57%

Figure 9: Mapping Ethernet Frame into ATM Cells.

Ethernet frame — Field: Preamble/SFD  Destination Address  Source Address  Length/Type  Data + Pad  FCS; Bytes: 8, 6, 6, 2, 1500, 4 (1514 bytes from Destination Address through Data)
CPCS-PDU — Field: LEH  Ethernet Frame  Pad  CPCS-UU  CPI  Length  CRC; Bytes: 2, 1514, 12, 1, 1, 2, 4
ATM cells — cells 1 through 32, at 53 bytes each = 1696 bytes
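All four efficiency figures above can be reproduced in a few lines; this is a sketch with illustrative function names:

```python
# Reproduces the goodput arithmetic above.  Per-frame wire overheads:
# 8-byte Preamble/SFD and a 12-byte interframe gap (IFG).

PREAMBLE_SFD, IFG = 8, 12

def ge_goodput(frame=64, extended=False):
    # With Carrier Extension, a short frame occupies 512 bytes on the wire.
    wire = (512 if extended else frame) + PREAMBLE_SFD + IFG
    return frame / wire

def lane_goodput(frame_with_fcs):
    # LEH (2) + frame without FCS (-4) + trailer (8), padded to 48-byte
    # cell payloads, carried in 53-byte cells.
    payload = 2 + (frame_with_fcs - 4) + 8
    cells = -(-payload // 48)                       # ceiling division
    return (frame_with_fcs - 4) / (cells * 53)

print(round(ge_goodput(64), 2))                     # 0.76 (full duplex)
print(round(ge_goodput(64, extended=True), 2))      # 0.12 (Carrier Extension)
print(round(lane_goodput(1518), 2))                 # 0.89 (32 cells)
print(round(lane_goodput(64), 2))                   # 0.57 (2 cells)
print(int(1_000_000_000 // ((64 + 20) * 8)))        # 1488095 pps, the wirespeed rate
```

The last line is the 1.488 Mpps wirespeed figure: one gigabit per second divided by the 84-byte on-wire cost of a minimum-size frame.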



Frame Bursting
The Carrier Extension is an overhead, especially if short frames are the predominant traffic size. To enhance goodput, half-duplex Gigabit Ethernet allows frame bursting. Frame bursting allows an end station to send multiple frames in one access (that is, without contending for channel access for each frame) up to the burstLength parameter. If a frame is being transmitted when the burstLength threshold is exceeded, the sender is allowed to complete the transmission. Thus, the maximum duration of a frame burst is 9710 bytes; this is the burstLength (8192 bytes) plus the maxFrameSize (1518 bytes). Only the first frame is extended if required. Each frame is spaced from the previous by a 96-bit interframe gap. Both sender and receiver must be able to process frame bursting.

Figure 10: Frame Bursting.

One frame burst: Preamble/SFD (8) | MAC Frame-1 + Extension if needed (512 bytes min) | IFG (12) | Preamble/SFD (8) | MAC Frame-2 (64-1518) | IFG (12) | ... | Preamble/SFD (8) | MAC Frame-n (64-1518) | IFG (12)

CSMA/CD Protocol
Full-duplex Gigabit Ethernet does not use or need the CSMA/CD protocol. Because of the dedicated, simultaneous, and separate send and receive channels, it is very much simplified, without the need for carrier sensing, collision detection, backoff and retry, carrier extension, and frame bursting.

Flow Control and Congestion Management
In both ATM and Gigabit Ethernet, flow control and congestion management are necessary to ensure that the network elements, individually and collectively, are able to meet QoS objectives required by applications using that network. Sustained congestion in a switch, whether ATM or Gigabit Ethernet, will eventually result in frames being discarded. Various techniques are employed to minimize or prevent buffer overflows, especially under transient overload conditions. The difference between ATM and Gigabit Ethernet is in the availability, reach, and complexity (functionality and granularity) of these techniques.

ATM Traffic and Congestion Management
In an ATM network, the means employed to manage traffic flow and congestion are based on the traffic contract: the ATM Service Category and the traffic descriptor parameters agreed upon for a connection. These means may include:
• Connection Admission Control (CAC): accepting or rejecting connections being requested at the call setup stage, depending upon availability of network resources (this is the first point of control and takes into account connections already established).
• Traffic Policing: monitoring and controlling the stream of cells entering the network for connections accepted, and marking out-of-profile traffic for possible discard using Usage Parameter Control (UPC) and the Generic Cell Rate Algorithm (GCRA).
• Backpressure: exerting backpressure on the source to decrease its cell transmission rate when congestion appears likely or imminent.
• Congestion Notification: notifying the source and intervening nodes of current or impending congestion by setting the Explicit Forward Congestion Indication (EFCI) bit in the cell header (Payload Type Indicator) or using Relative Rate (RR) or Explicit Rate (ER) bits in Resource Management (RM) cells to provide feedback in both the forward and backward directions, so that remedial action can be taken.
• Cell Discard: employing various discard strategies to avoid or relieve congestion:
  • Selective Cell Discard: dropping cells that are non-compliant with traffic contracts or have their Cell Loss Priority (CLP) bit marked for possible discard if necessary



  • Early Packet Discard (EPD): dropping all the cells belonging to a frame that is queued, but for which transmission has not been started
  • Partial Packet Discard (PPD): dropping all the cells belonging to a frame that is being transmitted (a more drastic action than EPD)
  • Random Early Detection (RED): dropping all the cells of randomly selected frames (from different sources) when traffic arrival algorithms indicate impending congestion (thus avoiding congestion), and preventing waves of synchronized re-transmission precipitating congestion collapse. A further refinement is offered using Weighted RED (WRED).
• Traffic Shaping: modifying the stream of cells leaving a switch (to enter or transit a network) so as to ensure conformance with contracted profiles and services. Shaping may include reducing the Peak Cell Rate, limiting the duration of bursting traffic, and spacing cells more uniformly to reduce the Cell Delay Variation.

Gigabit Ethernet Flow Control
For half-duplex operation, Gigabit Ethernet uses the CSMA/CD protocol to provide implicit flow control by “backpressuring” the sender in two simple ways:
• Forcing collisions with the incoming traffic, which forces the sender to back off and retry as a result of the collision, in conformance with the CSMA/CD protocol.
• Asserting carrier sense to provide a “channel busy” signal, which prevents the sender from accessing the medium to transmit, again in conformance with the protocol.

With full-duplex operation, Gigabit Ethernet uses explicit flow control to throttle the sender. The IEEE 802.3x Task Force defined a MAC Control architecture, which adds an optional MAC Control sub-layer above the MAC sub-layer, and uses MAC Control frames to control the flow. To date, only one MAC Control frame has been defined; this is for the PAUSE operation.

A switch or an end station can send a PAUSE frame to stop a sender from transmitting data frames for a specified length of time. Upon expiration of the period indicated, the sender may resume transmission. The sender may also resume transmission when it receives a PAUSE frame with a zero time specified, indicating the waiting period has been cancelled. On the other hand, the waiting period may be extended if the sender receives a PAUSE frame with a longer period than previously received.

Using this simple start-stop mechanism, Gigabit Ethernet prevents frame discards when input buffers are temporarily depleted by transient overloads. It is only effective when used on a single full-duplex link between two switches, or between a switch and an end station (server). Because of its simplicity, the PAUSE function does not provide flow control across multiple links, or from end to end across (or through) intervening switches. It also requires both ends of a link (the sending and receiving partners) to be MAC Control-capable.

Bandwidth Scalability
Advances in computing technology have fueled the explosion of visually and aurally exciting applications for e-commerce, whether Internet, intranet or extranet. These applications require exponential increases in bandwidth. As a business grows, increases in bandwidth are also required to serve the greater number of users without degrading performance. Therefore, bandwidth scalability in the network infrastructure is critical to supporting the incremental or quantum increases in bandwidth capacity that many businesses frequently require.

ATM and Gigabit Ethernet both provide bandwidth scalability. Whereas ATM’s bandwidth scalability is more granular and extends from the desktop and over the MAN/WAN, Gigabit Ethernet has focused on scalability in campus networking from the desktop to the MAN/WAN edge. Gigabit Ethernet provides quantum leaps in bandwidth from 10 Mbps, through 100 Mbps, to 1000 Mbps (1 Gbps), and even 10,000 Mbps (10 Gbps), without a corresponding quantum leap in costs.
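The PAUSE frame just described is small enough to construct by hand: the MAC Control EtherType is 0x8808, the PAUSE opcode is 0x0001, the destination is the reserved multicast address 01-80-C2-00-00-01, and the pause time is expressed in quanta of 512 bit times. A sketch (the source MAC below is a documentation-range placeholder):

```python
# Sketch of an IEEE 802.3x PAUSE frame.  pause_time is in units of
# 512 bit times (quanta); a value of 0 cancels a pending pause.

def pause_frame(src_mac: bytes, pause_time: int) -> bytes:
    dst = bytes.fromhex("0180c2000001")     # reserved PAUSE multicast
    ethertype = (0x8808).to_bytes(2, "big") # MAC Control
    opcode = (0x0001).to_bytes(2, "big")    # PAUSE
    quanta = pause_time.to_bytes(2, "big")
    payload = opcode + quanta
    pad = bytes(46 - len(payload))          # pad to the 46-byte minimum payload
    return dst + src_mac + ethertype + payload + pad  # FCS added by the MAC

frame = pause_frame(bytes.fromhex("00005e005301"), 0xFFFF)
print(len(frame))  # 60 bytes before the 4-byte FCS
```

Sending this frame with pause_time = 0xFFFF asks the partner to stop for the maximum period; sending it again with 0 releases the pause early, matching the cancel behavior described above.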



ATM Bandwidth
ATM is scalable from 1.544 Mbps through to 2.4 Gbps and even higher speeds. Approved ATM Forum specifications for the physical layer include the following bandwidths:
• 1.544 Mbps DS1
• 2.048 Mbps E1
• 25.6 Mbps over shielded and unshielded twisted pair copper cabling (the bandwidth that was originally envisioned for ATM to the desktop)
• 34.368 Mbps E3
• 44.736 Mbps DS3
• 100 Mbps over multimode fiber cabling
• 155.52 Mbps SONET/SDH over UTP and single and multimode fiber cabling
• 622.08 Mbps SONET/SDH over single and multimode fiber cabling
• 622.08 Mbps and 2.4 Gbps cell-based physical layer (without any frame structure).

Work is also in progress (as of October 1999) on 1 Gbps cell-based physical layer, 2.4 Gbps SONET/SDH, and 10 Gbps SONET/SDH interfaces.

Inverse Multiplexing over ATM
In addition, the ATM Forum’s Inverse Multiplexing over ATM (IMA) standard allows several lower-speed DS1/E1 physical links to be grouped together as a single higher speed logical link, over which cells from an ATM cell stream are individually multiplexed. The original cell stream is recovered in correct sequence from the multiple physical links at the receiving end. Loss and recovery of individual links in an IMA group are transparent to the users. This capability allows users to:
• Interconnect ATM campus networks over the WAN, where ATM WAN facilities are not available, by using existing DS1/E1 facilities
• Incrementally subscribe to more DS1/E1 physical links as needed
• Protect against single link failures when interconnecting ATM campus networks across the WAN
• Use multiple DS1/E1 links that are typically lower cost than a single DS3/E3 (or higher speed) ATM WAN link for normal operation or as backup links.

Gigabit Ethernet Bandwidth
Ethernet is scalable from the traditional 10 Mbps Ethernet, through 100 Mbps Fast Ethernet, and 1000 Mbps Gigabit Ethernet. Now that the Gigabit Ethernet standards have been completed, the next evolutionary step is 10 Gbps Ethernet. The IEEE P802.3 Higher Speed Study Group has been created to work on 10 Gbps Ethernet, with Project Authorization Request and formation of a Task Force targeted for November 1999 and a standard expected by 2002.

Bandwidth scalability is also possible through link aggregation (that is, grouping multiple Gigabit Ethernet links together to provide greater bandwidth and resiliency). Work in this area of standardization is proceeding through the IEEE 802.3ad Link Aggregation Task Force (see the Trunking and Link Aggregation section of this paper).

Distance Scalability
Distance scalability is important because of the need to extend the network across widely dispersed campuses, and within large multi-storied buildings, while making use of existing UTP-5 copper cabling and common single and multimode fiber cabling, and without the need for additional devices such as repeaters, extenders, and amplifiers.

Both ATM and Gigabit Ethernet (IEEE 802.3ab) can operate easily within the limit of 100 meters from a wiring closet switch to the desktop using UTP-5 copper cabling. Longer distances are typically achieved using multimode (50/125 or 62.5/125 µm) or single mode (9-10/125 µm) fiber cabling.



Figure 11: Ethernet and Fast Ethernet Supported Distances.

                            10BASE-T    10BASE-FL   100BASE-TX   100BASE-FX
IEEE Standard               802.3       802.3       802.3u       802.3u
Data Rate                   10 Mbps     10 Mbps     100 Mbps     100 Mbps
Multimode fiber distance    N/A         2 km        N/A          412 m (half duplex), 2 km (full duplex)
Singlemode fiber distance   N/A         25 km       N/A          20 km
Cat 5 UTP distance          100 m       N/A         100 m        N/A
STP/Coax distance           500 m       N/A         100 m        N/A

Gigabit Ethernet Distances
Figure 11 shows the maximum distances supported by Ethernet and Fast Ethernet, using various media.

IEEE 802.3z Gigabit Ethernet — Fiber Cabling
IEEE 802.3u-1995 (Fast Ethernet) extended the operating speed of CSMA/CD networks to 100 Mbps over both UTP-5 copper and fiber cabling. The IEEE P802.3z Gigabit Ethernet Task Force was formed in July 1996 to develop a Gigabit Ethernet standard. This work was completed in July 1998 when the IEEE Standards Board approved the IEEE 802.3z-1998 standard.

The IEEE 802.3z standard specifies the operation of Gigabit Ethernet over existing single and multimode fiber cabling. It also supports short (up to 25 m) copper jumper cables for interconnecting switches, routers, or other devices (servers) in a single computer room or wiring closet. Collectively, the three designations — 1000BASE-SX, 1000BASE-LX and 1000BASE-CX — are referred to as 1000BASE-X.

Figure 12 shows the maximum distances supported by Gigabit Ethernet, using various media.

1000BASE-X Gigabit Ethernet is capable of auto-negotiation for half- and full-duplex operation. For full-duplex operation, auto-negotiation of flow control includes both the direction and symmetry of operation — symmetrical and asymmetrical.

IEEE 802.3ab Gigabit Ethernet — Copper Cabling
For Gigabit Ethernet over copper cabling, an IEEE Task Force started developing a specification in 1997. A very stable draft specification, with no significant technical changes, had been available since July 1998. This specification, known as IEEE 802.3ab, is now approved (as of June 1999) as an IEEE standard by the IEEE Standards Board.

The IEEE 802.3ab standard specifies the operation of Gigabit Ethernet over distances up to 100 m using 4-pair 100 ohm Category 5 balanced unshielded twisted pair copper cabling. This standard is also known as the 1000BASE-T specification; it allows deployment of Gigabit Ethernet in the wiring closets, and even to the desktops if needed, without change to the UTP-5 copper cabling that is installed in many buildings today.

Trunking and Link Aggregation
Trunking provides switch-to-switch connectivity for ATM and Gigabit Ethernet. Link Aggregation allows multiple parallel links between switches, or between a switch and a server, to provide greater resiliency and bandwidth. While switch-to-switch connectivity for ATM is well-defined through the NNI and PNNI specifications, several vendor-specific


Figure 12: Gigabit Ethernet Supported Distances.

                                     1000BASE-SX         1000BASE-LX         1000BASE-CX              1000BASE-T
IEEE Standard                        802.3z              802.3z              802.3z                   802.3ab
Data Rate                            1000 Mbps           1000 Mbps           1000 Mbps                1000 Mbps
Optical wavelength (nominal)         850 nm (shortwave)  1300 nm (longwave)  N/A                      N/A
Multimode fiber (50 µm) distance     525 m               550 m               N/A                      N/A
Multimode fiber (62.5 µm) distance   260 m               550 m               N/A                      N/A
Singlemode fiber (10 µm) distance    N/A                 3 km                N/A                      N/A
UTP-5 100 ohm distance               N/A                 N/A                 N/A                      100 m
STP 150 ohm distance                 N/A                 N/A                 25 m                     N/A
Number of wire pairs/fibers          2 fibers            2 fibers            2 pairs                  4 pairs
Connector type                       Duplex SC           Duplex SC           Fibre Channel-2 or DB-9  RJ-45

Note: distances are for full duplex, the expected mode of operation in most cases.

protocols are used for Gigabit Ethernet, with standards-based connectivity to be provided once the IEEE 802.3ad Link Aggregation standard is complete. Nortel Networks is actively involved in this standards effort, while providing highly resilient and higher bandwidth Multi-Link Trunking (MLT) and Gigabit LinkSafe technology in the interim.

ATM PNNI
ATM trunking is provided through NNI (Network Node Interface or Network-to-Network Interface) using the Private NNI (PNNI) v1.0 protocols, an ATM Forum specification approved in March 1996. To provide resiliency, load distribution and balancing, and scalability in bandwidth, multiple PNNI links may be installed between a pair of ATM switches. Depending on the implementation, these parallel links may be treated for Connection Admission Control (CAC) procedures as a single logical aggregated link. The individual links within a set of paralleled links may be any combination of the supported ATM speeds. As more bandwidth is needed, more PNNI links may be added between switches as necessary, without concern for the possibility of loops in the traffic path.

By using source routing to establish a path (VCC) between any source and destination end systems, PNNI automatically eliminates the forming of loops. The end-to-end path, computed at the ingress ATM switch using Generic Connection Admission Control (GCAC) procedures, is specified by a list of ATM nodes known as a Designated Transit List (DTL). Computation based on default parameters will result in the shortest path meeting the requirements, although preference may be given to certain paths by assigning lower Administrative Weight to preferred links. This DTL is then validated by local CAC procedures at each ATM node in the list. If an intervening node finds the path is invalid, perhaps as a result of topology or link state changes in the meantime, that node is able to automatically “crank” the list back to the ingress switch for recomputation of a new path. An ATM switch may perform path computation as a background task before calls are received (to reduce latency during call setups), or when a call request is received (for a real-time optimized path at the cost of some setup delay), or both (for certain QoS categories), depending on user configuration.

PNNI also provides performance scalability when routing traffic through an ATM network, using the hierarchical structure of ATM addresses. An individual ATM end system in a PNNI peer group can be reached using the summary address for that peer group, similar to using the network and subnet ID portions of an IP address. A node whose address does not match the summary address (the non-matching address is known as a foreign address) can be explicitly set to be reachable and advertised.
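The ingress-switch computation described above is, in essence, a shortest-path search over advertised link state with Administrative Weight as the cost, producing an ordered node list (the DTL). A sketch; the topology, node names, and weights are invented for illustration:

```python
# Sketch of a DTL computation: Dijkstra-style search where the link
# cost is the Administrative Weight, returning the ordered node list.

import heapq

def compute_dtl(links, src, dst):
    # links: {node: [(neighbor, admin_weight), ...]}
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path                      # the DTL: nodes in transit order
        if node in seen:
            continue
        seen.add(node)
        for nbr, weight in links.get(node, []):
            if nbr not in seen:
                heapq.heappush(heap, (cost + weight, nbr, path + [nbr]))
    return None                              # no path satisfies the request

topology = {
    "A": [("B", 10), ("C", 5)],
    "B": [("D", 10)],
    "C": [("D", 30)],                        # higher weight: less preferred
}
print(compute_dtl(topology, "A", "D"))       # ['A', 'B', 'D']
```

Lowering a link's Administrative Weight steers paths onto it, which is the preference mechanism the text mentions; a crank-back would simply re-run this computation with the failed link removed from the map.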



A Peer Group Leader (PGL) may represent the nodes in the peer group at a higher level. These PGLs are logical group nodes (LGNs) that form higher-level peer groups, which allow even shorter summary addresses. These higher-level peer groups can be represented in even higher peer groups, thus forming a hierarchy. By using this multi-level hierarchical routing, less address, topology, and link state information needs to be advertised across an ATM network, allowing scalability as the number of nodes grows.

However, this rich functionality comes with a price. PNNI requires memory, processing power, and bandwidth from the ATM switches for maintaining state information, topology and link state update exchanges, and path computation. PNNI also results in greater complexity in hardware design, software algorithms, switch configuration, deployment, and operational support, and ultimately much higher costs.

ATM UNI Uplinks versus NNI Risers
PNNI provides many benefits with regard to resiliency and scalability when connecting ATM switches in the campus backbone. However, these advantages are not available in most ATM installations, where the LAN switches in the wiring closets are connected to the backbone switches using ATM UNI uplinks. In such connections, the end stations attached to the LAN switch are associated, directly or indirectly (through VLANs), with specific proxy LECs located in the uplinks. An end station cannot be associated with more than one proxy LEC active in separate uplinks at any one time. Hence, no redundant path is available if the proxy LEC (meaning uplink or uplink path) representing the end stations should fail.

While it is possible to have one uplink active and another on standby, connected to the backbone via a different path and ready to take over in case of failure, very few ATM installations have implemented this design, for reasons of cost, complexity, and lack of this capability from the switch vendor.

One solution is provided by the Nortel Networks Centillion* 50/100 and System 5000BH/BHC LAN-ATM Switches. These switches provide Token-Ring and Ethernet end station connectivity on the one (desktop) side and “NNI riser uplinks” to the core ATM switches on the other (backbone) side. Because these “NNI risers” are PNNI uplinks, the LAN-to-ATM connectivity enjoys all the benefits of PNNI.

Gigabit Ethernet Link Aggregation
With Gigabit Ethernet, multiple physical links may be installed between two switches, or between a switch and a server, to provide greater bandwidth and resiliency. Typically, the IEEE 802.1d Spanning Tree Protocol (STP) is used to prevent loops forming between these parallel links, by blocking certain ports and forwarding on others so that there is only one path between any pair of source-destination end stations. In doing so, STP incurs some performance penalty when converging to a new spanning tree structure after a network topology change.

Although most switches are plug-and-play, with default STP parameters, erroneous configuration of these parameters can lead to looping, which is difficult to resolve. In addition, by blocking certain ports, STP will allow only one link of several parallel links between a pair of switches to carry traffic. Hence, scalability of bandwidth between switches cannot be increased by adding more parallel links as required, although resiliency is thus improved.

To overcome the deficiencies of STP, various vendor-specific capabilities are offered to increase the resiliency, load distribution and balancing, and scalability in bandwidth, for parallel links between Gigabit Ethernet switches.

For example, the Nortel Networks Passport Campus Solution offers Multi-Link Trunking and Gigabit Ethernet LinkSafe:

Multi-Link Trunking (MLT) allows up to four physical connections between two Passport 1000 Routing Switches, or


a BayStack* 450 Ethernet Switch and a Passport 1000 Routing Switch, to be grouped together as a single logical link with much greater resiliency and bandwidth than is possible with several individual connections.

Each MLT group may be made up of Ethernet, Fast Ethernet, or Gigabit Ethernet physical interfaces; all links within a group must be of the same media type (copper or fiber), have the same speed and half- or full-duplex settings, and belong to the same Spanning Tree group, although they need not be from the same interface module within a switch. Loads are automatically balanced across the MLT links, based on source and destination MAC addresses (bridged traffic) or source and destination IP addresses (routed traffic). Up to eight MLT groups may be configured in a Passport 1000 Routing Switch.

Gigabit Ethernet LinkSafe provides two Gigabit Ethernet ports on a Passport 1000 Routing Switch interface module to connect to another similar module on another switch, with one port active and the other on standby, ready to take over automatically should the active port or link fail. LinkSafe is used for riser and backbone connections, with each link routed through separate physical paths to provide a high degree of protection against a port or link failure.

An important capability is that virtual LANs (VLANs) distributed across multiple switches can be interconnected, with or without IEEE 802.1Q VLAN Tagging, using MLT and Gigabit Ethernet trunks.

With MLT and Gigabit Ethernet LinkSafe redundant trunking and link aggregation, the BayStack 450 Ethernet Switch and Passport 1000 Routing Switch provide a solution that is comparable to ATM PNNI in its resilience and incremental scalability, and superior in its simplicity.

IEEE P802.3ad Link Aggregation

In recognition of the need for open standards and interoperability, Nortel Networks actively leads in the IEEE P802.3ad Link Aggregation Task Force, authorized by the IEEE 802.3 Trunking Study Group in June 1998 to define a link aggregation standard for switch-to-switch and switch-to-server parallel connections. This standard is currently targeted for availability in early 2000.

IEEE P802.3ad Link Aggregation is an important full-duplex, point-to-point technology for the core LAN infrastructure and provides several benefits:

• Greater bandwidth capacity, allowing parallel links between two switches, or a switch and a server, to be aggregated together as a single logical pipe with multi-Gigabit capacity (if necessary); traffic is automatically distributed and balanced over this pipe for high performance.

• Incremental bandwidth scalability, allowing more links to be added between two switches, or a switch and a server, only when needed for greater performance, from a minimal initial hardware investment and with minimal disruption to the network.

• Greater resiliency and fault tolerance, where traffic is automatically reassigned to the remaining operative links, thus maintaining communication if individual links between two switches, or a switch and a server, fail.

• A flexible and simple migration vehicle, where Ethernet and Fast Ethernet switches at the LAN edges can have multiple lower-speed links aggregated to provide higher-bandwidth transport into the Gigabit Ethernet core.

A brief description of the IEEE P802.3ad Link Aggregation standard (which may change, as it is still fairly early in the standards process) follows.

A physical connection between two switches, or a switch and a server, is known as a link segment. Individual link segments of the same medium type and speed may make up a Link Aggregation Group (LAG), with a link segment belonging to only one LAG at any one time. Each LAG is associated with a single MAC address.
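The address-based balancing that MLT performs can be pictured as a simple deterministic hash: every frame of a given source/destination pair hashes to the same member link, which preserves frame ordering while spreading distinct pairs across the group. The function below is an illustrative sketch only, not the actual Passport hashing algorithm, which is vendor-specific.

```python
# Illustrative sketch of address-based load balancing across an
# aggregated link group (MLT/LAG). Not the actual Passport algorithm;
# real switches apply a vendor-specific hash over the same fields.

def select_link(src: str, dst: str, num_links: int) -> int:
    """Map a source/destination address pair to one member link.

    All frames of the same pair (MAC addresses for bridged traffic,
    IP addresses for routed traffic) return the same index, which
    preserves in-sequence delivery, while different pairs spread
    across the group.
    """
    key = hash((src.lower(), dst.lower()))
    return key % num_links

# Four-link MLT group: each address pair sticks to one link.
links = 4
a = select_link("00:00:5e:00:01:01", "00:00:5e:00:02:02", links)
b = select_link("00:00:5e:00:01:01", "00:00:5e:00:02:02", links)
assert a == b          # same pair -> same link, ordering preserved
assert 0 <= a < links  # always a valid member link
```

This per-pair stickiness is why a single large transfer between two stations cannot exceed the speed of one member link, even though the group's aggregate capacity is much higher.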



Frames that belong logically together (for example, to an application being used at a given instance, flowing in sequence between a pair of end stations) are treated as a conversation (similar to the concept of a "flow"). Individual conversations are aggregated to form an Aggregated Conversation, according to user-specified Conversation Aggregation Rules, which may specify aggregation on the basis of, for example, source/destination address pairs, VLAN ID, IP subnet, or protocol type. Frames that are part of a given conversation are transmitted on a single link segment within a LAG to ensure in-sequence delivery.

A Link Aggregation Control Protocol is used to exchange link configuration, capability, and state information between adjacent switches, with the objective of forming LAGs dynamically. A Flush protocol, similar to that in ATM LAN Emulation, is used to flush frames in transit when links are added to or removed from a LAG.

Among the objectives of the IEEE P802.3ad standard are automatic configuration, low protocol overheads, rapid and deterministic convergence when link states change, and accommodation of aggregation-unaware links.

Technology Complexity and Cost

Two of the most critical criteria in the technology decision are the complexity and cost of that technology. In both aspects, simple and inexpensive Gigabit Ethernet wins hands down over complex and expensive ATM — at least in enterprise networks.

ATM is fairly complex because it is a connection-oriented technology that has to emulate the operation of connectionless LANs. As a result, additional physical and logical components, connections, and protocols have to be added, with the attendant need for understanding, configuration, and operational support. Unlike Gigabit Ethernet (which is largely plug-and-play), there is a steep learning curve associated with ATM, in product development as well as product usage. ATM also suffers from a greater number of interoperability and compatibility issues than does Gigabit Ethernet, because of the different options vendors implement in their ATM products. Although interoperability testing does improve the situation, it also adds time and cost to ATM product development.

Because of this greater complexity, ATM also incurs greater costs in:

• Education and training
• Implementation and deployment
• Problem determination and resolution
• Ongoing operational support
• Test and analysis equipment, and other management tools.

Integration of Layer 3 and Above Functions

Both ATM and Gigabit Ethernet provide the underlying internetwork over which IP packets are transported. Although initially a Layer 2 technology, ATM functionality is creeping upwards in the OSI Reference Model. ATM Private Network Node Interface (PNNI) provides signaling and OSPF-like best-route determination when setting up the path from a source to a destination end system. Multiprotocol Over ATM (MPOA) allows short-cut routes to be established between two communicating ATM end systems located in different IP subnets, completely bypassing intervening routers along the path.

In contrast, Gigabit Ethernet is strictly a Layer 2 technology, with much of the other needed functionality added above it. To a large extent, this separation of functions is an advantage, because changes to one function do not disrupt another if there is clear modularity of functions. This decoupling was a key motivation in the original development of the 7-layer OSI Reference Model. In fact, the complexity of ATM may be due to its rich functionality all provided "in one hit," unlike the relative simplicity of Gigabit Ethernet, where higher-layer functionality is kept separate from, and added "one at a time" to, the basic Physical and Data Link functions.



MPOA and NHRP

A traditional router provides two basic Layer 3 functions: determining the best possible path to a destination using routing control protocols such as RIP and OSPF (the routing function), and then forwarding frames over that path (the forwarding function).

Multi-Protocol Over ATM (MPOA) enhances Layer 3 functionality over ATM in three ways:

• MPOA uses a Virtual Router model to provide greater performance scalability, by allowing the typically centralized routing control function to be divorced from the data frame forwarding function and distributing the forwarding function to access switches on the periphery of the network. This "separation of powers" allows routing capability and forwarding capability to be placed where each is most effective, and allows each to be scaled when needed without interference from the other.

• MPOA enables paths (known as short-cut VCCs) to be established directly between a source and its destination, without the hop-by-hop, frame-by-frame processing and forwarding that is necessary in traditional router networks. Intervening routers, which are potential performance bottlenecks, are completely bypassed, thereby enhancing forwarding performance.

• MPOA uses fewer resources in the form of VCCs. When traditional routers are used in an ATM network, one Data Direct VCC (DDVCC) must be established between a source end station and its gateway router, one DDVCC between a destination end station and its gateway router, and several DDVCCs between intervening routers along the path. With MPOA, only one DDVCC is needed between the source and destination end stations.

Gigabit Ethernet can leverage a similar capability for IP traffic using the Next Hop Resolution Protocol (NHRP). In fact, MPOA uses NHRP as part of the process to resolve MPOA destination addresses. MPOA Resolution Requests are converted to NHRP Resolution Requests by the ingress MPOA server before the requests are forwarded towards the intended destination. NHRP Resolution Responses received by the ingress MPOA server are converted to MPOA Resolution Responses before being forwarded to the requesting source. Just as MPOA shortcuts can be established in ATM networks, NHRP shortcuts can be established to provide the same performance enhancement in a frame-switched network.

Gateway Redundancy

For routing between subnets in an ATM or Gigabit Ethernet network, end stations typically are configured with the static IP address of a Layer 3 default gateway router. Because this gateway is a single point of failure, sometimes with catastrophic consequences, various techniques have been deployed to ensure that an alternate backs up the default gateway should it fail.

With ATM, redundant and distributed Layer 3 gateways are currently vendor-specific. Even if a standard should emerge, it is likely that more logical components, protocols, and connections will need to be implemented to provide redundant and/or distributed gateway functionality.

Virtual Router Redundancy Protocol

For Gigabit Ethernet, the IETF RFC 2338 Virtual Router Redundancy Protocol (VRRP) is available for deploying interoperable and highly resilient default gateway routers. VRRP allows a group of routers to provide redundant and distributed gateway functions to end stations through the mechanism of a virtual IP address — the address that is configured in end stations as the default gateway.

At any one time, the virtual IP address is mapped to one physical router, known as the Master. Should the Master fail, another router within the group is elected as the new Master with the same virtual IP address. The new Master automatically takes over as the default gateway, without requiring configuration changes in the end stations. In addition, each router may be Master for a set of end stations in one subnet while providing backup functions for another, thus distributing the load across multiple routers.
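The Master election and failover behavior of VRRP can be sketched in a few lines. Per RFC 2338, the highest-priority available router becomes Master (the owner of the virtual address uses priority 255); the sketch below models only that selection rule and omits the protocol's multicast advertisements, timers, and preemption details.

```python
# Minimal sketch of VRRP (RFC 2338) Master selection. Real VRRP runs
# on multicast advertisements and timers; this models only the rule
# "the highest-priority available router owns the virtual IP address".

from dataclasses import dataclass

@dataclass
class Router:
    name: str
    priority: int      # 1..254 for backups; 255 for the address owner
    up: bool = True

def elect_master(group):
    """Return the highest-priority router that is still up."""
    candidates = [r for r in group if r.up]
    if not candidates:
        raise RuntimeError("no gateway available for the virtual IP")
    return max(candidates, key=lambda r: r.priority)

# A virtual IP served by a two-router group.
group = [Router("gw-a", 255), Router("gw-b", 100)]
assert elect_master(group).name == "gw-a"   # address owner is Master

group[0].up = False                         # gw-a fails...
assert elect_master(group).name == "gw-b"   # ...gw-b takes over, and end
# stations keep using the same virtual IP as their default gateway.
```

Because the virtual address (and its associated virtual MAC) moves with the Master role, the failover is invisible to end stations, which is exactly the property the text describes.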



LAN Integration

The requirements of the LAN are very different from those of the WAN. In the LAN, bandwidth is practically "free" once installed, as there are no ongoing usage costs. As long as sufficient bandwidth capacity is provisioned (or even over-provisioned) to meet the demand, there may be no need for complex techniques to control bandwidth usage. If sufficient bandwidth exists to meet all demand, then complex traffic management and congestion control schemes may not be needed at all. For the user, other issues assume greater importance; these include ease of integration, manageability, flexibility (moves, adds, and changes), simplicity, scalability, and performance.

Seamless Integration

ATM has often been touted as the technology that provides seamless integration from the desktop, over the campus and enterprise, right through to the WAN and across the world. The same technology and protocols are used throughout, and deployment and ongoing operational support are much easier because of the opportunity to "learn once, do many." One important assumption in this scenario is that ATM would be widely deployed at the desktop. This assumption does not match reality.

ATM deployment at the desktop is almost negligible, while Ethernet and Fast Ethernet are very widely installed in millions of desktop workstations and servers. In fact, many PC vendors include Ethernet, Fast Ethernet, and (increasingly) Gigabit Ethernet NICs on the motherboards of their workstation and server offerings. Given this huge installed base and the common technology from which it evolved, Gigabit Ethernet provides seamless integration from the desktops to the campus and enterprise backbone networks.

If ATM were deployed as the campus backbone for all the Ethernet desktops, there would be a need for frame-to-cell and cell-to-frame conversion — the Segmentation and Reassembly (SAR) overhead. With Gigabit Ethernet in the campus backbone and Ethernet to the desktops, no cell-to-frame or frame-to-cell conversion is needed. Not even frame-to-frame conversion is required from one form of Ethernet to another! Hence, Gigabit Ethernet provides a more seamless integration in the LAN environment.

Broadcast and Multicast

Broadcasts and multicasts are very natural means of sending traffic from one source to multiple recipients in a connectionless LAN, and Gigabit Ethernet is designed for just such an environment. The higher-layer IP multicast address is easily mapped to a hardware MAC address. Using the Internet Group Management Protocol (IGMP), receiving end stations report group membership to (and respond to queries from) a multicast router, so as to receive multicast traffic from networks beyond the local attachment. Source end stations need not belong to a multicast group in order to send to members of that group.

By contrast, broadcasts and multicasts in an ATM LAN present a few challenges because of the connection-oriented nature of ATM.

In each emulated LAN (ELAN), ATM needs the services of a LAN Emulation Server (LES) and a Broadcast and Unknown Server (BUS) to translate from MAC addresses to ATM addresses. These additional components require additional resources and add the complexity needed to signal, set up, maintain, and tear down Control Direct, Control Distribute, Multicast Send, and Multicast Forward VCCs. Complexity is further increased because an ELAN can have only a single LES/BUS, which must be backed up by another LES/BUS to eliminate any single point of failure.
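For comparison, the IP multicast-to-MAC mapping that Gigabit Ethernet relies on (mentioned above) is a fixed bit operation with no servers or connections involved: the low-order 23 bits of the class D address are copied into the IANA-reserved MAC prefix 01:00:5E. A short sketch:

```python
# Sketch of the standard IPv4-to-Ethernet multicast address mapping:
# the low-order 23 bits of the class D (224.0.0.0/4) address are
# copied into the IANA-reserved MAC prefix 01:00:5E.

def multicast_ip_to_mac(ip: str) -> str:
    octets = [int(o) for o in ip.split(".")]
    assert 224 <= octets[0] <= 239, "not an IPv4 multicast address"
    return "01:00:5e:%02x:%02x:%02x" % (
        octets[1] & 0x7F,  # top bit of the 23 is dropped, hence the
        octets[2],         # well-known 32-to-1 address ambiguity
        octets[3],
    )

assert multicast_ip_to_mac("224.0.0.5") == "01:00:5e:00:00:05"
# 32 IP groups share one MAC address, so NICs may pass through some
# unwanted groups for IP to filter: these two collide, for example.
assert multicast_ip_to_mac("224.1.1.1") == multicast_ip_to_mac("225.129.1.1")
```

No resolution server is consulted and no connection is signaled; the frame is simply addressed and sent, which is the contrast the ATM discussion above is drawing.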



Communication between active and backup LES/BUS nodes requires more virtual connections and protocols for synchronization, failure detection, and takeover (SCSP and LNNI). And with all broadcast traffic going through the BUS, the BUS poses a potential bottleneck.

For IP multicasting in a LANE network, ATM needs the services of the BUS and, if available (with LUNI v2), an SMS. For IP multicasting in a Classical IP ATM network, ATM needs the services of a Multicast Address Resolution Server (MARS), a Multicast Connection Server (MCS), and the Cluster Control VCCs. These components require additional resources and complexity for connection signaling, set-up, maintenance, and tear-down.

With UNI 3.0/3.1, the source must first resolve the target multicast address to the ATM addresses of the group members, and then construct a point-to-multipoint tree, with the source itself as the root to the multiple destinations, before multicast traffic may be distributed. With UNI 4.0, end stations may join as leaves of a point-to-multipoint distribution tree, with or without intervention from the root. Issues of interoperability between the different UNI versions arise in either case.

Multi-LAN Integration

As a backbone technology, ATM can interconnect physical LAN segments using Ethernet, Fast Ethernet, Gigabit Ethernet, and Token Ring — the main MAC layer protocols in use on campus networks today. Using ATM as the common uplink technology and with translational bridging functionality, Ethernet and Token-Ring LANs can interoperate relatively easily.

With Gigabit Ethernet, interoperation between Ethernet and Token-Ring LANs requires translational bridges that transform the frame format of one type into the other.

MAN/WAN Integration

It is relatively easy to interconnect ATM campus backbones across the MAN or WAN. Most ATM switches are offered with DS1/E1, DS3/E3, SONET OC-3c/SDH STM-1, and SONET OC-12c/SDH STM-4 ATM interfaces that connect directly to ATM MAN or WAN facilities. Some switches are offered with DS1/E1 Circuit Emulation, DS1/E1 Inverse Multiplexing over ATM, and Frame Relay Network and Service Interworking capabilities that connect to existing non-ATM MAN or WAN facilities. All these interfaces give ATM campus switches direct connections to the MAN or WAN, without the need for additional devices at the LAN-WAN edge.

At this time, many Gigabit Ethernet switches do not offer MAN/WAN interfaces. Connecting Gigabit Ethernet campus networks across the MAN or WAN typically requires additional devices to access MAN/WAN facilities such as Frame Relay, leased lines, and even ATM networks. These interconnect devices are typically routers or other multiservice switches that add to the total complexity and cost. With the rapid acceptance of Gigabit Ethernet as the campus backbone of choice, however, many vendors are now offering MAN/WAN interfaces such as ATM SONET OC-3c/SDH STM-1, SONET OC-12c/SDH STM-4, and Packet-over-SONET/SDH in their Gigabit Ethernet switches.

While an ATM LAN does offer seamless integration with the ATM MAN or WAN through direct connectivity, the MAN/WAN for the most part will continue to be heterogeneous rather than homogeneous ATM, because of installed non-ATM equipment, geographical coverage, and the time needed to change. This situation will persist more so than in the LAN, where the enterprise has greater control and, therefore, greater ease of convergence. Even in the LAN, the convergence is towards Ethernet, not ATM, as the underlying technology. Technologies other than ATM will be needed for interconnecting locations, and even entire regions, because of difficult geographical terrain or uneconomic reach. Thus, there will continue to be a need for technology conversion from the LAN to the WAN, except where ATM has been implemented.



Another development — the widespread deployment of fiber optic technology — may enable the LAN to be extended over the WAN, using the seemingly boundless optical bandwidth for LAN traffic. This means that Gigabit Ethernet campuses can be extended across the WAN just as easily as, perhaps even more easily and at less cost than, ATM over the WAN. Among the possibilities are access to dark fiber with long-haul, extended-distance Gigabit Ethernet (50 km or more), Packet-over-SONET/SDH, and IP over Optical Dense Wave Division Multiplexing.

One simple yet powerful way of extending high performance Gigabit Ethernet campus networks across the WAN, especially in the metropolitan area, is the use of Packet-over-SONET/SDH (POS, also known as IP over SONET/SDH). SONET is emerging as a competitive service to ATM over the MAN/WAN. With POS, IP packets are directly encapsulated into SONET frames, thereby eliminating the additional overhead of the ATM layer (see column "C" in Figure 13).

To extend this a step further, IP packets can be transported over raw fiber without the overhead of SONET/SDH framing; this is called IP over Optical (see column "D" in Figure 13). Optical networking can transport very high volumes of data, voice, and video traffic over different light wavelengths.

The pattern of traffic has also been changing rapidly, with more than 80 percent of network traffic expected to traverse the MAN/WAN, versus only 20 percent remaining on the local campus. Given this changing traffic pattern and the emergence of IP as the dominant network protocol, the elimination of layers of communication for IP over the MAN/WAN means reduced bandwidth usage costs and greater application performance for users.

While all these technologies are evolving, businesses seek to minimize risks by investing in the lower-cost Gigabit Ethernet, rather than the higher-cost ATM.

Management Aspects

Because businesses need to be increasingly dynamic to respond to opportunities and challenges, the campus networking environment is constantly in a state of flux. There are continual moves, adds, and changes; users and workstations form and re-form workgroups; road warriors take the battle to the streets; and highly mobile users work from homes and hotels to increase productivity.

With all these constant changes, manageability of the campus network is a very important selection criterion. The more homogeneous and simpler the network elements, the easier they are to manage. Given the ubiquity of Ethernet and Fast Ethernet, Gigabit Ethernet integrates more seamlessly with existing network elements than ATM, and is therefore easier to manage. Gigabit Ethernet is also easier to manage because of its innate simplicity and the wealth of experience and tools available from its predecessor technologies.

Figure 13: Interconnection Technologies over the MAN/WAN.
(Each column lists a protocol stack, top layer first.)

(A) B-ISDN:            IP / ATM / SONET/SDH / Optical
(B) IP over ATM:       IP / ATM / Optical
(C) IP over SONET/SDH: IP / SONET/SDH / Optical
(D) IP over Optical:   IP / Optical
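The overhead that POS removes is easy to quantify. The back-of-the-envelope calculation below is a sketch based on the standard AAL5 encapsulation (an 8-byte trailer, padding to a 48-byte multiple, and a 5-byte header on each 53-byte cell), showing the "cell tax" the ATM layer adds to a typical 1500-byte IP packet.

```python
# Back-of-the-envelope "cell tax": bytes on the wire when an IP
# packet is carried over AAL5/ATM, per the standard encapsulation
# (8-byte AAL5 trailer, pad to a 48-byte multiple, 53-byte cells).

import math

def atm_wire_bytes(ip_packet: int) -> int:
    cells = math.ceil((ip_packet + 8) / 48)   # AAL5 trailer + padding
    return cells * 53                         # 5-byte header per cell

packet = 1500                                 # typical maximum IP packet
wire = atm_wire_bytes(packet)
overhead = (wire - packet) / packet

assert wire == 1696                           # 32 cells of 53 bytes
assert round(overhead * 100) == 13            # roughly a 13% cell tax
# POS frames the packet directly, avoiding this per-cell overhead
# (SONET path overhead still applies, but it is far smaller).
```

This is the layer eliminated when moving from column (A) or (B) to column (C) in Figure 13.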



By contrast, ATM is significantly different from the predominant Ethernet desktops it interconnects. Because of this difference and its relative newness, there are few tools and skills available to manage ATM network elements. ATM is also more difficult to manage because of the complexity of its logical components and connections, and the multitude of protocols needed to make ATM workable. On top of the physical network topology lie a number of logical layers, such as PNNI, LUNI, LNNI, MPOA, QoS, signaling, SVCs, PVCs, and soft PVCs. Logical components are more difficult to troubleshoot than physical elements when problems do occur.

Standards and Interoperability

Like all technologies, ATM and Gigabit Ethernet standards and functions mature and stabilize over time. Evolved from a common technology, frame-based Gigabit Ethernet backbones interoperate seamlessly with the millions of connectionless, frame-based Ethernet and Fast Ethernet desktops and servers in today's enterprise campus networks. By contrast, connection-oriented, cell-based ATM backbones need additional functions and capabilities that require standardization, and these can easily lead to interoperability issues.

ATM Standards

Although relatively new, ATM standards have been in development since 1984 as part of B-ISDN, designed to support private and public networks. Since the formation of the ATM Forum in 1991, many ATM specifications have been completed, especially between 1993 and 1996. Because of the fast pace of development efforts during this period, a stable environment was felt to be needed for consolidation, implementation, and interoperability. In April 1996, the Anchorage Accord agreed on a collection of some 60 ATM Forum specifications that provided a basis for stable implementation. Besides designating a set of foundational and expanded feature specifications, the Accord also established criteria to ensure interoperability of ATM products and services between current and future specifications. This Accord provided the assurance needed for the adoption of ATM and a checkpoint for further standards development. As of July 1999, there are more than 40 ATM Forum specifications in various stages of development.

To promote interoperability, the ATM Consortium was formed in October 1993, one of several consortiums at the University of New Hampshire InterOperability Lab (IOL). The ATM Consortium is a grouping of ATM product vendors interested in testing the interoperability and conformance of their ATM products in a cooperative atmosphere, without adverse competitive publicity.

Gigabit Ethernet Standards

By contrast, Gigabit Ethernet has evolved from the tried and trusted Ethernet and Fast Ethernet technologies, which have been in use for more than 20 years. Being relatively simple compared to ATM, much of the development was completed within a relatively short time. The Gigabit Ethernet Alliance, a group of networking vendors including Nortel Networks, promotes the development, demonstration, and interoperability of Gigabit Ethernet standards. Since its formation in 1996, the Alliance has been very successful in helping to introduce the IEEE 802.3z 1000BASE-X and IEEE 802.3ab 1000BASE-T Gigabit Ethernet standards.

Similar to the ATM Consortium, the Gigabit Ethernet Consortium was formed in April 1997 at the University of New Hampshire InterOperability Lab as a cooperative effort among Gigabit Ethernet product vendors. The objective of the Gigabit Ethernet Consortium is the ongoing testing of Gigabit Ethernet products and software from both an interoperability and a conformance perspective.



Passport Campus Solution

In response to the market requirements and demand for Ethernet, Fast Ethernet, and Gigabit Ethernet, Nortel Networks offers the Passport Campus Solution as the best-of-breed technology for campus access and backbone LANs. The Passport Campus Solution (see Figure 14) comprises the Passport 8000 Enterprise Switch, with its edge switching and routing capabilities, the Passport 1000 Routing Switch family, the Passport 700 Server Switch family, and the BayStack 450 Stackable Switch, complemented by Optivity Policy Services for policy-enabled networking.

The following highlights key features of the Passport 8000 Enterprise Switch, winner of the Best Network Hardware award at the 1999 Annual SI Impact Awards, sponsored by IDG's Solutions Integrator magazine:

• High port density, scalability, and performance:
  • Switch capacity of 50 Gbps, scalable to 256 Gbps
  • Aggregate throughput of 3 million packets per second
  • Less than 9 microseconds of latency
  • Up to 372 Ethernet 10/100BASE-T auto-sensing, auto-negotiating ports
  • Up to 160 Fast Ethernet 100BASE-FX ports
  • Up to 64 Gigabit Ethernet 1000BASE-SX or -LX ports
  • Wirespeed switching for Ethernet, Fast Ethernet, and Gigabit Ethernet

• High resiliency through Gigabit LinkSafe and Multi-Link Trunking

• High availability through fully distributed switching and management architectures, redundant and load-sharing power supplies and cooling fans, and the ability to hot-swap all modules

• Rich functionality through support of:
  • Port- and protocol-based VLANs for broadcast containment, logical workgroups, and easy moves, adds, and changes
  • IEEE 802.1Q VLAN Tagging for carrying traffic from multiple VLANs over a single trunk
  • IEEE 802.1p traffic prioritization for key business applications
  • IGMP, broadcast, and multicast rate limiting for efficient broadcast containment
  • Spanning Tree Protocol FastStart for faster network convergence and recovery
  • Remote Network Monitoring (RMON), port mirroring, and Remote Traffic Monitoring (RTM) for network management and problem determination

Figure 14: Passport Campus Solution and Optivity Policy Services. (The figure, not reproduced here, shows a campus network with a Passport 8000 Enterprise Switch core and server farm, Passport 1000 Routing Switches with OSPF and MLT resiliency, a Passport 700 Server Switch providing server redundancy and load balancing, BayStack 450 Ethernet Switches carrying voice, video, and data at the edge, Centillion 100 and System 5000BH Multi-LAN Switches, a System 390 mainframe server, BN Router redundant gateways to the WAN, and Optivity Policy Services management using Common Open Policy Services, Differentiated Services, IP Precedence/Type of Service, IEEE 802.1Q VLAN Tag, and IEEE 802.1p User Priority classification, with 10/100 Ethernet MLT and Gigabit Ethernet LinkSafe resiliency.)
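Two of the classification fields named above, the IEEE 802.1Q VLAN ID and the IEEE 802.1p user priority, travel together in a single 4-byte tag inserted into the Ethernet header. A sketch of the tag's layout, assuming a standard customer tag with TPID 0x8100:

```python
# Sketch of the 4-byte IEEE 802.1Q tag inserted after the source MAC:
# a 16-bit TPID of 0x8100, then 3 bits of 802.1p priority, a 1-bit
# CFI flag, and a 12-bit VLAN ID (the Tag Control Information).

import struct

def build_vlan_tag(priority: int, vlan_id: int, cfi: int = 0) -> bytes:
    assert 0 <= priority <= 7 and 0 <= vlan_id <= 4095
    tci = (priority << 13) | (cfi << 12) | vlan_id   # Tag Control Info
    return struct.pack("!HH", 0x8100, tci)           # TPID + TCI

# Priority 6 (e.g. a voice class) on VLAN 100:
tag = build_vlan_tag(priority=6, vlan_id=100)
assert tag == bytes([0x81, 0x00, 0xC0, 0x64])
```

Because the priority bits ride in the same tag as the VLAN ID, a single trunk can carry traffic from multiple VLANs while still letting each switch prioritize key applications per frame.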



For users with investments in Centillion 50/100 and System 5000BH LAN-ATM Switches, evolution to a Gigabit Ethernet environment will be possible once Gigabit Ethernet switch modules are offered in the future.

Information on the other award-winning members of the Passport Campus Solution is available on the Nortel Networks website: http://www.nortelnetworks.com

Conclusion and Recommendation

In enterprise networks, either ATM or Gigabit Ethernet may be deployed in the campus backbone. The key difference is the complexity and much higher cost of ATM, versus the simplicity and much lower cost of Gigabit Ethernet. While it may be argued that ATM is richer in functionality, pure technical consideration is only one of the decision criteria, albeit a very important one.

Of utmost importance is functionality that meets today's immediate needs at a realistic price. There is no point in paying for more functionality and complexity than is necessary, which may or may not ever be needed, and which may even be obsolete in the future. The rate of technology change and competitive pressures demand that the solution be available today, before the next paradigm shift, and before new solutions introduce another set of completely new challenges.

Gigabit Ethernet provides a pragmatic, viable, and relatively inexpensive (and therefore lower-risk) campus backbone solution that meets today's needs and integrates seamlessly with the omnipresent connectionless, frame-based Ethernet and Fast Ethernet LANs. Enhanced by routing switch technology such as the Nortel Networks Passport 8000 Enterprise Switch, and by the policy-enabled networking capabilities of Nortel Networks Optivity Policy Services, Gigabit Ethernet provides enterprise businesses with the bandwidth, functionality, scalability, and performance they need, at a much lower cost than ATM.

By contrast, ATM provides a campus backbone solution burdened by undue complexity, unused functionality, and a much higher cost of ownership in the enterprise LAN. Much of the complexity results from the multitude of additional components, protocols, and control and data connections required by connection-oriented, cell-based ATM to emulate broadcast-centric, connectionless, frame-based LANs. While Quality of Service (QoS) is an increasingly important requirement in enterprise networks, there are other solutions to the problem that are simpler, incremental, and less expensive.

For these reasons, Nortel Networks recommends Gigabit Ethernet as the technology of choice for most campus backbone LANs. ATM was, and continues to be, a good option where its unique and complex functionality can be exploited, for example in metropolitan and wide area network deployments. This recommendation is supported by many market research surveys showing that users overwhelmingly favor Gigabit Ethernet over ATM, including User Plans for High Performance LANs by Infonetics Research Inc. (March 1999) and Hub and Switch 5-Year Forecast by the Dell'Oro Group (July 1999).



For more sales and product information, please call 1-800-822-9638.
Author: Tony Tan, Portfolio Marketing, Commercial Marketing

United States
Nortel Networks
4401 Great America Parkway
Santa Clara, CA 95054
1-800-822-9638

Canada
Nortel Networks
8200 Dixie Road
Brampton, Ontario
L6T 5P6, Canada
1-800-466-7835

Europe, Middle East, and Africa
Nortel Networks
Les Cyclades - Immeuble Naxos
25 Allée Pierre Ziller
06560 Valbonne France
33-4-92-96-69-66

Asia Pacific
Nortel Networks
151 Lorong Chuan
#02-01 New Tech Park
Singapore 556741
65-287-2877

Caribbean and Latin America
Nortel Networks
1500 Concord Terrace
Sunrise, Florida
33323-2815 U.S.A.
954-851-8000

http:// www.nortelnetworks.com
*Nortel Networks, the Nortel Networks logo, the Globemark, How the World Shares Ideas, Unified Networks, BayStack, Centillion, Optivity,
and Passport are trademarks of Nortel Networks. All other trademarks are the property of their owners.
© 2000 Nortel Networks. All rights reserved. Information in this document is subject to change without notice.
Nortel Networks assumes no responsibility for any errors that may appear in this document. Printed in USA.

WP3740-B / 04-00
