
IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 15, NO. 5, JUNE 1997, p. 795

The MainStreetXpress Core Services Node—A Versatile ATM Switch Architecture for the Full Service Network
Erwin P. Rathgeb, Wolfgang Fischer, Member, IEEE,
Christian Hinterberger, Eugen Wallmeier, and Regina Wille-Fier

Abstract—Switching systems based on the ATM principle have outgrown the experimental stage, and are already used today in private and corporate networks, as well as in public wide area networks to provide regular service. ATM has the inherent ability to provide a common basis for transmission and switching functionality in both local and wide area networks. With the potential to support all services available today as well as those envisaged for the future, ATM holds a strong promise for network operators and end customers. To fully exploit this potential, ATM switch architectures are required which provide versatility and modularity in supporting services and protocols, independent scalability of data throughput and control performance over a wide range, and also reliability features adaptable to the respective application scenario. This paper describes in some detail how the MainStreetXpress core services node, which has evolved from the prototype described earlier [5] to a mature central office ATM switch, addresses these issues and provides a future-proof architecture incorporating all of the features required in the B-ISDN era.

Index Terms—Asynchronous transfer mode, broad-band communication, redundancy, switching systems.

Manuscript received May 1, 1996; revised December 1, 1996.
The authors are with the Public Communication Networks Group, Broadband Networks, Siemens AG, D-81359 München, Germany.
Publisher Item Identifier S 0733-8716(97)03378-7.
0733–8716/97$10.00 © 1997 IEEE

I. INTRODUCTION: NETWORK CONCEPTS

THE introduction of ATM in wide area networks is currently driven by four main application areas.
1) High-Speed Interconnection of Local Area Networks (LAN): For this application area, features for interworking with existing data networks and services, as well as mechanisms for efficient multiplexing of bursty traffic streams, have to be provided by the ATM multiservice network. To enhance the support of the ATM transport network for this application, a variety of mechanisms are currently being discussed and specified. Examples are LAN emulation (LANE [1]) and "multiprotocol over ATM" (MPOA [2]), as well as the "classical IP over ATM" approach [9].
2) Integrated Voice and Data Transport for New Alternative Service Providers: Since the transmission links connecting local networks of these providers will often have to be leased from established network providers, an integrated transport technology is required which allows us to optimally utilize these resources. With respect to ATM-based solutions, statistical multiplexing of packetized voice and data in combination with voice compression and intelligent call-routing features will be required.
3) Network Consolidation: Providers of existing wide-area networks introduce ATM as the basis for a future broad-band ISDN, which allows the consolidation of the existing networks in the long term. Such an application scenario requires intensive interworking with the existing narrow-band networks, with adaptation functions for the user channels and feature-transparent interworking between the signaling systems used.
4) Entertainment-Oriented Interactive Video and Multimedia Services for Residential Customers: In the medium term, this application area is expected to grow significantly. In this area, cost-effective solutions for the customer access are the key issue. In the core network, high-throughput ATM switches able to support the highly asymmetrical traffic streams in an efficient way are required to accommodate an adequate number of subscribers per switch.

If an ATM-based multiservice platform is targeted, the ATM systems used in wide area networks have to meet the full spectrum of requirements for the application areas mentioned above. In many cases, these requirements differ substantially from those defined for local area networks.
• Reliability and availability of broad-band wide area networks—which are public networks in many cases—are oriented towards the established values defined for the public telephone and leased line networks. These requirements imply that for the architecture of the systems which are used in these networks, redundancy of central subsystems and of interfaces is absolutely mandatory.
• The wide spectrum of applications of these systems—from small access nodes with a few interfaces and a throughput of less than 10 Gbit/s up to large central office switches with several thousands of interfaces and a throughput of much more than 100 Gbit/s—requires architectures which allow a high level of scalability. This scalability, however, relates not only to the data throughput of the ATM switching network, but also to the processing power of the switching processors used.
• Data and packetized voice with silence suppression would cause a huge waste of resources if they were transported through wide area networks employing peak bit-rate allocation. Statistical multiplexing of these traffic
types—which is state of the art in all existing data
networks—is a prerequisite for economic use of trans-
mission capacity also in ATM networks. When pro-
viding statistical multiplexing capabilities for real-time
services coded with variable bit rates (voice, video), no
large multiplexing buffers can be used in order not to
violate the stringent delay objectives. Therefore, only
connection admission procedures taking into account the
bursty nature of these traffic types can be applied to
achieve statistical gain without excessive delays. For less
delay sensitive (data) services, large buffers have to be
provided in the switching nodes, along with appropriate
non-FIFO scheduling strategies working at very high
speeds. These scheduling mechanisms have to ensure
fairness and isolation between traffic streams originating
from individual users, who—in contrast to the users
in local area networks—cannot be assumed to behave
cooperatively. In addition, delay priority has to be given
to the real-time services within these mechanisms. In
particular, the support of the new “available bit rate”
(ABR) service category—or “ATM transfer capability”
in ITU-T terms—and the “unspecified bit rate” (UBR)
service category defined by the ATM Forum requires
these large buffers and the corresponding scheduling and
priority mechanisms.
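The trade-off described above can be made concrete with a toy admission calculation. The sketch below is purely illustrative: the paper does not disclose the actual connection admission control algorithm, and the effective-bandwidth formula, rates, and link capacity are hypothetical values chosen only to contrast burst-aware admission with peak-rate allocation.

```python
# Hypothetical burst-aware connection admission control (CAC) sketch.
# Each connection is described by (peak, mean) rates in Mbit/s.

def effective_bandwidth(peak, mean, burstiness_weight=0.5):
    """Crude effective bandwidth between the mean and peak rate.
    The weighting is invented for illustration; real CAC schemes
    derive it from burst size, buffer space, and loss targets."""
    return mean + burstiness_weight * (peak - mean)

def admit(existing, new_conn, link_rate):
    """Admit the new connection only if the summed effective
    bandwidths still fit on the link, i.e., statistical gain
    instead of peak bit-rate allocation."""
    load = sum(effective_bandwidth(p, m) for (p, m) in existing)
    return load + effective_bandwidth(*new_conn) <= link_rate

link = 155.0                    # roughly one STM-1, in Mbit/s
conns = [(10.0, 2.0)] * 20      # 20 bursty sources: peak 10, mean 2

# Peak allocation would need 21 * 10 = 210 Mbit/s and reject the call;
# the burst-aware check admits it (effective load 120 + 6 <= 155).
print(admit(conns, (10.0, 2.0), link))   # True
```

Note that peak-rate allocation would saturate the link after 15 such connections, which is exactly the resource waste the text argues against.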
• In the foreseeable future, only a fairly small percentage of end systems is expected to provide ATM interfaces. Most terminals instead will be equipped with the interface types already in use today, e.g., PDH leased lines, Frame Relay, SMDS/CBDS, and X.25. In order to connect these systems with their service-specific interfaces to ATM networks, interface adaptation has to be performed by the ATM nodes. Beyond the pure interface adaptation, processing of the higher layer service-specific functions and protocols also has to be provided. In this context, the handling of Internet-specific functions (e.g., routing, address conversion, protocol handling, etc.) [6] is of primary importance because the efficient support of the IP-based Internet applications will be a prerequisite for the successful introduction and acceptance of ATM-based multiservice network infrastructures.

The following sections will provide insight into the architecture of the Siemens MainStreetXpress core services node which has been designed with all of the requirements mentioned above in mind.

Fig. 1. MainStreetXpress core services node architecture.

II. THE SYSTEM ARCHITECTURE

A. System Overview

The main components of the system shown in Fig. 1 are as follows:
• line interface circuits (LIC) of various types to connect external interfaces (subscriber lines, trunks) and to perform ATM layer and service-specific functions;
• the fully redundant ATM switching network (ASN) consisting of ATM multiplexers (AMX) and the ASN core to perform cell multiplexing, concentration, and cell switching functions;
• the scalable, fully redundant control cluster for central control and termination of signaling protocols;
• echo cancellation circuits (ECC) for 64 kbit/s voice connections;
• statistical multiplexing units (SMU) providing large cell buffers, non-FIFO cell scheduling, delay priority mechanisms, and support of ABR and UBR protocols to support statistical multiplexing for nonreal-time variable bit-rate connections.

The LIC's connected to one AMX are physically combined with this AMX in a so-called access unit (AU) shelf. The AU's as well as the LIC's are designed such that, in addition to a normal simplex configuration, line and board redundancy schemes also can be supported if required by the application scenario.

In between the functional blocks shown in Fig. 1, a common cell-based interface is used, allowing for a flexible configuration of the system, e.g.,
• all LIC types can be mixed freely within one AU shelf to provide optimum utilization of the bandwidth available at one AMX;

• ECC’s can be pooled together in separate shelves (ECC


cluster) or can be plugged into regular AU’s according to
the requirements of a specific configuration;
• the AMX can be omitted for high-rate interfaces (2.4
Gbit/s) not requiring multiplexing toward the ASN;
• the SMU can be omitted if it is not required for certain
interfaces only carrying real-time critical traffic.

B. ATM Switching Network (ASN)


The ASN consists of AMX’s physically located in the AU
shelves and of the central ASN core. The AMX’s provide a
multiplexing and concentration stage to allow for efficient use
of the ASN core. The function of the ASN is to switch cells
received from the ingress side LIC’s to the appropriate egress
side LIC’s, to specific server-type boards, e.g., ECC’s, or to
the control cluster by using the routing information contained
in the internal cell header.
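The routing-by-internal-header idea can be sketched schematically as follows. The tag layout and the fabric representation below are invented for illustration only and do not reflect the actual internal cell format, which the paper does not fully specify.

```python
# Schematic sketch of self-routing: the ingress side prepends a routing
# tag to each cell, and every switching stage forwards the cell using
# only its own entry of the tag (hypothetical three-stage example).

def make_tag(output_ports):
    """Encode the output port to take at each successive stage."""
    return list(output_ports)            # e.g. [3, 0, 1] for three stages

def route(cell, tag, fabric):
    """Walk the cell stage by stage; each stage consumes one tag entry.
    The fabric is modeled as nested dicts: stage -> port -> next hop."""
    node = fabric
    path = []
    for port in tag:
        path.append(port)
        node = node[port]
    return path, node                    # node is the reached egress LIC

fabric = {3: {0: {1: "egress-LIC-7"}}}   # the single path for this tag
path, egress = route("ATM cell", make_tag([3, 0, 1]), fabric)
print(path, egress)                       # [3, 0, 1] egress-LIC-7
```

Because the tag fully determines the path, no per-cell table lookups or path hunting are needed inside the fabric, which is the point of the self-routing principle discussed later in this section.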
The AMX is connected to the ASN core via unidirectional cables with a granularity of four STM-1 equivalents. Therefore, the AMX can be connected to the ASN core with a variable number of STM-1 equivalent links to provide a flexible concentration ratio. Due to the fact that the cabling is unidirectional, the interconnection between the AMX and ASN core can also be asymmetrical to support applications with highly asymmetrical bit-rate characteristics, e.g., for interactive video services.

The AMX includes a peripheral control platform (PCP) to control and supervise the switching functions, as well as a clock generation platform and power supply units. It should be noted here that the PCP is a universal processing platform consisting of a processor with all required processor periphery. All PCP software is stored in Flash-PROM memories to make the PCP fully software loadable, and thus to eliminate the need for physical access to the PCP for upgrades. Also, part of the PCP is an ASIC processing the lower levels of the internal control communication protocols to offload this dynamic load from the peripheral processor. The PCP with a common basic software is used on all LIC's, ECC's, SMU's, and in the AMX and ASN core, together with some specific software extensions for the different applications.

The ASN core is strictly nonblocking in the sense that—if a properly working connection acceptance control (CAC) function ensures that the destination output link is not overloaded—any cell stream arriving at an arbitrary ASN core input port can be routed to any arbitrary ASN core output port irrespective of the traffic at the other ports without cell losses exceeding the specified limits. This means that there are no additional restrictions nor is any additional bandwidth management necessary within the ASN core. The ASN core can be scaled from a minimum configuration with a total throughput of 5 Gbit/s up to configurations with more than 1 Tbit/s with the same switching network structure.

The ASN core as well as the AMX's are constructed by using switching elements (SE) which provide twice as many inputs as outputs. The ASN core has a "funnel" structure with several stages such that a single funnel represents a strictly nonblocking switch fabric module in which each stage halves the number of links toward the outputs. By connecting funnels in parallel to all the inputs, a quadratic switch fabric can be constructed. Fig. 2 shows as an example the structure of a 20 Gbit/s ASN core built up of switching elements (SE 32/16) with 32 inputs and 16 outputs, each running at 207 Mbit/s, and thus able to transport the traffic load of one external STM-1 interface per port.

Fig. 2. 20 Gbit/s ASN core.

The main properties of the funnel switch fabric are the following.
• It is strictly nonblocking, i.e., connections can be accepted, and the bit rate of any existing connection can be modified (increased) without any restriction as long as sufficient bandwidth is available at the respective input and output ports.
• It has only a single path between any input/output pair (single path network), i.e., no path hunting is necessary, and the cell sequence integrity is preserved.
• The performance is practically equivalent to the performance of a single-stage output-buffered switch (ideal switch) irrespective of the number of funnel stages.

The input/output ports of the ASN core can be grouped together and operated as bundles with a capacity of 2, 4, 8, or 16 STM-1 equivalents which are cyclically fed by a common queue. As the cells on the links of a bundle are transmitted staggered against each other, it can be guaranteed that the cell sequence within the bundle is preserved. By operating all groups of internal links connected to the same receiving device (switching element, LIC) as a single higher bit rate ATM pipe, any bandwidth loss due to fragmentation is avoided. Due to the fact that within the ASN core only bundles of 16 links are used (STM-16 equivalent capacity), interfaces and connections with bit rates up to 2.4 Gbit/s can be supported without any restrictions and without the need for resequencing. The SE's are based on an extension of the "Sigma" concept described in

[5] featuring a central buffer architecture with logical output queues for optimum performance. A block diagram of the Sigma SE is shown in Fig. 3. Cells received on any of the input links are converted to a parallel format after phase alignment. They are then stored in an empty location of the shared cell memory which is organized as random access memory (RAM). The internal routing tags are forwarded to a control block. For point-to-point connections, the complete routing information for the ASN is added to the cell at the ingress LIC, and the control block makes the routing decision based on the analysis of the relevant parts of the routing tag (self-routing principle).

Fig. 3. Switching element with Sigma architecture.

The SE's are also able to support ATM layer multicast connections by copying arriving cells and reemitting the copies on an arbitrary subset of their outputs. For multicast connections, a multicast routing address (MCRA) is added to the cell at the ingress LIC instead of the normal routing information. This identifier is used by the SE's to address an SE-specific multicast lookup table containing a bit map identifying the outputs to which copies of a specific cell have to be transmitted. With this multicast mechanism, there is no restriction to the fan-out of any multicast connection within the ASN.

Depending on the result of the evaluation of the routing tags, a pointer to the location of the cell in the central cell memory is queued in one or several logical output queues, which control the reemission order of the cells on the outputs. It should be mentioned that the cell itself is stored physically only once in the central cell buffer; only the relevant information is stored in several logical output queues for multicast connections. Once a cell copy has been transmitted on each destination output, the cell is physically removed from the buffer. If the SE outputs are operated in bundle mode, only one logical output queue per bundle is maintained.

With respect to the cell-routing function, the SE's operate in different modes depending on their location within the ASN. The SE's in the first stage of a funnel filter out the cells destined for the outputs connected to the respective funnel and multiplex them onto their outputs. The SE's in the intermediate funnel stages—as well as the SE's in the AMX's on the ingress side—only perform multiplexing, whereas the SE's in the last funnel stage and in the egress side AMX's perform the actual routing functions.

The SE's also support the CLP mechanism of the ATM cell header. The cell loss priorities are implemented by a partial buffer-sharing mechanism. With this mechanism, only high priority cells are admitted to the cell buffer once the fill level exceeds a configurable threshold. In addition, a fairness management mechanism is implemented. This mechanism allows one to specify and control a limit for the number of buffer spaces in the central cell memory which can be occupied by a single logical output queue. With this fairness mechanism, a single, highly loaded output queue can be prevented from occupying the complete shared buffer space.

Larger switch fabrics have the same basic funnel structure and are provided by using the next generation of switching elements (SE G16/8) with eight times higher throughput (16 inputs/8 outputs at 3.3 Gbit/s port data rate). These SE's are designed by using 0.5 μm technology and are already available. They provide a functionality equivalent to that of the SE 32/16, and allow us to realize, e.g., a 40 Gbit/s switch fabric on a single board. A 160 Gbit/s switching network can be built with a three-stage funnel equivalent to the structure shown in Fig. 2, whereby a fully redundant configuration fits into one standard rack. To avoid the limitations of the electrical cabling, optical links are used to interconnect the various ASN boards. By extending the structure to more than three stages with each additional stage doubling the switching network throughput, funnel structures with a throughput of more than 1 Tbit/s can be constructed with the SE G16/8 without reaching technological limits. The routing field of the internal cell format is already dimensioned to cope with the large-capacity switching networks. Therefore, in order to allow an upgrade of smaller systems, only specific multiplexing stages within the ASN core have to be provided to convert from the 207 Mbit/s internal interface format used in the periphery to the 3.3 Gbit/s format used in the large ASN core. For such an upgrade, only the ASN core has to be replaced; all peripheral components including AMX's and SMU's can be retained.

C. Line Interface Circuits (LIC's)

The LIC's provide connectivity of various external line interfaces to the system. The LIC types can vary in the interface bit rate (1.5 Mbit/s up to 2.4 Gbit/s) as well as in the services provided (ATM cell relay, 64 kbit/s, 1.5/2 Mbit/s circuit emulation service, Frame Relay, SMDS/CBDS). Depending on these characteristics, one or several external

interfaces are provided on one LIC board. For each interface type, the same hardware is used for UNI, VB5 interface, or NNI applications.

The external interfaces may support ATM traffic and also non-ATM traffic depending on the LIC type, whereas the internal interface toward the ASN is purely cell based. If the external line interface carries, e.g., CBR or Frame Relay traffic, the respective LIC provides interworking and the ATM adaptation layer (AAL) functionality for the corresponding service.

All LIC types have the common overall structure shown in Fig. 4, using an LIC providing both ATM cell relay and CBR circuit emulation as an example. For other services, basically the service-dependent interworking and AAL functions are adapted, whereas for pure cell relay interfaces, these parts are omitted.

Fig. 4. LIC providing ATM cell relay, CES, and n × 64 kbit/s service.

The LIC's are equipped with a PCP, an on-board power supply, and units to derive the required clocks from a centrally distributed system clock. They also provide the possibility to recover the transmission clock and feed it to the central system clock generator to synchronize the ATM node to the network clock if no separate clock signal is provided by the network. All LIC's can be operated in a nonredundant configuration, but if required, the same boards and shelves can be used to provide line/board redundancy as described in more detail in Section III. Common to all LIC types is also the ATM layer processing and the interface to the AMX and ASN. The physical layer (PHY) processing is adapted to the different transmission systems for the various interface types.

The PHY part provides both the PMD and the TC sublayer functions, which include handling of the physical characteristics of the external lines, framing of the transport signal, the PHY–OAM functions, and mapping of ATM cells to the transmission frame structure. The HEC mechanism, with on-the-fly HEC processing, is implemented for cell delineation and header verification as specified in the relevant ITU-T recommendation.

The ATM layer part provides the standardized functions like VPI/VCI translation, UPC/NPC, and ATM–OAM processing in full compliance with the relevant ITU-T and ATM Forum specifications. In addition, it provides the system-specific functions required for the internal operation of the switch by using an extended internal cell format between the ingress and the egress side.

In order to provide access to the full VPI/VCI range—while keeping the RAM for connection-specific parameters to a minimum size—a content-addressable memory (CAM) is used. This device is a full custom design ASIC integrating an address reduction function from a 32-bit input address (12-bit VPI, 16-bit VCI, 4-bit port number) to a 13-bit internal connection identifier. The address reduction is performed prior to the access to the connection-specific memory where, e.g., UPC/NPC parameters, new VPI/VCI values, ATM traffic measurement data, etc., are stored. The address reduction allows direct addressing of the connection RAM with 8192 entries, which is implemented by using fast SRAM or DRAM technology.

Full ATM layer multicast is also supported, providing both spatial and logical multicast as shown in Fig. 5. Spatial multicast refers to copying cells within the ATM switch, and reemitting the cell copies with individual VPI/VCI values onto different physical egress interfaces. When using logical multicast, the cell copies are reemitted onto the same physical interface but still carry individual VPI/VCI values. The latter function is required if network elements not supporting multicast are located in downstream direction.

Fig. 5. Multicast connections.

The copy function for spatial multicast in the ASN is performed by the switching elements as described in Section II-B. A similar spatial multicast operation is also performed on multiport egress LIC's. The individual header translation for the cell copies is performed on the egress LIC's. The header translation mechanism also allows us to copy cells several times and to attach individual headers to the copies to perform logical multicast.

All ATM layer operations on a cell are performed "on the fly," i.e., they are completed in less than one cell duration on the external link. The clock frequency of the ATM layer ASIC's is chosen such that sufficient free clock cycles are generated to be able to update the connection-specific table entries in case of connection setup or release without affecting the user data flow.

The ATM layer part is implemented by using a common set of semicustom ASIC's for all types of LIC's up to STM-1 interface bit rates. This chip set can support 8192 connections simultaneously. For the higher rate interfaces, specific ATM layer chip sets with higher data throughput and a higher number of simultaneous connections are provided. As the interface between the PHY and ATM layer as well as between the AAL and the ATM layer devices, the UTOPIA interface as specified by the ATM Forum is used.

D. Statistical Multiplexing Unit

The FIFO cell buffers integrated in the switching fabric and the LIC's are dimensioned to support real-time services which are either using constant bit rate (CBR) or real-time variable bit-rate (VBR-rt) connections. Significantly larger buffers do not provide additional benefit for CBR and VBR-rt connections since quality of service requirements concerning cell delay and cell delay variation do not allow for long queueing delays. For nonreal-time services with variable bit


rate, however, larger buffers can improve the achievable
network utilization significantly. It is a well-known fact (see
[10], [13], and [18]) that high link loads for bursty connections
with high peak cell rates (larger than a few percent of the link
rate) can only be achieved in wide area broad-band networks
if a network node is able to buffer multiple bursts for a link.
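This effect can be reproduced with a toy slotted-time simulation. The drop-tail FIFO model, burst pattern, and buffer sizes below are hypothetical, chosen only to make the comparison deterministic; they are not taken from the paper.

```python
# Toy slotted simulation: bursty sources multiplexed onto one link
# served at 1 cell per slot, with a drop-tail FIFO buffer.

def simulate(buffer_size, arrivals):
    """Return the number of cells lost for a given buffer size."""
    q = lost = 0
    for a in arrivals:
        q = max(0, q - 1)                     # serve one cell per slot
        accepted = min(a, buffer_size - q)    # drop-tail admission
        lost += a - accepted
        q += accepted
    return lost

# Two sources whose 100-cell bursts (1 cell/slot each) overlap
# completely, followed by an idle period long enough to drain.
arrivals = [2] * 100 + [0] * 400

print(simulate(20, arrivals))    # small FIFO buffer: 81 cells lost
print(simulate(200, arrivals))   # buffer holding multiple bursts: 0 lost
```

With the small buffer, the backlog of one overlapping burst already overflows the queue, whereas a buffer dimensioned for several bursts absorbs the overload completely, which is exactly the argument for the large SMU buffers introduced next.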
For the support of these services, the switch incorporates large cell buffers located in the so-called statistical multiplexing units (SMU). For these large buffers, pure FIFO queueing is not adequate [13]. Therefore, the SMU provides appropriate, non-FIFO cell-scheduling mechanisms and the additional traffic control functions necessary to operate a network including ATM switches with large cell buffers. Delay priority is given to the real-time connections passing through an SMU so that the large buffers are transparent for them. Due to the localization of all functions related to statistical multiplexing for nonreal-time services in the SMU's, the architecture is modular and flexible. For switches carrying only real-time traffic, the SMU's can be omitted.

With the SMU's, the following features can be provided.
• ABR, nonreal-time VBR, and UBR connections take advantage of the large cell buffers. CBR and real-time VBR traffic is routed through the SMU via a bypass.
• Peak cell rates of real-time or nonreal-time connections can be shaped at the ingress of the switch.
• The peak cell rate of a VP connection originated in the node can be shaped at the egress of the switch.
• Peak cell rates of nonreal-time connections can be reshaped at the egress of the switch.

Buffer Management: Fig. 6 gives an overview of the buffering architecture of the switch equipped with SMU's. Large cell buffers are provided in the ingress SMU and in the egress SMU. The buffers in the ingress SMU are required to protect the ASN core with its small buffers against temporary overload. A switch internal flow control called the "dynamic bandwidth allocation" (DBA) protocol [18] is used between the SMU's at the ingress and the egress sides. It ensures that the aggregate cell rate of the cell streams destined for the same egress port of the ASN core never exceeds the capacity of the port.

Fig. 6. Buffering architecture of the switch with SMU's.

While the cells of CBR and VBR-rt connections use a special bypass around the large buffers, the cells of all other connections can make use of them. The queueing is such that for each ABR, UBR, and nonreal-time VBR connection, an individual (logical) FIFO queue is reserved ("per-VC queueing"). Eight thousand queues are available per SMU. The use of per-VC queueing provides connection isolation, allows for simple congestion control, and significantly simplifies the implementation of early packet discard or partial packet discard (see [3] and [15]) schemes for ABR or UBR connections. Furthermore, it is a prerequisite for cell-scheduling strategies controlling service rates of individual connections.

The cell scheduling mechanisms of the SMU use a two-stage approach. Scheduler blocks, which are shown as a set of queues feeding one pipe in Fig. 6, are used to form groups of per-VC queues. Each scheduler block (containing an arbitrary number of connections) is served at a deterministic, configurable rate. The cell streams emitted by the different scheduler blocks are multiplexed together with the traffic coming from the real-time bypass on the egress link of the SMU. The FIFO queue at this multiplexing point can be kept small by ensuring that the sum of the scheduler block rates plus the bandwidth required for the real-time bypass is below the capacity of the egress link of the SMU. (In Fig. 6, the queue feeding the egress link of an SMU is not explicitly shown.)

A scheduler block can be operated in one of two modes. In the "WFQ" mode, the queues within a scheduler block share its deterministic service rate according to a "weighted fair queueing" (WFQ) strategy. WFQ provides fairness and

throughput guarantees for individual queues as required, e.g., for ABR traffic. WFQ was first proposed in [4], and has been further analyzed in [11] and [12]. The algorithms studied in these references have been designed for nodes handling variable-length packets and are difficult to implement. The SMU uses a simplified algorithm which is easier to implement and more suitable for ATM systems switching fixed size cells. For a discussion of further WFQ algorithms and the role of WFQ for ATM traffic management, the reader is referred to [14]. In order to shape the peak cell rates of individual connections, a scheduler block can be operated in the "rate-shaping" mode. In this mode, the service rate of each individual queue of the scheduler block is always kept below a queue-specific maximum value.

The scheduler blocks of the SMU's are used for different purposes at the ingress and egress side, respectively. In the egress SMU, a scheduler block is associated with a physical interface of the switch or with a virtual path originating in the node. In the simplest case, the rate of the scheduler block is defined as the rate of the associated interface, e.g., 45 Mbit/s for a scheduler block feeding a DS3 interface.

In the ingress SMU, one scheduler block is provided per egress SMU in the system which contains the FIFO queues for all nonreal-time connections routed via this specific egress SMU. The service rate of the scheduler block defines the capacity of an ATM pipe interconnecting the ingress SMU with the corresponding egress SMU.

The Dynamic Bandwidth Allocation Protocol: As shown in Fig. 6, the DBA protocol controls the capacity (cell rate) of the pipes which interconnect each ingress/egress SMU pair. The DBA protocol is a distributed resource management mechanism whereby each SMU is responsible for its own local resources, i.e., DBA does not involve any central functions in the switch. The DBA procedures use a cell-based messaging protocol to exchange bandwidth allocation information between ingress and egress SMU's.

Each ingress SMU has to detect a backlog of cells for any pipe to an egress SMU, and interprets the backlog as an indication that the currently allocated rate for the specific pipe should be increased. This causes a DBA control cell to be sent to the appropriate egress SMU requesting an increase in the path's allocated bandwidth. However, the ingress SMU only requests additional bandwidth from an egress SMU if it

One of the advantages of a cell-based flow control protocol like DBA is its independence from the ASN core realization. DBA can be used with any nonblocking self-routing switching fabric. There are three basic types of messages passed across the core fabric between ingress and egress SMU's.
• Request and Cleardown messages are generated by the ingress SMU's in response to changes in the fill state of their scheduler blocks.
• The Offer message is generated by egress SMU's when bandwidth cleared down by one SMU can be offered to another one.

In [17], the performance of the DBA protocol has been studied. For high loads, the cell delays in a switch using input and output buffering together with the DBA protocol approach those experienced in an ideal output buffered switch. For very low loads, the delays are slightly higher than for the ideal switch due to the fact that it takes some time before a backlog of cells at the ingress is detected and the bandwidth of a pipe is increased.

E. Control Concept

The main challenge for the control concept of broad-band systems is to meet the requirements of future applications and traffic mixes, where little relevant experience exists today. Other than in narrow-band switches, the applications are very heterogeneous and have significantly different traffic profiles. For example, for video on demand, the bandwidth required per connection is relatively high (1.5–6 Mbit/s), and long holding times (60 min and more) occur quite often. Narrow-band backbone trunking with switching of individual connections, on the other hand, requires low connection bandwidth (64 kbit/s or less), but due to the—on average—shorter holding times and the large number of individual connections, the required call-processing power is dramatically higher than in the first case, while the required switch fabric throughput is significantly lower. As a consequence, ATM switches need a scalable processing platform with flexible task assignment and distributed application software to be future-proof. Moreover, the processing power has to be scalable independently of the size of the switch fabric and the number of external interfaces.

For the MainStreetXpress core node, a distributed control platform concept has been developed [8]. The main functions
has bandwidth available at its own output (bottleneck at point for call processing and operation/maintenance are performed
) to send data into the ASN core. by a cluster of central control processors, the MP’s (main
Upon receiving a request cell, the egress SMU will grant processors). Starting with one processor, the processing power
the request only if the port bandwidth at point in Fig. 6 of the cluster can be easily extended by adding additional MP’s
is not already fully allocated. If an egress SMU has allocated according to the requirements.
all its inlet bandwidth, any further requests for bandwidth are The peripheral control platforms (PCP’s) located on the
queued until an ingress SMU has emptied its logical buffer various modules (LIC’s, ECC’s, AMX’s, SMU’s, ASN core)
assigned to this egress SMU, and sends a clear down message support the central processors by performing local, mainly
to release bandwidth no longer required. It is the purpose of hardware-related tasks like maintaining the connection-specific
the large input buffers to avoid cell loss in this situation. tables on the LIC’s as well as local maintenance and fault de-
This method ensures that only those cells are admitted to the tection tasks. The communication among the central processors
ASN core that have a prebooked exit from it in order to avoid and between the central and the peripheral processors is ATM
excessive cell losses in the buffers of the switching elements. based, and uses the ASN communication infrastructure already
On the other hand, the latency of the control is low enough to available. The internal control communication protocol uses a
guarantee a full utilization of the switch fabric. standard AAL. The AAL functions as well as real-time critical

Fig. 7. Control cluster.

operations of the higher protocol layers are supported by specific communication ASIC’s to offload these tasks from the actual controllers. The internal control communication services on the main processor (MP) and the peripheral controllers support location transparency, i.e., applications which want to use a certain software service (e.g., digit analysis) do not need to know on which MP the service is offered; they just ask for the service type, and the control communication system directs the request to the right MP. The support of location transparency allows us to move software services from one MP to the other without any impact on the software structure.

For the flexible assignment of tasks to MP’s, four basic load types have been defined (see Fig. 7).

• Stand Alone (SA): The Stand Alone MP (MP–SA) provides the Q3 interface toward the management system and interfaces for external data storage devices. All administered data and code are stored on disk for security. The MP–SA is also the “maintenance parent,” i.e., it controls start-up of the system and the recovery escalation in case of errors and failures. Usually, there is one MP–SA per node; for very large switches, it may be necessary to provide more than one MP with SA load type. The major performance requirement for the MP–SA, apart from start-up and recovery, is the processing of all measurement and billing data collected in the system and its periodic delivery to an operation center.

• Signaling Link Termination (SLT): The MP–SLT terminates all external signaling links, and provides the low-level protocol handling, e.g., the signaling adaptation layer (SAAL) for broad-band UNI and NNI and the message transfer part (MTP) for narrow-band CCS7 links. UNI and NNI signaling links are assigned by administration to an MP–SLT. If performance measurements indicate that an MP–SLT is permanently overloaded, signaling links may be rearranged to another MP–SLT. Depending on the configuration, there can be multiple MP–SLT’s for a node.

• Signaling Manager (SM): The MP–SM performs the network management tasks for the CCS7 network, e.g., handling of CCS7 link states. Usually, there is only one MP–SM per node.

• Call Processing (CALLP): The Call Processing MP (CallP–MP) handles the higher layers of the signaling protocols (e.g., DSS2 Q.2931, B-ISUP, N-ISUP), performs connection admission control (CAC), and provides digit analysis and translation. User and access resources are assigned to a CallP–MP by means of administration, allowing again for rearrangements due to performance requirements. The number of CallP–MP’s grows with the overall connection-handling capacity requirements.

The load types allow us to configure a switch exactly for the processing capacity required. Moreover, the processing of a call is usually distributed over several MP’s: the MP–SLT’s terminate the incoming/outgoing signaling link, and the CallP–MP’s perform the connection admission control for the A-side and B-side resources (e.g., bandwidth, VPI/VCI). Digit translation and routing can be performed on any CallP–MP; for a specific call, they are usually performed by the A-side CallP–MP. As the ASN core itself is nonblocking and has the single-path property, there is no need for a central path search. Therefore, there is no MP which has to be addressed for every connection and which could create a bottleneck in the system.

For an efficient software implementation, it is important to identify generic functions which may be used by all applications. For the MainStreetXpress core node, three software layers have been defined as shown in Fig. 8. These layers are described below in more detail.

Fig. 8. Software architecture of MainStreetXpress core services node.

Software Platform: The software platform comprises the microcore software, the operating system (OS), and the internal communication services. The OS provides basic services like scheduling, memory management, and internal processor communication. For interprocessor communication, a specific internal transport protocol (ITP) is used. OS and ITP provide the basic support for location transparency by service addressing. For ITP and external protocols (e.g., SAAL), time-critical functions, e.g., the keep-alive functions for the SAAL and a confirmed delivery for ITP, are supported by a dedicated ASIC on the MP.
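The location-transparent service addressing described above can be illustrated with a small sketch. This is a toy model, not code from the switch software: the names (ControlCluster, MP, the "digit-analysis" service type) are invented for illustration. It shows the essential property, namely that callers address a service type rather than a processor, so a service can be moved between MP’s without any change on the caller’s side.

```python
# Illustrative model of location-transparent service addressing.
# All identifiers are invented for this sketch; they do not appear in
# the actual MainStreetXpress software.

class MP:
    """A main processor offering software services."""
    def __init__(self, name):
        self.name = name

    def handle(self, service_type, payload):
        # A real MP would execute the service; here we just report
        # which processor served the request.
        return self.name, service_type, payload


class ControlCluster:
    """Directs requests by service type, not by processor address."""
    def __init__(self):
        self.registry = {}  # service type -> MPs currently offering it

    def register(self, mp, service_type):
        self.registry.setdefault(service_type, []).append(mp)

    def move_service(self, service_type, old_mp, new_mp):
        # Relocating a service changes only the registry; callers are
        # unaffected because they never name an MP directly.
        mps = self.registry[service_type]
        mps[mps.index(old_mp)] = new_mp

    def request(self, service_type, payload):
        # Round-robin over the MPs offering the requested service type.
        mps = self.registry[service_type]
        mp = mps.pop(0)
        mps.append(mp)
        return mp.handle(service_type, payload)


cluster = ControlCluster()
mp1, mp2 = MP("MP-1"), MP("MP-2")
cluster.register(mp1, "digit-analysis")
served_by, _, _ = cluster.request("digit-analysis", "dialed digits")
cluster.move_service("digit-analysis", mp1, mp2)  # transparent to callers
```

After `move_service`, subsequent requests for "digit-analysis" are served by MP-2, while the requesting application code is unchanged.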

Application Software Platform: The application software platform comprises the following generic services which are used by the various applications of a broad-band switch.

• Operation Administration and Maintenance (OAM) Services: These comprise the Q3 communication services and the file system. Applications use the Q3 communication services for the processing of Q3 commands and for the output of statistics/billing data to the operation center. Generic Q3 support functions are event forwarding, notification logging, security functions, and alarming. The Q3 protocol stack comprises CMIP/CMISE, FTP, and TCP/IP. The file system is used for the storage of administered data or log files for billing/statistics data on disk. Access to the file system and Q3 communication services is performed through a common API (application programming interface).

• Physical Switching Server: The physical switching server is responsible for actually setting up the paths through the ASN. For each connection setup request, the application specifies the endpoints of a path. The physical switching server determines the modules involved and formulates the switching commands required to load the connection-specific tables in the LIC’s and, for multicast connections only, in the switching elements. Thus, the physical switching server hides the details of the switching fabric from the applications.

• Database Management System (DBMS): Applications use DBMS to store and access semipermanent and transient data. DBMS is a distributed database system, i.e., data may be located on any MP in the system. To provide efficient access and storage, data may be partitioned or replicated. For example, subscriber data are partitioned among the CallP–MP’s, i.e., a specific CallP–MP keeps the data of a particular subscriber. Routing data for digit translation are replicated, i.e., the same data are kept on each MP. The location of the data on a certain MP is completely hidden from the application. DBMS offers generic access routines (e.g., GET, CREATE, SET, DELETE) and Q3 operations (e.g., scoping, filtering), and provides concurrency and transaction control to guarantee consistent data access. The implementation approach for DBMS is data driven, which means that all information on data is kept in a data dictionary (DDI).

• Upgrade: The MainStreetXpress core node allows for the online upgrade of the system software without loss of service. During upgrade, the redundant central control hardware is used to switch all system functions from the old to the new software version. The upgrade procedure is controlled by generic platform software.

• Load Control: The load state of the MP’s is constantly monitored. In case of overload, appropriate measures are taken by the load control, for example, new call requests are rejected. If an MP gets into overload states frequently, a reassignment of a service may be initiated. For example, CCS7 links could be assigned to another or new MP–SLT.

• Maintenance: Performs all hardware-related maintenance activities such as fault processing, diagnostics, reconfiguration, and routine testing of an MP. To allow for a fault-tolerant software implementation, special software error treatment and a recovery strategy with several escalation steps are supported.

• System Utilities: Online debug utilities are provided for testing and fault evaluation purposes. The main online utilities are patch, trace, and dump.

Applications: The applications implement the specific services of the broad-band switch visible to the user. Call-processing software controls the setup and release of calls and connections. Signaling interfaces for the NNI (e.g., CCS7) and the UNI (e.g., DSS2) are supported. With respect to the implementation, supplementary features are separated from the basic call handling. Hence, the realization of new call-processing features has a minimal impact on the existing software. For billing, the relevant information including, e.g., the cell counts is provided in a format suitable for processing at an external billing center. Statistics are collected for performance management (e.g., quality of connections), traffic measurements, and load measurements for the MP. Administration is provided for a number of applications, e.g., call processing, CCS7 network, system configuration, and ATM layer OAM functions (loop back, continuity check, etc.). Both the narrow-band CCS7 (MTP, ISUP) and the B-ISDN (SAAL, MTP level 3, B-ISUP) signaling protocols and the required interworking functions are implemented. Together with the packetization/depacketization for the user traffic on the LIC’s and the additional echo cancelation circuits, this allows for interworking with the narrow-band ISDN.

III. REDUNDANCY PRINCIPLES

To be able to meet the availability and reliability requirements for a central office switch, various redundancy concepts are implemented in the MainStreetXpress core node, including:

• full redundancy for all central components, i.e., for the ASN, the SMU’s, and the central control;
• optional redundancy for external lines and LIC boards;
• pool redundancy for ECC’s.

In general, the application classes for redundancy are defined as follows.

“1+1”: The traffic protected by the redundancy is carried via one working and one protection entity simultaneously. The receiver terminating the redundancy has to select cells from one or the other entity to be able to forward one consistent traffic stream.

“1 : 1”: The traffic protected by the redundancy is only carried by the protection entity if a failure occurs on the working entity. One protection entity protects one working entity. During normal conditions, the protection entity can be used for other applications, e.g., to transport extra traffic, which is dropped if a failure on the working line occurs.

“1 : N”: The traffic protected by the redundancy is only carried by the protection entity if a failure occurs on one of the working entities. One protection entity protects N working entities.

A. The ASN Redundancy Principle

The ASN is fully duplicated in two redundant planes operating according to a 1+1 redundancy scheme. To support this scheme, a connection-specific sequence number as well as a frame check sequence (FCS) are added to every cell at the ingress side LIC. Then each cell is duplicated, and one copy is transmitted through each plane. Due to the fact that the planes are not synchronized, the two copies of a cell will, in general, experience slightly different queueing delays within the ASN. On the egress side LIC which receives the cell streams from both ASN planes, a “redundant path combination” (RPC) mechanism decides, based on the sequence number and the result of the FCS mechanism, which one of the copies will be forwarded. If the first cell copy arriving carries a valid FCS, it is forwarded to the outgoing link. The other cell copy will be recognized based on the sequence number and will be discarded. In case the FCS mechanism detects bit errors in the faster cell copy, this one is discarded, and the one arriving later is forwarded if at least the cell header is free of bit errors. The main advantages of this redundancy scheme are as follows.

• A redundancy on connection (VPC, VCC) level is provided where single failures in the ASN will not disturb existing connections at all. Multiple failures will have no impact on a particular connection if at least one path in either plane is still available.
• No complex switchover mechanisms are necessary in case of failures.
• No cross channels for synchronization of the two ASN planes are required.
• A complete plane can be taken out of service for maintenance or upgrade without any impact on existing connections, i.e., the upgrade of the ASN can be performed without service interruption.
• The combination of the internal FCS mechanisms and the RPC mechanism makes the MainStreetXpress core node virtually error-free with respect to sporadic bit errors since the probability that both copies of one cell are corrupted is extremely small.

B. Line and Board (LIC) Redundancy

Network reliability can be improved significantly by providing redundant external lines between two network elements. The LIC’s support “1+1” line protection switching for high bit-rate SDH/SONET interfaces. With the same hardware, “1 : 1” line protection switching can also be offered with the option to carry low-priority traffic (which is dropped in case of a failure) on the protection line.

The line protection switching is implemented according to ITU-T G.841 (SDH) and according to Bellcore TR-NWT-000253 (SONET), allowing the nodes to terminate redundant lines coming from standard transmission equipment. To control the switchover, the multiplex section/line overhead bytes are used to perform the “multiplex section protection” (MSP, G.783) protocol for SDH interfaces and the similar “automatic protection switching” (APS) protocol according to Bellcore TR-NWT-000253 for SONET interfaces.

Fig. 9. LIC protection switch (LPS) in SE.

To meet the port down-time requirements, board redundancy is offered for the LIC’s in a “1 : 1” configuration for high-rate PDH interfaces and SDH/SONET interfaces. In this “1 : 1” application class, one LIC board is active and another board is in standby mode. For E1/DS1 interfaces, a “1 : N” redundancy configuration is supported, i.e., N active LIC’s are protected by one standby LIC. In the case of external line redundancy, the redundant lines are connected to different LIC’s. If one of these LIC’s fails, the same protection switching mechanism is used as in the case of a line failure.

For the support of line and board redundancy, the means are provided to direct the cell streams to the currently active units. In the direction from the ASN toward the LIC, an LIC protection switch (LPS) is implemented in the switching elements (SE’s) of the egress AMX (see Fig. 9).

The LPS can redirect the traffic from the logical output buffers of the SE to any physical output of the SE without changing the routing information contained in the extended cell header. The LPS can be set individually for each output of the SE by the AMX control as:

• point-to-point (e.g., applied for “1 : 1” line protection);
• point-to-2-point (e.g., applied for “1+1” line protection);
• no connection, which leads to cell discard, e.g., applied for the low-priority traffic of a “1 : 1” line protection if a failure on the working line/LIC occurs.

In the direction from the external interface toward the ASN, a line selection function connects the external interface to the currently active LIC in case of board redundancy; for external line redundancy, this selection is not necessary. Depending on the interface type (electrical, optical), fiber splitters, p-i-n diode switches, relays, or a spare bus are used to provide the line selection.

In all “1 : 1” and “1+1” redundancy cases, the connection-specific tables on the spare LIC are already preloaded with the same information as those in the active LIC. Thus, a short switchover time of less than 50 ms can be achieved. In a “1 : 1” line redundancy application with low-priority traffic, both routing data sets (of working and protection line) are preloaded in the tables of the protection LIC.

free channel out of this pool can be freely selected by the application software. If, e.g., due to hardware failures, single echo cancellation channels or complete ECC’s have to be taken out of service, sufficient spare capability is provided, which can then be allocated to new connections.

D. Redundancy of Central Control

An MP consists of two physical processors with microsynchronous operation. Due to this kind of operation, both sides always have the same information about transient states, and the redundancy is completely transparent to the applications software. If one side fails, the other side takes over immediately. The identification of the faulty unit is done by means of the built-in self-test capabilities of the processors. Besides this hardware redundancy, there is a logical redundancy on the service level for the central control platform. For replicated services, any MP offering the service can replace a faulty MP without impact on the applications. For partitioned services, the service may be reassigned to another MP.

IV. CONCLUSION

This paper has provided an insight into the architecture of the MainStreetXpress core node which has been designed for central office applications in the core of an ATM network. The highlights of this architecture are the following:

• the scalability, both in terms of throughput and processing power, making it possible to build configurations ranging from small single-shelf solutions up to very large central office switches;
• the high availability and reliability which are achieved by using redundancy mechanisms in all central components of the system core as well as in the interface units;
• effective support of traffic management mechanisms which lead to high link utilization for bursty data traffic, and which are confined within specific SMU modules, so that these complex mechanisms do not have to be treated in every part of the system;
• a software architecture which allows us to introduce feature upgrades without a major impact on existing software, and to upgrade the system without service interruption.

Finally, the use of a number of system-specific ASIC’s provides, at the same time, optimum performance, high reliability, and all the features required, without compromises with respect to the architecture.

REFERENCES

[1] ATM Forum Tech. Committee, “LAN emulation over ATM,” V 1.0 Specification, Jan. 1995.
[2] ATM Forum Tech. Committee, “Baseline text for MPOA,” source: Multiprotocol Sub-Working Group, Mar. 1996.
[3] G. Boiocchi, P. Crocetti, L. Fratta, M. Gerla, and M. A. Marsiglia, “ATM connectionless server: Performance evaluation,” in Proc. IFIP Workshop TC 6 Modeling and Performance Eval. of ATM Technol., La Martinique, Jan. 1993.
[4] A. Demers, S. Keshav, and S. Shenker, “Analysis and simulation of a fair queuing algorithm,” in Proc. ACM SIGCOMM’89.
[5] W. Fischer, O. Fundneider, E.-H. Göldner, and K. A. Lutz, “A scalable ATM switching system architecture,” IEEE J. Select. Areas Commun., vol. 9, pp. 1299–1307, Oct. 1991.
[6] W. Fischer, T. Rambold, and T. Theimer, “Internet and ATM—Prospects for the Internet of the future as it integrates ATM capabilities,” Telcom Rep. Int., vol. 19, 1996.
[7] W. Fischer, E. Wallmeier, T. Worster, S. Davis, and A. Hayter, “Data communications using ATM—Architectures, protocols and resource management,” IEEE Commun. Mag., vol. 32, Aug. 1994.
[8] C. H. Hoogendoorn and A. T. Maher, “Enhanced SW architecture for an ATM universal communication node,” in Proc. ISS’92, Yokohama, Japan, vol. 1, paper C4.7.
[9] IETF, RFC 1577, “Classical IP and ARP over ATM,” Jan. 1994.
[10] ITU-T, Draft Recommendation E.73X, “Methods for traffic control in B-ISDN,” Geneva, Switzerland, Dec. 1994.
[11] A. K. Parekh and R. G. Gallager, “A generalized processor sharing approach to flow control in integrated service networks: The single node case,” IEEE/ACM Trans. Networking, vol. 1, pp. 344–357, June 1993.
[12] A. K. Parekh and R. G. Gallager, “A generalized processor sharing approach to flow control in integrated service networks: The multiple node case,” IEEE/ACM Trans. Networking, vol. 2, pp. 137–150, Apr. 1994.
[13] J. W. Roberts, “Variable bit rate traffic control in B-ISDN,” IEEE Commun. Mag., vol. 29, pp. 50–56, Sept. 1991.
[14] J. W. Roberts, “Virtual spacing for flexible traffic control,” Int. J. Commun. Syst., vol. 7, pp. 307–318, 1994.
[15] A. Romanow and S. Floyd, “Dynamics of TCP traffic over ATM networks,” IEEE J. Select. Areas Commun., vol. 13, no. 4, pp. 633–641, 1995.
[16] E. Wallmeier, “A connection acceptance algorithm for ATM networks based on mean and peak bit rates,” Int. J. Digital and Analogue Commun. Syst., vol. 3, pp. 143–153, 1990.
[17] E. Wallmeier, S. Schneeberger, T. Worster, and S. Davis, “Traffic control in ATM switches with large buffers,” in Proc. 9th ITC Specialists Seminar, KPN Res., Leidschendam, The Netherlands, Nov. 1995.
[18] T. Worster, W. Fischer, S. P. Davis, and A. T. Hayter, “Buffering and flow control for statistical multiplexing in an ATM switch,” in Proc. ISS’95, 1995.

Erwin P. Rathgeb was born in Ulm, Germany, in 1958. He received the Dipl.-Ing. and Dr.-Ing. (Ph.D.) degrees in electrical engineering from the University of Stuttgart, Germany, in 1985 and 1991, respectively.

From 1985 to 1990, he was a member of the Scientific Staff at the Institute of Communications Switching and Data Techniques, University of Stuttgart, where he was head of a research group on the design and analysis of distributed systems. From 1990 to 1991, he was a Member of the Technical Staff within Applied Research at Bellcore, Morristown, NJ, before joining Bosch Telecom, Backnang, Germany. In 1993, he joined the Public Communication Networks Group of Siemens AG, Munich, Germany. There, he contributed to the definition of the overall system architecture of the MainStreetXpress core node. Currently he is involved in product planning for broad-band communication systems.

Wolfgang Fischer (S’87–M’89) received the Dipl.-Ing. degree in electrical engineering in 1983 and the Dr.-Ing. (Ph.D.) degree in 1989, both from the University of Stuttgart, Germany.

From 1983 to 1988, he was a member of the Scientific Staff at the Institute of Communications Switching and Data Techniques, University of Stuttgart. During that period, he was responsible for the analysis of ISDN protocols both in terms of performance and formal correctness. In 1989, he held the position of Research Associate at INRS Telecommunications, Montreal, Canada, where he was involved in the performance analysis of ATM systems. Since 1990, he has been with the Public Communication Networks Group of Siemens AG, Munich, Germany, where he has held various positions dealing with systems engineering of ATM communications systems. Currently, he is responsible for the product line management of broad-band communications systems.

Christian Hinterberger was born in Burgkirchen, Germany, in 1961. He studied electrical engineering at the Technical University of Munich, where he received the Dipl.-Ing. degree in 1986.

In 1987, he joined the Siemens Research and Development Department, where he was involved in studies on ATM switching architectures. In 1989, he joined the Siemens ATM System Development Group. There, he was engaged in the development of ATM switching systems and ATM ASIC’s. From 1994 to 1996, he was with the ATM System Engineering Group. Since 1997, he has been with the Siemens Semiconductor Department, responsible for product definition of ATM layer and ATM adaptation layer ASIC’s.

Eugen Wallmeier received the diploma degree and the Dr. rer. nat. (Ph.D.) degree in mathematics in 1980 and 1983, respectively, from the University of Muenster, Germany.

In 1980, he became a member of the Scientific Staff at the Institute for Mathematical Statistics, University of Muenster. In 1985, he joined the Public Communications Networks Group of Siemens AG, Munich, Germany, where he has been engaged in software development for an X.25 data processor and performance analysis of telecommunication systems. Since 1992, he has been supervising a group working on ATM traffic and performance issues, and on the design of traffic control functions for ATM switches.

Dr. Wallmeier is a member of VDE.

Regina Wille-Fier was born in Darmstadt, Germany, on April 25, 1956. She studied mathematics at the Universities of Tübingen and Münster and received the Dr. rer. nat. (Ph.D.) degree for her work in the area of functional analysis.

In 1985, she joined the Research Laboratories of the Public Communications Networks Group of Siemens AG, Munich, Germany, where she was working on software architectures for ATM switches with decentralized control. She participated in the development of the concepts for switching network control, call processing, and internal communication for the MainStreetXpress core node. Currently, she is Director of Systems Engineering for broad-band communications systems.