
Advanced Peer-to-Peer Networking (APPN)

Advanced Peer-to-Peer Networking (APPN) is a group of protocols for setting up or configuring program-to-program communication

within an IBM SNA network. Using APPN, a group of computers can be automatically configured by one of the computers acting as a network controller, so that peer programs in various computers are able to communicate with each other using specified network routing.

Advanced Peer-to-Peer Networking (APPN) is an enhancement to the original IBM SNA architecture. APPN, which includes a group of protocols and processes, handles session establishment between peer nodes, dynamic transparent route calculation, and traffic prioritization.

In 1981, IBM began to introduce communication standards that developed into a peer-oriented network architecture called Advanced Peer-to-Peer Networking (APPN). The development of APPN marks a significant change from the traditional top-down hierarchical SNA model, because APPN supports a form of distributed processing: all computers on an APPN network can communicate directly with each other, without having to depend on centralized type 5 hosts or type 4 communications controllers. This model provides an environment that is more flexible than the traditional top-down hierarchical one. APPN defines how peer-oriented components communicate with each other, as well as the level of network services, such as routing sessions, supplied by each computer on the network. APPN features include:
- Better distributed network control; because the organization is peer-to-peer rather than solely hierarchical, terminal failures can be isolated
- Dynamic peer-to-peer exchange of information about network topology, which enables easier connections, reconfigurations, and routing
- Dynamic definition of available network resources
- Automation of resource registration and directory lookup
- Flexibility, which allows APPN to be used in any type of network topology

How Dynamic Configuration Works

APPN works with Advanced Program-to-Program Communication (APPC) software that defines how programs will communicate with each other through two interfaces: one that responds to requests from application programs that want to communicate, and one that exchanges information with communications hardware. When one program wants to communicate with another, it sends out a request (called an allocate call) that includes the destination's logical unit (LU) name, the name that uniquely identifies the APPC program on each computer. APPC sets up a session between the originating and destination LUs.

APPN network nodes are differentiated as low entry networking (LEN) nodes, end nodes (ENs), and network nodes (NNs). When the network computers are powered on and the software activated, links are established throughout the specified topology, and the linked nodes exchange information automatically. If we consider a simplified APPN network, with one end node connected to a network node, the following sequence of events takes place:

- Each node indicates APPN capability and defines its node type.
- The network node asks the end node if it requires a network node server, which handles requests for LU locations.
- If it responds that it does, the two nodes establish APPC sessions to exchange program-to-program information.
- The end node registers any other LUs defined at its node by sending the network node formatted information gathered from the APPC session.
- After this sequence is completed, the network node knows the location of the EN and what LUs are located there. This information, multiplied across the network, enables LU location and routing.
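The registration sequence above can be sketched as a toy directory exchange. The class and method names below are illustrative only, not part of the APPN specification, and the XID handshake is elided:

```python
# Toy sketch of APPN end-node registration with a network node server.
# Class and method names are invented for illustration.

class NetworkNode:
    def __init__(self, name):
        self.name = name
        self.directory = {}        # LU name -> owning end node

    def register(self, en_name, lus):
        # The EN sends formatted LU information over an APPC session;
        # the NN records which LUs live at which end node.
        for lu in lus:
            self.directory[lu] = en_name

    def locate(self, lu):
        # Returns the node that owns the LU, or None if unknown.
        return self.directory.get(lu)

class EndNode:
    def __init__(self, name, lus):
        self.name = name
        self.lus = lus

    def attach(self, nn):
        # XID exchange and "do you need an NN server?" handshake elided.
        nn.register(self.name, self.lus)

nn = NetworkNode("NN1")
en = EndNode("EN1", ["LUA", "LUB"])
en.attach(nn)
print(nn.locate("LUA"))   # -> EN1
```

Once every EN has attached, the NN can answer LU-location requests for the whole periphery it serves, which is what enables the network-wide search described below.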

Note that APPN has nothing to do with controversial peer-to-peer file sharing software such as Kazaa or Napster. The designation peer-to-peer in the case of APPN refers to its independence from a central point of control, similar to the way that a FireWire PC connection allows a video camera to talk directly to a disk drive on the FireWire network.

APPN evolved to include a more efficient data routing layer called High Performance Routing (HPR). HPR was made available across a range of enterprise networking products in the late 1990s, but today is typically used only within IBM's z/OS environments as a replacement for legacy SNA networks. It remains widely used within UDP tunnels; this technology is known as Enterprise Extender.

An APPN network is composed of three types of APPN node:
- Low Entry Networking (LEN) Node - An APPN LEN node provides peer-to-peer connectivity with all other APPN nodes.
- End Node - An End Node is similar to a LEN node in that it participates at the periphery of an APPN network, but it also includes a Control Point (CP) for network control information exchange with an adjacent network node.
- Network Node - The backbone of an APPN network is composed of one or more Network Nodes, which provide network services to attached LEN and End Nodes.

The APPN network has the following major functional processes:

Connectivity - The first phase of operation in an APPN network is to establish a physical link between two nodes. Once a link has been established, the capabilities of the two attached nodes are exchanged using XIDs. At this point, the newly attached node is integrated into the network.

Location of a Targeted LU - Information about the resources (currently only LUs) within the network is maintained in a database which is distributed across the End and Network Nodes in the network. End Nodes hold a directory of their local LUs. If the remote LU is found in the directory, a directed search message is sent across the network to the remote machine to ensure that the LU has not moved since it was last used or registered. If the local search is unsuccessful, a broadcast search is initiated across the network. When the node containing the remote LU receives a directed or broadcast search message, it sends back a positive response; a negative response is sent back if a directed or broadcast search fails to find the remote LU.

Route Selection - When a remote LU has been located, the originating Network Node server calculates the best route across the network for a session between the two LUs. Every Network Node in the APPN network backbone maintains a replicated topology database. This is used to calculate the best route for a particular session, based on the required class of service for that session. The class of service specifies acceptable values for session parameters such as propagation delay, throughput, cost, and security. The route chosen by the originating Network Node server is encoded in a route selection control vector (RSCV).

Session Initiation - A BIND is used to establish the session. The RSCV describing the session route is appended to the BIND, and the BIND traverses the network following this route. Each intermediate node puts in place a session connector for that session, which links the incoming and outgoing paths for data on the session.

Data Transfer - Session data follows the path of the session connectors set up by the initial BIND. Adaptive pacing is used between each node on the route. The session connectors on each intermediate node are also responsible for segmentation and segment assembly when the incoming and outgoing links support different segment sizes.

Dependent LU Requestor - Dependent LUs require a host-based System Services Control Point (SSCP) for LU-LU session initiation and management. This means that dependent LUs must be directly attached to a host via a single data link.

High Performance Routing (HPR) - HPR is an extension to the APPN architecture. It can be implemented on an APPN network node or an APPN end node, and does not change the basic functions of the architecture. HPR has the following key functions:
- Improves the performance of APPN routing by taking advantage of high-speed, reliable links
- Improves data throughput by using a new rate-based congestion control mechanism
- Supports nondisruptive rerouting of sessions around failed links or nodes
- Reduces the storage and buffering required in intermediate nodes
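Route selection from a replicated topology database can be sketched with a standard shortest-path computation. The topology and link costs below are invented; a single cost figure stands in for the class-of-service parameters (propagation delay, throughput, cost, security) that a real Network Node server would weigh:

```python
import heapq

# Replicated topology database: node -> {neighbor: link cost}.
# A real NN derives link weights from class-of-service parameters;
# here one invented number per link stands in.
topology = {
    "NN1": {"NN2": 1, "NN3": 4},
    "NN2": {"NN1": 1, "NN3": 1},
    "NN3": {"NN1": 4, "NN2": 1},
}

def select_route(src, dst):
    """Dijkstra over the topology database; returns the route as a list
    of nodes, analogous to the RSCV built by the originating NN server."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in topology[node].items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return None   # no route: analogous to a failed search

print(select_route("NN1", "NN3"))   # -> ['NN1', 'NN2', 'NN3']
```

Note that the direct NN1-NN3 link (cost 4) loses to the two-hop route through NN2 (cost 2), which is exactly the kind of decision the class-of-service weighting drives.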

The SNA APPN model defines its own standards for the following components:

Hardware Components or Nodes: Hardware that provides the computing platforms and network devices that implement specific SNA APPN communications and management functions.

Connection Types: Hardware and communication standards that provide the data communication paths between components in an SNA APPN network.

Physical Units (PUs): Hardware and software that provide the configuration support and control of the SNA APPN network devices, connections, and protocols.

Logical Units (LUs): Protocols that provide a standardized format for delivery of data for specific applications, such as terminal access and printing.

Note: Although the SNA APPN network model is organized into the same component classes as the hierarchical SNA network model, the components themselves are often quite different from the components used in the hierarchical model.

Asynchronous Transfer Mode (ATM)

Asynchronous Transfer Mode (ATM) is a network technology based on transferring data in cells or packets of a fixed size; it is a technology that has the potential of revolutionizing data communications and telecommunications. The cell used with ATM is relatively small compared to units used with older technologies. The small, constant cell size allows ATM equipment to transmit video, audio, and computer data over the same network, and ensures that no single type of data hogs the line.

ATM creates a fixed channel, or route, between two points whenever data transfer begins. This differs from TCP/IP, in which messages are divided into packets and each packet can take a different route from source to destination. This difference makes it easier to track and bill data usage across an ATM network, but it makes ATM less adaptable to sudden surges in network traffic.

ATM has functional similarity with both circuit-switched networking and small-packet-switched networking. This makes it a good choice for a network that must handle both traditional high-throughput data traffic (e.g., file transfers) and real-time, low-latency content such as voice and video. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the actual data exchange begins.

ATM is a core protocol used over the SONET/SDH backbone of the Integrated Services Digital Network (ISDN). Because ATM is designed to be easily implemented by hardware (rather than software), faster processing and switch speeds are possible. The prespecified bit rates are either 155.520 Mbps or 622.080 Mbps, and speeds on ATM networks can reach 10 Gbps. Along with Synchronous Optical Network (SONET) and several other technologies, ATM is a key component of broadband ISDN (BISDN).

ATM offers four different types of service:

- constant bit rate (CBR): specifies a fixed bit rate so that data is sent in a steady stream. This is analogous to a leased line.
- variable bit rate (VBR): provides a specified throughput capacity, but data is not sent evenly. This is a popular choice for voice and videoconferencing data.
- available bit rate (ABR): provides a guaranteed minimum capacity but allows data to be bursted at higher capacities when the network is free.
- unspecified bit rate (UBR): does not guarantee any throughput levels. This is used for applications, such as file transfer, that can tolerate delays.

Traffic shaping

Traffic shaping usually takes place at the entry point to an ATM network and attempts to ensure that the cell flow will meet its traffic contract.

Traffic policing

To maintain network performance, networks may police virtual circuits against their traffic contracts. If a circuit is exceeding its traffic contract, the network can either drop the cells or mark the Cell Loss Priority (CLP) bit (to identify a cell as discardable farther down the line). Basic policing works on a cell-by-cell basis, but this is sub-optimal for encapsulated packet traffic (as discarding a single cell will invalidate the whole packet). As a result, schemes such as Partial Packet Discard (PPD) and Early Packet Discard (EPD) have been created that will discard a whole series of cells until the next frame starts. This reduces the number of useless cells in the network, saving bandwidth for full frames. EPD and PPD work with AAL5 connections, as they use the frame end bit to detect the end of packets.

Why cells?

Consider a speech signal reduced to packets, and forced to share a link with bursty data traffic (traffic with some large data packets). No matter how small the speech packets could be made, they would always encounter full-size data packets, and under normal queuing conditions, might experience maximum queuing delays. That is why all packets, or "cells," should have the same small size. In addition, the fixed cell structure means that ATM can be readily switched by hardware without the inherent delays introduced by software-switched and routed frames. Thus, the designers of ATM utilized small data cells to reduce jitter (delay variance, in this case) in the multiplexing of data streams. Reduction of jitter (and also end-to-end round-trip delays) is particularly important when carrying voice traffic, because the conversion of digitized voice into an analog audio signal is an inherently real-time process, and to do a good job, the codec that does this needs an evenly spaced (in time) stream of data items. If the next data item is not available when it is needed, the codec has no choice but to produce silence or guess; and if the data is late, it is useless, because the time period when it should have been converted to a signal has already passed.

At the 155 Mbit/s rate for which ATM was originally designed, a typical full-length 1500 byte (12000-bit) data packet would take 77.42 μs to transmit. On a lower-speed link, such as a 1.544 Mbit/s T1 line, a 1500 byte packet would take up to 7.8 milliseconds. A queuing delay induced by several such data packets might exceed the figure of 7.8 ms several times over, in addition to any packet generation delay in the shorter speech packet. This was clearly unacceptable for speech traffic, which needs to have low jitter in the data stream being fed into the codec if it is to produce good-quality sound. A packet voice system can achieve this in a number of ways:
- Have a playback buffer between the network and the codec, one large enough to tide the codec over almost all the jitter in the data. This allows smoothing out the jitter, but the delay introduced by passage through the buffer would require echo cancellers even in local networks; this was considered too expensive at the time. Also, it would have increased the delay across the channel, and conversation is difficult over high-delay channels.
- Build a system which can inherently provide low jitter (and minimal overall delay) to traffic which needs it.
- Operate on a 1:1 user basis (i.e., a dedicated pipe).
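The serialization-delay figures quoted above can be checked directly; the formula is simply packet bits divided by link rate (the 77.42 μs figure corresponds to a 155 Mbit/s link):

```python
def serialization_delay_ms(packet_bytes, link_bps):
    """Time to clock one packet onto the wire, in milliseconds."""
    return packet_bytes * 8 / link_bps * 1000

# A 1500-byte packet on a 1.544 Mbit/s T1 link:
print(round(serialization_delay_ms(1500, 1.544e6), 1))        # -> 7.8 (ms)

# The same packet at 155 Mbit/s, converted to microseconds:
print(round(serialization_delay_ms(1500, 155e6) * 1000, 2))   # -> 77.42 (us)

# A 53-byte ATM cell on the same T1 link: a far smaller worst-case wait
# for a voice cell stuck behind it in the queue.
print(round(serialization_delay_ms(53, 1.544e6), 2))          # -> 0.27 (ms)
```

The last figure is the point of the cell-based design: on a slow link, a voice cell waits at most a fraction of a millisecond behind another cell, rather than nearly 8 ms behind a full-size data packet.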

The design of ATM aimed for a low-jitter network interface. However, to be able to provide short queueing delays but also carry large datagrams, it had to have cells. ATM broke up all packets, data, and voice streams into 48-byte chunks, adding a 5-byte routing header to each one so that they could be reassembled later.

The choice of 48 bytes was political rather than technical.[3] When the CCITT was standardizing ATM, parties from the United States wanted a 64-byte payload, because this was felt to be a good compromise between larger payloads optimized for data transmission and shorter payloads optimized for real-time applications like voice; parties from Europe wanted 32-byte payloads, because the small size (and therefore short transmission times) simplifies voice applications with respect to echo cancellation. Most of the European parties eventually came around to the arguments made by the Americans, but France and a few others held out for a shorter cell length: with 32 bytes, France would have been able to implement an ATM-based voice network with calls from one end of France to the other requiring no echo cancellation. 48 bytes (plus 5 header bytes = 53) was chosen as a compromise between the two sides. 5-byte headers were chosen because it was thought that 10% of the payload was the maximum price to pay for routing information. ATM multiplexed these 53-byte cells instead of packets, which reduced the worst-case jitter due to cell contention by a factor of almost 30, minimizing the need for echo cancellers.

Cells in practice

ATM supports different types of services via ATM Adaptation Layers (AAL). Standardized AALs include AAL1, AAL2, and AAL5, and the rarely used AAL3 and AAL4. AAL1 is used for constant bit rate (CBR) services and circuit emulation; synchronization is also maintained at AAL1. AAL2 through AAL4 are used for variable bit rate (VBR) services, and AAL5 for data. Which AAL is in use for a given cell is not encoded in the cell; instead, it is negotiated by or configured at the endpoints on a per-virtual-connection basis. Following the initial design of ATM, networks have become much faster.
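The segmentation into 48-byte chunks with a 5-byte header can be sketched as follows. The header here is a fixed placeholder, not a real VPI/VCI encoding, and the padding bookkeeping that a real AAL carries is elided:

```python
def segment(packet: bytes, header: bytes = b"HDR45") -> list:
    """Split a packet into 53-byte ATM cells: 5-byte header + 48-byte payload.
    The last payload is zero-padded to 48 bytes; real AALs carry length and
    padding information so the receiver can strip it, which is elided here."""
    assert len(header) == 5
    cells = []
    for i in range(0, len(packet), 48):
        payload = packet[i:i + 48].ljust(48, b"\x00")
        cells.append(header + payload)
    return cells

cells = segment(b"x" * 1500)
print(len(cells))        # -> 32 cells for a 1500-byte packet
print(len(cells[0]))     # -> 53
```

The ~10% header overhead mentioned above is visible directly: 5 header bytes per 48 payload bytes, on every one of the 32 cells.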
A 1500 byte (12000-bit) full-size Ethernet packet takes only 1.2 μs to transmit on a 10 Gbit/s optical network, reducing the need for small cells to reduce jitter due to contention. Some consider that this makes a case for replacing ATM with Ethernet in the network backbone. However, the increased link speeds by themselves do not alleviate jitter due to queuing. Additionally, the hardware for implementing the service adaptation for IP packets is expensive at very high speeds.

Why virtual circuits?

ATM operates as a channel-based transport layer, using virtual circuits (VCs). This is encompassed in the concept of the Virtual Path (VP) and Virtual Channel. Every ATM cell has an 8- or 12-bit Virtual Path Identifier (VPI) and a 16-bit Virtual Channel Identifier (VCI) pair defined in its header. Together, these identify the virtual circuit used by the connection. The length of the VPI varies according to whether the cell is sent on the user-network interface (on the edge of the network) or on the network-network interface (inside the network).

As these cells traverse an ATM network, switching takes place by changing the VPI/VCI values (label swapping). Although the VPI/VCI values are not necessarily consistent from one end of the connection to the other, the concept of a circuit is consistent (unlike IP, where any given packet could get to its destination by a different route than the others). Another advantage of the use of virtual circuits is the ability to use them as a multiplexing layer, allowing different services (such as voice, Frame Relay, n×64 channels, and IP) to share a single link.
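Label swapping at each switch amounts to a simple table lookup keyed on the incoming port and VPI/VCI pair. The table entries below are invented for illustration; real entries are installed during connection setup:

```python
# Per-switch label-swapping table, populated at connection setup:
# (in_port, in_vpi, in_vci) -> (out_port, out_vpi, out_vci)
switch_table = {
    (1, 10, 100): (3, 22, 200),
    (2, 10, 100): (3, 22, 201),   # same labels on another port: a different circuit
}

def forward(in_port, vpi, vci):
    """Look up the incoming VPI/VCI, rewrite the labels, and pick an
    output port -- the per-cell work an ATM switch does in hardware."""
    out_port, out_vpi, out_vci = switch_table[(in_port, vpi, vci)]
    return out_port, out_vpi, out_vci

print(forward(1, 10, 100))   # -> (3, 22, 200)
```

The two table entries show why the identifiers are only locally significant: the same (VPI 10, VCI 100) pair arriving on different ports belongs to two different circuits.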

Using cells and virtual circuits for traffic engineering

Another key ATM concept involves the traffic contract. When an ATM circuit is set up, each switch on the circuit is informed of the traffic class of the connection. ATM traffic contracts form part of the mechanism by which Quality of Service (QoS) is ensured. There are four basic types (and several variants), each with a set of parameters describing the connection:

1. CBR - Constant bit rate: a Peak Cell Rate (PCR) is specified, which is constant.
2. VBR - Variable bit rate: an average cell rate is specified, which can peak at a certain level for a maximum interval before being problematic.
3. ABR - Available bit rate: a minimum guaranteed rate is specified.
4. UBR - Unspecified bit rate: traffic is allocated to all remaining transmission capacity.

VBR has real-time and non-real-time variants, and serves for "bursty" traffic. Non-real-time VBR is usually abbreviated vbr-nrt. Most traffic classes also introduce the concept of Cell Delay Variation Tolerance (CDVT), which defines the "clumping" of cells in time. To maintain traffic contracts, networks usually use "shaping", a combination of queuing and marking of cells; "policing" generally enforces traffic contracts.
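Policing a circuit against its contract is commonly described by the Generic Cell Rate Algorithm (a virtual-scheduling form of the leaky bucket). The sketch below is a simplified version of that algorithm, with invented arrival times:

```python
def gcra(arrivals, increment, limit):
    """Simplified Generic Cell Rate Algorithm (virtual scheduling).
    arrivals:  cell arrival times, in arbitrary time units
    increment: ideal inter-cell spacing (1 / contracted cell rate)
    limit:     tolerance, playing the role of CDVT
    Returns one conform/non-conform flag per cell."""
    tat = 0.0                      # theoretical arrival time of the next cell
    flags = []
    for t in arrivals:
        if t < tat - limit:
            flags.append(False)    # non-conforming: drop or set the CLP bit
        else:
            tat = max(t, tat) + increment
            flags.append(True)     # conforming: admit and advance the schedule
    return flags

# Contract: one cell per 10 time units, tolerance 2.
# The burst at t=12 and t=13 arrives too early and is flagged.
print(gcra([0, 10, 12, 13, 40], increment=10, limit=2))
# -> [True, True, False, False, True]
```

A policer would drop the flagged cells or mark their CLP bit, which is where PPD/EPD then step in to avoid carrying the rest of a now-useless packet.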

Types of virtual circuits and paths

ATM can build virtual circuits and virtual paths either statically or dynamically. Static circuits (permanent virtual circuits or PVCs) or paths (permanent virtual paths or PVPs) require that the provisioner build the circuit as a series of segments, one for each pair of interfaces through which it passes. PVPs and PVCs, though conceptually simple, require significant effort in large networks; they also do not support the re-routing of service in the event of a failure. Dynamically built PVPs (soft PVPs or SPVPs) and PVCs (soft PVCs or SPVCs), in contrast, are built by specifying the characteristics of the circuit (the service "contract") and the two endpoints. Finally, ATM networks build and tear down switched virtual circuits (SVCs) on demand when requested by an end piece of equipment. One application for SVCs is to carry individual telephone calls when a network of telephone switches is inter-connected by ATM. SVCs were also used in attempts to replace local area networks with ATM.

Virtual circuit routing

Most ATM networks supporting SPVPs, SPVCs, and SVCs use the Private Network Node Interface or Private Network-to-Network Interface (PNNI) protocol. To share topology information between switches and select a route through a network, PNNI uses the same shortest-path-first algorithm that OSPF and IS-IS use to route IP packets. PNNI also includes a very powerful summarization mechanism that allows the construction of very large networks, as well as a call admission control (CAC) algorithm that determines whether sufficient bandwidth is available on a proposed route through a network to satisfy the service requirements of a VC or VP.

Call admission and connection establishment

A network must establish a connection before two parties can send cells to each other. In ATM this is called a virtual circuit (VC). It can be a permanent virtual circuit (PVC), which is created administratively on the endpoints, or a switched virtual circuit (SVC), which is created as needed by the communicating parties. SVC creation is managed by signaling, in which the requesting party indicates the address of the receiving party, the type of service requested, and whatever traffic parameters may be applicable to the selected service. "Call admission" is then performed by the network to confirm that the requested resources are available and that a route exists for the connection.
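The admission decision can be sketched as simple bandwidth bookkeeping on each link of the proposed route. This is a deliberately minimal model: real CAC algorithms also account for statistical multiplexing of VBR traffic rather than summing peak rates:

```python
class Link:
    """Tracks reserved bandwidth on one link for call admission control."""
    def __init__(self, capacity_mbps):
        self.capacity = capacity_mbps
        self.reserved = 0.0

    def admit(self, requested_mbps):
        # Admit the VC only if remaining capacity covers its contracted
        # rate; otherwise the setup request is rejected.
        if self.reserved + requested_mbps <= self.capacity:
            self.reserved += requested_mbps
            return True
        return False

link = Link(155.0)
print(link.admit(100.0))   # -> True
print(link.admit(100.0))   # -> False (only 55 Mbit/s left)
print(link.admit(50.0))    # -> True
```

In PNNI, this check would be applied to every link on the candidate route before the connection setup proceeds.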

The ATM Network

The technology allows both public (i.e., RBOC or carrier) and private (i.e., LAN or LAN-to-internal switch) ATM networks. This capability gives a seamless and transparent (to the user) connection from one end user to another end user, whether in the same building or across two continents. Three types of interfaces exist in the network:

1. User-to-Network Interface (UNI)
2. Network-to-Network Interface (NNI)
3. Inter-Carrier Interface (ICI)

The UNI exists between a single end user and a public ATM network, between a single end user and a private ATM switch, or between a private ATM switch and the public ATM network of an RBOC. The NNI exists between switches in a single public ATM network; NNIs may also exist between two private ATM switches. The ICI is located between two public ATM networks (an RBOC and an interexchange carrier). All of these interfaces are very similar; the major differences between them are administrative and signalling related. The only type of signalling exchanged across the UNI is that required to set up a virtual channel for the transmission. Communication across the NNI and the ICI requires signalling for virtual-path and virtual-channel establishment, together with various mechanisms for the exchange of information such as routing tables.

The network functions as follows. End User 1 in Chicago wishes to transfer a data file to End User 2 in Los Angeles. A virtual channel is created and a virtual path is established from switch to switch within the public ATM network in Chicago (ATM Network 1). The Chicago RBOC, in turn, establishes contact with the public ATM network in Los Angeles (ATM Network 2). ATM Network 2 also establishes a virtual path from switch to switch within the network and with the Private ATM Switch at the destination. The private ATM network completes the virtual path by establishing a virtual channel with End User 2. At each interface in this network, a unique virtual path identifier (VPI) and virtual channel identifier (VCI) are established for this transmission. These identifiers are of local significance ONLY: the identifier is significant only for a specific switch and the two nodes adjacent to it in the virtual path. Each node within the virtual path (including both the end users and the switches) maintains a pool of inactive identifiers to be used as needed.

End User 1 encapsulates the file in 53-byte cells, each with its unique VPI/VCI "destination address" in the header. These cells are streamed and sent across the UNI to the ATM network switch. This switch reads the ATM header, consults the routing table created during the virtual path setup, changes the VPI/VCI as necessary, and sends each cell in the stream out of the appropriate port and across the NNI to the next switch in the virtual path. The last switch within the virtual path for ATM Network 1 repeats this process and sends the cell out through the ICI to ATM Network 2. ATM Network 2 continues the process in a similar manner until the cell is carried through the UNI to the Private ATM Switch which, in turn, sends the cell to End User 2. End User 2 then reconstructs the file from the sequential cells, stripping the 5-byte header from each cell. End User 1 or End User 2 terminates the call, i.e., "hangs up," and the virtual path is dismantled. The VCI and VPI values are returned to the pool of available values for each switch.
Notice that only the End Users at either end of the transmission deal with the 48-byte information load within the cell. At each stage of the transmission, the switch is only concerned with accepting the cell from one port, changing the VPI/VCI according to its tables, and routing the cell out the appropriate switch port.

ATM Layered Architecture

At the End User sites, ATM operates with a layered structure that is similar to the OSI 7-layer model. However, ATM only addresses the functionality of the two lowest layers of the OSI model: the physical layer (Layer 1) and the data link layer (Layer 2). All other layers are irrelevant in ATM, as these layers are only part of the encapsulated information portion of the cell, which is not used by the ATM network. In ATM, the functionality of the two lower OSI layers is handled by three layers:

- ATM Adaptation Layer
  - Convergence Sublayer
  - Segmentation & Reassembly Sublayer
- ATM Layer
- Physical Layer
  - Transmission Convergence Sublayer
  - Physical Medium Dependent Sublayer

The Physical Layer defines the medium for transmission, any medium-dependent parameters (e.g., rate, quality of service required), and the framing used to find the data contained within the medium. The ATM Layer provides the basic 53-byte cell format, defining the 5-byte ATM header for each 48-byte payload segment handed down by the AAL. The ATM Adaptation Layer (AAL) adapts the higher-level data into formats compatible with the ATM Layer requirements, i.e., this layer segments the data and adds appropriate error control information as necessary. It is dependent on the type of services (voice, data, etc.) being transported by the higher layer.

The ATM Cell

Each individual ATM cell consists of a 5-byte cell header and 48 bytes of information encapsulated within its payload. The ATM network uses the header to support virtual-path and virtual-channel routing.

Generic Flow Control (GFC)

The GFC field of the header is only defined across the UNI. It is intended to control the traffic flow across the UNI and to alleviate short-term overload conditions. It is currently undefined, and these 4 bits must be set to 0's.

Virtual Path Identifier (VPI)

The VPI, an 8-bit field for the UNI and a 12-bit field for the NNI, is used to identify virtual paths. In an idle cell, the VPI is set to all 0's. (Together with the Virtual Channel Identifier, the VPI provides a unique local identification for the transmission.)

Virtual Channel Identifier (VCI)

This 16-bit field is used to identify a virtual channel. For idle cells, the VCI is set to all 0's. (Together with the Virtual Path Identifier, the VCI provides a unique local identification for the transmission.)

Payload Type Identifier (PTI)

The three bits of the PTI are used for different purposes. Bit 4 is set to 1 to identify operation, administration, or maintenance cells (i.e., anything other than data cells). Bit 3 is set to 1 to indicate that congestion was experienced by a data cell in transmission, and is only valid when bit 4 is set to 0. Bit 2 is used by AAL5 to identify the data as Type 0 (beginning of message, continuation of message; bit = 0) or Type 1 (end of message, single-cell message; bit = 1) when bit 4 is set to 0. It may also be used for management functions when bit 4 is set to 1. This bit is carried transparently through the network and has no meaning to the end user when AAL5 is NOT in use.

Cell Loss Priority (CLP)

The 1-bit CLP field is used for explicit indication of the priority of the cell. It may be set by the AAL Layer to indicate cells to discard in cases of congestion, or by the network as part of the traffic management on commercial subscriber networks.

Header Error Control (HEC)

This is an 8-bit cyclic redundancy check computed over the first 4 bytes of the ATM cell header ONLY. It is capable of detecting all single-bit errors and some multiple-bit errors. The HEC is checked by each switch as the ATM cell is received, and all cells with HEC discrepancies (errors) are discarded. Cells with single-bit errors may be subject to error correction (if supported) or discarded. When a cell is passed through the switch and the VPI/VCI values are altered, the HEC is recalculated for the cell before it is passed out the port.

Operations and Maintenance (OAM) Cells

Operations and Maintenance (OAM) cells are used to provide various maintenance functions within the ATM network, including connectivity verification and alarm surveillance. These cells consist of a single segment with an ATM header.
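The UNI header layout described above (GFC 4 bits, VPI 8, VCI 16, PTI 3, CLP 1, HEC 8) can be packed and parsed in a few lines. The CRC-8 uses the polynomial x^8 + x^2 + x + 1; the XOR with the coset 0x55 follows ITU-T I.432, and the field values in the usage example are invented:

```python
def crc8(data: bytes, poly: int = 0x07) -> int:
    """Bitwise CRC-8 with polynomial x^8 + x^2 + x + 1 (0x07), init 0."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

def build_uni_header(gfc, vpi, vci, pti, clp):
    """Pack a 5-byte UNI cell header: GFC(4) VPI(8) VCI(16) PTI(3) CLP(1) HEC(8).
    The HEC covers only the first four bytes; per ITU-T I.432 the coset
    0x55 is XORed into it."""
    word = (gfc << 28) | (vpi << 20) | (vci << 4) | (pti << 1) | clp
    first4 = word.to_bytes(4, "big")
    return first4 + bytes([crc8(first4) ^ 0x55])

def parse_uni_header(header: bytes):
    """Unpack the fields and verify the HEC, as a receiving switch would."""
    word = int.from_bytes(header[:4], "big")
    return {
        "gfc": word >> 28,
        "vpi": (word >> 20) & 0xFF,
        "vci": (word >> 4) & 0xFFFF,
        "pti": (word >> 1) & 0x7,
        "clp": word & 0x1,
        "hec_ok": header[4] == crc8(header[:4]) ^ 0x55,
    }

h = build_uni_header(gfc=0, vpi=22, vci=200, pti=0, clp=0)
print(parse_uni_header(h))
```

Because the HEC covers only the header, a switch that rewrites the VPI/VCI must recompute it, exactly as the text above describes.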
