
Computer Networks

Assignment Set - 1
Q1. Explain the design issues for the various layers in computer networks. What are connection-oriented and connectionless services?

Ans: Design issues for the layers: Several key design issues are present in the various layers of computer networks. The important design issues are:

1. Addressing: A mechanism for identifying senders and receivers on the network needs some form of addressing. There are multiple processes running on one machine, so some means is needed for a process on one machine to specify with whom it wants to communicate.

2. Error control: There may be erroneous transmission due to several problems during communication, such as faults in the communication circuits or physical medium, thermal noise, and interference. Many error-detecting and error-correcting codes are known, but both ends of the connection must agree on which one is being used. In addition, the receiver must have some mechanism for telling the sender which messages have been received correctly and which have not.

3. Flow control: If there is a fast sender at one end sending data to a slow receiver, there must be a flow control mechanism to prevent loss of data at the slow receiver. Several mechanisms are used for flow control, such as increasing the buffer size at the receiver, slowing down the fast sender, and so on. Some processes will not be in a position to accept arbitrarily long messages; then there must be some mechanism for disassembling, transmitting, and reassembling messages.

4. Multiplexing/demultiplexing: If data has to be transmitted over the transmission medium separately, it is inconvenient or expensive to set up a separate connection for each pair of communicating processes. So multiplexing is needed in the physical layer at the sender end, and demultiplexing is needed at the receiver end.

5. Routing: When data has to be transmitted from source to destination, there may be multiple paths between them. An optimized (shortest) route must be chosen. This decision is made on the basis of routing algorithms, which choose an optimized route to the destination.

Connection-Oriented and Connectionless Services

Layers can offer two types of services, namely connection-oriented service and connectionless service.

Connection-oriented service: The service user first establishes a connection, uses the connection, and then releases the connection. Once the connection is established between source and destination, the path is fixed, and data transmission takes place through this established path. The order of the messages sent is preserved at the receiver end. The service is reliable and there is no loss of data. However, the acknowledgements that make the service reliable are an overhead and add delay.

Connectionless service: In this type of service, no connection is established between source and destination, and there is no fixed path. Therefore, each message must carry the full destination address, and each message is sent independently of the others. Messages may not be delivered at the destination in the order in which they were sent; thus grouping and ordering are required at the receiver end. The service is not reliable, and there is no acknowledgement from the receiver. Unreliable connectionless service is often called datagram service; it does not return an acknowledgement to the sender. In some cases, establishing a connection to send one short message is not worthwhile but reliability is still required; the acknowledged datagram service can be used for such applications. Another service is the request-reply service. In this type of service, the sender transmits a single datagram containing a request from the client side; the server's reply at the other end contains the answer. Request-reply is commonly used to implement communication in the client-server model.
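The connectionless datagram service described above can be sketched with UDP sockets. This is an illustrative sketch, not part of the original text: the loopback address and the use of an OS-assigned port are assumptions made for the example.

```python
# Sketch of connectionless (datagram) service using UDP sockets.
# Each send carries the full destination address; no connection is set
# up and no acknowledgement comes back to the sender.
import socket

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))             # receiver; port 0 = any free port
dest = rx.getsockname()               # full destination address

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"datagram 1", dest)        # destination named on every send
tx.sendto(b"datagram 2", dest)        # sent independently of the first

received = [rx.recvfrom(1024)[0] for _ in range(2)]
print(received)

tx.close()
rx.close()
```

On a real network, unlike this loopback sketch, the two datagrams could be reordered or lost, which is exactly why the receiver must do its own grouping and ordering.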
Service Primitives

A service is specified as a set of primitives (operations) available to a user process. The primitives for connection-oriented service are different from those of connectionless service. There are five service primitives for connection-oriented service:

1. LISTEN: Block waiting for an incoming connection.
2. CONNECT: Establish a connection with the peer on the other side.
3. RECEIVE: Block waiting for an incoming message.
4. SEND: Send a message to the peer.
5. DISCONNECT: Terminate a connection with the peer.

The above primitives are used to illustrate the interactions of a connection-oriented service.
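The five primitives map closely onto the BSD socket API. The sketch below, written against Python's socket module, is an illustration under assumed details (loopback address, OS-chosen port, a helper thread standing in for the remote peer), not the only possible mapping.

```python
# Sketch: the five connection-oriented primitives expressed as socket calls.
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
srv.listen(1)                     # LISTEN: wait for an incoming connection
port = srv.getsockname()[1]

def server():
    conn, _ = srv.accept()        # completes the LISTEN
    msg = conn.recv(1024)         # RECEIVE: block for an incoming message
    conn.sendall(b"got: " + msg)  # SEND: message to the peer
    conn.close()                  # DISCONNECT

t = threading.Thread(target=server)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))  # CONNECT: establish connection with peer
cli.sendall(b"hello")             # SEND
reply = cli.recv(1024)            # RECEIVE
cli.close()                       # DISCONNECT
t.join()
srv.close()
print(reply)                      # b'got: hello'
```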

Q2. Discuss the OSI Reference Model.

Ans: The OSI model is based on a proposal developed by the International Organization for Standardization (ISO) as a first step towards international standardization of the protocols used in the various layers. The model is called the ISO OSI (Open Systems Interconnection) Reference Model because it deals with connecting open systems, that is, systems that follow the standard and are therefore open for communication with other systems, irrespective of manufacturer. Its main objectives were to:

- Allow manufacturers of different systems to interconnect equipment through standard interfaces.
- Allow software and hardware to integrate well and be portable across different systems.

The OSI model has seven layers, shown in the figure. The principles that were applied to arrive at the seven layers are as follows:

1. Each layer should perform a well-defined function.
2. The function of each layer should be chosen with an eye toward defining internationally standardized protocols.
3. The layer boundaries should be chosen to minimize the information flow across the interfaces.

The set of rules for communication between entities in a layer is called the protocol for that layer.

The Physical Layer

The physical layer coordinates the functions required to carry a bit stream (0s and 1s) over a physical medium. It defines the electrical and mechanical specifications of the cables, connectors, and signaling options that physically link two nodes on a network.

The Data Link Layer

The main task of the data link layer is to provide error-free transmission. It accomplishes this task by having the sender configure the input data into data frames, transmit the frames sequentially between network devices, and process the acknowledgement frames sent back by the intermediate receiver. The data link layer creates and recognizes frame boundaries. This can be accomplished by attaching special bit patterns to the beginning and end of the frame. Since these bit patterns can accidentally occur in the data, special care must be taken to make sure they are not incorrectly interpreted as frame boundaries.

The Network Layer

Whereas the data link layer is responsible for delivery on a single hop, the network layer ensures that each packet travels from its source to its destination successfully and efficiently.

A key design issue is determining how packets are routed from source to destination. Routes can be based on static tables that are wired into the network and rarely changed. They can also be determined at the start of each conversation, for example, a terminal session. Finally, they can be highly dynamic, being determined anew for each packet to reflect the current network load. When a packet has to travel from one network to another to get to its destination, many problems can arise: the addressing used by the second network may be different from the first; the second network may not accept the packet at all because it is too large; the protocols may differ; and so on. It is up to the network layer to overcome all these problems to allow heterogeneous networks to be interconnected.

The Transport Layer

The basic function of the transport layer is to accept data from the session layer, split it up into smaller units if need be, pass these to the network layer, and ensure that the pieces all arrive correctly at the other end. Furthermore, all this must be done efficiently and in a way that isolates the upper layers from the inevitable changes in hardware technology. The transport layer provides a location- and media-independent end-to-end data transfer service to the session and upper layers.

The Session Layer

The session layer allows users on different machines to establish sessions between them. A session allows ordinary data transport, as does the transport layer, but it also provides enhanced services useful in some applications. A session might be used to allow a user to log into a remote timesharing system or to transfer a file between two machines. One of the services of the session layer is to manage dialogue control. Sessions can allow traffic to go in both directions at the same time, or in only one direction at a time.
If traffic can go only one way at a time (analogous to a single railroad track), the session layer can help keep track of whose turn it is. A related session service is token management. For some protocols, it is essential that the two sides do not attempt the same operation at the same time. To manage these activities, the session layer provides tokens that can be exchanged; only the side holding the token may perform the desired operation. Another session service is synchronization. Consider the problems that might occur when trying to do a two-hour file transfer between two machines with a one-hour mean time between crashes. After each transfer was aborted, the whole transfer would have to start over again and would probably fail again the next time as well. To eliminate this problem, the session layer provides a way to insert markers at appropriate checkpoints.

The Presentation Layer

Unlike all the lower layers, which are just interested in moving bits reliably from here to there, the presentation layer is concerned with the syntax and semantics of the information transmitted. A typical example of a presentation service is encoding data in

a standard, agreed-upon way. Most user programs do not exchange random binary bit strings; they exchange things such as people's names, dates, amounts of money, and invoices. These items are represented as character strings, integers, floating-point numbers, and data structures composed of several simpler items. Different computers have different codes for representing character strings (e.g., ASCII and Unicode), integers (e.g., ones complement and twos complement), and so on. In order to make it possible for computers with different representations to communicate, the data structures to be exchanged can be defined in an abstract way, along with a standard encoding to be used on the wire. The presentation layer manages these abstract data structures and converts between the representation used inside the computer and the network standard representation.

The Application Layer

The application layer supports functions that control and supervise OSI application processes, such as starting/maintaining/stopping an application, allocating/deallocating OSI resources, accounting, and checkpointing and recovery. It also supports remote job execution, file transfer, message transfer, and virtual terminals.
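The presentation-layer idea of converting a machine-internal representation to a standard on-the-wire encoding can be illustrated with Python's struct module, where the "!" prefix selects network (big-endian) byte order regardless of the host CPU. This is an illustrative sketch; the value 123456 is an arbitrary example.

```python
# Sketch: converting between host representation and a standard
# on-the-wire (network byte order) representation of an integer.
import struct

amount = 123456                         # integer as the sender stores it
wire = struct.pack("!i", amount)        # "!" = network (big-endian) order
assert wire == b"\x00\x01\xe2\x40"      # 123456 = 0x0001E240, big-endian
decoded = struct.unpack("!i", wire)[0]  # receiver converts back
assert decoded == amount
```

The sender and receiver may have completely different internal integer layouts; only the wire format is agreed upon, which is precisely the presentation layer's job.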

Q3. Describe the different types of data transmission modes.

Ans: The transmission of binary data across a link can be accomplished in either parallel or serial mode. In parallel mode, multiple bits are sent with each clock tick. In serial mode, one bit is sent with each clock tick. While there is only one way to send parallel data, there are three subclasses of serial transmission: asynchronous, synchronous, and isochronous.

Serial and Parallel

Serial Transmission

In serial transmission one bit follows another, so we need only one communication channel rather than n to transmit data between two communicating devices. The advantage of serial over parallel transmission is that, with only one communication channel, serial transmission reduces the cost of transmission over parallel by roughly a factor of n. Since communication within devices is parallel, conversion devices are required at the interface between the sender and the line (parallel-to-serial) and between the line and the receiver

(serial-to-parallel). Serial transmission occurs in one of three ways: asynchronous, synchronous, and isochronous.

Parallel Transmission

Binary data, consisting of 1s and 0s, may be organized into groups of n bits each. Computers produce and consume data in groups of bits, much as we conceive of and use spoken language in the form of words rather than letters. By grouping, we can send data n bits at a time instead of one. This is called parallel transmission. The mechanism for parallel transmission is a simple one: use n wires to send n bits at one time. That way each bit has its own wire, and all n bits of one group can be transmitted with each clock tick from one device to another. The advantage of parallel transmission is speed: all else being equal, parallel transmission can increase the transfer speed by a factor of n over serial transmission. But there is a significant disadvantage: cost. Parallel transmission requires n communication lines just to transmit the data stream. Because this is expensive, parallel transmission is usually limited to short distances.

Simplex, Half-Duplex and Full-Duplex

There are three modes of data transmission that correspond to the three types of circuits available. These are:

a) Simplex
b) Half-duplex
c) Full-duplex

Different Modes of Data Transmission

Simplex

Simplex communications imply a simple method of communicating, which they are. In simplex mode, there is one-way communication. Television transmission is a good example of simplex communications: the main transmitter sends out a signal (broadcast), but it does not expect a reply, as the receiving units cannot issue a reply back to the transmitter. Other examples are a data collection terminal on a factory floor and a line printer (receive only).
Another example of simplex communication is a keyboard attached to a computer, because the keyboard can only send data to the computer. At first thought this might appear adequate for many types of application in which the flow of information is unidirectional. However, in almost all data processing applications, communication in both directions is required. Even for a one-way flow of information from a terminal to a computer, the system will be designed to allow the computer to signal the terminal that data has been received. Without this capability, the remote user might enter data and never know that it was not received by the other terminal. Hence, simplex circuits are seldom used, because a return path is generally needed to send acknowledgement, control, or error signals.

Half-Duplex

In half-duplex mode, both units communicate over the same medium, but only one unit can send at a time. While one is in send mode, the other unit is in receive mode. It is like two polite people talking to each other: one talks, the other listens, but neither talks at the same time. Thus, a half-duplex line can alternately send and receive data. It requires two wires. This is the most common type of transmission for voice communications, because only one person is supposed to speak at a time. It is also used to connect a terminal with a computer. The terminal might transmit data and then the computer responds with

an acknowledgement. The transmission of data to and from a hard disk is also done in half-duplex mode.

Full-Duplex

In a half-duplex system, the line must be turned around each time the direction is reversed. This involves a special switching circuit and requires a small amount of time (approximately 150 milliseconds). With the high-speed capabilities of the computer, this turnaround time is unacceptable in many instances. Also, some applications require simultaneous transmission in both directions. In such cases, a full-duplex system is used that allows information to flow simultaneously in both directions on the transmission path. Use of a full-duplex line improves efficiency, as the line turnaround time required in a half-duplex arrangement is eliminated. It requires four wires.

Synchronous and Asynchronous Transmission

Synchronous Transmission

In synchronous transmission, the bit stream is combined into longer frames, which may contain multiple bytes. Each byte, however, is introduced onto the transmission link without a gap between it and the next one. It is left to the receiver to separate the bit stream into bytes for decoding purposes. In other words, data are transmitted as an unbroken string of 1s and 0s, and the receiver separates that string into the bytes, or characters, it needs to reconstruct the information. Without gaps and start and stop bits, there is no built-in mechanism to help the receiving device adjust its bit synchronization midstream. Timing therefore becomes very important, because the accuracy of the received information is completely dependent on the ability of the receiving device to keep an accurate count of the bits as they come in. The advantage of synchronous transmission is speed: with no extra bits or gaps to introduce at the sending end and remove at the receiving end, and, by extension, with fewer bits to move across the link, synchronous transmission is faster than asynchronous transmission.
Byte synchronization is accomplished in the data link layer.

Asynchronous Transmission

Asynchronous transmission is so named because the timing of the signal is unimportant. Instead, information is received and translated by agreed-upon patterns. As long as those patterns are followed, the receiving device can retrieve the information without regard to the rhythm in which it is sent. Patterns are based on grouping the bit stream into bytes. Each group, usually 8 bits, is sent along the link as a unit. The sending system handles each group independently, relaying it to the link whenever ready, without regard to a timer. Without synchronization, the receiver cannot use timing to predict when the next group will arrive. To alert the receiver to the arrival of a new group, therefore, an extra bit is added to the beginning of each byte. This bit, usually a 0, is called the start bit. To let the receiver know that the byte is finished, one or more additional bits are appended to the end of the byte. These bits, usually 1s, are called stop bits. By this method, each byte is increased in size to at least 10 bits, of which 8 bits are information and 2 or more bits are signals to the receiver. In addition, the transmission of each byte may then be followed by a gap of varying duration. This gap can be represented either by an idle channel or by a stream of additional stop bits. The start and stop bits and the gap alert the receiver to the beginning and end of each byte and allow it to synchronize with the data stream. This mechanism is called asynchronous because, at the byte level, the sender and receiver do not have to be synchronized. But within each byte, the receiver must still be synchronized with the incoming bit stream. That is, some synchronization is required, but only for the duration of a single byte. The receiving device resynchronizes at the onset of each new byte. When the receiver detects a start bit, it sets a timer and begins counting bits as they come in. After n bits, the receiver looks for a stop bit.
As soon as it detects the stop bit, it waits until it detects the next start bit.
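The start/stop framing described above can be sketched in a few lines. This is an illustrative model (one start bit, 8 data bits sent LSB first, one stop bit), not an implementation of any particular UART.

```python
# Sketch of asynchronous framing: start bit 0, 8 data bits, stop bit 1.

def frame_byte(byte):
    """Frame one byte: start bit 0, 8 data bits (LSB first), stop bit 1."""
    data = [(byte >> i) & 1 for i in range(8)]
    return [0] + data + [1]       # 10 line bits carry 8 data bits

def deframe(bits):
    """Recover the byte, checking the framing bits."""
    assert bits[0] == 0 and bits[9] == 1, "framing error"
    return sum(b << i for i, b in enumerate(bits[1:9]))

framed = frame_byte(ord("A"))     # 'A' = 0x41
assert len(framed) == 10          # 8 data bits + 2 framing bits
assert deframe(framed) == ord("A")
```

The 10-bits-on-the-wire-per-8-data-bits ratio is exactly the overhead the text describes: at least 20 percent of each framed byte is signaling rather than information.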

Asynchronous transmission

Isochronous Transmission

In real-time audio and video, in which uneven delays between frames are not acceptable, synchronous transmission fails. For example, TV images are broadcast at the rate of 30 images per second; they must be viewed at the same rate. If each image is sent by using one or more frames, there should be no delays between frames. For this type of

application, synchronization between characters is not enough; the entire stream of bits must be synchronized. Isochronous transmission guarantees that the data arrive at a fixed rate.

Q4. Define switching. What is the difference between circuit switching and packet switching?

Ans: A network is a set of connected devices. Whenever we have multiple devices, we have the problem of how to connect them to make one-to-one communication possible. One of the better solutions is switching. A switched network consists of a series of interlinked nodes, called switches. Switches are devices capable of creating temporary connections between two or more devices linked to the switch. In a switched network, some of these nodes are connected to the end systems (computers or telephones); others are used only for routing. Switched networks are divided as shown in the figure.

Different types of switching techniques

Circuit Switching

A circuit-switched network consists of a set of switches connected by physical links. A connection between two stations is a dedicated path made of one or more links. It is mainly used in the telephone network to connect one caller to another.

Circuit switching in the telephone network

In the figure, each office has three incoming lines and three outgoing lines. When a call passes through a switching office, a physical connection is established between the line on which the call came in and one of the output lines, as shown by the dotted lines. An important property of circuit switching is the need to set up an end-to-end path before any data can be sent. The elapsed time between the end of dialing and the start of ringing can easily be 10 seconds, more on long-distance or international calls. Before data transmission begins, the destination telephone should give an acknowledgement. Once the call is set up, the only delay for data is the propagation time for the electromagnetic signal, about 5 ms per 1000 km, and there is no problem of congestion.

Packet Switching

In packet switching, we transfer messages in small blocks of fixed size called packets. In packet switching there is no fixed path; packets are routed independently, sharing the network from moment to moment and following the best path to the destination. Packets may arrive out of order at the destination. Packet switching is more fault tolerant than circuit switching. Store-and-forward transmission is used to route packets to the destination: each router stores the packet in its main memory before forwarding it. Congestion may occur when too many packets are sent by the various hosts.

Comparison of switching techniques
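The delay trade-off between the two techniques can be put into rough numbers. The sketch below uses the figures quoted above (about 5 ms of propagation per 1000 km, call setup on the order of 10 seconds) plus assumed values for distance, hop count, packet size, and link speed; it is a back-of-the-envelope model, not a protocol simulation.

```python
# Rough comparison: circuit switching pays a one-time setup delay, then
# only propagation; packet switching pays store-and-forward delay at
# every link, but no setup.

DIST_KM = 4000                 # assumed end-to-end distance
PROP_MS_PER_1000KM = 5.0       # propagation figure from the text
SETUP_MS = 10_000.0            # text: call setup can be ~10 s
HOPS = 4                       # assumed number of links on the path
PACKET_BITS = 8_000            # assumed 1000-byte packet
LINK_BPS = 1_000_000           # assumed 1 Mbps links

prop_ms = DIST_KM / 1000 * PROP_MS_PER_1000KM          # 20 ms
tx_ms = PACKET_BITS / LINK_BPS * 1000                  # 8 ms per link

# Circuit: setup once, then one transmission plus propagation.
circuit_first_packet = SETUP_MS + prop_ms + tx_ms
# Packet: no setup, but the packet is fully received ("stored") and
# retransmitted ("forwarded") on each of the HOPS links.
packet_switched = prop_ms + HOPS * tx_ms

print(f"circuit (first packet): {circuit_first_packet:.0f} ms")
print(f"packet switched:        {packet_switched:.0f} ms")
```

Under these assumed numbers the first packet is far cheaper on the packet-switched network; the circuit only wins once its setup cost is amortized over a long stream of data.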

Q5. Classify guided (wired) transmission media. Compare fiber optics and copper wire.

Ans: Guided media, which are those that provide a conduit from one device to another, include twisted-pair cable, coaxial cable, and fiber-optic cable. A signal traveling along any of these media is directed and contained by the physical limits of the medium. Twisted-pair

and coaxial cable use metallic (copper) conductors that accept and transport signals in the form of electric current. Optical fiber is a cable that accepts and transports signals in the form of light.

Comparison of fiber optics and copper wire

Fiber has many advantages over copper wire as a transmission medium:

- It can handle much higher bandwidths than copper. Due to its low attenuation, repeaters are needed only about every 30 km on long lines, versus about every 5 km for copper.
- Fiber is not affected by power surges, electromagnetic interference, or power failures. Nor is it affected by corrosive chemicals in the air, making it ideal for harsh factory environments.
- Fiber is lighter than copper. One thousand twisted-pair copper cables 1 km long weigh 8000 kg, but two fibers have more capacity and weigh only 100 kg, which greatly reduces the need for expensive mechanical support systems that must be maintained.
- Fibers do not leak light and are quite difficult to tap. This gives them excellent security against potential wiretappers.
- If new routes are designed, fiber is the first choice because of its lower installation cost.

Fibers have the following disadvantages compared with copper wire:

- Fiber is an unfamiliar technology requiring skills most engineers do not have.
- Since optical transmission is inherently unidirectional, two-way communication requires either two fibers or two frequency bands on one fiber.
- Fiber interfaces cost more than electrical interfaces.
- Fibers can be damaged easily by being bent too much.

Q6. What are the different types of satellites?

Ans: Communication satellites have some interesting properties that make them attractive for many applications. In its simplest form, a communication satellite can be thought of as a big microwave repeater in the sky.
It contains several transponders, each of which listens to some portion of the spectrum, amplifies the incoming signal, and then rebroadcasts it at another frequency to avoid interference with the incoming signal.

Basics of Communication Satellites

Classification of Satellites

Four different types of satellite orbits can be identified, depending on the shape and diameter of the orbit:

- GEO (Geostationary Orbit)
- LEO (Low Earth Orbit)
- MEO (Medium Earth Orbit) or ICO (Intermediate Circular Orbit)
- HEO (Highly Elliptical Orbit)

Van Allen belts: ionized particles 2000 to 6000 km and 15000 to 30000 km above the earth's surface.

Satellite Coverage

Communication satellites and their altitude above the earth

Geostationary Orbit (GEO)
Altitude: ca. 36000 km above the earth's surface.
Coverage: Ideally suited for continuous, regional coverage using a single satellite. Can also be used equally effectively for global coverage using a minimum of three satellites.
Visibility: Mobile-to-satellite visibility decreases with increased latitude of the user. Poor visibility in built-up, urban regions.

Low Earth Orbit (LEO)
Altitude: ca. 500 to 1500 km.
Coverage: Multi-satellite constellations of upwards of 30-50 satellites are required for global, continuous coverage. Single satellites can be used in store-and-forward mode for localized coverage, but each only appears for short periods of time.
Visibility: Satellite diversity, by which more than one satellite is visible at any given time, can be used to optimize the link. This can be achieved either by selecting the optimum link or by combining the reception of two or more links. The higher the guaranteed minimum elevation angle to the user, the more satellites are needed in the constellation.

Medium Earth Orbit (MEO)
Altitude: ca. 6000 to 20000 km.
Coverage:

Multi-satellite constellations of between 10 and 20 satellites are required for global coverage.
Visibility: Good to excellent global visibility, augmented by the use of satellite diversity techniques.

Highly Elliptical Orbit (HEO)
Altitude: Apogee: 40000 to 50000 km; Perigee: 1000 to 20000 km.
Coverage: Three or four satellites are needed to provide continuous coverage to a region.
Visibility: Particularly designed to provide a high guaranteed elevation angle to the satellite for Northern and Southern temperate latitudes.
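The roughly 36000 km GEO altitude quoted above is not arbitrary: it follows from Kepler's third law for a satellite whose orbital period equals one sidereal day, so that it stays fixed over a point on the equator. The sketch below checks this with standard constants for Earth.

```python
# Sketch: derive the GEO altitude from the orbital-period formula
# T = 2*pi*sqrt(a^3 / mu)  =>  a = (mu * (T / (2*pi))**2) ** (1/3)
import math

MU = 3.986004418e14    # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6_378_000    # m, equatorial radius
T_SIDEREAL = 86_164    # s, one sidereal day

a = (MU * (T_SIDEREAL / (2 * math.pi)) ** 2) ** (1 / 3)
altitude_km = (a - R_EARTH) / 1000
print(f"GEO altitude: {altitude_km:.0f} km")   # ~35786 km
```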

ASSIGNMENT SET 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Q.1. Write down the features of Fast Ethernet and Gigabit Ethernet.

Fast Ethernet Technology

Fast Ethernet, or 100BaseT, is conventional Ethernet but faster, operating at 100 Mbps instead of 10 Mbps. Fast Ethernet is based on the proven CSMA/CD Media Access Control (MAC) protocol and can use existing 10BaseT cabling (see Appendix for pinout diagram and table). Data can move from 10 Mbps to 100 Mbps without protocol translation or changes to application and networking software.

Data-Link Layer

Fast Ethernet maintains CSMA/CD, the Ethernet transmission protocol. However, Fast Ethernet reduces the duration of time each bit is transmitted by a factor of 10, enabling the packet speed to increase tenfold from 10 Mbps to 100 Mbps. Data can move between Ethernet and Fast Ethernet without requiring protocol translation, because Fast Ethernet also maintains the 10BaseT error control functions as well as the frame format and length. Other high-speed technologies such as 100VG-AnyLAN, FDDI, and Asynchronous Transfer Mode (ATM) achieve 100 Mbps

or higher speeds by implementing different protocols that require protocol translation when moving data to and from 10BaseT. This protocol translation involves changes to the frame that typically mean higher latencies when frames are passed through layer 2 LAN switches.

Physical Layer Media Options

Fast Ethernet can run over the same variety of media as 10BaseT, including UTP, shielded twisted-pair (STP), and fiber. The Fast Ethernet specification defines separate physical sublayers for each media type:

- 100BaseT4 for four pairs of voice- or data-grade Category 3, 4, and 5 UTP wiring
- 100BaseTX for two pairs of data-grade Category 5 UTP and STP wiring
- 100BaseFX for two strands of 62.5/125-micron multimode fiber

In many cases, organizations can upgrade to 100BaseT technology without replacing existing wiring. However, for installations with Category 3 UTP wiring in all or part of their locations, four pairs must be available to implement Fast Ethernet. The MII layer of 100BaseT couples these physical sublayers to the CSMA/CD MAC layer (see Figure 1). The MII provides a single interface that can support external transceivers for any of the 100BaseT physical sublayers. For the physical connection, the MII is implemented on Fast Ethernet devices such as routers, switches, hubs, and adapters, and on transceiver devices using a 40-pin connector (see Appendix for pinout and connector diagrams). Cisco Systems contributed to the MII specification.

Physical Layer Signaling Schemes

Each physical sublayer uses a signaling scheme that is appropriate to its media type. 100BaseT4 uses three pairs of wire for 100-Mbps transmission and the fourth pair for collision detection. This method lowers the 100BaseT4 signaling to 33 Mbps per pair, making it suitable for Category 3, 4, and 5 wiring.
100BaseTX uses one pair of wires for transmission (a 125-MHz frequency operating at 80 percent efficiency to allow for 4B5B encoding) and the other pair for collision detection

and receive. 100BaseFX uses one fiber for transmission and the other fiber for collision detection and receive. The 100BaseTX and 100BaseFX physical signaling channels are based on FDDI physical layers developed and approved by the American National Standards Institute (ANSI) X3T9.5 committee. 100BaseTX uses the MLT-3 line encoding signaling scheme, which Cisco developed and contributed to the ANSI committee as the specification for FDDI over Category 5 UTP. Today MLT-3 is also used as the signaling scheme for ATM over Category 5 UTP.

Gigabit Ethernet

Gigabit Ethernet is a 1-gigabit/sec (1,000-Mbit/sec) extension of the IEEE 802.3 Ethernet networking standard. Its primary niches are corporate LANs, campus networks, and service provider networks, where it can be used to tie together existing 10-Mbit/sec and 100-Mbit/sec Ethernet networks. Gigabit Ethernet can replace 100-Mbit/sec FDDI (Fiber Distributed Data Interface) and Fast Ethernet backbones, and it competes with ATM (Asynchronous Transfer Mode) as a core networking technology. Many ISPs use Gigabit Ethernet in their data centers. Gigabit Ethernet provides an ideal upgrade path for existing Ethernet-based networks. It can be installed as a backbone network while retaining the existing investment in Ethernet hubs, switches, and wiring plants. In addition, management tools can be retained, although network analyzers will require updates to handle the higher speed. Gigabit Ethernet provides an alternative to ATM as a high-speed networking technology. While ATM has built-in QoS (quality of service) to support real-time network traffic, Gigabit Ethernet may be able to provide a high level of service quality simply by providing more bandwidth than is needed. This topic continues in "The Encyclopedia of Networking and Telecommunications" with a discussion of the following:

- Gigabit Ethernet features and specification
- Gigabit Ethernet modes and functional elements
- Gigabit Ethernet committees and specifications, including:
  - 1000Base-LX (IEEE 802.3z)
  - 1000Base-SX (IEEE 802.3z)
  - 1000Base-CX (IEEE 802.3z)
  - 1000Base-T (IEEE 802.3ab)

  - 10-Gigabit Ethernet (IEEE 802.3ae)

- Gigabit Ethernet switches
- Network configuration and design
- Flat networks or subnets
- Gigabit Ethernet backbones
- Switch-to-server links
- Gigabit Ethernet to the desktop
- Switch-to-switch links

Gigabit Ethernet versus ATM
Hybrid Gigabit Ethernet/ATM Core Network

10-Gigabit Ethernet: As if 1 Gbit/sec wasn't enough, the IEEE is working to define 10-Gigabit Ethernet (sometimes called "10 GE"). The new standard is being developed by the IEEE 802.3ae Working Group. Service providers will be the first to take advantage of this standard; it is being deployed in emerging metro-Ethernet networks. See "MAN (Metropolitan Area Network)" and "Network Access Services." As with 1-Gigabit Ethernet, 10-Gigabit Ethernet will preserve the 802.3 Ethernet frame format, as well as the minimum and maximum frame sizes. It will support full-duplex operation only. The topology is star-wired LANs that use point-to-point links and structured cabling topologies. 802.3ad link aggregation will also be supported. The new standard will support new multimedia applications, distributed processing, imaging, medical, CAD/CAM, and a variety of other applications, many of which cannot even be perceived today. Most certainly it will be used in service provider data centers and as part of metropolitan area networks. The technology will also be useful in the SAN (Storage Area Network) environment. Refer to the following Web sites for more information:

10 GEA (10 Gigabit Ethernet Alliance): http://www.10gea.org/Tech-whitepapers.htm
Telecommunications, "Lighting the Internet in the WAN" article on 10-Gigabit Ethernet: http://www.telecomsmag.com/issues/200009/tcs/lighting_internet.html

Q.2. Differentiate the working between pure ALOHA and slotted ALOHA.

ALOHA: ALOHA is a computer networking system introduced in the early 1970s by Norman Abramson and his colleagues at the University of Hawaii to solve the channel allocation problem. On the basis of global time synchronization, ALOHA is divided into two versions or protocols: Pure ALOHA and Slotted ALOHA.

Pure ALOHA: Pure ALOHA does not require global time synchronization. The basic idea of the pure ALOHA system is that it allows users to transmit whenever they have data. A sender, just like other users, can listen to what it is transmitting, and due to this feedback the broadcasting system is able to detect a collision, if any. If a collision is detected, the sender waits a random period of time and attempts transmission again. The waiting time must not be the same for all senders, or the same frames will collide and be destroyed over and over. Systems in which multiple users share a common channel in a way that can lead to conflicts are widely known as contention systems.

Efficiency of Pure ALOHA: Let T be the time needed to transmit one frame on the channel, and define "frame-time" as a unit of time equal to T. Let G refer to the mean of the Poisson distribution over transmission attempts; that is, on average there are G transmission attempts per frame-time. Let t be the time at which the sender wants to send a frame. We want to use the channel for one frame-time beginning at t, and so we need all other stations to refrain from transmitting during this time. Moreover, we need the other stations to refrain from transmitting between t-T and t as well, because a frame sent during this interval would overlap with our frame.

Efficiency of ALOHA: The vulnerable period for a frame is 2T, where T is the frame time: a frame will not collide only if no other frames are sent within one frame time of its start, before or after. For any frame-time, the probability of there being k transmission attempts during that frame-time is

P(k) = G^k * e^(-G) / k!

If throughput (number of frames delivered per frame-time) is represented by S, then under all loads S = G*P0, where P0 is the probability that the frame does not suffer a collision. A frame avoids collision if no other frames are sent during its vulnerable period. In one frame-time, P0 = e^(-G); over the 2T vulnerable period, P0 = e^(-2G), since the mean number of frames generated in 2T is 2G. The pure ALOHA throughput is therefore

S = G*P0 = G*e^(-2G)

Slotted ALOHA: Slotted ALOHA does require global time synchronization: stations may begin transmitting only at the start of a time slot.

Efficiency of Slotted ALOHA: Assume that a sending station must wait until the beginning of a slot (one frame time equals one time slot) and that arrivals still follow a Poisson distribution, with attempts probabilistically independent. In this case the vulnerable period is just T time units. The probability that k frames are generated in a frame-time is again

P(k) = G^k * e^(-G) / k!

so the probability of zero competing frames is P0 = e^(-G), and the throughput becomes

S = G*P0 = G*e^(-G)

Comparison of Pure ALOHA and Slotted ALOHA:
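The two throughput formulas above can be evaluated numerically; the sketch below (function names are mine, not from the text) shows the well-known maxima of 1/(2e) for pure ALOHA and 1/e for slotted ALOHA.

```python
import math

def throughput_pure(G):
    """Pure ALOHA: vulnerable period is two frame times, so S = G * e^(-2G)."""
    return G * math.exp(-2 * G)

def throughput_slotted(G):
    """Slotted ALOHA: vulnerable period is one slot, so S = G * e^(-G)."""
    return G * math.exp(-G)

# The maxima occur at G = 0.5 (pure) and G = 1.0 (slotted).
print(round(throughput_pure(0.5), 3))    # 0.184, i.e. 1/(2e)
print(round(throughput_slotted(1.0), 3)) # 0.368, i.e. 1/e
```

For every offered load G > 0, slotted ALOHA's throughput is at least that of pure ALOHA, since e^(-G) > e^(-2G).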

Pure ALOHA vs Slotted ALOHA: Throughput versus offered traffic for pure ALOHA and slotted ALOHA systems, i.e., a plot of S against G from the formulas S = G*e^(-2G) and S = G*e^(-G).

CSMA: CSMA is a set of rules in which the devices attached to a network first determine whether the channel (carrier) is in use or free, and then act accordingly. Because in this MAC protocol the network devices or nodes sense the channel before transmission, it is known as the carrier sense multiple access protocol. "Multiple access" indicates that many devices can connect to and share the same network, and if a node transmits anything, it is heard by all the stations on the network.

Q.3. Write down the distance vector algorithm. Explain the path vector protocol.

Distance Vector Routing algorithm:
1) For each node, estimate the cost from itself to each destination.
2) For each node, send the cost information to the neighbors.
3) On receiving cost information from a neighbor, update the routing table accordingly.
4) Repeat steps 1 to 3 periodically.

A path vector protocol is a computer network routing protocol which maintains path information that is updated dynamically. Updates which have looped through the network and returned to the same node are easily detected and discarded. This approach is sometimes used with Bellman-Ford routing algorithms to avoid "count to infinity" problems. It is different from both distance vector routing and link state routing. Each entry in the routing table contains the destination network, the next router, and the path to reach the destination.

Path Vector Messages in BGP: The autonomous system boundary routers (ASBRs) which participate in path vector routing advertise the reachability of networks. Each router that receives a path vector message must verify that the advertised path is according to its policy. If the message complies with the policy, the ASBR modifies its routing table and the message before sending it to the next neighbor.
In the modified message it sends its own AS number and replaces the next router entry with its own identification.
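The four-step distance vector procedure above can be sketched as a synchronous relaxation over a small example topology (the graph and node names here are hypothetical):

```python
# Hypothetical 4-node topology: cost of each direct link.
INF = float('inf')
links = {
    'A': {'B': 1, 'C': 4},
    'B': {'A': 1, 'C': 2, 'D': 7},
    'C': {'A': 4, 'B': 2, 'D': 1},
    'D': {'B': 7, 'C': 1},
}
nodes = list(links)

# Step 1: each node starts with cost 0 to itself and the link cost to each neighbor.
table = {n: {d: (0 if d == n else links[n].get(d, INF)) for d in nodes} for n in nodes}

# Steps 2-4: repeatedly exchange vectors with neighbors and relax the estimates.
for _ in range(len(nodes)):
    for n in nodes:
        for neigh, link_cost in links[n].items():
            for dest in nodes:
                # Cost via this neighbor = link cost + neighbor's advertised cost.
                table[n][dest] = min(table[n][dest], link_cost + table[neigh][dest])

print(table['A'])  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

Node A reaches C through B (cost 1+2=3) rather than over its direct link (cost 4), which is exactly the Bellman-Ford relaxation the periodic exchanges implement.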

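The path vector loop-detection rule described above can likewise be sketched: a router simply discards any advertised path that already contains its own identifier, and otherwise prepends itself before installing the route (the AS names and helper below are illustrative, not BGP message handling):

```python
# Hypothetical path-vector update handling: each route carries the full AS path.
def accept_update(my_as, dest, as_path, table):
    """Install the route unless the advertised path already loops through this AS."""
    if my_as in as_path:          # update has looped back to us: discard it
        return False
    table[dest] = [my_as] + as_path  # prepend our own AS, as in a BGP-style update
    return True

table = {}
print(accept_update('AS100', '10.0.0.0/8', ['AS200', 'AS300'], table))  # True
print(accept_update('AS100', '10.0.0.0/8', ['AS200', 'AS100'], table))  # False
print(table['10.0.0.0/8'])  # ['AS100', 'AS200', 'AS300']
```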
BGP is an example of a path vector protocol. In BGP the routing table maintains the autonomous systems that are traversed in order to reach the destination system. The Exterior Gateway Protocol (EGP) does not use path vectors.

Q.4. State the working principle of the TCP segment header and the UDP header.

TCP Header Format: TCP segments are sent as internet datagrams. The Internet Protocol header carries several information fields, including the source and destination host addresses [2]. A TCP header follows the internet header, supplying information specific to the TCP protocol. This division allows for the existence of host-level protocols other than TCP.

TCP Header Format:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|          Source Port          |       Destination Port        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                        Sequence Number                        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                    Acknowledgment Number                      |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|  Data |           |U|A|P|R|S|F|                               |
| Offset| Reserved  |R|C|S|S|Y|I|            Window             |
|       |           |G|K|H|T|N|N|                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|           Checksum            |         Urgent Pointer        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                    Options                    |    Padding    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                             data                              |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Source Port: 16 bits. The source port number.

Destination Port: 16 bits. The destination port number.

Sequence Number: 32 bits. The sequence number of the first data octet in this segment (except when SYN is present). If SYN is present, the sequence number is the initial sequence number (ISN) and the first data octet is ISN+1.

Acknowledgment Number: 32 bits. If the ACK control bit is set, this field contains the value of the next sequence number the sender of the segment is expecting to receive. Once a connection is established this is always sent.

Data Offset: 4 bits. The number of 32-bit words in the TCP header. This indicates where the data begins. The TCP header (even one including options) is an integral number of 32 bits long.

Reserved: 6 bits. Reserved for future use. Must be zero.

Control Bits: 6 bits (from left to right):
URG: Urgent Pointer field significant
ACK: Acknowledgment field significant
PSH: Push Function
RST: Reset the connection
SYN: Synchronize sequence numbers
FIN: No more data from sender

Window: 16 bits. The number of data octets, beginning with the one indicated in the acknowledgment field, which the sender of this segment is willing to accept.

Checksum: 16 bits. The checksum field is the 16-bit one's complement of the one's complement sum of all 16-bit words in the header and text. If a segment contains an odd number of header and text octets to be checksummed, the last octet is padded on the right with zeros to form a 16-bit word for checksum purposes. The pad is not transmitted as part of the segment. While computing the checksum, the checksum field itself is replaced with zeros. The checksum also covers a 96-bit pseudo header conceptually prefixed to the TCP header. This pseudo header contains the Source Address, the Destination Address, the Protocol, and the TCP length. This gives TCP protection against misrouted segments. This information is carried in the Internet Protocol and is transferred across the TCP/network interface in the arguments or results of calls by the TCP on the IP.

+--------+--------+--------+--------+
|           Source Address          |
+--------+--------+--------+--------+
|         Destination Address       |
+--------+--------+--------+--------+
|  zero  |  PTCL  |    TCP Length   |
+--------+--------+--------+--------+

The TCP Length is the TCP header length plus the data length in octets (this is not an explicitly transmitted quantity, but is computed), and it does not count the 12 octets of the pseudo header.

Urgent Pointer: 16 bits. This field communicates the current value of the urgent pointer as a positive offset from the sequence number in this segment. The urgent pointer points to the sequence number of the octet following the urgent data. This field is only interpreted in segments with the URG control bit set.

Options: variable. Options may occupy space at the end of the TCP header and are a multiple of 8 bits in length. All options are included in the checksum. An option may begin on any octet boundary. There are two cases for the format of an option: Case 1: A single octet of option-kind. Case 2: An octet of option-kind, an octet of option-length, and the actual option-data octets.

The option-length counts the two octets of option-kind and option-length as well as the option-data octets.

Note that the list of options may be shorter than the data offset field might imply. The content of the header beyond the End-of-Option option must be header padding (i.e., zero). A TCP must implement all options. Currently defined options include (kind indicated in octal):

Kind    Length    Meaning
----    ------    -------
 0        -       End of option list.
 1        -       No-Operation.
 2        4       Maximum Segment Size.

Specific Option Definitions

End of Option List

+--------+
|00000000|
+--------+
 Kind=0

This option code indicates the end of the option list. This might not coincide with the end of the TCP header according to the Data Offset field. This is used at the end of all options, not the end of each option, and need only be used if the end of the options would not otherwise coincide with the end of the TCP header.

No-Operation

+--------+
|00000001|
+--------+
 Kind=1

This option code may be used between options, for example, to align the beginning of a subsequent option on a word boundary. There is no guarantee that senders will use this option, so receivers must be prepared to process options even if they do not begin on a word boundary.

Maximum Segment Size

+--------+--------+---------+--------+
|00000010|00000100|   max seg size   |
+--------+--------+---------+--------+
 Kind=2   Length=4

Maximum Segment Size Option Data: 16 bits. If this option is present, then it communicates the maximum receive segment size at the TCP which sends this segment. This field must only be sent in the initial connection request (i.e., in segments with the SYN control bit set). If this option is not used, any segment size is allowed.

Padding: variable. The TCP header padding is used to ensure that the TCP header ends and data begins on a 32-bit boundary. The padding is composed of zeros.
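The fixed 20-byte layout described above can be decoded with Python's struct module. The helper below is an illustrative sketch, not a full TCP implementation: it ignores options and parses only the fixed header fields.

```python
import struct

def parse_tcp_header(segment):
    """Decode the fixed 20-byte part of a TCP header, per the layout above."""
    (src_port, dst_port, seq, ack,
     offset_reserved, flags, window,
     checksum, urgent) = struct.unpack('!HHIIBBHHH', segment[:20])
    return {
        'src_port': src_port, 'dst_port': dst_port,
        'seq': seq, 'ack': ack,
        'data_offset': (offset_reserved >> 4) * 4,   # header length in bytes
        'flags': {name: bool(flags & bit) for name, bit in
                  [('URG', 0x20), ('ACK', 0x10), ('PSH', 0x08),
                   ('RST', 0x04), ('SYN', 0x02), ('FIN', 0x01)]},
        'window': window, 'checksum': checksum, 'urgent': urgent,
    }

# A hand-built SYN segment: ports 12345 -> 80, seq 1000, data offset 5 words.
hdr = struct.pack('!HHIIBBHHH', 12345, 80, 1000, 0, 5 << 4, 0x02, 65535, 0, 0)
info = parse_tcp_header(hdr)
print(info['dst_port'], info['flags']['SYN'])  # 80 True
```

Note how the Data Offset nibble is multiplied by 4: it counts 32-bit words, so the minimum value 5 corresponds to the 20-byte option-free header.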

The User Datagram Protocol (UDP): The User Datagram Protocol (UDP) is a transport layer protocol defined for use with the IP network layer protocol. It is defined by RFC 768, written by Jon Postel. It provides a best-effort datagram service to an End System (IP host). The service provided by UDP is an unreliable service that provides no guarantees for delivery and no protection from duplication (e.g. if this arises due to software errors within an Intermediate System (IS)). The simplicity of UDP reduces the overhead from using the protocol, and the services may be adequate in many cases. UDP provides a minimal, unreliable, best-effort, message-passing transport to applications and upper-layer protocols. Compared to other transport protocols, UDP and its UDP-Lite variant are unique in that they do not establish end-to-end connections between communicating end systems. UDP communication consequently does not incur connection establishment and teardown overheads, and there is minimal associated end system state. Because of these characteristics, UDP can offer a very efficient communication transport to some applications, but it has no inherent congestion control or reliability. On many platforms, applications can send UDP datagrams at the line rate of the link interface, which is often much greater than the available path capacity; doing so would contribute to congestion along the path, so applications need to be designed responsibly [RFC 5405].

One increasingly popular use of UDP is as a tunneling protocol, where a tunnel endpoint encapsulates the packets of another protocol inside UDP datagrams and transmits them to another tunnel endpoint, which decapsulates the UDP datagrams and forwards the original packets contained in the payload. Tunnels establish virtual links that appear to directly connect locations that are distant in the physical Internet topology, and can be used to create virtual (private) networks. Using UDP as a tunneling protocol is attractive when the payload protocol is not supported by the middleboxes that may exist along the path, because many middleboxes support UDP transmissions. UDP does not provide any communications security; applications that need to protect their communications against eavesdropping, tampering, or message forgery therefore need to separately provide security services using additional protocol mechanisms.

Protocol Header: A computer may send UDP packets without first establishing a connection to the recipient. A UDP datagram is carried in a single IP packet and is hence limited to a maximum payload of 65,507 bytes for IPv4 and 65,527 bytes for IPv6. The transmission of large IP packets usually requires IP fragmentation. Fragmentation decreases communication reliability and efficiency and should therefore be avoided. To transmit a UDP datagram, a computer completes the appropriate fields in the UDP header (PCI) and forwards the data together with the header for transmission by the IP network layer.

The UDP protocol header consists of 8 bytes of Protocol Control Information (PCI), made up of four fields, each 2 bytes in length:

Source Port (UDP packets from a client use this as a service access point (SAP) to indicate the session on the local client that originated the packet. UDP packets from a server carry the server SAP in this field)

Destination Port (UDP packets from a client use this as a service access point (SAP) to indicate the service required from the remote server. UDP packets from a server carry the client SAP in this field)

UDP length (The number of bytes comprising the combined UDP header information and payload data)

UDP Checksum (A checksum to verify that the end-to-end data has not been corrupted by routers or bridges in the network, or by the processing in an end system. The algorithm used is the standard Internet checksum algorithm. The checksum allows the receiver to verify that it was the intended destination of the packet, because it covers the IP addresses, port numbers and protocol number, and it verifies that the packet is not truncated or padded, because it covers the size field. It therefore protects an application against receiving corrupted payload data in place of, or in addition to, the data that was sent. In the cases where this check is not required, the value of 0x0000 is placed in this field, in which case the data is not checked by the receiver.)
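The four 2-byte fields and the Internet checksum described above can be sketched in Python. The helper names are mine; for brevity the checksum here is computed over raw bytes only, whereas a real stack also includes the pseudo-header (addresses, protocol, length).

```python
import struct

def internet_checksum(data):
    """Standard Internet checksum: one's complement of the one's-complement sum."""
    if len(data) % 2:
        data += b'\x00'                      # pad odd-length input with a zero octet
    total = sum(struct.unpack('!%dH' % (len(data) // 2), data))
    while total >> 16:                       # fold any carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_udp_datagram(src_port, dst_port, payload):
    length = 8 + len(payload)                # UDP length = 8-byte header + payload
    # Checksum field set to 0 here; a real stack computes it over the pseudo-header too.
    header = struct.pack('!HHHH', src_port, dst_port, length, 0)
    return header + payload

dgram = build_udp_datagram(5000, 53, b'hello')
print(len(dgram))  # 13
```

A useful property of the one's-complement scheme: summing data together with its own checksum yields 0, which is how a receiver validates a segment.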

As with other transport protocols, the UDP header and data are not processed by Intermediate Systems (IS) in the network, and are delivered to the final destination in the same form as originally transmitted. At the final destination, the UDP protocol layer receives packets from the IP network layer. These are checked using the checksum (when >0, this checks correct end-to-end operation of the network service) and all invalid PDUs are discarded. UDP does not make any provision for error reporting if the packets are not delivered. Valid data are passed to the appropriate session layer protocol identified by the source and destination port numbers (i.e. the session service access points). UDP and UDP-Lite may also be used for multicast and broadcast, allowing senders to transmit to multiple receivers.

Using UDP: Application designers are generally aware that UDP does not provide any reliability; e.g., it does not retransmit any lost packets. Often, this is a main reason to consider UDP as a transport. Applications that do require reliable message delivery therefore need to implement appropriate protocol mechanisms in their applications (e.g. tftp). UDP's best-effort service does not protect against datagram duplication, i.e., an application may receive multiple copies of the same UDP datagram. Application designers therefore

need to verify that their application gracefully handles datagram duplication, and may need to implement mechanisms to detect duplicates. The Internet may also significantly delay some packets with respect to others, e.g., due to routing transients, intermittent connectivity, or mobility. This can cause reordering, where UDP datagrams arrive at the receiver in an order different from the transmission order. Applications that require ordered delivery must restore datagram ordering themselves. The burden of needing to code all these protocol mechanisms can be avoided by using TCP!

Q.5. What is IP addressing? Discuss the different classes of IP addressing.

An identifier for a computer or device on a TCP/IP network. Networks using the

TCP/IP protocol route messages based on the IP address of the destination. The format of an IP address is a 32-bit numeric address written as four numbers separated by periods. Each number can be zero to 255. For example, 1.160.10.240 could be an IP address. Within an isolated network, you can assign IP addresses at random as long as each one is unique. However, connecting a private network to the Internet requires using registered IP addresses (called Internet addresses) to avoid duplicates. The four numbers in an IP address are used in different ways to identify a particular network and a host on that network. Four regional Internet registries -- ARIN, RIPE NCC, LACNIC and APNIC -- assign Internet addresses from the following three classes:

Class A - supports 16 million hosts on each of 126 networks
Class B - supports 65,000 hosts on each of 16,000 networks
Class C - supports 254 hosts on each of 2 million networks

The number of unassigned Internet addresses is running out, so a new classless scheme called CIDR is gradually replacing the system based on classes A, B, and C, and is tied to the adoption of IPv6.

IP address classes: These IP addresses can further be broken down into classes. These classes are A, B, C, D and E, and their possible ranges can be seen in Figure 2 below.

Class   Start address   Finish address
A       0.0.0.0         126.255.255.255
B       128.0.0.0       191.255.255.255
C       192.0.0.0       223.255.255.255
D       224.0.0.0       239.255.255.255
E       240.0.0.0       255.255.255.255

Figure 2. IP address classes.

If you look at the table you may notice something strange: the range of IP addresses from Class A to Class B skips the 127.0.0.0-127.255.255.255 range. That is because this range is reserved for the special addresses called loopback addresses, described below. The rest of the classes are allocated to companies and organizations based upon the amount of IP addresses that they may need. Listed below are descriptions of the IP classes and the organizations that will typically receive that type of allocation.

Default Network: The special network 0.0.0.0 is generally used for routing.

Class A: From the table above you see that there are 126 Class A networks. Each of these networks consists of 16,777,214 possible IP addresses that can be assigned to devices and computers. This type of allocation is generally given to very large networks, such as multi-national companies.

Loopback: This is the special 127.0.0.0 network that is reserved as a loopback to your own computer. These addresses are used for testing and debugging of your programs or hardware.

Class B: This class consists of 16,384 individual networks, each allocation consisting of 65,534 possible IP addresses. These blocks are generally allocated to Internet Service Providers and large networks, like a college or major hospital.

Class C: There is a total of 2,097,152 Class C networks available, with each network consisting of 254 usable IP addresses. This type of class is generally given to small to mid-sized companies.

Class D: The IP addresses in this class are reserved for a service called Multicast.

Class E: The IP addresses in this class are reserved for experimental use.

Broadcast: This is the special address 255.255.255.255, used for broadcasting messages to the entire network that your computer resides on.
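The class boundaries in the table can be expressed as a small classifier keyed on the first octet (an illustrative helper, not part of the original text):

```python
def ip_class(address):
    """Classify a dotted-quad IPv4 address by its first octet, per the table above."""
    first = int(address.split('.')[0])
    if first == 127:
        return 'Loopback'   # the reserved 127.x.x.x range between Class A and B
    if first <= 126:
        return 'A'
    if first <= 191:
        return 'B'
    if first <= 223:
        return 'C'
    if first <= 239:
        return 'D'
    return 'E'

print(ip_class('10.1.2.3'))    # A
print(ip_class('172.16.0.1'))  # B
print(ip_class('224.0.0.5'))   # D
```

Only the leading octet matters because the class scheme fixes the network/host split at octet boundaries; CIDR replaced exactly this rigidity.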

Private Addresses: There are also blocks of IP addresses that are set aside for internal private use, for computers not directly connected to the Internet. These IP addresses are not supposed to be routed through the Internet, and most service providers will block the attempt to do so. These IP addresses are used for internal use by company or home networks that need to use TCP/IP but do not want to be directly visible on the Internet. These IP ranges are:

Class   Private Start Address   Private End Address
A       10.0.0.0                10.255.255.255
B       172.16.0.0              172.31.255.255
C       192.168.0.0             192.168.255.255
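Python's standard ipaddress module already knows these private ranges, which gives a quick way to check whether an address falls inside them (the sample addresses below are arbitrary):

```python
import ipaddress

# Three addresses from the private ranges above, plus one public address.
for addr in ['10.8.0.1', '172.20.1.1', '192.168.0.10', '8.8.8.8']:
    # is_private is True for the RFC 1918 ranges listed in the table.
    print(addr, ipaddress.ip_address(addr).is_private)
```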

If you are on a home/office private network and want to use TCP/IP, you should assign your computers/devices IP addresses from one of these three ranges. That way your router/firewall is the only device with a true Internet IP address, which makes your network more secure.

Common Problems and Resolutions: The most common problem people have is accidentally assigning an IP address to a device on the network that is already assigned to another device. When this happens, the other computers will not know which device should get the information, and you can experience erratic behavior. On most operating systems and devices, if there are two devices on the local network that have the same IP address, you will generally get an "IP Conflict" warning. If you see this warning, it means that the device giving the warning detected another device on the network using the same address. The best solution to avoid a problem like this is to use a service called DHCP, which almost all home routers provide. DHCP, or Dynamic Host Configuration Protocol, is a service that assigns addresses to devices and computers. You tell the DHCP server what range of IP addresses you would like it to assign, and the DHCP server then takes the responsibility of assigning those IP addresses to the various devices and keeping track of them, so that each IP address is assigned only once.

Q.6. Define cryptography. Discuss two cryptography techniques.

Cryptography is the science of information security. The word is derived from the Greek kryptos, meaning hidden. Cryptography is closely related to the disciplines of cryptology and cryptanalysis.

Cryptography includes techniques such as microdots, merging words with images, and other ways to hide information in storage or transit. However, in today's computer-centric world, cryptography is most often associated with scrambling plaintext (ordinary text, sometimes referred to as cleartext) into ciphertext (a process called encryption), then back again (known as decryption). Individuals who practice this field are known as cryptographers.

Modern cryptography concerns itself with the following four objectives:
1) Confidentiality (the information cannot be understood by anyone for whom it was unintended)
2) Integrity (the information cannot be altered in storage or transit between sender and intended receiver without the alteration being detected)
3) Non-repudiation (the creator/sender of the information cannot deny at a later stage his or her intentions in the creation or transmission of the information)
4) Authentication (the sender and receiver can confirm each other's identity and the origin/destination of the information)

3. TYPES OF CRYPTOGRAPHIC ALGORITHMS

There are several ways of classifying cryptographic algorithms. For the purposes of this paper, they will be categorized based on the number of keys that are employed for encryption and decryption, and further defined by their application and use. The three types of algorithms that will be discussed are (Figure 1):

Secret Key Cryptography (SKC): Uses a single key for both encryption and decryption
Public Key Cryptography (PKC): Uses one key for encryption and another for decryption
Hash Functions: Uses a mathematical transformation to irreversibly "encrypt" information

FIGURE 1: Three types of cryptography: secret-key, public key, and hash function.
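The one-way property of hash functions can be seen with Python's hashlib: the same input always produces the same fixed-length digest, and there is no decryption step to recover the input.

```python
import hashlib

# SHA-256 maps any input to a fixed 256-bit (64 hex character) digest.
digest1 = hashlib.sha256(b'attack at dawn').hexdigest()
digest2 = hashlib.sha256(b'attack at dusk').hexdigest()

print(len(digest1))        # 64
print(digest1 == digest2)  # False: similar inputs give unrelated digests
```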

3.1. Secret Key Cryptography With secret key cryptography, a single key is used for both encryption and decryption. As shown in Figure 1A, the sender uses the key (or some set of rules) to encrypt the plaintext and sends the ciphertext to the receiver. The receiver applies the same key (or ruleset) to decrypt the message and recover the plaintext. Because a single key is used for both functions, secret key cryptography is also called symmetric encryption. With this form of cryptography, it is obvious that the key must be known to both the sender and the receiver; that, in fact, is the secret. The biggest difficulty with this approach, of course, is the distribution of the key. Secret key cryptography schemes are generally categorized as being either stream ciphers or block ciphers. Stream ciphers operate on a single bit (byte or computer word) at a time and implement some form of feedback mechanism so that the key is constantly changing. A block cipher is so-called because the scheme encrypts one block of data at a time using the same key on each block. In general, the same plaintext block will always encrypt to the same ciphertext when using the same key in a block cipher whereas the same plaintext will encrypt to different ciphertext in a stream cipher.

Stream ciphers come in several flavors but two are worth mentioning here. Self-synchronizing stream ciphers calculate each bit in the keystream as a function of the previous n bits in the keystream. It is termed "self-synchronizing" because the decryption process can stay synchronized with the encryption process merely by knowing how far into the n-bit keystream it is. One problem is error propagation; a garbled bit in transmission will result in n garbled bits at the receiving side. Synchronous stream ciphers generate the keystream in a fashion independent of the message stream, but by using the same keystream generation function at sender and receiver. While stream ciphers do not propagate transmission errors, they are, by their nature, periodic, so that the keystream will eventually repeat. Block ciphers can operate in one of several modes; the following four are the most important:

Electronic Codebook (ECB) mode is the simplest, most obvious application: the secret key is used to encrypt the plaintext block to form a ciphertext block. Two identical plaintext blocks, then, will always generate the same ciphertext block. Although this is the most common mode of block ciphers, it is susceptible to a variety of brute-force attacks.

Cipher Block Chaining (CBC) mode adds a feedback mechanism to the encryption scheme. In CBC, the plaintext is exclusively-ORed (XORed) with the previous ciphertext block prior to encryption. In this mode, two identical blocks of plaintext never encrypt to the same ciphertext.
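The ECB weakness and the CBC fix can be demonstrated with a deliberately toy 4-byte XOR "cipher" (not a real cipher; the key, IV, and block size are arbitrary choices for illustration only):

```python
# Toy 4-byte XOR "block cipher" (NOT secure) used only to contrast ECB and CBC.
KEY = b'\x13\x57\x9b\xdf'
BLOCK = 4

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt_ecb(plaintext):
    """ECB: each block is encrypted independently with the same key."""
    blocks = [plaintext[i:i + BLOCK] for i in range(0, len(plaintext), BLOCK)]
    return [xor(b, KEY) for b in blocks]

def encrypt_cbc(plaintext, iv):
    """CBC: XOR each block with the previous ciphertext block before encrypting."""
    blocks = [plaintext[i:i + BLOCK] for i in range(0, len(plaintext), BLOCK)]
    out, prev = [], iv
    for b in blocks:
        prev = xor(xor(b, prev), KEY)   # chain in the previous ciphertext block
        out.append(prev)
    return out

msg = b'AAAAAAAA'                       # two identical plaintext blocks
print(encrypt_ecb(msg)[0] == encrypt_ecb(msg)[1])                  # True: ECB leaks repetition
print(encrypt_cbc(msg, b'\x01\x02\x03\x04')[0] ==
      encrypt_cbc(msg, b'\x01\x02\x03\x04')[1])                    # False: CBC hides it
```

The two identical plaintext blocks produce identical ciphertext blocks under ECB but different ones under CBC, which is exactly the brute-force/pattern exposure that CBC's feedback mechanism removes.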

Cipher Feedback (CFB) mode is a block cipher implementation as a self-synchronizing stream cipher. CFB mode allows data to be encrypted in units smaller than the block size, which might be useful in some applications such as encrypting interactive terminal input. If we were using 1-byte CFB mode, for example, each incoming character is placed into a shift register the same size as the block, encrypted, and the block transmitted. At the receiving side, the ciphertext is decrypted and the extra bits in the block (i.e., everything above and beyond the one byte) are discarded.

Output Feedback (OFB) mode is a block cipher implementation conceptually similar to a synchronous stream cipher. OFB prevents the same plaintext block from generating the same ciphertext block by using an internal feedback mechanism that is independent of both the plaintext and ciphertext bitstreams.

A nice overview of these different modes can be found at progressive-coding.com.

Secret key cryptography algorithms that are in use today include:

Data Encryption Standard (DES): The most common SKC scheme used today, DES was designed by IBM in the 1970s and adopted by the National Bureau of Standards (NBS) [now the National Institute of Standards and Technology (NIST)] in 1977 for commercial and unclassified government applications. DES is a block cipher employing a 56-bit key that operates on 64-bit blocks. DES has a complex set of rules and transformations that were designed specifically to yield fast hardware implementations and slow software implementations, although this latter point is becoming less significant since the speed of computer processors is several orders of magnitude faster today than twenty years ago. IBM also proposed a 112-bit key for DES, which was rejected at the time by the government; the use of 112-bit keys was considered in the 1990s, but conversion was never seriously pursued.