Requirements
Connectivity
Multiple-access links: more than two nodes may share a single physical link.
Point-to-point: the physical link is limited to a pair of nodes.
Limitations: multiple-access links are often limited in size, in terms of both the geographical distance they can cover and the number of nodes they can connect. If computer networks were limited to situations in which all nodes are directly connected to each other over a common physical medium, then networks would either be very limited in the number of computers they could connect, or the number of wires coming out of the back of each node would quickly become unmanageable and very expensive.
Switched networks are formed by nodes that forward data received on one link out onto another.
- Circuit-switched: employed by the telephone system.
o First establishes a dedicated circuit across a sequence of links and then allows the source node to send a stream of bits across this circuit to a destination node.
- Packet-switched: employed by the majority of computer networks.
o In packet-switched networks, the nodes send discrete blocks of data to each other. Each block of data is either a packet or a message.
Typically use a strategy called store-and-forward. Each node in a store-and-forward network first receives a complete packet over some link, stores the packet in its internal memory, and then forwards the complete packet to the next node.
Switches: nodes on the inside of the cloud that implement the network; their primary function is to store and forward packets.
Hosts: nodes on the outside of the cloud that use the network; they support users and run application programs.
An internetwork is a set of interconnected, independent networks.
- internet = a generic internetwork of networks.
- Internet = the currently operational TCP/IP Internet.
A router or gateway is a node that is connected to two or more networks; it plays the role of a switch as it forwards messages from one network to another.
An address is assigned to each node. An address is a byte string that identifies a node; the network can use a node's address to distinguish it from the other nodes connected to the network. When a source node wants the network to deliver a message to a certain destination node, it specifies the address of the destination node. If the sending and receiving nodes are not directly connected, then the switches and routers of the network use this address to decide how to forward the message toward the destination. The process of systematically determining how to forward messages toward the destination node based on its address is called routing.
Unicast sends a message to a single destination node, multicast sends a message to some subset of the other nodes, and broadcast sends a message to all nodes on the network.
Multiplexing A system resource is shared among multiple users. Synchronous time-division multiplexing (STDM)
Divide time into equal-sized quanta and in a round-robin fashion, give each flow a chance to send its data over the physical link. Frequency-division multiplexing (FDM)
Transmit each flow over the physical link at a different frequency, much the same way that the signals for different TV stations are transmitted at different frequencies on a physical cable TV link.
Limitations:
1. If one of the flows has no data to send, its share of the physical link remains idle, even if one of the other flows has data to transmit.
2. Both STDM and FDM are limited to situations in which the maximum number of flows is fixed and known ahead of time. It is not practical to resize the quantum or to add additional quanta in the case of STDM, or to add new frequencies in the case of FDM.
Statistical multiplexing
Physical link is shared over time, data transmitted from each flow on demand rather than during a predetermined time slot => Avoids idle time. Defines an upper bound on the size of the block of data that each flow is permitted to transmit at a given time. This limited-size block of data is known as a packet, to distinguish it from the arbitrarily large message that an application program might want to transmit. The source may need to fragment the message into several packets, with the receiver assembling the packets back into the original message. It is possible that the switch receives packets faster than the shared link can accommodate. In this case, the switch is forced to buffer these packets in its memory. If a switch receives packets faster than it can send them for an extended period of time, the switch will eventually run out of buffer space and some packets will have to be dropped. The switch is said to be congested in this state. Statistical multiplexing defines a cost-effective way for multiple users to share network resources. It defines the packet as the granularity with which the links of the network are allocated to different flows, with each switch able to schedule the use of the physical links it is connected to on a per-packet basis. Fairly allocating link capacity to different flows and dealing with congestion when it occurs are the key challenges of statistical multiplexing.
Network Architecture
OSI Architecture
Internet Architecture
Protocol = the specific protocol to be used.
o UNSPEC = unspecified; the combination of PF_INET and SOCK_STREAM already implies TCP.
Return value = a handle for the newly created socket, i.e. an identifier by which we can refer to the socket in the future.
2. On the server machine, the application process performs a passive open: the server says that it is prepared to accept connections, but it does not actually establish a connection.
int bind(int socket, struct sockaddr *address, int addr_len)
int listen(int socket, int backlog)
int accept(int socket, struct sockaddr *address, int *addr_len)
Bind: binds the newly created socket to the specified address. This is the network address of the local participant, the server. With the Internet protocols, address is a data structure that includes both the IP address of the server and a TCP port number. The port number is usually some well-known number specific to the service being offered; web servers commonly accept connections on port 80.
Listen: defines how many connections can be pending on the specified socket.
Accept: carries out the passive open. It is a blocking operation that does not return until a remote participant has established a connection; when it completes, it returns a new socket that corresponds to the just-established connection, and the address argument contains the remote participant's address.
3. On the client machine, the application process performs an active open. It says who it wants to communicate with by invoking the operation
int connect(int socket, struct sockaddr *address, int addr_len)
This operation does not return until TCP has successfully established a connection, at which time the application is free to begin sending data. In this case, address contains the remote participant's address. The client usually specifies only the remote participant's address and lets the system fill in the local information. While a server usually listens for messages on a well-known port, a client typically does not care which port it uses for itself; the OS simply selects an unused one.
4. Once a connection is established, the application processes invoke the following operations to send and receive data:
int send(int socket, char *message, int msg_len, int flags)
int recv(int socket, char *buffer, int buf_len, int flags)
The first operation sends the given message over the specified socket, while the second receives a message from the specified socket into the given buffer. Both operations take a set of flags that control certain details of the operation.
Lab 0: An Echo Server
CONCEPTS: Sockets
Sockets are the primary way to communicate between two computers/devices. They can be thought of as a long invisible pipe between 2 computers through which all their messages are exchanged. To communicate, both computers need to have a socket open.
- TO BE FILLED IN -
copied when invoking the receive operation. This forces the topmost protocol to copy the message from the application's buffer into a network buffer and vice versa, which is one of the most expensive things a protocol implementation can do. Most network subsystems define an abstract data type for messages that is shared by all protocols in the protocol graph. This provides copy-free ways of passing messages up and down the protocol graph, and of manipulating messages in other ways. It generally involves a linked list of pointers to message buffers.
Performance
Bandwidth and Latency
Network performance is measured in 2 fundamental ways: bandwidth (also called throughput) and latency (also called delay).
Bandwidth = number of bits that can be transmitted over the network in a certain period of time; a measure of the width of a frequency band.
Latency = how long it takes a message to travel from one end of a network to the other, measured strictly in terms of time.
a. Propagation: speed-of-light propagation delay
b. Transmit: amount of time it takes to transmit a unit of data (a function of network bandwidth and the size of the packet in which the data is carried)
c. Queue: queuing delays inside the network, since packet switches need to store packets for some time before forwarding them on an outbound link.
Total latency = Propagation + Transmit + Queue
Propagation = Distance / SpeedOfLight
Transmit = Size / Bandwidth
where Distance is the length of the wire over which the data travels, SpeedOfLight is the effective speed of light over the wire, Size is the size of the packet, and Bandwidth is the bandwidth at which the packet is transmitted.
Round-trip time (RTT) = how long it takes to send a message from one end of a network to the other and back.
Delay x Bandwidth product = maximum number of bits that could be in transit through the pipe at any given instant.
If we think of a channel between a pair of processes as a hollow pipe, where the latency corresponds to the length of the pipe and the bandwidth gives its diameter, then the delay x bandwidth product gives the volume of the pipe, i.e. how many bits it can hold.
Throughput = TransferSize / TransferTime
TransferTime = RTT + TransferSize / Bandwidth
Encoding
Signals propagate over physical links; thus we have to encode the binary data that the source node wants to send into the signals that the links are able to carry, and then decode the signal back into the corresponding binary data at the receiving node. Assume we are working with 2 discrete signals: high and low. These signals may correspond to 2 different voltages on a copper-based link, or 2 different power levels on an optical link.
The network adaptor, a piece of hardware that connects a node to a link, contains a signaling component that actually encodes bits into signals at the sending node and decodes signals into bits at the receiving node. Signals travel over a link between 2 signaling components, and bits flow between network adaptors.
Nonreturn to Zero (NRZ) is an encoding scheme that maps the data value 1 onto the high signal and the data value 0 onto the low signal.
2 problems with NRZ:
1. Baseline wander: the receiver keeps an average of the signal it has seen so far, and then uses this average to distinguish between low and high signals. Whenever the signal is significantly lower than this average, the receiver concludes that it has just seen a 0, and likewise for a 1. Too many consecutive 1s or 0s cause this average to change, making it more difficult to detect a significant change in the signal.
2. Clock recovery: frequent transitions from high to low and vice versa are necessary to enable clock recovery. Both the encoding and decoding processes are driven by a clock: every clock cycle the sender transmits a bit and the receiver recovers a bit. The sender's and the receiver's clocks have to be precisely synchronized in order for the receiver to recover the same bits the sender transmits. The receiver derives the clock from the received signal (the clock recovery process): whenever the signal changes, the receiver knows it is at a clock cycle boundary and can resynchronize itself. However, a long period of time without such a transition leads to clock drift.
Nonreturn to Zero Inverted (NRZI) is an encoding scheme in which the sender makes a transition from the current signal to encode a 1 and stays at the current signal to encode a 0. This solves the problem of consecutive 1s, but does nothing for consecutive 0s.
Manchester encoding transmits the XOR of the NRZ-encoded data and the clock.
Baud rate = the rate at which the signal changes.
Manchester encoding doubles the rate at which signal transitions are made on the link, which means that the receiver has half the time to detect each pulse of the signal. The bit rate is half the baud rate in Manchester encoding, so the encoding is considered only 50% efficient.
4B/5B encoding attempts to address the inefficiency of Manchester encoding without suffering from the problem of extended durations of high or low signals. The idea is to insert extra bits into the bit stream so as to break up long sequences of 0s or 1s: every 4 bits of actual data are encoded in a 5-bit code that is then transmitted to the receiver. The 5-bit codes are selected in such a way that each one has no more than one leading 0 and no more than two trailing 0s. Thus, when sent back to back, no pair of 5-bit codes results in more than 3 consecutive 0s being transmitted. The resulting 5-bit codes are then transmitted using the NRZI encoding, which solves the problem of consecutive 1s. This results in 80% efficiency.
Framing
When node A wishes to transmit a frame to node B, it tells its adaptor to transmit a frame from the node's memory. This results in a sequence of bits being sent over the link. The adaptor on node B then collects together the sequence of bits arriving on the link and deposits the corresponding frame in B's memory. Recognizing exactly what set of bits constitutes a frame, i.e. determining where a frame begins and ends, is the framing problem faced by the adaptor.
Binary Synchronous Communication (BISYNC) Protocol uses special characters known as sentinel characters to indicate where frames start and end. The beginning of a frame is denoted by a special SYN character. The data portion is then contained between two more special characters: STX (start of text) and ETX (end of text). The SOH (start of header) field serves the same purpose as the STX field.
Problem with the sentinel approach: the ETX character may appear in the data portion of the frame. BISYNC overcomes this by escaping the ETX character, preceding it with a data-link-escape (DLE) character whenever it appears in the body of a frame; the DLE character is itself escaped in the frame body. This approach is often called character stuffing because extra characters are inserted in the data portion of the frame. The frame format also includes a cyclic redundancy check (CRC) that is used to detect transmission errors.
Reliable Transmission
Frames are sometimes corrupted while in transit, with an error code like CRC used to detect such errors. While some error codes are strong enough to also correct errors, in practice the overhead is typically too large to handle the range of bit and burst errors that can be introduced on a network link. Even when error-correcting codes are used, some errors will be too severe to be corrected, so some corrupt frames must be discarded. A link-level protocol that wants to deliver frames reliably must somehow recover from these discarded frames.
An acknowledgement (ACK) is a small control frame that a protocol sends back to its peer saying that it has received an earlier frame. A control frame is a header without any data, although a protocol can piggyback an ACK on a data frame it happens to be sending in the opposite direction. The receipt of an acknowledgement indicates to the sender of the original frame that its frame was successfully delivered. If the sender does not receive an acknowledgement after a reasonable amount of time, it retransmits the original frame. This action of waiting a reasonable amount of time is called a timeout. The general strategy of using acknowledgements and timeouts to implement reliable delivery is sometimes called automatic repeat request (ARQ).
Stop-and-Wait
After transmitting one frame, the sender waits for an acknowledgment before transmitting the next frame. If the acknowledgment does not arrive after a certain period of time, the sender times out and retransmits the original frame.
Suppose the sender sends a frame and the receiver acknowledges it, but the acknowledgment is lost or delayed. This has the potential to cause duplicate copies of a frame to be delivered. To address this problem, the header for a stop-and-wait protocol usually includes a 1-bit sequence number, and the sequence numbers used for successive frames alternate.
Shortcoming: allows the sender to have only 1 outstanding frame on the link at a time, which may be far below the link's capacity.
Sliding Window
First, the sender assigns a sequence number to each frame. The sender maintains 3 variables: - Send window size (SWS) gives the upper bound on the number of outstanding (unacknowledged) frames that the sender can transmit - LAR denotes the sequence number of the last acknowledgment received - LFS denotes the sequence number of the last frame sent.
The sender also maintains the invariant LFS - LAR <= SWS.
When an acknowledgment arrives, the sender moves LAR to the right, thereby allowing the sender to transmit another frame. The sender also associates a timer with each frame it transmits, and it retransmits the frame should the timer expire before an ACK is received.
The receiver maintains 3 variables:
- Receive window size (RWS) gives the upper bound on the number of out-of-order frames that the receiver is willing to accept
- LAF denotes the sequence number of the largest acceptable frame
- LFR denotes the sequence number of the last frame received.
The receiver maintains the invariant LAF - LFR <= RWS.
Lecture 7
Sliding window on the sender:
- Assign a sequence number to each frame
- Maintain 3 variables
o Send window size (SWS) bounds the number of unacked frames the sender can transmit
o Last ack received (LAR) denotes the sequence number of the last ack
o Last frame sent (LFS) denotes the sequence number of the last frame sent
o Invariant: LFS - LAR <= SWS
Sender actions
- When an ACK arrives, move LAR to the right
Sliding window on the receiver:
- Maintains 3 window variables. When a frame with sequence number SeqNum arrives, discard it if it is outside the window; if it is inside the window, decide what to ACK
- Cumulative ack: a variable SeqNumToAck denotes the largest sequence number that can be cumulatively acknowledged
Question: Delay: 100 ms; Bandwidth: 1 Mbps; Packet size: 1000 bytes; Ack: 40 bytes. Window size = largest amount of unacked data. How long does it take to ack a packet?
RTT = 2 x 100 ms + transmission delay of a packet (1000 B) + transmission delay of an ack (40 B) ≈ 208 ms
How many packets can the sender send in an RTT?
- 1 Mbps x 208 ms / 8000 bits ≈ 26
- Roughly 13 packets in the pipe from sender to receiver
Concurrent Logical Channels
Multiple Access Links
- Many nodes attached to the same link
o Ethernet
o Token rings
o Wireless networks (802.11/WiFi)
- Problem: who gets to send a frame?
o Multiple senders lead to collisions
- Solution: a general technique
o Multiple access with collision detection
Ethernet
- Original design: the 802.3 standard defines both MAC and physical layer details
- Speed: 10 Mbps to 10 Gbps