
ASSIGNMENT OF CAP-558

Submitted to: Mrs. Amandeep Bagga
Submitted by: Reg. No. 11209941, Roll No. 1207B43, Section: 1207

Homework - I
Course Code: CAP558
Course Title: Data Communication and Networks

Q1: What are the fundamental characteristics on which the effectiveness of data communication depends?

Ans. The effectiveness of a data communication system depends on four fundamental characteristics:

1. Delivery: The system must deliver data to the correct destination. Data must be received by the intended device or user.

2. Accuracy: The system must deliver the data accurately. Data that have been altered in transmission and left uncorrected are unusable.

3. Timeliness: The system must deliver data in a timely manner. Data delivered late are useless. In the case of video and audio, timely delivery means delivering data as they are produced, without significant delay. This kind of delivery is called real-time transmission.

4. Jitter: Jitter refers to the variation in packet arrival time; it is the uneven delay in the delivery of packets. For example, assume that video packets are sent every 30 ms. If some packets arrive with a 30-ms delay and others with a 40-ms delay, the result is uneven video quality, as the sketch below illustrates.
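A minimal sketch of how jitter could be quantified from packet delays; the 30-ms/40-ms values are the illustrative figures from the example above, and the simple average-of-variations measure used here is just one common way to express jitter, not a measure prescribed by the text.

```python
# Sketch: quantifying jitter from per-packet delays (values from the example above).
delays_ms = [30, 40, 30, 40, 30, 40]  # alternating one-way delays, in milliseconds

# Variation between consecutive packets' delays.
variations = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]

# A simple jitter estimate: the average of those variations.
jitter_ms = sum(variations) / len(variations)
print(f"Average jitter: {jitter_ms:.1f} ms")  # -> 10.0 ms of uneven delay
```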

Q2: What are the advantages of distributed processing?

Ans. Lower Cost:


Large organizations invest in expensive mainframes and supercomputers to function as centralized servers. A mainframe machine, for example, costs several hundred thousand dollars versus several thousand dollars for a few minicomputers, according to the University of New Mexico. Distributed data processing considerably lowers the cost of data sharing and networking across an organization by using several minicomputers that cost significantly less than mainframe machines.

Reliable:

Hardware glitches and software anomalies can cause single-server processing to malfunction and fail, resulting in a complete system breakdown. Distributed data processing is more reliable, since multiple control centres are spread across different machines. A glitch in any one machine does not impact the network, since another machine takes over its processing capability. Faulty machines are quickly isolated and repaired. This makes distributed data processing more reliable than single-server processing systems.

Improved Performance and Reduced Processing Time:


Single computers are limited in their performance and efficiency. An easy way to increase performance is to add another computer to the network. Adding yet another computer will further augment performance, and so on. Distributed data processing works on this principle: a job gets done faster if multiple machines handle it in parallel. Complicated statistical problems, for example, are broken into modules and allocated to different machines, where they are processed simultaneously. This significantly reduces processing time and improves performance; a short sketch of this idea follows below.
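A minimal sketch of the split-and-process-in-parallel idea described above, using Python's multiprocessing module on a single machine as a stand-in for separate machines; the workload (summing squares over chunks of a number range) is purely illustrative.

```python
from multiprocessing import Pool

def process_module(chunk):
    """Illustrative unit of work: sum of squares over one chunk of the problem."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    # Break the overall job into modules (chunks of the input range)...
    chunks = [range(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]

    # ...and allocate them to workers that process the chunks simultaneously.
    with Pool(processes=4) as pool:
        partial_results = pool.map(process_module, chunks)

    print("Combined result:", sum(partial_results))
```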

Flexible:
Individual computers that comprise a distributed network can be present at different geographical locations. For example, an organizational distributed network comprising three computers can have each machine in a different branch. The three machines are interconnected via the Internet and are able to process data in parallel, even while at different locations. This makes distributed data-processing networks more flexible. The system is also flexible in terms of increasing or decreasing processing power: adding more nodes or computers to the network increases processing power and overall system capability, while removing computers from the network decreases processing power.

Q3: Name and explain the factors that affect the performance of a network.

Ans. Latency:
Think of latency as the speed limit on a highway. Traffic speed on a motorway is affected by many variables such as weather, other traffic, and highway signs. Likewise, data packets traversing a network are affected by many variables as well. The first step in mitigating latency is to break down the overall latency into that due to the network and that due to the application and its associated servers. With that determination made, visually graph both the application and network latency to help identify patterns and anomalies that deserve closer attention so that you can later drill down and figure out exactly what is causing the bottleneck.
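As a rough companion to the paragraph above, a minimal sketch that samples round-trip connect time to a server so the values can be graphed and inspected for patterns; the host, port, and sample count are placeholders, not anything prescribed by the text.

```python
import socket
import time

def connect_latency_ms(host, port, timeout=2.0):
    """Measure one round trip: the time to open (and close) a TCP connection."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    # Placeholder target; substitute a host and port you are allowed to probe.
    samples = [connect_latency_ms("example.com", 80) for _ in range(5)]
    print("Latency samples (ms):", [round(s, 1) for s in samples])
    print("Average (ms):", round(sum(samples) / len(samples), 1))
```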

Throughput:

Throughput is the amount of traffic a network can carry at any one time. Like the traffic analogy used for latency above, think of throughput as analogous to the number of lanes on a highway: the more lanes, the more traffic the highway can accommodate. In a network, the higher the bit rate, the faster files transfer. Slow response times might be a sign that your network does not have enough throughput.
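A minimal back-of-the-envelope sketch relating throughput to file transfer time; the file size and link rates are assumed values chosen only to illustrate the arithmetic.

```python
# Sketch: how throughput (bit rate) translates into file transfer time.
file_size_bytes = 100 * 1024 * 1024          # assumed 100 MB file
file_size_bits = file_size_bytes * 8

for throughput_mbps in (10, 100, 1000):      # assumed link rates in Mbit/s
    seconds = file_size_bits / (throughput_mbps * 1_000_000)
    print(f"{throughput_mbps:>5} Mbit/s -> {seconds:.1f} s to transfer the file")
```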

Packet Loss:
Glitches, errors, or network overloading might result in the loss of data packets. Sometimes routers or switches might shed traffic intentionally to maintain overall network performance or to enforce a particular service level. In a well-tuned network intentional packet loss is hopefully a rare occurrence, though packet loss is still something that happens regularly due to a host of other reasons, and must be monitored closely to ensure overall network performance.

Retransmission:
When packet loss does occur, the lost packets are retransmitted. This retransmission process can cause two delays: one from re-sending the data, and a second from waiting until the data is received in the correct order before it is forwarded up the protocol stack. These factors are not exclusive, but they do help paint a picture of the many things that can contribute to a slow network. Hopefully, armed with this information, you can start accurately diagnosing your network before performance issues arise.
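A minimal sketch of how packet loss feeds back into retransmission delay, using the standard expected-attempts result for independent losses (on average 1/(1-p) transmissions per packet); the loss rates and per-attempt delay are assumed illustrative values, not figures from the text.

```python
# Sketch: expected retransmission cost under an assumed independent loss rate p.
# With probability p of losing each attempt, a packet needs on average
# 1 / (1 - p) transmissions before it gets through.
rtt_ms = 50.0                      # assumed round-trip time per attempt

for p in (0.001, 0.01, 0.05):      # assumed packet loss rates
    expected_attempts = 1.0 / (1.0 - p)
    expected_delay_ms = expected_attempts * rtt_ms
    print(f"loss {p:>5.1%}: ~{expected_attempts:.3f} attempts, "
          f"~{expected_delay_ms:.1f} ms average delivery delay")
```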

Q4: What is a standard? Why are standards required? What are the different standards followed in data communication?

Ans. A standard is an agreed, repeatable way of doing something. It is a published document that contains a technical specification or other precise criteria designed to be used consistently as a rule, guideline, or definition. Standards help to make life simpler and to increase the reliability and effectiveness of many goods and services we use. Standards are created by bringing together the experience and expertise of all interested parties, such as the producers, sellers, buyers, users, and regulators of a particular material, product, process, or service. Standards are designed for voluntary use and do not impose any regulations. However, laws and regulations may refer to certain standards and make compliance with them compulsory. For example, the physical characteristics and format of credit cards are set out in standard number BS EN ISO/IEC 7810:1996; adhering to this standard means that the cards can be used worldwide.

Why standards are needed:

Standards are required for creating and maintaining an open and competitive market for equipment manufacturers, and for guaranteeing national and international interoperability of data and telecommunication technology and processes. Standards provide guidelines to manufacturers, vendors, government agencies, and other service providers to ensure the kind of interconnectivity necessary in today's marketplace and in international communications.

Standards followed in data communication:

1. De facto: Standards that have not been approved by an organized body but have been adopted as standards through widespread use are de facto standards. De facto standards are often established originally by manufacturers who seek to define the functionality of a new product or technology.

2. De jure: Standards that have been legislated by an officially recognized body are de jure standards.

Standards are published by the following organizations:

1. International Organization for Standardization (ISO)
2. International Telecommunication Union - Telecommunication Standardization Sector (ITU-T)
3. American National Standards Institute (ANSI)
4. Institute of Electrical and Electronics Engineers (IEEE)
5. Electronic Industries Association (EIA)

Q5: Which is the most efficient transmission medium available for data communication? What are the disadvantages of optical fiber as a transmission medium?

Ans. An optical fiber (or fibre) is a glass or plastic fiber that carries light along its length. Fiber optics is the overlap of applied science and engineering concerned with the design and application of optical fibers. Optical fibers are widely used in fiber-optic communications, which permit transmission over longer distances and at higher bandwidths (data rates) than other forms of communication. Fibers are used instead of metal wires because signals travel along them with less loss, and they are also immune to electromagnetic interference. Fibers are also used for illumination, and are wrapped in bundles so they can carry images, allowing viewing in tight spaces. Specially designed fibers are used for a variety of other applications, including sensors and fiber lasers. Light is kept in the core of the optical fiber by total internal reflection, which causes the fiber to act as a waveguide. Fibers which support many propagation paths or transverse modes are called multi-mode fibers (MMF), while those which can support only a single mode are called single-mode fibers (SMF). Multi-mode fibers generally have a larger core diameter and are used for short-distance communication links and for applications where high power must be transmitted.

Single-mode fibers are used for most communication links longer than 550 meters (1,800 ft). Optical fiber can be used as a medium for telecommunication and networking because it is flexible and can be bundled as cables. It is especially advantageous for long-distance communication, because light propagates through the fiber with little attenuation compared to electrical cables; this allows long distances to be spanned with few repeaters. For these reasons, optical fiber is the most efficient and reliable transmission medium for data communication.
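A minimal numerical sketch of the total internal reflection condition mentioned above: light stays in the core when it strikes the core-cladding boundary beyond the critical angle. The refractive indices used here (1.48 core, 1.46 cladding) are assumed, typical-looking values, not figures from the text.

```python
import math

# Sketch: critical angle for total internal reflection at the core-cladding boundary.
n_core = 1.48      # assumed refractive index of the fiber core
n_cladding = 1.46  # assumed refractive index of the cladding (must be lower)

critical_angle_deg = math.degrees(math.asin(n_cladding / n_core))
print(f"Critical angle: {critical_angle_deg:.1f} degrees")
# Rays hitting the boundary at angles larger than this (measured from the
# normal) are totally internally reflected and stay guided in the core.
```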

The disadvantages of optical fibers are:


Price - Even though the raw material for making optical fibres, sand, is abundant and cheap, optical fibres are still more expensive per metre than copper. However, one fibre can carry many more signals than a single copper cable, and the large transmission distances mean that fewer expensive repeaters are required.

Fragility - Optical fibres are more fragile than electrical wires.

Affected by chemicals - The glass can be affected by various chemicals, including hydrogen gas (a problem in underwater cables).

Opaqueness - Despite extensive military use, it is known that most fibres become opaque when exposed to radiation.

Requires special skills - Optical fibres cannot be joined together as easily as copper cable; joining them requires additional training of personnel and expensive precision splicing and measurement equipment.

Q6: The ABC Corporation has a fully connected mesh network consisting of eight devices. Calculate the total number of cable links needed and the number of ports for each device.

Ans. In a fully connected mesh with n devices, the number of links is n(n-1)/2 and each device needs n-1 ports. Here n = 8, so:

Cable links = 8 × 7 / 2 = 28
Ports per device = 8 - 1 = 7

A short cross-check of this calculation follows below.
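A minimal sketch, assuming nothing beyond the formulas already stated, that cross-checks the figures for n = 8; the function names are only illustrative.

```python
# Sketch: links and ports in a fully connected mesh of n devices.
def mesh_links(n):
    """Every pair of devices gets a dedicated link: n(n-1)/2."""
    return n * (n - 1) // 2

def ports_per_device(n):
    """Each device connects directly to every other device: n-1 ports."""
    return n - 1

n = 8  # devices in the ABC Corporation network
print(f"Cable links needed: {mesh_links(n)}")        # 28
print(f"Ports per device:   {ports_per_device(n)}")  # 7
```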
