Computer Networks and Telecommunication

Prof. Rathnakar Acharya


Computer Network: It is a collection of hardware components and computers interconnected by communication channels that allow sharing of resources and information.

Data may be transmitted, via communication links, between remote locations and the computer.

Networking is the collaboration of computer and telecommunications technology.


File Sharing

Print Sharing


Fax Sharing

Remote Access VPN

Shared Databases

Fault Tolerance

Internet Access and Security

Communication and collaboration

Benefits of using networks:

Improved communication

Staff, suppliers and customers are able to share information and get in touch more easily

More information sharing can make the business more efficient

Organizations can reduce costs and improve efficiency

Network administration can be centralized

Organizations can reduce errors and improve consistency



Computer networks can be classified in different ways:

Function based,

Area coverage based,

Ownership based and media based.

Computer Networks Classification





LAN: A network that connects communication devices or computers within 2,000 feet to share information.

Features of a LAN:

Multiple user computers connected together

Machines are spread over a small geographic region

Communication channels between the machines are usually privately owned.

Channels are relatively high capacity (Mbps).

Channels are relatively error free

Metropolitan Area Networks (MAN)

MANs are based on fiber optic transmission technology

Provide high-speed (10 Mbps) interconnection between sites.

A MAN can support both data and voice.


Wide Area Networks (WAN):

Features of WAN:

Multiple user computers connected together.

Machines are spread over a wide geographical region

Communications channels between the machines are usually furnished by a third party.

Channels are of relatively low capacity

Channels are relatively error-prone


Network Models

Client-Server: A form of distributed processing in which several computers share resources and are able to communicate with many other computers.


Peer-to-peer network

Advantages:


Simplicity of design and maintenance.

No server, all nodes on the network are fully employed and independent.

No single computer holds control of the entire network.


Disadvantages:

A failure of a node on a peer-to-peer network means that the network can no longer access the applications or data on that node, but the other nodes can function properly.

Because of the lack of centralized control, it is advisable to limit the network to a small number of users, around 10 to 12.


There are five basic components in any network:


The sender (Source Host)


The communications interface devices


The communications channel (Medium)


The receiver (Destination Host)


Communications software

Communication Interface Devices


Network Interface Cards


Characteristics of NICs include the following:

A NIC constructs, transmits, receives, and processes data travelling between a host and the network.

Each NIC has a manufacturer-provided, permanent and unique 6-byte (48-bit) MAC (media access control) address.

This address is also known as the physical address.

The NIC requires drivers to operate
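As a sketch of how a 48-bit MAC address is structured, the snippet below uses Python's standard-library `uuid.getnode()`, which reports this machine's MAC as an integer (it may fall back to a random value if no MAC is obtainable). It formats the address as colon-separated hex; the first three bytes are the manufacturer's OUI:

```python
import uuid

def format_mac(mac_int: int) -> str:
    """Render a 48-bit MAC address integer as colon-separated hex."""
    return ":".join(f"{(mac_int >> shift) & 0xFF:02X}" for shift in range(40, -8, -8))

def oui(mac_int: int) -> str:
    """First 3 bytes identify the manufacturer; the last 3 are unique per card."""
    return format_mac(mac_int)[:8]

mac = uuid.getnode()          # this machine's MAC, as an integer
print(format_mac(mac))        # e.g. "00:1A:2B:3C:4D:5E"
```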

Switches and Routers


Router: a communication processor that routes messages through several connected LANs or to a WAN.

Switch: a computer networking device that connects network segments or network devices.



Hubs


The main task of a bridge computer is to receive and pass data from one LAN to another.

In order to transmit this data successfully, the bridge magnifies the data transmission signal.

Repeaters are devices that solve the snag of signal degradation which results as data is transmitted along the various cables.

A repeater boosts or amplifies the signal before passing it on to the next section of cable.

Gateways: Gateways are also similar to bridges in that they relay data from network to network.

They translate data from one protocol to another.


MODEM stands for Modulator/Demodulator. In its simplest form, it is an encoding as well as decoding device used in data communication.


It is a device that converts a digital signal into an analog telephone signal and converts an analog telephone signal into a digital computer signal in a data communication system


Communication channels



Twisted pair cable (UTP)


Communications Software: Communications software manages the flow of data across a network.


Communications software is written to work with a wide variety of protocols, which are rules and procedures for exchanging data.

Access control

Network management

Data and file transmission

Error detection and control


Star topology



Advantages:

It is easy to add new nodes and remove existing ones.

A node failure does not bring down the entire network

It is easier to diagnose network problems through a central hub.


Disadvantages:

If the central hub fails, the whole network ceases to function.

It costs more to cable a star configuration than other topologies.

Bus topology



Advantages:

Reliable in very small networks.

Requires the least amount of cable to connect the computers together and therefore is less expensive.

Is easy to extend.

A repeater can also be used to extend a bus configuration.


Disadvantages:

Heavy network traffic can slow a bus considerably.

An individual user may use a lot of bandwidth.

Each connection between two cables weakens the electrical signal.

A loose connection may disconnect the network.

Ring topology



Advantages:

Ring networks offer high performance for a small number of workstations or for larger networks where each station has a similar workload.

Ring networks can span longer distances than other types of networks.


Ring networks are easily extendable.


Disadvantages:

Relatively expensive and difficult to install.

Failure of one computer on the network can affect the whole network.

It is difficult to trouble shoot a ring network.

Adding or removing computers can disrupt the network

Mesh Topology



Advantages:

Yields the greatest amount of redundancy: in the event that one of the nodes fails, network traffic can be redirected to another node.

Network problems are easier to diagnose.


Disadvantages:

The cost of installation and maintenance is high (more cable is required than in any other configuration).

A large number of network connections is required: a full mesh of n nodes needs n(n-1)/2 links.
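The cabling cost of a full mesh can be quantified: every node links directly to every other node, giving n(n-1)/2 point-to-point links. A small sketch:

```python
def full_mesh_links(n: int) -> int:
    """Every node links directly to every other: n(n-1)/2 point-to-point links."""
    return n * (n - 1) // 2

print(full_mesh_links(4))    # 6 links for 4 nodes
print(full_mesh_links(10))   # 45 links for 10 nodes
```

The quadratic growth is why full mesh is rarely used beyond small, highly critical networks.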


Circuit Switching

A circuit-switched network is one that establishes a fixed-bandwidth circuit (or channel) between nodes and terminals before the users may communicate, as if the nodes were physically connected with an electrical circuit. In circuit switching, this path is decided upon before the data transmission starts.

Message Switching:

The computer receives all transmitted data, stores it, and, when an outgoing communication line is available, forwards it to the receiving point.

Packet switching:

Packet switching refers to protocols in which messages are broken up into small transmission units called packets
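A minimal sketch of the idea (the `packetize`/`reassemble` helpers are hypothetical names, not part of any real protocol): each packet carries a sequence number so the receiver can reorder and rejoin them even when they arrive out of order:

```python
import random

def packetize(message: bytes, payload_size: int = 4):
    """Break a message into numbered packets (header: sequence number)."""
    return [(seq, message[i:i + payload_size])
            for seq, i in enumerate(range(0, len(message), payload_size))]

def reassemble(packets):
    """Packets may arrive out of order; sort by sequence number and rejoin."""
    return b"".join(payload for _, payload in sorted(packets))

pkts = packetize(b"HELLO, PACKET SWITCHING")
random.shuffle(pkts)          # simulate out-of-order delivery on the network
assert reassemble(pkts) == b"HELLO, PACKET SWITCHING"
```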


Protocols are software that perform a variety of actions necessary for data transmission between computers.

Protocols are a set of rules for inter-computer communication that have been agreed upon and implemented by many vendors, users and standards bodies.

Network protocols, which are essentially software, are sets of rules for:

Communicating timings, sequencing, formatting, and error checking for data transmission.

Providing standards for data communication

A protocol defines the following three aspects of digital communication.

(a) Syntax: The format of data being exchanged, character set used, type of error correction used, and type of encoding scheme (e.g., signal levels) being used.

(b) Semantics: Type and order of messages used to ensure reliable and error-free information transfer.

(c) Timing: Defines data rate selection and correct timing for various events during data transfer. As stated earlier, communication protocols are rules established to govern the way data are transmitted in a computer network.

The Application Layer initiates or accepts a request from the user.

The Presentation Layer adds formatting, display and encryption information to the packet.

The Session Layer adds traffic flow information to determine when the packet gets sent or received.

The Transport Layer adds error handling information such as a CRC.

The Network Layer does sequencing and adds address information to the packet.

The Data Link Layer adds error checking information and prepares the data for going on to the destination.

The Physical Layer transmits the raw bits over the physical medium.
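The error checking added at the lower layers can be illustrated with a CRC-32 trailer. This sketch uses Python's standard `zlib.crc32`; real link layers compute the CRC in hardware over the whole frame, so this is only an analogy:

```python
import zlib

def frame_with_crc(payload: bytes) -> bytes:
    """Append a 4-byte CRC-32 checksum, as a data link layer might."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_frame(frame: bytes) -> bool:
    """Recompute the CRC over the payload and compare with the trailer."""
    payload, received = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == received

frame = frame_with_crc(b"hello")
assert check_frame(frame)                 # intact frame passes the check
corrupted = b"jello" + frame[5:]          # a bit error in transit
assert not check_frame(corrupted)         # receiver detects it and can discard
```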



OSI Model

It is an abstract description for layered communications and computer network protocol design.

It was developed as part of the Open Systems Interconnection (OSI) initiative.

It divides network architecture into seven layers; Application, Presentation, Session, Transport, Network, Data-Link, and Physical Layers.
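The layering can be pictured as successive encapsulation: each layer wraps the unit handed down from the layer above, and the receiver unwraps in reverse. A toy sketch in which string "headers" stand in for real protocol headers:

```python
LAYERS = ["Application", "Presentation", "Session", "Transport",
          "Network", "Data Link", "Physical"]

def encapsulate(payload: str) -> str:
    """On the sender, each layer wraps what it receives from the layer above."""
    for layer in LAYERS:
        payload = f"[{layer}|{payload}]"
    return payload

def decapsulate(frame: str) -> str:
    """On the receiver, each layer strips its own header, bottom-up."""
    for layer in reversed(LAYERS):
        assert frame.startswith(f"[{layer}|") and frame.endswith("]")
        frame = frame[len(layer) + 2:-1]
    return frame

frame = encapsulate("GET /index.html")
print(frame.startswith("[Physical|"))   # the Physical wrapper is outermost
assert decapsulate(frame) == "GET /index.html"
```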


The protocol suite used on the Internet is called TCP/IP (Transmission Control Protocol/Internet Protocol).

TCP/IP has two parts:

TCP deals with exchange of sequential data

IP handles packet forwarding and is used on the Internet

TCP/IP has four layers:

Application Layer, which provides services directly to the users, such as e-mail.

Transport Layer, which provides end-to-end communication between applications and verifies correct packet arrival.

Internet Layer, which provides packet routing, addressing and integrity checking.

Network Interface Layer, which provides an interface to the network hardware and device drivers. This can also be called the Data Link Layer.

FTP is the File Transfer Protocol. It is a common mode of transferring files between two Internet sites. FTP is an application layer protocol.


Bandwidth is the difference between the highest and the lowest frequency that can be used to transmit data.

It also represents the channel's information carrying capacity.

It is measured in bits per second (bps).
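For example, the ideal transfer time of a file is its size in bits divided by the channel capacity in bits per second. A sketch that ignores protocol overhead and latency:

```python
def transfer_time_seconds(size_bytes: int, bandwidth_bps: int) -> float:
    """Ideal transfer time: bits to move divided by channel capacity (bits/s)."""
    return size_bytes * 8 / bandwidth_bps

# A 1 MB file over a 10 Mbps MAN link (ignoring overhead):
t = transfer_time_seconds(1_000_000, 10_000_000)
print(f"{t:.2f} s")   # 0.80 s
```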



Wireless LAN (WLAN)


How do wireless LANs work?

Wireless LANs use electromagnetic waves (radio or infrared) to communicate information from one point to another without relying on any physical connection.

In a typical wireless LAN configuration, a transmitter/receiver (transceiver) device, called an access point, connects to the wired network from a fixed location using standard cabling.


The client/server model is a computing model that acts as a distributed application which partitions tasks or workloads between the providers of a resource or service, called servers, and service requesters, called clients.

Often clients and servers communicate over a computer network.

A server machine is a host that is running one or more server programs which share their resources with clients.

A client does not share any of its resources, but requests a server's content or service function.

Clients therefore initiate communication sessions with servers which await incoming requests.
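This request/response pattern can be sketched with TCP sockets on the loopback interface, using only the Python standard library; the echo "service" is purely illustrative. The server binds, listens and awaits a request; the client initiates the session:

```python
import socket
import threading

def serve_once(server_sock):
    """Server side: await one incoming request and answer it."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)            # read the client's request
        conn.sendall(b"echo: " + data)    # return the service's response

# The server shares a resource (here, a trivial echo service) with clients.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))                # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=serve_once, args=(srv,), daemon=True).start()

# The client initiates the communication session.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"balance?")
reply = cli.recv(1024)
cli.close()
srv.close()
print(reply)   # b'echo: balance?'
```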

Implementation examples of client / server technology:

Online banking application

Internal call centre application

Applications for end-users that are stored in the server

E-commerce online shopping page

Intranet applications

Financial, Inventory applications based on the client Server technology.

Telecommunication based on Internet technologies

Benefits of the Client /Server Technology

Reduce the total cost of ownership.

Increased Productivity

End user productivity

Developer productivity

Reduces the hardware cost

It makes the organization more effective and efficient

Management control over the organization

Long term cost benefit

Can use software from multiple vendors

Client/Server Components

Client

Server

Middleware

Fat-client or Fat-server

Network


A VPN is a private network that uses a public network (usually the Internet) to connect remote sites or users together.

Instead of using a dedicated, real-world connection such as a leased line, a VPN uses "virtual" connections routed through the Internet from the company's private network to the remote site or employee.

Intranet-based: When a company has one or more remote locations that it wishes to join in a single private network, it can create an intranet VPN to connect LAN to LAN.

Extranet-based: When a company has a close relationship with another company (partner, supplier or customer), it can build an extranet VPN that connects LAN to LAN and allows all of the various companies to work in a shared environment. It is an extended intranet.


Database servers

Application Servers:

An application server is a server program that resides in the server and provides the business logic for the application program.

The server can be part of the network; more precisely, part of a distributed network.

First Tier: Front End - Browser (Thin Client) - a GUI interface lying at the client/workstation.

Second Tier: Middle Tier - Application Server - a set of application programs

Third Tier: Back End - Database Server.

Features of the Application Servers

Component Management: session management and synchronous/asynchronous client notifications, as well as executing server business logic

Fault Tolerance

Load Balancing

Transaction Management

Management Console


Categories of Application Server

Web Information Servers to generate web pages

Component Servers - database access and transaction processing services

Active Application Server - server-side logic

Print Servers

Transaction Servers - MTS is all about managing the way applications use components, and not just about managing transactions.

Explain caching server and proxy server (2009, 5 marks)

Types of Internet Servers

File server

Mail server

DNS server

Gopher server

Web server

FTP server

News server

Chat server

Caching server

Proxy server


The Tier: A tier is a distinct part of hardware or software.

Single Tier: A single computer that contains a database and a front end to access the database.

Generally, this type of system is used in small businesses.



A single-tier system requires only one stand-alone computer. It also requires only one installation of proprietary software, which makes it the most cost-effective system available.


It can be used by only one user at a time. A single-tier system is impractical for an organization which requires two or more users to interact with the organizational data store at the same time.

Two Tier systems: A two-tier system consists of a client and a server.

The database is stored on the server, and the interface used to access the database is installed on the client.

Two-tier architecture: A two-tier application or architecture is a client server application where the processing load can fall either on the server or on the client.

When the processing load falls on the client, the server simply acts as a controller between the data and the client.

Such a client is called a fat client and imposes heavy memory and bandwidth demands on the client's machine.

The three components are:

1. User system interface (such as session, text input, dialog, and display management services)

2. Processing management (such as process development, process enactment, process monitoring, and process resource services)

3. Database management (such as data and file services)



Since processing was shared between the client and the server, more users could interact with the system.


Performance deteriorates if the number of users is greater than 100.

Restricted flexibility and choice of DBMS, since the data language used in the server is proprietary to each vendor.

Limited functionality in moving program functionality across servers.

Three Tier: The three-tier architecture overcomes the limitations of the two-tier architecture.

The third tier (middle tier server) is between the user interface (client) and the data management (server) components.

This middle tier provides process management, where business logic and rules are executed, and can accommodate hundreds of users by providing functions such as queuing, application execution, and database staging.




Client-tier

Application-server-tier

Data-server-tier

The advantages

A 3-tier architecture solves a number of problems that are inherent to 2-tier architectures.

Clear separation of user-interface-control and data presentation from application-logic:

Client applications are simpler: quicker development through the reuse of pre-built business-logic components, and a shorter test phase because the server components have already been tested.

Dynamic load balancing: if performance bottlenecks occur, the server process can be moved to other servers at runtime.


Multi-tier architecture (often referred to as n-tier architecture) is a client-server architecture in which an application is executed by more than one distinct software agent.

The most widespread use of "multi-tier architecture" refers to three-tier architecture

The advantages of a multi-tier architecture are:

Forced separation of UI and business logic.

Low bandwidth network.

Business logic sits on a small number (maybe just one) of centralized machines.



A data center is a centralized repository for the storage, management and dissemination of data and information.

Data centers can be defined as highly secure, fault-resistant facilities hosting customer equipment that connects to telecommunications networks.

Often referred to as an Internet hotel, server farm, data farm, data warehouse, corporate data center, Internet service provider (ISP) or wireless application service provider (WASP), the purpose of a data center is to provide space and bandwidth connectivity for servers in a reliable, secure and scalable environment.

These data centers are also referred to as public data centers because they are open to customers.

Captive, or enterprise, data centers are usually reserved for the sole and exclusive use of the parent company, but essentially serve the same purpose.

These facilities can accommodate thousands of servers, switches, routers, racks, storage arrays and other associated telecom equipment.

Facilities provided by data centers:

housing websites,

providing data serving and other services for companies.

A data center may contain a network operations center (NOC), which is a restricted access area containing automated systems that constantly monitor server activity, Web traffic and network performance, and report even slight irregularities to engineers so that they can spot potential problems before they happen.

Types of data centers


Private, Enterprise data center, or Captive

Private Data Center:

A private data center (also called an enterprise data center) is managed by the organization's own IT department.

It provides the applications, storage, web-hosting and e-business functions needed to maintain full operations.

Public data centers:

A public data center (also called an internet data center) provides services ranging from equipment collocation to managed web-hosting.

Clients typically access their data and applications via the internet.

Data centers can be classified in tiers, Tier 1 being the most basic and inexpensive.

A Tier 1 facility does not necessarily need to have redundant power and cooling infrastructures.

It only needs a lock for security and can tolerate up to 28.8 hours of downtime per year.

Tier 4 is the most robust and costly. The more 'mission critical' an application is, the more redundancy, robustness and security are required of the data center.

A Tier 4 facility has redundant power and cooling, with multiple distribution paths that are active and fault tolerant.

The facility must permit no more than 0.4 hours of downtime per year.

Access should be controlled with biometric readers and single-person entryways.
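The downtime allowances translate directly into availability percentages over an 8,760-hour year:

```python
HOURS_PER_YEAR = 24 * 365   # 8760

def availability(downtime_hours: float) -> float:
    """Percent of the year the facility is up, given its allowed downtime."""
    return 100 * (HOURS_PER_YEAR - downtime_hours) / HOURS_PER_YEAR

print(f"Tier 1: {availability(28.8):.3f}%")   # 99.671%
print(f"Tier 4: {availability(0.4):.3f}%")    # 99.995%
```

So the step from Tier 1 to Tier 4 is a jump from roughly "three nines" to "four-and-a-half nines" of uptime.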

Which sectors use them?

Any large volume of data that needs to be centralized, monitored and managed centrally needs a data center.

The need for a data center depends on the size and criticality of the data.

Data centers are extremely capital-intensive facilities.

What can they do? Some of the value-added services that a data center provides are:

Database monitoring

Web monitoring

Backup and restore

Intrusion detection system (IDS)

Storage on demand

Features of Data Centers:


Data Security

Availability of Data

Electrical and power systems


Physical security:

Security guards

Proximity card and PIN for door access

Biometric access and PIN for door access

24 x 365 CCTV surveillance and recording

Data security:

Perimeter security: This is to manage both internal and external threats. It consists of firewalls, intrusion detection and content inspection; host security; anti-virus; and access control and administrative tools.

Access management: This covers both the applications and the operating systems that host these critical applications.

System monitoring and support

Storage: Data centers offer more than just network storage solutions.

Primary storage (SAN, NAS, DAS)

Secondary storage (tape libraries)

Tertiary storage (offline tape storage, such as DAT drives, and magneto-optical drives)

Constituents of a Data Centre

Network connectivity with various levels of physical media (optical fibre and copper) and service

Dual DG sets and dual UPS

HVAC systems for temperature control

Fire extinguishing systems

Physical security systems: swipe card/biometric entry systems, CCTV, guards and so on

Raised flooring

Network equipment

Network management software

Network security: segregating the public and private network, installing firewalls and intrusion detection systems (IDS)

Leveraging the best:

Using both enterprise/captive and public data centers yields cost savings

Provide value-added services

Be a one-stop solution provider and give customers an end-to-end outsourcing experience.

Challenges faced by the management

Maintaining a skilled staff and the high infrastructure needed for daily data center operations

Maximizing uptime and performance

Technology selection

Resource balancing

Disaster recovery sites

Data centers need to be equipped with appropriate disaster recovery systems that minimize downtime for their customers.

Downtime can be minimized by having proper disaster recovery (DR) plans for mission-critical types of organizations.

Cold site: a facility with space and basic infrastructure but no installed equipment; recovery takes days.

Warm site: a partially equipped facility; recovery takes hours to days.

Hot site: a fully equipped duplicate facility that can take over operations almost immediately.

Business Continuity Planning (BCP)

It is a documented description of the actions, resources and procedures to be followed before, during and after an event, so that functions vital to continued business operations are recovered and made operational within an acceptable time frame.

Disaster events:

Events that have the potential to significantly interrupt normal business processing.

They are associated with natural disasters like earthquakes, floods, tornadoes, thunderstorms, fire, etc.

Disasters are disruptions causing the entire facility to be inoperative for a lengthy period of time.

Catastrophes are disruptions resulting from the destruction of the processing facility.

Components of BCP

(i) Define requirements based on business needs,

(ii) Statements of critical resources needed,

(iii) Detailed planning on use of critical resources,

(iv) Defined responsibilities of trained personnel,

(v) Written documentation and procedures covering all operations,

(vi) Commitment to maintain plan to keep up with changes.

Life Cycle of BCP: The development of a BCP manual can have five main phases.

1. Analysis

2. Solution design

3. Implementation

4. Testing and organizational acceptance


5. Maintenance


ANALYSIS: The analysis phase in the development of a BCP manual consists of an impact analysis, threat analysis and impact scenarios, with the resulting BCP plan requirement documentation.

Impact analysis (Business Impact Analysis, BIA)

An impact analysis results in the differentiation between critical (urgent) and non-critical (non-urgent) organization functions/ activities.

For each critical (in scope) function, two values are then assigned:

Recovery Point Objective (RPO) - the acceptable latency of data that will be recovered

Recovery Time Objective (RTO) - the acceptable amount of time to restore the function

The Recovery Point Objective must ensure that the Maximum Tolerable Data Loss for each activity is not exceeded.

The Recovery Time Objective must ensure that the Maximum Tolerable Period of Disruption (MTPD) for each activity is not exceeded.
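A recovery therefore meets its objectives only when both limits hold: data loss within the RPO and restoration within the RTO. A small sketch (the payroll figures below are hypothetical, for illustration only):

```python
def meets_objectives(data_loss_min: float, outage_min: float,
                     rpo_min: float, rto_min: float) -> bool:
    """A recovery succeeds only if data loss <= RPO and restore time <= RTO."""
    return data_loss_min <= rpo_min and outage_min <= rto_min

# Hypothetical payroll function: at most 15 min of lost data, back within 4 h.
print(meets_objectives(data_loss_min=10, outage_min=180,
                       rpo_min=15, rto_min=240))   # True
print(meets_objectives(data_loss_min=30, outage_min=180,
                       rpo_min=15, rto_min=240))   # False: RPO exceeded
```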

The impact analysis results in the recovery requirements for each critical function. Recovery requirements consist of the following information:

The business requirements for recovery of the critical function, and/or

The technical requirements for recovery of the critical function

Threat analysis

Documenting potential threats is recommended to detail a specific disaster’s unique recovery steps.

Some common threats include disease, earthquake, fire, flood, cyber attack, bribery, hurricane, utility outage and terrorism.

Recovery requirement documentation

After the completion of the analysis phase, the business and technical plan requirements are documented in order to commence the implementation phase.

A good asset management program can be of great assistance here and allow for quick identification of available and re-allocatable resources.

For an office-based, IT-intensive business, the plan requirements may cover the following elements:

The number of recovery methods and secondary locations.

The individuals involved in the recovery effort along with their contact and technical details.

The applications and application data required from the secondary location desks for critical business functions.

The manual workaround solutions

The maximum outage allowed for the applications

The peripheral requirements like printers, copier, fax machine, calculators, paper, pens etc.


The goal of the solution design phase is to identify the most cost-effective disaster recovery solution.

For IT applications, this is commonly expressed as:

1. The minimum application and application data requirements

2. The time frame in which the minimum application and application data must be available
The solution phase determines:

The crisis management command structure

The location of a secondary work site (where necessary)

Telecommunication architecture between primary and secondary work sites

Data replication methodology between primary and secondary work sites

The application and software required at the secondary work site, and

The type of physical data requirements at the secondary work site.


IMPLEMENTATION: It is the execution of the design elements identified in the solution design phase.

Work package testing may take place during the implementation of the solution; however, work package testing does not take the place of organizational testing.

TESTING AND ORGANIZATIONAL ACCEPTANCE: The purpose of testing is to achieve organizational acceptance.

Testing may include:

Crisis command team call-out testing

Technical swing test from primary to secondary work locations

Technical swing test from secondary to primary work locations

Application test

Business process test

At minimum, testing is conducted on a biannual or annual basis.


MAINTENANCE: Maintenance of a BCP manual is broken down into three periodic activities.

The first activity is the confirmation of information in the manual, rolled out to all staff for awareness, with specific training for individuals whose roles are identified as critical in response and recovery.

The second activity is the testing and verification of technical solutions established for recovery operations.

The third activity is the testing and verification of documented organization recovery procedures. A biannual or annual maintenance cycle is typical.

Information update and testing

Some types of changes that should be identified and updated in the manual include:

Staffing changes

Changes to important clients and their contact details

Changes to important vendors/suppliers and their contact details

Departmental changes like new, closed or fundamentally changed departments

Changes in company investment portfolio and mission statement


Changes in upstream/downstream supplier routes

Testing and verification of technical solutions

As a part of ongoing maintenance, any specialized technical deployments must be checked for functionality:
Virus definition distribution

Application security and service patch distribution

Hardware operability check

Application operability check

Data verification

Testing and verification of organization recovery procedures

As work processes change over time, the previously documented organizational recovery procedures may no longer be suitable. Some checks include:

Are all work processes for critical functions documented?

Have the systems used in the execution of critical functions changed?
Are the documented work checklists meaningful and accurate for staff?

Do the documented work process recovery tasks and supporting disaster recovery infrastructure allow staff to recover within the predetermined recovery time objective?

Need for security: The basic objective of providing network security is threefold:

To safeguard assets

To ensure and maintain data integrity

To ensure the availability of the system resources that users desire to employ

There are two types of systems security.

Physical security is implemented to protect the physical systems assets of an organization like the personnel, hardware, facilities, supplies and documentation.

Logical security is intended to control -(i) malicious and non-malicious threats to physical security and (ii) malicious threats to logical security itself.

Level of Security: The task of Security Administration in an organization is to conduct a security program, which is a series of ongoing, regular and periodic reviews of controls exercised to ensure safeguarding of assets and maintenance of data integrity. Security programs involve the following eight steps:

Preparing project plan for enforcing security,

Assets identification,

Assets valuation,

Threats identification,

Threats probability of occurrence assessment,

Exposure analysis,

Controls adjustment,

Report generation outlining the levels of security to be provided for individual systems, end users, etc.

The first step is outlining the objectives.

The second step is preparing an action plan.

The third step, valuation of assets, can pose a difficulty.

The fourth step in a security review is threats identification.

The fifth step in a security review is assessment or the probability of occurrence of threats over a given time period.

The sixth step is the exposures analysis: first identifying the controls in place, secondly assessing the reliability of the existing controls, thirdly evaluating the probability that a threat can be successful, and lastly assessing the resulting loss if the threat is successful.

The seventh step is the adjustment of controls which means whether over some time period any control can be designed, implemented and operated such that the cost of control is lower than the reduction in the expected losses.

The reduction in the expected losses is the

difference between expected losses with the (i) existing set of controls and (ii) improved set of


The last step is report generation: documenting the findings of the review and, especially, recommending new asset-safeguarding

techniques that should be implemented.

IDS Components : The goal of intrusion detection is to monitor network assets to detect anomalous

behavior and misuse.

Network Intrusion Detection (NID) : Network intrusion detection deals with information passing on the wire between hosts. Typically

referred to as "packet-sniffers," network intrusion

detection devices intercept packets traveling along various communication mediums and

protocols, usually TCP/IP.

Host-based Intrusion Detection (HID): Host- based intrusion detection systems are designed to monitor, detect, and respond to

user and system activity and attacks on a given host.

Hybrid Intrusion Detection: Hybrid intrusion detection systems offer management of, and alert notification from, both network- and host-

based intrusion detection devices. Hybrid solutions provide the logical complement to

NID and HID: central intrusion detection management.

Network-Node Intrusion Detection (NNID) :

Network-node intrusion detection was developed to work around the inherent flaws in traditional NID. Network-node pulls the packet-intercepting technology off of the wire and puts it on the host.

With NNID, the "packet-sniffer" is positioned in

such a way that it captures packets after they reach their final target, the destination host.

The packet is then analyzed.
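The packet-inspection idea behind NID and NNID can be sketched as matching each captured payload against a set of known attack signatures. The signatures and payloads below are illustrative stand-ins; a real sensor would intercept live TCP/IP traffic rather than byte strings:

```python
# Toy signature-based inspection, as a NID/NNID sensor might apply to captured
# packet payloads. The signature patterns here are illustrative, not real rules.
SIGNATURES = {
    "sql-injection": b"' OR 1=1",
    "path-traversal": b"../../etc/passwd",
}

def inspect(payload: bytes):
    """Return the names of all signatures found in a packet payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

# Two sample "packets": a benign request and one carrying an injection attempt
packets = [
    b"GET /index.html HTTP/1.1",
    b"GET /login?user=admin' OR 1=1-- HTTP/1.1",
]

for p in packets:
    hits = inspect(p)
    if hits:
        print("ALERT:", hits)  # raised only for the second packet
```

Anomaly-based detection would instead flag deviations from a learned baseline of normal behavior; signature matching is shown here because it is the simpler of the two to illustrate.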

Threats and Vulnerabilities The threats to the security of systems assets can be broadly divided into nine categories:

(i) Fire

(ii) Water

(iii) Energy variations like voltage fluctuations, circuit breakage, etc.

(iv) Structural damages

(v) Pollution

(vi) Intrusion, such as physical intrusion and eavesdropping,

which can be eliminated / minimized by physical access

controls, prevention of electromagnetic emission and

proper siting of the facilities

(vii) Viruses and Worms (being discussed in detail later on)

(viii) Misuse of software, data and services, which can be avoided by preparing an employees’ code of conduct, and
(ix) Hackers, the expected loss from whose activities can be mitigated only by robust logical access controls.

The controls to guard against viruses are threefold:

Preventive controls, such as using only clean and licensed copies of software files, cutting down the use of public domain software / shareware, downloading files or software only from reliable websites, and implementing read-only access to software.

Detective controls, such as regularly running antivirus software,

undertaking file size comparisons to observe whether the size of

programs has changed, and undertaking date / time comparisons to detect any unauthorized modifications.

Corrective controls, such as maintaining a clean backup, having a recovery plan for virus infections, and regularly running antivirus software (which is useful for both detection and removal of viruses).

Worms, unlike viruses,

exist as separate and independent programs and, like viruses, propagate their copies with benign or malignant intention, using the operating system

as their medium of replication.
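The file size comparison detective control can be sketched as a baseline-versus-current check. Adding a cryptographic hash (an extra assumption here, beyond the plain size check in the text) also catches modifications that leave the file size unchanged:

```python
# Detective control sketch: record a baseline for each monitored program file,
# then periodically compare current size and content hash against it.
# A same-size modification evades a size-only check but not the hash check.
import hashlib
import os

def snapshot(path: str) -> dict:
    """Return size and SHA-256 digest of a file."""
    with open(path, "rb") as f:
        data = f.read()
    return {"size": os.path.getsize(path), "sha256": hashlib.sha256(data).hexdigest()}

def check(path: str, baseline: dict) -> list:
    """Report which recorded attributes of the file have changed."""
    current = snapshot(path)
    return [key for key in baseline if baseline[key] != current[key]]

# Demonstration with a temporary file standing in for a monitored program
with open("program.bin", "wb") as f:
    f.write(b"original code")
baseline = snapshot("program.bin")

with open("program.bin", "wb") as f:
    f.write(b"infected code")  # same length as the original content

changed = check("program.bin", baseline)
print(changed)  # only the hash differs; a size-only comparison would miss this
```

This is the principle behind file-integrity monitors: the infection above keeps the size at 13 bytes, so only the hash comparison detects it.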

Abuse of software, data and services can arise

in the following ways:

(i) Generalized software and proprietary

databases of the organization are often copied and taken away without authority by

employees, who may keep them for their own

purposes or hand them over to competitors,

(ii) The organization fails to protect the privacy of the individuals whose data are stored in its databases,

(iii) Employees use system services for their own personal gains and activities.

Techniques of Network Security - Firewalls: Access controls are the most

common form of controls encountered in the boundary subsystem.

They work by restricting the use of system resources to authorized users, limiting the actions authorized users can take with these resources, and ensuring that users obtain only authentic system resources.

Current systems are designed to allow users to share their

resources. This is done by having a single system simulate the

operations of several systems, where each simulated system works as a virtual machine, allowing more efficient use of resources by lowering the idle capacity of the real system.

A firewall is a device that forms a barrier between a secure and an open environment, where the latter environment is usually considered hostile, for example the Internet. It acts as a system, or combination of systems, that enforces a boundary between more than one network.
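The boundary-enforcement idea can be sketched as a simple first-match packet filter: rules are checked in order and the first matching rule decides whether a packet crosses the boundary. The rules below are illustrative assumptions, not a real firewall configuration:

```python
# First-match packet filter sketch: each rule names a source network prefix,
# a destination port (None = any), and an action. Unmatched packets are denied.
from ipaddress import ip_address, ip_network

RULES = [
    ("10.0.0.0/8", 80,   "allow"),  # internal hosts may reach the web server
    ("0.0.0.0/0",  25,   "allow"),  # anyone may reach the mail server
    ("0.0.0.0/0",  None, "deny"),   # everything else is blocked
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    """Return the action of the first rule matching this packet."""
    for network, port, action in RULES:
        if ip_address(src_ip) in ip_network(network) and port in (None, dst_port):
            return action
    return "deny"  # default-deny if no rule matches at all

print(filter_packet("10.1.2.3", 80))     # allow: internal source, web port
print(filter_packet("203.0.113.9", 80))  # deny: external source, web port
print(filter_packet("203.0.113.9", 25))  # allow: mail is open to all
```

The final catch-all rule and the default-deny fallback reflect the usual firewall posture: whatever is not explicitly permitted across the boundary is refused.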


Storage area network (SAN) is

a dedicated, centrally managed, secure information infrastructure.

Resource management is centralized and consolidated.

It increases the overall efficiency.

Network attached

storage (NAS) is a dedicated server for file sharing.

It is a single-purpose, stand-alone, high-

performance computer.

Its failure will halt the entire file-sharing work.