

In information technology, a server is a computer system that provides

services to other computing systems—called clients—over a network. The
term server can refer to hardware (such as a Sun computer system) or
software (such as an RDBMS server).

Servers occupy a place in computing similar to that occupied by

minicomputers in the past, which they have largely replaced. The typical
server is a computer system that operates continuously on a network and
waits for requests for services from other computers on the network. Many
servers are dedicated to this role, but some may also be used simultaneously
for other purposes, particularly when the demands placed upon them as
servers are modest. For example, in a small office, a large desktop computer
may act as both a desktop workstation for one person in the office and as a
server for all the other computers in the office. The term 'server' derives
from the word 'serve': such a system serves, in some form, the whole
network it is connected to, whether by queueing the print jobs of several
users or by acting as a file server for applications that online terminals
access. 'Server' is also another name for 'host computer'.
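The wait-for-request pattern described above can be sketched with a minimal TCP echo service; the loopback address, ephemeral port, and "served:" reply format are illustrative, not part of any real protocol:

```python
import socket
import threading

def run_echo_server(server_sock):
    """Accept one client, echo its request back with a prefix, then exit."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)            # wait for a request from a client
        conn.sendall(b"served: " + data)  # reply with a response

# The server binds to an ephemeral port and listens continuously.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=run_echo_server, args=(server,))
t.start()

# A client connects and requests a "service".
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"print job")
    reply = client.recv(1024)

t.join()
server.close()
print(reply.decode())  # served: print job
```

Real servers run this accept/serve loop indefinitely and for many clients at once; the single-shot version above only shows the request/response shape.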

Servers today are physically similar to most other general-purpose

computers, although their hardware configurations may be particularly
optimized to fit their server roles, if they are dedicated to that role. Many use
hardware identical or nearly identical to that found in standard desktop PCs.
However, servers run software that is often very different from that used on
desktop computers and workstations.

Servers should not be confused with mainframes, which are very large
computers that centralize certain information-processing activities in large
organizations and may or may not act as servers in addition to their other
activities. Many large organizations have both mainframes and servers,
although servers usually are smaller and much more numerous and
decentralized than mainframes.
Servers frequently host hardware resources that they make available on a
controlled and shared basis to client computers, such as printers (print
servers) and file systems (file servers). This sharing permits better access
control (and thus better security) and can reduce costs by reducing
duplication of hardware.

Server hardware

Although servers can be built from commodity computer components—

particularly for low-load and/or non-critical applications—dedicated, high-
load, mission-critical servers use specialized hardware that is optimized for
the needs of servers.

For example, servers may incorporate “industrial-strength” mechanical

components such as disk drives and fans that provide very high reliability
and performance at a correspondingly high price. Aesthetic considerations
are ignored, since most servers operate in unattended computer rooms and
are only visited for maintenance or repair purposes. Although servers usually
require large amounts of disk space, smaller disk drives may still be used in
a trade-off of capacity vs. reliability. CPU speeds are far less critical for
many servers than they are for many desktops. Not only are typical server
tasks likely to be delayed more by I/O requests than processor requirements,
but the lack of any graphic user interface in many servers frees up very large
amounts of processing power for other tasks, making the overall processor
power requirement lower. If a great deal of processing power is required in a
server, there is a tendency to add more CPUs rather than increase the speed
of a single CPU, again for reasons of reliability and redundancy.

The lack of a GUI in a server (or the rare need to use it) makes it
unnecessary to install expensive video adapters. Similarly, elaborate audio
interfaces, joystick connections, USB peripherals, and the like are usually
omitted.
Because servers must operate continuously and reliably, noisy but efficient
and trustworthy fans may be used for ventilation instead of inexpensive and
quiet fans; and in some cases, centralized air-conditioning may be used to
keep servers cool, instead of or in addition to fans. Special uninterruptible
power supplies may be used to ensure that the servers continue to run in the
event of a power failure.
All servers include heavy-duty network connections in order to allow them
to handle the large amounts of traffic that they typically receive and generate
as they receive and reply to client requests.

Server software

The major difference between servers and desktop computers is not in the
hardware but in the software. Servers often run operating systems that are
designed specifically for use in servers. They also run special applications
that are designed specifically to carry out server tasks.

Operating systems

The Microsoft Windows operating system is predominant among desktop

computers, but in the world of servers, the most popular operating systems
—such as FreeBSD, Solaris, and GNU/Linux—are derived from or similar
to the UNIX operating system. UNIX was originally a minicomputer
operating system, and as servers gradually replaced traditional
minicomputers, UNIX was a logical and efficient choice of operating system
for the servers.

Server-oriented operating systems tend to have certain features in common

that make them more suitable for the server environment, such as the
absence of a GUI (or an optional GUI); the ability to be reconfigured (in
both hardware and software) to at least some extent without stopping the
system; advanced backup facilities to permit online backups of critical data
at regular and frequent intervals; facilities to enable the movement of data
between different volumes or devices in such a way that is transparent to the
end user; flexible and advanced networking capabilities; features (such as
daemons in UNIX or services in Windows) that make unattended execution
of programs more reliable; tight system security, with advanced user,
resource, data, and memory protection, and so on. Server-oriented operating
systems in many cases can interact with hardware sensors to detect
conditions such as overheating, processor and disk failure, and either alert an
operator, take remedial action, or both, depending on the configuration.
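The sensor-handling behavior described at the end of this paragraph can be sketched as a small policy function; the temperature thresholds and the alert/shutdown callables are invented for illustration:

```python
def react_to_sensors(cpu_temp_c, disk_ok, alert, shutdown):
    """Return the actions a server OS might take, given sensor readings.

    cpu_temp_c : CPU temperature in Celsius (hypothetical sensor value)
    disk_ok    : False if a disk reports failure
    alert, shutdown : callables used to notify an operator / take action
    """
    actions = []
    if cpu_temp_c > 90:                  # overheating: alert and remediate
        alert("CPU overheating: %d C" % cpu_temp_c)
        shutdown()
        actions.append("shutdown")
    elif cpu_temp_c > 75:                # warning range: alert only
        alert("CPU running hot: %d C" % cpu_temp_c)
        actions.append("alert")
    if not disk_ok:                      # disk failure: alert the operator
        alert("disk failure detected")
        actions.append("disk-alert")
    return actions

log = []
actions = react_to_sensors(95, True, alert=log.append, shutdown=lambda: None)
print(actions)  # ['shutdown']
```

Whether the OS alerts, remediates, or both is exactly the configuration choice mentioned above; here it is hard-coded by threshold for brevity.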

Because the requirements of servers are in some cases almost diametrically

opposed to those of desktop computers, it is extremely difficult to design an
operating system that handles both environments well; thus, operating
systems that are well suited to the desktop may not be ideal for servers and
vice versa. Nevertheless, certain versions of Windows are also used on a
minority of servers as are recent versions of the popular Mac OS X (which is
Unix-based, and gives users complete access to the Unix operating system)
family of desktop operating systems and even some proprietary mainframe
operating systems (such as z/OS); but the dominant operating systems
among servers continue to be UNIX versions or clones. Even in the case of
GNU/Linux, a popular UNIX-like operating system frequently used on
servers, configurations that are ideal for servers may be unsatisfactory for
desktop use, and configurations that perform well on the desktop may leave
much to be desired on servers.

The rise of the microprocessor-based server was facilitated by the

development of several versions of the Unix operating system to run on the
Intel x86 microprocessor architecture, including Solaris, GNU/Linux and
FreeBSD. The Microsoft Windows family of operating systems also runs on
Intel hardware, and versions beginning with Windows NT have incorporated
features making them suitable for use on servers.

Whilst the role of server and desktop operating systems remains distinct,
improvements in both hardware performance and reliability and operating
system reliability have blurred the distinction between these two classes of
system, which at one point remained largely separate in terms of code base,
hardware and vendor providers. Today, many desktop and server operating
systems share the same code base, and differ chiefly in terms of
configuration. Furthermore, the rationalisation of many corporate
applications towards web-based and middleware platforms has lessened
demand for specialist application servers.

Server applications

Server applications are tailored to the tasks performed by servers just as

desktop or mainframe applications are tailored to their own respective
tasks. Most server applications are distinguished by the fact that they are
completely non-interactive on the local server itself; that is, they do not
display information on a screen and do not expect user input. Instead, they
run unobtrusively within the server and interact only with client computers
on the network to which the server is attached. Applications of this kind are
called daemons in UNIX terminology, and services in Windows terminology.

Server applications are typically started once when the server is booted, and
thereafter run continuously until the server is stopped. A given server
usually runs the same set of applications at all times, since there is no way
for the server to predict when a given service might be requested of it by a
client computer. Some server applications in some server systems are
automatically started when a request from a client is received, and are then
stopped when the request has been satisfied.

Servers on the Internet

Almost the entire structure of the Internet is based upon a client-server

model. Many millions of servers are connected to the Internet and run
continuously throughout the world.

Among the many services provided by Internet servers are: the Web; the
Domain Name System; electronic mail; file transfer; instant messaging;
streaming audio and video; online gaming; and countless others. Virtually
every action taken by an ordinary Internet user requires one or more
interactions with one or more servers.

Exchange Server
For most businesses today, e-mail is the mission-critical
communications tool that allows their people to produce the
best results. This greater reliance on e-mail has increased
the number of messages sent and received, the variety of
work getting done, and even the speed of business itself.
Amid this change, employee expectations have also evolved.
Today, employees look for rich, efficient access—to e-mail,
calendars, attachments, contacts, and more—no matter
where they are or what type of device they are using.
For IT professionals, delivering a messaging system that
addresses these needs must be balanced against other
requirements such as security and cost. Enterprise security
requirements have become more complex as the demand
and use for e-mail has increased. Today, IT departments
must contend with e-mail security threats that are wide
ranging: continually evolving spam and viruses,
noncompliance risks, the vulnerability of e-mail to
interception and tampering, in addition to the potential
harmful effects of natural and man-made disasters.
While security is clearly a priority, IT is ever cognizant of the
need to manage cost. Time, money, and resource
constraints are a fact of life as IT is made accountable to do
more with less. As a result, IT professionals look for a
messaging system that addresses both enterprise and
employee needs while also being cost-effective to deploy
and manage. Microsoft Exchange Server 2007 has been
designed specifically to meet these challenges and address
the needs of the different groups who have a stake in the
messaging system. The new capabilities of Microsoft
Exchange Server 2007 deliver the advanced protection your
company demands, the anywhere access your people want,
and the operational efficiency you, in IT, need.

Built-in Protection
Exchange Server 2007 offers built-in protective technologies
to keep your business moving, reduce spam and viruses,
enable confidential communications, and help your company
to be compliant.

Key Benefits:

• Keeps communication alive and e-mail flowing with
enterprise-class availability and reliability
• Helps safeguard users and the organization’s valuable data
from the harmful effects of spam and viruses
• Provides trusted communications within the organization
automatically and without added cost or complexity
• Simplifies regulatory compliance in a way that supports
the different needs of employees, compliance managers,
and messaging administrators

Anywhere Access

With Exchange Server 2007, employees get anywhere

access* to their e-mail, voice mail, calendars, and contacts
from a variety of clients and devices.

Key Benefits:

• Increases the productivity of today’s employees who
require the ability to respond quickly at home, work, or on
the go
• Offers employees a single inbox to access all of their
important communications—including voice mail, fax, and
e-mail—while avoiding the cost and effort of maintaining
separate systems
• Delivers a fast, seamless, and familiar Microsoft Office
Outlook experience across different devices and clients
with no requirement for extra software or services outside
of an Internet or basic phone connection
• Improves collaboration and productivity by making it
easier to find and share data, documents, and schedules
from anywhere
*Anywhere access requires Internet connectivity. Outlook Voice Access requires phone connectivity.

Operational Efficiency

Exchange Server 2007 enables new levels of operational

efficiency through capabilities that optimize hardware and
networking investments and features that help make
administrators more productive.

Key Benefits:

• Gets more from hardware, software, and network
investments through the power of x64 computing and
bandwidth-optimizing routing algorithms
• Improves administrator productivity by making it easier to
find and fix problems, and automate tasks more simply
• Drives deployment efficiencies with automatic client
connections, a new server roles-based architecture, and
improved diagnostics and monitoring
• Simplifies integrating Exchange Server data within
line-of-business applications and third-party solutions
through new Exchange Web Services

Proxy server

A proxy server is a computer that offers a computer network service to allow

clients to make indirect network connections to other network services. A
client connects to the proxy server, then requests a connection, file, or other
resource available on a different server. The proxy provides the resource
either by connecting to the specified server or by serving it from a cache. In
some cases, the proxy may alter the client's request or the server's response
for various purposes.
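The connect-or-serve-from-cache decision can be sketched as follows; the fetch callable stands in for a real network connection to the origin server:

```python
class CachingProxy:
    """Minimal sketch of a proxy: serve from cache, else fetch and cache."""

    def __init__(self, fetch):
        self.fetch = fetch    # callable that contacts the origin server
        self.cache = {}

    def request(self, url):
        if url in self.cache:            # serve the resource from cache
            return self.cache[url], "cache"
        body = self.fetch(url)           # connect to the specified server
        self.cache[url] = body
        return body, "origin"

calls = []
def fake_fetch(url):
    """Stand-in for a network fetch; records each origin contact."""
    calls.append(url)
    return "<html>%s</html>" % url

proxy = CachingProxy(fake_fetch)
first = proxy.request("http://example.com/")   # goes to the origin
second = proxy.request("http://example.com/")  # served from the cache
print(first[1], second[1], len(calls))  # origin cache 1
```

A real proxy also handles expiry, request rewriting, and concurrent clients; the point here is only the two code paths the text describes.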


Web proxies

Proxies which attempt to block offensive web content are implemented as
web proxies. Other web proxies reformat web pages for a specific purpose or
audience; for example, Skweezer reformats web pages for cell phones and
PDAs. Network operators can also deploy proxies to intercept computer
viruses and other hostile content served from remote web pages.
A special case of web proxies is the "CGI proxy." These are web sites that
allow a user to access a site through them. They generally use PHP or CGI
to implement the proxying functionality. CGI proxies are frequently used to
gain access to web sites blocked by corporate or school proxies. Since they
also hide the user's own IP address from the web sites they access through
the proxy, they are sometimes also used to gain a degree of anonymity,
called "Proxy Avoidance."

Many organizations — including corporations, schools, and families — use
a proxy server to enforce acceptable network use policies (see censorware)
or to provide security, anti-malware and/or caching services. A traditional
web proxy is not transparent to the client application, which must be
configured to use the proxy (manually or with a configuration script). In
some cases, where alternative means of connection to the Internet are
available (e.g. a SOCKS server or NAT connection), the user may be able to
avoid policy control by simply resetting the client configuration and
bypassing the proxy. Furthermore, administration of browser configuration
can be a burden for network administrators.
An intercepting proxy, often incorrectly called a transparent proxy (also
known as a forced proxy), combines a proxy server with NAT. Connections
made by client browsers through the NAT are intercepted and redirected to
the proxy without client-side configuration (or often knowledge).
Intercepting proxies are commonly used in businesses to prevent avoidance
of acceptable use policy, and to ease administrative burden, since no client
browser configuration is required.
Intercepting proxies are also commonly used by Internet Service Providers
in many countries in order to reduce upstream link bandwidth requirements
by providing a shared cache to their customers.
It is often possible to detect the use of an intercepting proxy server by
comparing the external IP address to the address seen by an external web
server, or by examining the HTTP headers on the server side.
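The header-examination technique can be sketched on the server side: many proxies append a Via or X-Forwarded-For header to forwarded requests, so their presence is a (fallible) hint that a proxy is in the path. A minimal check, assuming the web server exposes request headers as a dict:

```python
def looks_proxied(headers):
    """Heuristic: does this request appear to have passed through a proxy?

    headers: dict of HTTP request headers as seen by the web server.
    """
    markers = ("via", "x-forwarded-for", "forwarded")
    lower = {k.lower() for k in headers}          # header names are case-insensitive
    return any(m in lower for m in markers)

direct = {"Host": "example.com", "User-Agent": "curl/8.0"}
through_proxy = {"Host": "example.com", "Via": "1.1 cache-3 (squid)"}
print(looks_proxied(direct), looks_proxied(through_proxy))  # False True
```

The absence of these headers proves nothing, since a proxy may strip them; that is why the text also suggests comparing the external IP address with the one the web server observes.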
Some poorly implemented intercepting proxies have historically had certain
downsides, e.g. an inability to use user authentication if the proxy does not
recognise that the browser was not intending to talk to a proxy. Some
problems are described in RFC 3143. A well-implemented proxy however
should not inhibit browser authentication at all.
The term transparent proxy, often incorrectly used instead of intercepting
proxy to describe the same behavior, is defined in RFC 2616 (Hypertext
Transfer Protocol -- HTTP/1.1) as:
"[A] proxy that does not modify the request or response beyond what is
required for proxy authentication and identification."


Open proxies

An open proxy is a proxy server which will accept client connections from
any IP address and make connections to any Internet resource. Abuse of
open proxies is currently implicated in a significant portion of e-mail spam
delivery. Spammers frequently install open proxies on unwitting end users'
operating systems by means of computer viruses designed for this purpose.
Internet Relay Chat (IRC) abusers also frequently use open proxies to cloak
their identities.
Because proxies might be used for abuse, system administrators have
developed a number of ways to refuse service to open proxies. IRC networks
such as the Blitzed network automatically test client systems for known
types of open proxy [1]. Likewise, an email server may be configured to
automatically test e-mail senders for open proxies, using software such as
Michael Tokarev's proxycheck [2].
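A tester in the spirit of proxycheck connects to the candidate host, asks it to relay a connection (for example with an HTTP CONNECT request), and inspects the status line of the reply. The status-line check can be sketched as follows; the parsing is deliberately simplified and not taken from any particular tool:

```python
def connect_succeeded(response_line):
    """Return True if an HTTP CONNECT attempt through a candidate proxy
    was accepted -- i.e. the host relays arbitrary connections (is open)."""
    parts = response_line.split(None, 2)   # e.g. "HTTP/1.0 200 Connection established"
    return len(parts) >= 2 and parts[0].startswith("HTTP/") and parts[1] == "200"

print(connect_succeeded("HTTP/1.0 200 Connection established"))  # True
print(connect_succeeded("HTTP/1.1 403 Forbidden"))               # False
```

A full tester would also probe other relay types (SOCKS, open SMTP relays) and rate-limit itself, which is part of why such scanning is controversial.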
Groups of IRC and electronic mail operators run DNSBLs publishing lists of
the IP addresses of known open proxies, such as the AHBL, CBL, and
NJABL.
The ethics of automatically testing clients for open proxies are controversial.
Some experts, such as Vernon Schryver, consider such testing to be
equivalent to an attacker portscanning the client host. [3] Others consider the
client to have solicited the scan by connecting to a server whose terms of
service include testing.


Reverse proxies

A reverse proxy is a proxy server that is installed in the neighborhood of
one or more web servers. All traffic coming from the Internet with a
destination of one of the web servers goes through the proxy server.
There are several reasons for installing reverse proxy servers:
• Security: the proxy server is an additional layer of defense and
therefore protects the web servers further up the chain
• Encryption / SSL acceleration: when secure websites are created, the
SSL encryption is often not done by the web server itself, but by a
reverse proxy that is equipped with SSL acceleration hardware. See
Secure Sockets Layer.
• Load balancing: the reverse proxy can distribute the load to several
web servers, each web server serving its own application area. In such
a case, the reverse proxy may need to rewrite the URLs in each web
page (translating externally known URLs to internal ones).
• Serve/cache static content: A reverse proxy can offload the web
servers by caching static content like pictures and other static
graphical content
• Compression: the proxy server can optimize and compress the content
to speed up the load time.
• Spoon feeding: the web server can generate a dynamic page, hand it
to the reverse proxy in one go, and free its resources, while the
reverse proxy drip-feeds the page to slow clients at whatever pace
they can accept, rather than the web server having to keep the
generating program open for the duration.
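The load-balancing role in the list above can be sketched with a round-robin backend selector; the backend addresses are placeholders:

```python
import itertools

class RoundRobinBalancer:
    """Sketch of the load-balancing role of a reverse proxy."""

    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def pick(self):
        # Each incoming request is handed to the next web server in turn.
        return next(self._cycle)

balancer = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
order = [balancer.pick() for _ in range(4)]
print(order)  # ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']
```

Production reverse proxies add health checks, session affinity, and weighting on top of this basic rotation.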


Split proxies

A split proxy is essentially a pair of proxies installed across two computers.

Since they are effectively two parts of the same program, they can
communicate with each other in a more efficient way than they can
communicate with a more standard resource or tool such as a website or
browser. This is ideal for compressing data over a slow link, such as a
wireless or mobile data service and also for reducing the issues regarding
high latency links (such as satellite internet) where establishing a TCP
connection is time consuming. Taking the example of web browsing, the
user's browser is pointed to a local proxy which then communicates with its
other half at some remote location. This remote server fetches the requisite
data, repackages it and sends it back to the user's local proxy, which unpacks
the data and presents it to the browser in the standard fashion.
See Google's Web Accelerator for an example.
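The bandwidth saving comes from the two halves speaking a compressed format to each other that neither the browser nor the origin server needs to understand. A zlib round trip sketches the idea:

```python
import zlib

def remote_half(payload):
    """Remote proxy: fetch data (here, already in hand) and compress it
    before sending it over the slow link to its local counterpart."""
    return zlib.compress(payload)

def local_half(wire_bytes):
    """Local proxy: unpack the data and hand it to the browser."""
    return zlib.decompress(wire_bytes)

page = b"<html>" + b"hello world " * 200 + b"</html>"
wire = remote_half(page)
assert local_half(wire) == page          # the round trip is lossless
print(len(wire) < len(page))             # fewer bytes cross the slow link
```

A real split proxy would also batch requests and reuse one long-lived connection between the halves, which is what helps on high-latency links such as satellite Internet.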

Circumventors

A circumventor is a web-based page that takes a site that is blocked and
"circumvents" it through to an unblocked website, allowing the user to view
blocked pages. A famous example is 'elgoog', which allowed users in China
to use Google after it had been blocked there. Elgoog differs from most
circumventors in that it circumvents only one block.
The most common use is in schools where many blocking programs block
by site rather than by code; students are able to access blocked sites (games,
chatrooms, messenger, weapons, racism, forbidden knowledge, etc.) through
a circumventor. As fast as the filtering software blocks circumventors, others
spring up. It should be noted, however, that in some cases the filter may still
intercept traffic to the circumventor, thus the person who manages the filter
can still see the sites that are being visited.
Circumventors are also used by people who have been blocked from a
Web site.
Another use of a circumventor is to allow access to country-specific
services, so that Internet users from other countries may also make use of
them. An example is country-restricted reproduction of media.
The use of circumventors is usually safe, with the exception that
circumventor sites run by an untrusted third party can be run with hidden
intentions, such as collecting personal information; as a result, users are
typically advised against sending personal data such as credit card numbers
or passwords through a circumventor.

Risks of using proxy servers

When using a proxy server (for example, an anonymizing HTTP proxy), all
data sent to the service being used (for example, an HTTP server for a web
site) must pass through the proxy server before being sent to the service,
mostly in unencrypted form. It is therefore possible, and has been
demonstrated (see, for example, Sugarcane), for a malicious proxy server to
record everything sent to the proxy, including unencrypted logins and
passwords.
By chaining proxies which do not reveal data about the original requester, it
is possible to obfuscate activities from the eyes of the user's destination.
However, more traces will be left on the intermediate hops, which could be
used or offered up to trace the user's activities. If the policies and
administrators of these other proxies are unknown, the user may fall victim
to a false sense of security just because those details are out of sight and
out of mind. The bottom line is to be wary when using proxy servers: use
only proxy servers of known integrity (e.g., the owner is known and trusted
and has a clear privacy policy), and avoid proxy servers of unknown
integrity. If there is no choice but to use an unknown proxy server, do not
pass any private information (unless it is properly encrypted) through the
proxy.
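On the client side, directing traffic through a chosen proxy is a configuration step. With Python's standard library it can be sketched as follows; the proxy address is a placeholder:

```python
import urllib.request

# Route plain-HTTP traffic through a proxy of known integrity
# (the address below is a placeholder, not a real host).
proxy = urllib.request.ProxyHandler({"http": "http://proxy.example.com:3128"})
opener = urllib.request.build_opener(proxy)

# Installing the opener would make urlopen() use the proxy for every
# request; left commented out since the placeholder host does not exist.
# urllib.request.install_opener(opener)
# urllib.request.urlopen("http://example.com/")   # would go via the proxy
```

Note that traffic to the proxy itself is unencrypted here, which is exactly the exposure the paragraph above warns about.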
In what is more of an inconvenience than a risk, proxy users may find
themselves being blocked from certain Web sites, as numerous forums and
Web sites block IP addresses from proxies known to have spammed or
trolled the site.

Popular proxy software

• The Squid cache is a popular HTTP proxy server in the UNIX/Linux
world.
• HTTP-Tunnel is a popular SOCKS proxy server and client.
• The Apache HTTP Server can be configured to act as a proxy server.
• Blue Coat's (formerly Cacheflow's) purpose-built SGOS proxies 15
protocols including HTTPS/SSL, offers an extensive policy engine, and
runs on a range of appliances from branch office to enterprise.
• WinGate is a multi-protocol proxy server and NAT solution that can
be used to redirect any kind of traffic on a Microsoft Windows host. It
also provides firewall, VPN, and mail server functionality. Its WWW
proxy supports integrated Windows authentication, intercepting proxying,
and multi-host reverse proxying.
• Privoxy is a free, open source web proxy with privacy features.
• Microsoft Internet Security and Acceleration Server is a product that
runs on Windows 2000/2003 servers and combines the functions of
both a proxy server and a firewall.
• Tor is a proxy-based anonymizing Internet communication system.
• Proxomitron is a user-configurable web proxy used to rewrite web
pages on the fly. It is most noted for blocking ads but has many other
useful features.
• PHProxy is a Web HTTP proxy programmed in PHP to bypass
firewalls and other proxy restrictions through a Web interface very
similar to the popular CGIProxy.
• SJSWebProxy (Sun Microsystems) is a proxy server for HTTP and
HTTPS (CONNECT) requests. It can also serve as a gateway for FTP
and Gopher traffic. It is also free for download.
• Nginx is a web and reverse proxy server that can also act as a POP3
proxy.
• SSH (Secure Shell) can be configured to proxy a connection by setting
up a SOCKS proxy on the client and tunneling the traffic through the
SSH connection.
• CCProxy is an all-in-one proxy server for Windows with a graphical
interface that is easy to configure. The demo version supports 3 users
at a time.
• NetShade is an anonymous proxy server management program and
subscription-based proxy service for Mac OS X.

Microsoft Small Business Server

Microsoft Small Business Server is an integrated suite of server
products from Microsoft designed for running network
infrastructure (both intranet management and Internet access) of
small and medium enterprises having no more than 75
workstations or users. The suite currently consists of Windows
2003 Server, Microsoft Exchange Server, Internet Information
Services (IIS), Windows SharePoint Services, Microsoft Outlook
for clients, Routing and Remote Access Server (RRAS), and Fax
Server. The Premium version also includes Microsoft SQL Server,
Microsoft ISA Server, and Microsoft FrontPage 2003.
Initially Small Business Server was marketed as an edition of
Microsoft BackOffice Server, then at version 2000 it became a
separate product, and finally at version 2003 it was rebranded as a
member of the Windows Server 2003 family. It is technically not
an 'Edition' of Windows Server 2003 but rather a collection of
server technologies optimized especially for small businesses.

IBM Systems Network Architecture

Systems Network Architecture (SNA) is IBM's proprietary

networking architecture created in 1974. It is a complete protocol
stack for interconnecting computers and their resources. SNA
describes the protocol and is, in itself, not actually a program. The
implementation of SNA takes the form of various communications
packages, most notably VTAM, which is the mainframe package
for SNA communications. SNA is still used extensively in banks
and other financial transaction networks, as well as in many
government agencies. While IBM is still providing support for
SNA, one of the primary pieces of hardware, the 3745/3746
communications controller, has been withdrawn from marketing by
the IBM Corporation. However, there are an estimated 20,000 of
these controllers installed and IBM continues to provide hardware
maintenance service and microcode features to support users. A
robust market of smaller companies continues to provide the
3745/3746, features, parts and service. The VTAM
telecommunications access method is also supported by IBM, as is
the IBM Network Control Program (NCP) required by the
3745/3746 controllers.

Advantages and Disadvantages

SNA removed link control from the application program and placed it in the
NCP. This had the following advantages and disadvantages:

• Localization of problems in the telecommunications network was
easier because a relatively small amount of software actually dealt
with communication links. There was a single error reporting system.
• Adding communication capability to an application program was
much easier because the formidable area of link control software,
which typically requires interrupt processors and software timers, was
relegated to system software and NCP.

• Connection to non-SNA networks was difficult. An application which
needed access to some communication scheme which was not
supported in the current version of SNA faced obstacles. Before IBM
included X.25 support (NPSI) in SNA, connecting to an X.25 network
would have been awkward. Conversion between X.25 and SNA
protocols could have been provided either by NCP software
modifications or by an external protocol converter.
• At first glance, SNA networks appear to be very expensive in
comparison to TCP/IP networks. For small networks, this may be true,
but as the complexity of a large routed network grows, the SNA
structure provides a cheaper path.

Logical Unit Types

SNA defines several kinds of devices, identifying each group with a Logical
Unit (LU) type:
• LU0 provides for undefined devices, or build-your-own protocols.
• LU1 devices are printers.
• LU2 devices are dumb terminals.
• LU3 devices are printers using 3270 protocols.
• LU4 devices are batch terminals.
• LU5 has never been defined.
• LU6 provides for protocols between two applications.
• LU7 provides for sessions with 5250 terminals.
The primary ones in use are LU1, LU2, and LU6.2 (an advanced protocol
for application-to-application conversations).
Within SNA there are two types of datastream to connect local terminals and
printers; there is the 3270 datastream mainly used by mainframes (zSeries
family) and the 5250 datastream mainly used by minicomputers/servers such
as the S/36, S/38, and AS/400 (now the iSeries).
Starting from version 5.2 of OS/400, SNA for client access is no longer
supported.
The term 37xx refers to IBM's family of SNA communications controllers.
The 3745 supports up to eight high-speed T1 circuits, the 3725 is a large-
scale node and front-end processor for a host, and the 3720 is a remote node
that functions as a concentrator and router.

Microsoft SQL Server

Microsoft SQL Server is a relational database management system

(RDBMS) produced by Microsoft. Its primary query language is Transact-
SQL, an implementation of the ANSI/ISO standard Structured Query
Language (SQL) used by both Microsoft and Sybase. SQL Server is
commonly used by businesses for small- to medium-sized databases, but the
past five years have seen greater adoption of the product for larger
enterprise databases.

MS SQL Server uses a variant of SQL called T-SQL, or Transact-SQL, an

implementation of SQL-92 (the ISO standard for SQL, certified in 1992)
with some extensions. T-SQL mainly adds additional syntax for use in
stored procedures, and affects the syntax of transaction support. (Note that
SQL standards require Atomic, Consistent, Isolated, Durable or "ACID"
transactions.) MS SQL Server and Sybase/ASE both communicate over
networks using an application-level protocol called Tabular Data Stream
(TDS). The TDS protocol has also been implemented by the FreeTDS
project [2] in order to allow more kinds of client applications to
communicate with MS SQL Server and Sybase databases. MS SQL Server
also supports Open Database Connectivity (ODBC). The latest release SQL
Server 2005 [3] also supports the ability to deliver client connectivity via the
Web Services SOAP[4] protocol. This allows non-Windows Clients to
communicate cross-platform with SQL Server. Microsoft has also released a
certified JDBC[5] driver to let Java[6] applications such as BEA[7] and
IBM WebSphere[8] communicate with Microsoft SQL Server 2000 and 2005.
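Transact-SQL itself cannot be demonstrated in a short self-contained snippet without a running SQL Server instance, so the sketch below uses Python's built-in sqlite3 module instead to illustrate the ACID property the standard requires: a failed transaction leaves no partial changes behind. The table and account names are invented for the example.

```python
import sqlite3

# Illustrative sketch of atomic transactions using sqlite3 (not SQL Server).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # the 'with' block is one atomic transaction
        conn.execute(
            "UPDATE accounts SET balance = balance - 150 WHERE name = 'alice'"
        )
        # Simulate a failure partway through the transfer; the exception
        # makes the context manager roll back the debit above as well.
        raise ValueError("transfer rejected")
except ValueError:
    pass

# The failed transaction left no partial update behind.
balance = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'"
).fetchone()[0]
print(balance)  # 100
```

The same all-or-nothing behavior is what T-SQL's BEGIN TRANSACTION / COMMIT / ROLLBACK statements provide in SQL Server.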

Microsoft Systems Management Server

From Wikipedia, the free encyclopedia
Microsoft Systems Management Server (SMS) is a systems management
software product by Microsoft for managing large groups of Windows-based
computer systems. SMS provides remote control, patch management,
software distribution, and hardware and software inventory. An optional
feature is operating system deployment which requires the installation of the
SMS 2003 OS Deployment Feature Pack. The current version is 2003 SP2.
Vintela Management Extensions provides the possibility to manage Unix
computer systems from within SMS as well. [1]
There have been three major iterations of SMS. The 1.x versions of the
product defined the scope of control of the management server (the site) in
terms of the NT domain that was being managed. Since the 1.x versions, that
site paradigm has switched to a group of subnets that will be managed
together. Since SMS 2003, the site could also be defined as one or more
Active Directory sites.
The major difference between the 2.x product and SMS 2003 is the
introduction of the Advanced Client. The Advanced Client communicates
with a more scalable management infrastructure, namely the Management
Point. A Management Point (MP) can manage up to twenty five thousand
Advanced Clients.
The Advanced Client was introduced to provide a solution to the problem
that a managed laptop might connect to a corporate network from multiple
different locations and should not always download content from the same
place within the enterprise (though it should always receive policy from its
own site).
Microsoft announced the next generation of the product, "Version 4" at the
Microsoft Management Summit in April 2005, with a tentative release for
2007. The beta version has been released, and is available at Microsoft's
SMS website.


The best way to cut down on computer repair is good preventive
maintenance. Several factors can drastically shorten a
computer's life. Some of these factors are common sense, such as not
spilling drinks into the keyboard. Other factors are not so obvious:
Excessive Heat can destroy the chips inside the computer. To avoid this,
install an adequate fan in the power supply or add an auxiliary fan.
CPU, chips, drive motors, etc. create heat. Most computers are built to
work in a temperature range of 60-85 degrees F.
Because a computer is warmer inside than outside, changes in room
temperature are amplified inside the computer. This leads to
thermal shock, exposing components to rapid and large temperature
changes; the resulting expansion and contraction can disable your
computer. Direct sunlight is another source of heat damage: avoid
placing a computer in direct sunlight.
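The 60-85 degrees F operating range quoted above is easy to convert and check programmatically. The helper below is a small illustrative sketch; the threshold values simply restate the figures from the text.

```python
def fahrenheit_to_celsius(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

# The 60-85 degrees F range from the text, expressed in Celsius.
low_c = fahrenheit_to_celsius(60)   # about 15.6 C
high_c = fahrenheit_to_celsius(85)  # about 29.4 C

def in_operating_range(temp_f, low=60, high=85):
    """True if an ambient temperature (degrees F) is inside the safe range."""
    return low <= temp_f <= high

print(round(low_c, 1), round(high_c, 1), in_operating_range(72))
```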
Dust: Dust is everywhere and is responsible for several evils in a
computer. First, it sticks to the internal components like the circuit
boards, causing thermal insulation. The second dust evil is that it
clogs spaces such as the air intake area to the power supply or hard
disk, and the space between the floppy disk drive head and the disk.
Every six months to a year remove the dust from the inside of the
computer by blowing the dust off with a can of compressed air. When
you blow the dust off make sure you are not just blowing it back into
the computer. Another effective method of cleaning is to use a dust-
free cloth wetted with water and ammonia (just a few drops). Don't
use this cloth on circuit boards.
Compressed Air: Canned "compressed air" is actually compressed gas. Some
products contain Freon or another chlorofluorocarbon (CFC), which
enlarges the hole in the ozone layer. There are a number of ozone-
friendly alternatives; one is marketed by Chemtronics.
Magnetism: Magnets can cause permanent loss of data on hard or floppy
disks. Electric motors or electromagnets produce magnetism. There
are magnets in phones that ring instead of beep, speakers, monitors,
magnetic screwdrivers, magnetic clip and paper holders, and magnets
themselves. It is best to keep anything magnetic away from computers
and floppy disks too.
Stray Electromagnetism: Radiated electromagnetic interference and
static electricity can cause stray electromagnetism. Wires that are
physically too close to each other can cause radiated electromagnetic
interference. This closeness of the wires causes the transmitted signals
on these wires to become faint and jumbled. High radio frequency can
also cause interference. Sources of this are high-speed digital circuits,
radio sources, cordless telephones or keyboards, power-line intercoms
and motors.
Power surges, incorrect line voltage, and power outages: Power
problems can be caused by overvoltage and undervoltage, a power
blackout, spikes and surges. Overvoltage can damage a chip because
too much voltage destroys the circuits. Undervoltage is undesirable
because the power supply will draw too much current. This heats up
and destroys components. Spikes are brief overvoltages under a
millisecond in length, and surges are overvoltages from a millisecond
to seconds. To prevent some of these conditions use surge protectors,
surge suppressors and spike isolators. Also make sure computers have
a proper ground. Power conditioners are available to boost
undervoltage so that your computer can continue to work through
brownouts. Backup power supplies, called Standby Power Supplies
(SPS) and Uninterruptible Power Supplies (UPS), are also available;
these keep a computer running after a blackout occurs.
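The distinctions drawn above (spike vs. surge vs. undervoltage vs. blackout) amount to a small decision procedure on magnitude and duration. The sketch below is illustrative only; the thresholds restate the text's definitions, not any electrical standard.

```python
def classify_power_event(voltage_ratio, duration_ms):
    """Rough classifier for the power problems described in the text.

    voltage_ratio: measured voltage / nominal voltage (1.0 = normal)
    duration_ms:   event length in milliseconds
    """
    if voltage_ratio == 0:
        return "blackout"
    if voltage_ratio > 1.0:
        if duration_ms < 1:
            return "spike"     # brief overvoltage, under a millisecond
        return "surge"         # overvoltage lasting a millisecond to seconds
    if voltage_ratio < 1.0:
        return "undervoltage"  # e.g. a brownout
    return "normal"

print(classify_power_event(1.5, 0.2))   # spike
print(classify_power_event(1.2, 50))    # surge
print(classify_power_event(0.8, 2000))  # undervoltage
```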
Water and corrosive agents: Liquids can be very hazardous to the
computer's health. Hazards include operator spills, leaks and
flooding. Operator spills can certainly be controlled by not having
liquids near the computer; however, leaks and flooding are not always
preventable. Corrosion of computer components can be caused by
sweat and skin oils. Carbonated liquids contain carbonic acid, and
coffee and tea contain tannic acid, both of which lead to corrosion.
Cleaning fluids also contain ingredients that can cause corrosion. It is
best to be cautious when using any sprays or liquids around a computer.


There are also software applications that can be used as preventive
maintenance against software corruption.
Anti-Virus Applications: When run, this kind of program detects computer
viruses, which are willfully destructive computer programs. One
classification of virus programs is based on the action the virus
takes. The three most common types are worms (programs that
replicate themselves), Trojan horses (programs hidden inside another
that may erase valuable information), and bombs (programs that embed
code in your operating system and, at a specific time, cause your
computer to stop functioning).
Another classification is based on where the virus resides: parasitic
viruses (which attach to an executable file) and boot sector viruses
(which reside in the boot sector of a disk).
Suggestions to keep viruses away:
Be cautious of pirated, shareware, freeware, and downloaded
software, especially games.
Use a virus checker regularly. Some virus checkers are set to
constantly run behind all applications.
Put a write-protect tab on floppy disks or media you want to save.

"Take good care of your PC, and it will take good care of you."
It's a nice sentiment, but reality is more like "Take good care of your PC,
and it won't crash, lose your data, and cost you your job--probably." Follow
these steps to stop PC problems before they stop you.
Your PC's two mortal enemies are heat and moisture. Excess heat
accelerates the deterioration of the delicate circuits in your system. The most
common causes of overheating are dust and dirt: Clogged vents and CPU
cooling fans can keep heat-dissipating air from moving through the case, and
even a thin coating of dust or dirt can raise the temperature of your
machine's components.
Any grime, but especially the residue of cigarette smoke, can corrode
exposed metal contacts. That's why it pays to keep your system clean, inside
and out.
If your PC resides in a relatively clean, climate-controlled environment, an
annual cleaning should be sufficient. But in most real-world locations, such
as dusty offices or shop floors, your system may need a cleaning every few
months.
All you need are lint-free wipes, a can of compressed air, a few drops of a
mild cleaning solution such as Formula 409 or Simple Green in a bowl of
water, and an antistatic wrist strap to protect your system when you clean
inside the case.
Think Outside the Box
Before you get started cleaning, check around your PC for anything nearby
that could raise its temperature (such as a heating duct or sunshine coming
through a window). Also clear away anything that might fall on it or make it
dirty, such as a bookcase or houseplants.
Always turn off and unplug the system before you clean any of its
components. Never apply any liquid directly to a component. Spray or pour
the liquid on a lint-free cloth, and wipe the PC with the cloth.

Clean the case: Wipe the case and clear its ventilation ports of any
obstructions. Compressed air is great for this, but don't blow dust into the PC
or its optical and floppy drives. Keep all cables firmly attached to their
connectors on the case.

Maintain your mechanical mouse: When a nonoptical mouse gets dirty,
the pointer moves erratically. Unscrew the ring on the bottom of the unit and
remove the ball. Then scrape the accumulated gunk off the two plastic
rollers that are set 90 degrees apart inside the ball's housing.

Keep a neat keyboard: Turn the keyboard upside down and shake it to
clear the crumbs from between the keys. If that doesn't suffice, blast it
(briefly) with compressed air. If your keys stick or your keyboard is really
dirty, pry the keys off for easier cleaning. Computer shops have special tools
for removing keys, but you can also pop them off by using two pencils with
broken tips as jumbo tweezers--just be sure to use a soft touch.

Make your monitor sparkle: Wipe the monitor case and clear its vents of
obstructions, without pushing dust into the unit. Clean the screen with a
standard glass cleaner and a lint-free cloth. If your monitor has a degauss
button (look for a small magnet icon), push it to clear magnetic interference.
Many LCDs can be cleaned with isopropyl alcohol; check with your LCD
manufacturer. Wipe your LCD lightly: The underlying glass is fragile.

Check your power protection: Reseat the cables plugged into your surge
protector. Check the unit's warning indicator, if it has one. Surge protectors
may power your PC even after being compromised by a voltage spike
(making your system susceptible to a second spike). If your power protector
doesn't have a warning indicator and your area suffers frequent power
outages, replace it with one that has such an indicator and is UL 1449
certified.

Swipe your CD and DVD media: Gently wipe each disc with a moistened,
soft cloth. Use a motion that starts at the center of the disc and then moves
outward toward the edge. Never wipe a disc in a circular motion.

Inside the Box

Before cracking open the case, turn off the power and unplug your PC.
Ground yourself before you touch anything inside to avoid destroying your
circuitry with a static charge. If you don't have a grounding wrist strap, you
can ground yourself by touching any of various household objects, such as a
water pipe, a lamp, or another grounded electrical device. Be sure to unplug
the power cord before you open the case.
Use antistatic wipes to remove dust from inside the case. Avoid touching
any circuit-board surfaces. Pay close attention to the power-supply fan, as
well as to the case and to CPU fans, if you have them. Spray these
components with a blast of compressed air to loosen dust; but to remove the
dust rather than rearrange it, you should use a small vacuum like the $12
Belkin MiniVak.
If your PC is more than four years old, or if the expansion cards plugged into
its motherboard are exceptionally dirty, remove each card, clean its contacts
with isopropyl alcohol, and reseat it. If your system is less than a couple
years old, however, just make sure each card is firmly seated by pressing
gently downward on its top edge while not touching its face. Likewise,
check your power connectors, EIDE connectors, and other internal cables for
a snug fit.
While you have the case open, familiarize yourself with the CMOS battery
on the motherboard (see FIGURE 1). For its location, check the motherboard
manual. If your PC is more than four or five years old, the CMOS battery
may need to be replaced. (A system clock that loses time is one indicator of
a dying CMOS battery.)
Look for Trouble
Give your PC a periodic checkup with a good hardware diagnostic utility.
Two excellent choices are Sandra Standard from SiSoftware and #1-
TuffTest-Lite from #1-PC Diagnostics. Go to PC World's download page to
download the free version of Sandra (the full version of the application costs
$35) or to download #1-TuffTest-Lite (the fully functional version is $10).
Adding and removing system components leaves orphaned entries in the
Windows Registry. This can increase the time your PC takes to boot and can
slow system performance. Many shareware utilities are designed to clean the
Registry, but my favorite is Registry Drill from Easy Desk Software. The
program is free to try and $40 to keep. Go to PC World's download page to
download a trial copy of Registry Drill.
Windows stores files on a hard drive in rows of contiguous segments, but
over time the disk fills and segments become scattered, so they take longer
to access. To keep your drive shipshape, run Windows' Disk Defragmenter
utility. Click Start, Programs (All Programs in XP), Accessories, System
Tools, Disk Defragmenter. If your drive is heavily fragmented, you could
boost performance (see FIGURE 2). Defragging may take hours, however.
Disable your screen saver and other automatic programs beforehand to keep
the defrag from restarting every few minutes.
Disk Defragmenter won't defragment the file on your hard drive that holds
overflow data from system memory (also known as the swap file). Since the
swap file is frequently accessed, defragmenting it can give your PC more
pep. You can defragment your swap file by using a utility such as the
SpeedDisk program included with Norton SystemWorks 2004, but there's a
way to reset it in Windows.
In Windows XP, right-click My Computer and choose Properties. Click
Advanced, and then choose the Settings button under Performance. Click
Advanced again and the Change button under Virtual Memory. Select
another drive or partition, set your swap file size, and click OK. Visit
"Hardware Tips: Jog Your Memory for Faster PC Performance" for
instructions on moving your swap file in other versions of Windows. If you
have only one partition and no way to create a second one, and you have at
least 256MB of RAM, disable the swap file rather than moving it: Select No
paging file in the Virtual Memory settings (see FIGURE 3). If you have
trouble booting, start Windows in Safe Mode and re-enable this option.
Hard-Drive Checkup
Windows XP offers a rudimentary evaluation of your hard disk's health with
its error-checking utility: Right-click the drive's icon in Windows Explorer
and select Properties, Tools, Check Now. (Windows can fix errors and
recover bad sectors automatically if you wish.) If the check discovers a few
file errors, don't worry, but if it comes up with hundreds of errors, the drive
could be in trouble.
To conduct a more thorough examination, go to PC World's download page
and download Panterasoft's free HDD Health utility, which monitors hard-
drive performance and warns of impending disaster (see FIGURE 4). The
program works only with drives that support S.M.A.R.T technology, but
nearly all drives released since 2000 are S.M.A.R.T.-compliant.
Many hardware and software designers humbly assume you want their
program running on your PC all the time, so they tell Windows to load the
application at startup (hence, the ever-growing string of icons in your system
tray). These programs eat up system resources and make hardware conflicts
and compatibility problems more likely. To prevent them from launching,
just click Start, Run, type msconfig, and press Enter. The programs listed
under the Startup tab are set to start along with Windows. Uncheck the box
at the left of each undesirable program to prevent it from starting
automatically.

Before you start
• Check to be sure that the cloth is clean and free of any particles that
could scratch the screen.
• Always spray the cloth, never directly onto the computer.
• Watch to see that you aren't spraying the liquid at someone or near
your own face.

Begin with the screen
• Clean the screen first. Wipe the monitor going around the whole screen.
Be sure to clean the small lip that frames the glass section. Look
carefully to see that it isn't streaked when you finish.

Now wipe the plastic case
• This is a dust and dirt collector.
• Clean the lips of the CD-ROM drive and the disk drive.
• When cleaning the top, back and sides, try to wipe the areas that allow
air to circulate.

How to clean the keyboard
• Tip the keyboard over gently. Tap the keyboard and watch the dust and
paper particles drop on the counter. Don't hit it too hard, but don't be
afraid to tap it to loosen the dust particles.
• Clean the frame. A little extra rubbing may be needed on the spacebar,
Enter (IBM) and Return (Mac) keys.
• Wrap one finger with the cloth and walk the keyboard, going over each
individual key.
• Spray a cotton swab. Use your fingers to wring it out so it's not too
wet. Use it to go between the rows and clean the sides of the keys.
• Remember: spray the cloth first.

Cleaning the mouse is vital
• Clean the outside of the mouse. Take an index card and fold it in half
vertically; the small open ridge around the mouse is a dirt collector.
Use the index card to go into the ridge area and remove the dirt.
• Turn the mouse over and remove the plastic plate at the bottom. (The
arrows usually tell you which way the plate comes off.) Use a toothpick
to get the dirt off the wheels inside that the trackball uses.
• Spray the rag again, place the ball into the rag and clean the trackball.
• Reassemble the mouse.
• Use a toothbrush to dust off the mouse pad.

Clean the work area
• Lift up the keyboard and mouse. Clean the areas they rest on.
• Have an adult lift the monitor and clean behind and under it; repeat the
process for the CPU.

Reconnect and test
• Now, reboot the machine and be sure that the monitor, CPU, keyboard and
mouse are all working properly.

Computer virus
A computer virus is a computer program written to alter the way a
computer operates, without the permission or knowledge of the user. Though
the term is commonly used to refer to a range of malware, a true virus must
replicate itself, and must execute itself. The latter criterion is often met by a
virus which replaces existing executable files with a virus-infected copy.
While viruses can be intentionally destructive—destroying data, for example
—some viruses are benign or merely annoying.

Distinction between malware and computer viruses

Malware is a broad category of software designed to infiltrate or damage a
computer system. Types of malware include spyware, adware, Trojan
horses, worms, and viruses. While modern anti-virus software works to
protect computers from this range of threats, computer viruses make up only
a small subset of malware.

Comparison with biological viruses

A computer virus behaves in a way similar to a biological virus, which
spreads by inserting itself into living cells. Extending the analogy, the
insertion of a virus into the program is termed as an "infection", and the
infected file, or executable code that is not part of a file, is called a "host".

How viruses work

A computer virus will pass from one computer to another like a real life
biological virus passes from person to person. For example, it is estimated
by experts that the Mydoom worm infected a quarter-million computers in a
single day in January 2004. In March 1999, the Melissa virus spread so
rapidly that it forced Microsoft and a number of other very large companies
to completely turn off their e-mail systems until the virus could be dealt
with. Another example is the ILOVEYOU virus, which occurred in 2000
and had a similar effect. It stole most of its operating style from Melissa.
There are tens of thousands of viruses out there, and new ones are
discovered every day. It is difficult to come up with a generic explanation of
how viruses work, since they all have variations in the way they infect or the
way they spread. So instead, we’ve taken some broad categories that are
commonly used to describe various types of virus.

Basic types of viruses

File viruses (parasitic viruses)

File viruses are pieces of code that attach themselves to executable files,
driver files or compressed files, and are activated when the host program is
run. After activation, the virus may spread itself by attaching itself to other
programs in the system, and also carry out the malevolent activity it was
programmed for. Most file viruses spread by loading themselves in system
memory and looking for any other programs located on the drive. If it finds
one, it modifies the program’s code so that it contains and activates the virus
the next time it’s run. It keeps doing this over and over until it spreads across
the system, and possibly to other systems that the infected program may be
shared with. Besides spreading themselves, these viruses also carry some
type of destructive constituent that can be activated immediately or by a
particular ‘trigger’. The trigger could be a specific date, or the number of
times the virus has been replicated, or anything equally trivial. Some
examples of file viruses are Randex, Meve and MrKlunky.
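To make the propagation loop described above concrete without touching any real files, here is a deliberately harmless in-memory simulation in Python: "programs" are plain dictionaries and "infection" merely appends a marker string. All names are invented for the illustration.

```python
# Harmless sketch of file-virus propagation: no real executables involved.

def infect(program):
    """Append a marker to a program's code, as a file virus appends itself."""
    if "VIRUS" not in program["code"]:
        program["code"].append("VIRUS")

def run_infected(host, other_programs):
    """Running an infected host 'modifies the code' of every program it finds."""
    if "VIRUS" in host["code"]:
        for p in other_programs:
            infect(p)

programs = [{"name": n, "code": ["payload"]} for n in ("editor", "game", "tool")]
infect(programs[0])                      # one program starts out infected
run_infected(programs[0], programs[1:])  # running it spreads the marker

infected = [p["name"] for p in programs if "VIRUS" in p["code"]]
print(infected)  # ['editor', 'game', 'tool']
```

The loop mirrors the text: an infected program, once run, modifies other programs so that they too "contain and activate the virus the next time they're run."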

Boot sector viruses

A boot sector virus affects the boot sector of a hard disk, which is a very
crucial part. The boot sector is where all information about the drive is
stored, along with a program that makes it possible for the operating system
to boot up. By inserting its code into the boot sector, a virus guarantees that
it loads into memory during every boot sequence. A boot virus does not
affect files; instead, it affects the disks that contain them. Perhaps this is the
reason for their downfall. During the days when programs were carried
around on floppies, the boot sector viruses used to spread like wildfire.
However, with the CD-ROM revolution, it became impossible to infect pre-
written data on a CD, which eventually stopped such viruses from spreading.
Though boot viruses still exist, they are rare compared to new-age malicious
software. Another reason why they’re not so prevalent is that operating
systems today protect the boot sector, which makes it difficult for them to
thrive. Examples of boot viruses are Polyboot.B and AntiEXE.
Multipartite viruses
Multipartite viruses are a combination of boot sector viruses and file viruses.
These viruses come in through infected media and reside in memory. They
then move on to the boot sector of the hard drive. From there, the virus
infects executable files on the hard drive and spreads across the system.
There aren’t too many multipartite viruses in existence today, but in their
heyday, they accounted for some major problems due to their capacity to
combine different infection techniques. A significantly famous multipartite
virus is Ywinz.

Macro viruses

Macro viruses infect files that are created using certain
applications or programs that contain macros. These include Microsoft
Office documents such as Word documents, Excel spreadsheets, PowerPoint
presentations, Access databases, and other similar application files such as
Corel Draw, AmiPro, etc. Since macro viruses are written in the language of
the application, and not in that of the operating system, they are known to be
platform-independent—they can spread between Windows, Mac, and any
other system, so long as they’re running the required application. With the
ever-increasing capabilities of macro languages in applications, and the
possibility of infections spreading over networks, these viruses are major
threats. The first macro virus was written for Microsoft Word and was
discovered back in August 1995. Today, there are thousands of macro
viruses in existence—some examples are Relax, Melissa.A and Bablas.

Network viruses
This kind of virus is proficient in quickly spreading across a Local Area
Network (LAN) or even over the Internet. Usually, it propagates through
shared resources, such as shared drives and folders. Once it infects a new
system, it searches for potential targets by searching the network for other
vulnerable systems. Once a new vulnerable system is found, the network
virus infects the other system, and thus spreads over the network. Some of
the most notorious network viruses are Nimda and SQLSlammer.

E-mail viruses

An e-mail virus could be a form of a macro virus that spreads itself
to all the contacts located in the host's e-mail address book. If any of the
e-mail recipients open the attachment of the infected mail, it spreads to the
new host’s address book contacts, and then proceeds to send itself to all
those contacts as well. These days, e-mail viruses can infect hosts even if the
infected e-mail is previewed in a mail client. One of the most common and
destructive e-mail viruses is the ILOVEYOU virus. There are many ways in
which a virus can infect or stay dormant on your PC. However, whether
active or dormant, it’s dangerous to let one loose on your system, and should
be dealt with immediately.

Other malicious software

Earlier, the only way a computer was at risk was when you inserted an
infected floppy. With the new age of technology, every computer is
interconnected to the rest of the world at some point or the other, so it’s
difficult to pinpoint the source and/or time of the infection. As if that
weren’t bad enough, new-age computing has also brought about a new breed
of malicious software. Today, the term 'virus' has become a generic term
for all malicious software. Besides the types of viruses we mentioned,
here's a look at some of the newer problems we face today.

Trojan horses
The biggest difference between a Trojan horse—or Trojan—and a virus is
that Trojans don’t spread themselves. Trojan horses disguise themselves as
useful software available for download on the Internet, and naïve users
download and run them only to realise their mistake later. A Trojan horse is
usually divided into two parts—a server and a client. It’s the client that is
cunningly disguised as important software and placed in peer-to-peer file
sharing networks, or unofficial download sites. Once the client runs on your
system, the attacker—the person running the server—has a high level of
control over your system, which can lead to devastating effects depending
on the attacker’s intentions. Trojan horses have evolved to a tremendous
level of sophistication, which makes each one significantly different from
the other. We have categorized them roughly into the following:

Remote access trojans

These are the most commonly available Trojans. They give an attacker
complete control over the victim's computer and its log files. Most of
them come with two functions, online and offline recording, and of course
they can be configured to send the log file to a specific e-mail address
on a daily basis.

Destructive trojans

The only function of these Trojans is to destroy and delete files. They can
automatically delete all the core system files on your machine. The Trojan
could be controlled by the attacker or could be programmed to strike like a
logic bomb, starting on a specific day or at a specific hour.

Denial of Service (DoS) attack trojans

The main idea behind Denial of Service (DoS) attack Trojans is to generate
a lot of Internet traffic on the victim's machine, to the extent that the
Internet connection is too overloaded to let the user visit a website or
download anything. Another variation of a DoS Trojan is the mail-bomb
Trojan, whose main aim is to infect as many machines as possible and
simultaneously attack specific e-mail addresses with random subjects and
contents that cannot be filtered.

Proxy/Wingate trojans

These types of Trojan turn the victim's computer into a proxy/Wingate
server. That way, the infected computer is available to the whole world to
be used for anonymous access to various risky Internet services. The
attacker can register domains or access pornographic Web sites with stolen
credit cards, or carry out similar illegal activities, without being traced.

FTP trojans

These Trojans are probably the simplest, and are outdated. The only thing
they do is open port 21 (the port for FTP transfers) and let everyone
connect to your machine. Newer versions are password-protected, so only the
attacker can connect to your computer.

Software detection killers

These Trojans kill popular antivirus/firewall programs that protect your
machine, to give the attacker access to the victim's machine. A Trojan
could have any one or a combination of the above-mentioned functionalities.

Worms

Computer worms are programs that reproduce and run independently, and
travel across network connections. The main difference between viruses and
worms is the method in which they reproduce and spread. A virus is
dependent upon a host file or boot sector, and the transfer of files
between machines, to spread, while a worm can run completely independently
and spread of its own accord through network connections. The security
threat of worms is equivalent to that of a virus. Worms are capable of
doing a whole range of damage, such as destroying essential files in your
system, slowing it down to a great extent, or even causing some essential
programs to crash. Two famous examples of worms are the MSBlaster and
Sasser worms.

Spyware

Spyware is the new-age term for advertising-supported software (Adware).
Advertising in shareware products is a way for shareware authors to make
money, other than by selling it to the user. There are several large media
companies that offer to place banner ads in their products in exchange for a
portion of the revenue from banner sales. If the user finds the banners
annoying, there is usually an option to get rid of them by paying the
licensing fee. Unfortunately, the advertising companies often also install
additional tracking software on your system, which is continuously using
your Internet connection to send statistical data back to the advertisers.
While the privacy policies of the companies claim there will be no sensitive
or identifying data collected from your system and that you shall remain
anonymous, the fact remains that you have a server sitting on your PC that is
sending information about you and your surfing habits to a remote location,
using your bandwidth. Spyware has been known to slow down computers
with their semi-intensive usage of processing power, bringing up annoying
pop-up windows at the most inappropriate times and changing your Internet
browsing settings such as your home page or default search engine to their
own services. Even if many do not consider this illegal, it is still a major
security threat, and the fact that there's no easy way to get rid of them
makes them as much of a nuisance as viruses.

Logic bombs

A logic bomb is a program which has deliberately been written or modified
to produce results that are unexpected and unauthorized by legitimate users
or owners of the software when certain conditions are met. Logic bombs may
reside within standalone programs, or they may be part of worms or viruses.
A variation of the logic bomb is the time bomb, which 'explodes' at a
certain time. An example of a time bomb is the infamous 'Friday the 13th'
virus.

In the field of information technology, backup refers to the copying of data
so that these additional copies may be restored after a data loss event.
Backups are useful primarily for two purposes: to restore a computer to an
operational state following a disaster (called disaster recovery) and to restore
small numbers of files after they have been accidentally deleted or
corrupted. Backups differ from archives in the sense that archives are the
primary copy of data and backups are a secondary copy of data. Backup
systems differ from fault-tolerant systems in the sense that backup systems
assume that a fault will cause a data loss event and fault-tolerant systems
assume a fault will not. Backups are typically the last line of defense against
data loss, and consequently the least granular and the least convenient to use.
Since a backup system contains at least one copy of all data worth saving,
the data storage requirements are considerable. Organizing this storage space
and managing the backup process is a complicated undertaking.
Storage, the base of a backup system

Data repository models

Any backup strategy starts with a concept of a data repository. The backup
data needs to be stored somehow and probably should be organized to a
degree. It can be as simple as a sheet of paper with a list of all backup tapes
and the dates they were written or a more sophisticated setup with a
computerized index, catalog, or relational database. Different repository
models have different advantages. This is closely related to choosing a
backup rotation scheme.
Unstructured
An unstructured repository may simply be a stack of floppy disks or
CD-R media with minimal information about what was backed up and when.
This is the easiest to implement, but probably the least likely to achieve a
high level of recoverability.
Full + Incrementals
A Full + Incremental repository aims to make storing several copies of
the source data more feasible. At first, a full backup (of all files) is taken.
After that an incremental backup (of only the files that have changed
since the previous backup) can be taken. Restoring whole systems to a
certain point in time would require locating the full backup taken
previous to that time and all the incremental backups taken between that
full backup and the particular point in time to which the system is
supposed to be restored. This model offers a high level of assurance that
something can be restored, and it can be used with removable media such
as tapes and optical disks. The downside is dealing with a long series of
incrementals and the high storage requirements.
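The restore procedure just described — locate the most recent full backup taken before the target time, then apply every later incremental up to that time, in order — can be sketched in Python. The catalog structure here is a hypothetical illustration, not any particular product's format:

```python
from datetime import datetime

# Hypothetical backup catalog: one entry per backup run.
CATALOG = [
    {"type": "full",        "taken": datetime(2006, 1, 1)},
    {"type": "incremental", "taken": datetime(2006, 1, 2)},
    {"type": "incremental", "taken": datetime(2006, 1, 3)},
    {"type": "full",        "taken": datetime(2006, 1, 8)},
    {"type": "incremental", "taken": datetime(2006, 1, 9)},
]

def backups_needed(catalog, restore_point):
    """Return the latest full backup taken at or before restore_point,
    followed by every incremental between it and restore_point."""
    fulls = [b for b in catalog
             if b["type"] == "full" and b["taken"] <= restore_point]
    if not fulls:
        raise ValueError("no full backup precedes the restore point")
    base = max(fulls, key=lambda b: b["taken"])
    incrementals = [b for b in catalog
                    if b["type"] == "incremental"
                    and base["taken"] < b["taken"] <= restore_point]
    return [base] + sorted(incrementals, key=lambda b: b["taken"])

# Restoring to Jan 4 needs the Jan 1 full plus the Jan 2 and Jan 3 incrementals.
needed = backups_needed(CATALOG, datetime(2006, 1, 4))
```

Note that every backup in the returned list must be applied, in order, for the restore to be complete; losing any one incremental in the chain breaks recoverability, which is the model's main weakness.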
Mirror + Reverse Incrementals
A Mirror + Reverse Incrementals repository is similar to a Full +
Incrementals repository. The difference is instead of an aging full backup
followed by a series of incrementals, this model offers a mirror that
reflects the system state as of the last backup and a history of reverse
incrementals. One benefit of this is it only requires an initial full backup.
Each incremental backup is immediately applied to the mirror and the
files they replace are moved to a reverse incremental. This model is not
suited to removable media, since every backup must be made by
comparison against the mirror.
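A minimal sketch of the reverse-incremental idea, using in-memory dictionaries as stand-ins for real file trees (all names here are illustrative):

```python
def apply_backup(mirror, changed_files):
    """Apply one backup run to the mirror. The file versions it displaces
    are saved as a reverse incremental, so older states stay recoverable.
    Both arguments map filename -> contents."""
    reverse = {}
    for name, new_contents in changed_files.items():
        if name in mirror:
            reverse[name] = mirror[name]  # keep the overwritten version
        mirror[name] = new_contents
    return reverse

def roll_back(mirror, reverse):
    """Undo one backup run by re-applying its reverse incremental."""
    restored = dict(mirror)
    restored.update(reverse)
    return restored

mirror = {"a.txt": "v1", "b.txt": "v1"}       # initial full backup
rev1 = apply_backup(mirror, {"a.txt": "v2"})  # next backup run
# mirror now reflects the latest state; rev1 holds the displaced "v1".
```

Restoring the most recent state is trivial (it is the mirror itself); restoring an older state means walking back through the reverse incrementals, which is the inverse of the Full + Incrementals model.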
Continuous data protection
This model takes it a step further: instead of scheduling periodic
backups, the system logs every change on the host system as it happens.

Storage media
Regardless of the repository model that is used, the data has to be stored on
some data storage medium somewhere.
Magnetic tape
Magnetic tape has long been the most commonly used medium for bulk
data storage, backup, archiving, and interchange. Tape has typically had
an order of magnitude better capacity/price ratio when compared to hard
disk, but recently the ratios for tape and hard disk have become a lot
closer. There are myriad formats, many of which are proprietary or
specific to certain markets like mainframes or a particular brand of
personal computers. Tape is a sequential access medium, so even though
access times may be poor, the rate of continuously writing or reading
data can actually be very fast. Some new tape drives are even faster than
modern hard disks.
Hard disk
The capacity/price ratio of hard disk has been rapidly improving for
many years. This is making it more competitive with magnetic tape as a
bulk storage medium. The main advantages of hard disk storage are the
high capacity and low access times.
Optical disc
A CD-R can be used as a backup device. One advantage of CDs is that
they can hold 650 MiB of data on a 12 cm (4.75") reflective optical disc.
(This is roughly equivalent to 12,000 images or 200,000 pages of text.)
They can also be read back on any machine with a CD-ROM drive. Another
common format is DVD+R. Many optical disc formats are WORM type,
which makes them useful for archival purposes, since the data can't be
changed once written.
Floppy disk
During the 1980s and early 1990s, many personal/home computer users
associated backup mostly with copying floppy disks. The low data
capacity of a floppy disk makes it an unpopular choice in 2006.
Solid state storage
Also known as flash memory, thumb drives, USB keys, compact flash,
smart media, memory stick, Secure Digital cards, etc., these devices are
relatively costly for their low capacity, but offer excellent portability and
ease of use.
Remote backup service
As broadband internet access becomes more widespread, remote backup
services are gaining in popularity. Backing up via the internet to a remote
location can protect against some worst-case scenarios, such as
someone's house burning down, destroying any backups along with
everything else. A drawback to remote backup is the internet connection
is usually substantially slower than the speed of local data storage
devices, so this can be a problem for people with large amounts of data. It
also has the risk of potentially losing control over personal or sensitive data.

Managing the data repository

Regardless of the data repository model or data storage media used for
backups, a balance needs to be struck between accessibility, security and
cost.
On-line storage (sometimes called secondary storage) is typically the
most accessible type of data storage. A good example would be a large
disk array. This type of storage is very convenient and speedy, but is
relatively expensive and is typically located in close proximity to the
systems being backed up. This proximity is a problem in the case of a
disaster. Additionally, on-line storage is vulnerable to being deleted or
overwritten, either by accident or in the wake of a data-deleting virus
payload.
Near-line storage (sometimes called tertiary storage) is typically less
accessible and less expensive than on-line storage, but still useful for
backup data storage. A good example would be a tape library. A
mechanical device is usually involved in moving media units from
storage into a drive where the data can be read or written.
Off-line storage is similar to near-line, except it requires human
interaction to make storage media available. This can be as simple as
storing backup tapes in a file cabinet.
Off-site vault
To protect against a disaster or other site-specific problem, many people
choose to send backup media to an off-site vault. The vault can be as
simple as the sysadmin's home office or as sophisticated as a
disaster-hardened, temperature-controlled, high-security bunker that has facilities
for backup media storage.
Data Recovery Center
In the event of a disaster, the data on backup media alone will not be
sufficient for recovery. Computer systems onto which the data can be restored and
proper networks are necessary too. Some organizations have their own
data recovery centers that are equipped for this scenario. Other
organizations contract this out to a third-party recovery center.

Selection, access, and manipulation of data

Approaches to backing up files

Deciding what to back up at any given time is a harder process than it seems.
By backing up too much redundant data, the data repository will fill up too
quickly. If too little data is backed up, critical information can get lost.
The key concept is to back up only files that have changed.
Copying files
Just copy the files in question somewhere.
Filesystem dump
Copy the filesystem that holds the files in question somewhere. This
usually involves unmounting the filesystem and running a program like
dump. This is also known as a raw partition backup. This type of backup
has the possibility of running faster than a backup that simply copies
files. A feature of some dump software is the ability to restore specific
files from the dump image.
Identification of changes
Some filesystems have an archive bit for each file that says it was
recently changed. Some backup software instead looks at the modification
date of the file and compares it with the time of the last backup to
determine whether the file was changed.
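The timestamp comparison just described can be sketched as follows; the directory walk and the epoch-seconds threshold are illustrative, and real backup software would also handle permissions errors and symbolic links:

```python
import os

def changed_since(directory, last_backup_time):
    """Return paths of regular files modified after the last backup,
    judged by their filesystem modification time (mtime, in epoch
    seconds)."""
    changed = []
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            if os.path.getmtime(path) > last_backup_time:
                changed.append(path)
    return changed
```

One caveat worth noting: mtime-based selection misses files whose timestamps were deliberately reset, which is one reason some systems use the archive bit or checksums instead.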
Block Level Incremental
A more sophisticated method of backing up changes to files is to back up
only the blocks within the file that changed. This requires a higher
level of integration between the filesystem and the backup software.
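The block-level comparison can be sketched like this, assuming the backup software keeps the per-block hashes recorded at the previous backup (the function names and block size are hypothetical):

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative block size

def block_hashes(data, block_size=BLOCK_SIZE):
    """SHA-256 digest of each fixed-size block. These are kept from one
    backup to the next, so old file contents need not be re-read."""
    return [hashlib.sha256(data[i:i + block_size]).digest()
            for i in range(0, len(data), block_size)]

def changed_blocks(new_data, old_hashes, block_size=BLOCK_SIZE):
    """Return {block_index: block_bytes} for blocks whose hash differs
    from the hash recorded at the previous backup."""
    delta = {}
    for index, digest in enumerate(block_hashes(new_data, block_size)):
        if index >= len(old_hashes) or digest != old_hashes[index]:
            delta[index] = new_data[index * block_size:(index + 1) * block_size]
    return delta
```

A small edit in a large file then costs only one block of backup storage rather than a full copy, which is the payoff of the tighter filesystem integration the text mentions.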
Versioning file system
A versioning filesystem keeps track of all changes to a file and makes
those changes accessible to the user. This is a form of backup that is
integrated into the computing environment.

Approaches to backing up live data

If a computer system is in use while it is being backed up, the possibility of
files being open for reading or writing is real. If a file is open, the contents
on disk may not correctly represent what the owner of the file intends. This
is especially true for database files of all kinds.
When attempting to understand the logistics of backing up open files, one
must consider that the backup process could take several minutes to back up
a large file such as a database. In order to back up a file that is in use, it is
vital that the entire backup represent a single-moment snapshot of the file,
rather than a simple copy of a read-through. This represents a challenge
when backing up a file that is constantly changing. Either the database file
must be locked to prevent changes, or a method must be implemented to
ensure that the original snapshot is preserved long enough to be copied, all
while changes are being preserved. If a file is backed up while it is being
changed, so that the first part of the backup reflects the data before a
change and later parts reflect the data after it, the result is a corrupted
file that is unusable, as most large files contain
internal references between their various parts that must remain consistent
throughout the file.
Snapshot - copy-on-write
A snapshot is an instantaneous function of some filesystems that presents
a copy of the filesystem as if it were frozen in a specific point in time.
Closing all files, taking a snapshot, then reopening the files and running
the backup on the snapshot is an effective way to work around this
problem.
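The copy-on-write mechanism behind such snapshots can be modeled in a few lines. This is a toy in-memory model, not any real filesystem's implementation:

```python
class SnapshotFS:
    """Toy copy-on-write filesystem: taking a snapshot is instantaneous,
    and old contents are preserved only when a file is later overwritten."""

    def __init__(self, files):
        self.files = dict(files)      # live data
        self.snapshot_saved = None    # old versions saved since the snapshot

    def take_snapshot(self):
        self.snapshot_saved = {}      # nothing is copied up front

    def write(self, name, contents):
        if self.snapshot_saved is not None and name not in self.snapshot_saved:
            # copy-on-write: preserve the version the snapshot should see
            self.snapshot_saved[name] = self.files.get(name)
        self.files[name] = contents

    def read_snapshot(self, name):
        """Read a file as it was at the moment the snapshot was taken."""
        if self.snapshot_saved is not None and name in self.snapshot_saved:
            return self.snapshot_saved[name]
        return self.files[name]

fs = SnapshotFS({"doc.txt": "v1"})
fs.take_snapshot()
fs.write("doc.txt", "v2")  # live data moves on; the snapshot does not
```

A backup program would then read every file through the snapshot view while normal writes continue against the live data.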
Open file backup - file locking
Many backup software packages feature the ability to back up open files.
Some simply check for openness and try again later.
Hot database backup
Some database management systems offer a means to generate a backup
image of the database while it is online and usable ("hot"). This usually
includes a consistent image of the data files at a certain point in time plus
a log of changes made while the procedure is running.
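A toy model of this "consistent image plus change log" scheme, with dictionaries standing in for data files and a list of key/value pairs standing in for the log (no real DBMS works this simply, but the restore logic is the same shape):

```python
def hot_backup(database, change_log):
    """Simulate a hot backup: capture a point-in-time image of the data
    files plus the changes logged while the backup procedure ran."""
    image = dict(database)    # consistent image of the data files
    log = list(change_log)    # changes made during the procedure
    return image, log

def restore(image, log):
    """Rebuild the database by loading the image and replaying the log
    of changes in order."""
    db = dict(image)
    for key, value in log:
        db[key] = value
    return db
```

The replay step is what lets the backup stay consistent even though the database kept changing while the image was being written.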

Backing up non-file data

Not all information stored on the computer is stored in files. Accurately
recovering a complete system from scratch requires keeping track of this
non-file data too.
System description
System specifications are needed to procure an exact replacement after a
disaster.
File metadata
Each file's permissions, owner, group, ACLs, and any other metadata
need to be backed up for a restore to properly recreate the original
environment.
Partition layout
The layout of the original disk, as well as partition tables and filesystem
settings, is needed to properly recreate the original system.
Boot sector
The boot sector can sometimes be recreated more easily than saving it.
Still, it usually isn't a normal file and the system won't boot without it.
Deleted files
How does one back up the fact that a file once existed (and could be restored
from backups) but is now deleted from the system and shouldn't be part of
any potential restore?
Moved files
How does one back up the fact that a file has moved?

Manipulating the backed up data

It is frequently useful to manipulate the backed up data to optimize the
backup process. These manipulations can improve backup speed, restore
speed, data security, and media usage.
Data compression can be very useful for fitting the maximum amount of
source data onto a limited amount of backup storage media. Compression
is frequently performed by tape drives transparently.
When multiple similar systems are backed up to the same destination
storage device, there exists the potential for much redundancy within the
backed up data. If 20 Windows workstations were backed up to the same
data repository, they might share a common set of system files. The data
repository really only needs to store one copy of those files to be able to
restore any one of those workstations. This technique can be applied at
the file level or even on raw blocks of data, potentially resulting in a
massive reduction in required storage space.
Sometimes backup jobs are duplicated to a second set of storage media.
This can be done to rearrange the backup images to optimize restore
speed, or to have a second copy for safekeeping in a different location or
on a different storage medium.
High capacity removable storage media such as backup tapes present a
data security risk if they are lost. Encrypting the data on these media can
mitigate this problem, but presents new problems. First, encryption is a
CPU-intensive process that can slow down backup speeds. Second, once
data has been encrypted, it cannot be effectively compressed. Third, the
security of the encrypted backups is only as effective as the security of
the key management policy.
Sometimes backup jobs are copied to a staging disk before being copied
to tape. This can be useful if there is a problem matching the speed of the
final destination device with the source system as is frequently faced in
network-based backup systems.
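The deduplication technique described above (storing one copy of identical content shared by many backed-up systems) can be sketched as a content-addressed store; the class and file names here are hypothetical:

```python
import hashlib

class DedupStore:
    """Content-addressed backup repository: identical file contents are
    stored once, no matter how many systems back up the same file."""

    def __init__(self):
        self.contents = {}   # digest -> file contents, stored once
        self.manifests = {}  # (system, path) -> digest

    def backup(self, system, path, data):
        digest = hashlib.sha256(data).hexdigest()
        self.contents.setdefault(digest, data)  # stored only if new
        self.manifests[(system, path)] = digest

    def restore(self, system, path):
        return self.contents[self.manifests[(system, path)]]

store = DedupStore()
# Two workstations backing up the same system file:
store.backup("ws1", "C:/Windows/notepad.exe", b"same bytes")
store.backup("ws2", "C:/Windows/notepad.exe", b"same bytes")
# Only one copy of the shared contents is kept in the repository.
```

Applying the same idea to fixed-size blocks instead of whole files catches partial overlaps between files as well, at the cost of a larger index.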