
INDEX

1. SECURITY BASICS
1.1 WHY SECURITY
1.2 LIST OF CERTIFICATIONS VENDOR SPECIFIC/VENDOR NEUTRAL
1.3 OPEN SOURCE AND SECURITY
1.4 SECURITY CONCEPTS

2. INTERNET SECURITY
2.1 PROTOCOLS

3. INFRASTRUCTURE SECURITY
3.1 FIREWALL
3.2 ROUTER
3.3 SWITCHES
3.4 WIRELESS
3.5 MODEMS
3.6 RAS (REMOTE ACCESS SERVER)
3.7 TELECOM / PBX (PRIVATE BRANCH EXCHANGE)
3.8 VPN (VIRTUAL PRIVATE NETWORK)
3.9 IDS (INTRUSION DETECTION SYSTEM)
3.10 NETWORK MONITORING / DIAGNOSTICS
3.11 WORKSTATIONS
3.12 SERVERS (MAIL, LDAP, WEB & APPLICATION)
3.13 MOBILE DEVICES
3.14 SECURITY ZONES
3.15 DMZ (DEMILITARIZED ZONE)
3.16 INTRANET
3.17 EXTRANET
3.18 VLANS (VIRTUAL LOCAL AREA NETWORK)
3.19 NAT (NETWORK ADDRESS TRANSLATION)
3.20 ACCESS CARD
3.21 OPERATING SYSTEMS AND DATABASES

4. IDENTITY/ACCESS MANAGEMENT
4.1 SINGLE SIGN-ON
4.2 AUTHENTICATION
4.3 ACCESS CONTROL AND MANAGEMENT
4.4 USER IDENTITY MANAGEMENT
4.5 PROVISIONING
4.6 DIRECTORY SERVICES/LDAP
4.7 FEDERATION

5. CRYPTOGRAPHY
5.1 ALGORITHMS
5.2 PUBLIC KEY INFRASTRUCTURE (PKI)
5.3 DIGITAL SIGNATURES

6. COMPLIANCE SERVICES & STANDARDS
6.1 FFIEC COMPLIANCE
6.2 SOX COMPLIANCE
6.3 GLBA COMPLIANCE
6.4 HIPAA COMPLIANCE
6.5 PCI COMPLIANCE
6.6 BS 7799

7. SECURITY IN SOFTWARE DEVELOPMENT
7.1 THE SOFTWARE LIFE CYCLE
7.2 ROLE OF SECURITY IN TESTING
7.3 UNDERSTAND THE APPLICATION ENVIRONMENT AND SECURITY CONTROLS
7.4 DATABASES AND DATA WAREHOUSING VULNERABILITIES, THREATS AND PROTECTIONS
7.5 APPLICATION & SYSTEM DEVELOPMENT KNOWLEDGE SECURITY-BASED SYSTEMS
7.6 WEB SERVICES SECURITY AND SOA
7.6 SYSTEM VULNERABILITIES AND THREATS
7.7 WEBSERVICES SECURITY AND SOA

8. ENTERPRISE TECHNICAL ASSESSMENTS & AUDITS
8.1 APPLICATION
8.2 NETWORK
8.3 OWASP
8.4 DATA SECURITY BEST PRACTICES
8.5 ETHICAL HACKING

9. SECURITY MODELS & ARCHITECTURE
9.1 A, I, C
9.2 OPERATING SYSTEM ARCHITECTURES
9.3 TRUSTED COMPUTING BASE AND SECURITY MECHANISMS
9.4 PROTECTION MECHANISMS WITHIN AN OPERATING SYSTEM
9.5 SECURITY MODELS - EXAMPLES ETC.

10. ACCESS CONTROL
10.1 CONCEPTS: ACCESS, SUBJECT
10.2 ACCESS CONTROL MODELS

11. WIRELESS SECURITY
11.1 WTLS (WIRELESS TRANSPORT LAYER SECURITY)
11.2 802.11 AND 802.11X
11.3 WEP / WAP (WIRED EQUIVALENT PRIVACY / WIRELESS APPLICATION PROTOCOL)

1. Security Basics
1.1 Why Security
Why Security Matters
One of the biggest problems in computer security is that people have trouble believing
that anything bad can happen to them until it does. The truth is that bad things do
happen and they happen more often than you might think. Surveys conducted by the
Computer Security Institute and the U.S. Federal Bureau of Investigation (FBI) estimate
that 90 percent of corporations and government agencies detected computer security
breaches every year. Of those corporations and agencies, 80 percent acknowledged
resulting financial losses.
Regardless of how or why your business is attacked, recovery usually takes significant
time and effort. Imagine if your computer systems were unavailable for a week. Imagine
if you lost all the data stored on all the computers in your company. Imagine if your worst
competitor was able to obtain a list of your customers, along with sales figures and sales
notes. How long would it take before you noticed? What would these breaches cost your
company? Could you afford these losses?

It seems like common sense. You wouldn't leave your building unlocked at night. The
same is true of information security, and a few simple steps can make you a lot less
vulnerable. Technology experts have a way of making basic security seem like a huge and
difficult issue. Luckily, securing businesses is easier than you might think.
Of course, there is no way to guarantee 100 percent security. As the old saying goes, "You
can make a door only so strong before it's easier to come through the wall." However,
you can achieve a reasonable level of security and be prepared in case breaches do
happen. Properly weighing risks and consequences against the cost of prevention is a
good place to start.
1.2 List of Certifications: Vendor Specific / Vendor Neutral
In alphabetical order:
Certified Ethical Hacker (CEH)
Certified Information Systems Auditor (CISA)
Certified Information Security Manager (CISM)
Certified Information Systems Security Professional (CISSP)
Certified Vulnerability Assessor (CVA)
Global Information Assurance Certification (GIAC)
Security+
1.3 Open Source and Security
Bruce Schneier is a well-known expert on computer security and cryptography. He argues
that smart engineers should "demand open source code for anything related to security"
[Schneier 1999], and he also discusses some of the preconditions which must be met to
make open source software secure. Vincent Rijmen, a developer of the winning Advanced
Encryption Standard (AES) encryption algorithm, believes that the open source nature of
Linux provides a superior vehicle for making security vulnerabilities easier to spot and fix:
"Not only because more people can look at it, but, more importantly, because the model
forces people to write more clear code, and to adhere to standards. This in turn facilitates
security review" [Rijmen 2000].
1.4 Security concepts
Confidentiality
Confidentiality refers to the assurance that only authorized individuals are able to view
and access data and systems.
Breaches of confidentiality can occur when data is not handled in a manner adequate to
safeguard the confidentiality of the information concerned. Such a breach can occur by
word of mouth, or by printing, copying, e-mailing or otherwise reproducing documents and
other data. The classification of the information should determine its confidentiality and
hence the appropriate safeguards.
Confidentiality is related to the broader concept of data privacy: limiting access to
individuals' personal information. In the US, a range of state and federal laws, with
abbreviations like FERPA, FSMA, and HIPAA, set the legal terms of privacy.
Integrity
Integrity refers to the assurance that information is authentic and complete: the
trustworthiness of information resources, and the assurance that information can be relied
upon to be sufficiently accurate for its purpose.
"Data integrity" means that data has not been changed inappropriately, whether by
accident or by deliberately malicious activity. It also includes "origin" or "source" integrity,
that is, that the data actually came from the person or entity you think it did, rather than
from an imposter.
For example, making copies (say by e-mailing a file) of a sensitive document threatens
both the confidentiality and the integrity of the information. Why? Because, by making one
or more copies, the data is then at risk of change or modification.
Authentication
Authentication is any process by which you verify that someone is who they claim they
are. This usually involves a username and a password, but can include any other method
of demonstrating identity, such as a smart card, retina scan, voice recognition, or
fingerprints.
Basic authentication
As the name implies, basic authentication is the simplest method of authentication, and
for a long time was the most common authentication method used. However, other
methods of authentication have recently overtaken basic authentication in common usage,
due to usability issues.
How basic authentication works
When a particular resource has been protected using basic authentication, the web server
sends a 401 Authentication Required response to the request, in order to notify the client
that user credentials must be supplied before the resource can be returned as requested.

Upon receiving a 401 response header, the client's browser, if it supports basic
authentication, will ask the user to supply a username and password to be sent to the
server. If you are using a graphical browser, such as Netscape or Internet Explorer, what
you will see is a box which pops up and gives you a place to type in your username and
password, to be sent back to the server. If the username is in the approved list, and if the
password supplied is correct, the resource will be returned to the client.
Because the HTTP protocol is stateless, each request is treated in the same way, even
when the requests come from the same client. That is, every request for a resource on the
server must supply the authentication credentials again in order to receive the resource.
Fortunately, the browser takes care of the details here, so that you only have to type in
your username and password one time per browser session - that is, you might have to
type it in again the next time you open up your browser and visit the same web site.
Along with the 401 response, certain other information will be passed back to the client.
In particular, it sends a name which is associated with the protected area of the web site.
This is called the realm, or just the authentication name. The client browser caches the
username and password that you supplied, and stores it along with the authentication
realm, so that if other resources are requested from the same realm, the same username
and password can be returned to authenticate that request without requiring the user to
type them in again. This caching is usually just for the current browser session, but some
browsers allow you to store them permanently, so that you never have to type in your
password again.
The authentication name, or realm, will appear in the pop-up box, in order to identify
what the username and password are being requested for.
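A minimal sketch of this exchange from the client side, using the Python requests library against a hypothetical protected URL; the realm name and credentials are made up for illustration.

# Minimal sketch: calling a basic-auth protected resource with the Python
# "requests" library. The URL and credentials are hypothetical.
import requests
from requests.auth import HTTPBasicAuth

url = "https://example.com/protected/report"

# Without credentials the server answers 401 and names the realm
# in its WWW-Authenticate header.
resp = requests.get(url)
print(resp.status_code)                          # e.g. 401
print(resp.headers.get("WWW-Authenticate"))      # e.g. 'Basic realm="Staff area"'

# With credentials, the username/password pair is sent Base64-encoded
# (not encrypted) in the Authorization header of every request.
resp = requests.get(url, auth=HTTPBasicAuth("alice", "s3cret"))
print(resp.status_code)                          # 200 if the credentials are accepted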
Form based authentication
The authentication challenge is an HTML form with one or more text input fields for user
credentials.
How Form based authentication works
In a typical form-based authentication, text boxes are provided for the user name and
password. Users enter their credentials in these fields. The most common credential
choices are username and password, but any user attributes can be used, for example,
user name, password, and domain. A Submit button posts the content of the form. When
the user clicks the Submit button, the form data is posted to the Web server.
You may want to use form-based authentication for reasons such as the following:

1. To use your organization's look and feel in the authentication process. For example, a
custom form can include a company logo and a welcome message instead of the standard
username and password pop-up window used in Basic authentication.
2. To gather additional information at the time of login.
3. To provide additional functionality with the login procedure, such as a link to a page
for lost-password management.
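As an illustration of the mechanism described above, here is a minimal sketch of a form-based login handler. It uses the Python Flask framework purely as an example; the route names, user store and secret key are hypothetical, and a real system would store salted password hashes rather than plain passwords.

# Minimal sketch of form-based authentication (Flask used as an example).
from flask import Flask, request, session, redirect, url_for

app = Flask(__name__)
app.secret_key = "replace-with-a-random-secret"

USERS = {"alice": "s3cret"}  # demo only; real systems store salted hashes

LOGIN_FORM = """
<form method="post" action="/login">
  <input name="username" placeholder="User name">
  <input name="password" type="password" placeholder="Password">
  <button type="submit">Submit</button>
</form>
"""

@app.route("/login", methods=["GET", "POST"])
def login():
    if request.method == "POST":
        # The browser POSTs the form fields; the server checks them and
        # records the authenticated user in the session.
        user = request.form.get("username", "")
        if USERS.get(user) == request.form.get("password", ""):
            session["user"] = user
            return redirect(url_for("account"))
        return "Login failed", 401
    return LOGIN_FORM

@app.route("/account")
def account():
    if "user" not in session:
        return redirect(url_for("login"))
    return "Welcome, " + session["user"]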
Client Certificate Authentication
The client certificate challenge method uses the Secure Sockets Layer version 3 (SSLv3)
certificate authentication protocol built into browsers and Web servers. Authenticating
users with a client certificate requires the client to establish an SSL connection with a
Web server that has been configured to process client certificates.
How Client Certificate Authentication works
When the user requests access to a resource over SSL, the Web server provides its server
certificate, which allows the client to establish an SSL session. The Web server then asks
the client for a client-side certificate. If the client does not present a certificate, the SSL
connection with the client is closed and client-side certificate authentication is not
attempted.
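For illustration, the following sketch presents a client certificate from a scripted client using the Python requests library; the URL, certificate files and CA bundle are hypothetical.

# Minimal sketch: presenting a client certificate with the Python
# "requests" library. Paths and URL are hypothetical.
import requests

resp = requests.get(
    "https://intranet.example.com/secure/",
    cert=("client.crt", "client.key"),   # client-side certificate and its private key
    verify="corporate-ca.pem",           # CA bundle used to verify the server certificate
)
print(resp.status_code)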

Non-repudiation
Non-repudiation means that it can be verified that the sender and the recipient were, in
fact, the parties who claimed to send or receive the message, respectively. Non-repudiation
is a way to guarantee that the sender of a message cannot later deny having sent the
message and that the recipient cannot deny having received the message.
Non-repudiation can be obtained through the use of digital signatures, confirmation
services, and timestamps.
Access control
Access control is the process by which users are identified and granted certain privileges
to information, systems, or resources. The primary objective of access control is to
preserve and protect the confidentiality, integrity, and availability of information,
systems, and resources.
Access control devices properly identify people, and verify their identity through an
authentication process so they can be held accountable for their actions. Good access

control systems record and timestamp all communications and transactions so that access
to systems and information can be audited at later dates.
Authorization
Authorization determines what services and resources an authenticated user may access.
Authorization may be based on your credentials and roles in the organization. You may be
an authenticated user, but you must still be authorized to access a given resource.
Authentication ensures that a user is who he or she claims to be; authorization determines
what that user is allowed to do.
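To make the distinction from authentication concrete, here is a minimal sketch of a role-based authorization check; the user names and roles are hypothetical.

# Minimal sketch of role-based authorization, separate from authentication.
ROLES = {"alice": {"employee", "payroll-admin"}, "bob": {"employee"}}

def authorize(user, required_role):
    """Return True only if the (already authenticated) user holds the role."""
    return required_role in ROLES.get(user, set())

print(authorize("alice", "payroll-admin"))  # True
print(authorize("bob", "payroll-admin"))    # False: authenticated, but not authorized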
Availability
Availability refers to the assurance that the systems responsible for delivering, storing
and processing information are accessible when needed, by those who need them.
An information system that is not available when you need it is at least as bad as none at
all. It may be much worse, depending on how reliant the organization has become on a
functioning computer and communications infrastructure.
Availability, like other aspects of security, may be affected by purely technical issues
(e.g., a malfunctioning part of a computer or communications device), natural phenomena
(e.g., wind or water), or human causes (accidental or deliberate).

2. Internet Security

2.1 Protocols
A Protocol can be defined as the rules governing the syntax, semantics, and
synchronization of communication.
For example, there are protocols for the data interchange at the hardware device level and
protocols for data interchange at the application program level.
In the standard model known as Open Systems Interconnection (OSI), there are one or
more protocols at each layer in the telecommunication exchange that both ends of the
exchange must recognize and observe. Protocols are often described in an industry or
international standard.

SSL/TLS
SSL
Secure Sockets Layer or SSL is a protocol which is used to communicate over the
Internet in a secure fashion. SSL is all about encryption. It is the key to E-commerce
Security.
Why SSL? To understand this, let us compare communication between computers on the
Internet to a telephone conversation between two people.
What issues arise in a telephone conversation? Who are you speaking with? Is someone
listening to your conversation?
Similar issues arise in communication between computers on the Internet: being sure you
are connected to the right website (say a bank website) rather than a phisher's scam
website, and keeping your data safe and out of malicious hands during transit on the
Internet.
SSL Details
SSL technology relies on the concept of public key cryptography to accomplish its
tasks. In normal encryption, two communicating parties each share a password or key,
and this is used to both encrypt and decrypt messages. While this is a very simple and
efficient method, it doesn't solve the problem of giving the password to someone you
have not yet met or trust. In public key cryptography, each party has two keys, a public
key and a private key. Information encrypted with a person's public key can only be
decrypted with the private key and vice versa. Each user publicly tells the world what his
public key is but keeps his private key for himself.
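As a rough illustration of this idea (not how SSL itself exchanges keys on the wire), here is a minimal sketch using recent versions of the third-party Python cryptography package; the message is hypothetical.

# Minimal sketch of public/private key encryption with the "cryptography" package.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Each party generates a key pair; the public key is published,
# the private key is kept secret.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Anyone can encrypt with the public key...
ciphertext = public_key.encrypt(b"meet at noon", oaep)

# ...but only the holder of the matching private key can decrypt.
print(private_key.decrypt(ciphertext, oaep))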
How SSL Works
SSL Certificates
The SSL certificate helps to prove the site belongs to who it says it belongs to and
contains information about the certificate holder, the domain that the certificate was
issued to, the name of the Certificate Authority who issued the certificate, the root and the
country it was issued in.
SSL certificates come in 40-bit and 128-bit varieties, though 40-bit encryption has been
hacked. As such, you definitely should be looking at getting a 128-bit certificate.


According to SSL certificate vendor VeriSign, in order to have 128-bit encryption you
need a certificate that has SGC (Server Gated Cryptography) capabilities.
I. Obtaining an SSL Certificate
There are two principal ways of getting an SSL certificate: you can buy one from a
certificate vendor, or you can "self-sign" your own certificate. Self-signing a certificate is
like issuing yourself a driver's license. Roads are safer because governments issue
licenses; making sure those roads are safe is the role of the certificate authorities.
Certificate authorities make sure the site is legitimate. Self-signed certificates will trigger
a warning window in most browser configurations indicating that the certificate was not
recognized. A site that conveys trust is also more likely to be a site that makes (more)
money. Hence, let us see how to obtain an SSL certificate from a CA.
XYZ Inc. intends to secure their customer checkout process, account management, and
internal employee correspondence on their website, xyz.com.
Step 1: XYZ creates a Certificate Signing Request (CSR); during this process, a private
key is generated.
Step 2: XYZ goes to a trusted, third-party Certificate Authority, such as XRamp. XRamp
takes the certificate signing request and validates XYZ in a two-step process: XRamp
validates that XYZ has control of the domain xyz.com and that XYZ Inc. is an official
organization listed in public government records.
Step 3: When the validation process is complete, XRamp gives XYZ a new public key
(certificate) signed with XRamp's private key.
Step 4: XYZ installs the certificate on their webserver(s).
II. How Customers Communicate with the Server using SSL
Step 1: A customer makes a connection to xyz.com on an SSL port, typically 443. This
connection is denoted with https instead of http.
Step 2: xyz.com sends back its public key (certificate) to the customer. Once the customer
receives it, his/her browser decides if it is alright to proceed: the xyz.com certificate must
not be expired; the xyz.com certificate must be for xyz.com only; and the client must have
the XRamp root certificate installed in its browser certificate store. 99.9% of all modern
browsers (1998+) include the XRamp root certificate. If the customer has the XRamp
trusted public key, then they can trust that they are really communicating with XYZ, Inc.
Step 3: If the customer decides to trust the certificate, then the customer sends xyz.com
his/her public key.
Step 4: xyz.com next creates a unique hash and encrypts it using both the customer's
public key and xyz.com's private key, and sends this back to the client.
Step 5: The customer's browser decrypts the hash. This process shows that xyz.com sent
the hash and that only the customer is able to read it.
Step 6: The customer and the website can now securely exchange information.
Uses for Secure Sockets Layer (SSL)
Almost any service on the Internet can be protected with SSL. WebMail, Control Panels,
POP, IMAP, SMTP, FTP and more are a few of the many applications for SSL
Certificates.
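A minimal sketch of the client side of such a connection, using Python's standard ssl module against a hypothetical host; certificate validation and the handshake described above happen inside wrap_socket().

# Minimal sketch: opening a verified TLS connection with Python's ssl module.
import socket
import ssl

context = ssl.create_default_context()   # loads trusted root CAs, enables hostname checking

with socket.create_connection(("www.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
        # The handshake has completed: the server certificate chained to a
        # trusted root and matched the host name, or an exception was raised.
        print(tls.version())                    # e.g. 'TLSv1.3'
        print(tls.getpeercert()["subject"])     # certificate holder details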
TLS Transport Layer Security

The TLS protocol(s) allow applications to communicate across a network in a way
designed to prevent eavesdropping, tampering, and message forgery.
TLS provides endpoint authentication and communications privacy over the Internet
using cryptography. Typically, only the server is authenticated (i.e., its identity is ensured)
while the client remains unauthenticated; this means that the end user (be that a person, or
an application such as a web browser), can be sure of whom they are "talking" to. The
next level of security - both ends of the "conversation" being sure of who they are
"talking" to - is known as mutual authentication. Mutual authentication requires public
key infrastructure (PKI) deployment to clients.
TLS involves three basic phases:
1. Peer negotiation for algorithm support
2. Public-key-encryption-based key exchange and certificate-based authentication
3. Symmetric-cipher-based traffic encryption
TLS Enhancements to SSL
The Keyed-Hashing for Message Authentication Code (HMAC) algorithm replaces the
SSL Message Authentication Code (MAC) algorithm. HMAC produces more secure
hashes than the MAC algorithm. The HMAC produces an integrity check value as the
MAC does, but with a hash function construction that makes the hash much harder to
break.
TLS is standardized in RFC 2246.
Many new alert messages are added.
In TLS, it is not always necessary to include certificates all the way back to the root CA.
You can use an intermediary authority.
TLS specifies padding block values that are used with block cipher algorithms. RC4,
which is used by Microsoft, is a streaming cipher, so this modification is not relevant.
Fortezza algorithms are not included in the TLS RFC, because they are not open for
public review. (This is Internet Engineering Task Force (IETF) policy.)
Minor differences exist in some message fields.

Benefits of TLS/SSL
TLS/SSL provides numerous benefits to clients and servers over other methods of
authentication, including: strong authentication, message privacy, and integrity;
interoperability; algorithm flexibility; ease of deployment; and ease of use.

Strong authentication, message privacy, and integrity
TLS/SSL can help to secure transmitted data using encryption. TLS/SSL also
authenticates servers and, optionally, authenticates clients to prove the identities of
parties engaged in secure communication. It also provides data integrity through an
integrity check value. In addition to protecting against data disclosure, the TLS/SSL
security protocol can be used to help protect against masquerade attacks, man-in-the-middle
or bucket brigade attacks, rollback attacks, and replay attacks.

Interoperability
TLS/SSL works with most Web browsers, including Microsoft Internet Explorer and
Netscape Navigator, and on most operating systems and Web servers, including the
Microsoft Windows operating system, UNIX, Novell, Apache (version 1.3 and later),
Netscape Enterprise Server, and Sun Solaris. It is often integrated in news readers, LDAP
servers, and a variety of other applications.

Algorithm flexibility
TLS/SSL provides options for the authentication mechanisms, encryption algorithms, and
hashing algorithms that are used during the secure session. Note: data can be encrypted
and decrypted, but you cannot reverse engineer a hash. Hashing is a one-way process;
running the process backward will not recreate the original data. This is why a new hash
is computed and then compared to the sent hash.

Ease of deployment
Many applications use TLS/SSL transparently on a Windows Server 2003 operating
system. You can use TLS for more secure browsing when you are using Internet Explorer
and Internet Information Services (IIS) and, if the server already has a server certificate
installed, you only have to select the check box.

Ease of use
Because you implement TLS/SSL beneath the application layer, most of its operations are
completely invisible to the client. This allows the client to have little or no knowledge of
the security of communications and still be protected from attackers.

Limitations of TLS/SSL
There are a few limitations to using TLS/SSL, including:

Increased processor load
This is the most significant limitation to implementing TLS/SSL. Cryptography,
specifically public key operations, is CPU-intensive. As a result, performance varies
when you are using SSL. Unfortunately, there is no way to know how much performance
you will lose. The performance varies, depending on how often connections are
established and how long they last. TLS uses the greatest resources while it is setting up
connections.

Administrative overhead
A TLS/SSL environment is complex and requires maintenance; the system administrator
must configure the system and manage certificates.
What is the difference between SSL and TLS? SSL stands for Secure Sockets Layer.
Netscape originally developed this protocol to transmit information privately, ensure
message integrity, and guarantee the server identity. SSL works mainly through using
public/private key encryption on data. It is commonly used on web browsers, but SSL
may also be used with email servers or any kind of client-server transaction. For example,
some instant messaging servers use SSL to protect conversations. TLS stands for
Transport Layer Security. The Internet Engineering Task Force (IETF) created TLS as the
successor to SSL. It is most often used as a setting in email programs, but, like SSL, TLS
can have a role in any client-server transaction. The differences between the two
protocols are very minor and very technical, but they are different standards. TLS uses
stronger encryption algorithms and has the ability to work on different ports.
Additionally, TLS version 1.0 does not interoperate with SSL version 3.0.
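For illustration, the sketch below shows a client refusing legacy protocol versions using Python's standard ssl module; the host name is hypothetical and the TLSVersion setting assumes Python 3.7 or later.

# Minimal sketch: requiring a modern TLS version so the client will not
# fall back to SSLv3 or early TLS.
import socket
import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2   # reject SSLv3, TLS 1.0 and TLS 1.1

with socket.create_connection(("mail.example.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="mail.example.com") as tls:
        print("negotiated:", tls.version())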

HTTP/S

HTTP
Hypertext Transfer Protocol (HTTP) is a method used to transfer or convey information
on the World Wide Web. Its original purpose was to provide a way to publish and retrieve
HTML pages.
Development of HTTP was coordinated by the World Wide Web Consortium and the
Internet Engineering Task Force, culminating in the publication of a series of RFCs, most
notably RFC 2616 (1999), which defines HTTP/1.1, the version of HTTP in common use
today.
HTTP is a request/response protocol between clients and servers. The originating client,
such as a web browser, spider, or other end-user tool, is referred to as the user agent. The
destination server, which stores or creates resources such as HTML files and images, is
called the origin server. In between the user agent and origin server may be several
intermediaries, such as proxies, gateways, and tunnels.
An HTTP client initiates a request by establishing a Transmission Control Protocol (TCP)
connection to a particular port on a remote host (port 80 by default; see List of TCP and
UDP port numbers). An HTTP server listening on that port waits for the client to send a
request message.
Upon receiving the request, the server sends back a status line, such as "HTTP/1.1 200
OK", and a message of its own, the body of which is perhaps the requested file, an error
message, or some other information.
Resources to be accessed by HTTP are identified using Uniform Resource Identifiers
(URIs) (or, more specifically, URLs) using the http: or https: URI schemes.
Request message
The request message consists of the following:
Request line, such as GET /images/logo.gif HTTP/1.1, which requests the file logo.gif
from the /images directory
Headers, such as Accept-Language: en
An empty line
An optional message body
The request line and headers must all end with CRLF (i.e. a carriage return followed by a
line feed). The empty line must consist of only CRLF and no other whitespace.
In the HTTP/1.1 protocol, all headers except Host are optional.
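To make the wire format concrete, here is a minimal sketch that sends such a request over a plain socket in Python; the host and path are hypothetical.

# Minimal sketch: a raw HTTP/1.1 request showing the request line, headers,
# CRLF line endings and the empty line that ends the header section.
import socket

request = (
    "GET /images/logo.gif HTTP/1.1\r\n"   # request line
    "Host: www.example.com\r\n"           # the only mandatory header in HTTP/1.1
    "Accept-Language: en\r\n"
    "Connection: close\r\n"
    "\r\n"                                # empty line: end of headers, no body follows
)

with socket.create_connection(("www.example.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    reply = sock.recv(4096).decode("iso-8859-1")
    print(reply.splitlines()[0])          # status line, e.g. "HTTP/1.1 200 OK"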
Request methods
HTTP defines eight methods (sometimes referred to as "verbs") indicating the desired
action to be performed on the identified resource.


HEAD Asks for the response identical to the one that would correspond to a GET request,
but without the response body. This is useful for retrieving meta-information written in
response headers, without having to transport the entire content.
GET Requests a representation of the specified resource. By far the most common
method used on the Web today. Should not be used for operations that cause side-effects
(using it for actions in web applications is a common mis-use). See 'safe methods' below.
POST Submits data to be processed (e.g. from a HTML form) to the identified resource.
The data is included in the body of the request.
PUT Uploads a representation of the specified resource.
DELETE Deletes the specified resource.
TRACE Echoes back the received request, so that a client can see what intermediate
servers are adding or changing in the request.
OPTIONS Returns the HTTP methods that the server supports. This can be used to check
the functionality of a web server.
CONNECT For use with a proxy that can change to being an SSL tunnel.
HTTP servers are supposed to implement at least the GET and HEAD methods and,
whenever possible, also the OPTIONS method.
HTTP session state
HTTP can occasionally pose problems for Web developers (Web applications), because
HTTP is stateless. The advantage of a stateless protocol is that hosts don't need to retain
information about users between requests, but this forces the use of alternative methods
for maintaining users' state, for example, when a host would like to customize content for
a user who has visited before. The common method for solving this problem involves the
use of cookies. Other methods include server-side sessions, hidden variables (when the
current page is a form), and URL-encoded parameters (such as index.php?userid=3).
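As an illustration, the sketch below keeps state across two otherwise independent requests by letting the Python requests library store and resend cookies; the URLs and cookie name are hypothetical.

# Minimal sketch: cookies carrying state over the stateless HTTP protocol.
import requests

with requests.Session() as s:
    # The first response may set a cookie (e.g. a session identifier).
    s.get("https://shop.example.com/")
    print(s.cookies.get_dict())           # e.g. {'sessionid': 'abc123'}

    # The Session object sends the stored cookies back automatically, so the
    # server can associate this request with the earlier one.
    s.get("https://shop.example.com/cart")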
Secure HTTP
There are currently two methods of establishing a secure HTTP connection: the https URI
scheme and the HTTP 1.1 Upgrade header. The https URI scheme has been deprecated by
RFC 2817, which introduced the Upgrade header; however, as browser support for the
Upgrade header is nearly non-existent, the https URI scheme is still the dominant method
of establishing a secure HTTP connection.
https URI Scheme
https: is a URI scheme syntactically identical to the http: scheme used for normal HTTP
connections, but which signals the browser to use an added encryption layer of SSL/TLS
to protect the traffic. SSL is especially suited for HTTP since it can provide some
protection even if only one side of the communication is authenticated. In the case of
HTTP transactions over the Internet, typically only the server side is authenticated.
HTTPS
https is a URI scheme which is syntactically identical to the http:// scheme normally used
for accessing resources using HTTP. Using an https: URL indicates that HTTP is to be
used, but with a different default port (443) and an additional encryption/authentication
layer between HTTP and TCP. This system was designed by Netscape Communications
Corporation to provide authentication and encrypted communication and is widely used
on the World Wide Web for security-sensitive communication such as payment
transactions and corporate logons.
How it works
Strictly speaking, https is not a separate protocol, but refers to the combination of a
normal HTTP interaction over an encrypted Secure Sockets Layer (SSL) or Transport
Layer Security (TLS) transport mechanism. This ensures reasonable protection from
eavesdroppers and man-in-the-middle attacks, provided it is properly implemented and
the top-level certification authorities do their job. The default TCP port of an https: URL
is 443 (for unsecured HTTP, the default is 80).

To prepare a web server for accepting https connections, the administrator must create a
public key certificate for the web server. These certificates can be created for Unix-based
servers with tools such as OpenSSL's ssl-ca or SuSE's gensslcert. This certificate must be
signed by a certificate authority of one form or another, who certifies that the certificate
holder is who they say they are. Web browsers are generally distributed with the signing
certificates of major certificate authorities, so that they can verify certificates signed by
them. Organizations may also run their own certificate authority, particularly if they are
responsible for setting up browsers to access their own sites (for example, sites on a
company intranet), as they can trivially add their own signing certificate to the defaults
shipped with the browser.

Some sites use self-signed certificates. Using these provides protection against pure
eavesdropping, but unless the certificate is verified by some other method (for example,
phoning the certificate owner to verify its checksum) and that other method is secure,
there is a risk of a man-in-the-middle attack.

The system can also be used for client authentication, in order to restrict access to a Web
server to only authorized users. For this, typically the site administrator creates
certificates for each user, which are loaded into their browser, although certificates signed
by any certificate authority the server trusts should work. These normally contain the
name and e-mail of the authorized user, and are automatically checked by the server on
each reconnect to verify the user's identity, potentially without ever entering a password.

Limitations
The level of protection depends on the correctness of the implementation by the web
browser and the server software and the actual cryptographic algorithms supported. A
common misconception among credit card users on the Web is that https: fully protects
their card number from thieves. In reality, an encrypted connection to the Web server only
protects the credit card number in transit between the user's computer and the server
itself. It doesn't guarantee that the server itself is secure, or even that it hasn't already
been compromised by an attacker. Attacks on the Web sites that store customer data are
both easier and more common than attempts to intercept data in transit. Merchant sites
are supposed to immediately forward incoming transactions to a payment gateway and
retain only a transaction number, but they often save card numbers in a database. It is that server and
database that is usually attacked and compromised by unauthorized users. Because SSL
operates below http and has no knowledge of the higher level protocol, SSL servers can
only present one certificate for a particular IP/port combination. This means that in most
cases it is not feasible to use name-based virtual hosting with HTTPS. (This is subject to
change in the upcoming TLS 1.1, which will enable name-based virtual hosting. As of
December 2006, all major web browsers support TLS's Server Name Indication feature,
but the feature is not widely used by web sites.)

S/FTP
FTP or File Transfer Protocol is used to connect two computers over the Internet so that
the user of one computer can transfer files and perform file commands on the other
computer.
Specifically, FTP is a commonly used protocol for exchanging files over any network that
supports the TCP/IP protocol (such as the Internet or an intranet). There are two
computers involved in an FTP transfer: a server and a client. The FTP server, running
FTP server software, listens on the network for connection requests from other
computers. The client computer, running FTP client software, initiates a connection to the
server. Once connected, the client can do a number of file manipulation operations such
as uploading files to the server, download files from the server, rename or delete files on
the server and so on. Any software company or individual programmer is able to create
FTP server or client software because the protocol is an open standard. Virtually every
computer platform supports the FTP protocol. This allows any computer connected to a
TCP/IP based network to manipulate files on another computer on that network
regardless of which operating systems are involved (if the computers permit FTP access).
Overview
FTP runs exclusively over TCP. FTP servers by default listen on port 21 for incoming
connections from FTP clients. A connection to this port from the FTP Client forms the
control stream on which commands are passed to the FTP server from the FTP client and
on occasion from the FTP server to the FTP client. For the actual file transfer to take
place, a different connection is required which is called the data stream. Depending on
the transfer mode, the process of setting up the data stream is different.
In active mode, the FTP client opens a random port (> 1023), sends the FTP server the
random port number on which it is listening over the control stream and waits for a
connection from the FTP server. When the FTP server initiates the data connection to the
FTP client it binds the source port to port 20 on the FTP server.
In passive mode, the FTP Server opens a random port (> 1023), sends the FTP client the
port on which it is listening over the control stream and waits for a connection from the
FTP client. In this case the FTP client binds the source port of the connection to a random
port greater than 1023.
While data is being transferred via the data stream, the control stream sits idle. This can
cause problems with large data transfers through firewalls which time out sessions after
lengthy periods of idleness. While the file may well be successfully transferred, the
control session can be disconnected by the firewall, causing an error to be generated.
When FTP is used in a UNIX environment, there is an often-ignored but valuable
command, "reget" (meaning "get again") that will cause an interrupted "get" command to
be continued, hopefully to completion, after a communications interruption. The principle
is obvious: the receiving station has a record of what it got, so it can spool through the
file at the sending station and re-start at the right place for a seamless splice. The
converse would be "reput" but is not available. Again, the principle is obvious: The
sending station does not know how much of the file was actually received, so it would
not know where to start.
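As an illustration of the separate control and data connections, here is a minimal passive-mode transfer using Python's standard ftplib; the host, credentials and file name are hypothetical, and note that plain FTP sends them unencrypted.

# Minimal sketch: a passive-mode FTP transfer with Python's ftplib.
from ftplib import FTP

with FTP("ftp.example.com") as ftp:       # control connection to port 21
    ftp.login("alice", "s3cret")
    ftp.set_pasv(True)                    # passive mode: the server opens the data port

    # Each transfer opens a separate data connection; the control
    # connection carries only the commands (LIST, RETR, ...).
    ftp.retrlines("LIST")
    with open("report.pdf", "wb") as fh:
        ftp.retrbinary("RETR report.pdf", fh.write)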
FTP over SSH
FTP over SSH refers to the practice of tunneling a normal FTP session over an SSH
connection.
Because FTP uses multiple TCP connections (unusual for a TCP/IP protocol that is still in
use), it is particularly difficult to tunnel over SSH. With many SSH clients, attempting to
set up a tunnel for the control channel (the initial client-to-server connection on port 21)
will only protect that channel; when data is transferred, the FTP software at either end
will set up new TCP connections (data channels) which will bypass the SSH connection,
and thus have no confidentiality, integrity protection, etc.
If the FTP client is configured to use passive mode and to connect to a SOCKS server
interface that many SSH clients can present for tunnelling, it is possible to run all the FTP
channels over the SSH connection.
Otherwise, it is necessary for the SSH client software to have specific knowledge of the
FTP protocol, and monitor and rewrite FTP control channel messages and autonomously
open new forwardings for FTP data channels. Version 3 of SSH Communications
Security's software suite, and the GPL licensed FONC are two software packages that
support this mode.
FTP over SSH is sometimes referred to as secure FTP; this should not be confused with
other methods of securing FTP, such as with SSL/TLS (FTPS). Other methods of
transferring files using SSH that are not related to FTP include SFTP and SCP; in each of
these, the entire conversation (credentials and data) is always protected by the SSH
protocol.

SSH

Secure Shell or SSH is a set of standards and an associated network protocol that allows
establishing a secure channel between a local and a remote computer. It uses public-key
cryptography to authenticate the remote computer and (optionally) to allow the remote
computer to authenticate the user. SSH provides confidentiality and integrity of data
exchanged between the two computers using encryption and message authentication
codes (MACs). SSH is typically used to log into a remote machine and execute
commands, but it also supports tunneling, forwarding arbitrary TCP ports and X11
connections; it can transfer files using the associated SFTP or SCP protocols. An SSH
server, by default, listens on the standard TCP port 22.
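A minimal sketch of these uses (remote command execution and SFTP file transfer) with the third-party paramiko library, chosen purely as an example; the host, user and key path are hypothetical, and a real deployment should verify host keys rather than auto-accepting them.

# Minimal sketch: SSH remote command and SFTP transfer with paramiko.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())   # demo only; verify host keys in practice

client.connect("server.example.com", port=22,
               username="alice", key_filename="/home/alice/.ssh/id_rsa")

stdin, stdout, stderr = client.exec_command("uname -a")        # runs on the remote machine
print(stdout.read().decode())

# The same encrypted connection can carry SFTP file transfers.
sftp = client.open_sftp()
sftp.put("local.txt", "/tmp/remote.txt")
sftp.close()
client.close()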

IPSEC
IPsec (IP security) is a suite of protocols for securing Internet Protocol (IP)
communications by authenticating and/or encrypting each IP packet in a data stream.
IPsec also includes protocols for cryptographic key establishment.
IPsec protocols operate at the network layer, layer 3 of the OSI model. Other Internet
security protocols in widespread use, such as SSL and TLS, operate from the transport
layer up (OSI layers 4 - 7). This makes IPsec more flexible, as it can be used for
protecting both TCP- and UDP-based protocols, but increases its complexity and
processing overhead, as it cannot rely on TCP (OSI layer 4) to manage reliability and
fragmentation.
Modes
There are two modes of IPsec operation: transport mode and tunnel mode.
Transport mode
In transport mode only the payload (message) of the IP packet is encrypted. It is fully
routable since the IP header is sent as plain text; however, it cannot cross NAT interfaces,
as this will invalidate its hash value. Transport mode is used for host-to-host
communications.
A means to encapsulate IPsec messages for NAT traversal has been defined by UDP
Encapsulation of IPsec ESP Packets.
Tunnel mode
In tunnel mode, the entire IP packet is encrypted. It must then be encapsulated into a new
IP packet for routing to work. Tunnel mode is used for network-to-network
communications (secure tunnels between routers) or host-to-network and host-to-host
communications over the Internet.

Tunneling

Introduction
IP tunneling (IP encapsulation) is a technique to encapsulate IP datagrams within IP
datagrams, which allows datagrams destined for one IP address to be wrapped and
redirected to another IP address. IP encapsulation is now commonly used in extranets,
Mobile-IP, IP multicast, and tunneled hosts or networks.
How to use IP tunneling on a virtual server
First, let's look at the figure of a virtual server via IP tunneling. The main difference
between a virtual server via IP tunneling and a virtual server via NAT is that the load
balancer sends requests to the real servers through an IP tunnel in the former, while it
sends requests to the real servers via network address translation in the latter.

When a user accesses a virtual service provided by the server cluster, a packet destined
for virtual IP address (the IP address for the virtual server) arrives. The load balancer
examines the packet's destination address and port. If they are matched for the virtual
service, a real server is chosen from the cluster according to a connection scheduling
algorithm, and the connection is added into the hash table which records connections.
Then, the load balancer encapsulates the packet within an IP datagram and forwards it to
the chosen server. When an incoming packet belongs to this connection and the chosen
server can be found in the hash table, the packet will be again encapsulated and
forwarded to that server. When the server receives the encapsulated packet, it
decapsulates the packet and processes the request, and finally returns the result directly to
the user according to its own routing table. After a connection terminates or times out, the
connection record will be removed from the hash table. The workflow is illustrated in the
following figure.


Note that real servers can have any real IP address in any network, they can be
geographically distributed, but they must support IP encapsulation protocol. Their tunnel
devices are all configured up so that the systems can decapsulate the received
encapsulation packets properly, and the <Virtual IP Address> must be configured on
non-arp devices or any alias of non-arp devices, or the system can be configured to redirect
packets for <Virtual IP Address> to a local socket. See the arp problem page for more
information.


Finally, when an encapsulated packet arrives, the real server decapsulates it and finds that
the packet is destined for <Virtual IP Address> ("it is for me"), so it processes the request
and returns the result directly to the user in the end.

INTERNET PROTOCOLS


The Internet protocols are the world's most popular open-system (nonproprietary)
protocol suite because they can be used to communicate across any set of interconnected
networks and are equally well suited for LAN and WAN communications. The Internet
protocols consist of a suite of communication protocols, of which the two best known are
the Transmission Control Protocol (TCP) and the Internet Protocol (IP). The Internet
protocol suite not only includes lower-layer protocols (such as TCP and IP), but it also
specifies common applications such as electronic mail, terminal emulation, and file
transfer. This chapter provides a broad introduction to specifications that comprise the
Internet protocols. Discussions include IP addressing and key upper-layer protocols used
in the Internet. Specific routing protocols are addressed individually later in this
document.
Internet protocols were first developed in the mid-1970s, when the Defense Advanced
Research Projects Agency (DARPA) became interested in establishing a packet-switched
network that would facilitate communication between dissimilar computer systems at
research institutions. With the goal of heterogeneous connectivity in mind, DARPA
funded research by Stanford University and Bolt, Beranek, and Newman (BBN). The
result of this development effort was the Internet protocol suite, completed in the late
1970s.
TCP/IP later was included with Berkeley Software Distribution (BSD) UNIX and has
since become the foundation on which the Internet and the World Wide Web (WWW) are
based.
The Internet Protocol (IP) is a network-layer (Layer 3) protocol that contains addressing
information and some control information that enables packets to be routed. IP is
documented in RFC 791 and is the primary network-layer protocol in the Internet
protocol suite. Along with the Transmission Control Protocol (TCP), IP represents the
heart of the Internet protocols. IP has two primary responsibilities: providing
connectionless, best-effort delivery of datagrams through an internetwork; and providing
fragmentation and reassembly of datagrams to support data links with different
maximum transmission unit (MTU) sizes.
Transmission Control Protocol (TCP)
The TCP provides reliable transmission of data in an IP environment. TCP corresponds to
the transport layer (Layer 4) of the OSI reference model. Among the services TCP
provides are stream data transfer, reliability, efficient flow control, full-duplex operation,
and multiplexing.
With stream data transfer, TCP delivers an unstructured stream of bytes identified by
sequence numbers. This service benefits applications because they do not have to chop
data into blocks before handing it off to TCP. Instead, TCP groups bytes into segments
and passes them to IP for delivery.

TCP offers reliability by providing connection-oriented, end-to-end reliable packet
delivery through an internetwork. It does this by sequencing bytes with a forwarding
acknowledgment number that indicates to the destination the next byte the source expects
to receive. Bytes not acknowledged within a specified time period are retransmitted. The
reliability mechanism of TCP allows devices to deal with lost, delayed, duplicate, or
misread packets. A time-out mechanism allows devices to detect lost packets and request
retransmission.
TCP offers efficient flow control, which means that, when sending acknowledgments
back to the source, the receiving TCP process indicates the highest sequence number it
can receive without overflowing its internal buffers.
Full-duplex operation means that TCP processes can both send and receive at the same
time.
Finally, TCP's multiplexing means that numerous simultaneous upper-layer conversations
can be multiplexed over a single connection.
TCP Connection Establishment
To use reliable transport services, TCP hosts must establish a connection-oriented session
with one another. Connection establishment is performed by using a "three-way
handshake" mechanism.
A three-way handshake synchronizes both ends of a connection by allowing both sides to
agree upon initial sequence numbers. This mechanism also guarantees that both sides are
ready to transmit data and know that the other side is ready to transmit as well. This is
necessary so that packets are not transmitted or retransmitted during session
establishment or after session termination.
Each host randomly chooses a sequence number used to track bytes within the stream it is
sending and receiving. Then, the three-way handshake proceeds in the following manner:
1. The first host (Host A) initiates a connection by sending a packet with the initial
sequence number (X) and the SYN bit set to indicate a connection request.
2. The second host (Host B) receives the SYN, records the sequence number X, and
replies by acknowledging the SYN (with an ACK = X + 1). Host B includes its own initial
sequence number (SEQ = Y). An ACK = 20 means the host has received bytes 0 through
19 and expects byte 20 next. This technique is called forward acknowledgment.
3. Host A then acknowledges all bytes Host B sent with a forward acknowledgment
indicating the next byte Host A expects to receive (ACK = Y + 1). Data transfer then can
begin.
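A minimal sketch of a TCP exchange in Python, in which the operating system carries out the three-way handshake inside connect() and accept(); the port number and payload are arbitrary demo values.

# Minimal sketch: a local TCP echo exchange with Python's socket module.
import socket
import threading
import time

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 5050))
        srv.listen(1)
        conn, addr = srv.accept()          # handshake (SYN, SYN/ACK, ACK) completed here
        with conn:
            data = conn.recv(1024)         # reliable, ordered byte stream
            conn.sendall(b"echo: " + data)

threading.Thread(target=server, daemon=True).start()
time.sleep(0.5)                            # crude wait so the listener is ready (demo only)

with socket.create_connection(("127.0.0.1", 5050)) as client:
    client.sendall(b"hello")
    print(client.recv(1024))               # b'echo: hello'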
User Datagram Protocol (UDP)
The User Datagram Protocol (UDP) is a connectionless transport-layer protocol (Layer 4)
that belongs to the Internet protocol family. UDP is basically an interface between IP and

upper-layer processes. UDP protocol ports distinguish multiple applications running on a
single device from one another.
Unlike the TCP, UDP adds no reliability, flow-control, or error-recovery functions to IP.
Because of UDP's simplicity, UDP headers contain fewer bytes and consume less
network overhead than TCP.
UDP is useful in situations where the reliability mechanisms of TCP are not necessary,
such as in cases where a higher-layer protocol might provide error and flow control.
UDP is the transport protocol for several well-known application-layer protocols,
including Network File System (NFS), Simple Network Management Protocol (SNMP),
Domain Name System (DNS), and Trivial File Transfer Protocol (TFTP).
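For contrast with TCP, here is a minimal connectionless UDP exchange using Python's socket module; the port number is an arbitrary demo value.

# Minimal sketch: UDP datagrams, with no connection setup and no
# delivery guarantees from the transport layer.
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 5353))

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"ping", ("127.0.0.1", 5353))   # each datagram is independent; it may be lost or reordered

data, addr = receiver.recvfrom(1024)
print(data, "from", addr)

sender.close()
receiver.close()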

3. Infrastructure security
3.1 Firewall
A firewall is an information technology (IT) security device which permits, denies or
proxies data connections according to rules set and configured by the organization's
security policy. Firewalls can be hardware based, software based, or both.
A firewall's basic task is to control traffic between computer networks with different
zones of trust. Typical examples are the Internet which is a zone with no trust and an
internal network which is (and should be) a zone with high trust. The ultimate goal is to
provide controlled interfaces between zones of differing trust levels through the
enforcement of a security policy and connectivity model based on the least privilege
principle and separation of duties.
A firewall is also called a Border Protection Device (BPD) in certain military contexts,
where it separates networks by creating perimeter networks in a demilitarized
zone (DMZ). In a BSD context, a firewall is also known as a packet filter. A firewall's
function is analogous to firewalls in building construction.
Proper configuration of firewalls demands skill from the firewall administrator. It
requires considerable understanding of network protocols and of computer security.
Small mistakes can render a firewall worthless as a security tool.
Types
There are three basic types of firewalls depending on:


Whether the communication is being done between a single node and the network, or
between two or more networks.
Whether the communication is intercepted at the network layer, or at the application
layer.
Whether the communication state is being tracked at the firewall or not.
With regard to the scope of filtered communications there exist:
Personal firewalls, a software application which normally filters traffic entering or
leaving a single computer.
Network firewalls, normally running on a dedicated network device or computer
positioned on the boundary of two or more networks or DMZs (demilitarized zones).
Such a firewall filters all traffic entering or leaving the connected networks.
The latter definition corresponds to the conventional, traditional meaning of "firewall" in
networking.
In reference to the layers where the traffic can be intercepted, three main categories of
firewalls exist:
Network layer firewalls. An example would be iptables.
Application layer firewalls. An example would be TCP Wrappers.
Application firewalls. An example would be restricting FTP services through the
/etc/ftpaccess file.
These network-layer and application-layer types of firewall may overlap, even though the
personal firewall does not serve a network; indeed, single systems have implemented
both together.
There is also the notion of application firewalls, sometimes used during wide
area network (WAN) networking on the World Wide Web, which govern system software.
An extended description would place them lower than application-layer firewalls, at the
operating system layer, and they could alternately be called operating system firewalls.
Lastly, depending on whether the firewall keeps track of the state of network
connections or treats each packet in isolation, two additional categories of firewalls exist:
Stateful firewalls


Stateless firewalls
Network layer
Network layer firewalls operate at a (relatively) low level of the TCP/IP protocol stack as
IP-packet filters, not allowing packets to pass through the firewall unless they match the
rules. The firewall administrator may define the rules; or default built-in rules may apply
(as in some inflexible firewall systems).
A more permissive setup could allow any packet to pass the filter as long as it does not
match one or more "negative-rules", or "deny rules". Today network firewalls are built
into most computer operating systems and network appliances.
Modern firewalls can filter traffic based on many packet attributes like source IP address,
source port, destination IP address or port, destination service like WWW or FTP. They
can filter based on protocols, TTL values, netblock of originator, domain name of the
source, and many other attributes.
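As a rough illustration of the rule matching described above, the sketch below implements a toy stateless packet filter in Python. The rule set, field names and addresses are invented for illustration only; real network-layer firewalls such as iptables or pf offer far richer matching and stateful inspection.

import ipaddress

# Hypothetical rule table: first matching rule wins, default is deny.
RULES = [
    # (action, protocol, destination port, allowed source network)
    ("permit", "tcp", 80,   "0.0.0.0/0"),    # web traffic from anywhere
    ("permit", "tcp", 22,   "10.0.0.0/8"),   # SSH only from the internal network
    ("deny",   "any", None, "0.0.0.0/0"),    # default deny
]

def filter_packet(protocol, dst_port, src_ip):
    for action, rule_proto, rule_port, rule_net in RULES:
        if rule_proto not in ("any", protocol):
            continue
        if rule_port is not None and rule_port != dst_port:
            continue
        if ipaddress.ip_address(src_ip) not in ipaddress.ip_network(rule_net):
            continue
        return action
    return "deny"

print(filter_packet("tcp", 80, "203.0.113.7"))   # permit
print(filter_packet("tcp", 22, "203.0.113.7"))   # deny (SSH from outside)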
Application-layer
Application-layer firewalls work on the application level of the TCP/IP stack (i.e., all
browser traffic, or all telnet or ftp traffic), and may intercept all packets traveling to or
from an application. They block other packets (usually dropping them without
acknowledgement to the sender). In principle, application firewalls can prevent all
unwanted outside traffic from reaching protected machines.
By inspecting all packets for improper content, firewalls can even prevent the spread of
the likes of viruses. In practice, however, this becomes so complex and so difficult to
attempt (given the variety of applications and the diversity of content each may allow in
its packet traffic) that comprehensive firewall design does not generally attempt this
approach.
The XML firewall exemplifies a more recent kind of application-layer firewall.
Proxies
A proxy device (running either on dedicated hardware or as software on a general-purpose
machine) may act as a firewall by responding to input packets (connection
requests, for example) in the manner of an application, whilst blocking other packets.
Proxies make tampering with an internal system from the external network more difficult
and misuse of one internal system would not necessarily cause a security breach
exploitable from outside the firewall (as long as the application proxy remains intact and
properly configured). Conversely, intruders may hijack a publicly-reachable system and
use it as a proxy for their own purposes; the proxy then masquerades as that system to
other internal machines. While use of internal address spaces enhances security, crackers
may still employ methods such as IP spoofing to attempt to pass packets to a target
network.
3.2 Router
A router is a computer networking device that forwards data packets across a network
toward their destinations, through a process known as routing. A router is connected to at
least two networks, commonly two LANs or WANs, or a LAN and its ISP's network.
Routers are located at gateways, the places where two or more networks connect, and are
the critical device that keeps data flowing between networks and keeps the networks
connected to the Internet.
When data is sent between locations on one network, or from one network to a second
network, the data is always seen and directed to the correct location by the router. Routers
accomplish this by using headers and forwarding tables to determine the best path for
forwarding the data packets, and they use protocols such as ICMP to communicate with
each other and configure the best route between any two hosts.
A router may create or maintain a table of the available routes and their conditions and
use this information along with distance and cost algorithms to determine the best route
for a given packet. Typically, a packet may travel through a number of network points
with routers before arriving at its destination. Routing is a function associated with the
Network layer (layer 3) in the standard model of network programming, the Open
Systems Interconnection (OSI) model. A layer-3 switch is a switch that can perform
routing functions.
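The route lookup described above amounts to finding the most specific (longest) prefix in the forwarding table that matches a packet's destination address. The sketch below, with an invented table and next-hop addresses, illustrates only that decision; production routers use optimized data structures and hardware for the same job.

import ipaddress

# Hypothetical forwarding table: (destination prefix, next hop).
FORWARDING_TABLE = [
    ("10.0.0.0/8",   "192.168.1.1"),
    ("10.20.0.0/16", "192.168.1.2"),
    ("0.0.0.0/0",    "192.168.1.254"),   # default route
]

def next_hop(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    best = None
    for prefix, hop in FORWARDING_TABLE:
        net = ipaddress.ip_network(prefix)
        if dst in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, hop)   # keep the most specific match so far
    return best[1] if best else None

print(next_hop("10.20.5.9"))   # 192.168.1.2 (longest, most specific match)
print(next_hop("8.8.8.8"))     # 192.168.1.254 (default route)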
Why Would I Need a Router?
Most home users may want to set up a LAN (Local Area Network) or WLAN
(wireless LAN) and connect all computers to the Internet without having to pay a full
broadband subscription service to their ISP for each computer on the network. In many
instances, an ISP will allow you to use a router and connect multiple computers to a
single Internet connection and pay a nominal fee for each additional computer sharing the
connection. This is when home users will want to look at smaller routers, often called
broadband routers that enable two or more computers to share an Internet connection.
Within a business or organization, you may need to connect multiple computers to the
Internet, but also want to connect multiple private networks and these are the types of
functions a router is designed for.


3.3 Switches
A Network switch is a small hardware device that joins multiple computers together
within one local area network (LAN). Technically, network switches operate at layer two
(Data Link Layer) of the OSI model.
Network switches appear nearly identical to network hubs, but a switch generally
contains more "intelligence". Network switches are capable of inspecting data packets as
they are received, determining the source and destination device of that packet, and
forwarding it appropriately. By delivering each message only to the connected device it
was intended for, a network switch conserves network bandwidth and offers generally
better performance.
A switch can connect Ethernet, token ring, Fibre Channel or other types of packet
switched network segments together to form an Internetwork.
If a network has only switches and no hubs then the collision domains are either reduced
to a single link or, if both ends support full duplex, eliminated altogether. The principle of
a fast hardware forwarding device with many ports can be extended to higher layers
giving the multilayer switch.
A hub, essentially a multi-port repeater, is the simplest multi-port device in use. However,
its technology is considered outdated, since a hub is a "dumb" device: it resends every
frame it receives to every port except the one it arrived on. With multiple computers, the speed
quickly slows down and collisions start occurring, making the connection even slower.
However, with the advent of the network switch, this problem has been solved.
Switches are:
Network devices that select a path/circuit for sending data to its next destination, e.g.,
LAN switches & WAN/backbone switches (Note: may include router function)
Switch is a simpler & faster mechanism than a router
Located at:
Backbone & gateway levels of a network where one network connects to another
Subnetwork level where data is forwarded close to its destination/origin
Switch not always required; LANs may be organized as rings or buses in which
destinations inspect each message & read only those intended for that destination
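The "intelligence" described above comes largely from learning which MAC address was last seen on which port and forwarding frames only there, flooding when the destination is unknown. A minimal, purely illustrative sketch of that behaviour:

class LearningSwitch:
    """Toy model of a learning Ethernet switch (illustrative only)."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}   # source MAC address -> port it was last seen on

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port          # learn the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]       # forward out one known port
        # Unknown destination: flood to every port except the incoming one.
        return [p for p in range(self.num_ports) if p != in_port]

switch = LearningSwitch(num_ports=4)
print(switch.receive(0, "aa:aa:aa:aa:aa:aa", "bb:bb:bb:bb:bb:bb"))   # flood: [1, 2, 3]
print(switch.receive(1, "bb:bb:bb:bb:bb:bb", "aa:aa:aa:aa:aa:aa"))   # learned: [0]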


3.4 Wireless
The term wireless is normally used to refer to any type of electrical or electronic
operation which is accomplished without the use of a "hard wired" connection. Some of
these operations may also be accomplished with the use of wires if desired, while others,
such as long range communications, are impossible or impractical to implement with the
use of wires. The term is commonly used in the telecommunications industry to refer to
telecommunications systems (e.g., radio transmitters and receivers, remote controls,
computer networks, network terminals, etc.) which use some form of energy (e.g., radio
frequency (RF), infrared light, laser light, visible light, acoustic energy, etc.) to transfer
information without the use of wires. Information is transferred in this manner over both
short and long distances.
The term "wireless" should not be confused with the term "cordless", which is generally
used to refer to powered devices that are able to operate from a portable power source
(e.g., a battery pack) without any cable or cord to limit the mobility of the cordless device
through a connection to the mains power supply. It is interesting to note that some
cordless devices, such as cordless telephones, are also wireless in the sense that
information is transferred from the cordless telephone to the telephone's base unit via
some type of wireless communications link. This has caused some disparity in the usage
of the term "cordless", for example in Digital Enhanced Cordless Telecommunications.
Wireless technology
Popular in vertical markets such as health care, retail, manufacturing
Standards are needed
Examples of wireless technology at work
Security systems
One common example of an operation or operations where the implementation of
wireless technology may supplement or replace hard wired implementations is in security
systems for homes or office buildings. The operations that are required (e.g., detecting
whether a door or window is open or closed) may be implemented with the use of hard
wired sensors or they may be implemented with the use of wireless sensors which are
also equipped with some type of wireless transmitter (e.g., infrared, radio frequency, etc.)
to transmit the information concerning the current state of the door or window.
Television remote control
Another example would be the use of a wireless remote control unit to replace the old
hard wired remote control units that were sometimes used in the television industry.
Some televisions were previously manufactured with hard wired remote controls which
plugged in to a receptacle or jack in the television whereas more modern televisions use
wireless (generally infrared) remote control units.
Cellular telephones
Perhaps one of the most well known examples of wireless technology in action is the
cellular telephone. These instruments use radio waves to enable the operator to make
phone calls from many locations world-wide. They can be used anywhere that there is a
cellular telephone site to house the equipment that is required to transmit and receive the
signal that is used to transfer both voice and data to and from these instruments.
3.5 Modems
Modem is short for modulator-demodulator. A modem is a device or program that enables a
computer to transmit data over, for example, telephone or cable lines. Computer
information is stored digitally, whereas information transmitted over telephone lines is
transmitted in the form of analog waves. A modem converts between these two forms.
How do you connect a modem? Fortunately, there is one standard interface for connecting
external modems to computers called RS-232. Consequently, any external modem can be
attached to any computer that has an RS-232 port, which almost all personal computers
have. There are also modems that come as an expansion board that you can insert into a
vacant expansion slot. These are sometimes called onboard or internal modems.
The following characteristics distinguish one modem from another:

bps : How fast the modem can transmit and receive data. At slow rates, modems
are measured in terms of baud rates. The slowest rate is 300 baud (about 25 cps).
At higher speeds, modems are measured in terms of bits per second (bps). The
fastest modems run at 57,600 bps, although they can achieve even higher data
transfer rates by compressing the data. Obviously, the faster the transmission rate,
the faster you can send and receive data. In addition, some telephone lines are
unable to transmit data reliably at very high rates.

voice/data: Many modems support a switch to change between voice and data
modes. In data mode, the modem acts like a regular modem. In voice mode, the
modem acts like a regular telephone. Modems that support a voice/data switch
have a built-in loudspeaker and microphone for voice communication.

auto-answer : An auto-answer modem enables your computer to receive calls in


your absence. This is only necessary if you are offering some type of computer
service that people can call in to use.


data compression : Some modems perform data compression, which enables them
to send data at faster rates. However, the modem at the receiving end must be able
to decompress the data using the same compression technique.

flash memory : Some modems come with flash memory rather than conventional
ROM, which means that the communications protocols can be easily updated if
necessary.

Fax capability: Most modern modems are fax modems, which means that they can
send and receive faxes.

3.6 RAS (Remote Access Server)


A server that is dedicated to handling users that are not on a LAN but need remote access
to it. The remote access server allows users to gain access to files and print services on
the LAN from a remote location. For example, a user who dials into a network from
home using an analog modem or an ISDN connection will dial into a remote access
server. Once the user is authenticated he can access shared drives and printers as if he
were physically connected to the office LAN.
Remote Access Services refers to any combination of hardware and software to enable
the remote access to tools or information that typically reside on a network of IT devices.
Originally coined by Microsoft when referring to their built-in NT remote access tools,
RAS was a service provided by Windows NT which allows most of the services which
would be available on a network to be accessed over a modem link. The service includes
support for dialup and logon, and then presents the same network interface as the normal
network drivers (albeit slightly slower). It is not necessary to run Windows NT on the
client - there are client versions for other Windows operating systems. RAS is a feature built into
Windows NT that enables users to log into an NT-based LAN using a modem, X.25
connection or WAN link. RAS works with several major network protocols, including
TCP/IP, IPX, and NetBEUI. To use RAS from a remote node, you need a RAS client
program, which is built into most versions of Windows, or any PPP (Point-to-Point
Protocol) client software. For example, most remote control programs work with RAS.
Over the years, many vendors have provided both hardware and software solutions to
gain remote access to various types of networked information. In fact, most modern
routers include a basic RAS capability that can be enabled for any dial-up interface.

3.7 Telecom / PBX (Private Branch Exchange)


Telecommunication is the transmission of signals over a distance for the purpose of
communication. In modern times, this process almost always involves the sending of
electromagnetic waves by electronic transmitters.


A Private Branch eXchange (also called PBX, Private Business eXchange or PABX
for Private Automatic Branch eXchange) is a telephone exchange that serves a
particular business or office, as opposed to one a common carrier or telephone company
operates for many businesses or for the general public.
A PBX is a telephone system within an enterprise that switches calls between enterprise
users on local lines while allowing all users to share a certain number of external phone
lines. The main purpose of a PBX is to save the cost of requiring a line for each user to
the telephone company's central office.
The PBX is owned and operated by the enterprise rather than the telephone company
(which may be a supplier or service provider, however). Private branch exchanges used
analog technology originally. Today, PBXs use digital technology (digital signals are
converted to analog for outside calls on the local loop using plain old telephone service).
A PBX includes:

Telephone trunk (multiple phone) lines that terminate at the PBX


A computer with memory that manages the switching of the calls within the PBX
and in and out of it
The network of lines within the PBX
Usually a console or switchboard for a human operator

3.8 VPN (Virtual Private Network)



A virtual private network (VPN) is a private communications network often used within a
company, or by several companies or organizations, to communicate confidentially over a
non-private network. VPN traffic can be carried over a public networking infrastructure
(e.g. the Internet) on top of standard protocols, or over a service provider's private
network with a defined Service Level Agreement (SLA) between the VPN customer and
the VPN service provider.
Authentication mechanism
Virtual private networks can be a cost effective and secure way for different corporations
to provide user access to the corporate network and for remote networks to communicate
with each other across the Internet. VPN connections are more cost-effective than
dedicated private lines; usually a VPN involves two parts: the protected or "inside"
network, which provides physical and administrative security to protect the transmission;
and a less trustworthy, "outside" network or segment (usually through the Internet).
Generally, a firewall sits between a remote user's workstation or client and the host
network or server. As the user's client establishes the communication with the firewall,
the client may pass authentication data to an authentication service inside the perimeter. A
known trusted person, sometimes only when using trusted devices, can be provided with
appropriate security privileges to access resources not available to general users.
Many VPN client programs can be configured to require that all IP traffic must pass
through the tunnel while the VPN connection is active, for increased security. From the
user's perspective, this means that while the VPN connection is active, all access outside
the secure network must pass through the same firewall as if the user were physically
connected to the inside of the secured network. This reduces the risk that an attacker
might gain access to the secured network by attacking the VPN client's host machine: to
other computers on the employee's home network, or on the public internet, it is as
though the machine running the VPN client simply does not exist. Such security is
important because other computers local to the network on which the client computer is
operating may be untrusted or partially trusted. Even with a home network that is
protected from the outside internet by a firewall, people who share a home may be
simultaneously working for different employers over their respective VPN connections
from the shared home network. Each employer would therefore want to ensure their
proprietary data is kept secure, even if another computer in the local network gets
infected with malware. And if a travelling employee uses a VPN client from a Wi-Fi
access point in a public place, such security is even more important. However, the use of
IPX/SPX is one way users might still be able to access local resources.
Types of VPN
Secure VPNs use cryptographic tunneling protocols to provide the intended
confidentiality (blocking snooping and thus Packet sniffing), sender authentication
(blocking identity spoofing), and message integrity (blocking message alteration) to
achieve privacy. When properly chosen, implemented, and used, such techniques can
provide secure communications over unsecured networks. This has been the usually
intended purpose for VPN for some years.
Because such choice, implementation, and use are not trivial, there are many insecure
VPN schemes available on the market.
Secure VPN technologies may also be used to enhance security as a "security overlay"
within dedicated networking infrastructures.
Secure VPN protocols include the following:
IPsec (IP security) - commonly used over IPv4, and an obligatory part of IPv6.
SSL/TLS used either for tunneling the entire network stack, as in the OpenVPN project,
or for securing what is, essentially, a web proxy. SSL is a framework more often associated
with e-commerce, but it has been built-upon by vendors like Aventail and Juniper to
provide remote access VPN capabilities. A major practical advantage of an SSL-based
VPN is that it can be accessed from any public wireless access point that allows access to
SSL-based e-commerce websites, whereas other VPN protocols may not work from such
public access points.
PPTP (Point-to-Point Tunneling Protocol), developed jointly by a number of companies,
including Microsoft.
L2TP (Layer 2 Tunneling Protocol), which includes work by both Microsoft and Cisco.
L2TPv3 (Layer 2 Tunneling Protocol version 3), a new release.
VPN-Q The machine at the other end of a VPN could be a threat and a source of attack;
this has no necessary connection with VPN designs and has usually been left to system
administration efforts. There has been at least one attempt to address this issue in the
context of VPNs. On Microsoft ISA Server, an application called QSS (Quarantine
Security Suite) is available.
MPVPN (Multi Path Virtual Private Network). MPVPN is a registered trademark
owned by Ragula Systems Development Company.
Some large ISPs now offer "managed" VPN service for business customers who want
the security and convenience of a VPN but prefer not to undertake administering a VPN
server themselves. In addition to providing remote workers with secure access to their
employer's internal network, other security and management services are sometimes
included as part of the package. Examples include keeping anti-virus and anti-spyware
programs updated on each client's computer.
Trusted VPNs do not use cryptographic tunneling, and instead rely on the security of a
single provider's network to protect the traffic. In a sense, these are an elaboration of
traditional network and system administration work.
Multi-Protocol Label Switching (MPLS) is often used to build trusted VPN.
L2F (Layer 2 Forwarding), developed by Cisco, can also be used.
Characteristics in application
A well-designed VPN can provide great benefits for an organization. It can:
Extend geographic connectivity.
Improve security where data lines have not been ciphered.
Reduce operational costs versus traditional WAN.
Reduce transit time and transportation costs for remote users.


Simplify network topology in certain scenarios.


Provide global networking opportunities.
Provide telecommuter support.
Provide broadband networking compatibility.
Provide faster ROI (return on investment) than traditional carrier leased/owned WAN
lines.
Show a good economy of scale.
Scale well, when used with a public key infrastructure.
However, since VPNs extend the "mother network" to such an extent (almost every
employee) and with such ease (no dedicated lines to rent/hire), there are certain security
implications that must receive special attention:
Security on the client side must be tightened and enforced, lest security be lost at any of
a multitude of machines and devices. This has been termed Central Client Administration,
and Security Policy Enforcement. It is common for a company to require that each
employee wishing to use their VPN outside company offices (eg, from home) first install
an approved firewall (often hardware). Some organizations with especially sensitive data,
such as healthcare companies, even arrange for an employee's home to have two separate
WAN connections: one for working on that employer's sensitive data and one for all other
uses.
The scale of access to the target network may have to be limited.
Logging policies must be evaluated and in most cases revised.
A single breach or failure can result in the privacy and security of the network being
compromised. In situations in which a company or individual has legal obligations to
keep information confidential, there may be legal problems, even criminal ones, as a
result. Two examples are the HIPAA regulations in the U.S. with regard to health data,
and the more general European Union data privacy regulations which apply to even
marketing and billing information and extend to those who share that data elsewhere.
Tunneling
Tunneling is the transmission of data through a public network in such a way that routing
nodes in the public network are unaware that the transmission is part of a private
network. Tunneling is generally done by encapsulating the private network data and
protocol information within the public network protocol data so that the tunneled data is
not available to anyone examining the transmitted data frames. Tunneling allows the use
of public networks (eg, the Internet) to carry data on behalf of users as though they had
access to a 'private network', hence the name.
Port forwarding is one aspect of tunneling in particular circumstances.
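Conceptually, tunneling is encapsulation: the private packet and its protocol information become the opaque payload of a packet addressed between the tunnel endpoints. The toy sketch below uses an invented packet layout and omits encryption and integrity protection entirely; real VPN protocols such as IPsec add both, plus key negotiation.

import json

def encapsulate(private_packet, tunnel_src, tunnel_dst):
    # The entire private packet rides as payload of the outer packet;
    # in a secure VPN this payload would be encrypted and authenticated.
    return {
        "outer_src": tunnel_src,
        "outer_dst": tunnel_dst,
        "payload": json.dumps(private_packet),
    }

def decapsulate(outer_packet):
    return json.loads(outer_packet["payload"])

inner = {"src": "10.1.1.5", "dst": "10.2.2.9", "data": "confidential"}
outer = encapsulate(inner, tunnel_src="198.51.100.1", tunnel_dst="203.0.113.9")

# Routing nodes on the public network only ever see the outer addresses.
print(outer["outer_src"], "->", outer["outer_dst"])
print(decapsulate(outer) == inner)   # True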
VPN security dialogs
The most important part of a VPN solution is security. The very nature of VPNs, which
place private data on public networks, raises concerns about potential threats to that
data and the impact of data loss. A Virtual Private Network must address all types of
security threats by providing security services in the areas of:
Authentication (access control) - Authentication is the process of ensuring that a user or
system is who the user claims to be. There are many types of authentication mechanisms,
but they all use one or more of the following approaches:
something you know (eg, a login name, a password, a PIN),
something you have (eg, a computer readable token (eg, a Smartcard), a card key),
something you are (eg, fingerprint, retinal pattern, iris pattern, hand configuration, etc).
What is generally regarded as weak authentication makes use of one of these
components, usually a login name/password sequence. Strong authentication is usually
taken to combine at least two authentication components from different areas (i.e., two-factor
authentication). But note that use of weak and strong in this context can be
misleading. A stolen SmartCard and a shoulder-surfed login name / PIN sequence is not
hard to achieve and will pass a strong two-factor authentication test handily. More
seriously, stolen or lost security data (eg, on a backup tape, a laptop, or stolen by an
employee) dangerously furthers many such attacks on most authentication schemes.
There is no fully adequate technique for the authentication problem, including biometric
ones.
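A small, hedged sketch of what combining factors from different areas means in practice; the factor checks and credential values below are invented placeholders, not a recommended design.

def authenticate(provided):
    """Count how many *different* factor categories the user satisfied."""
    factors = set()
    if provided.get("password") == "expected-password":   # something you know
        factors.add("knowledge")
    if provided.get("token_code") == "123456":             # something you have
        factors.add("possession")
    if provided.get("fingerprint_match") is True:          # something you are
        factors.add("inherence")
    # "Strong" authentication here means at least two distinct categories.
    return len(factors) >= 2

print(authenticate({"password": "expected-password"}))                           # False
print(authenticate({"password": "expected-password", "token_code": "123456"}))   # True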
3.9 IDS (Intrusion Detection System)
An intrusion detection system (IDS) inspects all inbound and outbound network activity
and identifies suspicious patterns that may indicate a network or system attack from
someone attempting to break into or compromise a system.
An intrusion detection system is used to detect all types of malicious network traffic and
computer usage that can't be detected by a conventional firewall. This includes network
attacks against vulnerable services, data driven attacks on applications, host based attacks
such as privilege escalation, unauthorized logins and access to sensitive files, and
malware (viruses, trojan horses, and worms).


Types of Intrusion-Detection systems


In a network-based intrusion-detection system (NIDS), the sensors are located at choke
points in the network to be monitored, often in the demilitarized zone (DMZ) or at
network borders. The sensor captures all network traffic and analyzes the content of
individual packets for malicious traffic. Protocol-based and application protocol-based
systems (PIDS and APIDS) are used to monitor transport and application protocols for
illegal or inappropriate traffic or constructs of a language (such as SQL). In a host-based
system, the sensor usually consists of a software agent, which monitors all activity of the
host on which it is installed. Hybrids of these systems also exist.
A network intrusion detection system is an independent platform which identifies
intrusions by examining network traffic and monitors multiple hosts. Network Intrusion
Detection Systems gain access to network traffic by connecting to a hub, network switch
configured for port mirroring, or network tap. An example of a NIDS is Snort.
A protocol-based intrusion detection system consists of a system or agent that would
typically sit at the front end of a server, monitoring and analyzing the communication
protocol between a connected device (a user/PC or system) and the server. For a web server this would
typically monitor the HTTPS protocol stream and understand the HTTP protocol relative
to the web server/system it is trying to protect. Where HTTPS is in use then this system
would need to reside in the "shim" or interface between where HTTPS is un-encrypted
and immediately prior to it entering the Web presentation layer.
An application protocol-based intrusion detection system consists of a system or agent
that would typically sit within a group of servers, monitoring and analyzing the
communication on application-specific protocols. For example, in a web server with a
database this would monitor the SQL protocol specific to the middleware/business logic
as it transacts with the database.
A host-based intrusion detection system consists of an agent on a host which identifies
intrusions by analyzing system calls, application logs, file-system modifications (binaries,
password files, capability/acl databases) and other host activities and state.
A hybrid intrusion detection system combines one or more approaches. Host agent data
is combined with network information to form a comprehensive view of the network. An
example of a Hybrid IDS is Prelude.
There are several ways to categorize an IDS:
misuse detection vs. anomaly detection: in misuse detection, the IDS analyzes the
information it gathers and compares it to large databases of attack signatures. Essentially,
the IDS looks for a specific attack that has already been documented. Like a virus
detection system, misuse detection software is only as good as the database of attack
signatures that it uses to compare packets against. In anomaly detection, the system
administrator defines the baseline, or normal, state of the network's traffic load,
breakdown, protocol, and typical packet size. The anomaly detector monitors network
segments to compare their state to the normal baseline and look for anomalies (both
approaches are illustrated in the sketch after this list).
network-based vs. host-based systems: in a network-based system, or NIDS, the
individual packets flowing through a network are analyzed. The NIDS can detect
malicious packets that are designed to be overlooked by a firewall's simplistic filtering
rules. In a host-based system, the IDS examines the activity on each individual
computer or host.
passive system vs. reactive system: in a passive system, the IDS detects a potential
security breach, logs the information and signals an alert. In a reactive system, the IDS
responds to the suspicious activity by logging off a user or by reprogramming the firewall
to block network traffic from the suspected malicious source.
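The sketch below contrasts the two detection styles in a deliberately simplified form; the "signatures" and baseline figures are invented, and a real IDS such as Snort works on full packet content and protocol state rather than these toy records.

# Misuse (signature) detection: match traffic against known-bad patterns.
SIGNATURES = ["/etc/passwd", "cmd.exe", "' OR 1=1"]

def misuse_alerts(payload):
    return [sig for sig in SIGNATURES if sig in payload]

# Anomaly detection: compare observed traffic against a learned baseline.
BASELINE_PACKETS_PER_SECOND = 200

def anomaly_alert(observed_pps, tolerance=3.0):
    return observed_pps > BASELINE_PACKETS_PER_SECOND * tolerance

print(misuse_alerts("GET /../../etc/passwd HTTP/1.0"))   # ['/etc/passwd']
print(anomaly_alert(observed_pps=950))                   # True: unusual traffic spike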
Though they both relate to network security, an IDS differs from a firewall in that a
firewall looks out for intrusions in order to stop them from happening. The firewall limits
the access between networks in order to prevent intrusion and does not signal an attack
from inside the network. An IDS evaluates a suspected intrusion once it has taken place
and signals an alarm. An IDS also watches for attacks that originate from within a
system.

3.10 Network Monitoring / Diagnostics
The term network monitoring describes the use of a system that constantly monitors a
computer network for slow or failing systems and that notifies the network administrator
in case of outages via email, pager or other alarms. It is a subset of the functions involved
in network management.
While an intrusion detection system monitors a network for threats from the outside, a
network monitoring system monitors the network for problems due to overloaded and/or
crashed servers, network connections or other devices. For example, to determine the
status of a webserver, monitoring software may periodically send an HTTP request to
fetch a page; for email servers, a test message might be sent through SMTP and retrieved
by IMAP or POP3.
Commonly measured metrics are response time and availability (or uptime), although
both consistency and reliability metrics are starting to gain popularity. Status request
failures, such as when a connection cannot be established, times out, or the document
or message cannot be retrieved, usually produce an action from the monitoring system.
These actions vary: an alarm may be sent to the resident sysadmin (by SMS, email, etc.),
automatic failover systems may be activated to remove the troubled server from duty
until it can be repaired, and so on.
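A minimal sketch of the status-request idea above, using only Python's standard library and a placeholder URL; a real monitoring system would add scheduling, repeated checks from several locations, escalation and failover logic.

import urllib.request
import urllib.error

def check_web_server(url="https://www.example.com/", timeout=10):
    """Return (is_up, detail) for a single HTTP status request."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status == 200, f"HTTP {response.status}"
    except (urllib.error.URLError, OSError) as exc:
        return False, str(exc)

up, detail = check_web_server()
if up:
    print("OK:", detail)
else:
    # A real monitor would page or e-mail the sysadmin here.
    print("ALERT: web server check failed:", detail)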
Various types of protocol


A website monitoring service can check HTTP pages, HTTPS, FTP, SMTP, POP3, IMAP,
DNS, SSH, TELNET, SSL, TCP, ping and a range of other ports, with a wide variety of
check intervals, from every four hours to every minute. Typically, most website
monitoring services test a server anywhere from once per hour to once per minute.
Using Network Monitor
By using Network Monitor, you can capture frames directly from the network traffic data
stream and examine them. You can use this information to analyze ongoing patterns of
usage and diagnose specific network problems. To capture network frames, Network
Monitor places the network adapter of the computer you are using into promiscuous
mode. In promiscuous mode, all frames detected by the network adapter are transferred to
a temporary capture file, regardless of the destination address of each frame.
Frames, also known as packets, are packages of information that are transmitted as a
single unit over a network. Every frame follows the same basic structure and contains:
Control information such as synchronizing characters.
Source and destination addresses.
Protocol information.
An error-checking value.
A variable amount of data.
You either can capture all the frames that pass by the network adapter or design a capture
filter to capture only specific frames, such as those originating from a specific source
address or using a particular protocol. When you begin capturing network data, the
captured frames are stored in a temporary capture file. After the data capture process
concludes, you can view the frames immediately or save the frames in the temporary
capture file to a capture file. These files provide important diagnostic information to
administrators and third-party support services. By default, the capture file name
extension is .cap.
The default size of the temporary capture file is 1 MB. When the temporary capture file
fills to capacity, the oldest frames captured are lost. If your temporary capture file fills
too quickly and you begin to overwrite buffered data, increase the size of the temporary
capture file. When you increase the size of the temporary capture file, you should
consider the amount of RAM on your system. If the temporary capture file size exceeds
the amount of RAM, some frames might not be captured while your system swaps
memory to disk. You can use capture triggers to automatically stop the data capture
process when the temporary capture file fills to a predetermined level. You can also
reduce the amount of data placed in the temporary capture file during data capture by
using capture filters.
3.11 Workstations
A type of computer used for engineering applications (CAD/CAM), desktop publishing,
software development, and other types of applications that require a moderate amount of
computing power and relatively high quality graphics capabilities.
Workstations generally come with a large, high-resolution graphics screen, at least 64 MB
(megabytes) of RAM, built-in network support, and a graphical user interface. Most
workstations also have a mass storage device such as a disk drive, but a special type of
workstation, called a diskless workstation, comes without a disk drive. The most common
operating systems for workstations are UNIX and Windows NT.
In terms of computing power, workstations lie between personal computers and
minicomputers, although the line is fuzzy on both ends. High-end personal computers are
equivalent to low-end workstations. And high-end workstations are equivalent to
minicomputers.
Like personal computers, most workstations are single-user computers. However,
workstations are typically linked together to form a local-area network, although they can
also be used as stand-alone systems.
3.12 Servers (mail, LDAP, web & application)

Servers

Server, a computer that provides services to other computers, or the software that
runs on it
o Application server, a server dedicated to running certain software
applications
o Communications server, carrier-grade computing platform for
communications networks
o Database server, provides database services
o Fax server, provides fax services for clients
o File server, provides file services
o Game server, a server that video game clients connect to in order to play
online together
o Standalone server, an emulator for client-server (web-based) programs
o Web server, a server that HTTP clients connect to in order to send
commands and receive responses along with data contents.
o Mail server is an application that receives incoming e-mail from local
users and remote senders and forwards outgoing e-mail for delivery.
o Directory Server provides global directory services, meaning it provides
information to a wide variety of applications. Lightweight Directory
Access Protocol (LDAP) provides a common language that client
applications and servers use to communicate with one another.
Client-server, a software architecture that separates "server" functions from
"client" functions
o The X Server part of the X Window System
o Peer-to-peer, a network of computers running as both clients and servers

Mail Server
A computer in a network that provides "post office" facilities. It stores incoming mail for
distribution to users and forwards outgoing mail through the appropriate channel. The
term may refer to just the software that performs this service, which can reside on a
machine with other services.
Messaging System is a software that provides an electronic mail delivery system. It is
made up of the following functional components, which may be packaged together or
independently.
Mail User Agent
The mail user agent (MUA or UA) is the client e-mail program, such as Outlook, Eudora,
Mac Mail or Mozilla Thunderbird, which is used to compose, send and receive messages.
Message Transfer Agent
The message transfer agent (MTA), also called the "mail transfer agent," either forwards
the message from user agents (MUAs) and other mail servers (MTAs) or delivers it to its
own message store (MS) if the recipient is local. Sendmail and Microsoft Exchange are
the most widely used MTAs on the Internet. In a large enterprise, there may be several
MTA servers (mail servers) dedicated to Internet e-mail while others support internal e-mail.
Message Store
The message store (MS) holds the mail until it is selectively retrieved and deleted by an
access server. In the Internet world, a local delivery agent (LDA) writes the messages
from the MTA to the message store, and the primary mailbox access protocols used to
retrieve the mail are POP and IMAP (POP3 and IMAP4).
Definition of: POP3
(Post Office Protocol 3) A standard interface between an e-mail client program and the
mail server, defined by IETF RFC 1939. POP3 and IMAP4 are the two common mailbox
access protocols used for Internet e-mail. POP3 provides a message store that holds
incoming e-mail until users log on and download it.
POP3 is a simple system with limited selectivity. All pending messages and attachments
are downloaded when users check their mail.
Definition of: IMAP4
(Internet Message Access Protocol) A standard interface between an e-mail client
program and the mail server, as defined by IETF RFC 3501. IMAP4 and POP3 are the
two common access protocols used for Internet e-mail. IMAP4 provides a message store
that holds incoming e-mail until users log on and download it. Whereas POP3 downloads
the entire message with attachments when mail is checked, IMAP4 can be configured to
download only the headers, which display to/from addresses and subject. The user can
then selectively download messages and attachments.
The Internet's SMTP
Internet e-mail, the most ubiquitous messaging system in the world, is based on the
SMTP protocol. Prior to the Internet's enormous growth in the late 1990s, numerous
proprietary messaging systems were widely used, including cc:Mail, Microsoft Mail,
PROFS and DISOSS.
Definition of: SMTP
(Simple Mail Transfer Protocol) The standard e-mail protocol on the Internet and part of
the TCP/IP protocol suite, as defined by IETF RFC 2821. SMTP defines the message
format and the message transfer agent (MTA), which stores and forwards the mail. SMTP
was originally designed for only plain text (ASCII text), but MIME and other encoding
methods enable executable programs and multimedia files to be attached to and
transported with the e-mail message.
SMTP servers route SMTP messages throughout the Internet to a mail server that
provides a message store for incoming mail. The mail server uses the POP3 or IMAP4
access protocol to communicate with the user's e-mail program.
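The protocols above map directly onto Python's standard library. The sketch below, with placeholder host names, ports and credentials that would need to be replaced, hands one message to an MTA over SMTP and then checks the message store over POP3.

import poplib
import smtplib
from email.message import EmailMessage

# Placeholder servers and credentials; substitute real values before use.
SMTP_HOST, POP3_HOST = "smtp.example.com", "pop.example.com"
USER, PASSWORD = "alice@example.com", "secret"

# Hand an outgoing message to the MTA via SMTP (RFC 2821 / 5321).
msg = EmailMessage()
msg["From"], msg["To"], msg["Subject"] = USER, "bob@example.com", "Test message"
msg.set_content("Hello from the mail example.")
with smtplib.SMTP(SMTP_HOST, 587) as smtp:
    smtp.starttls()               # encrypt the session before authenticating
    smtp.login(USER, PASSWORD)
    smtp.send_message(msg)

# Retrieve waiting mail from the message store via POP3 (RFC 1939).
pop = poplib.POP3_SSL(POP3_HOST)
pop.user(USER)
pop.pass_(PASSWORD)
count, total_size = pop.stat()    # number and total size of pending messages
print(count, "message(s) waiting,", total_size, "bytes")
pop.quit()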
LDAP
The Lightweight Directory Access Protocol, better known as LDAP, is based on the
X.500 standard, but significantly simpler and more readily adapted to meet custom needs.
Unlike X.500, LDAP supports TCP/IP, which is necessary for Internet access. The core
LDAP specifications are all defined in RFCs.
Is an LDAP information directory a database?
Just as a Database Management System (DBMS) from Sybase, Oracle, Informix, or
Microsoft is used to process queries and updates to a relational database, an LDAP server
is used to process queries and updates to an LDAP information directory. In other words,
an LDAP information directory is a type of database, but it's not a relational database.
And unlike databases that are designed for processing hundreds or thousands of changes
per minute - such as the Online Transaction Processing (OLTP) systems often used in
e-commerce - LDAP directories are heavily optimized for read performance.
The advantages of LDAP directories
Now that we've straightened that out, what are the advantages of LDAP directories? The
current popularity of LDAP is the culmination of a number of factors. Perhaps the biggest
plus for LDAP is that your company can access the LDAP directory from almost any
computing platform, from any one of the increasing number of readily available,
LDAP-aware applications. It's also easy to customize your company's internal applications to
add LDAP support.
The LDAP protocol is both cross-platform and standards-based, so applications needn't
worry about the type of server hosting the directory. In fact, LDAP is finding much wider
industry acceptance because of its status as an Internet standard. Vendors are more
willing to write LDAP integration into their products because they don't have to worry
about what's at the other end. Your LDAP server could be any one of a number of open-source
or commercial LDAP directory servers (or perhaps even a DBMS server with an
LDAP interface), since interacting with any true LDAP server involves the same
protocol, client connection package, and query commands. By contrast, vendors looking
to integrate directly with a DBMS usually must tailor their product to work with each
database server vendor individually.
Unlike many relational databases, you do not have to pay for either client connection
software or for licensing.
Most LDAP servers are simple to install, easily maintained, and easily optimized.
LDAP servers can replicate either some or all of their data via push or pull methods,
allowing you to push data to remote offices, to increase security, and so on. The
replication technology is built-in and easy to configure. By contrast, many of the big
DBMS vendors charge extra for this feature, and it's far more difficult to manage.
LDAP allows you to securely delegate read and modification authority based on your
specific needs using ACIs (collectively, an ACL, or Access Control List). For example,
your facilities group might be given access to change an employee's location, cube, or
office number, but not be allowed to modify entries for any other fields. ACIs can control
access depending on who is asking for the data, what data is being asked for, where the
data is stored, and other aspects of the record being modified. This is all done through the
LDAP directory directly, so you needn't worry about making security checks at the user
application level.
LDAP is particularly useful for storing information that you wish to read from many
locations, but update infrequently. For example, your company could store all of the
following very efficiently in an LDAP directory:

The company employee phone book and organizational chart


External customer contact information
Infrastructure services information, including NIS maps, email aliases, and so on
Configuration information for distributed software packages
Public certificates and security keys

The structure of an LDAP directory tree


LDAP directory servers store their data hierarchically. If you've seen the top-down
representations of DNS trees or UNIX file directories, an LDAP directory structure will
be familiar ground. As with DNS host names, an LDAP directory record's Distinguished
Name (DN for short) is read from the individual entry, backwards through the tree, up to
the top level. More on this point later.
Getting to the root of the matter: Your base DN and you
The top level of the LDAP directory tree is the base, referred to as the "base DN." Let's
assume I work at a US electronic commerce company called FooBar, Inc., which is on
the Internet at foobar.com.
dc=foobar, dc=com

(base DN derived from the company's DNS domain components)


This format uses the DNS domain name as its basis, but rather than leaving the domain
name intact (and thus human-readable), it is split into domain components: foobar.com
becomes dc=foobar, dc=com. In theory, this
could be slightly more versatile, though it's a little harder for end users to remember. By
way of illustration, consider foobar.com. When foobar.com merges with gizmo.com, you
simply start thinking of "dc=com" as the base DN. Place the new records into your
existing directory under dc=gizmo, dc=com, and you're ready to go. (Of course, this
approach doesn't help if foobar.com merges with wocket.edu.) This is the format I'd
recommend for any new installations. Oh, and if you're planning to use Active Directory,
Microsoft has already decided for you that this is the format you wanted.
Time to branch out: How to organize your data in your directory tree
In a UNIX file system, the top level is the root. Beneath the root you have numerous files
and directories. As mentioned above, LDAP directories are set up in much the same
manner. Underneath your directory's base, you'll want to create containers that logically
separate your data. For historical (X.500) reasons, most LDAP directories set these
logical separations up as OU entries. OU stands for "Organizational Unit," which in
X.500 was used to indicate the functional organization within a company: sales, finance,
et cetera. Current LDAP implementations have kept the ou= naming convention, but
break things apart by broad categories like ou=people, ou=groups, ou=devices, and so
on. Lower level OUs are sometimes used to break categories down further. For example,
an LDAP directory tree (not including individual entries) might look like this:
dc=foobar, dc=com
   ou=customers
      ou=asia
      ou=europe
      ou=usa
   ou=employees
   ou=rooms
   ou=groups
      ou=assets-mgmt
      ou=nisgroups
      ou=recipes
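Against a tree like the one above, a directory query is a search rooted at a base DN with a filter. The sketch below assumes the third-party ldap3 package plus an invented host, bind DN, password and entries; the same query could equally be issued with another LDAP client library or the ldapsearch command-line tool.

# Requires the third-party "ldap3" package; host, credentials and data are invented.
from ldap3 import ALL, Connection, Server

server = Server("ldap.foobar.com", get_info=ALL)
conn = Connection(server,
                  user="cn=admin,dc=foobar,dc=com",
                  password="secret",
                  auto_bind=True)

# Find employees whose common name starts with "Smith", returning two attributes.
conn.search(search_base="ou=employees,dc=foobar,dc=com",
            search_filter="(cn=Smith*)",
            attributes=["cn", "mail"])

for entry in conn.entries:
    print(entry.cn, entry.mail)

conn.unbind()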


Web server
A computer that delivers Web pages to browsers and other files to applications via the
HTTP protocol. It includes the hardware, operating system, Web server software, TCP/IP
protocols and site content (Web pages and other files). If the Web server is used internally
and not by the public, it may be called an "intranet server."
HTTP Server
"Web server" may refer to just the software and not the entire computer system. In such
cases, it refers to the HTTP server (IIS, Apache, etc.) that manages requests from the
browser and delivers HTML documents and files in response. It also executes server-side
scripts (CGI scripts, JSPs, ASPs, etc.) that provide functions such as database searching
and e-commerce.
One Computer or Thousands
A single computer system that provides all the Internet services for a department or a
small company would include the HTTP server (Web pages and files), FTP server (file
downloads), NNTP server (newsgroups) and SMTP server (mail service). This system
with all its services could be called a Web server. In ISPs and large companies, each of
these services could be in a separate computer or in multiple computers. A datacenter for
a large public Web site could contain hundreds or thousands of Web servers.
Web Servers Are Built Into Everything
Web servers are not only used to deliver Web pages. Web server software is built into
numerous hardware devices and functions as the control panel for displaying and editing
internal settings. Any network device, such as a router, access point or print server may
have an internal Web server (HTTP server), which is accessed by its IP address just like a
Web site.


Web Server Fundamentals

Web browsers communicate with Web servers via the TCP/IP protocol. The browser
sends HTTP requests to the server, which responds with HTML pages and possibly
additional programs in the form of ActiveX controls or Java applets.
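The request/response role described above can be sketched with a few lines of standard-library Python; this minimal example serves files from the current directory on an arbitrarily chosen port 8080 and is meant only to illustrate what an HTTP server does, not to stand in for IIS or Apache.

from http.server import HTTPServer, SimpleHTTPRequestHandler

def run(port=8080):
    # SimpleHTTPRequestHandler maps request paths to files under the
    # current working directory and answers GET and HEAD requests.
    server = HTTPServer(("0.0.0.0", port), SimpleHTTPRequestHandler)
    print(f"Serving HTTP on port {port} (Ctrl+C to stop)")
    server.serve_forever()

if __name__ == "__main__":
    run()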

Application Server
(1) Before the Web, the term referred to a computer in a client/server environment that
performed the business logic (the data processing). In a two-tier client/server
environment, which is most common, the user's machine performs the business logic as
well as the user interface, and the server provides the database processing. In a three-tier
environment, a separate computer (application server) performs the business logic,
although some part may still be handled by the user's machine. After the Web exploded in
the mid-1990s, application servers became Web based (see definition #2 below).
(2) Since the advent of the Web, the term most often refers to software in an
intranet/Internet environment that hosts a variety of language systems used to program
database queries and/or general business processing. These scripts and services, such as
JavaScript and Java server pages (JSPs), typically access a database to retrieve up-to-date
data that is presented to users via their browsers or client applications.
The application server may reside in the same computer as the Web server (HTTP server)
or be in a separate computer. In large sites, multiple computers are used for both
application servers and Web servers (HTTP servers). Examples of Web application
servers include BEA Weblogic Server and IBM's WebSphere Application Server.


Application Servers & Web Servers


There is overlap between an application server and a Web server, as both can perform
similar tasks. The Web server (HTTP server) can invoke a variety of scripts and services
to query databases and perform business processing, and application servers often come
with their own HTTP server which delivers Web pages to the browser.
3.13 Mobile Devices
Handheld device


A handheld device (also known as handheld computer or simply handheld) is a pocket-sized computing device, typically utilising a small visual display screen for user output
and a miniaturised keyboard for user input. In the case of the personal digital assistant
(PDA) the input and output are combined into a touch-screen interface. Along with
mobile computing devices such as laptops and smartphones, PDAs are becoming
increasingly popular amongst those who require the assistance and convenience of a
conventional computer, in environments where carrying one would not be practicable.
The following are typical handhelds:
Information appliance
Smartphone
Personal digital assistant
Mobile phone
Personal Communicator
Handheld game console
Ultra-Mobile PC
Handheld television
The following is far from typical:
Aleutia Handheld
Categories of mobile devices
Due to the varying levels of functionality associated with mobile devices, in 2005 T38
and the DuPont Global Mobility Innovation Team proposed the following standardized
definition of mobile devices:

Limited Data Mobile Device: devices that have a small, primarily text-based
screen, with data services usually limited to SMS (Short Message Service) and
WAP access. Typical examples of these devices are cellular phones.
Basic Data Mobile Device: devices that have a medium-size screen (typically
between 120 x 120 and 240 x 240 pixels), menu or icon-based navigation via a
thumb-wheel or cursor, and which offer access to e-mail, address book, SMS, and
a basic web browser. Typical examples of these devices are BlackBerry and
Smartphone.
Enhanced Data Mobile Device: devices that have medium to large screens
(typically above 240 x 120 pixels), stylus-based navigation, and which offer the
same features as the "Basic Data Mobile Devices" plus native applications such as
Microsoft Office applications (Word, Excel, PowerPoint) and custom corporate
applications such as mobilized versions of SAP, intranet portals, etc. Typical
devices include those running Windows Mobile 2003 or version 5, such as Pocket
PCs.

Internet faxing with handhelds


Most handhelds can be used to send and receive faxes by email using an Internet
fax service. Internet faxing also enables handheld users to print documents by
sending them to a nearby fax machine. This service is available through most
internet fax providers.


3.14 Security Zones


The computer network is arranged in four zones: outside, public, lab and servers.
Each zone has assigned tasks and is given the resources and privileges necessary to
accomplish the tasks. The zones are ordered from highest to lowest security. Rules
determine whether traffic is allowed to pass between security zones, but have no effect on
traffic within a zone.
The rules applied depend on whether the traffic is part of an inbound or an outbound
connection. A connection is inbound if its establishing packet passed from a lower to a
higher security zone. A connection is outbound if its establishing packet passed from a
higher to lower security zone.
At present, outside is the lowest security zone. Then in ascending order: public, lab and
servers.
Outside Zone
Outgoing connections from higher security zones receive routable addresses in the
global IP address block when passing into this zone.
Public Zone
Incoming connections are allowed from the outside zone to access departmental public
services: notably the web server, professor and faculty web pages, and our external
domain name service. Outgoing connections from the public zone are permitted in order to
act as a secondary for other name servers and to respond to pings.
Lab Zone
Incoming connections are allowed from the outside zone to allow remote access for
students to the lab and to receive email addressed to the lab machines. In order to enforce
student privacy, only authenticated channels can pass inward. Outgoing connections of
any sort are permitted to the Internet. Pings and certain other ICMP messages are also
allowed.
Server Zone
All outgoing connections are allowed, whether to the lab zone or to the outside. Incoming
connections are restricted to fully authenticated connections. An incoming connection is
allowed from the public zone so that the public web server can retrieve faculty and staff
web pages. Incoming email is allowed.


3.15 DMZ (Demilitarized Zone)


In computer networks, a DMZ (demilitarized zone) is a computer host or small network
inserted as a "neutral zone" between a company's private network and the outside public
network, usually the Internet. It prevents outside users from getting direct access to a
server that has company data. (The term comes from the geographic buffer zone that was
set up between North Korea and South Korea following the UN "police action" in the
early 1950s.) A DMZ is an optional and more secure approach to a firewall and
effectively acts as a proxy server as well.
The point of a DMZ is that connections from the internal and the external network to the
DMZ are permitted, whereas connections from the DMZ are only permitted to the
external network -- hosts in the DMZ may not connect to the internal network. This
allows the DMZ's hosts to provide services to the external network while protecting the
internal network in case intruders compromise a host in the DMZ. For someone on the
external network who wants to illegally connect to the internal network, the DMZ is a
dead end.
The DMZ is typically used for connecting servers that need to be accessible from the
outside world, such as e-mail, web and DNS servers.
Connections from the external network to the DMZ are usually controlled using port
address translation (PAT).
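
As a simplified illustration of how PAT steers external connections to hosts in the DMZ, the sketch below maps a public address and port to an internal DMZ address and port. The addresses, ports, and services are hypothetical examples, not a prescribed configuration.

    # Minimal sketch of port address translation (PAT) toward a DMZ.
    # The addresses, ports, and services below are hypothetical examples.
    PAT_TABLE = {
        ("203.0.113.10", 80): ("192.168.10.5", 80),   # public web server in the DMZ
        ("203.0.113.10", 25): ("192.168.10.6", 25),   # mail relay in the DMZ
        ("203.0.113.10", 53): ("192.168.10.7", 53),   # external DNS in the DMZ
    }

    def translate(public_ip, public_port):
        """Return the DMZ host/port for an external connection, or None if no rule exists."""
        return PAT_TABLE.get((public_ip, public_port))

    print(translate("203.0.113.10", 80))   # ('192.168.10.5', 80)
    print(translate("203.0.113.10", 22))   # None -- no rule, so the connection is dropped
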

A DMZ is often created through a configuration option on the firewall, where each
network is connected to a different port on the firewall - this is called a three-legged
firewall set-up. A stronger approach is to use two firewalls, where the DMZ is in the
middle and connected to both firewalls, and one firewall is connected to the internal
network and the other to the external network. This helps prevent accidental
misconfiguration that would allow access from the external network to the internal network. This
type of setup is also referred to as screened-subnet firewall.
In a typical DMZ configuration for a small company, a separate computer (or host in
network terms) receives requests from users within the private network for access to Web
sites or other companies accessible on the public network. The DMZ host then initiates
sessions for these requests on the public network. However, the DMZ host is not able to
initiate a session back into the private network. It can only forward packets that have
already been requested.
Users of the public network outside the company can access only the DMZ host. The
DMZ may typically also have the company's Web pages so these could be served to the
outside world. However, the DMZ provides access to no other company data. In the event
that an outside user penetrated the DMZ host's security, the Web pages might be
corrupted but no other company information would be exposed. Cisco, the leading maker
of routers, is one company that sells products designed for setting up a DMZ.

DMZ host

Some home routers refer to a DMZ host. A home router DMZ host is a host on the
internal network that has all ports exposed, except those ports forwarded otherwise.
This is not a true DMZ by definition since these pseudo DMZs provide no security
between that host and the internal network. That is, the DMZ host is able to connect to
hosts on the internal network, but hosts in a real DMZ are prevented from doing so by the
firewall that sits between them.

3.16 Intranet
An intranet is a private computer network that uses Internet protocols, network
connectivity, and possibly the public telecommunication system to securely share part of
an organization's information or operations with its employees. Sometimes the term refers
only to the most visible service, the internal website. The same concepts and technologies
of the Internet such as clients and servers running on the Internet protocol suite are used
to build an intranet. HTTP and other Internet protocols are commonly used as well,
especially FTP and e-mail. There is often an attempt to use Internet technologies to
provide new interfaces with corporate 'legacy' data and information systems.
Briefly, an intranet can be understood as "a private version of the Internet", or as a
version of the internet confined to an organization.
Advantages
1. Workforce productivity: Intranets can help employees to quickly find and view
information and applications relevant to their roles and responsibilities. Via a simple-to-use web browser interface, users can access data held in any database the organization
wants to make available, anytime and - subject to security provisions - from anywhere,
increasing employees' ability to perform their jobs faster, more accurately, and with
confidence that they have the right information.
2. Time: With intranets, organizations can make more information available to employees
on a "pull" basis (ie: employees can link to relevant information at a time which suits
them) rather than being deluged indiscriminately by emails.
3. Communication: Intranets can serve as powerful tools for communication within an
organization, vertically and horizontally.
4. Web publishing allows cumbersome corporate knowledge to be maintained and easily
accessed throughout the company using hypermedia and Web technologies. Examples
include: employee manuals, benefits documents, company policies, business standards,
newsfeeds, and even training, can be accessed using common Internet standards (Acrobat
files, Flash files, CGI applications). Because each business unit can update the online


copy of a document, the most recent version is always available to employees using the
intranet.
5. Business operations and management: Intranets are also being used as a platform for
developing and deploying applications to support business operations and decisions
across the internetworked enterprise.
Disadvantages
1. Publication of information must be controlled to ensure only correct and appropriate
information is provided in the intranet.
2. Appropriate security permissions must be in place to ensure there are no concerns over
who accesses the intranet or abuse of the intranet by users.
3. Intranets must be carefully planned and structured to ensure that staff do not suffer
from "information overload," - delivering too much information for users to handle - or,
conversely, "information underload," delivering insufficient information for users' needs.

3.17 Extranet
An extranet is a private network that uses Internet protocols, network connectivity, and
possibly the public telecommunication system to securely share part of an organization's
information or operations with suppliers, vendors, partners, customers or other
businesses. An extranet can be viewed as part of a company's Intranet that is extended to
users outside the company (eg: normally over the Internet). It has also been described as a
"state of mind" in which the Internet is perceived as a way to do business with other
companies as well as to sell products to customers.
Briefly, an extranet can be understood as "a private internet over the Internet". An
argument has been made that "extranet" is just a buzzword for describing what
institutions have been doing for decades, that is, interconnecting to each other to create
private networks for sharing information.
Another very common use of the term "extranet" is to designate the "private part" of a
website, where "registered users" can navigate, enabled by authentication mechanisms on
a "login page".
Security
An extranet requires security and privacy. These can include firewalls, server
management, the issuance and use of digital certificates or similar means of user
authentication, encryption of messages, and the use of virtual private networks (VPNs)
that tunnel through the public network.
Industry uses
During the late 1990s and early 2000s, several industries started to use the term
"extranet" to describe central repositories of shared data made accessible via the web only
to authorised members of particular work groups.
For example, in the construction industry, project teams could log in to and access a
'project extranet' to share drawings and documents, make comments, issue requests for
information, etc. In 2003 in the United Kingdom, several of the leading vendors formed
the Network of Construction Collaboration Technology Providers, or NCCTP, to promote
the technologies and to establish data exchange standards between the different systems.
The same type of construction-focused technologies have also been developed in the
United States, Australia, Scandinavia, Germany and Belgium, among others. Some
applications are offered on a Software as a Service (SaaS) basis by vendors functioning
as Application service providers (ASPs).
Specially secured extranets are used to provide virtual data room services to companies in
several sectors (including law and accountancy). There are a variety of commercial
extranet applications, some of which are for pure file management, and others which
include broader collaboration and project management tools.
Advantages
1. Extranets can improve organization productivity by automating processes that were
previously done manually (eg: reordering of inventory from suppliers). Automation can
also reduce the margin of error of these processes.
2. Extranets allow organization or project information to be viewed at times convenient
for business partners, customers, employees, suppliers and other stake-holders. This cuts


down on meeting times and is an advantage when doing business with partners in
different time zones.
3. Information on an extranet can be updated, edited and changed instantly. All authorised
users therefore have immediate access to the most up-to-date information.
4. Extranets can improve relationships with key customers, providing them with accurate
and updated information.
Disadvantages
1. Extranets can be expensive to implement and maintain within an organisation (eg:
hardware, software, employee training costs) - if hosted internally instead of via an ASP.
2. Security of extranets can be a big concern when dealing with valuable information.
System access needs to be carefully controlled to avoid sensitive information falling into
the wrong hands.
3. Extranets can reduce personal contact (face-to-face meetings) with customers and
business partners. This could cause a lack of connections made between people and a
company, which hurts the business when it comes to loyalty of its business partners and
customers.
3.18 VLANs (Virtual Local Area Network)
VLAN
A virtual LAN, commonly known as a vLAN or as a VLAN, is a method of creating
independent logical networks within a physical network. Several VLANs can co-exist
within such a network. This helps in reducing the broadcast domain and administratively
separating logical segments of LAN (like company departments) which should not
exchange data using LAN (they still can by routing).
A VLAN consists of a network of computers that behave as if connected to the same wire
- even though they may actually be physically connected to different segments of a LAN.
Network administrators configure VLANs through software rather than hardware, which
makes them extremely flexible. VLANs allow a network manager to logically segment a
LAN into different broadcast domains. Since this is a logical segmentation and not a
physical one, workstations do not have to be physically located together. Users on
different floors of the same building, or even in different buildings can now belong to the
same LAN.
Advantages of VLAN


1. Increases the number of broadcast domains but reduces the size of each broadcast
domain, which in turn reduces network traffic and increases network security (both of
which are hampered in the case of a single large broadcast domain).
2. Reduces the management effort needed to create subnetworks.
3. Reduces hardware requirements, as networks can be separated logically instead of
physically.
4. Increases control over multiple traffic types.
Protocols and design
The IEEE 802.1Q tagging protocol dominates the VLAN world. Prior to the introduction
of 802.1Q several proprietary protocols existed, such as Cisco's ISL (Inter-Switch Link, a
variant of IEEE 802.10) and 3Com VLT (Virtual LAN Trunk). ISL is no longer supported
by Cisco.
Early network designers often configured VLANs with the aim of reducing the size of the
collision domain in a large single Ethernet segment and thus of improving performance.
When Ethernet switches made this a non-issue (because each switch port is its own collision domain),
attention turned to reducing the size of the broadcast domain at the MAC layer. Virtual
networks can also serve to restrict access to network resources without regard to physical
topology of the network, although the strength of this method remains debatable as
VLAN Hopping is a common means of bypassing such security measures.
Virtual LANs operate at layer 2 (the data link layer) of the OSI model. However,
administrators often configure a VLAN to map directly to an IP network, or subnet,
which gives the appearance of involving layer 3 (the network layer).
In the context of VLANs, the term "trunk" denotes a network link carrying multiple
VLANs which are identified by labels (or "tags") inserted into their packets. Such trunks
must run between "tagged ports" of VLAN-aware devices, so are often switch-to-switch
or switch-to-router links rather than links to hosts. (Confusingly, the term 'trunk' also gets
used for what Cisco calls "channels": Link Aggregation or Port Trunking). A router
(Layer 3 switch) serves as the backbone for network traffic going across different
VLANs.
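
To make the tagging concrete, the following Python sketch extracts the VLAN ID from an Ethernet frame carrying an 802.1Q header (TPID 0x8100 followed by a 16-bit TCI whose low 12 bits are the VLAN ID). The sample frame bytes are fabricated purely for illustration.

    import struct

    # Minimal sketch: read the 802.1Q VLAN tag from a raw Ethernet frame.
    def vlan_id(frame: bytes):
        """Return the VLAN ID if the frame is 802.1Q-tagged, otherwise None."""
        if len(frame) < 16:
            return None
        tpid, tci = struct.unpack("!HH", frame[12:16])
        if tpid != 0x8100:               # not a tagged frame
            return None
        return tci & 0x0FFF              # low 12 bits of the TCI hold the VLAN ID

    # Fabricated frame: two zeroed MAC addresses, an 802.1Q tag for VLAN 42, then a payload stub.
    frame = bytes(6) + bytes(6) + struct.pack("!HH", 0x8100, 42) + b"\x08\x00payload"
    print(vlan_id(frame))   # 42
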
On Cisco devices, VTP (VLAN Trunking Protocol) allows for VLAN domains, which
can aid in administrative tasks. VTP also allows "pruning", which involves directing
specific VLAN traffic only to switches which have ports on the target VLAN.
Assigning VLAN Memberships


The four ways that are in use are:


Port-based: A switch port is manually configured to be a member of a VLAN. This
method only works if all machines on the port belong to the same VLAN.
MAC-based: VLAN membership is based on the MAC address of the workstation. The
switch has a table listing of the MAC address of each machine, along with the VLAN to
which it belongs.
Protocol-based: Layer 3 data within the frame is used to determine VLAN membership.
For example, IP machines can be classified as the first VLAN, and AppleTalk machines
as the second. The major disadvantage of this method is that it violates the independence
of the layers, so an upgrade from IPv4 to IPv6, for example, will cause the switch to fail.
Authentication based: Devices can be automatically placed into VLANs based on the
authentication credentials of a user or device, using the 802.1X protocol.
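
As a toy illustration of the first two membership methods above (port-based and MAC-based), the following sketch looks up a VLAN first by switch port and then falls back to the source MAC address. The tables, the fallback order, and the default VLAN are invented for the example; real switches apply one method per port.

    # Minimal sketch: resolve VLAN membership by port, falling back to MAC address.
    # The port and MAC tables below are invented for illustration.
    PORT_VLAN = {1: 10, 2: 10, 3: 20}                 # port-based assignment
    MAC_VLAN = {"aa:bb:cc:dd:ee:01": 30}              # MAC-based assignment
    DEFAULT_VLAN = 1

    def vlan_for(port, mac):
        if port in PORT_VLAN:
            return PORT_VLAN[port]
        return MAC_VLAN.get(mac.lower(), DEFAULT_VLAN)

    print(vlan_for(2, "aa:bb:cc:dd:ee:01"))   # 10 -- the port assignment wins
    print(vlan_for(9, "AA:BB:CC:DD:EE:01"))   # 30 -- MAC-based fallback
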
3.19 NAT (Network Address Translation)
NAT, short for Network Address Translation, is an Internet standard that enables a local-area network (LAN) to use one set of IP addresses for internal traffic and a second set of
addresses for external traffic. A NAT box located where the LAN meets the Internet
makes all necessary IP address translations.
NAT serves three main purposes:
1. Provides a type of firewall by hiding internal IP addresses
2. Enables a company to use more internal IP addresses. Since they're used
internally only, there's no possibility of conflict with IP addresses used by other
companies and organizations.
3. Allows a company to combine multiple ISDN connections into a single Internet
connection
Dynamic NAT
A type of NAT in which a private IP address is mapped to a public IP address drawn
from a pool of registered (public) IP addresses. Packets leaving the LAN for the Internet
have their private source addresses rewritten to one of these public addresses. Typically,
the NAT router in a network will keep a table of registered IP addresses, and when a
private IP address requests access to the Internet, the router chooses an IP address from
the table that is not at that time being used by another private IP address.
Dynamic NAT helps to secure a network as it masks the internal configuration of a
private network and makes it difficult for someone outside the network to monitor
individual usage patterns. Another advantage of dynamic NAT is that it allows a private
network to use private IP addresses that are invalid on the Internet but useful as internal
addresses.
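
The behaviour described above can be illustrated with a short sketch: the router draws an unused public address from its pool for each new private source and remembers the binding so the same host keeps its mapping. The addresses and pool size are hypothetical.

    # Minimal sketch of dynamic NAT: map private sources to a pool of public addresses.
    # The addresses below are hypothetical documentation ranges.
    POOL = ["203.0.113.20", "203.0.113.21", "203.0.113.22"]
    bindings = {}                 # private IP -> public IP currently assigned

    def translate_out(private_ip):
        """Pick an unused public address for this private host, or reuse its existing binding."""
        if private_ip in bindings:
            return bindings[private_ip]
        in_use = set(bindings.values())
        for public_ip in POOL:
            if public_ip not in in_use:
                bindings[private_ip] = public_ip
                return public_ip
        raise RuntimeError("NAT pool exhausted")

    print(translate_out("192.168.1.10"))   # 203.0.113.20
    print(translate_out("192.168.1.11"))   # 203.0.113.21
    print(translate_out("192.168.1.10"))   # 203.0.113.20 again -- existing binding reused

A static NAT entry would simply be a fixed, pre-configured binding in the same table rather than one chosen from the pool at connection time.
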


Static NAT
A type of NAT in which a private IP address is mapped to a public IP address, where the
public address is always the same IP address (i.e., it has a static address). This allows an
internal host, such as a Web server, to have an unregistered (private) IP address and still
be reachable over the Internet.

3.20 Access Card


The Access Card (AC) is used as a general identification card as well as for
authentication to enable access to computers, networks, and certain network facilities. It
enables encrypting and cryptographically signing email, facilitates the use of PKI
authentication tools, and establishes an authoritative process for the use of identity
credentials.
Objectives
The AC has many objectives, including controlling access to computer networks and
enabling users to sign documents electronically, encrypt email messages, and enter
controlled facilities.
Security
The idea that the AC significantly increases security is severely flawed. Under the
username/password approach, hacking a person's password required either an over-the-shoulder approach, intercepting the user's hashed password and using a tool such as
L0phtCrack, or the use of a keystroke logger, a small device which sits between the
keyboard and the USB or PS2 port. These approaches required physical access to the
LAN. In fact, with the device cleared and the computer rebooted, the username is usually
the first entry, immediately followed by the password. With the AC approach, hacking a
person's password became only slightly more complicated. It now requires both a
keyboard recorder and a tap on the digital stream of information between the
computer and the network. The keyboard recorder will record the PIN, which is strongly
encrypted over the network but not encrypted between the keyboard and the computer,
while the digital stream tap will record the AC's unique ID (also called an Electronic Data
Interchange Personal Identifier, or EDIPI), which is not encrypted over the LAN. The
general consensus at the Defense Manpower Data Center is to use two-factor
authentication, in which some type of hardware or software token, or a biometric, is
used, usually in conjunction with a password. Ideally, the best solution is to use several
approaches, such as facial biometrics, a PIN, and a physical object such as the AC.
Regardless, the encryption process must take place on the AC using a strong PKI
encryption engine to encrypt a unique user ID that is not visible to others without the right


credentials and specialized equipment. It's conceivable to make a unique user ID that's
never visible to anyone except the authentication engine itself.
Scalability
The US Army has enjoyed password scalability, or single-point access to many SSL-secured websites, through its Army Knowledge Online program for several years.
Non-Windows Support
The Access Card, historically, has only been supported by Windows machines; however,
an increasing number of Department of Defense workstations are using operating systems
such as Linux or Mac OS X. Fortunately, Apple has done work to add support for
Access Cards to their operating system out of the box using the MUSCLE (Movement
for the Use of Smartcards in a Linux Environment) project. Some work has also been
done in the Linux realm. Some users are using the MUSCLE project combined with
Apple's Apple Public Source Licensed Access Card software.

3.21 Operating Systems and Databases


Operating Systems
An operating system (OS) is a computer program that manages the hardware and
software resources of a computer. It performs many functions and is, in very basic terms,
an interface between your computer and the outside world.
Operating System Functions
The operating system provides for several other functions, including:

System tools (programs) used to monitor computer performance, debug problems, or
maintain parts of the system.
A set of libraries or functions which programs may use to perform specific tasks,
especially relating to interfacing with computer system components.
In a multitasking operating system, where multiple programs can be running at the same
time, the operating system determines which applications should run in what order and
how much time should be allowed for each application before giving another application
a turn.


It manages the sharing of internal memory among multiple applications.
It handles input and output to and from attached hardware devices, such as hard disks,
printers, and dial-up ports.
It sends messages to each application or interactive user (or to a system operator) about
the status of operation and any errors that may have occurred.
It can offload the management of what are called batch jobs (for example, printing) so
that the initiating application is freed from this work.
On computers that can provide parallel processing, an operating system can manage how
to divide the program so that it runs on more than one processor at a time.

The operating system makes these interfacing functions, along with its other functions,
operate smoothly; these functions are mostly transparent to the user.
Database
A database is a structured collection of data. The data stored in a database is managed by
a Data Base Management System (DBMS). The DBMS is responsible for adding,
modifying, and deleting data from the database. The DBMS is also responsible for
providing access to the data for viewing and reporting. Open source DBMSs include
MySQL, Postgres, and BerkeleyDB. Commercial DBMSs include Oracle, DB2, Sybase,
Informix, and Microsoft SQL Server.
The central concept of a database is that of a collection of records, or pieces of
knowledge. Typically, for a given database, there is a structural description of the type of
facts held in that database: this description is known as a schema. The schema describes
the objects that are represented in the database, and the relationships among them. There
are a number of different ways of organizing a schema, that is, of modeling the database
structure: these are known as database models (or data models). The model in most
common use today is the relational model, which in layman's terms represents all
information in the form of multiple related tables each consisting of rows and columns
(the true definition uses mathematical terminology). This model represents relationships
by the use of values common to more than one table. Other models such as the
hierarchical model and the network model use a more explicit representation of
relationships.
Applications of databases
Databases are used in many applications, spanning virtually the entire range of computer
software. Databases are the preferred method of storage for large multiuser applications,


where coordination between many users is needed. Even individual users find them
convenient, though, and many electronic mail programs and personal organizers are
based on standard database technology. Software database drivers are available for most
database platforms so that application software can use a common application
programming interface (API) to retrieve the information stored in a database. Two
commonly used database APIs are JDBC and ODBC. A database is also a place where
you can store data and then arrange that data easily and efficiently.
4. Identity/Access Management
4.1 Single Sign-On
Single sign-on (SSO) is a session/user authentication process that permits a user to enter
one name and password in order to access multiple applications. The process
authenticates the user for all the applications they have been given rights to and
eliminates further login prompts when they switch applications during a particular
session.
In e-commerce, single sign-on is designed to centralize consumer financial information
on one server, not only for the consumer's convenience but also to offer increased
security by limiting the number of times credit card numbers or other sensitive
information must be entered. Microsoft's "Passport" single sign-on service is an example
of a growing trend towards the use of Web-based single sign-ons that allow users to
register financial information once and shop at multiple Web sites.
Why choose single sign-on?
How many of you have had to implement your own authentication mechanism -- usually
some simple database lookup? How often have you stopped to think about the workflow
needed for creating and managing user accounts? This is a common task in any
development project. If you are lucky, your organization already possesses some common
classes or libraries you can use. But it is often a task that is overlooked -- seen as trivial,
something that occurs only in the background. In general, a coherent authentication
strategy or a solid authentication framework is missing. Over time this leads to a
proliferation of applications, each of which comes with their own authentication needs
and user repositories. At one time or another, everyone needs to remember multiple
usernames and passwords to access different applications on a network. This poses a
huge cost for the administration and support departments -- accounts must be set up in
each application for each employee, users forget their passwords, and so on.
Authentication is a horizontal requirement across multiple applications, platforms, and
infrastructures. In general, there's no reason why user Mary should need multiple
usernames. Ideally she should only need to identify herself once and then be provided
with access to all authorized network resources. The objective of SSO is to allow users
access to all applications from one logon. It provides a unified mechanism to manage the
authentication of users and implement business rules determining user access to


applications and data. Before I get into the technical details of single sign-on, take a
quick look at some of the benefits and some of the risks.
Benefits include the following:
Improved user productivity. Users are no longer bogged down by multiple logins and
they are not required to remember multiple IDs and passwords. Also, support personnel
answer fewer requests to reset forgotten passwords.
Improved developer productivity. SSO provides developers with a common
authentication framework. In fact, if the SSO mechanism is independent, then developers
don't have to worry about authentication at all. They can assume that once a request for
an application is accompanied by a username, then authentication has already taken
place.
Simplified administration. When applications participate in a single sign-on protocol,
the administration burden of managing user accounts is simplified. The degree of
simplification depends on the applications since SSO only deals with authentication. So,
applications may still require user-specific attributes (such as access privileges) to be set
up.
Some of the more frequently mentioned problems with single sign-on include the
following:
Difficult to retrofit. An SSO solution can be difficult, time-consuming, and expensive to
retrofit to existing applications.
Unattended desktop. Implementing SSO reduces some security risks, but increases
others. For example, a malicious user could gain access to a user's resources if the user
walks away from his machine and leaves it logged in. Although this is a problem with
security in general, it is worse with SSO because all authorized resources are
compromised. At least with multiple logons, the user may only be logged into one system
at the time and so only one resource is compromised.
Single point of attack. With single sign-on, a single, central authentication service is
used by all applications. This is an attractive target for hackers who may decide to carry
out a denial of service attack.
Many free and commercial SSO or reduced sign-on solutions are currently available. A
partial list follows:
The JA-SIG Central Authentication Service (CAS) is an open single sign-on service
(originally developed by Yale University) that allows web applications the ability to defer
all authentication to a trusted central server or servers. Numerous clients are freely
available, including clients for Java, .Net, PHP, Perl, Apache, uPortal, Liferay and others.


A-Select is the Dutch authentication system for higher education that was co-developed
by SURFnet (the Dutch NREN). A-Select has now become open source and is used by
the Dutch Government, for instance, for DigiD, their authentication system. A-Select
allows staff and students to gain access to several web services through a single on-line
authentication. Institutions can use A-Select to secure their web applications in a simple
fashion. They can use different means of authentication ranging from username/password
to stronger (more secure) methods such as a one-time password sent to a mobile phone or
Internet banking authentication.
CoSign, an open-source project originally designed to provide the University of
Michigan with a secure single sign-on web authentication system. CoSign authenticates
users on the web server and then provides an environment variable for the users' name.
When the users access a part of the site that requires authentication, the presence of that
variable allows access without having to sign-on again. Cosign is part of the National
Science Foundation Middleware Initiative (NMI) software release.
Enterprise single sign-on (E-SSO), also called legacy single sign-on, after primary
user authentication, intercepts login prompts presented by secondary applications, and
automatically fills in fields such as a login ID or password. E-SSO systems allow for
interoperability with applications that are unable to externalize user authentication,
essentially through "screen scraping."
Web single sign-on (Web-SSO), also called Web access management (Web-AM),
works strictly with applications and resources accessed with a web browser. Access to
web resources is intercepted, either using a web proxy server or by installing a
component on each targeted web server. Unauthenticated users who attempt to access a
resource are diverted to an authentication service, and returned only after a successful
sign-on. Cookies are most often used to track user authentication state, and the Web-SSO
infrastructure extracts user identification information from these cookies, passing it into
each web resource.
Kerberos is a popular mechanism for applications to externalize authentication entirely.
Users sign into the Kerberos server, and are issued a ticket, which their client software
presents to servers that they attempt to access. Kerberos is available on Unix, Windows
and mainframe platforms, but requires extensive modification of client/server application
code, and is consequently not used by many legacy applications.
Federation is a new approach, also for web applications, which uses standards-based
protocols to enable one application to assert the identity of a user to another, thereby
avoiding the need for redundant authentication. Standards to support federation include
SAML and WS-Federation [1].
Light-Weight Identity and OpenID, under the YADIS umbrella, offer distributed and
decentralized SSO, where identity is tied to an easily-processed URL which can be
verified by any server using one of the participating protocols.


JOSSO, or Java Open Single Sign-On, is an open source J2EE-based SSO infrastructure
aimed at providing a solution for centralized, platform-neutral user authentication. It uses
web services for asserting user identity, allowing the integration of non-Java applications
(e.g. PHP, Microsoft ASP, etc.) into the Single Sign-On Service using the SOAP over HTTP
protocol.
Reference
1. http://www.ibm.com
2. http://www.whatis.com
3. http://www.wikipedia.org
4.2 Authentication
Authentication is any process by which you verify that someone is who they claim they
are. This usually involves a username and a password, but can include any other method
of demonstrating identity, such as a smart card, retina scan, voice recognition, or
fingerprints.
Basic authentication
As the name implies, basic authentication is the simplest method of authentication, and
for a long time was the most common authentication method used. However, other
methods of authentication have recently passed basic in common usage, due to usability
issues.
How basic authentication works
When a particular resource has been protected using basic authentication, web servers
send a 401 Authentication Required header with the response to the request, in order to
notify the client that user credentials must be supplied in order for the resource to be
returned as requested.
Upon receiving a 401 response header, the client's browser, if it supports basic
authentication, will ask the user to supply a username and password to be sent to the
server. If you are using a graphical browser, such as Netscape or Internet Explorer, what
you will see is a box which pops up and gives you a place to type in your username and
password, to be sent back to the server. If the username is in the approved list, and if the
password supplied is correct, the resource will be returned to the client.


Because the HTTP protocol is stateless, each request will be treated in the same way,
even though they are from the same client. That is, every resource which is requested
from the server will have to supply authentication credentials over again in order to
receive the resource.
Fortunately, the browser takes care of the details here, so that you only have to type in
your username and password one time per browser session - that is, you might have to
type it in again the next time you open up your browser and visit the same web site.
Along with the 401 response, certain other information will be passed back to the client.
In particular, it sends a name which is associated with the protected area of the web site.
This is called the realm, or just the authentication name. The client browser caches the
username and password that you supplied, and stores it along with the authentication
realm, so that if other resources are requested from the same realm, the same username
and password can be returned to authenticate that request without requiring the user to
type them in again. This caching is usually just for the current browser session, but some
browsers allow you to store them permanently, so that you never have to type in your
password again.
The authentication name, or realm, will appear in the pop-up box, in order to identify
what the username and password are being requested for.
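
The credential exchange described above boils down to a base64-encoded "username:password" string carried in the Authorization header, in reply to the server's WWW-Authenticate challenge. The sketch below builds and decodes such a header; the account name, password, and realm are made up for the example.

    import base64

    # Minimal sketch of HTTP Basic authentication headers (username/realm are made up).
    def build_basic_header(username, password):
        token = base64.b64encode(f"{username}:{password}".encode()).decode()
        return f"Authorization: Basic {token}"

    def parse_basic_header(header_value):
        """Given the value after 'Basic ', recover the username and password."""
        decoded = base64.b64decode(header_value).decode()
        username, _, password = decoded.partition(":")
        return username, password

    challenge = 'WWW-Authenticate: Basic realm="Restricted Area"'   # sent with the 401 response
    print(challenge)
    print(build_basic_header("mary", "s3cret"))
    print(parse_basic_header("bWFyeTpzM2NyZXQ="))   # ('mary', 's3cret')

Note that base64 is an encoding, not encryption, which is why basic authentication is normally combined with SSL/TLS.
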
Form based authentication
The authentication challenge is an HTML form with one or more text input fields for user
credentials.
How Form based authentication works
In a typical form-based authentication, text boxes are provided for the user name and
password. Users enter their credentials in these fields. The most common credential
choices are username and password, but any user attributes can be used, for example,
user name, password, and domain. A Submit button posts the content of the form. When
the user clicks the Submit button, the form data is posted to the Web server.
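
A minimal sketch of the server side of such a form follows. The field names, user store, session cookie, and redirect target are assumptions for illustration and are not tied to any particular web framework.

    # Minimal sketch of handling a posted login form (field names and user store invented).
    import hashlib, secrets

    USERS = {"mary": hashlib.sha256(b"s3cret").hexdigest()}   # username -> password hash
    SESSIONS = {}                                             # session id -> username

    def handle_login(form):
        """form is a dict of posted fields, e.g. {'username': ..., 'password': ...}."""
        user = form.get("username", "")
        digest = hashlib.sha256(form.get("password", "").encode()).hexdigest()
        if USERS.get(user) == digest:
            session_id = secrets.token_urlsafe(16)
            SESSIONS[session_id] = user
            return ("302 Found", {"Set-Cookie": f"session={session_id}", "Location": "/home"})
        return ("401 Unauthorized", {})

    print(handle_login({"username": "mary", "password": "s3cret"})[0])   # 302 Found
    print(handle_login({"username": "mary", "password": "wrong"})[0])    # 401 Unauthorized
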
You may want to use form-based authentication for reasons such as the following:
1. To use your organization's look and feel in the authentication process. For example, a
custom form can include a company logo and a welcome message instead of the standard
username and password pop-up window used in Basic authentication.
2. To gather additional information at the time of login.


3. To provide additional functionality with the login procedure, such as a link to a page
for lost password management.
Client Certificate Authentication
The client certificate challenge method uses the Secure Sockets Layer version 3 (SSLv3)
certificate authentication protocol built into browsers and Web servers. Authenticating
users with a client certificate requires the client to establish an SSL connection with a
Web server that has been configured to process client certificates.
How Client Certificate Authentication works
When the user requests access to a resource over SSL, the Web server provides its server
certificate, which allows the client to establish an SSL session. The Web server then asks the
client for a client-side certificate. If the client does not present a certificate, the SSL
connection with the client is closed and client-side certificate authentication is not
attempted.
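
As a sketch of what "configured to process client certificates" can look like in code, the following uses Python's standard ssl module to build a server-side context that demands a client certificate signed by a trusted CA. The certificate, key, and CA file names are placeholders, not real files.

    import ssl

    # Minimal sketch: a server-side TLS context that requires client certificates.
    # The certificate/key/CA file names are placeholders.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="server.crt", keyfile="server.key")   # server certificate
    context.load_verify_locations(cafile="trusted_clients_ca.pem")         # CA that signed client certs
    context.verify_mode = ssl.CERT_REQUIRED   # fail the handshake if no client certificate is presented

    # wrapped = context.wrap_socket(plain_socket, server_side=True)
    # wrapped.getpeercert() would then expose the authenticated client's certificate fields.
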
Reference
1. http://httpd.apache.org/docs/1.3/howto/auth.html

Kerberos
What is Kerberos?
Kerberos is an authentication protocol suitable for authenticating users to several
networked entities. Kerberos is suitable in scenarios when there are a number of users
wanting to access several applications / hosts / devices and hence needed to be identified
before granting access. This protocol avoids the need for the user to remember multiple
passwords, thereby paving the way for Single Sign-on. This protocol centers on a client-server model and symmetric key cryptography and also has built-in provisions to prevent
replay attacks.
Kerberos is named after the mythical monstrous three-headed dog that guards the underworld,
preventing the dead from leaving and the living from entering. The protocol was
designed by the Massachusetts Institute of Technology in order to securely verify the
identities of clients accessing servers over an open network vulnerable to
eavesdropping.
Kerberos has been made available by MIT for free and has found its place in many
applications. Windows 2000 is a classic example of a product implementing Kerberos.
Why Kerberos?


Simple password based authentication


Let us consider a situation where Alice is a client and needs to access Bob, which is the
server. In a traditional scenario, Alice presents her password to Bob to identify herself.
One of the pre-requisites for this to happen is the knowledge of Alice's password by both
Alice and Bob. This is simple password authentication and has its own drawback. The
password is transmitted in clear text over the wire. So Eve, the eavesdropper, could sniff
the network traffic, steal the password and impersonate Alice to Bob.
Authentication with secret key cryptography
Since passwords are vulnerable to eavesdropping when transmitted in clear text, we
could have Alice and Bob agree on a secret key prior to their communication (of course,
via a trusted medium of communication) and make use of a challenge-response
handshake for identification. Bob sends Alice a random challenge; Alice encrypts it with
the shared key and returns it; Bob decrypts the reply and, if it matches his challenge,
concludes that the other party knows the key and must be Alice.

If Alice wants to authenticate Bob, she could repeat the same sequence in the other
direction (i.e. generate a random challenge, send it to Bob, get the encrypted value,
decrypt it) and conclude whether the other party is Bob. This is referred to as mutual
authentication.
Secret key cryptography definitely presents a lot of advantages over simple password
based authentication:
1. The password is not transmitted and hence is resistant to sniffing
2. Brute forcing a cryptographic key is definitely harder than guessing a password
through brute force
However, this model does not scale when we have m clients like Alice trying to
communicate with n servers like Bob. We end up with a system that has
to maintain m x n secrets.


Authentication with secret key cryptography and a trusted third party


In the example scenario mentioned above, where authentication is carried out with secret
key cryptography, if we introduce a trusted third party, who maintains the keys for each
client and server, we end up maintaining m + n secrets. Thus Alice will have a single
key irrespective of the other party she is communicating with.
Kerberos protocol is based on the above principle where we have a trusted third party
called Key Distribution Center (KDC), which distributes secret keys to all the clients and
servers. Thus every client and server shares a secret key with KDC. When a client wants
to communicate with a server, the client requests a session key from KDC. KDC
distributes the session keys to the communicating parties securely. The client then proves
the knowledge of the session key with the server. The following picture demonstrates this
sequence.

where Ka is Alice's secret key shared with the KDC, Kb is Bob's secret key shared with
the KDC, and Kab is the session key generated by the KDC for Alice to communicate
with Bob.
In the above sequence, it is to be noted that the session key is disguised in such a way that
only the recipient it is intended for could make any sense out of it. The disguising is done
by encrypting the session key with the shared secrets of Alice and Bob and distributed to
them respectively.
Alternatively, the KDC could also hand over the session key disguised for Bob to Alice
and make her responsible for giving it to Bob. This is depicted in the diagram below.


where Ticketb is the session key issued to Alice to talk to Bob, encrypted with Bob's
shared secret. Every ticket has an expiry time associated with it, which will be validated
by the recipient. Hence it is imperative that all the clocks in a Kerberos environment be
synchronized with each other.
The following picture combines the techniques discussed so far for Kerberos
authentication and demonstrates the Kerberos protocol.
1. The workstation informs KDC that Alice wants to talk to Bob. During this process
the workstation submits Alice's credentials to KDC.
2. KDC invents the session key Kab
3. KDC encrypts a copy of Kab with Alice's shared secret. KDC also creates the
Ticket, which is nothing but Kab with an expiry time, encrypted with Bob's
shared secret.
4. KDC returns encrypted Kab and the Ticket created in step 3 to the workstation.
5. The workstation submits the Ticket to Bob. In order to demonstrate the
knowledge of Kab, the workstation encrypts the current timestamp with Kab and
sends it to Bob, the application server.
6. Bob decrypts the Ticket and gets Kab. Using Kab Bob decrypts the timestamp,
increments it, encrypts it again with Kab and sends it back to the workstation.
Thus the workstation and Bob mutually authenticate to each other.
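
The exchange in steps 1 to 6 can be mimicked with symmetric encryption in a few lines. The sketch below uses Fernet tokens from the third-party cryptography package purely as a stand-in for Kerberos' own encryption; the key names follow the Ka/Kb/Kab notation above, and the ticket contents and lifetime are simplified assumptions.

    # Simplified model of the KDC exchange above, using Fernet as a stand-in cipher.
    # Requires the third-party 'cryptography' package; names and timestamps are illustrative.
    import json, time
    from cryptography.fernet import Fernet

    k_alice, k_bob = Fernet.generate_key(), Fernet.generate_key()   # long-term shared secrets

    # Steps 2-4: the KDC invents Kab, wraps one copy for Alice and one (the Ticket) for Bob.
    k_ab = Fernet.generate_key()
    for_alice = Fernet(k_alice).encrypt(k_ab)
    ticket = Fernet(k_bob).encrypt(json.dumps(
        {"session_key": k_ab.decode(), "expires": time.time() + 8 * 3600}).encode())

    # Step 5: Alice recovers Kab and proves knowledge of it with an encrypted timestamp.
    session = Fernet(Fernet(k_alice).decrypt(for_alice))
    authenticator = session.encrypt(str(time.time()).encode())

    # Step 6: Bob opens the Ticket, recovers Kab, and checks the authenticator.
    ticket_body = json.loads(Fernet(k_bob).decrypt(ticket))
    bob_session = Fernet(ticket_body["session_key"].encode())
    print("timestamp seen by Bob:", bob_session.decrypt(authenticator).decode())
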


In the above scenario, the workstation is required to authenticate to KDC each time it
needs to talk to Bob. Thus Alice's credentials are required to be submitted each time. This
could be avoided by inserting a separate Ticket Granting Service in the conversation
between steps 2 and 3.
In the modified scenario, the KDC's responsibilities are divided between an
Authentication Service (AS) and a Ticket Granting Service (TGS). The AS verifies the
credentials of Alice and issues a Session Key to communicate with the TGS. One copy of
this Session Key will be encrypted with Alice's shared secret and the other with that of
the TGS. The second copy of the Session Key, encrypted with the shared secret of the TGS, is
called the Ticket Granting Ticket (TGT). Each time the workstation wants to
communicate with a server, it presents the TGT to the TGS to get hold of a Ticket to
authenticate itself to the server.
The actual sequence of steps involved in a Kerberos authentication is described in the
picture below.


Thus Kerberos can be summarized as follows:


As an authentication method,
1. Enables the user to enter his password on his machine and get authenticated by
KDC only once in a while, for example once per day.
2. Passwords are never made to travel in the network.


As a mechanism to achieve Single Sign on,


1. The KDC issues a ticket, which is good for a while, for example for a day. This
Ticket could be used to gain access to all Kerberized applications in the
environment without the user having to present his password again.
The following are the advantages of Kerberos
1. Passwords are not exposed to eavesdropping
2. Password is only typed to the local workstation. It never travels over the network
and is never transmitted to a remote server
3. Password guessing is more difficult
4. Facilitates Single Sign-on. More convenient - only one password, entered once.
Users need not remember several passwords.
5. Stolen tickets are hard to reuse. Timestamps included in the tickets make them
difficult to use in replay attacks
6. It is much easier to effectively secure a small set of limited-access machines
(namely, the KDCs)
7. Provides for Centralized user administration.
References:

1. http://web.mit.edu/Kerberos/
2. Presentation: An Introduction to Kerberos, Shumon Huque, University of
Pennsylvania

CHAP (Challenge Handshake Authentication Protocol)


In computing, the Challenge-Handshake Authentication Protocol (CHAP) authenticates a
user to an Internet access provider.
RFC 1994: PPP Challenge Handshake Authentication Protocol (CHAP) defines the
protocol.
CHAP is an authentication scheme used by Point to Point Protocol (PPP) servers to
validate the identity of remote clients. CHAP periodically verifies the identity of the
client by using a three-way handshake. This happens at the time of establishing the initial
link, and may happen again at any time afterward. The verification is based on a shared
secret (such as the client user's password).
1. After the completion of the link establishment phase, the authenticator sends a
"challenge" message to the peer.
2. The peer responds with a value calculated using a one-way hash function, such as
an MD5 checksum hash.


3. The authenticator checks the response against its own calculation of the expected
hash value. If the values match, the authenticator acknowledges the
authentication; otherwise it should terminate the connection.
4. At random intervals the authenticator sends a new challenge to the peer and
repeats steps 1 through 3.
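
The three-way handshake above can be sketched directly with MD5, following the RFC 1994 construction in which the response is the hash of the identifier, the shared secret, and the challenge concatenated together. The identifier value and secret below are invented for the example.

    import hashlib, os

    # Minimal sketch of a CHAP exchange per RFC 1994 (identifier and secret invented).
    SECRET = b"shared-secret"

    def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
        # Response = MD5(identifier || secret || challenge)
        return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

    # 1. The authenticator sends a random challenge together with an identifier.
    identifier, challenge = 7, os.urandom(16)

    # 2. The peer computes the response from the shared secret.
    response = chap_response(identifier, SECRET, challenge)

    # 3. The authenticator recomputes the hash and compares.
    expected = chap_response(identifier, SECRET, challenge)
    print("authenticated" if response == expected else "terminate connection")
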
CHAP provides protection against playback attack by the peer through the use of an
incrementally changing identifier and of a variable challenge-value. CHAP requires that
the client make the secret available in plaintext form.
Microsoft has implemented the Challenge-Handshake Authentication Protocol as MS-CHAP.
Advantages
1. CHAP provides protection against playback attack by the peer through the use of
an incrementally changing identifier and a variable challenge value. The use of
repeated challenges is intended to limit the time of exposure to any single attack.
The authenticator is in control of the frequency and timing of the challenges.
2. This authentication method depends upon a "secret" known only to the
authenticator and that peer. The secret is not sent over the link.
3. Although the authentication is only one-way, by negotiating CHAP in both
directions the same secret set may easily be used for mutual authentication.
4. Since CHAP may be used to authenticate many different systems, name fields
may be used as an index to locate the proper secret in a large table of secrets. This
also makes it possible to support more than one name/secret pair per system, and
to change the secret in use at any time during the session.
Disadvantages
1. CHAP requires that the secret be available in plaintext form. Irreversibly
encrypted password databases commonly available cannot be used.
2. It is not as useful for large installations, since every possible secret is maintained
at both ends of the link.
Implementation Note
To avoid sending the secret over other links in the network, it is recommended that the
challenge and response values be examined at a central server, rather than at each network
access server. Otherwise, the secret SHOULD be sent to such servers in a reversibly
encrypted form.


Certificates
Digital Certificates provide a means of proving your identity in electronic transactions,
much like a driver license or a passport does in face-to-face interactions. With a Digital
Certificate, you can assure friends, business associates, and online services that the
electronic information they receive from you is authentic.
Digital Certificates bind an identity to a pair of electronic keys that can be used to encrypt
and sign digital information. A Digital Certificate makes it possible to verify someone's
claim that they have the right to use a given key, helping to prevent people from using
phony keys to impersonate other users. Used in conjunction with encryption, Digital
Certificates provide a more complete security solution, assuring the identity of all parties
involved in a transaction.
A Digital Certificate is issued by a Certification Authority (CA) and signed with the CA's
private key.
A Digital Certificate typically contains the:
1. Owner's public key
2. Owner's name
3. Expiration date of the public key
4. Name of the issuer (the CA that issued the Digital Certificate)
5. Serial number of the Digital Certificate
6. Digital signature of the issuer

The most widely accepted format for Digital Certificates is defined by the CCITT X.509
international standard; thus certificates can be read or written by any application
complying with X.509. Further refinements are found in the PKCS standards and the
PEM standard.
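
The fields listed above can be read programmatically from an X.509 certificate. The sketch below uses the third-party cryptography package to do so; the PEM file name is a placeholder, not a file that ships with anything.

    # Minimal sketch: read the standard X.509 fields from a PEM certificate.
    # Requires the third-party 'cryptography' package; "some_cert.pem" is a placeholder.
    from cryptography import x509

    with open("some_cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    print("Subject (owner):", cert.subject.rfc4514_string())
    print("Issuer (CA):    ", cert.issuer.rfc4514_string())
    print("Serial number:  ", cert.serial_number)
    print("Not valid after:", cert.not_valid_after)
    print("Signature hash: ", cert.signature_hash_algorithm.name)
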
Digital Certificates can be used for a variety of electronic transactions including e-mail,
electronic commerce, groupware and electronic funds transfers. Netscape's popular
Enterprise Server requires a Digital Certificate for each secure server.
Username / Password
Username
A name or an ID used to gain access to a computer system. Usernames, and often
passwords, are required in multi-user systems. In most such systems, users can choose
their own usernames and passwords.
Example: Card Number in ATM access or Customer ID in online access
Password


A password is a form of secret authentication data that is used to control access to a
resource. The password is kept secret from those not allowed access, and those wishing to
gain access are tested on whether or not they know the password and are granted or
denied access accordingly.
Example: PIN in ATM access

Tokens
Token Authentication
The general concept behind a token-based authentication system is simple. Allow users to
enter their username and password in order to obtain a token which allows them to fetch a
specific resource - without using their username and password. Once their token has been
obtained, the user can offer the token - which offers access to a specific resource for a
time period - to the remote site. Using some form of authentication: a header, GET or
POST request, or a cookie of some kind, the site can then determine what level of access
the request in question should be afforded.
The type of changes this type of authentication requires is obviously dependent on the
current implementation of your site. Example code one might be able to write in Perl or
PHP would not only be language and implementation specific, it would also be
application specific. However, some general principles should be considered in both the
creation of a process to obtain tokens and the process of using them. Simplicity for users,
robustness for interoperability, and protection of user data are all important for your
application, and each can fall by the wayside in attempting to design a system which fits
user expectations.
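
A minimal sketch of such a token service follows: credentials are exchanged once for a random, time-limited token, and later requests present only the token. The user store, token lifetime, and function names are invented for illustration.

    import secrets, time

    # Minimal sketch of token-based authentication (user store and lifetime invented).
    USERS = {"mary": "s3cret"}
    TOKENS = {}                      # token -> (username, expiry timestamp)
    TOKEN_LIFETIME = 3600            # seconds

    def issue_token(username, password):
        if USERS.get(username) != password:
            return None
        token = secrets.token_urlsafe(24)
        TOKENS[token] = (username, time.time() + TOKEN_LIFETIME)
        return token

    def check_token(token):
        """Return the username if the token is known and unexpired, else None."""
        entry = TOKENS.get(token)
        if entry and entry[1] > time.time():
            return entry[0]
        TOKENS.pop(token, None)      # drop unknown or expired tokens
        return None

    t = issue_token("mary", "s3cret")
    print(check_token(t))            # 'mary' -- the password is never sent again
    print(check_token("bogus"))      # None
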

Multi-factor
Multi-Factor Authentication
With the increasing electronification of business, it is easier than ever to obtain another
person's identifying information, and perpetrate identity fraud. To detect fraudulent or
stolen identities and stem the increasing tide of losses due to identity fraud, financial
institutions need to employ systems that will enable them to stay one step ahead of
potential frauds. Implementing a multi-factor authentication process, performed upon the
opening of a new account, can help to accomplish this. Multi-factor authentication
consists of verifying and validating the authenticity of an identity using more than one
validation mechanism.
Generally, this is accomplished by verifying:
1. Something you are, in the form of identifying information, or biometric
identification, such as an iris scan or a fingerprint.
2. Something you have, for example a driver's license, or a security token.

3. Something you know, such as a password or PIN.


This is not a new concept - it has been a cornerstone of cryptography for centuries. Today,
multi-factor authentication is used in numerous applications, from ATM cards that are
secured by Personal Identification Numbers (PINs) to websites that are protected by
digital certificates and passwords. However, this concept has yet to be widely applied
when it comes to opening new accounts.
Many institutions use an automated system to validate and verify applicants' identifying
information. This is a highly effective means of mitigating fraud risk, but as fraudsters
become more sophisticated, the means of detecting and preventing identity fraud need to
keep pace.
By employing multi-factor authentication, financial institutions can lessen vulnerabilities
in the account opening process, making it more difficult for those with fraudulent intent
to access the nation's financial system.

Mutual
Mutual Authentication
Mutual authentication or two-way authentication refers to two parties authenticating each
other suitably. In technology terms, it refers to a client or user authenticating themselves
to a server and that server authenticating itself to the user in such a way that both parties
are assured of the other's identity.
Typically, this is done for a client process and a server process without user interaction.
Mutual SSL provides the same things as SSL, with the addition of authentication and
non-repudiation of the client, using digital signatures. However, due to issues with
complexity, cost, logistics, and effectiveness, most web applications are designed so they
do not require client-side certificates.

Biometrics
Biometrics (ancient Greek: bios = "life", metron = "measure") is the study of methods for
uniquely recognizing humans based upon one or more intrinsic physical or behavioural
traits.
In information technology, biometric authentication refers to technologies that measure
and analyze human physical and behavioural characteristics for authentication purposes.
Examples of physical (or physiological or biometric) characteristics include fingerprints,
eye retinas and irises, facial patterns and hand measurements, while examples of mostly
behavioural characteristics include signature, gait and typing patterns. All behavioral
biometric characteristics have a physiological component, and, to a lesser degree,
physical biometric characteristics have a behavioral element.

In a typical IT biometric system, a person registers with the system when one or more of
his physical and behavioural characteristics are obtained. This information is then
processed by a numerical algorithm, and entered into a database. The algorithm creates a
digital representation of the obtained biometric. If the user is new to the system, he or she
enrolls, which means that the digital template of the biometric is entered into the
database. Each subsequent attempt to use the system, or authenticate, requires the
biometric of the user to be captured again, and processed into a digital template. That
template is then compared to those existing in the database to determine a match. The
process of converting the acquired biometric into a digital template for comparison is
completed each time the user attempts to authenticate to the system.
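
The enrol-and-compare cycle described above can be reduced to a toy example: store a numeric feature vector at enrolment and accept a later capture if it is close enough to the stored template. Real systems use far richer features and matching algorithms; the vectors and threshold here are invented.

    import math

    # Toy sketch of biometric enrolment and matching (feature vectors and threshold invented).
    templates = {}                       # user -> enrolled feature vector

    def enroll(user, features):
        templates[user] = features

    def authenticate(user, features, threshold=0.5):
        """Accept if the new capture is within 'threshold' of the enrolled template."""
        stored = templates.get(user)
        if stored is None:
            return False
        distance = math.dist(stored, features)   # Euclidean distance between feature vectors
        return distance <= threshold

    enroll("mary", [0.12, 0.87, 0.44, 0.90])
    print(authenticate("mary", [0.13, 0.85, 0.47, 0.91]))   # True  -- close to the template
    print(authenticate("mary", [0.60, 0.10, 0.95, 0.20]))   # False -- too far away
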
Biometric systems have the potential to identify individuals with a very high degree of
certainty. Forensic DNA evidence enjoys a particularly high degree of public trust at
present and substantial claims are being made in respect of iris recognition technology,
which has the capacity to discriminate between individuals with identical DNA, such as
monozygotic twins.
4.3 Access Control And Management
Access control is the ability to permit or deny the use of resources by a particular user or system. The following topics are covered in this section:
Computer security
Identification and authentication (I&A)
Authorization
Accountability
Access control Techniques
Discretionary access control (DAC)
Mandatory access control (MAC)
Role-Based Access Control (RBAC)
Computer security
In computer security, access control includes authentication, authorization and audit. It
also includes measures such as physical devices, including biometric scans and metal
locks, hidden paths, digital signatures, encryption, social barriers, and monitoring by
humans and automated systems.


Access control models can be broadly divided into two types: those based on capabilities
and those based on access control lists. In a capability-based model, holding a secure
pointer to an object provides access to the object; access is conveyed to another party by
transmitting such a pointer over a secure channel. In an access-control-list-based model,
your access to an object depends on whether your identity is on a list associated with the
object; access is conveyed by editing the list. (Different ACL systems have a variety of
different conventions regarding who or what is responsible for editing the list and how it
is edited.) Access control systems provide the essential services of identification and
authentication (I&A), authorization, and accountability where identification and
authentication determine who can log on to a system, authorization determines what an
authenticated user can do, and accountability identifies what a user did.
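To make the access-control-list model concrete, here is a toy Python sketch (the object names, subjects, and rights are invented for illustration): each object carries a list of subjects and the rights granted to them, and an access request is allowed only if the requester appears on that list with the requested right.

# Hypothetical ACLs: each object maps subjects to the rights they hold on it.
acl = {
    "payroll.xlsx": {"alice": {"read", "write"}, "bob": {"read"}},
    "designs/":     {"carol": {"read", "write", "execute"}},
}

def is_allowed(subject: str, obj: str, right: str) -> bool:
    # Access is conveyed by editing the list attached to the object, not by passing pointers.
    return right in acl.get(obj, {}).get(subject, set())

print(is_allowed("bob", "payroll.xlsx", "read"))    # True
print(is_allowed("bob", "payroll.xlsx", "write"))   # False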
Identification and authentication (I&A)
Identification and authentication (I&A) is a two-step process that determines who can log
on to a system. Identification is how a user tells a system who he or she is (for example,
by using a username). The identification component of an access control system is
normally a relatively simple mechanism based on either Username or User ID. In the case
of a system or process, identification is usually based on:
Computer name
Media Access Control (MAC) address
Internet Protocol (IP) address
Process ID (PID)
The only requirements for identification are that the identification:
Must uniquely identify the user.
Shouldn't identify that user's position or relative importance in an organization (such as
labels like president or CEO).
Should avoid using common or shared user accounts, such as root, admin, and
sysadmin. Such accounts provide no accountability and are juicy targets for hackers.
Authentication is the process of verifying a user's claimed identity (for example, by
comparing an entered password to the password stored on a system for a given
username).
Authentication is based on at least one of these three factors:
Something you know, such as a password or a personal identification number (PIN). This assumes that only the owner of the account knows the password or PIN needed to access the account.
Something you have, such as a smart card or token. This assumes that only the owner of the account has the necessary smart card or token needed to unlock the account.
Something you are, such as fingerprint, voice, retina, or iris characteristics.
Authorization
Authorization (or establishment) defines a user's rights and permissions on a system.
After a user (or process) is authenticated, authorization determines what that user can do
on the system.
Most modern operating systems define sets of permissions that are variations or
extensions of three basic types of access:
Read (R): The user can read file contents and list directory contents.
Write (W): The user can change the contents of a file or directory with tasks such as add, create, delete, and rename.
Execute (X): If the file is a program, the user can run the program. If set on a directory, the user can enter the directory.
These rights and permissions are implemented differently in systems based on
discretionary access control (DAC) and mandatory access control (MAC).
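As a small, Unix-oriented illustration of the read/write/execute model (the file name and its contents are placeholders), the sketch below sets and then inspects the owner's three permission bits; os.access() additionally asks whether the current process would be granted a given right.

import os
import stat

# Create a sample file so the sketch is self-contained, then grant the owner read/write only.
with open("report.txt", "w") as handle:
    handle.write("quarterly numbers\n")
os.chmod("report.txt", stat.S_IRUSR | stat.S_IWUSR)          # rw- --- ---

mode = os.stat("report.txt").st_mode
print("read:   ", bool(mode & stat.S_IRUSR))
print("write:  ", bool(mode & stat.S_IWUSR))
print("execute:", bool(mode & stat.S_IXUSR))

# Would the current process be allowed to read or execute the file?
print(os.access("report.txt", os.R_OK), os.access("report.txt", os.X_OK))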
Accountability
Accountability uses such system components as audit trails (records) and logs to
associate a user with his actions. Audit trails and logs are important for
Detecting security violations
Re-creating security incidents


If no one is regularly reviewing your logs and they are not maintained in a secure and
consistent manner, they may not be admissible as evidence.
Many systems can generate automated reports based on certain predefined criteria or
thresholds, known as clipping levels. For example, a clipping level may be set to generate
a report for the following:
More than three failed logon attempts in a given period
Any attempt to use a disabled user account
These reports help a system administrator or security administrator more easily identify
possible break-in attempts.
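A minimal sketch of a clipping-level report is shown below; the in-memory event records and the threshold of three failed logons are purely illustrative, whereas a real system would read its audit trail or logs.

from collections import Counter

# Toy audit-trail records: (username, event). A real system would parse its logs instead.
events = [
    ("alice", "LOGON_FAILED"), ("alice", "LOGON_FAILED"),
    ("alice", "LOGON_FAILED"), ("alice", "LOGON_FAILED"),
    ("bob", "LOGON_FAILED"),
]

CLIPPING_LEVEL = 3    # report only users who exceed three failed logon attempts

failures = Counter(user for user, event in events if event == "LOGON_FAILED")
for user, count in failures.items():
    if count > CLIPPING_LEVEL:
        print(f"ALERT: {user} had {count} failed logon attempts")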
Access control Techniques
Access control techniques are generally categorized as either discretionary or mandatory.
Discretionary access control
Discretionary access control (DAC) is an access policy determined by the owner of a file
(or other resource). The owner decides who is allowed access to the file and what
privileges they have.
Two important concepts in DAC are
File and data ownership: Every object in a system must have an owner. The access
policy is determined by the owner of the resource (including files, directories, data,
system resources, and devices). Theoretically, an object without an owner is left
unprotected. Normally, the owner of a resource is the person who created the resource
(such as a file or directory).
Access rights and permissions: These are the controls that an owner can assign to
individual users or groups for specific resources.
Discretionary access controls can be applied through the following techniques:
Access control lists (ACLs) name the specific rights and permissions that are assigned
to a subject for a given object. Access control lists provide a flexible method for applying
discretionary access controls.

Role-based access control assigns group membership based on organizational or functional roles. This strategy greatly simplifies the management of access rights and permissions:
Access rights and permissions for objects are assigned to groups or, in addition, to individuals. Individuals may belong to one or many groups. Individuals can be designated to acquire cumulative permissions (every permission of any group they are in) or to be limited to only those permissions that are common to every group they are in.
Mandatory access control
Mandatory access control (MAC) is an access policy determined by the system, not the
owner. MAC is used in multilevel systems that process highly sensitive data, such as
classified government and military information. A multilevel system is a single computer
system that handles multiple classification levels between subjects and objects.
Sensitivity labels: In a MAC-based system, all subjects and objects must have labels
assigned to them. A subject's sensitivity label specifies its level of trust. An object's
sensitivity label specifies the level of trust required for access. In order to access a given
object, the subject must have a sensitivity level equal to or higher than the requested
object.
Data import and export: Controlling the import of information from other systems and
export to other systems (including printers) is a critical function of MAC-based systems,
which must ensure that sensitivity labels are properly maintained and implemented so
that sensitive information is appropriately protected at all times.
Two methods are commonly used for applying mandatory access control:
Rule-based access controls: This type of control further defines specific conditions for access to a requested object. All MAC-based systems implement a simple form of rule-based access control to determine whether access should be granted or denied by matching:
An object's sensitivity label
A subject's sensitivity label
Lattice-based access controls: These can be used for complex access control decisions
involving multiple objects and/or subjects. A lattice model is a mathematical structure
that defines greatest lower-bound and least upper-bound values for a pair of elements,
such as a subject and an object.


Role-Based Access Control


In computer systems security, Role-Based Access Control (RBAC) is an approach to restricting system access to authorized users. It is a newer alternative to Mandatory Access Control (MAC) and Discretionary Access Control (DAC).
4.4 User Identity Management
Identity Management
The Identity Management platform automates user identity provisioning and
deprovisioning and allows enterprises to manage the end-to-end lifecycle of user
identities across all enterprise resources, both within and beyond the firewall. It provides
an identity management platform that automates user provisioning, identity
administration, and password management, wrapped in a comprehensive workflow
engine.
Automating user identity provisioning can reduce IT administration costs and improve
security. Provisioning also plays an important role in regulatory compliance. Key features
of Identity Management include password management, workflow and policy
management, identity reconciliation, reporting and auditing, and extensibility through
adapters.
Purpose
The Identity System enables creation, removal and management of identity information
relating to individual users, groups and organizations.
Feature
Features of the identity system include
1. Advanced user, group and organization management
2. Restricted searchbase capabilities for display and modification of identity information
3. Granular access control to determine self service and modify rights
4. Internally or externally customizable business process workflows for creating,
deleting, modifying, self-registering, etc.


5. A multi-level delegation model for all management rights that allows workload to be
distributed.

IDM Solutions
Solutions which fall under the category of Identity Management:
Management of Identities
Provisioning/Deprovisioning of accounts
Workflow automation
Delegated administration
Password synchronization
Self Service Password Reset
4.5 Provisioning
In general, provisioning means "providing" or making something available.
User Provisioning
User provisioning refers to the creation, maintenance and deactivation of user objects and
user attributes, as they exist in one or more systems, directories or applications, in
response to automated or interactive business processes. User provisioning software may
include one or more of the following processes: change propagation, self service
workflow, consolidated user administration, delegated user administration, and federated
change control. User objects may represent employees, contractors, vendors, partners,
customers or other recipients of a service. Services may include electronic mail, inclusion
in a published user directory, access to a database, access to a network or mainframe, etc.
User provisioning is a type of identity management software, particularly useful within
organizations, where users may be represented by multiple objects on multiple systems.
4.6 Directory Services/LDAP
LDAP stands for Lightweight Directory Access Protocol.
A directory is a way of organizing a set of information with similar attributes in a logical
and hierarchical manner. The directory might contain entries about people, organizational
units, printers, documents, groups of people or anything.


LDAP has two components: a server and a console. The server contains the data, and the console is used to access the server and perform operations. An LDAP server is like an FTP or mail server running on a host and listening on some port number. By default, an LDAP server runs on port 389.
A client starts an LDAP session by connecting to an LDAP server. Usually, an LDAP directory is accessed using a URL of the form "ldap://host:port/DN?".
Operations in LDAP
Here is the list of basic operations performed on an LDAP directory:
1. Bind
2. Search/Modify/Delete
3. Unbind
In order to bind to an LDAP server, the client needs a bind DN and a password. It is like
accessing the mail service using the username and password.
Once the bind is complete, the client can perform the operations like
search/add/delete/update.
Making LDAP Connection Secure
To make the connection to the LDAP server secure, we need to use SSL. This is denoted in
LDAP URLs by using the URL scheme "ldaps". The standard port for LDAP over SSL is
636.
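The bind, search, and unbind steps above can be sketched with the third-party Python ldap3 library; the host, bind DN, password, and search base below are hypothetical, and the ldaps:// scheme with port 636 gives LDAP over SSL as just described.

from ldap3 import Server, Connection, ALL, SUBTREE     # pip install ldap3

server = Server("ldaps://ldap.example.com:636", get_info=ALL)    # LDAP over SSL
conn = Connection(server, user="cn=reader,dc=example,dc=com", password="secret")

if conn.bind():                                                   # 1. Bind
    conn.search(                                                  # 2. Search
        search_base="dc=example,dc=com",
        search_filter="(objectClass=person)",
        search_scope=SUBTREE,
        attributes=["cn", "mail"],
    )
    for entry in conn.entries:
        print(entry.cn, entry.mail)
    conn.unbind()                                                 # 3. Unbind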
4.7 Federation
As businesses become global and move towards business models with closer relationships
with partners, suppliers and customers, the issue of establishing secure and intimate
trading relationships has become a challenge. Federated identity allows users to benefit
from trust relationships between business partners, and has been reaching critical mass in
several industries, including telecommunications, financial services and manufacturing.
Identity Federation
Identity Federation provides the infrastructure that enables identities and their relevant
entitlements to be propagated across security domains - this applies to domains existing within an organization as well as between organizations. The concept of identity federation includes all the technology, standards and contracts that are necessary for a federated relationship to be established. The following are some terms commonly used in the context of federation and SAML that the reader should be familiar with:
Identity Federation - The standards, technology and trust agreements that enable disparate security domains to port identities and attributes.
Assertion - A statement or statements that are asserted as true by an authority. In the
SAML specification, assertions are defined as statements of authentication, attributes and
authorization.
Identity Provider (IdP) - The site that authenticates the user and then sends an assertion
to the destination site or service provider.
Service Provider (SP) - The site that relies on an assertion to determine the entitlements
of the user and grants or denies access to the requested resource.
Circle of Trust (COT) - A group of service and/or identity providers who have
established trust relationships.
Federation - User account linking between providers in a circle of trust.
Name Identifier - An identifier for the user (e.g., email address, DN, opaque string) that
is used in federation protocol messages.
As the next step in the evolution of access management and single sign-on (SSO),
identity federation is an interoperable solution for enterprises offering services so they
can reliably receive and process identity information for users outside their organization
or security domain. One of the benefits is a better end-user experience where users will
not be asked to log into every application or web site accessed during their session. This
also eliminates the need for the user to remember multiple username and password
combinations - significantly lowering IT expenditures by reducing help desk calls and
tickets. Further, establishing a circle of trust frees the organization from having to
manage their partner and customer user bases as well as mitigates the risks associated
with authentication by placing the liability of user actions on the asserting party.
Web Cross-Domain Single Sign-On
How does federation work? Consider a simple scenario where two business partners wish
to link their applications, so users can access external applications without additional
logins. In this scenario, the identity provider has a customer portal that enables users to
manage their profiles as well as provides corporate and partner services. One of the
services available to these users is the ability to view research reports sold by its partner,
the service provider. Access to these reports is restricted according to service agreements
established between these two partners. When the user logs in at the identity provider and
clicks the link to view reports, the service provider's application is served up. The user, once authenticated with the identity provider, does not require a separate log in and
neither company has to synchronize passwords, IDs, or profiles. Behind the scenes,
federated identity management solutions at both companies transparently manage the
steps required to make this simple federation scenario possible. In the example below,
SAML is used to share identity data between the two environments. Here is how this
process works:
Step 1: The user logs in to the identity provider using an ID and password for authentication. Once the user is authenticated, a session cookie is placed in the browser.
Step 2: The user then clicks on the link to view an application residing on the service provider. The IdP creates a SAML assertion based on the user's browser cookie, digitally signs the assertion, and then redirects to the SP.
Step 3: The SP receives the SAML assertion, extracts the user's identity information, and maps the user to a local user account on the destination site.
Step 4: An authorization check is then performed and, if successful, the SP redirects the user's browser to the protected resource. If the SP successfully received and validated the user, it will place its own cookie in the user's browser so the user can now navigate between applications in both domains without additional logins.
5. Cryptography

Introduction to Cryptography
Cryptography is a technique used to hide the meaning of a message and is derived from
the Greek word kryptos (hidden). Cryptography is a method of storing and transmitting
data in a form that only those it is intended for can read and process. It is a science of
protecting information by encoding it into an unreadable format.
If a message were to fall into the hands of the wrong person, cryptography should ensure
that that message could not be read. Typically the sender and receiver agree upon a
message scrambling protocol beforehand and agree upon methods for encrypting and
decrypting messages.
Cryptography methods began with a person carving messages into wood or stone, which
were then passed to the intended individual who had the necessary means to decipher the
messages. This is a long way from how cryptography is being used today. Cryptography
that used to be carved into materials is now being inserted into streams of binary code
that passes over network wires, Internet communication paths, and airwaves.
Cryptography is used in hardware devices and software to protect data, banking transactions, corporate extranets, e-mail, Web transactions, wireless communication, storing of confidential information, faxes, and phone calls.

History of Cryptography
Cryptography has roots that began around 2000 B.C. in Egypt when hieroglyphics were
used to decorate tombs to tell the story of the life of the deceased. The practice was not as
much to hide the messages themselves, but to make them seem more noble, ceremonial,
and majestic. Encryption methods evolved from being mainly for show into practical
applications used to hide information from others.
A Hebrew cryptographic method required the alphabet to be flipped so that each letter in
the original alphabet is mapped to a different letter in the flipped alphabet. The
encryption method was called atbash. An example of an encryption key used in the atbash
encryption scheme is shown in the following:
ABCDEFGHI JK LMNOPQ R STU VW XYZ
ZYXWVUTSR QP ONMLKJ I HGF ED CBA
For example, the word security is encrypted into hvxfirgb. What does xrhhk come
out to be? This is a substitution cipher, because one character is replaced with another
character. This type of substitution cipher is referred to as a monoalphabetic substitution
because it uses only one alphabet, compared to other ciphers that use multiple alphabets
at a time. This simplistic encryption method worked for its time and for particular
cultures, but eventually more complex mechanisms were required.
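The atbash mapping above is easy to reproduce in a few lines of Python; since the cipher simply reverses the alphabet, applying it twice restores the original text.

import string

forward = string.ascii_lowercase
reverse = forward[::-1]
ATBASH = str.maketrans(forward + forward.upper(), reverse + reverse.upper())

def atbash(text: str) -> str:
    # A -> Z, B -> Y, ..., Z -> A; the cipher is its own inverse.
    return text.translate(ATBASH)

print(atbash("security"))    # hvxfirgb
print(atbash("hvxfirgb"))    # security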
Around 400 B.C., the Spartans used a system of encrypting information by writing a
message on a sheet of papyrus, which was wrapped around a staff. (This would look like
a piece of paper wrapped around a stick or wooden rod.) The message was only readable
if it was around the correct staff, which allowed the letters to properly match up. This is
referred to as the scytale cipher. When the papyrus was removed from the staff, the
writing appeared as just a bunch of random characters. The Greek government had
carriers run these pieces of papyrus to different groups of soldiers. The soldiers would
then wrap the papyrus around a staff of the right diameter and length and all the
seemingly random letters would match up and form an understandable message. These
could be used to instruct the soldiers on strategic moves and provide them with military
directives.
In another time and place in history, Julius Caesar developed a simple method of shifting
letters of the alphabet, similar to the atbash scheme. Today this technique seems too
simplistic to be effective, but in that day not many people could read in the first place, so
it provided a high level of protection. The evolution of cryptography continued as Europe
refined its practices using new methods, tools, and practices throughout the Middle Ages, and by the late 1800s, cryptography was commonly used in the methods of communication between military factions.
During World War II, simplistic encryption devices were used for tactical
communication, which drastically improved with the mechanical and electromechanical
technology that provided the world with telegraphic and radio communication. The rotor
cipher machine, which is a device that substitutes letters using different rotors within the
machine, was a huge breakthrough in military cryptography that provided complexity that
proved difficult to break. This work gave way to the most famous cipher machine in
history to date: Germany's Enigma machine. The Enigma machine had three rotors, a
plugboard, and a reflecting rotor. The originator of the message configured the Enigma
machine to its initial settings before starting the encryption process. The operator would
type in the first letter of the message and the machine would substitute the letter with a
different letter and present it to the operator. This encryption was done by moving the
rotors a predefined number of times, which would substitute the original letter with a
different letter. So if the operator typed in a T as the first character, the Enigma machine
might present an M as the substitution value. The operator would write down the letter M
on his sheet. The operator would then advance the rotors and enter the next letter. Each
time a new letter was to be encrypted, the operator advanced the rotors to a new setting.
This process was done until the whole message was encrypted. Then the encrypted text
was transmitted over the airwaves most likely to a U-boat. The chosen substitution for
each letter was dependent upon the rotor setting, so the crucial and secret part of this
process (the key) was how the operators advanced the rotors when encrypting and
decrypting a message. The operators at each end needed to know this sequence of
increments to advance each rotor in order to enable the German military units to properly
communicate. Although the mechanisms of the Enigma were complicated for the time, a
team of Polish cryptographers broke its code and gave Britain insight into Germany's
attack plans and military movement. It is said that breaking this encryption mechanism
shortened World War II by two years.
Cryptography has a deep, rich history. Mary, the Queen of Scots, lost her life in the
sixteenth century when an encrypted message she sent was intercepted. During the
Revolutionary War, Benedict Arnold used a codebook cipher to exchange information on
troop movement and strategic military advancements. The military has always had a big
part in using cryptography by encoding information and attempting to decrypt their
enemy's encrypted information.
William Frederick Friedman published The Index of Coincidence and Its Applications in
Cryptography. He is referred to as the Father of Modern Cryptography and broke many
messages that were intercepted during WWII.
As computers came to be, the possibilities for encryption methods and devices advanced,
and cryptography efforts expanded exponentially. This era brought unprecedented
opportunity for cryptographic designers and encryption techniques. The most well-known
and successful project was Lucifer, which was developed at IBM. Lucifer introduced
complex mathematical equations and functions that were later adopted and modified by the U.S. National Security Agency (NSA) to come up with the U.S. Data Encryption
Standard (DES). DES has been adopted as a federal government standard, is used
worldwide for financial transactions, and is embedded into numerous commercial
applications. DES has had a rich history in computer-oriented encryption and has been in
use for over 20 years.
Different types of cryptography have been used throughout civilization, but today it is
deeply rooted in every part of our communication and computing world. Automated
information systems and cryptography play a huge role in the effectiveness of militaries,
functionality of governments, and economics of private businesses. As our dependency
upon technology increases, so does our dependency upon cryptography, because secrets
will always need to be kept.

Crypto System and its Components


Cryptosystem
Encryption is a method of transforming original data, called plaintext or cleartext, into a
form that appears to be random and unreadable, which is called ciphertext. Plaintext is
either in a form that can be understood by a person (a document) or by a computer
(executable code). Once it is transformed into ciphertext neither human nor machine can
properly process it until it is decrypted. This enables the transmission of confidential
information over insecure channels without unauthorized disclosure.

A system that provides encryption and decryption is referred to as a Cryptosystem and
can be created through hardware components or program code in an application. The
cryptosystem uses an encryption algorithm, which determines how simple or complex the
process will be. Most algorithms are complex mathematical formulas that are applied in a
specific sequence to the plaintext. Most encryption methods use a secret value called a
key (usually a long string of bits), which works with the algorithm to encrypt and decrypt
the text.
Ciphers, Algorithms, and Keys
Ciphers are any form of cryptographic substitution applied to message text. Algorithm,
the set of mathematical rules, dictates how enciphering and deciphering take place. The
secret piece of using a well-known encryption algorithm is the Key. The key can be any
value that is made up of a large sequence of random bits. An algorithm contains a
keyspace, which is a range of values that can be used to construct a key. The key is made
up of random values within the keyspace range. The larger the keyspace, the more available values can be used to represent different keys, and the more random the keys
are, the harder it is for intruders to figure them out. A large keyspace allows for more
possible keys. The encryption algorithm should use the entire keyspace and choose the
values to make up the keys as random as possible. If a smaller keyspace were used, there
would be fewer values to choose from when forming a key; this would increase an
attacker's chance of figuring out the key value and deciphering the protected information.
Types of Ciphers
There are two basic types of encryption ciphers: substitution and transposition
(permutation). The substitution cipher replaces bits, characters, or blocks of characters
with different bits, characters, or blocks. The transposition cipher does not replace the
original text with different text, but moves the original text around. It rearranges the bits,
characters, or blocks of characters to hide the original meaning.
Substitution Cipher
Substitution is a cryptographic technique where each letter of the plaintext message is
replaced by a different letter. Each letter retains its original position in the message text,
but the identity of the letter is changed. This type of technique was documented during
Julius Caesar's Gallic Wars. A substitution cipher uses a key to know how the substitution
should be carried out. In the Caesar Cipher, each letter is replaced with the letter three
places beyond it in the alphabet. This is referred to as a shift alphabet. Many different
types of substitutions take place usually with more than one alphabet.
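A minimal Python sketch of the Caesar shift alphabet follows; the shift of three matches the classical cipher described above, and shifting back by the same amount recovers the plaintext.

import string

def caesar(text: str, shift: int = 3) -> str:
    # Replace each letter with the one 'shift' places beyond it in the alphabet.
    upper = string.ascii_uppercase
    rotated = upper[shift % 26:] + upper[:shift % 26]
    table = str.maketrans(upper + upper.lower(), rotated + rotated.lower())
    return text.translate(table)

ciphertext = caesar("ATTACK AT DAWN")          # -> DWWDFN DW GDZQ
plaintext = caesar(ciphertext, shift=-3)       # shifting back restores the message
print(ciphertext, "/", plaintext)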
Transposition Cipher
Transposition is a cryptographic technique whereby the letters in a message are
rearranged to provide secrecy. In a transposition cipher, permutation is used, meaning that
letters are scrambled. The key determines the positions that the characters are moved to.
This is a simplistic example of a transposition cipher and only shows one way of
performing transposition. When introduced with complex mathematical functions,
transpositions can become quite sophisticated and difficult to break. Most ciphers used
today use long sequences of complicated substitutions and permutations together on
messages. The key value is inputted into the algorithm and the result is the sequence of
operations (substitutions and permutations) that are performed on the plaintext. Simple
substitution and transposition ciphers are vulnerable to attacks that perform frequency
analysis. In every language, there are words and patterns that are used more often than
others. For instance, in the English language, the words "the," "and," "that," and "is" are very frequent patterns of letters used in messages and conversation. The beginning of messages usually starts with "Hello" or "Dear" and ends with "Sincerely" or "Goodbye".
These patterns help attackers figure out the transformation between plaintext to
ciphertext, which enables them to figure out the key that was used to perform the
transformation. It is important for cryptosystems to not reveal these patterns. More
complex algorithms usually use more than one alphabet for substitution and permutation,
which reduces the vulnerability to frequency analysis. The more complicated the algorithm, the more the resulting text (ciphertext) differs from the plaintext; thus, the
matching of these types of patterns becomes more difficult.
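For illustration, here is one simple way to implement a columnar transposition in Python: the message is written in rows under a keyword and the columns are then read out in the alphabetical order of the keyword's letters. The keyword and message are invented, and real transposition ciphers can be considerably more elaborate.

def columnar_encrypt(plaintext: str, key: str) -> str:
    # Write the message in rows as wide as the key, then read the columns in key order.
    plaintext = plaintext.replace(" ", "").upper()
    cols = len(key)
    rows = [plaintext[i:i + cols] for i in range(0, len(plaintext), cols)]
    order = sorted(range(cols), key=lambda i: key[i])
    return "".join("".join(row[i] for row in rows if i < len(row)) for i in order)

print(columnar_encrypt("ATTACK AT DAWN", "ZEBRA"))    # -> CATTTANADAKW (letters rearranged, none replaced)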
Running and Concealment Ciphers
More of the spy-novel-type ciphers would be the running key cipher and the concealment
cipher. The running key cipher could use a key that does not require an electronic
algorithm and bit alterations, but clever steps in the physical world around you. For
instance, a key in this type of cipher could be a book page, line number, and word count.
If I get a message from my super-secret spy buddy and the message reads "14967.29937.91158", this could mean for me to look at the first book in our predetermined series of books, the 49th page, 6th line down the page, and the 7th word in that line. So I write down this word, which is "cat". The second set of numbers starts with 2, so I go to the 2nd book, 99th page, 3rd line down, and write down the 7th word on that line, which is "is". The last word I get from the 9th book in our series, the 11th page, 5th row, and 8th word in that row, which is "dead". So now I have come up with my important secret message, which is "cat is dead". This means nothing to me and I need to look for a new spy buddy. Running key ciphers can be used in different and more complex ways. Another type of spy novel cipher is the concealment cipher. If my other super-secret spy buddy and I decide our key value is every third word, then when I get a message from him, I will pick out every third word and write it down. So if he sends me a message that reads, "The saying, The time is right is not cow language, so is now a dead subject," then because my key is every third word, I come up with "The right cow is dead".
This again means nothing to me and I am now turning in my decoder ring. No matter
which type of cipher is used, the roles of the algorithm and key are the same, even if they
are not mathematical equations. In the running key cipher, the algorithm states that
encryption and decryption will take place by choosing characters out of a predefined set
of books. The key indicates the book, page, line, and word within that line. In substitution
cipher, the algorithm dictates that substitution will take place using a predefined alphabet
or sequence of characters, and the key indicates that each character will be replaced with
the third character that follows it in that sequence of characters. In actual mathematical
structures, the algorithm is a set of mathematical functions that will be performed on the
message and the key can indicate in which order these functions take place. So even if an
attacker knows the algorithm, say the predefined set of books, if he does not know the
key, the message is still useless to him.
5.1 Algorithms

Symmetric Key Cryptography


Symmetric cryptography, otherwise known as secret key cryptography, has been in use
for thousands of years in forms ranging from simple substitution ciphers to more complex
constructions. Symmetric systems are generally very fast, but have the drawback that the key used to encrypt must be shared with whoever needs to decrypt the message.


Symmetric, or secret key, cryptography has been in use for thousands of years and includes any form where the same key is used both to encrypt and to decrypt the text involved. One of the simplest forms is sometimes known as the Caesar cipher (reputedly used by Julius Caesar to conceal messages), in which the process is simply one of shifting the alphabet so many places in one direction or another.
A variation on this simple scheme involves using an arbitrarily ordered alphabet of the
same length as the one used for the plain text message. In this case the key might be a
long sequence of numbers such as 5, 19, 1, 2, 11 ... indicating that A would map to E, B
to S, C to A, D to B, E to K and so on -- or it might be one of a number of more or less
ingenious schemes involving letters taken from, say, sentences of particular novels.
Such systems are ludicrously weak, of course, and modern systems use sophisticated
algorithms based on mathematical problems that are difficult to solve and so tend to be
very strong.
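As a practical, modern counterpart to these historical schemes, the sketch below uses Fernet from the third-party Python cryptography package (an authenticated symmetric scheme built on AES and HMAC); the key is the shared secret both parties must hold, and the message is a placeholder.

from cryptography.fernet import Fernet      # pip install cryptography

key = Fernet.generate_key()                 # the secret that must be shared and protected
cipher = Fernet(key)

token = cipher.encrypt(b"Transfer 100 EUR to account 12345")
print(token)                                # unreadable ciphertext
print(cipher.decrypt(token))                # only a holder of the same key recovers the text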


Unlike the situation in asymmetric cryptography where there is a public element to the
process and where the private key is almost never shared, symmetric cryptography
normally requires the key to be shared and simultaneously kept secret within a restricted
group. It's simply not possible for a person who views the encrypted data with a
symmetric cipher to be able to do so without having access to the key used to encrypt it in
the first place. If such a secret key falls into the wrong hands, then the security of the data
encrypted using that key is immediately and completely compromised. Hence, what all
systems in this group of secret key methods share is the problem of key management,
something discussed in more detail in the feature on practical implications (to follow
shortly in the series).
Reference is often made to keys of particular bit lengths, such as 56-bit or 128-bit. These
lengths are those for symmetric key ciphers, while key lengths for at least the private
element of asymmetric ones are considerably longer. Further, there is no correlation
between the key lengths in the two groups except incidentally through the perceived level
of security which a given key length might offer using a given system. However, Phil
Zimmermann, originator of the extremely efficient and important software package
known as Pretty Good Privacy (PGP), suggests that an 80-bit symmetric key might
approximately equate in security terms at the present moment to a 1024-bit asymmetric
key; to gain the security offered by a 128-bit symmetric key, one might need to use a
3000-bit asymmetric key. Others will certainly take issue with some of those comparisons
as well as, no doubt, with the attempt even to make them.
Within any particular group, however, the length of the key used is generally a significant
element in determining security. Further, the effect of key length is not linear: the number of possible keys doubles with each additional bit. Two to the power two is four, to the power three is eight, to the power four
sixteen, and so on. Giga Group offers a homespun analogy suggesting that if a teaspoon
were sufficient to hold all possible 40-bit key combinations, it would take a swimming
pool to hold all 56-bit key combinations, while the volume to hold all possible 128-bit
key combinations would be roughly equivalent to that of the earth. A 128-bit value,
rendered in decimal, is approximately 340 followed by 36 zeros.


Symmetric key methods are considerably faster than asymmetric methods and so are the
preferred mechanism for encrypting large chunks of text. A cipher such as DES (qv) will
be at least 100 times faster than the asymmetric cipher RSA (discussed in the feature on
asymmetric systems) in software and might be up to 10,000 times faster when
implemented on specialist hardware. Secret key ciphers are most suitable for protecting
data in a single-user or small group environment, typically through the use of passwords
or passphrases. In practice, as mentioned elsewhere, the most satisfactory methods for
dispersed or large-scale practical use tend to combine both symmetric and asymmetric
systems.
Advantages of Using Symmetric Encryption
1. the encryption process is simple
2. each trading partner can use the same publicly known encryption algorithm - no
need to develop and exchange secret algorithms
3. security is dependent on the length of the key
Disadvantages of Using Symmetric Encryption
1. a shared secret key must be agreed upon by both parties
2. if a user has n trading partners, then n secret keys must be maintained, one for
each trading partner
3. authenticity of origin or receipt cannot be proved because the secret key is shared
4. management of the symmetric keys becomes problematic
Types of symmetric ciphers
Symmetric ciphers are now usually implemented using block ciphers or stream ciphers.
This feature also looks at what are known as Message Authentication Codes (MACs), a
checksum mechanism that uses a secret key. MACs are quite different from message
digests, which are used in digital signatures.
Block ciphers
Block ciphers convert a fixed-length block of plain text into cipher text of the same
length, which is under the control of the secret key. Decryption is effected using the
reverse transformation and the same key. For many current block ciphers the block size is
64 bits, but this is likely to increase.
Plain text messages are typically much longer than the particular block size, so different techniques, or modes of operation, are used to process the full message. Examples of such modes are electronic
codebook (ECB), cipher block chaining (CBC) or cipher feedback (CFB). ECB simply
encrypts each block of plain text, one after another, using the same key; in CBC mode,
each plain text block is XORed with the previous cipher text block before being
encrypted, thus adding a level of complexity that can make certain attacks harder to
mount. Output Feedback mode (OFB) resembles CFB mode, although the quantity that's
XORed is generated independently. CBC is widely used, for example in DES (qv)
implementations, and these various modes are discussed in depth in appropriate books on technical aspects of cryptography. Note that a common vulnerability of roll-your-own cryptosystems is to use some published algorithm in a simple form rather than in a particular mode that gives additional protection.
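To illustrate a block cipher running in a particular mode, here is a minimal sketch of AES in CBC mode with PKCS7 padding, using the third-party Python cryptography package; the key, initialization vector, and message are generated or chosen purely for demonstration.

import os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)      # 256-bit AES key
iv = os.urandom(16)       # fresh initialization vector for this message

# Pad the plaintext to a whole number of 128-bit blocks, then chain the blocks in CBC mode.
padder = padding.PKCS7(128).padder()
padded = padder.update(b"Attack at dawn") + padder.finalize()
encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
ciphertext = encryptor.update(padded) + encryptor.finalize()

decryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
unpadder = padding.PKCS7(128).unpadder()
recovered = unpadder.update(decryptor.update(ciphertext) + decryptor.finalize()) + unpadder.finalize()
print(recovered)           # b'Attack at dawn'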
Iterated block ciphers are those where the process of encryption has several rounds, thus
improving security. In each round, an appropriate transformation may be applied using a
subkey derived from the original secret key that uses a special function. Inevitably, this
additional computing requirement has an impact on the speed at which encryption can be
managed, therefore there is a balance between security needs and speed of execution.
Nothing is free and in cryptography; as elsewhere, part of the skill in applying
appropriate methods is derived from understanding the tradeoffs that need to be made and
how these relate to the balance of requirements.
Block ciphers include DES, IDEA, SAFER, Blowfish, and Skipjack -- this last being the
algorithm used in the US National Security Agency (NSA) Clipper chip.

Stream ciphers
Stream ciphers can be extremely fast compared with block ciphers although some block
ciphers working in certain modes (such as DES in CFB or OFB) effectively operate as
stream ciphers. Stream ciphers operate on small groups of bits, typically applying bitwise
XOR operations to them using as a key a sequence of bits, known as a keystream. Some
stream ciphers are based on what is termed a Linear Feedback Shift Register (LFSR), a
mechanism for generating a sequence of binary bits.
Stream ciphers are developed out of a specialist cipher, the Vernam cipher, also known as
the one-time pad. Examples of stream ciphers include RC4 and the Software Optimized
Encryption Algorithm (SEAL), as well as the special case of the Vernam cipher or one-time pad.

Message authentication codes


A message authentication code (MAC) is not a cipher but a particular form of checksum,
typically 32 bits, generated using a secret key in combination with a particular
authentication scheme and appended to a message. In contrast to message digests,
generated using a one-way hash function, and the closely-connected digital signature,
generated and validated using asymmetric key pairs, the intended recipient requires
access to the secret key in order to validate the code.
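One widely used MAC construction is HMAC, available in Python's standard hmac module; the sketch below (with an invented key and message) shows the sender computing a tag and the receiver recomputing and comparing it with the same shared secret. Note that HMAC-SHA256 produces a 256-bit tag, longer than the short checksums mentioned above.

import hashlib
import hmac

secret_key = b"shared-secret"                         # known only to sender and receiver
message = b"amount=100&to=account-12345"

tag = hmac.new(secret_key, message, hashlib.sha256).hexdigest()   # sent along with the message

# Receiver side: recompute the tag with the same key and compare in constant time.
expected = hmac.new(secret_key, message, hashlib.sha256).hexdigest()
print(tag, hmac.compare_digest(tag, expected))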
Examples of symmetric ciphers
DES


Data Encryption Algorithm (DEA), of which the Data Encryption Standard (DES) is the
formal description, derives from work done by IBM and adopted officially by the US
government in 1977. It is probably the most widely used secret key system, particularly
in securing financial data, and was originally developed to be embedded in hardware.
Automated Teller Machines (ATMs) typically use DES.
DES uses a 56-bit key with an additional eight parity bits to bring the block size up to 64
bits. It's an iterated block cipher using what's known as Feistel techniques where the text
block being encrypted is split into two halves. The round function is applied to one half
using a subkey and that output is then XORed with the other half; the two halves are then
swapped and the process continues except that the last round is not swapped. DES uses
16 rounds.
The main form of attack on DES is what's known as brute force or exhaustive key search,
a repeated trying of keys until one fits. Given that DES uses a 56-bit key, the number of possible keys is 2^56. With the growth in power of computer systems, this makes DES far less secure than it was when first implemented, although for practical purposes of a non-critical nature it can still be considered adequate. However, DES is now certified only for legacy systems, and a new encryption standard, the Advanced Encryption Standard (AES), has been selected.
A common variant on DES is triple-DES, a mechanism that encrypts the material three
times using a key of 168 bits (three 56-bit sub-keys); this generally (but not always) provides considerably more security. If the three 56-bit sub-keys are identical, then triple-DES is backwards compatible with DES.
DES has four distinct modes of operation that are used in different situations for different
types of results.
Electronic Code Book (ECB) Mode This mode is the native encryption method for DES
and operates like a code book. A 64-bit data block is entered into the algorithm with a key
and a block of ciphertext is produced. For a given block of plaintext and a given key, the
same block of ciphertext is always produced. Not all messages end up in neat and tidy
64-bit blocks, so ECB incorporates padding to address this problem. This mode is usually
used for small amounts of data like encrypting and protecting encryption keys.
This mode is used for challenge-response operations and some key management tasks. It
is also used to encrypt personal identification numbers (PINs) in ATM machines for
financial institutions. It is not used to encrypt large amounts of data because patterns
would eventually show themselves.
Cipher Block Chaining (CBC) Mode In ECB mode, a block of plaintext and a key will
always give the same ciphertext. This means that if the word "balloon" was encrypted and the resulting ciphertext was "hwicssn", each time it was encrypted using the same key the same ciphertext would always be given. This can show evidence of a pattern, which, if an evildoer put some effort into revealing it, could get him a step closer to compromising the encryption process. Cipher Block Chaining (CBC) does not reveal a pattern, because each block of text, the key, and a value based on the previous block are processed by the algorithm and applied to the next block of text.
This gives a more random resulting ciphertext. A value is extracted and used from the
previous block of text. This provides dependence between the blocks and in a sense they
are chained together. This is where the title of Cipher Block Chaining (CBC) comes from,
and it is this chaining effect that hides any repeated patterns.
The results of one block are fed into the next block, meaning that each block is used to
modify the following block. This chaining effect means that a particular ciphertext block
is dependent upon all blocks before it, not just the previous block.
Cipher Feedback (CFB) Mode In this mode, the previously generated ciphertext from
the last encrypted block of data is inputted into the algorithm to generate random values.
These random values are processed with the current block of plaintext to create
ciphertext. This is another way of chaining blocks of text together, but instead of using a
value from the last data block, CFB mode uses the previous data block in the ciphertext
and runs it through a function and combines it with the next block in line. This mode is
used when encrypting individual characters is required.
Output Feedback (OFB) Mode This mode is very similar to Cipher Feedback (CFB)
mode, but if DES is working in Output Feedback (OFB) mode, it is functioning like a
stream cipher by generating a stream of random binary bits to be combined with the
plaintext to create ciphertext. The keystream output of the algorithm (rather than the ciphertext) is fed back to form a portion of the next input, which encrypts the next stream of bits.
As previously stated, block cipher works on blocks of data and stream ciphers work on a
stream of data. Stream ciphers use a keystream method of applying randomization and
encryption to the text, whereas block ciphers use an S-box-type method. In OFB mode,
the DES block cipher crosses the line between block cipher and stream cipher and uses a
keystream for encryption and decryption purposes.
IDEA
The International Data Encryption Algorithm (IDEA) was developed at ETH in Zurich by
two researchers, Xuejia Lai and James L. Massey, with the patent rights held by a Swiss
company, Ascom Systec. IDEA is implemented as an iterative block cipher and uses 128-bit keys and eight rounds. This gives much more security than DES does, but when
choosing keys for IDEA it's important to exclude what are known as "weak keys."
Whereas DES has only four weak keys and 12 semi-weak keys, the number of weak keys in IDEA is considerable at 2^51. However, given that the total number of keys is substantially greater at 2^128, the chance of choosing a weak key at random is only about 1 in 2^77.
IDEA is widely available throughout the world with royalty charges, typically of around $6.00 a copy (these charges apply in some areas but not in others). IDEA is considered extremely secure. With a 128-bit key, the number of tests required in a brute force attack needs to be increased significantly compared with DES, even allowing for weak keys.
Further, it's shown itself particularly resistant to specialist forms of analytical attack.
CAST
CAST is named for its designers, Carlisle Adams and Stafford Tavares of Nortel. It's a
64-bit Feistel cipher using 16 rounds and allowing key sizes up to 128 bits. A variant,
CAST-256, uses a 128-bit block size and allows the use of keys of up to 256 bits.
Although CAST is fairly new, it appears to be extremely secure against attacks, both
brute force and analytical. Although reasonably fast, its main benefit is security rather
than speed. It is used in recent versions of PGP as well as in products from IBM,
Microsoft, and elsewhere.
Entrust Technologies holds a patent on CAST but says that it can be used without royalty
payments in both commercial and non-commercial applications.
The one-time pad
The one-time pad, or Vernam cipher, has the merit of being considered completely secure
and so has great value in certain specialized situations, typically in war time. It uses a
randomly-generated key exactly as long as the message. This is applied to the plain text,
typically using bitwise XOR, to produce the encrypted text. Applying the same key and
appropriate algorithm easily decrypts the message:
Simple illustration of one-time pad encryption/decryption
00101100010....11011100101011 Original plain text message
01110111010....10001011101011 Randomly generated key equal to message in length
01011011000....01010111000000 Encrypted message
01110111010....10001011101011 Key re-used to decrypt
00101100010....11011100101011 Original message restored

Although the one-time pad is completely and absolutely secure, it is often not very
practical, since the key of the same length as the message needs to be transmitted in some
secure way to the receiver to allow decryption. Further, the key is used only once and is
then discarded, and although this clearly benefits security, it adds to the key management
problems. One area where the one-time pad might currently be used is in MACs.
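The binary illustration above can be reproduced directly in Python; os.urandom supplies a random key exactly as long as the message (a placeholder string here), XOR produces the ciphertext, and XORing with the same key once more restores the plaintext.

import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
key = os.urandom(len(message))            # random, same length as the message, used only once

ciphertext = xor_bytes(message, key)
recovered = xor_bytes(ciphertext, key)    # applying the same key again restores the message
print(ciphertext.hex(), recovered)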


AES
The Advanced Encryption Standard (AES) is intended to replace DES as a new, secure
standard, given that DES has reached the end of its useful life. In 1997, a competition
was announced by the US National Institute of Standards and Technology (NIST) and the
15 original entries were reduced to a short list of five. The eventual winner was a product
submitted by Joan Daemen and Vincent Rijmen of Belgium, named Rijndael, which is
currently undergoing extensive trials and evaluation.
Rijndael is technically complex and somewhat unconventional in its construction but
appears to be extremely secure and versatile in that it is fast in execution, well-suited to
modern requirements (such as in smart cards), and capable of being used with a range of
key sizes.

Asymmetric Key Cryptography


Asymmetric Cryptography
In symmetric key cryptography, a single secret key is used between entities, whereas in
public key systems, each entity has different keys, or asymmetric keys. The two different
asymmetric keys are mathematically related. If a message is encrypted by one key, the
other key is required to decrypt the message.
In a public key system, the pair of keys is made up of one public key and one private key.
The public key can be known to everyone, and the private key must only be known to the
owner. Many times, public keys are listed in directories and databases of e-mail addresses
so they are available to anyone who wants to use these keys to encrypt or decrypt data
when communicating with a particular person.


The public and private keys are mathematically related, but cannot be derived from each other. This means that if an evildoer gets a copy of Bob's public key, it does not mean he can now use some mathematical magic and find out Bob's private key.
If Bob encrypts a message with his private key, the receiver must have a copy of Bob's public key to decrypt it. The receiver can decrypt Bob's message and decide to reply back to Bob in an encrypted form. All she needs to do is encrypt her reply with Bob's public key, and then Bob can decrypt the message with his private key. It is not possible to encrypt and decrypt using the exact same key when using an asymmetric key encryption technology.
Bob can encrypt a message with his private key and the receiver can then decrypt it with Bob's public key. By decrypting the message with Bob's public key, the receiver can be sure that the message really came from Bob. A message can only be decrypted with a public key if the message was encrypted with the corresponding private key. This provides authentication, because Bob is the only one who is supposed to have his private key. When the receiver wants to make sure Bob is the only one who can read her reply, she will encrypt the response with his public key. Only Bob will be able to decrypt the message because he is the only one who has the necessary private key. Now the receiver can also encrypt her response with her private key instead of using Bob's public key. Why would she do that? She wants Bob to know that the message came from her and no one else. If she encrypted the response with Bob's public key, it does not provide authenticity because anyone can get hold of Bob's public key. If she uses her private key to encrypt the message, then Bob can be sure that the message came from her and no one else. Symmetric keys do not provide authenticity because the same key is used on both ends. Using one of the secret keys does not ensure that the message originated from a specific entity.

If confidentiality is the most important security service to a sender, she would encrypt the file with the receiver's public key. This is called a secure message format because it can only be decrypted by the person who has the corresponding private key.
If authentication is the most important security service to the sender, then she would encrypt the message with her private key. This provides assurance to the receiver that the only person who could have encrypted the message is the individual who has possession of that private key.
If the sender encrypted the message with the receiver's public key, authentication is not provided because this public key is available to anyone.
Encrypting a message with the sender's private key is called an open message format because anyone with a copy of the corresponding public key can decrypt the message; thus, confidentiality is not ensured.
For a message to be in a secure and signed format, the sender would encrypt the message with her private key and then encrypt it again with the receiver's public key. The receiver would then need to decrypt the message with his own private key and then decrypt it again with the sender's public key. This provides confidentiality and authentication for that delivered message.
Each key type can be used to encrypt and decrypt, so do not get confused and think the
public key is only for encryption and the private key is only for decryption. They both
have the capability to encrypt and decrypt data. However, if data is encrypted with a private key, it cannot be decrypted with that same private key; it must be decrypted with the corresponding public key. Likewise, if data is encrypted with a public key, it must be decrypted with the corresponding private key.
An asymmetric cryptosystem works much slower than symmetric systems, but can
provide confidentiality, authentication, and nonrepudiation depending on its configuration
and use. Asymmetric systems also provide for easier and more manageable key
distribution than symmetric systems and do not have the scalability issues of symmetric
systems.
Examples of Asymmetric Encryption Algorithms
RSA
RSA, named after its inventors Ron Rivest, Adi Shamir, and Leonard Adleman, is a
public key algorithm that is the most understood, easiest to implement, and most popular
when it comes to asymmetric algorithms. RSA is a worldwide de facto standard and can
be used for digital signatures and encryption. It was developed in 1978 at MIT and
provides authentication as well as encryption. The security of this algorithm comes from
the difficulty of factoring large numbers. The public and private keys are functions of a pair of large prime numbers, and recovering the plaintext from the ciphertext and the public key alone is believed to be as hard as factoring the product of those two primes. (A prime number is a positive whole number with no proper divisors, meaning the only numbers that can divide a prime number are one and the number itself.)
One advantage of using RSA is that it can be used for encryption and digital signatures.
Using its one-way function, RSA provides encryption and signature verification and the
inverse direction performs decryption and signature generation. RSA is used in many
Web browsers with the Secure Sockets Layer (SSL) protocol. PGP and government
systems that use public key cryptosystems (encryption systems that use asymmetric
algorithms) also use RSA.
El Gamal
El Gamal is a public key algorithm that can be used for digital signatures and key
exchange. It is not based on the difficulty of factoring large numbers, but is based on
calculating discrete logarithms in a finite field.
Elliptic Curve Cryptosystems (ECCs)
Elliptic curves are rich mathematical structures that have shown usefulness in many
different types of applications. An Elliptic Curve Cryptosystem (ECC) provides much of
the same functionality that RSA provides: digital signatures, secure key distribution, and
encryption. One differentiating factor is ECC's efficiency. Some devices, such as newer
wireless devices and cellular telephones, have limited processing capacity, storage, power
supply, and bandwidth. With these types of devices, efficiency of resource use is
very important. ECC provides encryption functionality requiring a smaller percentage of
the resources required by RSA and other algorithms, so it is used in these types of
devices.

In most cases, the longer the key length, the more protection that is provided, but ECC
can provide the same level of protection with a key size that is smaller than what RSA
requires. Because longer keys require more resources to perform mathematical tasks, the
smaller keys used in ECC require fewer resources of the device.
ECC cryptosystems use the properties of elliptic curves in their public key systems. The
elliptic curves provide ways of constructing groups of elements and specific rules of how
the elements within these groups combine. The properties between the groups are used to
build cryptographic algorithms.
Advantages of Asymmetric Cryptography
With the asymmetric (also known as public key) approach, only the private key must be
kept secret, and that secret needs to be kept only by one party. This is a big improvement
in many situations, especially if the parties have no previous contact with one another.
However, for this to work, the authenticity of the corresponding public key must typically
be guaranteed somehow by a trusted third party, such as a CA. Because the private key
needs to be kept only by one party, it never needs to be transmitted over any potentially
compromised networks. Therefore, in many cases an asymmetric key pair may remain
unchanged over many sessions or perhaps even over several years. Another benefit of
public key schemes is that they generally can be used to implement digital signature
schemes that include nonrepudiation. Finally, because one key pair is associated with one
party, even on a large network, the total number of required keys is much smaller than in
the symmetric case.

Comparison of Symmetric Vs Asymmetric


Comparison of Methods
Although asymmetric encryption provides far more functionality, there are still many
applications in which symmetric encryption is the best solution, and does the job as
securely and more efficiently. Due to its nature, symmetric technology is far less
expensive to implement.

Applications of Cryptography
Public Key Cryptography
Public key cryptography uses two keys (public and private) generated by an asymmetric
algorithm for protecting encryption keys and key distribution, and a secret key is
generated by a symmetric algorithm and used for bulk encryption. It is a hybrid use of
two different algorithms: asymmetric and symmetric. Each algorithm has its pros and
cons, so using them together can bring together the best of both worlds.
Symmetric cryptography provides limited security because two users use the same key,
and although asymmetric cryptography enables the two users to use different keys, it is
too slow when compared to symmetric methods. Both of them can be used together to
accomplish a high level of security in an acceptable amount of time. In the hybrid
approach, the two different approaches are used in a complementary manner, with each
performing a different function. A symmetric algorithm creates keys that are used for
encrypting bulk data and an asymmetric algorithm creates keys that are used for
automated key distribution. The secret key that is used to encrypt the plaintext must be
sent to the receiving end so the message can be decrypted. You do not want this key to
travel unprotected, because if the message were intercepted and the key were not
protected, any intruder could get the key and decrypt the original message. If the secret key that is needed
to decrypt the message is not protected, then there is no use in encrypting the message in
the first place. So we use an asymmetric algorithm to encrypt the secret key. Why do we
use the symmetric algorithm on the message and the asymmetric algorithm on the key?
We said earlier that the asymmetric algorithm takes longer because the math is more
complex. Because your message is most likely going to be longer than the length of the
key, we use the faster algorithm on the message (symmetric) and the slower algorithm on
the key (asymmetric).
Public Key Cryptography performs the following steps:
1. Asymmetric algorithm performs encryption and decryption by using public and
private keys.
2. Symmetric algorithm performs encryption and decryption by using a secret key.
3. A secret key is used to encrypt the actual message.
4. Public and private keys are used to encrypt the secret key.
5. A secret key is synonymous to a symmetric key.
6. An asymmetric key refers to a public or private key.
That is how a hybrid system works. The symmetric algorithm creates a secret key that
will be used to encrypt the bulk, or the message, and the asymmetric key (either public or
private key) encrypts the secret key.
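A minimal hybrid-encryption sketch of these steps follows, assuming the third-party Python 'cryptography' package and illustrative names (receiver_key, secret_key): a symmetric key encrypts the bulk message, and the receiver's public key wraps that symmetric key.

# Hedged sketch of hybrid (public key) cryptography: AES-GCM for bulk data,
# RSA-OAEP to protect the symmetric key in transit.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

secret_key = AESGCM.generate_key(bit_length=256)          # symmetric (secret) key
nonce = os.urandom(12)
bulk_ciphertext = AESGCM(secret_key).encrypt(nonce, b"a long message ...", None)

wrapped_key = receiver_key.public_key().encrypt(          # asymmetric algorithm protects the key
    secret_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))

# Receiver: unwrap the secret key with his private key, then decrypt the bulk data
recovered_key = receiver_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))
plaintext = AESGCM(recovered_key).decrypt(nonce, bulk_ciphertext, None)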
5.2 Public Key Infrastructure (PKI)
Public key infrastructure (PKI) consists of programs, data formats, procedures,
communication protocols, security policies, and public key cryptographic mechanisms
working in a comprehensive manner to enable a wide range of dispersed people to
communicate in a secure and predictable fashion. PKI is an ISO authentication
framework that uses public key cryptography and the X.509 standard protocols. The
framework was set up to enable authentication to happen across different networks and
the Internet. Specific protocols and algorithms are not specified, and that is why it is
called a framework and not a specific technology. PKI provides authentication,
confidentiality, nonrepudiation, and integrity of the messages exchanged. PKI is a hybrid
system of symmetric and asymmetric key algorithms and methods, which was discussed
in earlier sections. There is a difference between public key cryptography and PKI. So to
be clear, public key cryptography entails the algorithms, keys, and technology required to
encrypt and decrypt messages. PKI is what its name states: it is an infrastructure. The
infrastructure of this technology assumes that the receiver's identity can be positively
ensured through certificates and that the Diffie-Hellman exchange protocol (or another
type of key exchange protocol) will automatically negotiate the process of key exchange.
So the infrastructure contains the pieces that will identify users, create and distribute
certificates, maintain and revoke certificates, distribute and maintain encryption keys, and
enable all technologies to communicate and work together for the purpose of encrypted
communication. Public key cryptography is one piece in PKI, but there are many other
pieces that are required to make up this infrastructure. An analogy is the e-mail protocol
Simple Mail Transfer Protocol (SMTP). SMTP is the technology used to get e-mail
messages from here to there, but many other things must be in place before this protocol
can be productive. We need e-mail clients, e-mail servers, and e-mail messages, which
together build a type of infrastructure, an e-mail infrastructure. PKI is made up of many
different parts: certificate authorities, registration authorities, certificates, keys, and users.
The following sections explain these parts and how they all work together. Each person
who wants to participate in a PKI requires a digital certificate, which is a credential that
contains the public key of that individual along with other identifying information. The
certificate is signed (digital signature) by a trusted third party or a certificate authority
(CA). The CA is responsible for verifying the identity of the key owner. When the CA
signs the certificate, it binds the individual's identity to the public key, and the CA takes
liability for the authenticity of that public key. It is this trusted third party (the CA) that
allows people who have never met to authenticate to each other and communicate in a
secure manner.
X.509 Certificates
X.509 is a widely used standard for digital certificates. An X.509 certificate is a structured
grouping of information about an individual, a device, or anything one can imagine. The
X.509 standard defines what information can go into a certificate, and describes how to
write it down (the data format). All X.509 certificates have the following data, in addition
to the signature:
Version
This identifies which version of the X.509 standard applies to this certificate, which
affects what information can be specified in it. Thus far, three versions are defined.
Serial Number
The entity that created the certificate is responsible for assigning it a serial number to
distinguish it from other certificates it issues. This information is used in numerous ways,
for example when a certificate is revoked its serial number is placed in a Certificate
Revocation List (CRL).
Signature Algorithm Identifier
This identifies the algorithm used by the CA to sign the certificate.
Issuer Name

The X.500 name of the entity that signed the certificate. This is normally a CA. Using
this certificate implies trusting the entity that signed this certificate. (Note that in some
cases, such as root or top-level CA certificates, the issuer signs its own certificate.)
Validity Period
Each certificate is valid only for a limited amount of time. This period is described by a
start date and time and an end date and time, and can be as short as a few seconds or
almost as long as a century. The validity period chosen depends on a number of factors,
such as the strength of the private key used to sign the certificate or the amount one is
willing to pay for a certificate. This is the expected period that entities can rely on the
public value, if the associated private key has not been compromised.
Subject Name
The name of the entity whose public key the certificate identifies. This name uses the
X.500 standard, so it is intended to be unique across the Internet. This is the
Distinguished Name (DN) of the entity, for example,
CN=user1, OU=dsf, O=cts, C=US

(These refer to the subject's Common Name, Organizational Unit, Organization, and
Country.)
Subject Public Key Information
This is the public key of the entity being named, together with an algorithm identifier
which specifies which public key crypto system this key belongs to and any associated
key parameters.
X.509 Version 1 has been available since 1988, is widely deployed, and is the most
generic.
X.509 Version 2 introduced the concept of subject and issuer unique identifiers to handle
the possibility of reuse of subject and/or issuer names over time. Most certificate profile
documents strongly recommend that names not be reused, and that certificates should not
make use of unique identifiers. Version 2 certificates are not widely used.
X.509 Version 3 is the most recent (1996) and supports the notion of extensions, whereby
anyone can define an extension and include it in the certificate. Some common extensions
in use today are: KeyUsage (limits the use of the keys to particular purposes such as
"signing-only") and AlternativeNames (allows other identities to also be associated with
this public key, e.g. DNS names, Email addresses, IP addresses). Extensions can be
marked critical to indicate that the extension should be checked and enforced/used. For
example, if a certificate has the KeyUsage extension marked critical and set to
"keyCertSign" then if this certificate is presented during SSL communication, it should
be rejected, as the certificate extension indicates that the associated private key should
only be used for signing certificates and not for SSL use.
All the data in a certificate is encoded using two related standards called ASN.1/DER.
Abstract Syntax Notation 1 describes data. The Definite Encoding Rules describe a single
way to store and transfer that data. People have been known to describe this combination
simultaneously as "powerful and flexible" and as "cryptic and awkward".
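As a hedged sketch of the fields listed above, recent versions of the third-party Python 'cryptography' package can read them from a certificate; "cert.pem" is a placeholder path, not a file referenced by the original text.

# Read the standard X.509 fields from a PEM-encoded certificate.
from cryptography import x509

with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print(cert.version)                    # X.509 version (e.g. Version.v3)
print(cert.serial_number)              # serial number assigned by the issuer
print(cert.signature_algorithm_oid)    # algorithm the CA used to sign
print(cert.issuer.rfc4514_string())    # X.500 name of the signer (normally a CA)
print(cert.not_valid_before, cert.not_valid_after)   # validity period
print(cert.subject.rfc4514_string())   # Distinguished Name of the key owner
print(cert.public_key())               # subject public key information
for ext in cert.extensions:            # v3 extensions such as KeyUsage
    print(ext.oid, "critical" if ext.critical else "non-critical")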
5.3 Digital signatures
Digital signatures, much like real-life signatures, provide proof of authenticity of the
sender and the integrity of the message. Digital signatures can be used for nonrepudiation
-- the sender cannot deny that he or she signed it. Digital signatures need to be
unforgeable and not reusable to be successful, and the signed document must be
unalterable.
The basic digital signature protocol is:
1. The sender encrypts the document with his/her private key, implicitly signing the
document.
2. The message is sent.
3. The receiver decrypts the document with the sender's public key, thereby
verifying the signature
Since signing large documents is time consuming, quite often only a hash of the message
is signed. The one-way hash and the digital signature algorithm is agreed a priori. The
original message is sent with the signature. The receiver verifies the signature by
decrypting the hash with the sender's public key and matching it with the hash generated
against the received message. Figure 1 below illustrates the signature generation and
verification process. That scheme also has a nice effect of keeping the document and the
signature separate.

Figure 1. Digital signatures


Notice that the message is run through a hashing algorithm to generate a fixed-size hash,
which is then encrypted to generate a signature. Those signatures are sometimes referred to as
digital fingerprints since they represent the original message in a unique manner. Using
digital signatures does not guarantee confidentiality since the message is sent as plaintext.
To further guarantee confidentiality, instead of sending the plaintext message, it could be
encrypted with the receiver's public key, a process illustrated in Figure 2.

Figure 2. Digital signatures with encryption


Digital signatures come in several algorithms, such as ElGamal signatures, RSA, or
digital signature algorithm (DSA). Both ElGamal and RSA algorithms can be used for
encryption and digital signatures. In contrast, DSA, part of the digital signature standard
(DSS), can be used only for digital signatures and not for encryption. A separate
algorithm for encryption has to be used in conjunction with DSA if encryption is desired.
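A hedged sketch of the hash-then-sign flow follows, using DSA (signature-only, as noted above) via the third-party Python 'cryptography' package. The library hashes the message and signs the digest; the verifier recomputes the hash and checks the signature.

# Sign and verify a message digest with DSA.
from cryptography.hazmat.primitives.asymmetric import dsa
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

signer = dsa.generate_private_key(key_size=2048)
message = b"attack at dawn"

signature = signer.sign(message, hashes.SHA256())     # signs the SHA-256 digest of the message

try:
    signer.public_key().verify(signature, message, hashes.SHA256())
    print("signature valid: sender authenticated, message unaltered")
except InvalidSignature:
    print("signature invalid: message altered or not from this sender")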
One-Way Function
A one-way function is a mathematical function that is easy to compute in one direction
but very hard to compute in the opposite direction; this is exactly the property that
cryptography exploits. The easy direction of computation in a one-way function is like
multiplying two large prime numbers. It is easy to multiply the two numbers and get the
resulting product, but it is much harder to factor the product and recover the two initial
large prime numbers. Many public key encryption algorithms are based on the difficulty
of factoring large numbers that are the product of two large prime numbers. So when
there are attacks on these types of cryptosystems, the attack is not necessarily trying
every possible key value, but trying to factor the large number. So the easy function in a
one-way function is multiplying two large prime numbers and the hard function is
working backwards by figuring out the large prime numbers that were used to calculate
the obtained product number. Public key cryptography is based on trapdoor one-way
functions. When a user encrypts a message with a public key, this message is encoded
with a one-way function. This function supplies a trapdoor, but the only way the trapdoor
can be taken advantage of is if it is known about and the correct code is applied. The
private key provides this service. The private key knows about the trapdoor and has the
necessary programming code to take advantage of this secret trapdoor to unlock the
encoded message. Knowing about the trapdoor and having the correct functionality to
take advantage of it makes a private key. The crux of this section is that public key
cryptography provides security by using mathematical equations that are easy to perform
one way (using the public key) and next to impossible to reverse the other way (without
the private key). An attacker would have to go through a lot of work to perform the
mathematical equations in reverse (or figure out the private key).
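The asymmetry is easy to see with toy numbers: multiplying two primes is a single operation, while recovering them from the product by trial division takes many steps, and the gap grows dramatically with key size. The primes below are assumptions chosen only for illustration.

# Easy direction: one multiplication. Hard direction: search for a divisor.
p, q = 104729, 1299709                    # two primes (small, for illustration only)
n = p * q                                 # instant to compute

def factor(n):
    i = 2
    while i * i <= n:                     # brute-force trial division
        if n % i == 0:
            return i, n // i
        i += 1
    return None

print(factor(n))                          # (104729, 1299709), after roughly 100,000 trial divisions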
Message Integrity
Cryptography can detect if a message has been modified in an unauthorized manner in a
couple of different ways. The first way is that the message will usually not decrypt
properly if parts of it have been changed. The same type of issue happens in compression.
If a file is compressed and then some of the bits are modified, either intentionally or
accidentally, many times the file cannot be uncompressed because it cannot be
successfully transformed from one form to another. Parity bits have been used in different
protocols to detect modifications of streams of bits as they are passed from one computer
to another, but parity bits can usually only detect unintentional modifications.
Unintentional modifications can happen if there is a spike in the power supply, if there is
interference or attenuation on a wire, or if some other type of physical condition occurs
that causes the corruption of bits as they travel from one destination to another. Parity bits
cannot identify if a message was captured by an intruder, altered, and then sent on to the
intended destination because the intruder can just recalculate a new parity value that
includes his changes and the receiver would never know the difference. For this type of
protection, cryptography is required to successfully detect intentional and unintentional
unauthorized modifications to data.
One-Way Hash
A one-way hash is a function (usually mathematical) that takes a variable-length string, a
message, and compresses and transforms it into a fixed-length value referred to as a hash
value. A hash value is also called a message digest. Just as fingerprints can be used to
identify individuals, hash values can be used to identify a specific message. The hashing
function, usually an algorithm, is not a secret; it is publicly known. The protection comes
from the function's one-wayness: it is run in only one direction, not the other direction.
This is different from the one-way function used in public key
cryptography. In public key cryptography, the security is provided because it is very hard,
without knowing the key, to perform the one-way function backwards on a message and
come up with readable plaintext. However, one-way hash functions are never used in
reverse; they create a hash value and call it a day. The receiver does not attempt to
reverse the process at the other end, but instead runs the same hashing function one way
and compares the two results. The hashing one-way function takes place without the use
of any keys. This means that anyone who receives the message can compute the hash value
and verify the message's integrity. However, if a sender only wants a specific person to
be able to verify the hash value sent with the message, the value would be protected with
a symmetric key. This is referred to as a message authentication code (MAC).
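A minimal example of the keyless, one-way behaviour, using Python's standard hashlib; the strings are placeholders. The receiver never reverses the hash, only recomputes and compares it.

import hashlib

message = b"transfer 100 to Bob"
digest = hashlib.sha256(message).hexdigest()            # fixed-length "fingerprint"

received = b"transfer 900 to Bob"                       # tampered with in transit
print(hashlib.sha256(received).hexdigest() == digest)   # False -> integrity check fails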
MAC
A MAC (message authentication code) is a key dependent one-way hash function. It has
the same functionality as a one-way hash, but it requires a symmetric key to be used in
the process and a one-way hash does not. Basically, the MAC is a one-way hash value
that is encrypted with a symmetric key.
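A short sketch with Python's standard hmac module follows. HMAC, the common keyed-hash construction, is a slight variation on "encrypting the hash": the symmetric key is mixed into the hash itself, so only a key holder can produce or check the value. The key and message here are placeholders.

import hmac, hashlib

shared_key = b"0123456789abcdef"                  # symmetric key known to both ends
message = b"transfer 100 to Bob"

mac = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

# Receiver recomputes the MAC with the same key and compares in constant time
check = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(mac, check))            # True -> message intact and from a key holder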
Hashing Algorithms
As stated in an earlier section, the goal of using a one-way hash function is to provide a
fingerprint of the message. If two different messages produced the same hash value, then
it would be easier for an attacker to break that security mechanism because patterns
would be revealed. A strong one-way hash function is hard to break and does not produce
the same hash value for two or more different messages. If a hashing algorithm takes
steps to ensure that it does not create the same hash value for two or more messages, it is
said to be collision free. Good cryptographic hash functions should
have the following characteristics:
The hash should be computed on the entire message. The hash should be a one-way
function so that messages are not disclosed by their signatures. It should be impossible,
given a message and its hash value, to compute another message with the same hash
value. It should be resistant to birthday attacks, meaning an attacker should not be able
to find two messages with the same hash value.
MD4
MD4 is a one-way hash function designed by Ron Rivest. It produces 128-bit hash, or
message digest, values. It is used for high-speed computation in software
implementations and is optimized for microprocessors.
MD5
MD5 is the newer version of MD4. It still produces a 128-bit hash, but the algorithm is a
bit more complex, making it harder to break than MD4. MD5 added a fourth round of
operations to the hashing function and made several of its mathematical operations more
complex in order to provide a higher level of security.
MD2
MD2 is also a 128-bit one-way hash function designed by Ron Rivest. It is not
necessarily any weaker than the previously mentioned hash functions, but it is much
slower.
SHA
SHA was designed by NIST and NSA to be used with the DSS. The SHA was designed to
be used in digital signatures and developed when a more secure digital signature
algorithm was required for federal applications. SHA produces a 160-bit hash value, or
message digest. This is then input into the DSA, which computes the signature for a
message. The message digest is signed instead of the whole message because it is a much
quicker process. The sender computes a 160-bit hash value, encrypts it with his private
key (signs it), appends it to the message, and sends it. The receiver decrypts the value
with the sender's public key, runs the same hashing function, and compares the two
values. If the values are the same, the receiver can be sure that the message has not been
tampered with while in transit. SHA is similar to MD4. It has some extra mathematical
functions and produces a 160-bit hash instead of 128-bit, which makes it more resistant to
brute force attacks, including birthday attacks.
HAVAL
HAVAL is a variable-length one-way hash function and is the modification of MD5. It
processes message blocks twice the size of those used in MD5; thus, it processes blocks
of 1,024 bits.
Session Keys
A Session Key is a secret key that is used to encrypt messages between two users. A
session key is not any different than the secret key that was described in the previous
section, but it is only good for one communication session between users. If Tanya had a
secret key she used to encrypt messages between Lance and herself all the time, then this
secret key would not be regenerated or changed. They would use the exact same key each
and every time they communicated using encryption. However, using the same key over
and over again increases the chances of the key being captured and the secure
communication being compromised. If, on the other hand, a new secret key was
generated each time Lance and Tanya wanted to communicate, as shown in Figure 8-19,
it would only be used during their one dialog and then destroyed. If they wanted to
communicate an hour later, a new session key would be created and shared. A session key
provides security because it is only valid for one session between two computers. If an
attacker captured the session key, she would have a very small window of time to use it to
try and decrypt messages being passed back and forth. When two computers want to
communicate using encryption, they must first go through a handshaking process. The
two computers agree on the encryption algorithms that will be used and exchange the
session key that will be used for data encryption. In a sense, the two computers set up a
virtual connection between each other and are said to be in session. When this session is
done, each computer tears down any data structures it built to enable this communication
to take place, the resources are released, and the session key is destroyed. So there are
keys used to encrypt data and different types of keys used to encrypt keys. These keys
must be kept separate from each other, and neither should perform the other key's job. A
key whose purpose is encrypting keys should not be used to encrypt data, and
anything encrypted with a key used to encrypt other keys should not appear in clear text.
This will reduce the vulnerability to certain brute force attacks. These things are taken
care of by operating systems and applications in the background, so a user would not
necessarily need to be worried about using the wrong type of key for the wrong reason.
The software will handle this, but as a security professional, it is important to understand
the difference between the key types and the issues that surround them.
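A sketch of the per-session key idea follows, assuming the third-party Python 'cryptography' package; the messages and function name are illustrative. A fresh symmetric key is generated for each dialog, used only for that session's traffic, and discarded afterwards.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def run_session(messages):
    session_key = AESGCM.generate_key(bit_length=128)    # new key for this session only
    cipher = AESGCM(session_key)
    for m in messages:
        nonce = os.urandom(12)
        ct = cipher.encrypt(nonce, m, None)
        assert cipher.decrypt(nonce, ct, None) == m
    del session_key                                      # "destroyed" when the session ends

run_session([b"hello", b"secret data"])
run_session([b"a later conversation"])                   # gets a brand-new session key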

SSL
Secure Sockets Layer (SSL) is similar to S-HTTP, but it protects a communication
channel instead of individual messages. It uses public key encryption and provides data
encryption, server authentication, message integrity, and optional client authentication.
When a client accesses a Web site, it is possible for that Web site to have secured and
public portions. The secured portion would require the user to be authenticated in some
fashion. When the client goes from a public page on the Web site to a secured page, the
Web server will start the necessary tasks to invoke SSL and protect this type of
communication. The server sends a message back to the client indicating that a secure
session needs to be established, and the client sends its security parameters, such as the
cipher suites it supports. The server compares those security parameters to its own until it
finds a match. This is the handshaking phase. The server authenticates to the client by
sending it a digital certificate, and if the client decides to trust the server, the process
continues. The server can require the client to send over a digital certificate for mutual
authentication, but that is rare. The client then creates session key material and encrypts
it with the server's public key. The server decrypts it with its private key, and the two use
this session key throughout the session. Just like S-HTTP, SSL keeps the communication path open
until one of the parties requests to end the session. Usually the client will click on a
different URL, and the session is complete. SSL protocol requires an SSL-enabled server
and browser. SSL will provide security for the connection but does not provide security
for the data once it is received. This means the data is encrypted while it is being
transmitted, but once it is received by a computer, it is no longer encrypted. So if a user
sends bank account information to a financial institution via a connection protected by
SSL, that communication path is protected but the user must trust the financial institution
that receives this information because at this point, SSL's job is done. In the protocol
stack, SSL lies beneath the application layer and above the transport layer. This ensures
that SSL is not limited to specific application protocols and can still use the
communication transport standards of the Internet. The user can verify a secure
connection by looking at the URL to see that it states https://. The same is true for a
padlock or key icon, depending on the browser type, at the bottom corner of the browser
window.
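From the client side, Python's standard ssl module gives a small, hedged illustration of the behaviour described above: the library performs the handshake, validates the server's certificate, and negotiates the keys for the session. "www.example.com" is used as a placeholder host.

import socket, ssl

context = ssl.create_default_context()                   # trusted CA store, sane defaults
with socket.create_connection(("www.example.com", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="www.example.com") as tls:
        print(tls.version())                              # e.g. 'TLSv1.3'
        print(tls.getpeercert()["subject"])               # server certificate's subject
        tls.sendall(b"GET / HTTP/1.0\r\nHost: www.example.com\r\n\r\n")
        print(tls.recv(200))                              # data is protected only while in transit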
Pretty Good Privacy (PGP)
Pretty Good Privacy (PGP) was designed by Phil Zimmerman as a freeware e-mail
security program and released in 1991. It was the first widespread public key encryption
program. PGP is a complete working system that uses cryptographic protection to protect
e-mail and files. It mainly uses RSA public key encryption for key management and
IDEA symmetric cipher for bulk encryption of data, although the user has the option of
picking different types of algorithms to use. PGP can provide confidentiality through the
IDEA encryption algorithm, integrity by using the MD5 hashing algorithm,
authentication by using the public key certificates, and nonrepudiation through the use of
cryptographically signed messages. The user's private key is generated and encrypted
when the application asks the user to randomly type on her keyboard for a specific
amount of time. Instead of using passwords, PGP uses passphrases. The passphrase is
used to encrypt the user's private key that is stored on her hard drive. PGP does not use a
hierarchy of CAs, but relies on a "web of trust" in its key management approach. Each
user generates and distributes his or her public key, and users sign each other's public
keys, which creates a community of users who trust each other. This is different from the
CA approach where no one trusts each other; they only trust the CA. For example, if
Mark and Mike want to communicate using PGP, Mark can give his public key to Mike.
Mike signs Mark's key and keeps a copy for himself. Then Mike gives a copy of his
public key to Mark so they can start communicating securely. Later, Mark would like to
communicate with Joe, but Joe does not know Mark and does not know if he can trust
him. Mark sends Joe his public key, which has been signed by Mike. Joe has Mike's
public key, because they have communicated before, and trusts Mike. Because Mike
signed Mark's public key, Joe now trusts Mark also, sends his public key, and begins
communicating with him. So basically it is a system of "I don't know you, but my buddy
Mike says you are an all right guy, so I will trust you on Mike's word." Each
user keeps a collection of signed public keys he has received from other users in a file
referred to as a key ring. Each key in that ring has a parameter that indicates the level of
trust assigned to that user and the validity of that particular key. If Steve has known Liz
for many years and trusts her, he might have a higher level of trust indicated on her stored
public key than on Tom's, whom he does not trust much at all. There is also a field
indicating who can sign other keys within Steve's realm of trust. If Steve receives a key
from someone he doesn't know, like Kevin, and the key is signed by Liz, he can look at
the field that pertains to whom he trusts to sign other people's keys. If the field indicates
that Steve trusts Liz enough to sign another person's key, then Steve will accept Kevin's
key and communicate with him. However, if Steve receives a key from Kevin and it is
signed by untrustworthy Tom, then Steve might choose not to trust Kevin and not communicate
with him. These fields are available for updating and alteration. If one day Steve really
gets to know Tom and finds out he is okay after all, he can modify these parameters
within PGP and give Tom more trust when it comes to cryptography and secure
communication. Because the web of trust does not have a central leader, like a CA,
certain standardized functionality is harder to accomplish. If Steve loses his private key, it
means anyone else trusting his public key must be notified that it should no longer be
trusted. In a PKI, Steve would only need to notify the CA, and anyone attempting to verify
the validity of Steve's public key would be told not to trust it when they contacted
the CA. In the PGP world, this is not as centralized and organized. Steve can send out a
key revocation certificate, but there is no guarantee that it will reach each user's key ring
file. PGP is public domain software that uses public key cryptography. It has not been
endorsed by the NSA, but because it is a great product and free for individuals to use, it
has become somewhat of an encryption standard on the Internet.
MIME
Multipurpose Internet Mail Extension (MIME) is a technical specification indicating
how multimedia data and e-mail attachments are to be transferred. The Internet has mail
standards that dictate how mail is to be formatted, encapsulated, transmitted, and opened.
If a message or document contains a multimedia attachment, MIME dictates how that
portion of the message should be handled. When a user requests a file from a Web server
that contains an audio clip, graphic, or some other type of multimedia component, the
server will send the file with a header that describes the file type. For example, the header
might indicate that the MIME type is Image and the subtype is jpeg. Although this will be
in the header, many times systems also use the file's extension to identify the MIME type.
So in our example, the file's name might be stuff.jpeg. The user's system will see the
extension jpeg, or see that data in the header field, and look in its association list to see
what program it needs to initialize to open this particular file. If the system has jpeg files
associated with the Explorer application, then the Explorer will open and present the
picture to the user. Sometimes systems either do not have an association for a specific file
type or do not have the helper program necessary to review and use the
contents of the file. When a file has an unassociated icon assigned to it, it might require
the user to choose the Open With command and choose an application in the list to
associate this file with that program. So when the user double-clicks on that file, the
associated program will initialize and present the file. If the system does not have the
necessary program, the Web site might offer the necessary helper program, like Acrobat
or an audio program that plays wave files. So MIME is a specification that dictates how
certain file types should be transmitted and handled. This specification has several types
and subtypes, enables different computers to exchange data in varying formats, and
provides a standardized way of presenting the data. So if Sean views a funny picture that
is in GIF format, he can be sure that when he sends it to Debbie, it will look exactly the
same.
S/MIME
Secure MIME (S/MIME) is a standard for encrypting and digitally signing electronic
mail that contains attachments and providing secure data transmissions. S/MIME extends
the MIME standard by allowing for the encryption of e-mail and attachments. The
encryption and hashing algorithms can be specified by the user of the mail package
instead of having it dictated to them. S/MIME provides confidentiality through the user's
encryption algorithm, integrity through the user's hashing algorithm, authentication
through the use of X.509 public key certificates, and nonrepudiation through
cryptographically signed messages.
SET
Secure Electronic Transaction (SET) is a security technology proposed by Visa and
MasterCard to allow for more secure credit card transaction possibilities than what is
currently available. SET has been waiting in the wings for full implementation and
acceptance as the standard for quite some time. Although SET provides a very effective
way of transmitting credit card information, businesses and users do not see it as efficient
because it requires more parties to coordinate their efforts, more software installation and
configuration for each entity involved, and more effort and cost than the widely used SSL
method. SET is a cryptographic protocol developed to send encrypted credit card
numbers over the Internet. It is composed of three main parts: the electronic wallet, the
software running on the merchant's server at its Web site, and the payment server located
at the merchant's bank. To use SET, a user must enter her credit card number into
electronic wallet software. This information will be stored on the user's hard drive or on a
smart card. The software will then create a public and private key used specifically for
encrypting financial information before it is sent. Let's say Tanya wants to buy her
mother a gift from a Web site using her electronic wallet. When she finds the perfect gift
and decides to purchase it, her encrypted credit card information is sent to the merchant's
Web server. The merchant does not decrypt this information, but instead digitally signs it
and sends it on to its processing bank. At the bank, the payment server decrypts the
information, verifies that Tanya has the necessary funds, and transfers the funds from
Tanya's account to the merchant's account. Then the payment server sends a message to
the merchant telling it to finish the transaction and a receipt is sent to Tanya and the
merchant. This is basically a very secure way of doing business over the Internet, but
today everyone seems to be happy enough with the security SSL provides and they do not
feel motivated enough to move to a different and more encompassing technology.

Cryptanalysis
Cryptanalysis is a science of studying and breaking the secrecy of encryption algorithms
and their necessary pieces. It is performed in academic settings and by curious and
motivated hackers, either to quench their inquisitiveness or use their findings to commit
fraud and destruction. Cryptanalysis is the process of trying to decrypt encrypted data
without the key. When new algorithms are proposed, they go through stringent
cryptanalysis to ensure that breaking them is infeasible or would take too much time and
too many resources to be practical.
Attacks
Eavesdropping, network sniffing, and capturing data as it passes over a network are
considered passive attacks because the attacker is not affecting the protocol, algorithm,
key, message, or any other part of the encryption system. Passive attacks are hard to
detect, so methods are put in place to try to prevent them rather than to detect and stop
them. Altering messages, modifying system files, and masquerading as another individual
are considered active attacks because the attacker is actually doing something
instead of sitting back gathering data. Passive attacks are usually used to gain information
prior to carrying out an active attack. The following sections go over active attacks that
can relate to cryptography.
Ciphertext-Only Attack
In this type of an attack, the attacker has the ciphertext of several messages. Each of the
messages has been encrypted using the same encryption algorithm. The attacker's goal is
to discover the plaintext of the messages by figuring out the key used in the encryption
process. Once the attacker figures out the key, she can now decrypt all other messages
encrypted with the same key.
Known-Plaintext Attack

In this type of attack, the attacker has the plaintext and ciphertext of one or more
messages. Again, the goal is to discover the key used to encrypt the messages so that
other messages can be deciphered and read.
Chosen-Plaintext Attack
In this attack, the attacker has the plaintext and ciphertext, but what makes this type of
attack different is that the attacker can choose the plaintext that gets encrypted. This gives
the attacker more power and possibly a deeper understanding of the way the encryption
process works, so that more information can be gathered about the key being used.
Once the key is discovered, other messages encrypted with that key can be decrypted.
Chosen-Ciphertext Attack
In this attack, the attacker can choose the ciphertext to be decrypted and has access to the
resulting decrypted plaintext. Again, the goal is to figure out the key. These are the
definitions, but how do these attacks actually get carried out? An attacker may make up a
message and send it to someone she knows will encrypt the message and then send it out
to others. In that case, the attacker knows the plaintext and can then capture the message
as it is being transmitted, which gives her the ciphertext. Also, messages usually start
with the same type of beginnings and ends. An attacker might know that each message a
general sends out to his commanders always starts with certain greetings and ends with
specific salutations and the general's name and contact information. In this instance, the
attacker has some of the plaintext (the data that is the same in each message) and can
capture an encrypted message, and therefore the ciphertext. Once a few pieces of
the puzzle are discovered, the rest is accomplished through reverse engineering and trial-and-error attempts. Known-plaintext attacks were used by the United States against the
Germans and the Japanese during World War II. The public mainly uses algorithms that
are known and understood versus the secret algorithms where the internal processes and
functions are not released to the public. In general, the strongest and best engineered
algorithms are the ones that are released for peer review and public scrutiny, because a
thousand brains are better than five and many times some smarty-pants within the public
population can find problems within an algorithm that the developers did not think of.
This is why vendors and companies have competitions to see if anyone can break their
code and encryption processes. If someone does break it, that means the developers must
go back to the drawing board and strengthen this or that piece. Not all algorithms are
released to the public, such as the ones used by the NSA. Because the sensitivity level of
what they are encrypting is so high, they want as much of the process kept secret as
possible. This does not mean that their algorithms are weak simply because they are not released for
public examination and analysis. Their algorithms are developed, reviewed, and tested by
many top cryptographic smarty-pants and are of very high quality.
Man-in-the-Middle Attack
If David is eavesdropping on different conversations that happen over a network and
would like to know the juicy secrets that Lance and Tanya pass between each other, he
can perform a man-in-the-middle attack. The following are the steps for this type of
attack:
1. Tanya sends her public key to Lance. David intercepts this key and sends Lance
his own public key instead. Lance thinks he has received Tanya's key, but in fact
received David's.
2. Lance sends Tanya his public key. David intercepts this key and sends Tanya his
own public key instead.
3. Tanya sends a message to Lance, encrypting it with what she believes is Lance's
public key. David intercepts the message and can decrypt it because it is actually
encrypted with his own public key, not Lance's. David decrypts it with his private
key, reads the message, re-encrypts it with Lance's real public key, and sends it on
to Lance.
4. Lance answers Tanya by encrypting his message with what he believes is Tanya's
public key. David intercepts it, decrypts it with his private key, reads the message,
encrypts it with Tanya's real public key, and sends it on to Tanya.
Many times public keys are kept on a public server for anyone to access. David can
intercept queries to the database for individual public keys, or David can substitute his
public key in the database itself in place of Lance's and Tanya's public keys. The SSL
protocol has been known to be vulnerable to some man-in-the-middle attacks. The
attacker injects herself right at the beginning of the authentication phase so that she
obtains both parties' public keys. This enables her to decrypt and view messages that
were not intended for her. Using digital signatures during the session-key exchange can
circumvent the man-in-the-middle attack. If using Kerberos, when Lance and Tanya obtain
each other's public keys from the KDC, the public keys are signed by the KDC. Because
Tanya and Lance have the public key of the KDC, they both can verify the
signature on each other's public key and be sure that it came from the KDC itself.
Because David does not have the private key of the KDC, he cannot substitute his public
key during this type of transmission. When Lance and Tanya communicate, they can use
digital signatures, which means they sign their messages with their private keys. Because
David does not have Tanya's or Lance's private key, he cannot intercept the messages, read
them, and encrypt them again as he would in a successful man-in-the-middle attack.
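The countermeasure can be sketched as follows, assuming the third-party Python 'cryptography' package; the "trusted" key stands in for the KDC/CA role and the names are illustrative. A received public key is accepted only if its bytes carry a valid signature from the trusted party, which David cannot forge.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.exceptions import InvalidSignature

trusted = rsa.generate_private_key(public_exponent=65537, key_size=2048)   # plays the KDC/CA
lance = rsa.generate_private_key(public_exponent=65537, key_size=2048)

lance_pub_bytes = lance.public_key().public_bytes(
    serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)
endorsement = trusted.sign(
    lance_pub_bytes,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256())

def accept_key(pub_bytes, signature):
    try:
        trusted.public_key().verify(
            signature, pub_bytes,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256())
        return True           # key was genuinely endorsed by the trusted party
    except InvalidSignature:
        return False          # substituted (man-in-the-middle) key is rejected

print(accept_key(lance_pub_bytes, endorsement))   # True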
Dictionary Attacks
If Daniel steals a password file that is filled with one-way function values, how can this
be helpful to him? He can take the 100,000 (or 1,000,000, if he is more motivated) most
commonly used passwords, run them through the same one-way function, and store the
results in a file. Now Daniel can compare the hashed values of the common passwords in
his file to the password file he stole from an authentication server. The entries that match
reveal which of those commonly used passwords are in use. Most
of the time the attacker does not need to come up with thousands or millions of
passwords. This work is already done and is readily available at many different hacker
sites on the Internet. The attacker only needs a dictionary program and he can then insert
the file of common passwords, or dictionary file, into the program and run it against a
captured password file.
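A minimal sketch of this comparison, using Python's standard hashlib with toy, unsalted data (all names and passwords here are made up for illustration):

import hashlib

stolen = {"alice": hashlib.sha256(b"letmein").hexdigest(),      # the captured password file
          "bob":   hashlib.sha256(b"Tr0ub4dor&3").hexdigest()}

dictionary = [b"password", b"123456", b"letmein", b"qwerty"]    # the dictionary file

for word in dictionary:
    guess = hashlib.sha256(word).hexdigest()
    for user, stored in stolen.items():
        if guess == stored:
            print(f"{user}'s password is {word.decode()}")      # alice's password is letmein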

Replay Attack
A big concern in distributed environments is replay attacks. Kerberos is especially
vulnerable to this type of attack. A replay attack is when an attacker copies a ticket and
breaks the encryption and then tries to impersonate the client and resubmit the ticket at a
later time to gain unauthorized access to a resource. Replay attacks do not only happen in
Kerberos. Many authentication protocols are susceptible to these types of attacks because
the goal of the attacker is usually to gain access to authentication information of an
authorized person so that she can turn around and use it herself to gain access to the
network. The Kerberos ticket is a form of authentication credentials, but an attacker can
also capture passwords or tokens as they are transmitted over the network. Once captured
she will resubmit the credentials, or replay them, in the hopes of being authorized.
Timestamps and sequence numbers are two countermeasures to the replay vulnerability.
Packets can contain sequence numbers so each machine will be expecting a specific
number on each receiving packet. If a packet has a sequence number that had been
previously used, this is an indication of a replay attack. Packets can also be timestamped.
A threshold can be set on each computer to only accept packets within a certain time
frame. If a packet is received that is past this threshold, it can help identify a replay
attack.
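The two countermeasures named above can be sketched in a few lines of Python; the packet structure and window size are assumptions for illustration. A packet is rejected if its sequence number was already seen or its timestamp falls outside a freshness window.

import time

seen_sequence_numbers = set()
FRESHNESS_WINDOW = 5.0                       # seconds a packet is considered fresh

def accept(packet):
    if packet["seq"] in seen_sequence_numbers:
        return False                         # duplicate sequence number -> replay
    if abs(time.time() - packet["timestamp"]) > FRESHNESS_WINDOW:
        return False                         # stale timestamp -> possible replay
    seen_sequence_numbers.add(packet["seq"])
    return True

pkt = {"seq": 1, "timestamp": time.time(), "data": b"login-token"}
print(accept(pkt))        # True  (first delivery)
print(accept(pkt))        # False (replayed packet is rejected)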
References
1. http://www.cse-cst.gc.ca/tutorials/english/section3/m2/s3_2_9_1.htm#symmetric
2. http://www-128.ibm.com/developerworks/library/s-crypt03.html
3. http://www.javaworld.com/javaworld/jw-04-2000/jw-0428-security-p2.html
6. Compliance Services & standards
6.1 FFIEC Compliance
The Federal Financial Institutions Examination Council (FFIEC) has issued a guidance
called "Authentication in an Electronic Banking Environment." This guidance focuses on
the risk-management controls necessary to authenticate the identity of customers
accessing electronic financial services. It also addresses the verification of new customers
and the authentication of existing customers. The guidance applies to both retail and
commercial customers of financial institutions.

Effective Authentication
Customer interaction with financial institutions is migrating from in-person, paper-based
transactions to remote electronic access and transaction initiation. This migration
increases the risk of doing business with unauthorized or incorrectly identified parties
that could result in financial loss or reputation damage to the financial institution.

Effective authentication can help financial institutions reduce fraud and promote the legal
enforceability of their electronic agreements and transactions.
There are a variety of authentication tools and methodologies financial institutions can
use to authenticate customers. These include the use of passwords and personal
identification numbers (PINs), digital certificates using a public key infrastructure (PKI),
physical devices such as smart cards or other types of "tokens," database comparisons,
and biometric identifiers. The level of risk protection afforded by each of these tools
varies and is evolving as technology changes.
Existing authentication methodologies involve three basic "factors":
something the user knows (e.g., password, PIN);
something the user possesses (e.g., ATM card, smart card); and
something the user is (e.g., biometric characteristic, such as a fingerprint or retinal pattern).

Authentication methods that depend on more than one factor typically are more difficult
to compromise than single factor systems. Accordingly, properly designed and
implemented multi-factor authentication methods are more reliable indicators of
authentication and stronger fraud deterrents. For example, the use of a logon ID/password
is single factor authentication (i.e., something the user knows); whereas, a transaction
using an ATM typically requires two-factor authentication: something the user possesses
(i.e., the card) combined with something the user knows (i.e., the PIN). In general, multi-factor authentication methods should be used on higher-risk systems.

Risk Assessment
The implementation of appropriate authentication methodologies starts with an
assessment of the risk posed by the institution's electronic banking systems. The risk
should be evaluated in light of the type of customer (e.g., retail or commercial); the
institution's transactional capabilities (e.g., bill payment, wire transfer, loan origination);
the sensitivity and value of the stored information to both the institution and the
customer; the ease of using the method; and the size and volume of transactions.
6.2 SOX Compliance
Sarbox Act or Sarbanes-Oxley Act
Sarbanes-Oxley is a US law passed in 2002 to strengthen corporate governance and
restore investor confidence. This act was sponsored by US Senator Paul Sarbanes and US
Representative Michael Oxley.

The Sarbanes-Oxley law was passed in response to a number of major corporate and
accounting scandals involving prominent companies in the United States. These scandals
resulted in a loss of public trust in accounting and reporting practices. The legislation is
wide ranging and establishes new or enhanced standards for all US public company
boards, management, and public accounting firms. The Sarbanes-Oxley law contains 11
titles, or sections, ranging from additional corporate board responsibilities to criminal
penalties, and it requires the Securities and Exchange Commission (SEC) to issue rulings
on the requirements for complying with the new law.
Requirements of SOX
Sarbanes-Oxley is a broad act that addresses a number of accountability issues. The most
relevant requirements of the law are the following:
CEOs and CFOs must attest to the accuracy of financial statements and disclosures in the
periodic report. (Section 302)
Companies are responsible for having adequate internal control structure and procedures
for financial reporting. Management must assess these internal controls. (Section 404)
Companies must provide real-time disclosures of any events that may affect a firm's stock
price or financial performance within a 48-hour period. (Section 409)
Companies must protect and retain financial audit records. (Section 802)
Related SEC releases define internal controls and procedures for financial reporting as
controls that provide reasonable assurances that:
Transactions are properly authorized.
Assets are safeguarded against unauthorized or improper use.
Transactions are properly recorded to permit the preparation of financial statements that
are presented in a manner consistent with GAAP.
To meet the assessment requirement, management must select a suitable, recognized
framework for assessing the effectiveness of internal controls.
Two popular control frameworks are COSO (Committee of Sponsoring Organizations)
and COBIT (Control Objectives for Information and Related Technologies). COSO
focuses on controls for financial processes, and COBIT focuses on IT.
Penalties for SOX non-compliance

The SEC has directed national securities exchanges and associations to prohibit the
listing of securities of a non-compliant company. If material non-compliance causes the
company to restate its financials, the CEO and CFO forfeit any bonuses and other
incentives received during the 12-month period following the first filing of the erroneous
financials. SOX takes specific note of violations involving destruction or falsification of
documents or records related to any federal investigation or bankruptcy proceeding.
Personal penalties range from fines of up to $1 million to prison sentences of not more
than 20 years for "whoever knowingly alters, destroys, mutilates" any record or document
with the intent to impede an investigation.
6.3 GLBA Compliance
The GLBA (Gramm Leach Bliley Act) legislation, which falls under the umbrella of the
U.S. Federal Trade Commission, was enacted in 1999 under President Bill Clinton.
Similar to the Health Insurance Portability and Accountability Act (HIPAA) regulations
in the healthcare industry, GLBA focuses on the privacy and security of NPI (non-public
personal information), which is essentially personal financial information. This NPI is
defined in the GLBA Financial Privacy Rule as any financial information that is
personally identifiable, such as name, social security number, income, and other
information that meets one of the following criteria:
Provided by a customer to a financial institution
Results from any transaction with the consumer or any service performed for the
consumer
Information otherwise obtained by the financial institution
GLBA specifically applies to U.S. financial institutions such as banks, credit unions,
securities brokerages, and insurance firms. Companies providing the following other
types of financial products and services to consumers are also affected: lending,
brokering or servicing any type of consumer loan, transferring or safeguarding money,
preparing individual tax returns, providing financial advice or credit counseling,
providing residential real estate settlement services, collecting consumer debts, and so on.
The GLBA Safeguards Rule, which is the section that is applicable to Web application
security, recommends information security best practices. The compliance date for this
rule was May 23, 2003. The objectives, as listed in the rule itself, are as follows:
Ensure the security and confidentiality of customer information
Protect against any anticipated threats or hazards to the security or integrity of such
information

Protect against unauthorized access to or use of such information that could result in
substantial harm or inconvenience to any customer
The Safeguards Rule is essentially a set of information security best practices, but
financial organizations that fall under the scope of GLBA must nonetheless implement it.
The positive thing about the Safeguards Rule is that it is flexible and scalable, regardless
of the size of the organization.
Penalties for GLBA non-compliance
The GLBA gives authority to eight federal agencies and the states to administer and
enforce the Financial Privacy Rule and the Safeguards Rule. Non-compliance with GLBA
can result in a variety of fines and up to 5 years' imprisonment for each violation.
Violation of the GLBA may result in a civil action brought by the United States Attorney
General. A 2003 amendment to the act specified, (1) "the financial institution shall be
subject to a civil penalty of not more than $100,000 for each such violation," and (2) "the
officers and directors of the financial institution shall be subject to, and shall be
personally liable for, a civil penalty of not more than $10,000 for each such violation."
GLBA Compliance Measures
While most financial services firms inform their customers of the company's privacy
policy, fewer have strong data protection measures in place to secure personal
information.
The Safeguards Rule requires companies to develop a written information security plan
that describes their program to protect customer information. The FTC explicitly notes
that part of the plan should include "encrypting sensitive customer information when it is
transmitted electronically via public networks."
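As a hedged illustration of the FTC's point about encrypting NPI in transit, the following minimal Python sketch enforces certificate validation and a modern TLS version before sending anything; the host name and request are hypothetical, and a real deployment would use the institution's approved TLS configuration and libraries.

    import socket
    import ssl

    HOST, PORT = "api.example-bank.com", 443   # hypothetical partner endpoint

    # Require certificate validation and a modern TLS version before any
    # non-public personal information (NPI) leaves the process.
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2

    with socket.create_connection((HOST, PORT)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
            # Only after the handshake succeeds is it safe to transmit NPI.
            tls_sock.sendall(b"GET /status HTTP/1.1\r\nHost: api.example-bank.com\r\n\r\n")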
To meet the spirit and letter of the law, companies must:
Ensure confidentiality (encryption in transit and in storage)
Prevent unauthorized access (authentication and access controls)
Protect customer data against anticipated hazards or threats to security
Protect customer data integrity (provide data integrity schemes)
References


http://www.ftc.gov/privacy/privacyinitiatives/financial_rule.html
http://www.ftc.gov/privacy/privacyinitiatives/safeguards.htm
6.4 HIPAA Compliance
HIPAA Security Compliance
The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is designed to
improve the efficiency and effectiveness of the health care system and to reduce the
incidence of fraud. Among other things, it requires the secure transfer of electronic
health care information. Recognizing the risks inherent in this,
HIPAA contains regulations for information privacy and information systems security. All
healthcare providers must now (as of April 2006) comply with the law.
All entities that handle, maintain, store, or exchange private health- or patient-related
information, regardless of size, are subject to HIPAA requirements. This includes the
following: healthcare organizations; employers maintaining health records; health plans;
life insurers; most doctors, nurses, pharmacies, hospitals, clinics, nursing homes; and
many more.
Companies that contract or conduct electronic business transactions related to medical
services (e.g., claims inquiries, payment advice, eligibility inquiries, referral
authorization inquiries) are also affected.
Requirements of HIPAA
HIPAA requires safeguards to improve the confidentiality of patient information. It
includes a Privacy Rule and a Security Rule, both of which require healthcare
organizations to increase the security of their patient-related data.
The HIPAA Privacy Rule requires health plan administrators, healthcare clearinghouses,
and healthcare providers to protect and secure any individually-identifiable health-related
information. The Privacy Rule broadly covers all types of patient health information
including written, oral, and electronic.
The HIPAA Security Rule ensures the confidentiality, integrity, and availability of
electronic protected health information (ePHI). It provides a uniform level of protection
of all health information that (a) is housed or transmitted electronically, and that (b)
pertains to an individual. The Security Rule specifies certain safeguards that are
"required" (i.e., must be implemented) and others that are "addressable" (i.e., do not have
to be implemented if the organization can document why the specification is not
reasonable or appropriate to its circumstances). These include:
Authentication: authenticating entities or individuals prior to data access (required)
Data Integrity: protecting against unauthorized modifications (addressable)
Data Access: controlling which users, applications, and devices can access patient information (addressable)
Data Confidentiality: encrypting data in transit or at rest (addressable)
Penalties for HIPAA non-compliance
Patients can file claims with the U.S. Department of Health and Human Services (DHHS)
if they believe a covered entity is non-compliant with HIPAA requirements. Those found
in violation of HIPAA could face:
Civil penalties of $100 per violation, up to $25,000 per year for each requirement or prohibition violated
Criminal penalties of up to $250,000 and 10 years imprisonment for knowingly violating patient privacy
6.5 PCI Compliance
PCI - HISTORY

Visa developed CISP (the Cardholder Information Security Program) in 1999, published it
in April 2000, and made it mandatory in June 2001. Other card associations that followed
suit are MasterCard SDP, Amex DSOP, Discover DISC, and JCB.
The first set of requirements of the Payment Card Industry Data Security Standard (PCI
DSS) was published in December 2004. This was later coupled with MasterCard's network
scanning requirements, published in March 2005.
Finally, in September 2006 the PCI Security Standards Council was formed, which
intends to serve as an advisory group and manage the PCI Security Standards.
PCI - Bird's Eye View

1. Standard of due care for the protection of credit card holder data
2. Has 12 major criteria under six control objectives
3. PCI DSS requirements are applicable if a Primary Account Number (PAN) is stored,
processed, or transmitted. If a PAN is not stored, processed, or transmitted, PCI DSS
requirements do not apply.
1. Build and Maintain a Secure Network
Install/maintain a firewall configuration to protect data (29)
Do not use vendor-supplied defaults for system passwords (8)
2. Protect Cardholder Data
Protect stored data (21)

Encrypt transmission of cardholder/ sensitive data across public networks (3)


3. Vulnerability Management
Use and regularly update anti-virus software and programs (2)
Develop & maintain secure systems and applications (27)
4. Strong Access Control
Restrict access to data by business need-to-know (2)
Assign a unique ID to each computer user (21)
Restrict physical access to cardholder data (21)
5. Monitor and Test Networks
Track/monitor access to network resources and cardholder data (25)
Regularly test security systems and processes (5)
6. Information Security Policy
Maintain an information security policy addressing employees/contractors (40)
PCI-LEVELS
Level-1
Criteria : Any merchant processing over 6 million transactions per year, any merchant
that has suffered a hack or attack resulting in data loss, or any payment service provider
Annual Requirements : Annual on-site audit by an independent security assessor, or an
internal audit signed by a company officer
Quarterly Requirements : Quarterly Scan by an independent PCI standards compliance
vendor
Level-2
Criteria : Any e-commerce merchant processing 150,000 to 6,000,000 Visa transactions
per year


Annual Requirements : Completion of an annual self-assessment questionnaire
submitted to a PCI standards vendor
Quarterly Requirements : Quarterly Scan by an independent PCI standards compliance
vendor
Level-3
Criteria : Any e-commerce merchant processing 20,000 to 150,000 Visa transactions per
year.
Annual Requirements : Completion of an annual self-assessment questionnaire
submitted to a PCI standards vendor
Quarterly Requirements : Quarterly Scan by an independent PCI standards compliance
vendor
Level-4
Criteria : All other merchants, regardless of acceptance channel
Annual Requirements : Recommended to undertake the above, with an annual scan
through an independent PCI auditor
Quarterly Requirements : NA
Risk of non-compliance
A PCI violation can form the basis for violations of other laws
GLBA requires protection of consumer financial data, which overlaps with PCI cardholder data
Card schemes may enforce the standards with financial penalties for non-compliance.
In extreme circumstances, the acceptance privileges of a merchant or service provider
may be revoked if compromised and non-compliant.
Sample Penalty (VISA)
Members can be fined up to $500,000 per incident if a merchant or service provider of
theirs that is not PCI compliant is compromised


Visa members who fail to immediately notify Visa of a suspected or known loss or theft
of transaction information may be fined up to $100,000 per incident, plus additional fines
if a PCI violation presents immediate and substantial risks to Visa and its members
Preparation
1. Dataflow analysis (basic risk analysis)
2. Necessity determination (risk reduction): eliminate unnecessary data
3. PCI compliance readiness assessment
4. Use the auditing and reporting documents for guidance and know the reporting rules
Key considerations to address PCI DSS
Encryption is easy, key management is not (3.5, 3.6)
Only one primary function per server (2.2.1)
Email encryption/filtering (4.2)
Secure code training and reviews
Data destruction (9.10)
Daily log reviews (10.6)
IDS (11.4) (challenging for smaller entities)
User awareness (12.6) and background checks (12.7)
Third-party contracts (12.8)
Technology Considerations to address PCI DSS
1. File Integrity monitoring/change reporting tools (10.5.5 & 11.5)
2. Database column-level encryption
3. Scanning tools
Nessus
4. Asymmetric encryption
PGP/GnuPG
5. Hashing: a powerful risk-mitigation and cost-saving option (see the sketch after this list)
6. Two-factor authentication (8.3)
7. NTP (10.4)
8. Policy baselines
9. Determine your policy approach
10. Incident Response Plan baselines
11. Outsourced IDS (expensive)
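The hashing item above can be illustrated with a small sketch: instead of storing the PAN, store a keyed one-way hash plus the last four digits. This is only an illustration of the idea, not a statement of what PCI DSS mandates in any particular environment; the helper function and ad hoc key handling shown are hypothetical.

    import hashlib
    import hmac
    import os

    def tokenize_pan(pan: str, secret_key: bytes) -> dict:
        # Keyed one-way hash of the full PAN; only the last four digits
        # are kept in readable form for receipts and customer service.
        digest = hmac.new(secret_key, pan.encode(), hashlib.sha256).hexdigest()
        return {"pan_hash": digest, "last4": pan[-4:]}

    # In practice the key would live under the key-management controls of
    # requirements 3.5/3.6, not be generated ad hoc like this.
    key = os.urandom(32)
    record = tokenize_pan("4111111111111111", key)
    print(record["last4"], record["pan_hash"][:16])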
SELF-ASSESSMENT

A set of 75 questions forms the Self-Assessment Questionnaire, based on the 204
requirements of the PCI DSS. The questionnaire can be obtained from the PCI Security
Standards Council for self-assessment.
Quarterly scans are required for all Level 1, 2, and 3 merchants and Level 1, 2, and 3
service providers. Annual scans are recommended for Level 4 merchants.
The scanning procedure should include all servers (web, application, email, DNS, etc.)
and network devices (firewalls, routers, WLAN, load balancers, etc.).
REFERENCES

More details on the requirements can be found at the following website:


https://pcisecuritystandards.org
6.6 BS 7799
BS7799
BS 7799 Part 1 was a standard originally published as BS 7799 by the British Standards
Institute (BSI) in 1995. It was written by the United Kingdom Government's Department
of Trade and Industry (DTI) and, after several revisions, was eventually adopted by ISO
as ISO/IEC 17799, "Information Technology - Code of practice for information security
management," in 2000. ISO 17799 was most recently revised in June 2005 and is
expected to be renamed ISO/IEC 27002 during 2007.
A second part to BS7799 was first published by BSI in 1999, known as BS 7799 Part 2,
titled "Information Security Management Systems - Specification with guidance for use."
BS 7799-2 focused on how to implement an Information security management system
(ISMS), referring to the information security management structure and controls
identified in ISO 17799. The 2002 version of BS 7799-2 introduced the Plan-Do-Check-Act
(PDCA) cycle (the Deming quality assurance model), aligning it with quality standards such as
ISO 9000. BS 7799 Part 2 was adopted by ISO as ISO/IEC 27001 in November 2005.
BS 7799 Part 3 was published in 2005, covering risk analysis and management. It aligns
with ISO 27001.

ISO17799
ISO/IEC 17799 provides best practice recommendations on information security
management for use by those who are responsible for initiating, implementing or
maintaining Information Security Management Systems (ISMS). Information security is
defined within the standard in the context of the C-I-A triad:
the preservation of confidentiality (ensuring that information is accessible only to those
authorized to have access), integrity (safeguarding the accuracy and completeness of
information and processing methods) and availability (ensuring that authorised users
have access to information and associated assets when required)
After the introductory sections, the 2005 version of the standard contains the following
twelve main sections:
1. Risk assessment and treatment - analysis of the organization's information
security risks
2. Security policy - management direction
3. Organization of information security - governance of information security
4. Asset management - inventory and classification of information assets
5. Human resources security - security aspects for employees joining, moving and
leaving an organization
6. Physical and environmental security - protection of the computer facilities
7. Communications and operations management - management of technical security
controls in systems and networks
8. Access control - restriction of access rights to networks, systems, applications,
functions and data
9. Information systems acquisition, development and maintenance - building
security into applications


10. Information security incident management - anticipating and responding appropriately to information security breaches
11. Business continuity management - protecting, maintaining and recovering
business-critical processes and systems
12. Compliance - ensuring conformance with information security policies, standards,
laws and regulations
Within each section, information security controls and their objectives are specified and
outlined. The information security controls are generally regarded as best practice means
of achieving those objectives. For each of the controls, implementation guidance is
provided.
ISO/IEC 17799 has directly equivalent national standards in countries such as Australia
and New Zealand (AS/NZS ISO/IEC 17799:2006), the Netherlands (NEN-ISO/IEC
17799:2002 nl), Sweden (SS 627799), Japan (JIS Q 27002), Spain (UNE 71501), the
United Kingdom (BS ISO/IEC 17799:2005) and Uruguay (UNIT/ISO 17799:2005).
7. Security in Software Development
7.1 The Software Life Cycle
SDLC is the process of developing information systems through investigation, analysis,
design, implementation and maintenance. SDLC is also known as information systems
development or application development.
SDLC Phases
Initiation Phase
Acquisition / Development Phase
Implementation Phase
Operations / Maintenance Phase
Disposition Phase

Functional & Non-functional security requirements & best practices


Functional Security Requirements
Security should begin at the requirements level. Security requirements must cover both
functional security (say, the use of applied cryptography) and emergent characteristics.
One way to cover the emergent security space is to build abuse cases. Similar to use
cases, abuse cases describe a system's behavior under attack, providing explicit coverage
of what should be protected, from whom, and for how long.


Security functional components express security requirements intended to counter threats
in the assumed operating environment. These requirements describe security properties
that users can detect by direct interaction with the system (i.e. input, output) or by the
system's response to stimulus.
Non-functional security requirements

The system shall comply with security requirements related to the domain, for
example HIPAA for healthcare.

The system shall provide secure client-to-web-server communication, using SSL
(Secure Sockets Layer) for data encryption and for client and server
authentication.

Where applicable, integration with operating system and directory/administration
services should be sought, such that the same username used for network logon in
daily work can also be used to log on to the application.

Design considerations & best practices


Design can be categorized into functional design and technical design.
1. Functional design

Update regulatory and compliance assumptions: Update the regulatory and compliance
assumptions and document them in the functional design document.

Identify functional assumptions: All the functional assumptions need to be identified
and documented in the functional design document.

Data classification: Data classification is a crucial step. Based on the sensitivity of data
being collected, stored and transmitted, the data should be classified. The results
determine to what extent the data needs to be secured.

Prepare system boundary document: This document identifies the boundary where the
control is transferred to or from the external system. This document helps identify areas
where extra security controls need to be put in place. It can also serve as an input to the
threat modeling document.

Review by business security executive: The functional document should be reviewed
and signed off by the business security executive after ensuring adherence to regulatory
and compliance policies.


2. Technical design

Identify and document technical assumptions: All the technical assumptions should be
identified and documented. For example: an e-mail address will serve as a user ID, or
only the last four digits of the Social Security number will be displayed.

Prepare misuse cases: A misuse case is the opposite of a use case. Its sole purpose
is to identify how a use case should not behave. A misuse case should capture the
type of attacks that can be made on the system and how the system should behave
in such situations. This is a relatively new approach, but it will go a long way in
addressing the security requirements of an application. A misuse case can also
help prepare test cases purely to test the security strength of the system.

Prepare a threat model: This is another newer approach, initiated by Microsoft. With
this, a threat model document (threat profile) captures the security strength of a system.
It identifies threats and vulnerabilities and helps provide a more accurate sense of the
security of the system.

Security patterns: Security patterns are like design patterns that are applied
toward information security. These patterns encapsulate security expertise in the
form of identified solutions and present issues and trade-offs in the usage of the
pattern. Architects might want to look at these patterns to see if they can use some
of the existing solutions, saving time and effort.

Data backup strategies: It is important to capture the data backup strategies in the
design phase: for example, the frequency of data backup, details of the backup media,
access privileges to the backup, and whether the backup should be encrypted.
These points will help maintain the confidentiality, availability and integrity of the
data.

Data transmission strategies: This is where we capture details around the data
transmission. Depending on the data classification, should the data be encrypted
during transmission? Is the data being decrypted and re-encrypted at any point? If
yes, then what kinds of security controls are in place at that point? Is the customer
information like Social Security number or credit card number partially masked?
This will prevent the data from being stolen in most kinds of attacks.

Data storage security requirements: These are influenced by the type of data.
Names and addresses can be stored in clear text, but sensitive data such as Social
Security numbers, driver's license numbers and credit card numbers should be
encrypted in the database. Data required for authentication should be given more
thought, such as applying one-way encryption.
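A minimal sketch of the "one-way encryption" idea for authentication data, assuming Python's standard library; the parameter choices are illustrative, not a recommendation for any particular system.

    import hashlib
    import hmac
    import os

    def hash_password(password: str):
        # Store only the salt and the derived hash, never the password itself.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return salt, digest

    def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return hmac.compare_digest(candidate, stored)

    salt, stored = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, stored))  # True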


Authentication strategy: This is a critical step, as this is the entry point of the
system. If this is compromised, the attacker will have unauthorized entry into the
system and can do a lot of harm. Some of the things to consider are strong
password policy, two-factor authentication for critical features such as funds
transfers, input validation strategies, account inactivity period, account locking,
password retrieval mechanism, and a username/password storage strategy.
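One of the items above, account locking, can be sketched in a few lines; the threshold and in-memory counter are hypothetical and would normally be policy-driven and persisted.

    MAX_FAILED_ATTEMPTS = 3   # hypothetical policy value
    failed_attempts = {}

    def record_failed_login(username: str) -> bool:
        """Return True if the account should now be locked."""
        failed_attempts[username] = failed_attempts.get(username, 0) + 1
        return failed_attempts[username] >= MAX_FAILED_ATTEMPTS

    def record_successful_login(username: str) -> None:
        failed_attempts.pop(username, None)   # reset the counter on success

    for _ in range(3):
        locked = record_failed_login("alice")
    print(locked)   # True after the third consecutive failure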

Session management strategy: Some of the details around session creation that need to
be identified are:

1. Session ID should be stored in a secure cookie


2. Session ID should be encrypted
3. Cookie should expire when the session is invalidated or the browser is closed
4. Application should force the browser to be closed when the user logs out or session
is invalidated.
These details should be captured in the design phase and should be documented in the
design document.
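The cookie attributes listed above can be expressed concretely. The sketch below uses Python's standard library and a hypothetical cookie name; it is only meant to show the flags a design document would call for, not a complete session manager.

    import secrets
    from http import cookies

    session_id = secrets.token_urlsafe(32)   # unpredictable session identifier

    cookie = cookies.SimpleCookie()
    cookie["SESSIONID"] = session_id
    cookie["SESSIONID"]["secure"] = True      # sent only over HTTPS
    cookie["SESSIONID"]["httponly"] = True    # not readable by client-side script
    cookie["SESSIONID"]["samesite"] = "Strict"
    # No max-age/expires attribute: a session cookie is discarded when the browser closes.

    print(cookie.output())   # the Set-Cookie header the server would emit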

Error handling: In most cases, a standard error page should be displayed in any
error situation. The application should trap all the system errors (such as missing
configuration file, ODBC error, database not working, etc.) or application errors
(such as invalid username/password) and redirect them to one standard error page.
Ideally, the list of possible error situations and the corresponding error messages
should be identified, and a generic error message for unknown or unidentified
errors with no technical information should be displayed to the user. Also, an
entry in the logs should be created with the time stamp and other relevant
information.
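A minimal sketch of the pattern described above, assuming a generic request handler; the log file name and message text are placeholders.

    import logging
    import traceback

    logging.basicConfig(filename="app.log", level=logging.ERROR)

    GENERIC_ERROR_PAGE = "An unexpected error occurred. Please try again later."

    def handle_request(handler, *args):
        try:
            return handler(*args)
        except Exception:
            # Full technical detail (with timestamp) goes to the log only;
            # the user sees a single generic message.
            logging.error("Unhandled error\n%s", traceback.format_exc())
            return GENERIC_ERROR_PAGE

    print(handle_request(lambda: 1 / 0))   # logs the traceback, returns the generic page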

Identify trust boundaries, trust levels, entitlements, encryption requirements:


This is crucial and should be identified during the design phase. Extra security
controls should be put in place where trust level is low. Trust levels and trust
boundaries are also identified in the threat modeling document. Similarly,
entitlements are important in identifying who has what level of access to what
resources in the system. Data classification will help determine what should be
encrypted. Encryption details such as encryption algorithm, hashing algorithm,
and key length should be identified and documented in the design document.

Design audit logs: You must identify in the design phase what kind of
information needs to be captured should there be an error. You should also
consider the log retention strategy. The focus of audit log design should be not
only to identify error situations but also to establish a pattern in case someone is
trying to hack into the application.


Prepare an infrastructure security best practices document addressing the
operating system, Web server, application server, database, FTP, and e-mail: This
document will help strengthen the security of the infrastructure and should be
prepared during the design phase.

Development & testing best practices


Development Stage security best practices
Identify and document application security requirements: The majority of the concerns
with respect to security of an application can be addressed during the development stage
itself. The software development team (either in-house or outsourced) should be
presented with the necessary security requirements. As an example, if the security policy
states that password complexity should be enabled for all user accounts, this control
cannot be enforced if the application does not support this feature. The software
development team should ensure that the security requirements are implemented in the
applications to be developed. The security requirements should be part of the RFP
provided to the application vendor, along with the functional requirements. This will
ensure that security features are considered at the product design and development stage.
Ideally, the application development should include controls in the following areas:
1. Authentication & Authorization
2. Auditing & Logging
3. Password controls
4. Input validation
5. Web application controls
6. Session management
7. Data encryption
8. Interaction with other applications
9. Error handling
10. Documentation


Testing Process
Testing is not an afterthought or something to cut back when the schedule gets tight. It is
an integral part of software development that needs to be planned. It is also important
that testing is done proactively, meaning that test cases are planned before coding starts
and are developed while the application is being designed and coded. A number of
testing patterns have also been developed. Testing is often the last opportunity to catch
application defects.
7.2 Role of security in testing
Risk based testing
Risk-based testing is based on software risks, and each test is intended to probe a specific
risk that was previously identified through risk analysis. A simple example is that in
many web-based applications, there is a risk of injection attacks, where an attacker fools
the server into displaying results of arbitrary SQL queries. A risk-based test might
actually try to carry out an injection attack, or at least provide evidence that such an
attack is possible.
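As a hedged illustration of such a risk-based test, the probe below submits a classic injection payload and looks for crude evidence in the response. The URL, parameter name, and error strings are hypothetical, and a real test plan would use the project's own tooling.

    import requests   # third-party HTTP client, assumed available

    PAYLOAD = "' OR '1'='1"
    resp = requests.get("https://app.example.com/search",
                        params={"q": PAYLOAD}, timeout=10)

    # Crude indicators only: database error text leaking into the page,
    # or an unexpectedly successful response to a malformed query.
    indicators = ("SQL syntax", "ODBC", "ORA-")
    suspicious = any(marker in resp.text for marker in indicators)
    print("Possible injection exposure" if suspicious else "No obvious evidence")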
Unit Testing
When planning unit tests for security, care should be taken not to underestimate the
possible security threats to components deep inside an application. It must be kept in
mind that the attacker's view of the software environment may be quite different from
that of the ordinary user. In other words, the attacker might be able to get at the software
in ways that an ordinary user might not.
Integration Testing
Integration testing focuses on a collection of subsystems, which may contain many
executable components. There are numerous software bugs that appear only because of
the way components interact, and this is true for security bugs as well as traditional ones.
Error handling is one more form of component interaction, and it must be addressed
during integration testing.

System Testing
In the late stages of a software development process, the entire system is available for
testing. This testing stage can (and should) involve integration and functional tests, but it
is treated separately here because the complete system is the artifact that will actually be
attacked. Furthermore, certain activities relevant to software security, such as stress
testing, are often carried out at the system level. Penetration testing is also carried out at


the system level, and when a vulnerability is found in this way there is tangible proof that
the vulnerability is real; a vulnerability that can be exploited during system testing will be
exploitable by attackers. Under resource constraints, these are the most important
vulnerabilities to fix, and they are also the ones that will be taken most seriously by
developers.
Role-based Security Testing
When you test a solution, keep in mind that a user might have read access but not write
access to the data. Test whether given users or groups have read access or write access.
You might want to add a user interface warning or functionality limitation to the solution
for such users or groups. Try to log on to the solution by using different user accounts or
user types. This may require knowledge of the security design of your enterprise.
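A toy sketch of this kind of role-based test, with a hypothetical permission table standing in for the enterprise security design:

    PERMISSIONS = {
        "viewer": {"read"},
        "editor": {"read", "write"},
    }

    def can(role: str, action: str) -> bool:
        return action in PERMISSIONS.get(role, set())

    # Test both directions: access that should succeed and access that must be denied.
    assert can("viewer", "read")
    assert not can("viewer", "write")
    assert can("editor", "write")
    print("role-based access checks passed")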

Role of Security in each phase


Role of Security in SDLC:

Initiation Phase

Security Categorization defines three levels (i.e., low, moderate, or high) of


potential impact on organizations or individuals should there be a breach of
security (a loss of confidentiality, integrity, or availability). Security
categorization standards assist organizations in making the appropriate selection
of security controls for their information systems.

Preliminary Risk Assessment results in an initial description of the basic


security needs of the system. A preliminary risk assessment should define the
threat environment in which the system will operate.

Acquisition / Development Phase

Risk Assessment: analysis that identifies the protection requirements for the
system through a formal risk assessment process. This analysis builds on the
initial risk assessment performed during the Initiation phase, but will be more in-depth and specific.

Security Functional Requirements Analysis: analysis of requirements that may
include the following components: (1) system security environment (i.e.,
enterprise information security policy and enterprise security architecture) and (2)
security functional requirements


Security Assurance Requirements Analysis: analysis of requirements that
address the developmental activities required and assurance evidence needed to
produce the desired level of confidence that the information security will work
correctly and effectively. The analysis, based on legal and functional security
requirements, will be used as the basis for determining how much and what kinds
of assurance are required.

Cost Considerations and Reporting determines how much of the development


cost can be attributed to information security over the life cycle of the system.
These costs include hardware, software, personnel, and training

Security Planning ensures those agreed upon security controls, planned or in


place, are fully documented. The security plan also provides a complete
characterization or description of the information system as well as attachments or
references to key documents supporting the agency's information security
program (e.g., configuration management plan, contingency plan, incident
response plan, security awareness and training plan, rules of behavior, risk
assessment, security test and evaluation results, system interconnection
agreements, security authorizations/accreditations, and plan of action and
milestones).

Security Control Development ensures that security controls described in the


respective security plans are designed, developed, and implemented. For
information systems currently in operation, the security plans for those systems
may call for the development of additional security controls to supplement the
controls already in place or the modification of selected controls that are deemed
to be less than effective.

Developmental Security Test and Evaluation ensures that security controls


developed for a new information system are working properly and are effective.
Some types of security controls (primarily those controls of a non-technical
nature) cannot be tested and evaluated until the information system is deployed;
these controls are typically management and operational controls.

Other Planning Components ensures that all necessary components of the


development process are considered when incorporating security into the life
cycle. These components include selection of the appropriate contract type,
participation by all necessary functional groups within an organization,
participation by the certifier and accreditor, and development and execution of
necessary contracting plans and processes.

Implementation Phase

Inspection and Acceptance ensures that the organization validates and verifies
that the functionality described in the specification is included in the deliverables.


System Integration ensures that the system is integrated at the operational site
where the information system is to be deployed for operation. Security control
settings and switches are enabled in accordance with vendor instructions and
available security implementation guidance.

Security Certification ensures that the controls are effectively implemented


through established verification techniques and procedures and gives organization
officials confidence that the appropriate safeguards and countermeasures are in
place to protect the organization's information system. Security certification also
uncovers and describes the known vulnerabilities in the information system.

Security Accreditation provides the necessary security authorization of an


information system to process, store, or transmit information that is required. This
authorization is granted by a senior organization official and is based on the
verified effectiveness of security controls to some agreed upon level of assurance
and an identified residual risk to agency assets or operations.

Operations / Maintenance Phase

Configuration Management and Control ensures adequate consideration of


the potential security impacts due to specific changes to an information system or
its surrounding environment. Configuration management and configuration
control procedures are critical to establishing an initial baseline of hardware,
software, and firmware components for the information system and subsequently
controlling and maintaining an accurate inventory of any changes to the system.

Continuous Monitoring ensures that controls continue to be effective in their


application through periodic testing and evaluation. Security control monitoring
(i.e., verifying the continued effectiveness of those controls over time) and
reporting the security status of the information system to appropriate agency
officials is an essential activity of a comprehensive information security program.

Disposition Phase

Information Preservation ensures that information is retained, as necessary, to


conform to current legal requirements and to accommodate future technology
changes that may render the retrieval method obsolete.

Media Sanitization ensures that data is deleted, erased, and written over as
necessary.

Hardware and Software Disposal ensures that hardware and software is


disposed of as directed by the information system security officer.


7.3 Understand the Application Environment and Security Controls


The application environment is the combination of hardware and software on which the
application is installed. Web server and application server configurations play a key role
in the security of a web application. These servers are responsible for serving content and
invoking applications that generate content. In addition, many application servers provide
a number of services that web applications can use, including data storage, directory
services, mail, messaging, and more. Failure to manage the proper configuration of your
servers can lead to a wide variety of security problems
Frequently, the web development group is separate from the group operating the site. In
fact, there is often a wide gap between those who write the application and those
responsible for the operations environment. Web application security concerns often span
this gap and require members from both sides of the project to properly ensure the
security of a site's application. There are a wide variety of server configuration problems
that can plague the security of a site. These include:
1) Unpatched security flaws in the server and application software
2) Server/application software flaws or misconfigurations that permit directory listing and directory traversal attacks
3) Unnecessary default, backup, or sample files, including scripts, applications, configuration files, and web pages
4) Improper file and directory permissions
5) Unnecessary services enabled, including content management and remote administration
6) Default accounts with their default passwords
7) Administrative or debugging functions that are enabled or accessible
8) Overly informative error messages
9) Misconfigured SSL certificates and encryption settings
10) Use of self-signed certificates to achieve authentication and man-in-the-middle protection
11) Use of default certificates
12) Improper authentication with external systems


Some of these problems can be detected with readily available security scanning tools.
Once detected, these problems can be easily exploited and result in total compromise of a
website. Successful attacks can also result in the compromise of backend systems
including databases and corporate networks. Having secure software and a secure
configuration are both required in order to have a secure site. The first step is to create a
hardening guideline for your particular web server and application server configuration.
This configuration should be used on all hosts running the application and in the
development environment as well. We recommend starting with any existing guidance
you can find from your vendor or those available from the various existing security
organizations such as OWASP, CERT, and SANS and then tailoring them for your
particular needs.
The hardening guideline should include the following topics:
1) Configuring all security mechanisms
2) Turning off all unused services
3) Setting up roles, permissions, and accounts, including disabling all default accounts or changing their passwords
4) Logging and alerts
Once your guideline has been established, use it to configure and maintain your servers.
If you have a large number of servers to configure, consider semi-automating or
completely automating the configuration process. Use an existing configuration tool or
develop your own. A number of such tools already exist. You can also use disk replication
tools such as Ghost to take an image of an existing hardened server, and then replicate
that image to new servers. Such a process may or may not work for you given your
particular environment. Keeping the server configuration secure requires vigilance. You
should be sure that the responsibility for keeping the server configuration up to date is
assigned to an individual or team. The maintenance process should include:
1) Monitoring the latest security vulnerabilities published
2) Applying the latest security patches
3) Updating the security configuration guideline
4) Regular vulnerability scanning from both internal and external perspectives
5) Regular internal reviews of the server's security configuration as compared to your configuration guide
6) Regular status reports to upper management documenting overall security posture


7.4 Databases and Data Warehousing Vulnerabilities, Threats and Protections


Databases:
A database is a collection of records, or pieces of knowledge.
E.g., SQL Server, Oracle, DB2, etc.
Database Security:
Basically database security can be broken down into the following key points of interest.

Server Security

Database Connections

Table Access Control

Restricting Database Access

Server Security
Server security is the process of limiting actual access to the database server itself. The
basic idea is this: "You can't access what you can't see."
Trusted IP addresses

Every server should be configured to allow only trusted IP addresses. If it's a back end
for a web server, then only that web server's address should be allowed to access that
database server. If the database server is supplying information to a homegrown
application that is running on the internal network, then it should only answer to
addresses from within the internal network.
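In practice this restriction is usually enforced at the network or listener level, but the idea can be sketched in application code as well; the address ranges below are hypothetical.

    import ipaddress

    # Hypothetical allow-list: only the internal web tier may reach the database listener.
    TRUSTED_NETWORKS = [ipaddress.ip_network("10.0.1.0/24")]

    def is_trusted(client_ip: str) -> bool:
        addr = ipaddress.ip_address(client_ip)
        return any(addr in net for net in TRUSTED_NETWORKS)

    print(is_trusted("10.0.1.17"))    # True  - web server in the trusted subnet
    print(is_trusted("203.0.113.9"))  # False - connection would be refused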
Database Connections

These days, with the number of dynamic applications, it becomes tempting to allow
immediate unauthenticated updates to a database. Ensure that you remove any possible
SQL code from user-supplied input. If a normal user should never be inputting it, don't
allow the data to ever be submitted.
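A minimal sketch of keeping user-supplied SQL out of the query, using parameter binding with Python's built-in sqlite3 module; the table and lookup are illustrative only.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    def find_user(user_input: str):
        # The value is bound as a parameter, so embedded SQL is treated as data.
        cur = conn.execute("SELECT name, email FROM users WHERE name = ?", (user_input,))
        return cur.fetchall()

    print(find_user("alice"))                          # normal lookup
    print(find_user("alice'; DROP TABLE users; --"))   # returns [], nothing is executed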
Table Access Control

Table access control is probably one of the most overlooked forms of database security
because of the inherent difficulty in applying it. Properly using table access control
requires collaboration between the system administrator and the database developer.


There are many ways to prevent open access from the Internet, and each database system
(as well as each OS) has its own set of unique features. A few methods are:

Trusted IP addresses - UNIX servers are configured to answer only pings from a
list of trusted hosts. In UNIX, this is accomplished by configuring the rhosts file,
which restricts server access to a list of specific users.
Server account disabling - If you suspend the server ID after three password
attempts, attackers are thwarted. Without user ID suspension, an attacker can run
a program that generates millions of passwords until it guesses the user ID and
password combination.
Special tools - Products such as RealSecure by ISS send an alert when an external
server is attempting to breach your system's security.

Oracle has a wealth of authentication methods:

Kerberos security - This popular "ticket"-based authentication system sidesteps


several security risks.

Virtual private databases - VPD technology can restrict access to selected rows of
tables.

Role-based security - Object privileges can be grouped into roles, which can then
be assigned to specific users.

Grant-execute security - Execution privileges on procedures can be tightly coupled


to users. When a user executes the procedures, they gain database access, but only
within the scope of the procedure.

Authentication servers - Secure authentication servers provide positive


identification for external users.

Port access security - All Oracle applications are directed to listen at a specific port
number on the server. Like any standard HTTP server, the Oracle Web Listener can be
configured to restrict access.

Data Warehousing:
Data warehouse (DW) is a collection of integrated databases designed to support
managerial decision-making and problem-solving functions. A DW is an integral part of
an enterprise-wide decision support system and does not ordinarily involve data updating. It
empowers end-users to perform data access and analysis.
Security in Data Warehousing:


The security requirements of the DW environment are not unlike those of other
distributed computing systems. Thus, having an internal control mechanism to assure the
confidentiality, integrity and availability of data in a distributed environment is of
paramount importance. Unfortunately, most data warehouses are built with little or no
consideration given to security during the development phase. Achieving proactive
security requirements of DW is a seven-phase process: 1) identifying data, 2) classifying
data, 3) quantifying the value of data, 4) identifying data security vulnerabilities, 5)
identifying data protection measures and their costs, 6) selecting cost-effective security
measures, and 7) evaluating the effectiveness of security measures. These phases are part
of an enterprise-wide vulnerability assessment and management program.
Phase One - Identifying the Data
The first security task is to identify all digitally stored corporate data placed in the DW.
This is an often ignored, but critical phase of meeting the security requirements of the
DW environment since it forms the foundation for subsequent phases. It entails taking a
complete inventory of all the data that is available to the DW end-users. The installed
data monitoring software -- an important component of the DW -- can provide
accurate information about all databases, tables, columns, rows of data, and profiles of
data residing in the DW environment as well as who is using the data and how often they
use the data. A manual procedure would require preparing a checklist of the same
information described above. Whether the required information is gathered through an
automated or a manual method, the collected information needs to be organized,
documented and retained for the next phase.
Phase Two - Classifying the Data
Classifying all the data in the DW environment is needed to satisfy security requirements
for data confidentiality, integrity and availability in a prudent manner. In some cases, data
classification is a legally mandated requirement. Performing this task requires the
involvement of the data owners, custodians, and the end-users. Data is generally
classified on the basis of criticality or sensitivity to disclosure, modification, and
destruction. The sensitivity of corporate data can be classified as:

PUBLIC (Least Sensitive Data): For data that is less sensitive than confidential
corporate data. Data in this category is usually unclassified and subject to public
disclosure by laws, common business practices, or company policies. All levels of
the DW end-users can access this data (e.g., audited financial statements,
admission information, phone directories, etc.).

CONFIDENTIAL (Moderately Sensitive Data): For data that is more sensitive


than public data, but less sensitive than top secret data. Data in this category is not
subject to public disclosure. The principle of least privilege applies to this data
classification category, and access to the data is limited to a need-to-know basis.


Users can only access this data if it is needed to perform their work successfully
(e.g., personnel/payroll information, medical history, investments, etc.).

TOP SECRET (Most Sensitive Data): For data that is more sensitive than
confidential data. Data in this category is highly sensitive and mission-critical.
The principle of least privilege also applies to this category -- with access
requirements much more stringent than those of the confidential data. Only high-level DW users (e.g., unlimited access) with proper security clearance can access
this data (e.g., R&D, new product lines, trade secrets, recruitment strategy, etc.).
Users can access only the data needed to accomplish their critical job duties.

Regardless of which categories are used to classify data on the basis of sensitivity, the
universal goal of data classification is to rank data categories by increasing degrees of
sensitivity so that different protective measures can be used for different categories.
Classifying data into different categories is not as easy as it seems. Certain data
represents a mixture of two or more categories depending on the context used (e.g., time,
location, and laws in effect). Determining how to classify this kind of data is both
challenging and interesting.
Phase Three - Quantifying the Value of Data
In most organizations, senior management demands to see the smoking gun (e.g., cost-vs-benefit figures, or hard evidence of committed fraud) before committing corporate funds
to support security initiatives. Cynical managers will be quick to point out that they deal
with hard reality -- not soft variables concocted hypothetically. Quantifying the value of
sensitive data warranting protective measures is as close to the smoking gun as one can
get to trigger senior management's support and commitment to security initiatives in the
DW environment.
Phase Four - Identifying Data Vulnerabilities
This phase requires the identification and documentation of vulnerabilities associated
with the DW environment.
Some common vulnerabilities of DW include the following:

In-built DBMS Security: Most data warehouses rely heavily on in-built security
that is primarily VIEW-based. The VIEW-based security is inadequate for the DW
because it can be easily bypassed by a direct dump of data. It also does not protect
data during the transmission from servers to clients -- exposing the data to
unauthorized access. The security feature is equally ineffective for the DW
environment where the activities of the end-users are largely unpredictable.


DBMS Limitations: Not all database systems housing the DW data have the
capability to concurrently handle data of different sensitivity levels. Most
organizations, for instance, use one DW server to process top secret and
confidential data at the same time. However, the programs handling top secret
data may not prevent leaking the data to the programs handling the
confidential data, and limited DW users authorized to access only the confidential
data may not be prevented from accessing the top secret data.

Dual Security Engines: Some data warehouses combine the in-built DBMS
security features with the operating system access control package to satisfy their
security requirements. Using dual security engines tends to present opportunity
for security lapses and exacerbate the complexity of security administration in the
DW environment.

Inference Attacks: Different access privileges are granted to different DW users.


All users can access public data, but only a select few would presumably access
confidential or top secret data. Unfortunately, general users can access protected
data by inference without having direct access to the protected data. Sensitive
data is typically inferred from seemingly non-sensitive data. Carrying out direct
and indirect inference attacks is a common vulnerability in the DW environment.

Availability Factor: Availability is a critical requirement upon which the shared


access philosophy of the DW architecture is built. However, availability
requirement can conflict with or compromise the confidentiality and integrity of
the DW data if not carefully considered.

Human Factors: Accidental and intentional acts such as errors, omissions,


modifications, destruction, misuse, disclosure, sabotage, frauds, and negligence
account for most of the costly losses incurred by organizations. These acts
adversely affect the integrity, confidentiality, and availability of the DW data.

Insider Threats: The DW users (employees) represent the greatest threat to


valuable data. Disgruntled employees with legitimate access could leak secret
data to competitors and publicly disclose certain confidential human resources
data. Rogue employees can also profit from using strategic corporate data in the
stock market before such information is released to the public. These activities
cause (a) strained relationships with business partners or government entities, (b)
loss of money from financial liabilities, (c) loss of public confidence in the
organization, and (d) loss of competitive edge.

Outsider Threats: Competitors and other outside parties pose a similar threat to the
DW environment as unethical insiders. These outsiders engage in electronic
espionage and other hacking techniques to steal, buy, or gather strategic corporate
data in the DW environment. Risks from these activities include (a) negative
publicity which decimates the ability of a company to attract and retain customers


or market shares, and (b) loss of continuity of DW resources which negates user
productivity. The resultant losses tend to be higher than those of insider threats.

Natural Factors: Fire, water, and air damages can render both the DW servers and
clients unusable. Risks and losses vary from organization to organization, depending mostly on location and contingency factors.

Utility Factors: Interruption of electricity and communications service causes


costly disruption to the DW environment. These factors have a lower probability
of occurrence, but tend to result in excessive losses.

A comprehensive inventory of vulnerabilities inherent in the DW environment needs to be
documented and organized (e.g., as major or minor) for the next phase.
Phase Five - Identifying Protective Measures and Their Costs
Vulnerabilities identified in the previous phase should be considered in order to
determine cost-effective protection for the DW data at different sensitivity levels. Some
protective measures for the DW data include:

The Human Wall: Employees represent the front-line of defense against security
vulnerabilities in any decentralized computing environment, including DW.
Addressing employee hiring, training (security awareness), periodic background
checks, transfers, and termination as part of the security requirements is helpful in
creating security-conscious DW environment. This approach effectively treats the
root causes, rather than the symptoms, of security problems. Human resources
management costs are easily measurable.

Access Users Classification: Classify data warehouse users as 1) General Access


Users, 2) Limited Access Users, and 3) Unlimited Access Users for access control
decisions.
Access Controls: Use access controls policy based on principles of least privilege
and adequate data protection. Enforce effective and efficient access control
restrictions so that the end-users can access only the data or programs for which
they have legitimate privileges. Corporate data must be protected to the degree
consistent with its value. Users need to obtain a granulated security clearance
before they are granted access to sensitive data. Also, access to the sensitive data
should rely on more than one authentication mechanism. These access controls
minimize damage from accidental and malicious attacks.

Integrity Controls: Use a control mechanism to a) prevent all users from updating
and deleting historical data in the DW, b) restrict data merge access to authorized
activities only, c) immunize the DW data from power failures, system crashes and
corruption, d) enable rapid recovery of data and operations in the event of
disasters, and e) ensure the availability of consistent, reliable and timely data to


the users. These are achieved through the OS integrity controls and well tested
disaster recovery procedures.

Data Encryption: Encrypting sensitive data in the DW ensures that the data is
accessed on an authorized basis only. This nullifies the potential value of data
interception, fabrication and modification. It also inhibits unauthorized dumping
and interpretation of data, and enables secure authentication of users. In short,
encryption ensures the confidentiality, integrity, and availability of data in the DW
environment.

Partitioning: Use a mechanism to partition sensitive data into separate tables so


that only authorized users can access these tables based on legitimate needs.
Partitioning scheme relies on a simple in-built DBMS security feature to prevent
unauthorized access to sensitive data in the DW environment. However, use of
this method presents some data redundancy problems.

Development Controls: Use quality control standards to guide the development,


testing and maintenance of the DW architecture. This approach ensures that
security requirements are sufficiently addressed during and after the development
phase. It also ensures that the system is highly elastic (e.g., adaptable or
responsive to changing security needs).

The estimated costs of each security measure should be determined and documented for
the next phase. Commercial packages (e.g., CORA, RANK-IT, BUDDY SYSTEM,
BDSS, BIA Professional, etc.) and in-house developed applications can help in
identifying appropriate protective measures for known vulnerabilities, and quantifying
their associated costs or fiscal impact. Measuring the costs usually involves determining
the development, implementation, and maintenance costs of each security measure.
Phase Six - Selecting Cost-Effective Security Measures
All security measures involve expenses, and security expenses require justification. This
phase relies on the results of previous phases to assess the fiscal impact of corporate data
at risk, and select cost-effective security measures to safeguard the data against known
vulnerabilities.
Phase Seven - Evaluating the Effectiveness of Security Measures
Evaluating the effectiveness of security measures should be conducted continuously to
determine whether the measures are: 1) small, simple and straightforward, 2) carefully
analyzed, tested and verified, 3) used properly and selectively so that they do not exclude
legitimate accesses, 4) elastic so that they can respond effectively to changing security
requirements, and 5) reasonably efficient in terms of time, memory space, and user-centric activities so that they do not adversely affect the protected computing resources. It


is equally important to ensure that the DW end-users understand and embrace the
propriety of security measures through an effective security awareness program. The data
warehouse administrator (DWA) with the delegated authority from senior management is
responsible for ensuring the effectiveness of security measures.
The seven phases of systematic vulnerability assessment and management program are
helpful in averting underprotection and overprotection (two undesirable security
extremes) of the DW data. This is achieved through the eventual selection of cost-effective security measures which ensure that different categories of corporate data are
protected to the degree necessary.
7.5 Application & System Development Knowledge Security-Based Systems
Expert or knowledge-based systems:
Expert systems, also called knowledge-based systems, use artificial intelligence (AI) to
solve problems.
Artificial Intelligence software uses nonnumerical algorithms to solve complex
problems, recognize hidden patterns, prove theorems, play games, mine data, and help in
forecasting and diagnosing a range of issues. The type of computation done by AI
software cannot be accomplished by straightforward analysis and regular programming
logic techniques.
Expert systems emulate human logic to solve problems that would usually require human
intelligence and intuition. These systems represent expert knowledge as data or rules
within the software of a system, and this data and these rules are called upon when it is
necessary to solve a problem. Knowledge-based systems collect data of human know-how and hold it in some type of database. These fragments of data are used to reason
through a problem.
A regular program can deal with inputs and parameters only in the ways that it has been
designed and programmed to. Although a regular program can calculate the mortgage
payments of a house over 20 years at an 8 percent interest rate, it cannot necessarily
forecast the placement of stars in 100 million years because of all the unknowns and
possible variables that come into play. Although both programs, a regular program and
an expert system, have a finite set of information available to them, the expert system
will attempt to think like a person, reason through different scenarios, and provide an
answer even without all the necessary data. Conventional programming deals with
procedural manipulation of data, whereas humans attempt to solve complex problems
using abstract and symbolic approaches.
A book may contain a lot of useful information, but a person has to read that book,
interpret its meaning, and then attempt to use those interpretations within the real world.
This is what an expert system attempts to do.
Professionals in the field of AI develop techniques that provide the modeling of
information at higher levels of abstraction. The techniques are part of the languages and
tools used, which enable programs to be developed that closely resemble human logic.
The programs that can emulate human expertise in specific domains are called expert
systems or knowledge-based systems.
An expert system is "a computer program containing a knowledge base and a set of algorithms and rules used to infer new facts from knowledge and incoming data".
Rule-based programming is a common way of developing expert systems. The rules are
based on if-then logic units. The rules specify a set of actions to be performed for a given
situation. This is one way that expert systems are used to find patterns, which is called
pattern matching. A mechanism, called the inference engine, automatically matches
facts against patterns and determines which rules are applicable. The actions of the
corresponding rules are executed when the inference engine is instructed to begin
execution.
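As a rough illustration of this match-fire cycle (the rules, facts, and names below are invented for the example and belong to no particular product), a minimal forward-chaining inference engine can be sketched in a few lines of Python:

# Rules are (antecedents, consequent) pairs of simple if-then logic units.
RULES = [
    ({"fever", "rash"}, "possible_allergy"),
    ({"possible_allergy", "diabetes"}, "refer_to_specialist"),
]

def infer(facts):
    """Repeatedly match rules against the known facts and assert new facts."""
    facts = set(facts)
    changed = True
    while changed:                      # keep going until no rule can fire
        changed = False
        for antecedents, consequent in RULES:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)   # the rule "fires" and adds a new fact
                changed = True
    return facts

print(infer({"fever", "rash", "diabetes"}))

Real expert-system shells add conflict resolution, explanation facilities, and far larger rule bases, but the match-fire loop above is the core job of the inference engine.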
For example, let's say Dr. Gorenz is puzzled by a patient's symptoms and is unable to
match the problems the patient is having to a specific ailment and find the right cure. So
he uses an expert system to help him in his diagnosis. Dr. Gorenz can initiate the expert
system, which will then take him through question-and-answer scenarios. The expert
system will use the information gathered through this interaction, and it will go step by
step through the facts, looking for patterns that can be tied to known diseases, ailments,
and medical issues. Although Dr. Gorenz is very smart and one of the top doctors in his
field, he cannot necessarily recall all possible diseases and ailments. The expert system
can analyze this information because it is working off of a database that has been stuffed
full of medical information that can fill up several libraries.
As the expert system goes through the medical information, it may see that the patient
had a case of severe hives six months ago, had a case of ringing in the ears and blurred
vision three months ago, and has a history of diabetes. The system will look at the
patient's recent complaints of joint aches and tiredness. With each finding, the expert
system digs deeper, looking for further information, and then uses all the information
obtained and compares it with the knowledge base that is available to it. In the end, the
expert system returns a diagnosis to Dr. Gorenz that says that the patient is suffering from
a rare disease found only in Brazil that is caused by a specific mold that grows on
bananas. Because the patient has diabetes, his sensitivity is much higher to this
contaminant. The system spits out the necessary treatment.
Then Dr. Gorenz marches back into the room where the patient is waiting and explains
the problem, protecting his reputation as a really smart doctor. The system not
only uses a database of facts, but also collects a wealth of knowledge from experts in a
specific field. This knowledge is captured by using interactive tools that have been
engineered specifically to capture human knowledge. This knowledge base is then
transferred to automated systems that help in human decisions by offering advice, free experts from repetitive routine decisions, ensure that decisions are made consistently and more quickly, and help a company retain its organizational expertise even as employees come and go.
An expert system usually consists of two parts: an inference engine and a knowledge
base. The inference engine handles the user interface, external files, scheduling, and
program-accessing capabilities. The knowledge base contains data pertaining to a specific
problem or domain. Expert systems use the inference engine to decide how to execute a
program or how the rules should be initiated and followed. The inference engine of a
knowledge-based system provides the necessary rules for the system to take the original
facts and combine them to form new facts.
The systems employ AI programming languages to allow for real-world decision making.
The system is built by a knowledge system builder (programmer), a knowledge engineer
(analyst), and subject matter expert(s). The system is built on facts, rules of thumb, and
the experts' advice. The information gathered from the expert(s) during the development
of the system is kept in a knowledge base and is used during the question-and-answer
session with the end user. The system works as a consultant to the end user and can
recommend several alternative solutions by considering competing hypotheses at the
same time. Expert systems are commonly used to automate a security log review for an
IDS.
Application Vulnerabilities and Threats
Applications use a programming language (e.g., ASP, PHP, Java, .NET, Perl, or C) to implement business logic and serve the client.
A web application vulnerability is a vulnerability in the web application itself. It can be identified by:
Source code review
Application testing
Automatic scanning
Manual testing
OWASP TOP TEN WEB APPLICATION VULNERABILITIES ARE:


A1 Unvalidated Input
Information from web requests is not validated before being used by a web application.
Attackers can use these flaws to attack backend components through a web application.
A2 Broken Access Control
Restrictions on what authenticated users are allowed to do are not properly enforced.
Attackers can exploit these flaws to access other users' accounts, view sensitive files, or
use unauthorized functions.
A3 Broken Authentication and Session Management
Account credentials and session tokens are not properly protected. Attackers that can
compromise passwords, keys, session cookies, or other tokens can defeat authentication
restrictions and assume other users' identities.
A4 Cross Site Scripting
The web application can be used as a mechanism to transport an attack to an end user's
browser. A successful attack can disclose the end user's session token, attack the local
machine, or spoof content to fool the user.
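A common defence, sketched below with Python's standard html module (the page fragment and parameter name are hypothetical), is to escape user-supplied input before echoing it back, so an injected script is rendered as harmless text rather than executed:

import html

def render_greeting(user_supplied_name):
    # html.escape converts <, >, & and quotes into entities, so an injected
    # <script> payload is displayed literally instead of running in the browser.
    return "<p>Hello, {}!</p>".format(html.escape(user_supplied_name))

print(render_greeting('<script>alert("xss")</script>'))
# <p>Hello, &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;!</p>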
A5 Buffer Overflow
Web application components in some languages that do not properly validate input can be
crashed and, in some cases, used to take control of a process. These components can
include CGI, libraries, drivers, and web application server components.
A6 Injection Flaws
Web applications pass parameters when they access external systems or the local
operating system. If an attacker can embed malicious commands in these parameters, the
external system may execute those commands on behalf of the web application.
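The classic case is SQL injection. The hedged sketch below (the table and data are made up, using Python's built-in sqlite3) contrasts a query built by string formatting with a parameterized query that keeps attacker-supplied input from being interpreted as SQL:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

attacker_input = "x' OR '1'='1"

# Vulnerable: the input is pasted into the SQL text, so the OR clause
# becomes part of the query and every row is returned.
unsafe = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % attacker_input).fetchall()

# Safer: the driver passes the value separately from the SQL text, so the
# payload is treated as a literal string and matches nothing.
safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (attacker_input,)).fetchall()

print(unsafe)   # [('alice', 's3cr3t')]  - injection succeeded
print(safe)     # []                     - input treated as data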
A7 Improper Error Handling
Error conditions that occur during normal operation are not handled properly. If an
attacker can cause errors to occur that the web application does not handle, they can gain
detailed system information, deny service, cause security mechanisms to fail, or crash the
server.
A8 Insecure Storage
Web applications frequently use cryptographic functions to protect information and
credentials. These functions and the code to integrate them have proven difficult to code
properly, frequently resulting in weak protection.
A9 Application Denial of Service
Attackers can consume web application resources to a point where other legitimate users
can no longer access or use the application. Attackers can also lock users out of their
accounts or even cause the entire application to fail.
A10 Insecure Configuration Management
Having a strong server configuration standard is critical to a secure web application.
These servers have many configuration options that affect security and are not secure out
of the box.

7.6 System Vulnerabilities and Threats


System vulnerabilities are weaknesses in the software or hardware on a server that can be
exploited to gain access to or shut down a network.
Once security holes or software bugs that give access to a network are identified, the cracker sneaks in and gains information. Adding a component to a system increases the overall insecurity, since the new component may itself be broken into.

Protection of the Web server resources


Program File Shielding: The shielding protects the Web server executables and
configuration files from being modified or deleted.
Data File Shielding: Access to the Web server data files is restricted to the process running the executable. Since the executable is shielded, it is guaranteed that no other process will access the data files. In addition, the static Web pages are also shielded. Any attempt to modify or delete these pages is prohibited. The only program that can modify these files is the predefined Web-authoring tool, run only by the predetermined Web master.
Registry Shielding: To function properly, the Web server relies on settings stored in the
system registry. If the intruder modifies the appropriate registry entries, he can affect the
operation of the Web server. The Registry shielding makes sure that the correct level of
read/write access is granted only to the appropriate processes.
Service Shielding: To prevent denial-of-service attacks, the Web server shield will
include rules that prevent any attempt to stop the Web server service or change its startup
mode.
User Shielding: The shield makes sure that the privileges of the users under which the
Web server runs cannot be modified. This eliminates the possibility of escalating the
privileges of the Web server user, which is a common goal of intruders. The shield also
prevents changing the user under which it is running. If, for example, the user was
changed to Administrator, ensuing attacks could be harmful since the attacker could
execute code with Administrator rights.

Protection against malicious use of the Web server


Hackers exploit Web server and application vulnerabilities to execute malicious commands or access data. Since the normal behavior of the Web server is well defined, most deviations from it can be identified and prevented.
To prevent vulnerabilities:
Multiple layers of security to protect against various attack techniques, preventing exploits in real time before they execute.
Protection of Web server applications.
Protection against all known and unknown attacks against the operating system.
Accurate identification of attacks to avoid generating numerous false positives.
The ability to monitor incoming HTTP communication and other incoming communication such as SMTP/FTP/DNS.
Receiving new updates in a seamless fashion, requiring only minimal configuration.
Easily deploying the solution and its updates across multiple servers.

Network Vulnerabilities
Network vulnerabilities are present in every system. Network technology advances so
rapidly that it can be very difficult to eradicate vulnerabilities altogether; the best one can
hope for, in many cases, is simply to minimize them. Networks are vulnerable to
slowdowns due to both internal and external factors. Internally, networks can be affected by overextension and bottlenecks; externally, by threats such as DoS/DDoS attacks and network data interception. The execution of arbitrary commands can lead to system malfunction, slowed performance, and even failure. Indeed, total system failure is the largest threat posed by a compromised system, so understanding possible vulnerabilities is critical for administrators.
Internal network vulnerabilities result from overextension of bandwidth (user needs
exceeding total resources) and bottlenecks (user needs exceeding resources in specific
network sectors). These problems can be addressed by network management systems and
utilities such as traceroute, which allow administrators to pinpoint the location of network
slowdowns. Traffic can then be rerouted within the network architecture to increase speed
and functionality.

External Network Vulnerabilities


DoS (denial-of-service) and DDoS (distributed denial-of-service) attacks are external attacks mounted by a single attacker or by a number of coordinated attackers, respectively. Designed to slow down or disable networks altogether, these attacks are
among the most serious threats that networks face. Administrators must use tools to
monitor network performance in order to catch these threats as soon as possible. Many
monitoring systems are configured to send alarms or alerts to administrators when such
attacks occur, allowing for network access by intruders to be quickly terminated.
Data interception is another of the most common network vulnerabilities, for both LANs
and WLANs. Hackers within range of a WLAN workstation can infiltrate a secure
session, and monitor or change the network data for the purpose of accessing sensitive
information or altering the operation of the network. User authentication systems are used
to keep such interception from occurring. Firewalls can keep unauthorized users from
accessing the network in the first place, while base station discovery scans allow for the
rooting out of intruders on a given network.

Threats In Networks

Precursors to attack:
Port scan
Social engineering
Reconnaissance
OS and application fingerprinting

Authentication failures:
Impersonation
Guessing
Eavesdropping
Spoofing
Session hijacking
Man-in-the-middle attack

Programming flaws:
Buffer overflow
Addressing errors
Parameter modification, time-of-check to time-of-use errors
Server-side include
Cookie
Malicious active code: JavaScript, ActiveX
Malicious code: virus, worm, Trojan horse
Malicious typed code

Confidentiality:
Protocol flaw
Eavesdropping
Passive wiretap
Misdelivery
Exposure within the network
Traffic flow analysis
Cookie

Integrity:
Protocol flaw
Active wiretap
Impersonation
Falsification of message
Noise
Web site defacement
DNS attack

Availability:
Protocol flaw
Transmission or component failure
Connection flooding, e.g., echo-chargen, ping of death, smurf, syn flood
DNS attack
Traffic redirection
Distributed denial of service

Examples of System Vulnerabilities


Syn Flood:
This attack uses the TCP protocol suite, making the session-oriented nature of these
protocols work against the victim.

Host A sends a large number of SYN packets requesting a connection.
Host B responds to each request with a SYN/ACK packet.

Host A never responds with the ACK packet.

Host B is stuck waiting for a huge number of ACK packets that never arrive.
While flooded, no new connections are allowed.

Occasionally packets get lost or damaged in transmission. The destination maintains a queue called the SYN_RECV connections, tracking those items for which a SYN/ACK has been sent but no corresponding ACK has yet been received. Normally, these connections are completed in a short time. If the SYN/ACK (2) or the ACK (3) packet is lost, eventually the destination host will time out the incomplete connection and discard it from its waiting queue.
The attacker can deny service to the target by sending many SYN requests and never responding with ACKs, thereby filling the victim's SYN_RECV queue. Typically, the SYN_RECV queue is quite small, such as 10 or 20 entries. Because of potential routing delays in the Internet, typical holding times for the SYN_RECV queue can be minutes. So the attacker needs only to send a new SYN request every few seconds and it will fill the queue.
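The arithmetic behind this is simple; as a rough sketch (the queue size and holding time below are the illustrative figures from the paragraph above, not measured values):

QUEUE_SIZE = 20      # small SYN_RECV backlog, as described above
HOLD_TIME = 60.0     # seconds a half-open connection may wait before timing out

# To keep the queue permanently full, the attacker only has to replace
# entries as fast as they time out.
syn_rate_needed = QUEUE_SIZE / HOLD_TIME
print(syn_rate_needed)   # ~0.33 spoofed SYN packets per second is enough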
Ping of Death:
A ping of death is a simple attack. Since ping requires the recipient to respond to the ping
request, all the attacker needs to do is send a flood of pings to the intended victim. The
attack is limited by the smallest bandwidth on the attack route. If the attacker is on a 10-megabyte (MB) connection and the path to the victim is 100 MB or more, the attacker
cannot mathematically flood the victim alone. But the attack succeeds if the numbers are
reversed: The attacker on a 100-MB connection can easily flood a 10-MB victim. The
ping packets will saturate the victim's bandwidth.
Example:
Ping: a signal sent out to see if a machine is up and running on the Internet.
You receive a message to verify that a machine is actually active and receiving connections or transmissions.
This is typically a small signal.
Maximum Ethernet packet size: 1,500 bytes
Maximum packet size for TCP/IP: 65,510 bytes
One large ping data packet (65,510 bytes) is fragmented for transport into 44 smaller packets (1,499 data bytes each, plus 1 value assigned for reassembling in order); the arithmetic is checked in the short sketch below.
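The fragmentation figure can be verified directly; this sketch simply reproduces the division used in the example above, following the example's own model of 1,499 data bytes plus 1 reassembly value per fragment:

import math

payload = 65_510            # one oversized ping data packet, in bytes
per_fragment = 1_500 - 1    # 1,499 data bytes; 1 value reserved for reassembly order

print(math.ceil(payload / per_fragment))   # 44 fragments, matching the example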

7.7 Webservices security and SOA


Webservices
A Web service is a software system designed to support interoperable machine-to-machine
interaction over a network. Web services are frequently just application programming
interfaces (API) that can be accessed over a network, such as the Internet, and executed
on a remote system hosting the requested services.

SOAP: An XML-based, extensible message envelope format, with "bindings" to underlying protocols (e.g., HTTP, SMTP and XMPP).

WSDL: An XML format that allows service interfaces to be described, along with
the details of their bindings to specific protocols. Typically used to generate server
and client code, and for configuration.

UDDI: A protocol for publishing and discovering metadata about Web services, to
enable applications to find Web services, either at design time or runtime.

WS-Security: Defines how to use XML Encryption and XML Signature in SOAP
to secure message exchanges.
WS-ReliableMessaging: A protocol for reliable messaging between two Web services.

WS-Security
WS-Security is a specification that addresses end-to-end, message-based security for SOAP.
It does not rely on transport-level security.
It defines a standard set of SOAP extensions that can be used when building secure Web services to implement integrity and confidentiality.
WS-Security relies on standards such as XML Encryption and XML Signature.

WS-Security addresses:
Message privacy, through encrypting the message (XML Encryption)
Message integrity, through signing the message (XML Signature)
Proof of origin, by including the user identity as part of the message (authentication)

The SOAP specification supports extensions to SOAP headers, and SOAP message security is implemented as extensions to those headers.

WS-Security defines:
XML for security tokens, signatures, and encrypted data
How to include these security elements as part of SOAP headers

SECURITY TOKENS: claims about the originator of the request, containing the information with which the user can be authenticated.
Username token: username, or username and password
Binary token: X509 certificates, Kerberos
XML tokens: SAML
User-defined tokens


SIGNATURE

SOAP body, Security token or both

Provides authenticity and integrity of the message

Based on XML Signature specification

ENCRYPTION

All or part of SOAP message


Provides confidentiality on all or parts of the message
Based on XML Encryption specification

WS-Security doesn't specify anything with respect to security token generation and validation. It just specifies how the token is to be included in the SOAP message.
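As a minimal sketch of what such a header looks like (the namespace URIs follow the OASIS WS-Security 1.0 names; the credentials and the body operation are invented, and a real toolkit such as WSS4J or a platform SOAP stack would normally generate this), the Python snippet below builds a SOAP envelope carrying a UsernameToken inside its Security header:

import xml.etree.ElementTree as ET

SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
WSSE = ("http://docs.oasis-open.org/wss/2004/01/"
        "oasis-200401-wss-wssecurity-secext-1.0.xsd")

envelope = ET.Element(f"{{{SOAP}}}Envelope")
header = ET.SubElement(envelope, f"{{{SOAP}}}Header")
security = ET.SubElement(header, f"{{{WSSE}}}Security")

# The security token rides inside the SOAP header, not in the transport layer.
token = ET.SubElement(security, f"{{{WSSE}}}UsernameToken")
ET.SubElement(token, f"{{{WSSE}}}Username").text = "alice"
ET.SubElement(token, f"{{{WSSE}}}Password").text = "example-password"

body = ET.SubElement(envelope, f"{{{SOAP}}}Body")
ET.SubElement(body, "GetQuote").text = "IBM"   # hypothetical service operation

print(ET.tostring(envelope, encoding="unicode"))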

WS-Trust
WS-Trust defines mechanisms for:
Security token exchange, to enable the issuance and dissemination of credentials within different trust domains

It defines the Security Token Service, a Web service used to:
Request security tokens
Validate security tokens
Exchange security tokens (exchange one type of token for another type)

It defines how to broker trust relationships:
Some trust relationship must exist beforehand
Anyone can issue tokens (that is, be a Security Token Service)

SOA
A service-oriented architecture (SOA) is the underlying structure supporting
communications between services. In this context, a service is defined as a unit of work
to be performed on behalf of some computing entity, such as a human user or another
program. SOA defines how two computing entities, such as programs, interact in such a
way as to enable one entity to perform a unit of work on behalf of another entity. Service
interactions are defined using a description language. Each interaction is self-contained
and loosely coupled, so that each interaction is independent of any other interaction.
SOA is an increasingly popular concept, but in fact, it isn't a new idea at all. It's been
around since the mid-1980s. But it never really took off, because there has been no
standard middleware or application programming interfaces that would allow it to take
root. There were attempts to build them, such as the Distributed Computing Environment
(DCE) and Common Object Request Broker Architecture (CORBA), but neither really
caught on, and SOA languished as an interesting concept with no significant real-world applications. Then Web services came along and gave it a boost. The underlying Web services architecture dovetails so well with the concept of SOA that some analysts and software makers believe the future of Web services rests with SOA.
One reason for that is how in tune the two architectures are. SOA is an architecture that publishes services in the form of an XML interface. In that respect, it is really no different from a traditional Web services architecture, in which UDDI is used to create searchable directories of Web services. In fact, UDDI is the solution of choice for enterprises that want to make software components available as services internally on their networks in an SOA.
SOA vs. Web services
Most Web services implementations are point-to-point, where you have an intimate knowledge of the platform to which you're connecting; the implementation is fairly solid and the interface doesn't really change. That means the Web service is not made available publicly on the network and cannot be "discovered"; in a sense, it's hard-coded in the point-to-point connection. In an SOA implementation, information about the Web service and how to connect to it is published in a UDDI-built directory, so that the Web service can be easily discovered and used in other applications and implementations.
SOA LOGICAL ARCHITECTURE


SOA SECURITY

Identity management is required for SOA security (TFIM/TAM).
A Web Services Security Gateway is required for SOA security (DataPower).
The best solution for enterprise-class SOA security and management should include both Tivoli FIM and DataPower SOA appliances: each offers value on its own, and together they provide a comprehensive solution for securing and managing Web services and SOAs.

SOA is based on Web services; hence all the Web services security challenges are applicable to SOA.

8. Enterprise Technical assessments & audits


8.1 Application
Need For Web Application Vulnerability Assessment
Application Security encompasses measures taken to prevent exceptions in the security
policy of an application or the underlying system (vulnerabilities) through flaws in the
design, development, or deployment of the application.
Networks still face many challenges but advancements in network security have made
networks less exploitable. As networks become more secure, the comparatively insecure
application layer becomes an attractive target for hackers. The following are just a few of the exploitable vulnerabilities and attack methods common to applications:
SQL injection
Cross-site scripting (XSS)
Buffer overflow
Directory traversal
Denial of Service (DoS)
Man-in-the-middle
Session hijacking
Web Application Vulnerability Assessment and Penetration Testing services help identify issues related to the following:
1. Vulnerabilities and risks in your web applications
2. Known and unknown (0-day) vulnerabilities, so the threat can be combated until your security vendor provides the appropriate solution
3. Technical vulnerabilities: URL manipulation, SQL injection, cross-site scripting, back-end authentication, passwords in memory, session hijacking, web server configuration, credential management, etc.
4. Business risks: day-to-day threat analysis, unauthorized logins, personal information modification, price list modification, unauthorized funds transfer, breach of customer trust, etc.
8.2 Network
Network Vulnerability Assessment and Penetration testing
A system is vulnerable if it is open, or susceptible, to attack. Network vulnerability analysis focuses at the network or operating system level to detect vulnerabilities, whereas penetration testing exploits the vulnerabilities found in the vulnerability analysis stage.
1. The testing process begins with gathering as much information as possible about the
network architecture, topology, hardware, and software in order to find all security
vulnerabilities.
2. Researching public information such as Whois records, SEC filings, business news
articles, patents, and trademarks not only provides security engineers with background
information, but also gives insight into what information hackers can use to find
vulnerabilities.
3. Tools such as ping, traceroute, and nslookup can be used to retrieve information from the target environment and help determine network topology, Internet provider, and architecture. Tools such as port scanners, NMAP, SNMPC, and NAT help determine hardware, operating systems, patch levels, and services running on each target device.
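A tiny example of the kind of probe these scanners automate, using only Python's socket module (the target and port list are placeholders; run such probes only against systems you are authorized to test):

import socket

target = "127.0.0.1"               # placeholder target; scan only with permission
ports = [22, 25, 53, 80, 443]      # a few well-known service ports

for port in ports:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(0.5)                                 # keep each probe quick
    is_open = sock.connect_ex((target, port)) == 0       # 0 means the connect succeeded
    sock.close()
    print(port, "open" if is_open else "closed/filtered")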

8.3 OWASP
The Open Web Application Security Project (OWASP) is an open source project dedicated to finding and fighting the causes of insecure software. OWASP's projects cover many aspects of application security: documents, tools, teaching environments, guidelines, checklists, and other materials that help organizations improve their capability to produce secure code.
OWASP TOP TEN PROJECT
The OWASP Top Ten provides a powerful awareness document for web application
security. The OWASP Top Ten represents a broad consensus about the most critical web
application security flaws.
A1 Unvalidated Input
Information from web requests is not validated before being used by a web application.
Attackers can use these flaws to attack backend components through a web application.
A2 Broken Access Control
Restrictions on what authenticated users are allowed to do are not properly enforced.
Attackers can exploit these flaws to access other users' accounts, view sensitive files, or
use unauthorized functions.
A3 Broken Authentication and Session Management
Account credentials and session tokens are not properly protected. Attackers that can compromise passwords, keys, session cookies, or other tokens can defeat authentication restrictions and assume other users' identities.
A4 Cross Site Scripting
The web application can be used as a mechanism to transport an attack to an end user's browser. A successful attack can disclose the end user's session token, attack the local machine, or spoof content to fool the user.
A5 Buffer Overflow
Web application components in some languages that do not properly validate input can be
crashed and, in some cases, used to take control of a process. These components can
include CGI, libraries, drivers, and web application server components.
A6 Injection Flaws
Web applications pass parameters when they access external systems or the local
operating system. If an attacker can embed malicious commands in these parameters, the
external system may execute those commands on behalf of the web application.
A7 Improper Error Handling
Error conditions that occur during normal operation are not handled properly. If an
attacker can cause errors to occur that the web application does not handle, they can gain
detailed system information, deny service, cause security mechanisms to fail, or crash the
server.
A8 Insecure Storage
Web applications frequently use cryptographic functions to protect information and
credentials. These functions and the code to integrate them have proven difficult to code
properly, frequently resulting in weak protection.
A9 Application Denial of Service
Attackers can consume web application resources to a point where other legitimate users
can no longer access or use the application. Attackers can also lock users out of their
accounts or even cause the entire application to fail.
A10 Insecure Configuration Management
Having a strong server configuration standard is critical to a secure web application.
These servers have many configuration options that affect security and are not secure out
of the box.
OWASP Webgoat Project
WebGoat is a deliberately insecure J2EE web application maintained by OWASP
designed to teach web application security lessons. In each lesson, users must
demonstrate their understanding of a security issue by exploiting a real vulnerability in


the WebGoat application. For example, in one of the lessons the user must use SQL
injection to steal fake credit card numbers. The application is a realistic teaching environment, providing users with hints and code to further explain the lesson. WebGoat is written in Java and therefore installs on any platform with a Java virtual machine.
8.4 Data Security best practices
Following is the list of practices for data security:
1. Software patch updates
Security patches are the primary method of fixing security vulnerabilities in software. In computing, a patch is a small piece of software designed to update or fix problems with a computer program. This includes fixing bugs, replacing graphics, and improving usability or performance.
2. Anti-virus software
A computer program designed to detect and respond to malicious software, such as
viruses and worms. Responses may include blocking user access to infected files,
cleaning infected files or systems, or informing the user that an infected program was detected. Antivirus software consists of computer programs that attempt to identify, thwart, and eliminate computer viruses and other malicious software (malware). Various methods exist of encrypting and packing malicious software that make even well-known
requires a powerful unpacking engine, which can decrypt the files before examining
them.
Antivirus software typically uses two different techniques to accomplish this:
1. Examining (scanning) files to look for known viruses matching definitions in a virus
dictionary
2. Identifying suspicious behaviour from any computer program which might indicate
infection. Such analysis may include data captures, port monitoring and other methods.
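A toy sketch of the first (dictionary) technique follows; the single "known-bad" digest is a placeholder, and real products use millions of signatures plus heuristics and unpacking engines:

import hashlib
from pathlib import Path

KNOWN_BAD = {"0" * 64}   # placeholder SHA-256 digest of a known malicious file

def scan(directory):
    """Flag any file whose hash matches an entry in the signature dictionary."""
    for path in Path(directory).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD:
                print("infected:", path)

scan(".")   # scan the current directory tree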
3. Host-based firewall software
A firewall is an information technology (IT) security device which is configured to
permit, deny or proxy data connections set and configured by the organization's security
policy. Firewalls can be hardware based, software based, or a combination of both.
A firewall's basic task is to control traffic between computer networks with different
zones of trust. Typical examples are the Internet which is a zone with no trust and an
internal network which is (and should be) a zone with high trust. The ultimate goal is to
provide controlled interfaces between zones of differing trust levels through the
enforcement of a security policy and connectivity model based on the least privilege
principle and separation of duties.
4. Passwords
A password is a form of secret authentication data that is used to control access to a
resource. The password is kept secret from those not allowed access, and those wishing to
gain access are tested on whether or not they know the password and are granted or
denied access accordingly.
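Systems normally avoid storing the secret itself; a hedged sketch using Python's standard pbkdf2_hmac (the salt size and iteration count are illustrative choices, not mandated values) shows the usual store-and-verify pattern:

import hashlib, hmac, os

def hash_password(password, salt=None, iterations=200_000):
    salt = salt or os.urandom(16)                      # per-user random salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest                                # store these, not the password

def verify(password, salt, stored_digest, iterations=200_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored_digest)   # constant-time compare

salt, stored = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))   # True
print(verify("guess", salt, stored))                           # False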
5. Encryption
In cryptography, encryption is the process of obscuring information to make it unreadable without special knowledge, sometimes referred to as scrambling. Encryption is any procedure that converts plaintext into ciphertext, translating data into a secret code. Encryption is the most effective way to achieve data security. To read an encrypted file, you must have access to a secret key or password that enables you to decrypt it. Unencrypted data is called plaintext; encrypted data is referred to as ciphertext.
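A minimal sketch of the plaintext/ciphertext round trip, assuming the third-party cryptography package is available (key management is deliberately simplified here):

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # the secret key that must itself be protected
f = Fernet(key)

ciphertext = f.encrypt(b"quarterly payroll figures")   # plaintext -> ciphertext
print(ciphertext)                  # unreadable without the key
print(f.decrypt(ciphertext))       # b'quarterly payroll figures'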
6. Authentication
In computer security, authentication is the process of attempting to verify the digital
identity of the sender of a communication such as a request to log in. The sender being
authenticated may be a person using a computer, a computer itself or a computer
program.
7. Authenticated proxy services
The firewall authentication proxy feature allows network administrators to apply specific security policies on a per-user basis. Previously, user identity and related authorized access were associated with a user's IP address, or a single security policy had to be applied to an entire user group or subnetwork. Now, users can be identified and
authorized on the basis of their per-user policy, and access privileges tailored on an
individual basis are possible, as opposed to general policy applied across multiple users.
With the authentication proxy feature, users can log in to the network or access the
Internet via HTTP, and their specific access profiles are automatically retrieved and
applied from a CiscoSecure ACS, or other RADIUS, or TACACS+ authentication server.
The user profiles are active only when there is active traffic from the authenticated users.
The authentication proxy is compatible with other Cisco IOS security features such as
Network Address Translation (NAT), Context-based Access Control (CBAC), IP Security
(IPSec) encryption, and Cisco Secure VPN Client (VPN client) software.
8. Physical security
Physical security describes measures that prevent or deter attackers from accessing a
facility, resource, or information stored on physical media. It can be as simple as a locked
door or as elaborate as multiple layers of armed guard posts.
The field of security engineering has identified three elements to physical security:
obstacles, to frustrate trivial attackers and delay serious ones; alarms, security lighting,
security guard patrols or closed-circuit television cameras, to make it likely that attacks
will be noticed; and security response, to repel, catch or frustrate attackers when an attack
is detected.
8.5 Ethical hacking

What is Ethical Hacking?


A white hat hacker, also rendered as ethical hacker, is, in the realm of information
technology, a person who is ethically opposed to the abuse of computer systems. A white
hat generally focuses on securing IT systems, whereas a black hat would like to break into them. This is a simplification, however, as a black hat will also wish to secure his own machine, and a white hat may have no issue breaking into machines in the course of his or her activities.
What is a denial-of-service (DoS) attack?
In a denial-of-service (DoS) attack, an attacker attempts to prevent legitimate users from
accessing information or services. By targeting your computer and its network
connection, or the computers and network of the sites you are trying to use, an attacker
may be able to prevent you from accessing email, web sites, online accounts (banking,
etc.), or other services that rely on the affected computer.
The most common and obvious type of DoS attack occurs when an attacker "floods" a
network with information. When you type a URL for a particular web site into your
browser, you are sending a request to that site's computer server to view the page. The
server can only process a certain number of requests at once, so if an attacker overloads
the server with requests, it can't process your request.
This is a "denial of service" because you can't access that site. In a denial of service attack, the attacker sends several authentication requests to the server, filling it up. All requests have false return addresses, so the server can't find the requester when it tries to send
the authentication approval. The server waits, sometimes more than a minute, before
closing the connection. When it does close the connection, the attacker sends a new batch
of forged requests, and the process begins again--tying up the service indefinitely.
DoS attacks have two general forms:


1. Force the victim computer(s) to reset or consume its resources such that it can no
longer provide its intended service.
2. Obstruct the communication media between the intended users and the victim so that
they can no longer communicate adequately.
What is a distributed denial-of-service (DDoS) attack?
In a distributed denial-of-service (DDoS) attack, an attacker may use your computer to
attack another computer. By taking advantage of security vulnerabilities or weaknesses,
an attacker could take control of your computer. He or she could then force your
computer to send huge amounts of data to a web site or send spam to particular email
addresses. The attack is "distributed" because the attacker is using multiple computers,
including yours, to launch the denial-of-service attack.
Back Door
A backdoor in a computer system (or cryptosystem or algorithm) is a method of
bypassing normal authentication or securing remote access to a computer, while
attempting to remain hidden from casual inspection. It is also possible to create a
backdoor without modifying the source code of a program, or even modifying it after
compilation. This can be done by rewriting the compiler so that it recognizes code during
compilation that triggers inclusion of a backdoor in the compiled output. When the
compromised compiler finds such code, it compiles it as normal, but also inserts a
backdoor (perhaps a password recognition routine). So, when the user provides that input,
he gains access to some (likely undocumented) aspect of program operation. Many
computer worms, such as Sobig and Mydoom, install a backdoor on the affected
computer (generally a PC on broadband running insecure versions of Microsoft Windows
and Microsoft Outlook). Such backdoors appear to be installed so that spammers can
send junk email from the infected machines.
Symmetric Backdoor
A traditional backdoor is a symmetric backdoor: anyone that finds the backdoor can in
turn use it.
Asymmetric Backdoor
An asymmetric backdoor can only be used by the attacker who plants it, even if the full
implementation of the backdoor becomes public (e.g., via publishing, being discovered
and disclosed by a reverse-engineer, etc.). Also, it is computationally intractable to detect
the presence of an asymmetric backdoor under black-box queries. This class of attacks
has been termed kleptography. Such attacks can be carried out in software, in hardware (e.g., smartcards), or in a combination of the two. The theory of asymmetric backdoors is part of a larger
field now called cryptovirology.
Spoofing
Spoofing is the creation of TCP/IP packets using somebody else's IP address. Routers use
the "destination IP" address in order to forward packets through the Internet, but ignore
the "source IP" address. That address is only used by the destination machine when it
responds back to the source.
URL spoofing and phishing
Another kind of spoofing is "web page spoofing," also known as phishing. In this attack,
a legitimate web page such as a bank's site is reproduced in "look and feel" on another
server under control of the attacker. The intent is to fool the users into thinking that they
are connected to a trusted site, for instance to harvest user names and passwords. This
attack is often performed with the aid of URL spoofing, which exploits web browser bugs in order to display incorrect URLs in the browser's location bar, or with DNS cache poisoning in order to direct the user away from the legitimate site and to the fake one.
Once the user puts in their password, the attack-code reports a password error, then
redirects the user back to the legitimate site.
TCP and DNS Spoofing
In TCP spoofing, Internet packets are sent with forged return addresses and in DNS
spoofing the attacker forges information about which machine names correspond to
which network addresses.
Referer spoofing
Some websites, especially pornographic paysites, allow access to their materials only
from certain approved (login-) pages. This is enforced by checking the Referer header of
the HTTP request. This Referer header, however, can be changed (Referer spoofing), allowing users to gain unauthorized access to the materials.
Defending Against Spoofing
Filtering at the Router - Implementing ingress and egress filtering on your border routers
is a great place to start your spoofing defense. You will need to implement an ACL
(access control list) that blocks private IP addresses on your downstream interface.
Encryption and Authentication - Implementing encryption and authentication will also reduce spoofing threats. Both of these features are included in IPv6, which should help reduce current spoofing threats.
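The decision such an ingress ACL makes can be sketched with Python's standard ipaddress module (the blocked ranges are the usual RFC 1918 and loopback blocks; the sample addresses are placeholders):

import ipaddress

# Source ranges that should never arrive from the Internet side.
BOGON_NETS = [ipaddress.ip_network(n) for n in
              ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "127.0.0.0/8")]

def permit_inbound(source_ip):
    """Drop packets whose claimed source is private/loopback: likely spoofed."""
    addr = ipaddress.ip_address(source_ip)
    return not any(addr in net for net in BOGON_NETS)

print(permit_inbound("203.0.113.5"))    # True  - routable source, allowed
print(permit_inbound("192.168.1.10"))   # False - spoofed private source, dropped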
Man in the Middle


In cryptography, a man-in-the-middle attack (MITM) is an attack in which an attacker is
able to read, insert and modify at will, messages between two parties without either party
knowing that the link between them has been compromised. The attacker must be able to
observe and intercept messages going between the two victims. The MITM attack can
work against public-key cryptography and is also particularly applicable to the original
Diffie-Hellman key exchange protocol, when used without authentication.
Example of a successful MITM attack against public-key encryption
Suppose Alice wishes to communicate with Bob, and that Mallory wishes to eavesdrop
on the conversation, or possibly deliver a false message to Bob. To get started, Alice must
ask Bob for his public key. If Bob sends his public key to Alice, but Mallory is able to
intercept it, a man-in-the-middle attack can begin. Mallory can simply send Alice a public
key for which she has the private, matching, key. Alice, believing this public key to be
Bob's, then encrypts her message with Mallory's key and sends the enciphered message
back to Bob. Mallory again intercepts, deciphers the message, keeps a copy, and re-enciphers it (after alteration if desired) using the public key Bob originally sent to Alice.
When Bob receives the newly enciphered message, he will believe it came from Alice.
This example shows the need for Alice and Bob to have some way to ensure that they are
truly using the correct (for example, authenticated) public keys of each other. Otherwise, such attacks are generally possible, in principle, against any message sent using public-key technology. Fortunately, there are a variety of techniques that help defend against
MITM attacks.
Replay Attack
A replay attack is a form of network attack in which a valid data transmission is
maliciously or fraudulently repeated or delayed. This is carried out either by the
originator or by an adversary who intercepts the data and retransmits it, possibly as part
of a masquerade attack by IP packet substitution (such as stream cipher attack).
Suppose Alice wants to prove her identity to Bob. Bob requests her password as proof of
identity, which Alice dutifully provides (possibly after some transformation like a hash
function); meanwhile, Mallory is eavesdropping the conversation and keeps the
password. After the interchange is over, Mallory connects to Bob posing as Alice; when
asked for a proof of identity, Mallory sends Alice's password read from the last session,
which Bob must accept.
Preventing Replay Attack
A way to avoid replay attacks is using session tokens. Session tokens should be chosen by
a (pseudo-) random process. Timestamping is another way of preventing a replay attack.
Synchronization should be achieved using a secure protocol. For example, Bob
periodically broadcasts the time on his clock together with a MAC. When Alice wants to
send Bob a message, she includes her best estimate of the time on his clock in her
message, which is also authenticated. Bob only accepts messages for which the
timestamp is within a reasonable tolerance. The advantage of this scheme is that Bob
does not need to generate (pseudo-) random numbers.
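A rough sketch of that timestamp-plus-MAC idea, using Python's standard hmac module (the shared key and tolerance window are illustrative): the receiver rejects any authenticated message whose timestamp falls outside a small window, so a captured message cannot simply be replayed later.

import hmac, hashlib, time

SHARED_KEY = b"example shared secret"   # assumed to be pre-agreed by Alice and Bob
TOLERANCE = 30                          # seconds of clock skew/transit delay accepted

def send(message):
    stamped = f"{int(time.time())}|{message}".encode()
    tag = hmac.new(SHARED_KEY, stamped, hashlib.sha256).digest()
    return stamped, tag

def accept(stamped, tag):
    expected = hmac.new(SHARED_KEY, stamped, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False                                    # forged or altered message
    timestamp = int(stamped.split(b"|", 1)[0])
    return abs(time.time() - timestamp) <= TOLERANCE    # stale copies are rejected

msg, tag = send("transfer 10 units to Bob")
print(accept(msg, tag))   # True now; the same (msg, tag) replayed much later is refused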
TCP/IP Hijacking
TCP/IP hijacking is a clever technique that uses spoofed packets to take over a
connection between a victim and a host machine. The victim's connection hangs, and the
attacker is able to communicate with the host machine as if the attacker were the victim.
This technique is exceptionally useful when the victim uses a one-time password to
connect to the host machine. A one-time password can be used to authenticate once, and
only once, which means that sniffing the authentication is useless for the attacker. In this
case, TCP/IP hijacking is an excellent means of attack.
During any TCP connection, each side maintains a sequence number. As packets are sent
back and forth, the sequence number is incremented with each packet sent. Any packet
that has an incorrect sequence number isn't passed up to the next layer by the receiving
side. The packet is dropped if earlier sequence numbers are used, or it is stored for later
reconstruction if later sequence numbers are used. If both sides have incorrect sequence
numbers, any communications that are attempted by either side aren't passed up by the
corresponding receiving side, even though the connection remains in the established
state. This condition is called a desynchronized state, which causes the connection to
hang.
Weak Keys
In cryptography, a weak key is a key which when used with a specific cipher, makes the
cipher behave in some undesirable way. Weak keys usually represent a very small
fraction of the overall key space, which usually means that if one generates a random key
to encrypt a message, weak keys are very unlikely to give rise to a security problem. A
cipher with no weak keys is said to have a flat key space. Weak keys are secret keys with
a certain value for which the block cipher in question will exhibit certain regularities in
encryption or, in other cases, a poor level of encryption.
Example : Weak keys in DES
The block cipher DES has a few specific keys termed "weak keys" and "semi-weak
keys". These are keys which cause the encryption mode of DES to act identically to the
decryption mode of DES (albeit potentially that of a different key). In operation, the
secret 56-bit key is broken up into 16 subkeys according to the DES key schedule; one
subkey is used in each of the sixteen DES rounds. The weak keys of DES are those which
produce sixteen identical subkeys. This occurs when the key bits are:
all zeros
all ones
the first half of the entire key is all ones and the second half is all zeros
vice versa
Since all the subkeys are identical, and DES is a Feistel network, the encryption function
is self-inverting; that is, encrypting twice produces the original plaintext.
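Assuming the third-party PyCryptodome package is available (and that it does not refuse weak keys), this self-inverting behaviour can be observed directly; the key and plaintext block below are chosen only for the demonstration:

from Crypto.Cipher import DES   # PyCryptodome

weak_key = bytes(8)             # 0x00 * 8, one of the classic DES weak keys
block = b"8bytes!!"             # a single 8-byte plaintext block

once = DES.new(weak_key, DES.MODE_ECB).encrypt(block)
twice = DES.new(weak_key, DES.MODE_ECB).encrypt(once)

print(twice == block)           # True: with a weak key, encrypting twice restores the plaintext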
List of algorithms with weak keys
RC4. RC4's weak initialization vectors allow an attacker to mount a known-plaintext
attack and have been widely used to compromise the security of WEP.
IDEA. IDEA's weak keys are identifiable in a chosen-plaintext attack. They make the
relationship between the XOR sum of plaintext bits and ciphertext bits predictable. There
is no list of these keys, but they can be identified by their "structure".
Blowfish. Blowfish's weak keys produce bad S-boxes, since Blowfish's S-boxes are
key-dependent. There is a chosen plaintext attack against a reduced-round variant of
Blowfish that is made easier by the use of weak keys. This is not a concern for full 16-round Blowfish.
Social Engineering
In computer security, social engineering is a term that describes a non-technical kind of
intrusion that relies heavily on human interaction and often involves tricking other people
to break normal security procedures. Social engineering attacks are typically carried out
by telephoning users or operators and pretending to be an authorized user, to attempt to
gain illicit access to systems.
Birthday
A birthday attack is a name used to refer to a class of brute-force attacks. It gets its name
from the surprising result that the probability that two or more people in a group of 23
share the same birthday is greater than 1/2. Such a result is called a birthday paradox.
If some function, when supplied with a random input, returns one of k equally likely values, then by repeatedly evaluating the function for different inputs, we expect to obtain the same output after about the square root of k tries (roughly 1.25 times the square root of k).
Birthday attacks are often used to find collisions of hash functions. To avoid this attack,
the output length of the hash function used for a signature scheme can be chosen large
enough so that the birthday attack becomes computationally infeasible, i.e. about twice as
large as needed to prevent an ordinary brute force attack.
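Both the 23-person figure and the square-root rule of thumb can be checked with a few lines of Python (the 2**64 output space below is just an example size):

import math

def prob_shared_birthday(n, days=365):
    """Probability that at least two of n people share a birthday."""
    p_distinct = math.prod((days - i) / days for i in range(n))
    return 1 - p_distinct

print(round(prob_shared_birthday(23), 3))      # ~0.507, just over one half

# For a hash with k equally likely outputs, a collision is expected after
# roughly sqrt(pi/2 * k) random evaluations.
k = 2 ** 64
print(math.sqrt(math.pi / 2 * k))              # ~5.4e9 evaluations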
Password Guessing
Passwords, pass phrases and security codes are used in virtually every interaction
between users and information systems.
Unfortunately, with such a central role in security, easily guessed passwords are often the
weakest link. They grant attackers access to system resources and bring them significantly closer to being able to access other accounts, nearby machines, and perhaps even administrative privileges.
Brute Force
In cryptanalysis, a brute force attack is a method of defeating a cryptographic scheme by
trying a large number of possibilities. Using brute force, attackers attempt combinations
of the accepted character set in order to find a specific combination that gains access to
the authorized area. Attackers can use brute force applications, such as password guessing
tools and scripts, in order to try all the combinations of well-known user names and
passwords.
Dictionary
In cryptanalysis and computer security, a dictionary attack is a technique for defeating a
cipher or authentication mechanism by trying to determine its decryption key by
searching a large number of possibilities. In contrast with a brute force attack, where all
possibilities are searched through exhaustively, a dictionary attack only tries possibilities
which are most likely to succeed, typically derived from a list of words in a dictionary.
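A toy version of the technique (the word list and the captured hash are fabricated so the example is self-contained; real tools do the same thing at vastly larger scale): hash each candidate word and compare it against the captured password hash.

import hashlib

# Captured, unsalted SHA-256 hash of the victim's password: here simply
# the hash of "sunshine" so the sketch can be run on its own.
target_hash = hashlib.sha256(b"sunshine").hexdigest()

wordlist = ["password", "letmein", "sunshine", "qwerty"]   # tiny illustrative dictionary

for candidate in wordlist:
    if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
        print("password found:", candidate)
        break
else:
    print("not in dictionary")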
Software Exploitation
In computer security, an exploit is a piece of software, a chunk of data, or sequence of
commands that take advantage of a bug, glitch or vulnerability in order to get unintended
or unanticipated behavior out of computer software, hardware, or something electronic
(usually computerized). This frequently includes such things as gaining control of a
computer system or allowing privilege escalation or a denial of service attack. There are
several methods of classifying exploits. The most common is by how the exploit contacts
the vulnerable software. A 'remote exploit' works over a network and exploits the security
vulnerability without any prior access to the vulnerable system.
Viruses
A virus is a program that reproduces its own code by attaching itself to other executable
files in such a way that the virus code is executed when the infected executable file is
executed.
A computer virus is a computer program which reproduces. To distribute itself, a virus
needs to execute or otherwise be interpreted. Viruses often hide themselves inside other
programs to be executed. A computer virus reproduces by making, possibly evolved,
copies of itself in the computer's memory, storage, or over a network.
Boot sector viruses
A boot sector virus affects the boot sector of a hard disk, which is a very crucial part. The boot sector is where all information about the drive is stored, along with a program that makes it possible for the machine to boot up. By inserting its code into the boot sector, a virus guarantees that it loads into memory during every boot sequence. A boot virus does not affect files; instead, it affects the disks that contain them. According to Symantec, boot sector viruses differ only slightly from master boot record viruses in their respective effects: both load into memory and stay there (resident viruses), thus infecting any executable launched afterwards. In addition, both types may prevent recent operating systems from booting.
Tunneling Virus
One method of virus detection is an interception program which sits in the background
looking for specific actions that might signify the presence of a virus. To do this it must
intercept interrupts and monitor what's going on. A tunneling virus attempts to backtrack
down the interrupt chain in order to get directly to the DOS and BIOS interrupt handlers.
The virus then installs itself underneath everything, including the interception program.
Some anti-virus programs will attempt to detect this and then reinstall themselves under
the virus. This might cause an interrupt war between the anti-virus program and the virus
and result in problems on your system. Some anti-virus programs also use tunneling
techniques to bypass any viruses that might be active in memory when they load.
A tunneling virus attempts to bypass activity monitor anti-virus programs by following
the interrupt chain back down to the basic DOS or BIOS interrupt handlers and then
installing itself.
Trojan Horses
A Trojan horse is a program that contains or installs a malicious program, called the trojan. The
term is derived from the classical myth of the Trojan Horse. Trojan horses may appear to
be useful or interesting programs (or at the very least harmless) to an unsuspecting user,
but are actually harmful when executed. There are two common types of Trojan horses.
One is otherwise useful software that has been corrupted by a cracker inserting malicious
code that executes while the program is used. The other type is a standalone program that
masquerades as something else, like a game or image file, in order to trick the user into
some misdirected complicity that is needed to carry out the program's objectives.
Example: Back Orifice
Back Orifice is a Trojan that provides a back door into your computer when active and
you are connected to the Internet. The original program came out in August 1998 with an
update called BO-2000 later. The name is a play on Microsoft's Back Office and the
program is advertised as a network management program. It is produced by the group
Cult of the Dead Cow (cDc). Even though it does what it's advertised to do, the fact that
it installs silently, can't be easily detected once run, and potentially allows a remote user
to take complete control of your computer without your permission when it is installed
has caused the AV companies to call it a Trojan, and they have developed detection routines for the program. BO is distributed as several programs and documentation. The
original programs run on Win95/98 only; Bo-2000 also runs on NT. Indications are BO
can be attached to other executables in the same style as viruses. When run, BO silently
installs itself (you can't even see it in the task list) and, when the computer is connected
to a TCP/IP network (e.g., the Internet) it sits in the background and just listens. What it's
listening for are commands starting with *!*QWTY? from a remote computer. The
commands themselves are encrypted (in the US version; an international version does not
use strong encryption). When a command is received BO is capable of many things; some
benign, others quite destructive and/or intrusive. A short list includes: computer info, list
disk contents, file manipulation (including updating itself!), compressing &
decompressing files, get and send cached passwords, terminate processes, display
messages, access the registry, plus store and send keyboard input while users are logging
into other services. BO even supports HTTP protocols and emulates a web server so
others can access the user's computer using a web browser. If that's not enough, BO can
expand its abilities using plug-ins (which, of course, it can be commanded to download to
itself).
Logic Bombs
A logic bomb, also called slag code, is programming code added to the software of an application or operating system that lies dormant until a predetermined period of time
(i.e., a period of latency) or event occurs, triggering the code into action. Logic bombs
typically are malicious in intent, acting in the same ways as a virus or Trojan horse once
activated. In fact, viruses that are set to be released at a certain time are considered logic
bombs. They can perform such actions as reformatting a hard drive and/or deleting,
altering or corrupting data.
Worms
A worm is a piece of software that uses computer networks and security flaws to create
copies of itself. A copy of the worm will scan the network for any other machine that has
a specific security flaw. It replicates itself to the new machine using the security flaw, and
then begins scanning and replicating a new worm. Worms are programs that replicate
themselves from system to system without the use of a host file. This is in contrast to viruses, which require the spreading of an infected host file.
Slammer Worm
A worm usually exploits some sort of security hole in a piece of software or the operating
system. For example, the Slammer worm exploited a hole in Microsoft's SQL server.


Code Red Worm


The Code Red worm slowed down Internet traffic when it began to replicate itself, but
not nearly as badly as predicted. Each copy of the worm scanned the Internet for
Windows NT or Windows 2000 servers that do not have the Microsoft security patch
installed. Each time it found an unsecured server, the worm copied itself to that server.
The new copy then scanned for other servers to infect.
Ida Code Red Worm
The Ida Code Red Worm, which was first reported by eEye Digital Security, takes
advantage of known vulnerabilities in the Microsoft IIS Internet Server Application
Program Interface (ISAPI) service. Un-patched systems are susceptible to a "buffer
overflow" in the Idq.dll, which permits the attacker to run embedded code on the affected
system. This memory resident worm, once active on a system, first attempts to spread
itself by creating a sequence of random IP addresses to infect unprotected web servers.
Each worm thread will then inspect the infected computer's time clock.
Architecture of 'I Love You' Worm
The author of the worm has conceded that he may have released the malware by
"accident". The worm is written using Microsoft Visual Basic Scripting (VBS) and
requires that the end user run the script in order to deliver its payload. It will add a set of
registry keys to the Windows registry that will allow the malware to start up at every
boot.
The worm will then search all drives connected to the infected computer and replace *.JPG, *.JPEG, *.VBS, *.VBE, *.JS, *.JSE, *.CSS, *.WSH, *.SCT, and *.HTA files with copies of itself, appending a .VBS extension to the file name. The malware will also locate *.MP3 and *.MP2 files and, when found, makes the files hidden, copies itself with the same filename, and appends a .VBS extension.
The worm propagates by sending copies of itself to all entries in the Microsoft Outlook address book. It also downloads and executes an additional component on the infected system called "WIN-BUGSFIX.EXE", a password-stealing program that e-mails cached passwords.
Denial of service
What is a denial-of-service (DoS) attack?
In a denial-of-service (DoS) attack, an attacker attempts to prevent legitimate users from
accessing information or services. By targeting your computer and its network
connection, or the computers and network of the sites you are trying to use, an attacker may be able to prevent you from accessing email, web sites, online accounts (banking,
etc.), or other services that rely on the affected computer.
The most common and obvious type of DoS attack occurs when an attacker "floods" a
network with information. When you type a URL for a particular web site into your
browser, you are sending a request to that site's computer server to view the page. The
server can only process a certain number of requests at once, so if an attacker overloads
the server with requests, it can't process your request.
This is a "denial of service" because you can't access that site. In a denial of service
attack, the user sends several authentication requests to the server, filling it up. All
requests have false return addresses, so the server can't find the user when it tries to send
the authentication approval. The server waits, sometimes more than a minute, before
closing the connection. When it does close the connection, the attacker sends a new batch
of forged requests, and the process begins again--tying up the service indefinitely.
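To illustrate why a flood of half-open, forged requests denies service, here is a small self-contained Python simulation; the slot count, timeout, and request rate are arbitrary assumed values, and no real networking is involved.

import itertools

MAX_SLOTS = 5          # concurrent half-open connections the server can hold (assumed)
TIMEOUT_TICKS = 10     # how long a half-open connection is held before it times out

slots = {}             # slot id -> ticks remaining before timeout
ids = itertools.count()

def tick():
    # Advance time by one unit, expiring stale half-open connections.
    for sid in list(slots):
        slots[sid] -= 1
        if slots[sid] <= 0:
            del slots[sid]

def forged_request():
    # An attacker request with a false return address occupies a slot until it times out.
    if len(slots) < MAX_SLOTS:
        slots[next(ids)] = TIMEOUT_TICKS

denied = 0
for t in range(30):
    for _ in range(MAX_SLOTS):      # the attacker sends a fresh batch every tick
        forged_request()
    tick()
    if len(slots) >= MAX_SLOTS:     # no free slot left for a legitimate request
        denied += 1

print(f"legitimate users were denied service on {denied} of 30 ticks")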
DoS attacks have two general forms:
1. Force the victim computer(s) to reset or consume its resources such that it can no
longer provide its intended service.
2. Obstruct the communication media between the intended users and the victim so that
they can no longer communicate adequately.
What is a distributed denial-of-service (DDoS) attack?
In a distributed denial-of-service (DDoS) attack, an attacker may use your computer to
attack another computer. By taking advantage of security vulnerabilities or weaknesses,
an attacker could take control of your computer. He or she could then force your
computer to send huge amounts of data to a web site or send spam to particular email
addresses. The attack is "distributed" because the attacker is using multiple computers,
including yours, to launch the denial-of-service attack.
9. Security Models & Architecture
9.1 A,I,C
Availability, Integrity, Confidentiality
The purpose of a Security Model is to achieve:
Availability - Prevention of loss of access to resources and data
Integrity - Prevention of unauthorized modification of data and resources
Confidentiality - Prevention of unauthorized disclosure of data and resources


Orange Book (TCSEC): The Trusted Computer System Evaluation Criteria (TCSEC) is a
United States Government Department of Defense (DoD) standard that sets basic
requirements for assessing the effectiveness of computer security controls built into a
computer system. The TCSEC was used to evaluate, classify and select computer systems
being considered for the processing, storage and retrieval of sensitive or classified
information.
The Orange book defines four divisions: D, C, B and A where division A has the highest
security. Each division represents a significant difference in the trust an individual or
organization can place on the evaluated system.
D Minimal Protection
C Discretionary Protection
B Mandatory Protection
A Verified Protection

9.2 Operating system architectures


An operating system provides an environment for applications and users to work within.
Every operating system is made up of various layers and modules of functionality. It has
the responsibility of managing the underlying hardware components, memory
management, I/O operations, file system, process management, and providing system
services. The architecture defines how these activities are performed and carried out.
Process Management
A process is a program in execution. A process could be in one of the states:
Ready
Running
Blocked(Locked)
Done
The OS is responsible for creation/deletion of both user and system processes,
suspension/resumption of processes, process synchronization, communication and
deadlock handling.
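As a rough illustration of the states listed above, the following Python sketch models a process object with a simplified, assumed set of legal state transitions; it is not how any particular operating system implements its process table.

from enum import Enum, auto

class ProcessState(Enum):
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    DONE = auto()

# Simplified transitions: ready -> running, running -> ready/blocked/done,
# blocked -> ready (e.g., when the awaited I/O completes).
ALLOWED = {
    ProcessState.READY:   {ProcessState.RUNNING},
    ProcessState.RUNNING: {ProcessState.READY, ProcessState.BLOCKED, ProcessState.DONE},
    ProcessState.BLOCKED: {ProcessState.READY},
    ProcessState.DONE:    set(),
}

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.state = ProcessState.READY

    def transition(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process(pid=1)
p.transition(ProcessState.RUNNING)   # dispatched by the scheduler
p.transition(ProcessState.BLOCKED)   # waiting on I/O
p.transition(ProcessState.READY)     # I/O complete, ready to run again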
Thread Management
Threads are a way for a program to split itself into two or more simultaneously running tasks. An OS which supports multi-threading can execute the threads of a multi-threaded program simultaneously, thus saving execution time.
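A minimal Python sketch of the idea, using the standard threading module: two tasks that each wait for one second finish in roughly one second when run as threads, rather than about two seconds when run one after the other. The task names and sleep times are illustrative.

import threading
import time

def task(name, seconds):
    print(f"{name} started")
    time.sleep(seconds)          # stands in for I/O or other waiting
    print(f"{name} finished")

t1 = threading.Thread(target=task, args=("thread-1", 1))
t2 = threading.Thread(target=task, args=("thread-2", 1))

start = time.time()
t1.start()
t2.start()
t1.join()
t2.join()
print(f"elapsed: {time.time() - start:.1f}s")   # roughly 1s instead of 2s sequentially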
Process Scheduling
Scheduling and synchronizing various processes and their activities is part of process
management, which is a responsibility of the operating system.
Process Activity
When multiple processes are active, the OS has to make sure that each process is isolated and does not interfere with the others.
Memory Management
To provide a safe and stable environment, an operating system must exercise proper memory management, which is one of the most important tasks that it carries out, because essentially everything happens in memory.
These are the basic functions of the memory manager of an OS:

Relocation
o Swap contents from RAM to the hard drive as needed
o Provide pointers for applications if their instructions and memory segment have been moved to a different location in main memory
Protection
o Limit processes to interact only with the memory segments that are assigned to them
o Provide access control to memory segments
Sharing
o Use complex controls to ensure integrity and confidentiality when processes need to use the same shared memory segments
o Allow many users with different levels of access to interact with the same application running in one memory segment
Logical organization
o Allow for the sharing of specific software modules, such as library routines & procedures
Physical organization
o Segment the physical memory space for application and operating system processes

9.3 Trusted computing base and security mechanisms


The term trusted computing base, which originated from the Orange Book, does not
address the level of security that a system provides, but rather the level of trust that a
system provides, albeit from a security sense.
The trusted computing base (TCB) is everything in a computing system that provides a
secure environment. This includes the operating system and its provided security
mechanisms, hardware, physical locations, network hardware and software, and
prescribed procedures.
The ability of a trusted computing base to enforce correctly a unified security policy
depends on the correctness of the mechanisms within the trusted computing base, the
protection of those mechanisms to ensure their correctness, and the correct input of
parameters related to the security policy.
Not all components need to be trusted and therefore not all components fall within the
trusted computing base (TCB). The components that do fall within the TCB need to be
identified and their accepted capabilities need to be defined.
For example, the TCB software consists of:
->The kernel (operating system)
->The configuration files that control system operation
->Any program that is run with the privilege or access rights to alter the kernel or the
configuration files
Any program which misbehaves and is a threat to security would not fall under the TCB.
9.4 Protection mechanisms within an operating system
An operating system has several protection mechanisms to ensure that processes do not
negatively affect each other or the critical components of the system itself.
Memory protection
CPU modes & Protection rings
CPU modes (also called processor modes or CPU privilege levels) are operating modes
for the central processing unit of some computer architectures that place restrictions on
the operations that can be performed by the process currently running in the CPU. This allows the operating system to be designed so that different components run at different privilege levels.
Protection rings support the availability, integrity, and confidentiality requirements of
multitasking operating systems. The most commonly used architecture provides four
protection rings:
Ring 0 Operating system kernel
Ring 1 Remaining parts of the operating system
Ring 2 I/O drivers and utilities
Ring 3 Applications and user activity

9.5 Security models - Examples etc


Securing a system starts with a security policy in hand.
A security model maps the abstract goals of the policy to information system terms by specifying explicit techniques that are necessary to enforce the security policy.
For example, if the security policy says "All users must be authorized and logged", the security model details what must be done to ensure that authorization and logging happen, so that the system adheres to the stated policy.
Examples:
The Bell-LaPadula model:
In the 1970s, the U.S. military used time-sharing mainframe systems and was concerned
about the security of these systems and leakage of classified information. The Bell-LaPadula model was developed to address these concerns. It was the first mathematical model of a multilevel security policy; it defined the concept of a secure state machine and modes of access, and outlined rules of access.
Three main rules are used and enforced in the Bell-LaPadula model:

The simple security rule states that a subject at a given security level cannot read
data that resides at a higher security level.
The *-property (star property) rule states that a subject in a given security level
cannot write information to a lower security level.
The strong star property rule states that a subject that has read and write
capabilities can only perform those functions at the same security level, nothing
higher and nothing lower.
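A minimal sketch, assuming a simple numeric lattice, of how the simple security rule (no read up) and the *-property (no write down) can be expressed in code; the level names and values are illustrative, not part of the model itself.

# Higher number = more sensitive (illustrative lattice).
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def can_read(subject_level, object_level):
    # Simple security rule: no read up.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level, object_level):
    # *-property: no write down.
    return LEVELS[subject_level] <= LEVELS[object_level]

# A "secret" subject may read "confidential" data but not "top secret" data,
# and may write to "top secret" but not to "confidential".
assert can_read("secret", "confidential") and not can_read("secret", "top secret")
assert can_write("secret", "top secret") and not can_write("secret", "confidential")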

The Biba model was developed after the Bell-LaPadula model. The Biba model
addresses the integrity of data within applications. The model is designed such that
subjects may not corrupt data in a level ranked higher than the subject, or be corrupted by
data from a lower level than the subject. The Biba model is not concerned with security
levels and confidentiality, so it does not base access decisions upon this type of lattice.


If implemented and enforced properly, the Biba model prevents data from any integrity
level from flowing to a higher integrity level. Biba has two main rules to provide this
type of protection:
*-integrity axiom: A subject cannot write data to an object at a higher integrity
level (referred to as no write up).
Simple integrity axiom: A subject cannot read data from a lower integrity level
(referred to as no read down).
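The same style of sketch works for Biba, where the lattice tracks integrity rather than confidentiality, so the checks are inverted (no read down, no write up); again the level names are illustrative.

# Higher number = higher integrity (illustrative lattice).
INTEGRITY = {"low": 0, "medium": 1, "high": 2}

def can_read(subject_level, object_level):
    # Simple integrity axiom: no read down.
    return INTEGRITY[subject_level] <= INTEGRITY[object_level]

def can_write(subject_level, object_level):
    # *-integrity axiom: no write up.
    return INTEGRITY[subject_level] >= INTEGRITY[object_level]

# A "medium"-integrity subject may not read "low"-integrity data
# and may not write to "high"-integrity data.
assert not can_read("medium", "low") and not can_write("medium", "high")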
10. Access Control

10.1 Concepts: Access, Subject


Access is the ability to do something with a resource. In the computer world, to create, delete, or view a file, we need access to the file.
The subject is the active entity (such as a user or process) that requests access; the object is the resource on which the subject acts. Based on the subject's credentials, access to the object is granted or denied.
Access control is the process by which users are identified and granted certain privileges to information, systems, or resources. Computer-based access controls can prescribe not only who or what process may have access to a specific system resource, but also the type of access that is permitted. These controls may be implemented in the computer system or in external devices. Access control devices properly identify people, and verify their identity through an authentication process so they can be held accountable for their actions. Good access control systems record and timestamp all communications and transactions so that access to systems and information can be audited at later dates. Reputable access control systems all provide authentication, authorization, and administration.
Access Control Objectives: The primary objective of access control is to preserve and protect the confidentiality, integrity, and availability of information, systems, and resources.

10.2 Access control models


Types of Access Control
There are two basic types of access control: those that verify who you say you are, and
those that verify who you really are. The three basic verification methods are to check
1. what you have,
2. what you know, or
3. what you are,
or even some combination of these. In common noncomputer usage, an example of the "what you have" method would be having the key to a padlock; you can get in if you do.
"What you know" is the method used to keep other people out of your account; if they
don't know your password, tough luck for them. And "what you are" is coming into
prominent play in criminal investigations, as DNA patterns are admitted as evidence.
Discretionary access control systems allow the owner of the information to decide who
can read, write, and execute a particular file or service. When users create and modify
files in their own home directories, their ability to do this is because they have been
granted discretionary access control over the files that they own. On end-user laptops and
desktops, discretionary access control systems are prevalent.
Mandatory access control systems do not allow the creator of the information to govern who can access it or modify data. Administrators and overseeing authorities predetermine who can access and modify data, systems, and resources. Mandatory access control systems are commonly used in military installations, financial institutions, and, because of HIPAA privacy laws, in medical institutions as well.
Role-based access control systems allow users to access systems and information based on their role within the organization. Role-based access can be applied to groups of people or to individuals. For example, you can allow everyone in a group named sysadmin access to privileged resources.
Rule-based access control systems allow users to access systems and information based
on pre-determined and configured rules. Rules can be established that allow access to all
end-users coming from a particular domain, host, network, or IP addresses. If an
employee changes their role within the organization, their existing authentication
credentials remain in effect and do not need to be re-configured. Using rules in
conjunction with roles adds greater flexibility because rules can be applied to people, as
well as devices.
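A small sketch of the idea, assuming a hypothetical rule set keyed on source network and domain; the addresses, domain, and rule structure are examples only.

import ipaddress

# Pre-configured rules (hypothetical): allow requests from the 10.0.0.0/8 network
# or from hosts in the corp.example.com domain.
RULES = [
    {"type": "network", "value": ipaddress.ip_network("10.0.0.0/8")},
    {"type": "domain",  "value": "corp.example.com"},
]

def is_allowed(source_ip, source_domain):
    for rule in RULES:
        if rule["type"] == "network" and ipaddress.ip_address(source_ip) in rule["value"]:
            return True
        if rule["type"] == "domain" and source_domain.endswith(rule["value"]):
            return True
    return False   # default deny

print(is_allowed("10.1.2.3", "host1.other.org"))        # True (matches the network rule)
print(is_allowed("192.0.2.7", "pc9.corp.example.com"))  # True (matches the domain rule)
print(is_allowed("192.0.2.7", "pc9.other.org"))         # False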

Access Control Models


Access control models provide a model for developers who need to implement access
control functionality in their software and devices. Instead of having to reinvent the
wheel for every system and design a complex access control system, developers can write
a system based on existing well thought-out models. For the Security+ exam, there are
three different types of access control models, which you need to be able to explain and
differentiate:
1. Mandatory Access Control (MAC)
2. Discretionary Access Control (DAC)
3. Role Based Access Control (RBAC)

Discretionary Access Control (DAC)


A widely used type of access control model is Discretionary Access Control (DAC). In a
DAC model, a subject has complete control over the objects that it owns and the
programs that it executes. For example, user Alice owns a file called mywork.doc. She
allows mywork.doc to be read by Bob and members of the Sales group and allows no one
else access to it. The better implementations of DAC provide a method to grant access on
a need-to-know basis by denying access to everyone by default. Access permissions must
be assigned explicitly to those who need access.
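A minimal sketch of owner-managed, default-deny permissions using the Alice/Bob example above; the class and method names are invented for illustration.

class DacObject:
    def __init__(self, name, owner):
        self.name = name
        self.owner = owner
        self.acl = {}                      # subject -> set of permissions

    def grant(self, actor, subject, perms):
        if actor != self.owner:            # only the owner may change the ACL
            raise PermissionError(f"{actor} does not own {self.name}")
        self.acl.setdefault(subject, set()).update(perms)

    def check(self, subject, perm):
        if subject == self.owner:
            return True                    # the owner has full control
        return perm in self.acl.get(subject, set())   # everyone else: default deny

doc = DacObject("mywork.doc", owner="alice")
doc.grant("alice", "bob", {"read"})
doc.grant("alice", "sales-group", {"read"})

print(doc.check("bob", "read"))      # True
print(doc.check("bob", "write"))     # False
print(doc.check("charlie", "read"))  # False: never granted, so denied by default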
Programs executed by a user will have the same permissions as the user who is executing
it. This implies that the security of the system depends on the applications that are being
executed and, therefore, when a security breach in an application takes place, it can affect
all the objects to which the user has access. This makes DAC very vulnerable to Trojan
Horses. For example, suppose subject Alice has read and write access to object file1.doc.
Charlie, a malicious attacker, could write a program that creates a new object file2.doc
when executed.
The program would grant Alice write access and Charlie read access. Charlie can disguise
the program as legitimate software and send it to Alice. When Alice runs the program, it
will have the same privileges as Alice. It could copy the content from file1.doc to
file2.doc, effectively exposing the content of file1.doc to Charlie. Imagine an
administrator executing the program; the attacker could obtain maximum privileges,
jeopardizing the security of the entire system.
Mandatory Access Control (MAC)
In Mandatory Access Control (MAC) models, the administrator manages access controls.
The administrator defines a policy, which users cannot modify. This policy indicates
which subject has access to which object. This access control model can increase the
level of security, because it is based on a policy that does not allow any operation not
explicitly authorized by an administrator. The MAC model is developed for and
implemented in systems in which confidentiality has the highest priority, such as in the
military. Subjects receive a clearance label and objects receive a classification label, also
referred to as security levels.
In the original MAC model according to Bell and LaPadula, access rights were granted
according to numeric access levels of subjects to objects that were labeled an access
level. For example, an administrator has access level 65535, Alice level 100, and Guest level 1. There are two files: file1.doc has a level of 2, file2.doc a level of 200. Alice can access only file1, Guest can access neither file1 nor file2, and the administrator can
access both files. The access level of the users has to be equal or higher than the object
they want to access.
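The level comparison described above can be sketched in a few lines; the subjects, objects, and numeric levels are the ones used in the example.

# Subject clearance levels and object classification levels from the example.
SUBJECT_LEVELS = {"administrator": 65535, "alice": 100, "guest": 1}
OBJECT_LEVELS = {"file1.doc": 2, "file2.doc": 200}

def can_access(subject, obj):
    # Access requires the subject's level to be equal to or higher than the object's.
    return SUBJECT_LEVELS[subject] >= OBJECT_LEVELS[obj]

print(can_access("alice", "file1.doc"))          # True  (100 >= 2)
print(can_access("alice", "file2.doc"))          # False (100 < 200)
print(can_access("guest", "file1.doc"))          # False (1 < 2)
print(can_access("administrator", "file2.doc"))  # True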
The Bell and LaPadula model was later expanded to what is also known as Multi-Level
Security (MLS). MLS, typically used in military environments, implements an extra
security layer for each object by using labels (i.e. "top secret", "secret", "confidential",
and "unclassified"). Only users located in the same layer, or a higher layer, can access the
objects. This works on a need-to-know basis, known as the principle of least privilege: users can only access the objects they need in order to do their job. Additionally, subjects cannot write down, which means they cannot write to objects or create new objects with a lower security label than their own. This prevents a subject from sharing secrets with a subject that has a lower security label, and hence keeps information confidential.
Role Based Access Control (RBAC)
The third main type of access control model is Role Based Access Control. In RBAC
models, an administrator defines a series of roles and assigns them to subjects. Different
roles can exist for system processes and ordinary users. Objects are set to be a certain
type, to which subjects with a certain role have access. This can save an administrator
from the tedious job of defining permissions per user.
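A minimal sketch of a role-to-permission mapping; the role names, operations, and user assignments are hypothetical.

# The administrator defines roles and the operations each role may perform...
ROLE_PERMISSIONS = {
    "db_admin":      {"backup_db", "restore_db"},
    "network_admin": {"edit_firewall", "view_logs"},
    "helpdesk":      {"reset_password", "view_logs"},
}

# ...and then assigns roles to users instead of assigning permissions one by one.
USER_ROLES = {
    "alice": {"db_admin"},
    "bob":   {"helpdesk"},
}

def is_permitted(user, operation):
    return any(operation in ROLE_PERMISSIONS[role] for role in USER_ROLES.get(user, ()))

print(is_permitted("alice", "backup_db"))    # True
print(is_permitted("bob", "edit_firewall"))  # False

Changing what helpdesk staff may do then requires updating one role, not every individual user.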
There is also a Rule-Based Access Control model which, to confuse matters a bit, is sometimes referred to as Rule-Based Role-Based Access Control (RB-RBAC). It includes mechanisms to dynamically assign roles to subjects based on their attributes and a set of rules defined by a security policy. For example, you are a subject on one network and you want access to objects in another network. The other network is on the other side of a router configured with access lists. The router can assign a certain role to you, based on your network address or protocol, which will determine whether or not you will be granted access.
RBAC
Role-Based Access Control (RBAC) is an access control method that governs what information computer users can access, the programs they can run, and the modifications they can make.
With role-based access control, access decisions are based on the roles that individual
users have as part of an organization. Users take on assigned roles (such as Unix administrator, Database administrator, Network Administrator, etc.). The process of
defining roles should be based on a thorough analysis of how an organization operates
and should include input from a wide spectrum of users in an organization.


The use of roles to control access can be an effective means for developing and enforcing
enterprise-specific security policies, and for streamlining the security management
process.
Users and Roles
Under the RBAC framework, users are granted membership into roles based on their
competencies and responsibilities in the organization. The operations that a user is
permitted to perform are based on the user's role. User membership into roles can be
revoked easily and new memberships established as job assignments dictate. Role
associations can be established when new operations are instituted, and old operations
can be deleted as organizational functions change and evolve. This simplifies the
administration and management of privileges; roles can be updated without updating the
privileges for every user on an individual basis. When a user is associated with a role, the
user can be given no more privilege than is necessary to perform the job. This concept of
least privilege requires identifying the user's job functions, determining the minimum set
of privileges required to perform that function, and restricting the user to a domain with
those privileges and nothing more. In less precisely controlled systems, this is often
difficult or costly to achieve. Someone assigned to a job category may be allowed more
privileges than needed because it is difficult to tailor access based on various attributes or
constraints. Since many of the responsibilities overlap between job categories, maximum
privilege for each job category could cause unlawful access. Roles and Role Hierarchies
Under RBAC, roles can have overlapping responsibilities and privileges; that is, users
belonging to different roles may need to perform common operations. Some general
operations may be performed by all employees. In this situation, it would be inefficient
and administratively cumbersome to specify repeatedly these general operations for each
role that gets created. Role hierarchies can be established to provide for the natural
structure of an enterprise. A role hierarchy defines roles that have unique attributes and
that may contain other roles; that is, one role may implicitly include the operations that
are associated with another role. In the network domain, a role Domain admin could
contain the roles of Operating system administrator and network administrator. This
means that members of the role Domain admin are implicitly associated with the operations
associated with the roles Operating system administrator and network administrator
without the administrator having to explicitly list the Operating system administrator and
network administrator operations. Role hierarchies are a natural way of organizing roles
to reflect authority, responsibility, and competency: The role in which the user is gaining
membership is not mutually exclusive with another role for which the user already
possesses membership. These operations and roles can be subject to organizational
policies or constraints. When operations overlap, hierarchies of roles can be established.
Instead of instituting costly auditing to monitor access, organizations can put constraints
on access through RBAC. For example, it may seem sufficient to allow a network admin to have access to the entire network if their access is monitored carefully. With RBAC, constraints can be placed on network admin access so that only the local LAN can be accessed.


Roles and Operations


Organizations can establish the rules for the association of operations with roles.
Operations can also be specified in a manner that can be used in the demonstration and
enforcement of laws or regulations. For example, a system admin can be provided with operations to create system users but not to create another system admin. An operation represents a
unit of control that can be referenced by an individual role, subject to regulatory
constraints within the RBAC framework. An operation can be used to capture complex
security-relevant details or constraints that cannot be determined by a simple mode of
access. The RBAC framework provides administrators with the capability to regulate
who can perform what actions, when, from where, in what order, and in some cases under
what relational circumstances: Only those operations that need to be performed by
members of a role are granted to the role. Granting of user membership to roles can be
limited. Some roles can only be occupied by a certain number of employees at any given
period of time. The role of manager, for example, can be granted to only one employee at
a time. Although an employee other than the manager may act in that role, only one
person may assume the responsibilities of a manager at any given time. A user can
become a new member of a role as long as the number of members allowed for the role is
not exceeded.
Advantages of RBAC
A properly-administered RBAC system enables users to carry out a broad range of
authorized operations, and provides great flexibility and breadth of application. System
administrators can control access at a level of abstraction that is natural to the way that
enterprises typically conduct business. This is achieved by statically and dynamically
regulating users' actions through the establishment and definition of roles, role
hierarchies, relationships, and constraints. Thus, once an RBAC framework is established
for an organization, the principal administrative actions are the granting and revoking of
users into and out of roles. This is in contrast to the more conventional and less intuitive
process of attempting to administer lower-level access control mechanisms directly (e.g.,
access control lists [ACLs], capabilities, or type enforcement entities) on an object-by-object basis.
Summary

Roles are assigned based on organizational structure, with emphasis on the organizational security policy.
Roles are assigned by the administrator based on relative relationships within the organization or user base. For instance, a manager would have certain authorized transactions over his employees. An administrator would have certain authorized transactions over his specific realm of duties (backup, account creation, etc.).
Each role is designated a profile that includes all authorized commands, transactions, and allowable information access.
Roles are granted permissions based on the principle of least privilege.
Roles are determined with a separation of duties in mind, so that a developer Role should not overlap a QA tester Role.
Roles are activated statically and dynamically as appropriate to certain relational triggers (help desk queue, security alert, initiation of a new project, etc.).
Roles can only be transferred or delegated using strict sign-offs and procedures.
Roles are managed centrally by a security administrator or project leader.

MAC
Mandatory access control (MAC) is an access policy determined by the system, not the
owner. MAC is used in multilevel systems that process highly sensitive data, such as
classified government and military information. A multilevel system is a single computer
system that handles multiple classification levels between subjects and objects.
MAC secures information by assigning sensitivity labels on information and comparing
this to the level of sensitivity a user is operating at. In general, MAC access control
mechanisms are more secure than DAC yet have trade offs in performance and
convenience to users. MAC mechanisms assign a security level to all information, assign
a security clearance to each user, and ensure that all users only have access to that data
for which they have a clearance. MAC is usually appropriate for extremely secure
systems including multilevel secure military applications or mission critical data
applications. A MAC access control model often exhibits one or more of the following
attributes.
# Only administrators, not data owners, make changes to a resource's security label.
# All data is assigned a security level that reflects its relative sensitivity, confidentiality, and protection value.
# All users can read from a lower classification than the one they are granted (a "secret" user can read an unclassified document).
# All users can write to a higher classification (a "secret" user can post information to a Top Secret resource).
# All users are given read/write access to objects only of the same classification (a "secret" user can only read/write to a secret document).
# Access is authorized or restricted to objects based on the time of day, depending on the labeling on the resource and the user's credentials (driven by policy).
# Access is authorized or restricted to objects based on the security characteristics of the HTTP client (e.g. SSL bit length, version information, originating IP address or domain, etc.).
For example, DAC mechanisms check the validity of the credentials given them at the
discretion of the user, and mandatory access controls (MAC) validate aspects that the
user cannot control. For instance, anyone can give you a username and password and you
can then log in with them; which username and password you supply is at your
discretion, and the system can't tell you apart from the real owner. Your DNA is
something you can't change, though, and a control system that only allowed access to
your pattern would never work for anyone else--and you couldn't pretend to be someone
else, either. This makes such a system a mandatory (also called non-discretionary) access
control system.

DAC
Discretionary Access Control (DAC) is a kind of access control system that permits
specific entities (people, processes, devices) to access system resources according to
permissions for each particular entity. The controls are discretionary in the sense that a
subject with a certain access permission is capable of passing that permission (perhaps
indirectly) on to any other subject.
Discretionary Access Control means that each object has an owner and the owner of the
object gets to choose its access control policy. DAC mechanisms check the validity of the
credentials given them at the discretion of the user. For instance, anyone can give you a
username and password and you can then log in with them; which username and
password you supply is at your discretion, and the system can't tell you apart from the
real owner.
For example, Standard Linux file permissions use the Discretionary Access Control
(DAC) model. Under DAC, files are owned by a user and that user has full control over
them, including the ability to grant access permissions to other users. The root account
has full control over every file on the entire system. For example, John, who has root access, can be allowed to both read and change a file, while Jim can be restricted to only reading the file if he doesn't own it.
11. Wireless security
11.1 WTLS (Wireless Transport Layer Security)
Wireless Transport Layer Security (WTLS) is the security level for Wireless Application
Protocol (WAP) applications. WTLS, designed specifically for the wireless environment,
is needed because the client and the server must be authenticated in order for wireless
transactions to remain secure and because the connection needs to be encrypted.


Based on Transport Layer Security (TLS) v1.0 (a security layer used in the Internet,
equivalent to Secure Socket Layer 3.1), WTLS was developed to address the problematic
issues surrounding mobile network devices - such as limited processing power and
memory capacity, and low bandwidth - and to provide adequate authentication, data
integrity, and privacy protection mechanisms. WTLS provides a public-key
cryptography-based security mechanism.
Wireless transactions, such as those between a user and their bank, require stringent
authentication and encryption to ensure security to protect the communication from
attack during data transmission. Because mobile networks do not provide end-to-end
security, TLS had to be modified to address the special needs of wireless users.
Designed to support datagrams in a high latency, low bandwidth environment, WTLS
provides an optimized handshake through dynamic key refreshing, which allows
encryption keys to be regularly updated during a secure session.

11.2 802.11 and 802.11x


802.11 and 802.11x: 802.11 refers to a family of specifications developed by the IEEE
for wireless LAN technology. 802.11 specifies an over-the-air interface between a
wireless client and a base station or between two wireless clients. The IEEE accepted the
specification in 1997. IEEE 802.11 also known by the brand Wi-Fi, denotes a set of
Wireless LAN/WLAN standards developed by working group 11 of the IEEE LAN/MAN
Standards Committee (IEEE 802). The term 802.11x is also used to denote this set of
standards and is not to be mistaken for any one of its elements. There is no single 802.11x
standard. The term IEEE 802.11 is also used to refer to the original 802.11, which is now sometimes called "802.11 legacy".
There are several specifications in the 802.11 family:
802.11 -- applies to wireless LANs and provides 1 or 2 Mbps transmission in the 2.4 GHz
band using either frequency hopping spread spectrum (FHSS) or direct sequence spread
spectrum (DSSS).
802.11a -- an extension to 802.11 that applies to wireless LANs and provides up to 54
Mbps in the 5GHz band. 802.11a uses an orthogonal frequency division multiplexing
encoding scheme rather than FHSS or DSSS.
802.11b (also referred to as 802.11 High Rate or Wi-Fi) -- an extension to 802.11 that
applies to wireless LANS and provides 11 Mbps transmission (with a fallback to 5.5, 2
and 1 Mbps) in the 2.4 GHz band. 802.11b uses only DSSS. 802.11b was a 1999
ratification to the original 802.11 standard, allowing wireless functionality comparable to
Ethernet.


802.11g -- applies to wireless LANs and provides 20+ Mbps in the 2.4 GHz band.
11.3 WEP / WAP (Wired Equivalent Privacy / Wireless Application Protocol)
Wireless Application Protocol or WAP is an open international standard for applications
that use wireless communication. Its principal application is to enable access to the
Internet from a mobile phone or PDA. WAP is a specification for a set of communication
protocols to standardize the way that wireless devices, such as cellular telephones and
radio transceivers, can be used for Internet access, including e-mail, the World Wide Web,
newsgroups, and instant messaging. While Internet access has been possible in the past,
different manufacturers have used different technologies. In the future, devices and
service systems that use WAP will be able to interoperate.
A WAP browser is designed to provide all of the basic services of a computer based web
browser but simplified to operate within the restrictions of a mobile phone. WAP is now
the protocol used for the majority of the world's mobile internet sites, known as WAP
sites. The Japanese i-mode system is currently the only other major competing wireless
data protocol.
Mobile internet sites, or WAP sites, are websites written in, or dynamically converted to,
WML (Wireless Markup Language) and accessed via the WAP browser.
WAP protocol suite:
The WAP Forum proposed a protocol suite that would allow the interoperability of WAP
equipment and software with many different network technologies; the rationale for this
was to build a single platform for competing network technologies such as GSM and IS-95 (also known as CDMA) networks.
+-------------------------------------------+
| Wireless Application Environment (WAE)    |
+-------------------------------------------+
| Wireless Session Protocol (WSP)           |
+-------------------------------------------+
| Wireless Transaction Protocol (WTP)       |
+-------------------------------------------+
| Wireless Transport Layer Security (WTLS)  |
+-------------------------------------------+
| Wireless Datagram Protocol (WDP)          |
+-------------------------------------------+
|     *** Any Wireless Data Network ***     |
+-------------------------------------------+
The bottom-most protocol in the suite is the WAP Datagram Protocol (WDP), which is
an adaptation layer that makes every data network look a bit like UDP to the upper layers
by providing unreliable transport of data with two 16-bit port numbers (origin and
destination). WDP is considered by all the upper layers as one and the same protocol, which
has several "technical realizations" on top of other "data bearers" such as SMS, USSD,
etc. On native IP bearers such as GPRS, UMTS packet-radio service, or PPP on top of a
circuit-switched data connection, WDP is in fact exactly UDP.
WTLS provides a public-key cryptography-based security mechanism similar to TLS.
Its use is optional.
WTP provides transaction support (reliable request/response) that is adapted to the wireless world. WTP handles packet loss, which is common in 2G wireless technologies in most radio conditions, more effectively than TCP; TCP misinterprets such loss as network congestion.
Finally, WSP is best thought of on first approach as a compressed version of HTTP. This
protocol suite allows a terminal to emit requests that have an HTTP or HTTPS equivalent
to a WAP "gateway"; the gateway translates requests into plain HTTP.

Wired Equivalent Privacy (WEP):


Wired Equivalent Privacy (WEP) is a scheme that is part of the IEEE 802.11 wireless
networking standard to secure IEEE 802.11 wireless networks (also known as Wi-Fi
networks). Because a wireless network broadcasts messages using radio, it is particularly
susceptible to eavesdropping.
WEP was intended to provide comparable confidentiality to a traditional wired network
(in particular it does not protect users of the network from each other), hence the name.
Several serious weaknesses were identified by cryptanalysts (any WEP key can be cracked with readily available software in two minutes or less), and WEP was superseded by Wi-Fi Protected Access (WPA) in 2003, and then by the full IEEE 802.11i standard (also known as WPA2) in 2004. Despite the weaknesses, WEP provides
a level of security that can deter casual snooping.
WEP is part of the IEEE 802.11 standard ratified in September 1999. WEP uses the
stream cipher RC4 for confidentiality and the CRC-32 checksum for integrity.
Basic WEP encryption: RC4 keystream XORed with plaintext.
Standard 64-bit WEP uses a 40-bit key, which is concatenated with a 24-bit initialization vector (IV) to form the
RC4 traffic key. At the time that the original WEP standard was being drafted, U.S.
Government export restrictions on cryptographic technology limited the keysize. Once
the restrictions were lifted, all of the major manufacturers eventually implemented an
extended 128-bit WEP protocol using a 104-bit key size.
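As a rough, self-contained sketch of the per-frame construction described above (the IV concatenated with the key seeds RC4, and the keystream is XORed with the plaintext plus its CRC-32 integrity check value), the following Python code illustrates the mechanism; it is for illustration only, since WEP is broken and should never be used to protect real traffic.

import zlib

def rc4_keystream(seed, length):
    # Key-scheduling algorithm (KSA)
    s = list(range(256))
    j = 0
    for i in range(256):
        j = (j + s[i] + seed[i % len(seed)]) & 0xFF
        s[i], s[j] = s[j], s[i]
    # Pseudo-random generation algorithm (PRGA)
    out, i, j = [], 0, 0
    for _ in range(length):
        i = (i + 1) & 0xFF
        j = (j + s[i]) & 0xFF
        s[i], s[j] = s[j], s[i]
        out.append(s[(s[i] + s[j]) & 0xFF])
    return bytes(out)

def wep_encrypt(key40, iv24, plaintext):
    icv = zlib.crc32(plaintext).to_bytes(4, "little")    # integrity check value (CRC-32)
    data = plaintext + icv
    keystream = rc4_keystream(iv24 + key40, len(data))   # 24-bit IV || 40-bit key seeds RC4
    ciphertext = bytes(a ^ b for a, b in zip(data, keystream))
    return iv24 + ciphertext                              # the IV is transmitted in the clear

frame = wep_encrypt(key40=b"\x01\x02\x03\x04\x05", iv24=b"\xaa\xbb\xcc",
                    plaintext=b"hello, wireless world")
print(frame.hex())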
A 128-bit WEP key is almost always entered by users as a string of 26 Hexadecimal
(Hex) characters (0-9 and A-F). Each character represents 4 bits of the key. 4 × 26 = 104 bits; adding the 24-bit IV brings us to what we call a "128-bit WEP key". A 256-bit WEP system is available from some vendors, and as with the above-mentioned system, 24 bits of that is for the IV, leaving 232 actual bits for protection. This is typically entered as 58 hexadecimal characters: (58 × 4 = 232 bits) + 24 IV bits = 256 bits of WEP protection.
Key size is not the only major security limitation in WEP. Cracking a longer key requires
interception of more packets, but there are active attacks that stimulate the necessary
traffic. There are other weaknesses in WEP, including the possibility of IV collisions and
altered packets, that are not helped at all by a longer key.
