Cryptography (or cryptology) derives from the Greek kryptos, meaning "hidden" or "secret", and graphein,
meaning "writing" or "study". Cryptography is the practice and study of hiding information. Cryptography
is probably the most important aspect of communications security and is becoming increasingly important
as a basic building block for computer security.
One of the earliest known uses of a cipher was by Julius Caesar, who did not trust his messengers and for
this reason created a system in which each character in his messages was replaced by the character three
positions ahead of it in the Roman alphabet.
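This Caesar shift can be sketched in a few lines of Python. It is a toy illustration, not a secure cipher; the shift amount is kept as a parameter, with Caesar's value of three as the default.

```python
# Toy Caesar cipher: each letter is replaced by the letter a fixed
# number of positions ahead of it, wrapping around at the end of
# the alphabet. Non-letters are passed through unchanged.
SHIFT = 3

def caesar_encrypt(plaintext: str, shift: int = SHIFT) -> str:
    result = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation unchanged
    return ''.join(result)

def caesar_decrypt(ciphertext: str, shift: int = SHIFT) -> str:
    # Decryption is encryption with the shift reversed.
    return caesar_encrypt(ciphertext, -shift)

print(caesar_encrypt("attack at dawn"))  # dwwdfn dw gdzq
print(caesar_decrypt("dwwdfn dw gdzq"))  # attack at dawn
```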
Basically, cryptography is the science and art of creating secret codes. It includes techniques such as
microdots and merging words with images, along with other ways to hide information in storage and in
transit. Modern cryptography follows a strongly scientific approach and concerns itself with four
objectives: confidentiality, integrity, non-repudiation and authentication.
Plaintext: This is the original intelligible message or data that is fed into the algorithm as input.
Encryption algorithm: The encryption algorithm performs various substitutions and
transformations on the plaintext.
Secret key: The secret key is also input to the encryption algorithm. The key is a value
independent of the plaintext and of the algorithm. The algorithm will produce a different output
depending on the specific key being used at the time. The exact substitutions and transformations
performed by the algorithm depend on the key.
Cipher text: This is the scrambled message produced as output. It depends on the plaintext and
the secret key. For a given message, two different keys will produce two different cipher texts.
The cipher text is an apparently random stream of data and, as it stands, is unintelligible.
Decryption algorithm: This is essentially the encryption algorithm run in reverse. It takes the
cipher text and the secret key and produces the original plaintext.
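The model described above, in which the plaintext and a secret key are fed to the encryption algorithm and the same key drives decryption, can be sketched with a toy repeating-key XOR cipher. This is for illustration only; XOR with a short repeating key is trivially breakable and is not a real cipher.

```python
# Minimal sketch of the symmetric model: the same secret key is used
# by the encryption algorithm and by the decryption algorithm run in
# reverse. Because XOR is its own inverse, one function does both.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR each data byte with the repeating key stream.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

plaintext = b"transfer 100 dollars"
c1 = xor_cipher(plaintext, b"key-one")
c2 = xor_cipher(plaintext, b"key-two")

assert c1 != c2                                 # different keys yield different cipher texts
assert xor_cipher(c1, b"key-one") == plaintext  # the correct key recovers the plaintext
```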
3. HASHING TECHNIQUE
Hash algorithms are one-way mathematical algorithms that take an arbitrary length input and produce a
fixed length output string. A hash value is a unique and extremely compact numerical representation of a
piece of data.
Hashing algorithms store or transmit arbitrary-length data without using any secret or public key; they
rely on the message-digest technique to protect the message against cryptanalysts. MD5 (Message
Digest 5), for instance, produces a 128-bit digest. It is computationally infeasible to find two distinct inputs that
hash to the same value (or ``collide''). Hash functions have some very useful applications. They allow a
party to prove they know something without revealing what it is, and hence are seeing widespread use in
password schemes. They can also be used in digital signatures and integrity protection.
ATTACKS ON ENCRYPTION:
There are two general approaches to attacking a conventional encryption scheme:
CRYPTANALYSIS:
Cryptanalytic attacks rely on the nature of the algorithm plus perhaps some knowledge of the general
characteristics of the plaintext, or even some sample plaintext-cipher text pairs. In other words,
cryptography is the science and art of creating secret codes, while cryptanalysis is the science and art of
breaking those codes.
This type of attack exploits the characteristics of the algorithm to attempt to deduce a specific plaintext or
to deduce the key being used.
BRUTE-FORCE ATTACK:
The attacker tries every possible key on a piece of cipher text until an intelligible translation into plaintext
is obtained. On average, half of all possible keys must be tried to achieve success.
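A brute-force attack can be made concrete with a Caesar-style shift cipher, since its key space contains only 25 useful shifts. The attacker tries each key and checks the output for intelligibility; here the "intelligibility test" is simply looking for a known crib word, which is an assumption for illustration.

```python
# Brute-force attack on a Caesar-style shift cipher: try every key
# and test each candidate plaintext for a recognizable word.
def shift_text(text: str, shift: int) -> str:
    return ''.join(
        chr((ord(c) - ord('a') + shift) % 26 + ord('a')) if c.isalpha() else c
        for c in text
    )

ciphertext = "dwwdfn dw gdzq"
for key in range(1, 26):
    candidate = shift_text(ciphertext, -key)   # undo a suspected shift of `key`
    if "attack" in candidate:                  # crude intelligibility test (a crib)
        print(f"key={key}: {candidate}")
        break
# key=3: attack at dawn
```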
The following list summarizes the various types of cryptanalytic attacks, based on the amount of
information known to the cryptanalyst.

Cipher text only: the encryption algorithm and the cipher text.
Known plaintext: the encryption algorithm, the cipher text, and one or more plaintext-cipher text
pairs formed with the secret key.
Chosen plaintext: the encryption algorithm, the cipher text, and a plaintext message chosen by the
cryptanalyst, together with its corresponding cipher text generated with the secret key.
Chosen cipher text: the encryption algorithm, the cipher text, and a purported cipher text chosen by
the cryptanalyst, together with its corresponding decrypted plaintext generated with the secret key.
Chosen text: the encryption algorithm, the cipher text, a plaintext message chosen by the cryptanalyst
together with its corresponding cipher text generated with the secret key, and a purported cipher text
chosen by the cryptanalyst together with its corresponding decrypted plaintext generated with the
secret key.
3. L2TP VPN:
L2TP, or Layer 2 Tunneling Protocol, is similar to PPTP in that it does not provide encryption itself and
relies on the PPP protocol to do so. The difference between PPTP and L2TP is that the latter provides not
only data confidentiality but also data integrity. L2TP was developed by Microsoft and Cisco.
4. IPsec:
A tried and trusted protocol which sets up a tunnel from the remote site into your central site. As the name
suggests, it is designed for IP traffic. IPsec requires expensive, time-consuming client installations, which
can be considered an important disadvantage. The two IPsec protocols are:
Authentication Header (AH); provides data authentication, data integrity and replay protection for
data.
Encapsulating Security Payload (ESP); provides data authentication, data confidentiality and
integrity, and replay protection.
5. SSL:
SSL, or Secure Sockets Layer, is a VPN accessible via HTTPS through a web browser. SSL creates a secure
session from your PC browser to the application server you are accessing. The major advantage of SSL is
that it does not need any software installed, because it uses the web browser as the client application.
6. MPLS VPN:
MPLS (Multi-Protocol Label Switching) was originally designed to improve the store-and-forward speed
of routers. MPLS was created as a team effort on the part of Ipsilon, Cisco, IBM, and Toshiba. These
companies worked together as part of the IETF (Internet Engineering Task Force) and MPLS was born.
MPLS is not well suited to remote access for individual users, but for site-to-site connectivity it is the
most flexible and scalable option. These systems are essentially ISP-tuned VPNs, where two or more sites
are connected to form a VPN using the same ISP. An MPLS network is not as easy to set up or extend as
the others, and hence is bound to be more expensive. MPLS does perform better than a site-to-site VPN
because there is less overhead and the routing between sites is optimized by static routes from your ISP.
Larger ISPs can even bring your data center (if you have one) into your MPLS network. A real MPLS
network should provide ping times between sites in under 10 ms. Traditional site-to-site VPNs can range
anywhere from 30 ms (at best) to over 100 ms.
7. Hybrid VPN:
A few companies have managed to combine features of SSL and IPsec, as well as other VPN types.
Hybrid VPN servers are able to accept connections from multiple types of VPN clients. They offer higher
flexibility at both the client and server levels, but are bound to be expensive.
VPN connections rely on three mechanisms:
Encapsulation
Authentication
Data encryption
ENCAPSULATION:
With VPN technology, private data is encapsulated with a header that contains routing information that
allows the data to traverse the transit network. For examples of encapsulation, see VPN Tunneling
Protocols.
AUTHENTICATION:
Authentication for VPN connections takes three different forms:
1. User-level authentication by using PPP authentication:
To establish the VPN connection, the VPN server authenticates the VPN client that is attempting
the connection by using a Point-to-Point Protocol (PPP) user-level authentication method and
verifies that the VPN client has the appropriate authorization. If mutual authentication is used, the
VPN client also authenticates the VPN server, which provides protection against computers that
are masquerading as VPN servers.
2. Computer-level authentication by using Internet Key Exchange (IKE):
To establish an Internet Protocol security (IPsec) security association, the VPN client and the
VPN server use the IKE protocol to exchange either computer certificates or a preshared key. In
either case, the VPN client and server authenticate each other at the computer level. Computer
certificate authentication is highly recommended because it is a much stronger authentication
method. Computer-level authentication is only performed for L2TP/IPsec connections.
3. Data origin authentication and data integrity:
To verify that the data sent on the VPN connection originated at the other end of the connection
and was not modified in transit, the data contains a cryptographic checksum based on an
encryption key known only to the sender and the receiver. Data origin authentication and data
integrity are only available for L2TP/IPsec connections.
DATA ENCRYPTION:
To ensure confidentiality of the data as it traverses the shared or public transit network, the data is
encrypted by the sender and decrypted by the receiver. The encryption and decryption processes depend
on both the sender and the receiver using a common encryption key.
Intercepted packets sent along the VPN connection in the transit network are unintelligible to anyone who
does not have the common encryption key. The length of the encryption key is an important security
parameter. You can use computational techniques to determine the encryption key. However, such
techniques require more computing power and computational time as the encryption keys get larger.
Therefore, it is important to use the largest possible key size to ensure data confidentiality.
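The scaling described above can be made concrete: the number of possible keys grows exponentially with key length, so exhaustive search quickly becomes impractical. The attack rate below (10^12 keys tried per second) is an assumed figure chosen purely for illustration.

```python
# Why key length matters: exhaustive search must try, on average,
# half of all possible keys, and the key count doubles with each
# added bit. The attack rate here is an assumed round number.
RATE = 10 ** 12              # keys tried per second (assumption)
SECONDS_PER_YEAR = 365 * 24 * 3600

for bits in (56, 128, 256):
    keys = 2 ** bits
    years = keys / 2 / RATE / SECONDS_PER_YEAR  # average-case search time
    print(f"{bits}-bit key: ~{years:.3g} years to brute-force")
```

A 56-bit key falls in under a day at this rate, while a 128-bit key already requires astronomically many years, which is why modern ciphers use 128-bit or larger keys.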
BENEFITS OF VPN
A well designed VPN can greatly benefit a company. It provides many benefits such as:
A VPN can extend geographic connectivity.
A VPN can improve the network as well as organizational security.
A VPN can reduce operational cost as compared to traditional WAN.
A VPN can improve the overall productivity of the organization.
A VPN can simplify the network topology and provide telecommuter support.
Cable modems enable fast connectivity and are relatively cost efficient.
Information is easily and speedily accessible to off-site users in public places via Internet
availability and connectivity.
VOLUNTARY TUNNELING: With voluntary tunneling, the client starts the process of
initiating a connection with the VPN server. One of the requirements of voluntary tunneling is an
existing connection between the server and client. This is the connection that the VPN client
utilizes to create a tunneled connection with the VPN server.
COMPULSORY TUNNELING: With compulsory tunneling, a connection is created between:
o Two VPN servers
o Two VPN access devices (VPN routers)
In this case, the client dials in to the remote access server. The remote access server then creates a
tunnel to a VPN server to tunnel the data, thereby compelling the client to use a VPN tunnel to
connect to the remote resources.
VPN tunnels can be created at the following layers of the Open Systems Interconnection (OSI) reference
model:
Data-Link Layer (layer 2): VPN protocols that operate at this layer are Point-to-Point Tunneling
Protocol (PPTP) and Layer 2 Tunneling Protocol (L2TP).
Network Layer (layer 3): IPsec can operate as a VPN protocol at the Network layer of the OSI
reference model.
Tunnel maintenance: This involves both the creation and management of the tunnel.
VPN data transfer: This relates to the actual sending of encapsulated VPN data through the
tunnel.
Common PPP user-level authentication protocols include:
PAP
CHAP
MS-CHAP
EAP
DIGITAL SIGNATURE:
A digital signature is an electronic analogue of a written signature; the digital signature can be
used to provide assurance that the claimed signatory signed the information. In addition, a digital
signature may be used to detect whether or not the information was modified after it was signed
(i.e., to verify the integrity of the signed data). These assurances may be obtained whether the data
was received in a transmission or retrieved from storage. A properly implemented digital signature
algorithm can provide these services. Examples include signature schemes based on RSA, DSA,
the Rabin signature algorithm, and undeniable signatures.
A digital signature or digital signature scheme is a mathematical scheme for demonstrating the
authenticity of a digital message or document. A valid digital signature gives a recipient reason
to believe that the message was created by a known sender, and that it was not altered in transit.
Digital signatures are commonly used for software distribution, financial transactions, and in
other cases where it is important to detect forgery or tampering.
Digital signatures employ a type of asymmetric cryptography. For messages sent through a nonsecure channel, a properly implemented digital signature gives the receiver reason to believe the
message was sent by the claimed sender. Digital signatures are equivalent to traditional
handwritten signatures in many respects; properly implemented digital signatures are more
difficult to forge than the handwritten type. Digital signature schemes in the sense used here are
cryptographically based, and must be implemented properly to be effective. Digital signatures
can also provide non-repudiation, meaning that the signer cannot successfully claim they did not
sign a message, while also claiming their private key remains secret; further, some nonrepudiation schemes offer a time stamp for the digital signature, so that even if the private key is
exposed, the signature is valid nonetheless. Digitally signed messages may be anything
represent-able as a bit string: examples include electronic mail, contracts, or a message sent via
some other cryptographic protocol.
A digital signature scheme typically consists of three algorithms:
A key generation algorithm that selects a private key uniformly at random from a set of
possible private keys. The algorithm outputs the private key and a corresponding public
key.
A signing algorithm that, given a message and a private key, produces a signature.
A signature verifying algorithm that, given a message, public key and a signature, either
accepts or rejects the message's claim to authenticity.
Figure: Digital signature generation and verification
Two main properties are required. First, a signature generated from a fixed message and fixed
private key should verify the authenticity of that message by using the corresponding public key.
Secondly, it should be computationally infeasible to generate a valid signature for a party who
does not possess the private key.
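The three algorithms and the two properties above can be sketched with a toy "hash-then-sign" RSA scheme. The tiny parameters (p = 61, q = 53, giving n = 3233) and the messages are made-up examples; real deployments use keys of 2048 bits or more with padding such as RSASSA-PSS.

```python
# Toy hash-then-sign RSA signature with textbook-sized parameters,
# purely to show key generation, signing with the private key, and
# verification with the public key. NOT secure at this key size.
import hashlib

# Key generation (precomputed for n = 61 * 53 = 3233):
# e * d = 17 * 2753 = 46801, and 46801 mod phi(n)=3120 equals 1.
n, e, d = 3233, 17, 2753   # public key (n, e), private key d

def toy_hash(message: bytes) -> int:
    # Reduce a real hash into the tiny signing domain [0, n).
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    return pow(toy_hash(message), d, n)               # uses the private key

def verify(message: bytes, signature: int) -> bool:
    return pow(signature, e, n) == toy_hash(message)  # uses only the public key

msg = b"pay Alice 100"
sig = sign(msg)
assert verify(msg, sig)                      # a valid signature is accepted
assert not verify(b"pay Mallory 100", sig)   # an altered message is rejected
```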
USE OF DIGITAL SIGNATURE:
As organizations move away from paper documents with ink signatures or authenticity stamps,
digital signatures can provide added assurance of the provenance, identity, and status of an
electronic document, as well as acknowledging informed consent and approval by a
signatory. The United States Government Printing Office (GPO) publishes electronic versions of
the budget, public and private laws, and congressional bills with digital signatures. Universities
including Penn State, University of Chicago, and Stanford are publishing electronic student
transcripts with digital signatures.
Below are some common reasons for applying a digital signature to communications:
Authentication
Although messages may often include information about the entity sending a message, that
information may not be accurate. Digital signatures can be used to authenticate the source of
messages. When ownership of a digital signature secret key is bound to a specific user, a valid
signature shows that the message was sent by that user. The importance of high confidence in
sender authenticity is especially obvious in a financial context. For example, suppose a bank's
branch office sends instructions to the central office requesting a change in the balance of an
account. If the central office is not convinced that such a message is truly sent from an
authorized source, acting on such a request could be a grave mistake.
Integrity
In many scenarios, the sender and receiver of a message may have a need for confidence that the
message has not been altered during transmission. Although encryption hides the contents of a
message, it may be possible to change an encrypted message without understanding it. (Some
encryption algorithms, known as nonmalleable ones, prevent this, but others do not.) However, if
a message is digitally signed, any change in the message after signature will invalidate the
signature. Furthermore, there is no efficient way to modify a message and its signature to
produce a new message with a valid signature, because this is still considered to be
computationally infeasible for most cryptographic hash functions.
Non-repudiation
Non-repudiation, or more specifically non-repudiation of origin, is an important aspect of digital
signatures. By this property an entity that has signed some information cannot at a later time
deny having signed it. Similarly, access to the public key only does not enable a fraudulent party
to fake a valid signature.
INTRODUCTION TO FIREWALL:
The Internet has made a large amount of information available to the average computer user at home, in
business, and in education. For many people, having access to this information is no longer just an
advantage; it is essential. However, connecting a private network to the Internet can expose critical or
confidential data to malicious attack from anywhere in the world. Intruders could gain access to your
site's private information or interfere with your use of your own systems. Users who connect their
computers to the Internet must be aware of these dangers, their implications and how to protect their data
and their critical systems.
Therefore, network security is the main concern here, and firewalls provide this security. Internet
firewalls keep the flames of Internet hell out of your network, or keep the members of your LAN pure
by denying them access to all the evil Internet temptations.
Figure: Firewall
A firewall is a hardware device or a software program running on the secure host computer that sits
between the two entities and controls access between them. Here the two entities are nothing but a private
network and the public network like Internet.
A firewall is a secure and trusted machine that sits between a private network and a public
network. The firewall machine is configured with a set of rules that determine which network
traffic will be allowed to pass and which will be blocked or refused. In some large organizations,
you may even find a firewall located inside their corporate network to segregate sensitive areas
of the organization from other employees. Many cases of computer crime occur from within an
organization, not just from outside. Firewalls can be implemented in both hardware and software, or a
combination of both. Firewalls are frequently used to prevent unauthorized Internet users from accessing
private networks connected to the Internet, especially intranets. All messages entering or leaving the
intranet pass through the firewall, which examines each message and blocks those that do not meet the
specified security criteria.
Firewalls can be constructed in quite a variety of ways. The most sophisticated arrangement
involves a number of separate machines and is known as a perimeter network. Two machines act
as "filters" called chokes to allow only certain types of network traffic to pass, and between these
chokes reside network servers such as a mail gateway or a World Wide Web proxy server. This
configuration can be very safe and easily allows quite a great range of control over who can
connect both from the inside to the outside, and from the outside to the inside. This sort of
configuration might be used by large organizations.
Typically though, firewalls are single machines that serve all of these functions. These are a little
less secure, because if there is some weakness in the firewall machine itself that allows people to
gain access to it, the whole network security has been breached. Nevertheless, these types of
firewalls are cheaper and easier to manage than the more sophisticated arrangement just
described.
NEED OF FIREWALLS:
The general reasoning behind firewall usage is that without a firewall, a subnet's systems expose
themselves to inherently insecure services such as NFS or NIS and to probes and attacks from hosts
elsewhere on the network. In a firewall-less environment, network security relies totally on host security
and all hosts must, in a sense, cooperate to achieve a uniformly high level of security. The larger the
subnet, the less manageable it is to maintain all hosts at the same level of security. As mistakes and
lapses in security become more common, break-ins occur not as the result of complex attacks, but
because of simple errors in configuration and inadequate passwords.
A firewall approach provides numerous advantages to sites by helping to increase overall host security.
The following sections summarize the primary benefits of using a firewall.
Concentrated Security
Enhanced Privacy
Logging and Statistics on Network Use and Misuse
Policy Enforcement
A firewall can also enforce fine-grained policy: if, for example, a user requires little or no network
access to her desktop workstation, then a firewall can enforce this policy.
3. CONCENTRATED SECURITY:
A firewall can actually be less expensive for an organization in that all or most modified software and
additional security software could be located on the firewall systems as opposed to being distributed on
many hosts. In particular, one-time password systems and other add-on authentication software could be
located at the firewall as opposed to each system that needed to be accessed from the Internet.
Other solutions to network security such as Kerberos involve modifications at each host system. While
Kerberos and other techniques should be considered for their advantages and may be more appropriate
than firewalls in certain situations, firewalls tend to be simpler to implement in that only the firewall need
run specialized software.
4. ENHANCED PRIVACY:
Privacy is of great concern to certain sites, since what would normally be considered innocuous
information might actually contain clues that would be useful to an attacker. Using a firewall, some sites
wish to block services such as finger and Domain Name Service. Finger displays information about users
such as their last login time, whether they've read mail, and other items. But, finger could leak
information to attackers about how often a system is used, whether the system has active users connected,
and whether the system could be attacked without drawing attention. Firewalls can also be used to block
DNS information about site systems, thus the names and IP addresses of site systems would not be
available to Internet hosts. Some sites feel that by blocking this information, they are hiding information
that would otherwise be useful to attackers.
5. LOGGING AND STATISTICS ON NETWORK USE, MISUSE:
If all access to and from the Internet passes through a firewall, the firewall can log accesses and provide
valuable statistics about network usage. A firewall with appropriate alarms that sound when suspicious
activity occurs can also provide details on whether the firewall and network are being probed or attacked.
It is important to collect network usage statistics and evidence of probing for a number of reasons. Of
primary importance is knowing whether the firewall is withstanding probes and attacks, and determining
whether the controls on the firewall are adequate. Network usage statistics are also important as input into
network requirements studies and risk analysis activities.
6. POLICY ENFORCEMENT:
Lastly, but perhaps most importantly, a firewall provides the means for implementing and enforcing a
network access policy. In effect, a firewall provides access control to users and services. Thus, a network
access policy can be enforced by a firewall, whereas without a firewall, such a policy depends entirely on
the cooperation of users. A site may be able to depend on its own users for their cooperation; however it
cannot nor should not depend on Internet users in general.
TYPES OF FIREWALLS:
Firewalls fall into four main categories:
1. Packet filtering firewalls,
2. Circuit-level gateways,
3. Application gateways,
4. Stateful multilayer inspection firewalls.
1. Packet Filtering Firewalls:
These firewalls work at the network layer of OSI model, or IP layer of TCP/IP. They are usually part of a
router. A router is a device that receives packets from one network and forwards them to another network.
In a packet filtering firewall, each packet is compared to a set of criteria before it is forwarded. Depending
on the packet and the criteria, the firewall can drop the packet, forward it or send a message to the
originator. Rules can include source and destination IP addresses, source and destination port number and
type of the protocol embedded in that packet. These firewalls often contain an ACL (Access Control List)
to restrict who gains access to which computers and networks.
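The rule matching a packet filter performs can be sketched as follows. The rules, addresses, and field names below are made-up examples for illustration, not any vendor's ACL syntax.

```python
# Sketch of packet-filter rule matching: each packet's header fields
# are compared against an ordered rule list (a simplified ACL); the
# first matching rule decides the action, with a default-deny policy.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str          # "allow" or "drop"
    src: str             # exact source IP, or "*" for any
    dst: str             # exact destination IP, or "*" for any
    port: Optional[int]  # destination port, None for any
    proto: str           # "tcp", "udp", or "*"

RULES = [
    Rule("drop", "192.168.1.66", "*", None, "*"),  # blocked host, checked first
    Rule("allow", "*", "10.0.0.5", 80, "tcp"),     # public web server
]

def matches(rule: Rule, pkt: dict) -> bool:
    return ((rule.src == "*" or rule.src == pkt["src"])
            and (rule.dst == "*" or rule.dst == pkt["dst"])
            and (rule.port is None or rule.port == pkt["port"])
            and (rule.proto == "*" or rule.proto == pkt["proto"]))

def filter_packet(pkt: dict) -> str:
    for rule in RULES:
        if matches(rule, pkt):
            return rule.action
    return "drop"  # default deny: anything not explicitly allowed is dropped

print(filter_packet({"src": "1.2.3.4", "dst": "10.0.0.5", "port": 80, "proto": "tcp"}))  # allow
print(filter_packet({"src": "1.2.3.4", "dst": "10.0.0.9", "port": 22, "proto": "tcp"}))  # drop
```

Rule order matters: the deny rule for the blocked host is listed first so it cannot be bypassed by the broader allow rule.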
Advantages of packet filtering:
1. Because not a lot of data is analyzed or logged, they use very little CPU resources and create little
latency in a network. They tend to be more transparent in that the rules are configured by the network
administrator for the whole network, so the individual user doesn't have to face the rather complicated
task of configuring firewall rule sets.
2. It is cost effective to simply configure routers that are already a part of the network to do additional
duty as firewalls.
3. Network layer firewalls tend to be very fast and tend to be very transparent to users.
4. Cost: Virtually all high-speed Internet connections require a router. Therefore, organizations with
high-speed Internet connections already have the capability to perform basic packet filtering at the router
level without purchasing additional hardware or software.
Drawbacks of packet filtering:
2. Circuit-level Gateways:
These firewalls work at the session layer of the OSI model, or the TCP layer of TCP/IP. They monitor
TCP handshaking between packets to determine whether a requested session is legitimate. Traffic is
filtered based on the specified session rules, such as when a session is initiated by the recognized
computer. Information passed to remote computer through a circuit level gateway appears to have
originated from the gateway. This is useful for hiding information about protected networks. Circuit level
gateways are relatively inexpensive and have the advantage of hiding information about the private
network they protect. On the other hand, they do not filter individual packets. Unknown traffic is allowed
up to level 4 of the network stack. These are hardware firewalls and apply security mechanisms when a TCP
or UDP connection is established.
3. Application Gateways:
These are software firewalls. They are often used by companies specifically to monitor and log
employee activity, and by private citizens to protect a home computer from hackers and spyware or to set
parental controls for children.
Application gateways, also called proxies, are similar to circuit-level gateways except that they are
application specific. They can filter packets at the application layer of the OSI or TCP/IP model. Incoming or
outgoing packets cannot access services for which there is no proxy. In plain terms, an application-level
gateway configured as a web proxy will not allow any FTP, gopher, telnet or other traffic through.
Because they examine packets at the application layer, they can filter application-specific commands such
as http GET and POST.
An application gateway works like a proxy. A proxy is a process that sits between a client and a server.
To the client, the proxy looks like a server, and to the server, the proxy looks like a client. Example of an
application layer firewall: in Figure 3, an application layer firewall called a ``dual-homed gateway'' is
represented. A dual-homed gateway is a highly secured host that runs proxy software. It has two network
interfaces, one on each network, and blocks all traffic passing directly through it.
Advantages of application gateways:
1. Since application proxies examine packets at the application program level, a very fine level of security
and access control may be achieved.
2. They can reject all inbound packets containing common EXE and COM files.
3. The greatest advantage is that no direct connections are allowed through the firewall under any
circumstances. Proxies provide a high level of protection against denial of service attacks.
Disadvantages of application gateways:
1. Proxies require a large amount of computing resources in the host system, which can lead to
performance bottlenecks or slow down the network.
2. Proxies must be written for specific application programs and not all applications have proxies
available.
ADVANTAGES OF FIREWALL:
Concentration of security, all modified software and logging is located on the firewall system as
opposed to being distributed on many hosts;
Protocol filtering, where the firewall filters protocols and services that are either not necessary or
that cannot be adequately secured from exploitation;
Information hiding, in which a firewall can ``hide'' names of internal systems or electronic mail
addresses, thereby revealing less information to outside hosts;
Application gateways, where the firewall requires inside or outside users to connect first to the
firewall before connecting further, thereby filtering the protocol;
Extended logging, in which a firewall can concentrate extended logging of network traffic on one
system;
Centralized and simplified network services management, in which services such as ftp,
electronic mail, gopher, and other similar services are located on the firewall system(s) as
opposed to being maintained on many systems.
DISADVANTAGES OF FIREWALL:
Given these advantages, there are some disadvantages to using firewalls.
The most obvious is that certain types of network access may be hampered or even blocked for
some hosts, including telnet, ftp, X Windows, NFS, NIS, etc. However, these disadvantages are not
unique to firewalls; network access could be restricted at the host level as well, depending on a
site's security policy.
A second disadvantage with a firewall system is that it concentrates security in one spot as opposed
to distributing it among systems, thus a compromise of the firewall could be disastrous to other
less-protected systems on the subnet. This weakness can be countered, however, with the argument
that lapses and weaknesses in security are more likely to be found as the number of systems in a
subnet increases, thereby multiplying the ways in which subnets can be exploited.
Another disadvantage is that relatively few vendors have offered firewall systems until very
recently. Most firewalls have been somewhat ``hand-built'' by site administrators, however the time
and effort that could go into constructing a firewall may outweigh the cost of a vendor solution.
There is also no firm definition of what constitutes a firewall; the term ``firewall'' can mean many
things to many people.