COMPUTER SCIENCE AND ENGINEERING
by A.V.N. KRISHNA
under the guidance of Prof. A. VINAYA BABU
Department of Computer Science and Engineering
Acharya Nagarjuna University, Nagarjuna Nagar 522 510
Andhra Pradesh, India, August 2009
DECLARATION
I hereby declare that the entire thesis work entitled PERFORMANCE EVALUATION OF NEW ENCRYPTION ALGORITHMS WITH EMPHASIS ON PROBABILISTIC ENCRYPTION & TIME STAMP IN NETWORK SECURITY, submitted to the Department of Computer Science and Engineering, Acharya Nagarjuna University, in fulfillment of the requirements for the award of the degree of Doctor of Philosophy, is a bonafide record of my own work carried out under the supervision of Dr. A. VINAYA BABU, Professor of Computer Science, Director, Admissions, JNTUH, Hyderabad. I further declare that the thesis, either in part or in full, has not been submitted earlier by me or others for the award of any degree in any university.
CERTIFICATE
This is to certify that the present dissertation work entitled PERFORMANCE EVALUATION OF NEW ENCRYPTION ALGORITHMS WITH EMPHASIS ON PROBABILISTIC ENCRYPTION & TIME STAMP IN NETWORK SECURITY submitted by Mr. A.V.N.KRISHNA for the award of the Degree of Doctor of Philosophy in Computer Science and Engineering to the Acharya Nagarjuna University, Nagarjuna Nagar is a record of bonafide research work actually carried out under my supervision in the Computer Science and Engineering department of this University. The results embodied in this dissertation have not been submitted for any other degree or diploma.
Date:
01- 08 - 2009
HYDERABAD
Acknowledgements
I owe an inestimable debt of gratitude to Professor Dr. A. Vinaya Babu, Director, Admissions, Jawaharlal Nehru Technological University, Hyderabad, for his help and guidance in the successful completion of this dissertation. Professor Vinaya Babu, as a mentor, has been a dependable source of encouragement and guidance throughout this continuous effort. I am equally indebted to Professor S.N.N. Pandit, Director for Quantitative Studies, Osmania University, for his motivation and encouragement in my pursuit of the Ph.D. degree. Professor Pandit subtly motivated my participation in and presentation of my work at the International Conference on Information Systems, Technology & Management (CISTM 05), which gave me the self-confidence to pursue the research work to completion. I am grateful to Dr. I. Ramesh Babu, Head of the Computer Science & Engineering Dept., for the constant encouragement given towards the completion of this work. Equally I am grateful to Dr. Satya Prasad, Chairman, Board of Studies, Computer Science, for his continuous monitoring of my work and its improvement to a standard level. I am very thankful to Professor Trimurty for his valuable suggestions in improving this work. I am grateful to Sri K. Krishna Rao & Pradeep Reddy, Correspondent/Director, Indur Institute of Engineering & Technology, Indur Society, for their untiring support rendered to me in the completion of my work. I am also grateful to Dr. B. Vishnu Vardhan and Prof. P. Nageswara Rao for their continuous support in pursuing this work, and to Dr. L. Pratap Reddy, H.O.D., E.C.E. Dept., J.N.T.U.H., and Dr. C. Raghavendra Rao, Professor, Computer Information Systems, Hyderabad Central University, Hyderabad, for their continuous support as well. I am also very grateful to my colleagues in the Computer Science department and the management of Indur Institute of Engineering & Technology for their continuous encouragement in the completion of this work. Last but not least, I wish to acknowledge my family members' role in my coming this far.
CONTENTS

Abstract
List of Tables
List of Figures

1  Introduction
2  Literature Survey
   Sections 2.1-2.5 include: Probabilistic Encryption; Definition of the Problem; Chosen Cipher Text Attack; Neural Networks in Cryptography; Knapsack-Based Crypto Systems; On Probabilistic Encryption; Schemes for Encryption Using Non-Linear Code; Numerical Model for Data Development; Partial Differential Equations; Numerical Data Analysis; Discretization Methods; Control Volume Formulation; Steady One-Dimensional Data Flow; Grid Spacing; Boundary Conditions; Solution of Linear Algebraic Equations; Key Distribution Mechanism
3  [chapter title lost in extraction]
   3.3  A New Mathematical Model with a Key & a Time Stamp for the Encryption Mechanism
        Mathematical Modelling of the Problem; Linear Data Flow Problem; Procedure for Generating Data from Coefficients by the Tridiagonal Method; Results; Computing Power of the Model; Complexity of the Model by its Construction & Strength; Avalanche Effect on the Proposed Model; Security Analysis; Conclusion
4  [chapter title lost in extraction]
   4.1  Training of the Algorithms Using Different Keys & Time Stamps
   4.2  Role of Statistical Tests on Values Generated by the Algorithms under Study
5-7  [chapter titles lost in extraction]
8  Summary / Conclusions
9  Future Work
References
List of Tables

Table 3.2.1.1: Encryption & decryption process by Model 1
Table 3.2.1.2: Number of computations, computing power, complexity by construction & complexity by strength of Model 1
Table 3.2.1.3: Identifying the variations in cipher text by slight variations in the key (Model 1)
Table 3.2.2.1: Encryption & decryption process of Model 2
Table 3.2.2.2: Number of computations, computing power, complexity by construction & complexity by strength of Model 2
Table 3.2.2.3: Identifying the variations in cipher text by slight variations in the plaintext (Model 2)
Table 3.2.2.4: [caption lost in extraction]
Table 3.2.3.2: Number of computations, computing power, complexity by construction & strength of Model 3
Table 3.2.3.3: [caption lost in extraction]
Table 3.2.3.4: [caption lost in extraction]
Table 3.2.3.5: Identifying multiple copies of cipher text for one plain text
Table 3.3.1: Encryption & decryption process by the mathematical model
Table 3.3.2: Complexity by construction & strength of the model
Table 4.1: Relationship between the random key considered and the basins (sub keys) generated
Table 4.2: Level of metric between key and basins (sub keys) generated for the sign function applied on the product of the modified ternary vector & the random matrix key
Table 4.3: Level of metric between key and basins (sub keys) generated for different functions applied on the product of the modified ternary vector & random key
Table 4.4: Possible relationship between plain text and cipher text generated in the probabilistic encryption algorithm (Model 4), depending on the random key chosen
Table 4.5: Conversion of plain text to cipher text for a 5-character text
Table 4.7: Conversion of plain text to cipher text for a 15-character text
Table 4.8: Conversion of plain text to cipher text for a 20-character text
Table 4.9: Multiple cipher texts for a 5-character plain text
Table 6.1: Comparative study of number of computations and computational overhead of the generated models
Table 6.2: Comparative study of the generated models in terms of security analysis by construction and cryptanalytic strength
Table 6.3: Comparative study of the generated models by avalanche effect, linear & differential crypto analysis & security analysis
Table 6.4: Summary of the comparative study of some models of the proposed algorithm in terms of number of computations, computational overhead, change in plain text, change in key, complexity of the models by their construction and strength, and security analysis, depending on the chosen key
Table 6.5.1: Comparative study of the proposed Model 1 & Model 2 with a standard algorithm, DES, in terms of computational overhead, data overhead, complexity and security analysis
Table 6.5.2: Comparative study of the proposed Model 3 & Model 4 with a standard algorithm, RC4, in terms of computational overhead, data overhead, complexity and security analysis
List of Figures

Figure 1.2: Symmetric Encryption Model
Figure 1.3: Public Key Encryption Model
Figure 1.4: Probabilistic Encryption Model
Figure 4.1: Sequential Data Representation of Algorithm 1
Figure 4.2: Sequential Data Representation of Algorithm 2
Figure 6.1: Pictorial representation of the comparative study of some models of the proposed algorithm in terms of computational overhead, change in plain text and change in key, depending on the chosen key
Figure 6.2: Pictorial representation of the proposed Model 1 & Model 2 against the standard algorithm DES in terms of computational overhead & data overhead
Figure 6.3: Pictorial representation of the proposed Model 3 & Model 4 against the standard algorithm RC4 in terms of computational overhead & data overhead
Chapter 1
INTRODUCTION
The necessity of information security within an organization has undergone major changes over time. Earlier, physical means were used to provide security to data. With the advent of computers in every field, software tools for protecting files and other information stored on the computer became important. The collection of tools designed to protect data and thwart illegal users is computer security.
With the introduction of and revolution in communications, another change that affected security was the introduction of distributed systems, which require carrying data between a terminal user and a set of computers. Network security measures are needed to protect data during transmission. The mechanisms used to meet requirements such as authentication and confidentiality are observed to be quite complex. Security mechanisms usually involve more than a particular algorithm or protocol, both for the encryption and decryption process and for the generation of sub keys to be mapped to plain text to generate cipher text. They require that participants be in possession of some secret information (a key), which can be used for protecting data from unauthorized users. Thus a model has to be developed within which security services and mechanisms can be viewed.
To identify and support the security services of an organization effectively, the manager needs a systematic way. One approach is to consider three aspects of information security: security attacks, security mechanisms and security services. A security attack is any mode by which an intruder tries to get unauthorized information; the services are intended to counter security attacks, and they make use of one or more security mechanisms to provide the service.
As the importance of information systems grows in almost all fields, electronic information takes on many of the roles earlier performed on paper. Among the information integrity functions that the security mechanism has to support are the security and confidentiality of the data to be transmitted and the authentication of users.
There is no single mechanism that provides all the services specified, but one very important mechanism that supports all forms of information integrity can be identified: the cryptographic technique. Encryption of information is the most common means of providing security. A model for encryption is represented in the following Figure 1.1.
[Figure 1.1: General encryption model — the sender's message is encrypted; the cipher text is converted back to plain text at the receiver using the same key.]
This general model shows that there are four basic tasks in designing a particular security service: 1. designing an algorithm for performing the encryption and decryption process; 2. generating the secret information with the help of the algorithm of step 1; 3. identifying methods for the distribution and sharing of the secret information; 4. identifying the rules to be used by both participating parties to keep the exchange secure.
A crypto system is an algorithm, plus all possible plain texts, cipher texts and keys. There are two general types of key based algorithms: symmetric and public key. With most symmetric algorithms, the same key is used for both encryption and decryption, as shown in Figure 1.2
Figure 1.2: Symmetric-key encryption

The process of symmetric-key encryption can be very fast, as the users do not experience any significant time delay from the encryption and decryption. Symmetric-key encryption provides security because the key is shared only by the participating parties. It also provides a degree of authentication, since information encrypted with one symmetric key cannot be decrypted with any other symmetric key. Thus, as long as the symmetric key is kept secret by the two parties using it to encrypt communications, each party can be confident that it is communicating with the other, provided the decrypted messages continue to make sense.
Symmetric-key encryption succeeds only if the symmetric key is kept secret by the two parties involved; if anyone else discovers the key, both confidentiality and authentication are affected. The success of a symmetric algorithm rests in the key: divulging the key means that anyone could encrypt and decrypt messages, so the key must be protected for as long as the communication needs to remain secure. Encryption and decryption with a symmetric algorithm are denoted by

E_K(M) = C
D_K(C) = M

Symmetric algorithms can be divided into two categories. Some operate on the plain text a single bit or byte at a time; these are called stream algorithms or stream ciphers. Others operate on groups of bits or characters; such algorithms are called block algorithms.
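The relation E_K(M) = C, D_K(C) = M can be illustrated with a minimal sketch. The cipher below is a toy repeating-key XOR (an assumption for illustration only — it is not secure; a real system would use a standard cipher such as AES), but it shows the defining property of symmetric schemes: the same key both encrypts and decrypts.

```python
# Toy symmetric cipher illustrating E_K(M) = C and D_K(C) = M.
# Repeating-key XOR is NOT secure; it only demonstrates the symmetry.

def encrypt(key: bytes, message: bytes) -> bytes:
    """E_K(M): XOR each message byte with the repeating key."""
    return bytes(m ^ key[i % len(key)] for i, m in enumerate(message))

def decrypt(key: bytes, cipher: bytes) -> bytes:
    """D_K(C): XOR is self-inverse, so decryption reuses encryption."""
    return encrypt(key, cipher)

key = b"secret"
plain = b"attack at dawn"
cipher = encrypt(key, plain)
assert cipher != plain                      # the message is transformed
assert decrypt(key, cipher) == plain        # the same key recovers it
assert decrypt(b"wrongk", cipher) != plain  # a different key fails
```

Note that the decryption function is literally the encryption function here, the extreme case of the shared-key property described above.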
Public-key algorithms use two keys: one for encryption and the other for decryption. One key, called the public key, can be made public; the other is private, known only to the particular participating party. Public-key cryptography can also be used for digital signing, as it supports authentication of users. Information encrypted with one key can only be decrypted with the other key; furthermore, the decryption key cannot be calculated from the encryption key. Figure 1.3 shows a simplified view of the way public-key encryption works.
[Figure 1.3: Public-key encryption — the sender's message is encrypted; the cipher text is converted back to plain text using the private key of the receiver.]
Compared with symmetric-key encryption, public-key encryption requires more computation and is therefore not always appropriate for large amounts of data. However, it is possible to use public-key encryption to send a symmetric key, which can then be used to encrypt additional data; this is the approach used by the SSL protocol, and it provides authentication, integrity and confidentiality of information at low computing power. Since authentication of users is very important in applications like e-commerce, public-key cryptography is of much use. Encryption and decryption in a public-key scheme can be represented as

E_Kpu(M) = C
D_Kpr(C) = M

where Kpu is the public key and Kpr is the private key.
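The relation E_Kpu(M) = C, D_Kpr(C) = M can be sketched with textbook RSA on deliberately tiny parameters (an assumption for illustration — real RSA needs large random primes and padding, and these specific numbers are just the standard small worked example):

```python
# Textbook-RSA sketch of E_Kpu(M) = C and D_Kpr(C) = M with tiny primes.
# Illustrative only: real deployments need large primes and padding.

p, q = 61, 53
n = p * q                  # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent: d = e^-1 mod phi

def encrypt(m: int) -> int:
    return pow(m, e, n)    # C = M^e mod n  (anyone can compute this)

def decrypt(c: int) -> int:
    return pow(c, d, n)    # M = C^d mod n  (requires the private d)

m = 65
c = encrypt(m)
assert c != m
assert decrypt(c) == m     # only the private key reverses the mapping
```

The key point matching the text above: encrypt uses only public values (e, n), while decrypt needs d, which cannot feasibly be computed from (e, n) when the primes are large.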
In public-key encryption there is always a possibility of some information being leaked out: a crypto analyst tries to derive information from one's public key. Complete information cannot be gained this way, but partial information may be. In probabilistic encryption, multiple cipher texts are generated for one plain text, so a cryptanalyst cannot gain any information through chosen-plain-text or chosen-cipher-text attacks. Figure 1.4 illustrates the model.
[Figure 1.4: Probabilistic encryption — multiple cipher texts for one plain text; each is converted back to the plain text at the receiver using the same key.]
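The probabilistic-encryption idea sketched in Figure 1.4 can be shown in a few lines: fold a fresh random value r into every encryption, so the same plain text under the same key yields a different cipher text each time, yet all of them decrypt to the one plain text. The toy cipher below (XOR with a keyed hash of r) is an assumption for illustration, not one of the thesis models and not a secure construction.

```python
# Probabilistic-encryption sketch: fresh randomness r per encryption
# gives many cipher texts for one plain text. Toy cipher, not secure.

import hashlib
import os

def encrypt(key: bytes, message: bytes) -> bytes:
    r = os.urandom(8)                              # fresh randomness per call
    pad = hashlib.sha256(key + r).digest()
    body = bytes(m ^ pad[i % 32] for i, m in enumerate(message))
    return r + body                                # r travels with the cipher text

def decrypt(key: bytes, cipher: bytes) -> bytes:
    r, body = cipher[:8], cipher[8:]
    pad = hashlib.sha256(key + r).digest()
    return bytes(c ^ pad[i % 32] for i, c in enumerate(body))

key = b"shared-key"
c1 = encrypt(key, b"hello")
c2 = encrypt(key, b"hello")
assert c1 != c2                                    # multiple cipher texts, one plain text
assert decrypt(key, c1) == decrypt(key, c2) == b"hello"
```

Because the attacker cannot predict r, identical chosen plain texts produce unrelated-looking cipher texts, which is exactly what defeats the chosen-text comparisons described above.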
Security analysis of algorithms: Different algorithms offer different degrees of security, depending on how hard they are to break. If the cost required to break an algorithm is greater than the value of the encrypted data, the algorithm may be considered safe. Likewise, if the time required to break an algorithm is longer than the time the encrypted data must remain secret, or if the amount of data encrypted with a single key is less than the amount of data necessary to break the algorithm, it may be considered safe.
An algorithm is unconditionally secure if the plain text cannot be recovered no matter how much cipher text is available. Only a one-time pad is unconditionally secure; every other cipher can, in principle, be broken in a cipher-text-only attack simply by trying every possible key one by one and checking whether the resulting plain text is meaningful. This is called a brute-force attack. Cryptography is therefore more concerned with crypto systems that are computationally infeasible to break: an algorithm is considered computationally secure if it cannot be broken with available resources.
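A brute-force attack can be sketched on a deliberately tiny keyspace. The toy additive cipher and the "meaningful plain text" test (here, the presence of an expected word) are assumptions for illustration; real keyspaces of 2^k keys make the same loop infeasible.

```python
# Brute-force sketch: try every key, keep those whose decryption looks
# meaningful. Toy 8-bit keyspace; real ciphers make this search infeasible.

def toy_encrypt(key: int, msg: bytes) -> bytes:
    return bytes((b + key) % 256 for b in msg)

def toy_decrypt(key: int, cipher: bytes) -> bytes:
    return bytes((b - key) % 256 for b in cipher)

def brute_force(cipher: bytes, crib: bytes):
    """Return every key whose decryption contains the expected crib."""
    return [k for k in range(256) if crib in toy_decrypt(k, cipher)]

secret_key = 42
cipher = toy_encrypt(secret_key, b"meet me at dawn")
hits = brute_force(cipher, b"dawn")
assert secret_key in hits          # the true key survives the filter
```

With 256 keys the loop is instant; with a 56-bit or 128-bit key the same exhaustive search is the 2^k cost discussed below.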
The complexity of an attack can be measured in three ways: data complexity, the amount of data needed as input to the attack; processing complexity, the time needed to perform the attack; and storage requirements (space complexity), the amount of memory needed to mount the attack.
As a rule of thumb, the complexity of an attack is taken to be the minimum of these three factors. Another classification distinguishes the complexity of an algorithm by its construction from its complexity by strength. By construction, the time complexity of the algorithm is calculated by stepping through the algorithm, and is referred to as O(n). Complexities can also be expressed as orders of magnitude: if the length of the key is k bits, the processing complexity is 2^k, meaning that 2^k operations are required to break the algorithm. The complexity of such an algorithm is said to be exponential in nature.
A desirable property of any encryption algorithm is that a small change in the plain text or the key should produce a significant change in the cipher text. This is known as the avalanche effect; the stronger the avalanche effect of the algorithm, the better the security. Crypto analysis is the study of recovering the plain text without access to the key. It may also find weaknesses in a crypto system by identifying patterns that reveal information about previous results.
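The avalanche effect can be measured directly: flip a single key bit and count how many output bits change. In the sketch below SHA-256 stands in for a well-diffusing cipher (an assumption for illustration); a strong primitive changes roughly half the output bits.

```python
# Measuring avalanche: flip one key bit, count changed output bits.
# SHA-256 is a stand-in for a cipher with good diffusion.

import hashlib

def bits_changed(a: bytes, b: bytes) -> int:
    """Hamming distance between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

key1 = bytes(16)
key2 = bytes([1]) + bytes(15)      # differs from key1 in exactly one bit
msg = b"fixed plain text"
c1 = hashlib.sha256(key1 + msg).digest()
c2 = hashlib.sha256(key2 + msg).digest()

assert bits_changed(c1, c1) == 0
# of the 256 output bits, roughly half should differ for one flipped key bit
assert 64 < bits_changed(c1, c2) < 192
```

The same measurement applied to the thesis models is what the avalanche tables in Chapter 3 report: a one-bit (or one-symbol) change in key or plain text against the number of changed cipher-text symbols.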
An attempted crypto analysis is called an attack. There are five types of attack, each of which assumes that the crypto analyst has complete knowledge of the encryption algorithm used.
1. Cipher-text-only attack: The intruder holds cipher text only. The crypto analyst has the cipher text of several messages, all encrypted with the same encryption algorithm. The crypto analyst's job is to recover the plain text or the key used to encrypt the messages, in order to decrypt other messages encrypted with the same keys.
2. Known-plaintext attack: The crypto analyst possesses pairs of known plain text and cipher text. His job is to deduce the key used to encrypt the messages, or an algorithm to decrypt any messages encrypted with the same key.
3. Chosen-plaintext attack (CPA): The crypto analyst holds not only cipher text but also chosen plain texts with their cipher texts; the intruder is assumed to be placed at the encryption site to mount the attack. Differential crypto analysis is an example of this mode.
4. Chosen-cipher-text attack (CCA): Under the CCA model, the crypto analyst possesses chosen cipher texts and the corresponding plain texts decrypted with the private key. After it has chosen the messages, however, it has access only to an encryption machine.
5. Chosen text: The analyst possesses the encipher algorithm, the cipher text to be decrypted, chosen plain text messages with their corresponding cipher texts, and fabricated cipher texts with the corresponding plain texts obtained by decryption under the private key.
Present work:
In this work an attempt has been made to develop two algorithms which provide security to transmitted data. The first algorithm considers a random matrix key which, on execution of a series of steps, generates a sequence. This sequence is used as a sub key to build three different encryption models, each of which can be used for encryption of data. The second algorithm considers not only the key but also an initialization vector and a time stamp to generate the sub keys used in the encryption process. A mechanism is also discussed which identifies any garbled key in transit from the Key Distribution Centre. Both algorithms are discussed in terms of computational security, computational complexity and computational overhead, and are studied for their strengths and limitations. A crypto-analytical study of the algorithms, with emphasis on probabilistic encryption, is also included.
The encryption algorithms are compared with standard algorithms like RC4 and DES, and are also discussed in terms of their applications, advantages and limitations in a network security environment.
Chapter 2
LITERATURE SURVEY
A crypto system [1,7,8,17] is an algorithm together with all possible plain texts, cipher texts and keys. There are two general types of key-based algorithms: symmetric and public key. Among symmetric algorithms, either the plain text is processed in fixed-size blocks under the key, or a longer key is generated from a shorter one and XOR'd against the plaintext to make the cipher text. Schemes of the former type are called block ciphers, and schemes of the latter type are called stream ciphers.
2.1.1.1 DES Algorithm: The most widely used encryption scheme is based on the Data Encryption Standard (DES). There are two inputs to the encryption function: the plain text to be encrypted and the key. The plain text must be 64 bits in length and the key is 56 bits. First, the 64 bits of plain text pass through an initial permutation that rearranges the bits. This is followed by 16 rounds of the same function, which involves permutation and substitution operations. After the 16 rounds, the two 32-bit halves of the pre-output are swapped and the result passes through a final permutation to give the 64-bit cipher text.

Initially the key is passed through a permutation function. Then, for each of the 16 rounds, a sub key is generated by a combination of left circular shift and permutation. In each round the block is divided into two 32-bit halves, and the following operations are executed on the right 32-bit half: first it is expanded to 48 bits using an expansion table, then XORed with the sub key, then processed through substitution tables to generate a 32-bit output. This output is permuted using a predefined table and XORed with the left 32-bit half to form the right 32-bit half of the next round; the previous right 32-bit half becomes the left 32-bit half of the next round.
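The round structure just described is the classic Feistel construction. The sketch below uses a stand-in round function F (an assumption for illustration; the real DES F involves the expansion table, S-boxes and permutation described above) to show the swap-and-XOR pattern of one round and why it is exactly invertible:

```python
# One Feistel round and its inverse. F is a stand-in round function,
# not the real DES F (expansion, key XOR, S-boxes, permutation).

import hashlib

def F(half: bytes, subkey: bytes) -> bytes:
    return hashlib.sha256(half + subkey).digest()[:4]   # 32-bit output

def feistel_round(left: bytes, right: bytes, subkey: bytes):
    new_left = right                                    # right half passes through
    new_right = bytes(l ^ f for l, f in zip(left, F(right, subkey)))
    return new_left, new_right

def feistel_round_inverse(left: bytes, right: bytes, subkey: bytes):
    prev_right = left                                   # undo the swap
    prev_left = bytes(r ^ f for r, f in zip(right, F(prev_right, subkey)))
    return prev_left, prev_right

L0, R0, k = b"\x01\x02\x03\x04", b"\x05\x06\x07\x08", b"subkey1"
L1, R1 = feistel_round(L0, R0, k)
assert feistel_round_inverse(L1, R1, k) == (L0, R0)     # rounds invert exactly
```

Note that F itself never needs to be invertible; the XOR cancels it. This is why, as stated below, DES decryption is the same algorithm with the sub keys applied in reverse order.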
Decryption uses the same algorithm as encryption, except that the application of the sub keys is reversed. A desirable property of any encryption algorithm is that a small change in either the plain text or the key should produce a significant change in the cipher text. This effect, known as the avalanche effect, is very strong in the DES algorithm.
Since DES is a 56-bit-key encryption algorithm, a brute-force attack requires trying 2^56 keys. But by differential crypto analysis it has been shown that the key can be broken with 2^47 combinations of chosen plain texts, and by linear crypto analysis it could be broken with 2^41 combinations of plain text.
The DES algorithm is a basic building block for providing data security. To apply DES in a variety of applications, four modes of operation have been defined. These four modes are intended to cover all possible applications of encryption for which DES could be used. They involve an initialization vector used along with the key to produce different cipher text blocks.

2.1.1.1.1 Electronic Code Book (ECB) mode: ECB mode divides the plaintext into blocks m1, m2, ..., mn and computes the cipher text ci = Ek(mi). This mode is vulnerable to many attacks and is not recommended for use in any protocols. Chief among its defects is its vulnerability to splicing attacks, in which encrypted blocks from one message are replaced with encrypted blocks from another.

2.1.1.1.2 Cipher Block Chaining (CBC) mode: CBC mode remedies some of the problems of ECB mode by using an initialization vector and chaining the output of one encryption into the input of the next. CBC mode starts with an initialization vector iv and XORs a value with the plaintext that is the input to each encryption: c1 = Ek(iv XOR m1) and ci = Ek(ci-1 XOR mi). If a unique iv is used, no splicing attacks can be performed, since each block depends on all previous blocks along with the initialization vector. The iv is a good example of a nonce that needs to satisfy uniqueness but not unpredictability.

2.1.1.1.3 Cipher Feed-Back (CFB) mode: CFB mode moves the XOR of CBC mode to the output of the encryption: c1 = m1 XOR Ek(iv) and ci = mi XOR Ek(ci-1). This mode suffers from failures of non-malleability, at least locally to every block, but changes to the ciphertext do not propagate very far, since each block of ciphertext is used independently to XOR against a given block to get the plaintext. These failures can be seen in the following example, in which a message m = m1 m2 ... mn is divided into n blocks and encrypted with an iv under CFB mode to c1 c2 ... cn. Suppose an adversary substitutes c'2 for c2. Then, in decryption, m1 = Ek(iv) XOR c1, which is correct, but m'2 = Ek(c1) XOR c'2, which means that m'2 = m2 XOR c2 XOR c'2, since m2 = Ek(c1) XOR c2. Thus, in m2, the adversary can flip any bits of its choice. Then m'3 = Ek(c'2) XOR c3, which should be a random-looking message not under the adversary's control, since the encryption of c'2 should look random. But m4 = Ek(c3) XOR c4, and thereafter the decryption is correct.

2.1.1.1.4 Output Feed-Back (OFB) mode: OFB mode modifies CFB mode to feed the output of the encryption function back into the encryption function, without XORing in the cipher text.
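The contrast between ECB and CBC can be made concrete with a toy one-byte "block cipher" (an assumption for illustration; the real block cipher would be DES or AES). ECB sends equal plain-text blocks to equal cipher blocks, while CBC's chaining c_i = E_k(c_{i-1} XOR m_i) hides the repetition:

```python
# ECB vs CBC with a toy invertible 8-bit block cipher (7 is coprime to
# 256, so E is a bijection). Stand-in for DES/AES, illustration only.

def E(k: int, block: int) -> int:
    return (block * 7 + k) % 256

def ecb(k, blocks):
    return [E(k, m) for m in blocks]           # c_i = E_k(m_i)

def cbc(k, iv, blocks):
    out, prev = [], iv
    for m in blocks:
        prev = E(k, m ^ prev)                  # c_i = E_k(c_{i-1} XOR m_i)
        out.append(prev)
    return out

msg = [10, 10, 10]                             # repeated plain-text blocks
assert len(set(ecb(5, msg))) == 1              # ECB leaks the repetition
assert len(set(cbc(5, 99, msg))) == 3          # CBC hides it
```

This repetition leak is precisely what makes ECB vulnerable to the pattern and splicing attacks mentioned above, and what the chained iv in CBC prevents.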
2.1.1.2 Triple DES: Given the potential vulnerability of DES to brute-force attack, a new mechanism was adopted which uses multiple encryptions with DES and multiple keys. The simplest form of multiple encryption has two encryption stages and two keys; its limitation is that it is susceptible to the meet-in-the-middle attack. As an obvious counter to the meet-in-the-middle attack that avoids the cost of a longer key, a triple encryption method is used which considers only two keys: encryption with the first key, decryption with the second key, followed by encryption with the first key. Triple DES is a relatively popular alternative to DES and has been adopted for use in key management standards.
2.1.1.3 Homophonic DES: A variant of DES called homophonic DES [9] is considered. The DES algorithm is strengthened by adding some random bits to the plaintext, placed in particular positions to maximize diffusion and to resist differential attack, which exploits the exclusive-or difference between pairs of plaintexts. In this new scheme the added random bits increase the uncertainty of any chosen plaintext difference with respect to the cipher text.

A homophonic DES is a variant of DES that maps each plaintext to one of many cipher texts (for a given key), so that a desired difference pattern in the cipher text is suggested by several key values, including the correct one, alongside wrong pairs of cipher text. The scheme maps a 56-bit plaintext to a 64-bit cipher text using a 56-bit key: eight random bits are placed in specific positions of the 64-bit input data block to maximize diffusion.
For example, the random bits in HDESS are bit positions 25, 27, 29, 31, 57, 59, 61 and 63. After the initial permutation and the expansion permutation in the first round, these eight random bits spread to bits 2, 6, 8, 12, 14, 18, 20, 24, 26, 30, 32, 36, 38, 42, 44 and 48 of the 48-bit input block to the S-boxes, and so affect the output of all the S-boxes. The 48 expanded bits are exclusive-ORed with key material before entering the S-boxes, so two S-box input bits derived from the same random bit may take different values. Thus the random bits do not regularize the input to the S-boxes; that is, the property of confusion is not reduced while diffusion is maximized.
The decryption of homophonic DES is similar to the decryption of DES; the only difference is that the eight random bits must be removed to recover the original 56-bit plaintext. A homophonic DES can easily be transformed into a triple-encryption version by concatenating a DES decryption and a DES encryption after the homophonic DES. Security analysis: since eight random bits are added, each plaintext maps to one of 2^8 = 256 cipher texts, so a given pair of texts coincides with probability 1/256. Differential crypto analysis is also difficult against this mechanism, and the diffusion of bits is greater in this mode. Thus the mechanism adds probabilistic features to the DES algorithm, making it stronger against differential and linear crypto analysis.
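The homophonic idea — pad the plain text with random bits before encrypting, so one plain text maps to many cipher texts and the random bits are simply stripped after decryption — can be sketched as follows. The placement of the random byte and the keystream cipher here are simplified stand-ins for the scheme described above, purely for illustration.

```python
# Homophonic sketch: append 8 random bits before encrypting, strip them
# after decrypting. Simplified stand-in for homophonic DES.

import hashlib
import random

def homophonic_encrypt(key: bytes, msg: bytes) -> bytes:
    pad = bytes([random.getrandbits(8)])           # the 8 random bits
    block = msg + pad
    ks = hashlib.sha256(key).digest()              # toy keystream
    return bytes(b ^ ks[i % 32] for i, b in enumerate(block))

def homophonic_decrypt(key: bytes, cipher: bytes) -> bytes:
    ks = hashlib.sha256(key).digest()
    block = bytes(c ^ ks[i % 32] for i, c in enumerate(cipher))
    return block[:-1]                              # remove the random byte

key = b"k"
cs = {homophonic_encrypt(key, b"hi") for _ in range(50)}
assert len(cs) > 1                                 # many cipher texts, one plain text
assert all(homophonic_decrypt(key, c) == b"hi" for c in cs)
```

With one random byte there are up to 256 homophones per plain text, matching the 1/256 figure quoted in the security analysis above; real homophonic DES additionally diffuses those bits through the S-boxes rather than leaving them in place.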
2.1.1.4 AES: The Advanced Encryption Standard (AES) was chosen in 2001. AES is also an iterated block cipher, with 10, 12 or 14 rounds for key sizes of 128, 192 and 256 bits, respectively, and it provides high-performance symmetric-key encryption and decryption.
2.1.1.5 Dynamic substitution: An apparently new cryptographic mechanism [39], which can be described as dynamic substitution, is discussed in the following sections. Although structurally similar to simple substitution, dynamic substitution has a second data input which acts to re-arrange the contents of the substitution table. The mechanism combines two data sources into a complex result; under appropriate conditions, a related inverse mechanism can then extract one of the data sources from the result. A dynamic substitution combiner can directly replace the exclusive-OR combiner used in Vernam stream ciphers. The various techniques used in Vernam ciphers can also be applied to dynamic substitution; any cryptographic advantage is thus due to the additional strength of the new combiner.
2.1.1.5.1 The Vernam Cipher: A Vernam cipher combines plaintext data with a pseudo-random sequence to generate cipher text. Since each ciphertext element from a Vernam combiner is the (mod 2) sum of two unknown values, the plaintext data is supposed to be safe. But this mode is susceptible to several cryptanalytic attacks, including known-plaintext and cipher-text attacks: if the confusion sequence can be penetrated and reproduced, the cipher is broken. Similarly, if the same confusion sequence is ever re-used and the overlap identified, it becomes simple to break that section of the cipher.
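A minimal sketch of the Vernam combiner and the keystream re-use weakness noted above (the constant keystream is a stand-in for a pseudo-random sequence, for illustration only):

```python
# Vernam combiner: cipher text is the mod-2 (XOR) sum of plain text and
# a confusion sequence. Re-using the sequence cancels it entirely.

def vernam(stream: bytes, data: bytes) -> bytes:
    return bytes(s ^ d for s, d in zip(stream, data))

keystream = bytes([0x5A] * 16)          # stand-in pseudo-random sequence
p1 = b"first  message!!"
p2 = b"second message!!"
c1 = vernam(keystream, p1)
c2 = vernam(keystream, p2)

assert vernam(keystream, c1) == p1      # XOR is self-inverse
# keystream re-use: c1 XOR c2 equals p1 XOR p2 -- no key left to find
assert vernam(c1, c2) == vernam(p1, p2)
```

The second assertion is the core of the re-use attack: XORing two cipher texts made with the same confusion sequence removes the sequence, leaving only the XOR of the two plain texts for the analyst to unravel.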
2.1.1.5.2 Cryptographic Combiners: An alternate approach to the design of a secure stream cipher is to seek combining functions which can resist attack; such functions would act to hide the pseudo-random sequence from analysis. The mechanism of this work is a new combining function which extends the weak classical concept of simple substitution into a stronger form suitable for computer cryptography.
2.1.1.5.3 Substitution Ciphers: In simple substitution ciphers each plain text character is replaced with a fixed cipher text character. This mechanism is weak against statistical analysis: by considering the rules of the language, the cipher can be broken. This work is concerned with the cryptographic strengthening of the fundamental substitution operation through dynamic changes to a substitution table. The substitution table can be represented as a function of not only the input data but also a random sequence. This combination gives a cryptographic combining function; such a function may be used to combine plaintext data with a pseudo-random sequence to generate enciphered data.
2.1.1.5.4 Dynamic Substitution: A simple substitution table supported by a combining function gives the idea of dynamic substitution. A substitution table is used to translate each data value into an enciphered value, but after each substitution the table is re-ordered. At a minimum, it makes sense to exchange the just-used substitution value with some entry in the table selected at random. This generally changes the just-used substitution value to help prevent analysis, and yet retains the existence of an inverse, so that the cipher can be deciphered.
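As an illustration, the table re-ordering described above can be sketched as follows. This is a minimal sketch, not Ritter's actual implementation; the table size, seeding, and swap policy are assumptions:

```python
import random

class DynamicSubstitution:
    """Sketch of a dynamic substitution combiner: after each use, the
    just-used table entry is swapped with an entry selected by the
    confusion stream, so the mapping changes continuously while
    remaining invertible."""

    def __init__(self, seed):
        self.rng = random.Random(seed)      # shared secret initialization
        self.table = list(range(256))
        self.rng.shuffle(self.table)
        self.inverse = [0] * 256            # maintained inverse table
        for i, v in enumerate(self.table):
            self.inverse[v] = i

    def _swap(self, i, j):
        t = self.table
        t[i], t[j] = t[j], t[i]
        self.inverse[t[i]] = i
        self.inverse[t[j]] = j

    def encrypt_byte(self, data, rand):
        out = self.table[data]
        self._swap(data, rand)   # re-order the table after each substitution
        return out

    def decrypt_byte(self, ct, rand):
        data = self.inverse[ct]
        self._swap(data, rand)   # mirror the same swap to stay in sync
        return data
```

Sender and receiver initialize identical tables and consume the same confusion stream, so the receiver's inverse table tracks the sender's table byte by byte.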
2.1.1.5.5 Black Box Analysis: Dynamic substitution may be considered to be a black box, with two input ports Data In and Random In, and one output port Combiner Out. In the simple version, each data path has similar width; evidently the mechanism inside the box in some way combines the two input streams to produce the output stream. It seems reasonable to analyze the output statistically, for various input streams.
2.1.1.5.6 Polyalphabetic Dynamic Substitution: A means to defend against known-plaintext and chosen-plaintext attacks is to use multiple different dynamic substitution maps and to select between them using a hidden pseudo-random sequence. Thus dynamic substitution is free from statistical attacks, since each character of plain text can be replaced with multiple characters of cipher text, which makes the mechanism robust.
2.1.1.5.7 Internal State: Dynamic substitution contains internal data which, after initialization, is continuously re-ordered as a consequence of both incoming data streams; thus, the internal state is a function of the initialization and all subsequent data and confusion values. The changing internal state of dynamic substitution provides the necessary security to the data streams.
Thus dynamic substitution provides a probabilistic nature to the enciphering mechanism. The limitation of this scheme is that not only do the different dynamic substitution tables have to be maintained, but the pseudo-random sequence which selects between these tables also has to be shared between sender and receiver.
2.1.1.6 Nonces: A nonce [34] is a bit string that satisfies uniqueness, meaning that it has not occurred before in a given run of a protocol. Nonces might also satisfy unpredictability, which effectively requires pseudo-randomness: no adversary can predict the next nonce that will be chosen by any principal. There are several common sources of nonces, such as counters, time slots and so on.
2.1.1.6.1 Nonce-Based Encryption: In this work a different formalization for symmetric encryption is envisaged. The encryption algorithm is a deterministic function, but it is supported with an initialization vector (IV). Responsibility for supplying a fresh IV rests with the user of the mode. The IV is a nonce-like value, used at most once within a session. Since it is used at most once, mounting any sort of cryptanalysis is practically impossible, which provides sufficient security.
2.1.1.7 One-Time Pad Encryption: One more encryption mechanism for providing security to data is one-time pad [17] encryption. The functions are computed as follows: A and B agree on a random key k that is as long as the message they later want to send.
Ek(x) = x XOR k
Dk(x) = x XOR k
Note that since k is chosen at random and not known to an adversary, the output of this scheme is indistinguishable from a random number to an adversary. But it suffers from several limitations. It is susceptible to chosen plain text and chosen cipher text attacks. A further limitation is the sharing of the one-time keys by the participating parties: since a new key must be used for every encryption, a continuous key-sharing mechanism has to be employed by the participating parties.
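The pad operations Ek and Dk above are the same XOR, as a short sketch shows:

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # One-time pad: XOR each message byte with the corresponding key byte.
    # The key must be truly random, as long as the message, and never reused.
    assert len(key) == len(message)
    return bytes(m ^ k for m, k in zip(message, key))

# Decryption is the same XOR, so otp_encrypt serves both roles:
#   key = secrets.token_bytes(len(msg)); ct = otp_encrypt(msg, key)
#   otp_encrypt(ct, key) recovers msg.
```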
2.1.2.1.1 Algorithm Features:
1. It uses a variable-length key from 1 to 256 bytes to initialize a 256-byte state table. The state table is used for subsequent generation of pseudo-random bytes and then to generate a pseudo-random stream which is XORed with the plaintext to give the cipher text. Each element in the state table is swapped at least once.
2. The key is often limited to 40 bits because of export restrictions, but it is sometimes used as a 128-bit key. The algorithm has the capability of using keys between 1 and 2048 bits. RC4 is used in many commercial software packages such as Lotus Notes and Oracle Secure.
3. The algorithm works in two phases, key setup and ciphering. During an N-bit key setup (N being the key length), the encryption key is used to generate an encrypting variable using two arrays, state and key, and N mixing operations. These mixing operations consist of swapping bytes, modulo operations, and other formulas.
2.1.2.1.2 Algorithm Strengths: The strength lies in the difficulty of knowing which location in the table is used to select each value in the sequence. A particular RC4 key can be used only once, and encryption is about 10 times faster than DES.
Algorithm Weakness: One in every 256 keys can be a weak key. These keys are identified by cryptanalysis that is able to find circumstances under which one or more generated bytes are strongly correlated with a few bytes of the key.
Thus some symmetric encryption algorithms have been discussed in this chapter. They vary from block ciphers like DES, Triple DES and Homomorphic DES to stream ciphers like RC4. For the symmetric encryption mechanisms, concepts like the application of a nonce and dynamic substitution have been discussed, which provide randomness to the encryption mechanism. This probabilistic nature of the encryption mechanism provides sufficient strength to the algorithms against chosen cipher text attacks (CCA). The security of all these mechanisms lies in the proper sharing of keys among the different participating parties.
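The key-setup and ciphering phases described above can be sketched as follows (the standard RC4 KSA and PRGA; variable names are illustrative):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key setup (KSA): initialize and mix the 256-byte state table.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]          # each element swapped at least once

    # Ciphering (PRGA): generate the keystream and XOR with the data.
    i = j = 0
    out = bytearray()
    for b in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

Since the keystream depends only on the key, applying the function twice with the same key recovers the plaintext.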
The scheme shown in Figure 1.2 shows the public key being distributed and encryption being done using this key. In general, to send encrypted data, one encrypts the data with the receiver's public key, and the person receiving the encrypted data decrypts it with his private key.
Compared with symmetric-key encryption, public-key encryption requires more computation and is therefore not always appropriate for large amounts of data. However, a combination of symmetric and asymmetric schemes can be used in a real-time environment. This is the approach used by the SSL protocol.
As it happens, the reverse of the scheme shown in Figure 1.2 also works: data encrypted with one's private key can be decrypted only with the corresponding public key. This is not an interesting way to encrypt important data, however, because it means that anyone holding that public key, which is by definition published, could decipher the data. But an important requirement in data transfer is authentication of the data, and this is supported by asymmetric encryption schemes; authentication is an important requirement for electronic commerce and other commercial applications of cryptography.
One key is (n, e) and the other key is (n, d). The values of p, q, and phi should also be kept secret.
n is known as the modulus. e is known as the public key. d is known as the secret key.
Encryption: Sender A does the following:
1. Obtains the recipient B's public key (n, e).
2. Represents the plaintext message as a positive integer m.
3. Calculates the ciphertext c = m^e mod n.
4. Transmits the ciphertext c to receiver B.
Decryption: Recipient B does the following:
1. Uses his own private key (n, d) to compute the plain text m = c^d mod n.
2. Converts the integer back to plain text form.
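These steps can be traced with the familiar textbook parameters p = 61, q = 53 (illustrative only; real RSA needs large primes and padding):

```python
# Toy RSA with small textbook primes.
p, q = 61, 53
n = p * q                  # 3233, the modulus
phi = (p - 1) * (q - 1)    # 3120, kept secret along with p and q
e = 17                     # public key exponent, coprime with phi
d = pow(e, -1, phi)        # 2753, secret key exponent (Python 3.8+)

m = 65                     # plaintext as a positive integer, m < n
c = pow(m, e, n)           # encryption: c = m^e mod n
assert pow(c, d, n) == m   # decryption: m = c^d mod n
```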
2. If both message digests are identical, the signature is valid.
Compared with symmetric-key encryption, public-key encryption provides authentication as well as confidentiality for the transmitted data, but it requires more computation and is therefore not always appropriate for large amounts of data.
With probabilistic encryption algorithms [8,13], a cryptanalyst can no longer encrypt random plain texts looking for the correct cipher text. Since multiple cipher texts are produced for one plain text, even if he decrypts a message to plain text, he does not know how far he has guessed the message correctly. To illustrate, assume a cryptanalyst has a certain cipher text ci. Even if he guesses the message correctly, when he encrypts that message the result will be a completely different cj. He cannot compare ci and cj, and so cannot know that he has guessed the message correctly. Under this scheme, different cipher texts are formed for one plain text; the cipher text will also always be larger than the plain text. This develops the concept of multiple cipher texts for one plain text, which makes cryptanalysis based on plain text and cipher text pairs difficult to apply.
An encryption scheme consists of three algorithms: the encryption algorithm transforms plaintexts into cipher texts, while the decryption algorithm converts cipher texts back into plaintexts. A third algorithm, called the key generator, creates pairs of keys: an encryption key, input to the encryption algorithm, and a related decryption key needed to decrypt. The key generator is a probabilistic algorithm, which prevents an adversary from simply running the key generator to get the decryption key for an intercepted message. The following concept is crucial to probabilistic cryptography:
2.3.1.1 Chosen Cipher Text Attack: In the simplest attack model, known as the Chosen Plaintext Attack (CPA) [7], the adversary has access to a machine that will perform arbitrary encryptions but will not reveal the shared key. This machine corresponds intuitively to being able to see many encryptions of many messages before trying to decrypt a new message. In this case, security requires that it be hard for any adversary to distinguish an encryption Ek(m) from Ek(m') for two arbitrarily chosen messages m and m'. Distinguishing these encryptions should be hard even if the adversary can request encryptions of arbitrary messages. Note that this property cannot be satisfied if the encryption function is deterministic! In this case, the adversary can simply request an encryption of m and an encryption of m' and compare them. This is a point that one should always remember when implementing systems: encrypting under a deterministic function with no randomness in the input does not provide semantic security.
One more cryptanalytic model is the Chosen Cipher text Attack (CCA) model. Under the CCA model, an adversary has access to an encryption and a decryption machine and must perform the same task of distinguishing encryptions of two messages of its choice. First, the adversary is allowed to interact with the encryption and decryption services and choose the pair of messages. After it has chosen the messages, however, it only has access to an encryption machine. An advancement of the CCA model is the Chosen Cipher text Attack 2 (CCA2) model. CCA2 security has the same model as CCA security, except that the adversary retains access to the decryption machine after choosing the two messages. To keep this property from being trivially violated, we require that the adversary not be able to decrypt the cipher text it is given to analyze.
To make these concepts of CCA and CCA2 adoptable in a real-time environment, Canetti, Krawczyk and Nielsen recently defined the notion of replayable adaptive chosen ciphertext attack (RCCA) [5] secure encryption. Essentially, a cryptosystem that is RCCA secure has full CCA2 security except for the detail that it may be possible to modify a ciphertext into another ciphertext containing the same plaintext. This provides the possibility of perfectly replayable RCCA secure encryption. By this, we mean that anybody can convert a ciphertext y with plaintext m into a different ciphertext y' that is distributed identically to a fresh encryption of m. Such a rerandomizable cryptosystem has been proposed, which is secure against semi-generic adversaries. To improve the efficiency of the algorithm, a probabilistic trapdoor one-way function is presented. This adds randomness to the proposed work, which makes cryptanalysis difficult.
2.3.1.2 Neural networks in cryptography: One more technique used in probabilistic encryption is to apply neural networks [14] to encryption mechanisms. Neural network techniques are added to probabilistic encryption to make the cipher text stronger. In addition to security, data overhead can be avoided in the conversion process. A new probabilistic symmetric encryption scheme based on chaotic attractors of neural networks can be considered. The scheme is based on the chaotic properties of the Overstoraged Hopfield Neural Network (OHNN). The approach bridges the relationship between neural networks and cryptography. However, there are some problems in the scheme: (1) exhaustive search is needed to find all the attractors; (2) a problem exists in creating the synaptic weight matrix.
2.3.1.3 Knapsack-based crypto systems: Knapsack-based cryptosystems [2] were long viewed as the most attractive and promising asymmetric cryptographic algorithms, due to the NP-completeness of the underlying problem and their high speed in encryption/decryption. Unfortunately, most of them have been broken owing to the low-density feature of the underlying knapsack problems. To improve the performance of the model, a new easy compact knapsack problem is considered, and a novel knapsack-based probabilistic public-key cryptosystem is proposed in which the cipher text is non-linear in the plaintext.
2.3.1.4 On a Probabilistic Scheme for Encryption Using Nonlinear Codes Mapped from Z_4 Linear Codes: Probabilistic encryption becomes more and more important because of its ability to resist chosen cipher text attacks. To convert any deterministic encryption scheme into a probabilistic one, a randomizing medium is needed to apply to the message and carry it over as a randomized input [27,28]. Nonlinear codes obtained by a certain mapping from linear error-correcting codes are considered to serve as such a carrying medium. Thus some algorithms discussed in the literature are symmetric and probabilistic in nature.
2.4.2 Numerical Data Analysis: The following are the steps to generate a numerical method for data analysis [36,39].
2.4.2.1 Discretisation Methods. The numerical solution of data flow and other related processes can begin when the laws governing these processes are represented in differential equations. Each differential equation follows a certain conservation principle: it employs a certain quantity as its dependent variable and implies that there must be a balance among the various factors that influence that variable. The numerical solution of a differential equation consists of a set of numbers from which the distribution of the dependent variable can be constructed. In this sense a numerical method is like an experiment in which a set of experimental values gives the distribution of the measured quantity in the domain under study. Let us suppose that we decide to represent the variation of the dependent variable T by a polynomial in x:

T = a0 + a1 x + a2 x^2 + ... + an x^n

with coefficients a0, a1, a2, ..., an. This enables us to evaluate T at any location x by substituting the value of x and the coefficients into the above equation. Thus a numerical method treats as its basic unknowns the values of the dependent variable at a finite number of locations, called the grid points, in the calculation domain. The method includes the task of providing a set of algebraic equations for these unknowns and of prescribing an algorithm for solving the equations. A discretisation equation is an algebraic equation connecting the values of T for a set of grid points. Such an equation is derived from the differential equation governing T and thus expresses the same physical information as the differential equation, restricted to the grid points involved; the value of T at a grid point is thereby related to the values at its neighbouring grid points. As more and more grid points are considered, the solution of the discretisation equations approaches the exact solution of the corresponding differential equation.
2.4.2.2 Control Volume Formulation. The considered domain is divided into a number of grid points, with a control volume surrounding each grid point. The differential equation is integrated piecewise over each control volume to obtain the data values. The key feature of the control volume formulation is that the output of one control volume equals the input of the adjacent one, so the conservation principle is satisfied over each control volume. This characteristic exists for any number of grid points; thus even a coarse-grid solution exhibits exact integral balances.
d/dx (k dT/dx) + s = 0        (eq. 1)

where k and s are constants. To derive the discretisation equation we employ the grid point cluster, focusing attention on grid point P, which has grid points E and W as neighbours. For the one-dimensional problem under consideration we assume a unit thickness in the y and z directions, so the volume of the control volume is delx * 1 * 1. Integrating eq. 1 over the control volume gives

(k dT/dx)e - (k dT/dx)w + S delx = 0        (eq. 2)

where S is the average value of s over the control volume. If we evaluate the derivatives dT/dx from a piecewise-linear profile, the resulting equation is

ke (TE - TP)/(delx)e - kw (TP - TW)/(delx)w + S delx = 0        (eq. 3)

Rearranging gives the discretisation equation

aP TP = aE TE + aW TW + b        (eq. 4)

where aE = ke/(delx)e, aW = kw/(delx)w, aP = aE + aW - sP delx and b = sC delx (from the linearized source term s = sC + sP TP).
2.4.2.4 Grid Spacing. For the grid points the distances (delx)e and (delx)w may or may not be equal. For simplicity we assume the grid spacing to be equal on the left and right sides of a grid point. Indeed, the use of non-uniform grid spacing is often desirable, for it enables us to deploy computing effort more efficiently. In fact we shall obtain an accurate solution only when the grid is sufficiently fine, but there is no need to employ a fine grid in regions where the dependent variable T changes slowly with x. On the other hand, a fine grid is required where the T-x variation is steep. The number of grid points and the way they are distributed depend on the nature of the problem to be solved. Exploratory calculations using only a few grid points provide a convenient way of learning about the solution.
2.4.2.5 Boundary Conditions. There is one grid point on each of the two boundaries. The other grid points are called internal points, around each of which a control volume is considered. Based on the grid points at the boundaries, the internal grid points are evaluated by the tridiagonal matrix algorithm.
2.4.2.6 Solution Of Linear Algebraic Equations. The solution of the discretisation equations for the one-dimensional situation can be obtained by the standard Gaussian elimination method. Because of the particularly simple form of the equations, the elimination process leads to a delightfully convenient algorithm. For convenience in presenting the algorithm, it is necessary to use somewhat different nomenclature. Suppose the grid points are numbered 1, 2, 3, ..., ni, with 1 and ni denoting the boundary points.
The discretisation equation based on equations (1-4) can be written as

Ai Ti + Bi Ti+1 + Ci Ti-1 = Di        (eq. 5)

for i = 1, 2, 3, ..., ni. Thus the data value Ti is related to the neighbouring data values Ti+1 and Ti-1. For the given problem,

C1 = 0 and Bni = 0.

These conditions imply that T1 is known in terms of T2. The equation for i = 2 is a relation between T1, T2 and T3; but since T1 can be expressed in terms of T2, this relation reduces to a relation between T2 and T3. This process of substitution can be continued until Tni-1 is formally expressed in terms of Tni. Since Tni is then known, we can obtain Tni-1. This enables us to begin the back-substitution process in which Tni-2, Tni-3, ..., T3, T2 are obtained. For this tridiagonal system, it is easy to modify the Gaussian elimination procedure to take advantage of the zeros in the matrix of coefficients.
Referring to the tridiagonal matrix of coefficients above, the system is put into upper triangular form by computing new coefficients Ai and Di:

Ai = Ai - (Ci / Ai-1) * Bi-1,   i = 2, 3, ..., ni        (eq. 6)
Di = Di - (Ci / Ai-1) * Di-1        (eq. 7)

The unknowns are then computed by back substitution:

Tni = Dni / Ani        (eq. 8)
Tk = (Dk - Bk * Tk+1) / Ak,   k = ni-1, ni-2, ..., 2, 1        (eq. 9)

Thus a sequence of values is generated using the tridiagonal matrix algorithm, which can be used as a sub key in cryptographic techniques.
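Equations (5)-(9) can be sketched as a short routine; array names follow the equations, and a worked five-point system is shown in use:

```python
def tdma(A, B, C, D):
    """Thomas algorithm for the system A[i]*T[i] + B[i]*T[i+1] + C[i]*T[i-1] = D[i],
    with C[0] = 0 and B[-1] = 0 at the boundaries."""
    n = len(A)
    A, D = A[:], D[:]                 # work on copies
    for i in range(1, n):             # forward elimination (eqs. 6-7)
        m = C[i] / A[i - 1]
        A[i] = A[i] - m * B[i - 1]
        D[i] = D[i] - m * D[i - 1]
    T = [0.0] * n
    T[-1] = D[-1] / A[-1]             # eq. 8
    for i in range(n - 2, -1, -1):    # back substitution (eq. 9)
        T[i] = (D[i] - B[i] * T[i + 1]) / A[i]
    return T
```

For example, fixing T = 1 and T = 5 at the two boundaries with interior equations 2*Ti - Ti+1 - Ti-1 = 0 yields the linear profile 1, 2, 3, 4, 5.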
A more flexible scheme is referred to as the control vector [10]. In this scheme, each session key has an associated control vector consisting of a number of fields that specify the uses and restrictions for that session key. The length of the control vector may vary. As a first step, the control vector is passed through a hash function that produces a value equal in length to the encryption key. The hash value is XORed with the master key to produce an output that is used as the key to encrypt the session key. When the session key is delivered to the user, the control vector is delivered in its plain form. The session key can be recovered only by using both the master key that the user shares with the KDC and the control vector; thus the linkage between session key and control vector is maintained.
Sometimes keys get garbled in transmission. Since a garbled key can mean megabytes of unacceptable cipher text, this is a problem. All keys should be transmitted with some kind of error detection and correction bits, so that errors in a key can be easily detected and, if required, the key can be reset. One of the most widely used methods is to encrypt a constant value with the key and to send the first 2 to 4 bytes of that cipher text along with the key. At the receiving end, the same computation is done; if the encrypted constants match, the key has been transmitted without error. The chance of an undetected error ranges from one in 2^16 to one in 2^32.
The limitation of this approach is that, in addition to the key, even the constant has to be transmitted to the participating parties. Sometimes the receiver wants to check whether a particular key he has is the correct decryption key. The naive approach is to attach a verification block, a known header, to the plain text message before encryption. At the receiver's side, the receiver decrypts the header and verifies that it is correct. This works, but it gives an intruder known plain text to help cryptanalyze the system.
Present work: In the present work two algorithms are developed. The first algorithm uses a matrix key which, on multiplication with a ternary vector and application of a sign function to the product, generates a sequence. This sequence is used to build three different models of substitution technique. The algorithm is thus a substitution algorithm which uses a single key shared by both the sender and the receiver, and the cipher processes the input elements continuously, producing output one element at a time. The new encryption algorithm is based on the concept of a polyalphabetic cipher, which is an improvement over a monoalphabetic one. Three models are developed from the given algorithm [22, 23]: Model 1 and Model 2 can be classified as block ciphers, and Model 3 is a stream cipher. Each model has its own advantages and limitations. The second algorithm considers not only a key but also an initialization vector and a time stamp to generate the sub keys which are used for the encryption process.
Step 2: Convert each element of the sequence to its 3-digit ternary form, i.e.
0 ------- 000
1 ------- 001
2 ------- 002
.
.
.
26 ------- 222
Step 4: Subtract 1 from each element of the above matrix. The resulting matrix R is the 27 x 3 matrix whose nth row contains the ternary digits of n with 1 subtracted from each digit; its rows therefore run from (-1 -1 -1) for n = 0 up to (1 1 1) for n = 26.
Step 5: Consider the matrix key A:

A =  2  3  4
     5  1 -2
    -6  3 -3
Step 6: R = R X A. Each row of R (a 1 x 3 vector) is multiplied by the key matrix, giving a 27 x 3 matrix of signed integers.
Step 7: Convert all positive values to 1, negative values to -1 and zeros to 0 in the resulting matrix of step 6.
Step 8: Add 1 to every element of the matrix of step 7, so that all entries lie in {0, 1, 2}.
Step 9: Convert each row of the matrix R to decimal form to generate the sequence, i.e. the row 0 0 2 forms 0*3^2 + 0*3^1 + 2*3^0 = 2.
The sequence formed is
r = 2 0 0 18 0 3 18 18 6 20 2 6 20 13 6 20 24 6 20 8 8 23 26 8 26 26 24.
4.1. The value 0 is compared with the 27 values of r. There is a match at r[1], r[2] and r[4]. Already-visited elements are neglected. Thus b(0) = (0, 1, 2, 4).
4.2. Step 4.1 is repeated with the other elements of the basin, i.e. the values 1, 2 and 4. For the elements 1 and 4 there is no matching value in r; for the element 2 there is a match at r[10]. Thus the basin b(0) is updated to (0, 1, 2, 4, 10).
5. The procedure is repeated for the next element of the sequence of step 1 which has not been visited earlier. The other basins formed are
b(1) = (3, 5, 6, 7, 8, 9, 11, 12, 14, 15, 17, 18, 19, 20, 21, 23)
b(2) = (13)
b(3) = (16, 22, 24, 25, 26)
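One consistent reading of this basin-forming procedure treats the sequence r as a functional graph i -> r[i] and takes each basin to be a weakly connected component of that graph. The sketch below reproduces the basins listed above under that assumption:

```python
def basins(r):
    # Basins as weakly connected components of the graph i -> r[i]:
    # from each index we follow both its value r[i] (as an index) and
    # every index j whose value r[j] equals i, until closure.
    n = len(r)
    visited = set()
    out = []
    for start in range(n):
        if start in visited:
            continue
        comp, stack = [], [start]
        visited.add(start)
        while stack:
            i = stack.pop()
            comp.append(i)
            neighbours = [r[i]] + [j for j in range(n) if r[j] == i]
            for j in neighbours:
                if 0 <= j < n and j not in visited:
                    visited.add(j)
                    stack.append(j)
        out.append(sorted(comp))
    return out
```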
The sequence or the basins are then mapped onto the plain text to generate three models of cipher text.
3.2.1 Model 1:
A new substitution technique: The algorithm [22, 34] considers a matrix key and executes a sequence of steps which generates a sequence. Each block of plain text is replaced by the summation of the alphanumeric value of the plain text and the sequence generated, to form the cipher text. Thus the cipher text obtained becomes very difficult to break without knowing the key.
3.2.2 Model 2:
A New Variable Length Key Block Cipher Technique for Network Security in Data Transmission [15]: The algorithm considers a random matrix key which, on execution of a sequence of steps, generates a sequence. Based on the equality of values, this sequence is divided into basins. Each basin represents one block of data. Depending on the starting input plain text character, the corresponding basin is considered as the key. Each block of plain text is replaced by the summation of the alphanumeric value of the plain text and the sequence generated, to form the cipher text. The procedure is repeated for further plain text depending on the chosen value. Thus the cipher text obtained becomes very difficult to break without knowing the key. In a variable length encryption scheme, the key used is of variable length. The scheme requires a one-time execution of the key schedule to generate all sub keys prior to encryption. This places only a slight resource burden on key agility.
3.2.3 Model 3:
A New Probabilistic Stream Encryption Scheme: In public key encryption there is always a possibility of some information being leaked out, because a cryptanalyst can always encrypt random messages with the public key. Not a whole lot of information is gained this way, but there are potential problems: some information is leaked to the cryptanalyst every time he encrypts a message. In probabilistic encryption, multiple cipher texts are generated for one plain text, so a cryptanalyst cannot gain any information through chosen plain text and chosen cipher text attacks.
The algorithm [23] considers a matrix key and executes a sequence of steps which generates a sequence. Based on the equality of values, this sequence is divided into basins. The basins with minimum values are neglected; the remaining basins represent the characters of the alphabet. Each plain text character is mapped to a basin, and each basin value is replaced by a random value from that basin to generate the cipher text. Thus multiple cipher texts are generated for one plain text, and any one cipher text can be used for transmission of the data. The cipher text obtained becomes computationally infeasible to break without knowing the key. Thus the algorithm generates a sub key which is used for the encryption and decryption processes of the three models. Model 1 and Model 2 are block ciphers and Model 3 is a stream cipher. Encryption and decryption are inverses of one another, which allows the same algorithm to be used for both processes, with the addition in encryption replaced by subtraction in decryption.
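A minimal sketch of this probabilistic substitution follows; the basin table shown is hypothetical and illustrative, not the thesis's actual table:

```python
import random

# Hypothetical basin table: each plaintext character owns one basin of
# indices (the mapping and values here are illustrative assumptions).
BASINS = {"a": [0, 1, 2, 4, 10], "b": [3, 5, 6, 7], "c": [16, 22, 24]}

def encrypt_model3(pt, rng=random):
    # Replace each character by a randomly chosen member of its basin,
    # so repeated encryptions of the same plaintext differ.
    return [rng.choice(BASINS[ch]) for ch in pt]

def decrypt_model3(ct):
    # Decryption only needs to know which basin each index belongs to.
    lookup = {i: ch for ch, basin in BASINS.items() for i in basin}
    return "".join(lookup[i] for i in ct)
```

Because any basin member is a valid ciphertext symbol, the same plaintext yields many distinct ciphertexts, which is the probabilistic property the text describes.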
A sequence is generated by the following steps of the proposed algorithm:
1. The decimal digits and letters of the plain text are given numerical values starting from 0.
2. A random matrix is used as a key. Let it be A.
3. A ternary vector for 27 values, i.e. from 0 to 26, is generated.
4. Let this be B.
5. 1 is subtracted from all the values of the ternary vector.
6. The modified ternary vector is multiplied with the matrix key.
7. A sign function is applied on the product of the ternary vector and the matrix key.
8. 1 is added to all values of step 7.
9. A sequence is generated which is used as the sub key.
10. The sub key is added to the individual numerical values of the message to generate the cipher text.
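The steps above can be sketched as follows. Interpreting the multiplication of step 6 as row-vector-times-key reproduces the sequence printed in step 9 for the step-5 key:

```python
def generate_sequence(A):
    # Steps 3-9: for each n in 0..26, take its 3 ternary digits (most
    # significant first), subtract 1, multiply the row vector by the key
    # matrix A, apply the sign function, add 1, and read the resulting
    # digits back as a base-3 number.
    def sign(v):
        return (v > 0) - (v < 0)
    seq = []
    for n in range(27):
        x = [(n // 9) % 3 - 1, (n // 3) % 3 - 1, n % 3 - 1]
        y = [sign(sum(x[i] * A[i][j] for i in range(3))) + 1
             for j in range(3)]
        seq.append(y[0] * 9 + y[1] * 3 + y[2])
    return seq
```

Note that n = 13 (ternary 111) always maps to 13, since the all-zero vector survives every key.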
It can be seen that extracting the original information from the coded text is practically impossible for a third person who is not aware of the encryption key and the method of coding. Even if the algorithm is known, it is very difficult to break the code and recover the key, given the strength of the algorithm. Thus, for the short response times required in internet communication, the algorithm is considered safe.
3.2.1.4 Example:
Sequence generated from the product of the matrix key considered and the ternary vector (ref. pp 44) = 2 0 0 18 0 3 18 18 6 20 2 6 20 13 6 20 24 6 20 8 8 23 26 8 26 26 24.
Table No. 3.2.1.1: Encryption & Decryption process by Model 1:

Encryption:
Plain Text | Alphanumeric equivalent | Key | Add | Mod 36 | Cipher Text
A          | 10                      | 2   | 12  | 12     | C
v          | 31                      | 0   | 31  | 31     | v
n          | 23                      | 0   | 23  | 23     | n
K          | 20                      | 18  | 38  | 02     | 2
R          | 27                      | 0   | 27  | 27     | R
I          | 18                      | 3   | 21  | 21     | L

Decryption:
Cipher Text | Alphanumeric equivalent | Add 36 | Key | Subtract | Plain Text
C           | 12                      | 12     | 2   | 10       | A
v           | 31                      | 31     | 0   | 31       | v
n           | 23                      | 23     | 0   | 23       | n
2           | 02                      | 38     | 18  | 20       | K
R           | 27                      | 27     | 0   | 27       | R
L           | 21                      | 21     | 3   | 18       | I
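The worked example in the table can be reproduced with a short sketch; the case-insensitive 36-symbol alphabet 0-9, a-z is an assumption consistent with the table's values:

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"  # assumed 36-symbol alphabet

def model1_encrypt(pt, seq):
    # Encryption adds the sub key to each alphanumeric value, mod 36.
    return "".join(ALPHABET[(ALPHABET.index(c) + k) % 36]
                   for c, k in zip(pt.lower(), seq))

def model1_decrypt(ct, seq):
    # Decryption subtracts the same sub key, mod 36.
    return "".join(ALPHABET[(ALPHABET.index(c) - k) % 36]
                   for c, k in zip(ct.lower(), seq))
```

With the first six sub key values 2, 0, 0, 18, 0, 3, the plaintext "avnkri" enciphers to "cvn2rl", matching the table.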
1st computation: 27 calculations. 2nd computation: 27 calculations. Key considered: a 9-character key, so the 3rd computation takes 27*9 calculations. 4th computation: 27 calculations. 5th computation: 27 calculations. 6th computation: 27 calculations. 7th computation: 27 calculations. Considering a 27-character plain text: 8th computation: 27 calculations. 9th computation: 27 calculations. 10th computation: 27 calculations. Thus the total computational overhead of the first model is 486 calculations.
Complexity by Construction:
1st computation: converting n = 0:26 to ternary vectors; let the result be r. The complexity is a multiple of n.
2nd computation: calculating r - 1. The complexity is a multiple of n.
3rd computation: multiplying r with the key considered. The complexity is a multiple of n.
4th computation: applying the sign function on the product and storing it in r. The complexity is a multiple of n.
5th computation: calculating r + 1. The complexity is a multiple of n.
6th computation: converting the output ternary vectors to integer form; let this be s, the sequence generated. The complexity is a multiple of n.
7th computation: converting the plain text to alphanumeric values. The complexity is a multiple of n.
8th computation: adding the alphanumeric values of the plain text to the sequence generated. The complexity is a multiple of n.
9th computation: applying the mod function on the output. The complexity is a multiple of n.
10th computation: converting the output to characters of the alphabet to get the cipher text. The complexity is a multiple of n.
Thus we can say that the complexity of Model 1 is O(n).
Table No. 3.2.1.2: Representation of No. of computations, Computing power, Complexity by construction & Complexity by strength

S No.  Algorithm                                              No. of computations  Computing power  Complexity by construction  Complexity by strength
1      Converting n = 0:26 to ternary vector r                1                    27               n                           ---
2      Calculating r-1                                        1                    27               n                           ---
3      Multiplying r with the key considered                  1                    27*9             n                           ---
4      Applying sign function on the product                  1                    27               n                           ---
5      Calculating r+1                                        1                    27               n                           ---
6      Converting output ternary vector to integer form       1                    27               n                           ---
7      Converting plain text to alphanumerical values         1                    27               n                           ---
8      Adding plain text values to the sequence generated     1                    27               n                           ---
9      Applying mod function on the output                    1                    27               n                           ---
10     Converting the output to characters of the alphabet    1                    27               n                           ---
       Total = 10                                                                  Total = 486      O(n)                        Exponential
In this model a sequence is generated, and this sequence is substituted into the plain text to produce the cipher text. The sequence depends on the key, so slight variations in the key produce identifiable variations in the sequence, and hence in the cipher text. The variations in the cipher text produced by slight variations in the plain text are also examined. For example, consider the following cases of slight variations in the key. Case 1: A = key
A = key = [2 5 -6; 3 1 3; 4 -2 -3]
Sequence generated r = 2 0 0 18 0 3 18 18 6 20 2 6 20 13 6 20 24 6 20 8 8 23 26 8 26 26 24.
Case 2: By increasing the key values by 1 at each row,
A = key = [3 5 -6; 4 1 3; 5 -2 -3]
Sequence generated r = 1 0 0 18 0 0 18 18 3 20 2 6 20 13 6 20 24 6 23 8 8 26 26 8 26 26 25.
Case 3: By decreasing the key values by 1 at each row,
A = key = [1 5 -6; 2 1 3; 2 -2 -3]
Sequence generated by the model: r = 11 1 3 20 0 6 18 18 6 20 2 6 20 13 6 20 24 6 20 8 8 20 26 6 23 25 15.
Table No. 3.2.1.3: Identifying the variations in cipher text by slight variations in the key (Model 1)

Case considered   Key considered              Plain text   Cipher text
Case 1            [2 5 -6; 3 1 3; 4 -2 -3]    a b c d e    c b c v e
Case 2            [3 5 -6; 4 1 3; 5 -2 -3]    a b c d e    b b c v e
Case 3            [1 5 -6; 2 1 3; 2 -2 -3]    a b c d e    i c f x e
Thus we can see that changing the key slightly produces large variations in the cipher text, which gives the algorithm a maximum avalanche effect with respect to the key. This provides strength and security to the algorithm. But since the model is a simple substitution algorithm, the variation in the cipher text produced by variations in the plain text is negligible.
3.2.1.9 Conclusion:
In this work a ternary system with a 3-digit number is used, so the sub key generated has 3^3 = 27 values. By considering a ternary vector with a four-digit or five-digit number, the length of the sub key can be increased to 3^4 or 3^5 values. Similarly, by considering an n-ary vector the length of the generated sub key can be increased still further. Thus, by increasing the length of the sub key, the security of the cipher system can be increased.
3.2.2 A New Variable Length Key Block Cipher Technique for Network Security in Data Transmission
The algorithm discussed in this section considers a random matrix key which, on execution of a sequence of steps, generates a sequence. Based on equality of values, this sequence is divided into basins, and each basin represents one block of data. Depending on the starting character of the plain text, the corresponding basin is taken as the key. Each block of plain text is converted to alphanumerical values, which are mapped with the sub key to generate the cipher text. The procedure is repeated for successive blocks of plain text depending on the chosen value. Thus the cipher text obtained becomes very difficult to break without knowing the key. Key Words: Cryptography, Variable length key, Encryption Algorithm, Example, Add function.
A sequence is generated
The steps involved in the proposed algorithm:
1. The decimal digits and letters of the alphabet are given numerical values starting from 0.
2. A random matrix is used as a key. Let it be A.
3. A ternary vector for 3^3 values, i.e. from 0 to 26, is generated. Let this be B.
4. 1 is subtracted from all the values of step 3.
5. The generated ternary vector is multiplied with the matrix key.
6. A sign function is applied on the product of the ternary vector and the matrix key.
7. 1 is added to all values of step 6.
8. A sequence is generated.
9. Basins are developed from this sequence, each containing equal values.
10. Basins with a minimum number of values (say 1 value) are eliminated.
11. The first value of the plain text is considered.
12. The remainder of the first value of the plain text with respect to the number of basins is calculated.
13. Depending on the value of the remainder, the corresponding basin is considered as a key.
14. The key is added to the plain text to generate the cipher text.
15. The procedure is repeated for successive blocks of plain text depending on the chosen value.
3.2.2.3 Advantages
1. It is almost impossible to extract the original information. 2. Even if the algorithm is known, it is difficult to extract the matrix key. 3. Versatile to users. Different users of internet can use different modified versions of the new algorithm. 4. As per basin values, the same character is substituted by different alpha numerical value which provides more security for the message.
3.2.2.4 Example:
The sequence generated from the product of the ternary vector and the matrix key considered (ref. pp 44): 2 0 0 18 0 3 18 18 6 20 10 . 26 24. The basins that can be formed using this sequence are b(0) = (0,1,2,4,10), b(1) = (3,5,6,7,8,9,11,12,14,15,17,18,19,20,21,23), b(2) = (16,22,24,25,26).
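The thesis does not state the basin rule in closed form; one reading consistent with the Case 1 basins listed in the avalanche analysis is to treat the sequence as a map n -> r(n) and group indices by the cycle they eventually reach (a basin of attraction), single-value basins then being eliminated. A sketch under that assumption:

```python
# Group the indices 0..26 by the cycle that iterating n -> seq[n]
# eventually reaches; each group is one basin.
def basins(seq):
    def terminal_cycle(n):
        seen = []
        while n not in seen:
            seen.append(n)
            n = seq[n]
        return frozenset(seen[seen.index(n):])   # the cycle reached
    groups = {}
    for n in range(len(seq)):
        groups.setdefault(terminal_cycle(n), []).append(n)
    return sorted(groups.values(), key=min)

r = [2, 0, 0, 18, 0, 3, 18, 18, 6, 20, 2, 6, 20, 13, 6, 20,
     24, 6, 20, 8, 8, 23, 26, 8, 26, 26, 24]
for b in basins(r):
    print(b)
```

For this sequence the grouping yields (0,1,2,4,10), (3,5,...,21,23), the singleton (13), and (16,22,24,25,26); after dropping the singleton, the three working basins above remain.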
Plain text considered: avnkri. Considering the first character of the plain text, a = 10 and 10 mod 3 = 1, so the basin considered is b(1).
Table No. 3.2.2.1: Encryption & Decryption Process by Model 2

Encryption:
Plain text   Alphanumeric value   Key (basin b(1))   Add mod 36   Cipher Text
a            10                   3                  13           d
v            31                   5                  36 -> 0      0
n            23                   6                  29           t
k            20                   7                  27           r
r            27                   8                  35           z
i            18                   9                  27           r
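The Model 2 encryption of the example can be sketched as below (an illustrative reading: the basin selected by the first character is used cyclically as the running key; the basin sets are assumptions taken from the example above):

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"

# Working basins for the example key (single-value basins dropped).
B = [[0, 1, 2, 4, 10],
     [3, 5, 6, 7, 8, 9, 11, 12, 14, 15, 17, 18, 19, 20, 21, 23],
     [16, 22, 24, 25, 26]]

def encrypt(plain):
    # The first plain text character picks the basin used as the sub key.
    basin = B[ALPHABET.index(plain[0]) % len(B)]
    return "".join(ALPHABET[(ALPHABET.index(c) + basin[i % len(basin)]) % 36]
                   for i, c in enumerate(plain))

assert encrypt("avnkri") == "d0trzr"
```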
Computation seven: Generating different basins by placing equality of values of the sequence in one basin. Computation eight: Converting plain text to alphanumerical value. Computation nine: Considering the first character of plain text and applying a mod function of the order of number of basins formed. Computation ten: Depending on the output of the mod function, the corresponding basin is used as a key. Computation eleven: Adding the key to the alphanumerical value of the plain text. Computation twelve: Applying mod function on the output. Computation thirteen: Converting the output to characters of the alphabet to get cipher text. Computation fourteen: Repeating the procedure for chosen part of plain text. Thus the total number of computations in the proposed model is 14.
Computational overhead of the model with a 9-character key: the 1st computation takes 27 calculations and the 2nd 27; the 3rd (multiplying by the 9-character key) takes 27*9 = 243; the 4th, 5th and 6th take 27 each; the 7th (forming the basins) takes 27*27 + 27*27 = 1458; the 8th takes 27; the 9th takes 3 calculations depending on the chosen rule; the 10th takes 3; the 11th, 12th and 13th take 27 each; and the 14th takes 3 calculations depending on the chosen rule. Thus the total computational overhead of the second model is 1953 calculations.
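The overhead figure can be checked by summing the per-computation costs listed above:

```python
# Per-computation costs for the 14 steps of Model 2 (27-character block,
# 9-character key, basin selection costing 3 calculations).
costs = [27, 27, 27 * 9, 27, 27, 27, 27 * 27 + 27 * 27,
         27, 3, 3, 27, 27, 27, 3]
assert sum(costs) == 1953
```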
11th computation: adding the key to the alphanumerical value of the plain text, i.e. 27 calculations. The complexity is in multiples of n. 12th computation: applying the mod function on the output, i.e. 27 calculations. The complexity is in multiples of n. 13th computation: converting the output to characters of the alphabet to get the cipher text, i.e. 27 calculations. The complexity is in multiples of n. 14th computation: repeating the procedure for the chosen part of the plain text, i.e. 3 calculations depending on the chosen rule. Thus the total computational complexity of the model by its construction is of the order O(n^2).
Complexity of the algorithm by its strength: complexities can also be expressed as orders of magnitude. If the length of the key is k, then the processing complexity is given by 2^k, i.e. 2^k operations are required to break the algorithm. In the given algorithm a matrix key is used. This matrix key is multiplied with a ternary vector, and a sign function converts all positive values of the product to 1, all negative values to -1 and zero to 0. This provides the necessary strength to the algorithm: even knowing the algorithm and the cipher text, it is quite difficult to recover the matrix key. Since there is no means by which the key can be retrieved other than trying all combinations of keys, the complexity of the algorithm by strength is exponential in nature. Thus this algorithm is considered safe in a real-time environment.
Table No. 3.2.2.2: Representation of No. of computations, Computing power, Complexity by construction & Complexity by strength

S No.  Algorithm                                             No. of computations  Computing power  Complexity by construction  Complexity by strength
1      Converting n = 0:26 to ternary vector r               1                    27               n                           ---
2      Calculating r-1                                       1                    27               n                           ---
3      Multiplying r with the key considered                 1                    27*9             n                           ---
4      Applying sign function on the product                 1                    27               n                           ---
5      Calculating r+1                                       1                    27               n                           ---
6      Converting output ternary vector to integer form      1                    27               n                           ---
7      Dividing the sequence into basins                     1                    27*27            n^2                         ---
8      Converting plain text to alphanumerical values        1                    27               n                           ---
9      Identifying the basin used as sub key                 2                    27               n                           ---
10     Adding plain text values to the sub key               1                    27               n                           ---
11     Applying mod function on the output                   1                    27               n                           ---
12     Converting the output to characters of the alphabet   2                    27               n                           ---
       Total = 14                                                                 Total = 1953     O(n^2)                      Exponential
Case 1: A = key = [2 5 -6; 3 1 3; 4 -2 -3]
Sequence generated r = 2 0 0 18 0 3 18 18 6 20 2 6 20 13 6 20 24 6 20 8 8 23 26 8 26 26 24. The basins that can be formed using this sequence, which are used as sub keys, are
b(0)=(0,1,2,4,10) b(1)=(3,5,6,7,8,9,11,12,14,15,17,18,19,20,21,23) b(2)=(16,22,24,25,26) Case 2: By increasing the values of key by 1 at each row
A = key = [3 5 -6; 4 1 3; 5 -2 -3]
Sequence generated r= 1 0 0 18 0 0 18 18 3 20 2 6 20 13 6 20 24 6 23 8 8 26 26 8 26 26 25. Thus the basins formed which are used as sub keys are
b(0)= (0,1,2,4,5,10)
b(1)= (3,6,7,8,9,11,12,14,15,17,18,19,20,23) b(2)= (16,21,22,24,25,26); Case 3: By decreasing the key values by 1 at each row
A = key = [1 5 -6; 2 1 3; 2 -2 -3]
Sequence generated by the model. r= 11,1,3,20,0,6,18,18,6,20,2,6,20,13,6,20,24,6,20,8,8,20,26,6,23,25,15. Basins formed which are used as sub keys are b(0) = (0,11,4) b(1)= (2,3,10) b(2)= (5,6,7,8,9,11,12,14,15,17,18,19,20,21,22,23,24,26)
Note: The basins are used as keys after neglecting the basins which contain a minimum number of values, say 1.

Table No. 3.2.2.3: Identifying the variations in cipher text by slight variations in the key (Model 2)

Case considered   Sub key (basin b(1) selected by the first plain text character)   Plain text   Cipher text
Case 1            b(1) = 3 5 6 7 8                                                  a b c d e    d g i k m
Case 2            b(1) = 3 6 7 8 9                                                  a b c d e    d h j l o
Case 3            b(1) = 2 3 10                                                     a b c d e    c e m f h
Thus we can see that, by changing the key slightly, there are large variations in the basins generated. Since these basins form the sub keys of the given model, they provide a maximum avalanche effect to the algorithm: the model produces good variations in the cipher text for slight variations in the key. This provides maximum strength and security to the algorithm.
Table No. 3.2.2.4: Identifying the variations in cipher text by slight variations in the plain text (Model 2)

Case considered   Sub key considered (as per the first plain text character)   Plain text   Cipher text
Case 1            3 6 7 8 9                                                    a b c d e    d h j l n
Case 1            16 22 24 25 26                                               b b c d e    h o r t u
Case 1            0 1 2 4 10                                                   c b c d e    a c e h o
A lot of variation is seen in the cipher text generated for slight variations in the plain text. Thus the model provides a maximum avalanche effect, which gives it more strength and security.
3.2.2.9 Conclusion:
In the given work, a ternary vector with a 3-digit number is used. By using an n-ary vector the length of the vector can be increased. Increasing the length of the vector may not increase the number of basins, but the number of values in each basin will increase, which gives the developed model more strength. It is also observed that with slight variations in the key values and the plain text, the number of basins formed and the number of values in each basin vary, which provides a better avalanche effect. This gives the algorithm more security and strength.
The new encryption algorithm is based on the concept of a polyalphabetic cipher, which is an improvement over the monoalphabetic technique. In this algorithm one character of plain text is replaced by a set of values from different basins; thus a polyalphabet is maintained.
The algorithm discussed in this section generates a sequence: it considers a matrix key and executes a sequence of steps which generate this sequence. Based on equality of values, the sequence is divided into basins, and the basins with a minimum number of values are eliminated. Each of the remaining basins represents one character, and each character in the plain text is replaced by corresponding basin values. Thus the cipher text obtained becomes practically impossible to break without knowing the key.
A sequence is generated. Each character of the plain text is represented by a set of basins; values from those basins are chosen at random to form the cipher text.
Thus this algorithm is a combination of: 1. A ternary vector of 27 values mapped to a few basin values. 2. Each basin containing a different number of values. 3. Each character of the alphabet being represented by a set of basins.
The steps involved in the proposed algorithm: 1. The letters of the alphabet are given numerical values starting from 1. 2. A random matrix is used as a key. Let it be A. 3. A ternary vector for 3^3 values, i.e. from 0 to 26, is generated. 4. 1 is subtracted from all values of step 3. 5. The generated ternary vector is multiplied by the matrix key. 6. A sign function is applied on the product of the ternary vector and the matrix key. 7. 1 is added to all values of step 6. 8. A sequence is generated. 9. Basins are developed from this sequence, each containing equal values. 10. Each plain text character is converted to a set of basins based on the chosen base value. 11. Each basin is replaced by randomly chosen values from that basin to generate the cipher text.
3.2.3.3 Advantages:
1. It is almost impossible to extract the original information. 2. Even if the algorithm is known, it is difficult to extract the key. 3. It is versatile to users: different users of the internet can use different modified versions of the new algorithm. Since in this algorithm all positive values are considered as +1, all negative values as -1 and zero as 0, it is impossible to generate the matrix key even if the plain text and cipher text are known. 4. As per the matrix, the same character is substituted by different alphanumerical values, which provides more security for the message. 5. By suitably combining basins, the number of characters of the alphabet can be increased.
3.2.3.4 Example:
For n= 0 : 26 ; Sequence generated r (ref. pp 43)=2 0 0 18 0 3 18 18 6 20 2 6 20 13 6 20 24 6 20 8 8 23 26 8 26 26 24.
Now considering a base value of 3, i.e. representing a character with random values from 3 basins, the total number of characters that can be represented in this alphabet is 3^3 = 27. For example, if K = 11, it can be represented in ternary as 1 0 2, i.e. b(1) b(0) b(2).
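The base-3 substitution can be sketched as follows (an illustration assuming the three working basins of the earlier example; the random choice of a basin member is what makes the scheme probabilistic, and decryption only needs to recover which basin each cipher value belongs to):

```python
import random

# Working basins for the example key (single-value basins dropped).
B = [[0, 1, 2, 4, 10],
     [3, 5, 6, 7, 8, 9, 11, 12, 14, 15, 17, 18, 19, 20, 21, 23],
     [16, 22, 24, 25, 26]]

def encode_char(value, rng):
    """value in 0..26 -> three random basin members, one per ternary digit."""
    digits = [(value // 9) % 3, (value // 3) % 3, value % 3]
    return [rng.choice(B[d]) for d in digits]

def decode_char(triple):
    """Recover the value from the basin index of each cipher value."""
    digits = [next(i for i, b in enumerate(B) if v in b) for v in triple]
    return digits[0] * 9 + digits[1] * 3 + digits[2]

rng = random.Random(0)
for value in range(27):                      # round trip for every value
    assert decode_char(encode_char(value, rng)) == value
```

For K = 11 the digits are 1 0 2, so the three cipher values are drawn from b(1), b(0) and b(2) respectively, matching the worked example.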
Table No. 3.2.3.1: Encryption & Decryption Process by Model 3

Encryption:
Plain text   Alphanumeric value   Equivalent basins   Cipher text (random values from those basins)
A            01                   b(0) b(0) b(1)      1  2  8
S            19                   b(2) b(0) b(1)      24 4  9
K            11                   b(1) b(0) b(2)      12 2  26

Decryption:
Cipher text   Basin of each value   Alphanumeric value   Plain text
1  2  8       b(0) b(0) b(1)        01                   A
24 4  9       b(2) b(0) b(1)        19                   S
12 2  26      b(1) b(0) b(2)        11                   K
Complexity of the algorithm by its strength: complexities can also be expressed as orders of magnitude. If the length of the key is k, then the processing complexity is given by 2^k, i.e. 2^k operations are required to break the algorithm. In the given algorithm a matrix key is used. This matrix key is multiplied with a ternary vector, and a sign function converts all positive values of the product to 1, all negative values to -1 and zero to 0. A sequence is generated and divided into basins of similar values. The characters of the plain text are replaced by sets of basins based on the chosen base value, and each basin is replaced by a random value from the corresponding basin. This provides the necessary strength to the algorithm: even knowing the algorithm and the cipher text, it is quite difficult to recover the matrix key. Since there is no means by which the key can be retrieved other than trying all combinations of keys, the complexity of the algorithm by strength is exponential in nature. Thus this algorithm is considered safe in a real-time environment.
Table No. 3.2.3.2: Representing No. of computations, computing power, complexity by construction & strength

S No.  Algorithm                                             No. of computations  Computing power  Complexity by construction  Complexity by strength
1      Converting n = 0:26 to ternary vector r               1                    27               n                           ---
2      Calculating r-1                                       1                    27               n                           ---
3      Multiplying r with the key considered                 1                    27*9             n                           ---
4      Applying sign function on the product                 1                    27               n                           ---
5      Calculating r+1                                       1                    27               n                           ---
6      Converting output ternary vector to integer form      1                    27               n                           ---
7      Dividing the sequence into basins                     1                    27*27 + 27*27    n^2                         ---
8      Converting plain text to alphanumerical values        1                    27               n                           ---
9      Converting plain text values to sets of basins        1                    27*3             n                           ---
10     Representing values from basins in random order       1                    27*3 + k         n                           ---
11     Generating cipher text                                1                    27               n                           ---
       Total                                                                                       O(n^2)                      Exponential
For example, considering the key A of size (3*3), Case 1: A = key = [2 5 -6; 3 1 3; 4 -2 -3]
Sequence generated from the proposed model:
n = 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
r = 2 0 0 18 0 3 18 18 6 20 2 6 20 13 6 20 24 6 20 8 8 23 26 8 26 26 24.
thus the basins that can be formed using this sequences are b(0)=(0,1,2,4,10) b(1)=(3,5,6,7,8,9,11,12,14,15,17,18,19,20,21,23) b(2)=(13) b(3)=(16,22,24,25,26) Case 2: By increasing the values of key by 1 at each row
A = key = [3 5 -6; 4 1 3; 5 -2 -3]
Sequence generated r= 1 0 0 18 0 0 18 18 3 20 2 6 20 13 6 20 24 6 23 8 8 26 26 8 26 26 25. Thus the basins formed are b(0)= (0,1,2,4,5,10) b(1)= (3,6,7,8,9,11,12,14,15,17,18,19,20,23) b(2)=(13) b(3)= (16,21,22,24,25,26);
A = key = [1 5 -6; 2 1 3; 2 -2 -3]
Sequence generated by the model. r= 11,1,3,20,0,6,18,18,6,20,2,6,20,13,6,20,24,6,20,8,8,20,26,6,23,25,15.
Basins formed are b(0) = (0,11,4) b(1) = (1); b(2)= (2,3,10); b(3)= (5,6,7,8,9,12,14,15,17,18,19,20,21,22,23,24,26) b(4)=(13) b(5)=(25).
Table No. 3.2.3.3: Identifying the variations in cipher text by slight variations in the key

Case considered   No. of sub keys (basins) formed   No. of alphabet characters the cipher text supports   Ratio of cipher text length to plain text length
Case 1            4                                 64                                                    3
Case 2            4                                 64                                                    3
Case 3            6                                 36                                                    2
Table No. 3.2.3.4: Identifying the variations in cipher text by slight variations in the plain text

Case considered   Plain text   Cipher text   Data overhead
Case 1            a b c        0biabpacj     3 times
Case 1            b b c        b0v0jpj0d     3 times
Case 1            c b c        0s0bdxacj     3 times
Table No. 3.2.3.5: Identifying multiple copies of cipher text for one plain text

Case considered   Plain text   Cipher text   Data overhead
Case 1            a b c        0ahbap0kj     3 times
Case 1            a b c        0jwdavjrd     3 times
Case 1            a b c        jbnbdyatd     3 times
Thus a lot of variation in the cipher text is observed for slight variations in the key and the plain text. This provides a maximum avalanche effect, which gives the algorithm maximum strength and security. It is also observed that slight variations in the key produce a substantial variation in the number of basins formed. This can reduce the data overhead, as fewer combinations of basins are sufficient to support the characters of the alphabet.
3.2.3.9 Conclusion:
In this work a ternary system with a 3-digit number is used, so the sequence generated has 27 values. By using an n-ary vector, the length of the vector can be increased further, although this does not guarantee an increase in the number of basins formed; rather, the number of values of the generated sequence in each basin will increase. Thus the new model provides a new probabilistic substitution mechanism where each character of plain text is replaced by two or three characters of cipher text depending on the chosen key, and the model develops multiple cipher texts for one plain text. The algorithm provides almost equal security at low computational overhead, and is free from differential and linear crypto analysis, which makes it suitable for data encryption. The limitation of this algorithm is that more data has to be transmitted than the actual data, which demands more bandwidth.
3.3 A New Mathematical Model with a key & a Time stamp for encryption mechanism.
In this work a model is used which develops data distributed over an identified value used as a nonce (IV). By properly considering an empirical value, data is derived from the developed model; this empirical value is considered as the key. The process is repeated for different timings, which are used as time stamps in the encryption mechanism [11,33]. Thus this model generates a distributed sequence which is used as a sub key. The model involves an identified value used as the nonce (IV), a considered empirical value used as the key, and different timings used as time stamps. These are important parameters in symmetric data encryption schemes, supporting not only security but also the timeliness of the encryption mechanism and acknowledgement between the participating parties.
3.3.1 Mathematical modeling of the problem. The approach to time series analysis is the establishment of a mathematical model describing the observed system. Depending on the nature of the problem, a linear or nonlinear model is developed. This model can be used to generate data at different times, which is mapped with the plain text to generate the cipher text.
3.3.2 Linear data flow problem. The initialization vector (IV) considered in the problem is n. When t = 0, T(I) = Y(I) = 300, where I = 1, 2, ..., M. The problem area is divided into M grid points, and for simplicity the data at the first and Mth grid points are taken to be known and constant. The conservation equation relating data, time and space is

    alpha * (d2T/dx2) = dT/dt

where alpha is a constant, T is the data value, x is the space value and t is the time value [38]. Integrating on both sides over each control volume, for the grid points 2 to M-1 the conservation equation gives

    (alpha/delx)*(T(I+1,n+1) - T(I,n+1)) - (alpha/delx)*(T(I,n+1) - T(I-1,n+1)) = (delx/delt)*(T(I,n+1) - T(I,n))

    (alpha*delt/(delx)**2)*(T(I+1,n+1) - T(I,n+1)) - (alpha*delt/(delx)**2)*(T(I,n+1) - T(I-1,n+1)) = T(I,n+1) - T(I,n)

    (-2*alpha*delt/(delx)**2)*T(I,n+1) + (alpha*delt/(delx)**2)*T(I-1,n+1) + (alpha*delt/(delx)**2)*T(I+1,n+1) = T(I,n+1) - T(I,n)

where T(I,n+1) represents the data value of the considered grid point for the current delt, T(I-1,n+1) and T(I+1,n+1) represent the data values of the neighbouring grid points for the current delt, and T(I,n) represents the data item for the past delt. Considering alpha, which is the key for the given model, the coefficients are obtained for each state (grid point): A(I) refers to the coefficient of the data value of the corresponding grid point, C(I) and B(I) to those of the preceding and succeeding grid points for the current delt, and D(I) to the data value of the considered grid point in the preceding delt.

    A(I) = 1 + 2*alpha*delt/(delx)**2
    B(I) = -alpha*delt/(delx)**2
    C(I) = -alpha*delt/(delx)**2
    D(I) = T(I,n)
3.3.3 Procedure for generating data from the coefficients by the tridiagonal method. Using the coefficients of the grid points and the tridiagonal matrix algorithm, the data distribution is calculated. The grid points are numbered 1, 2, 3, ..., M, with points 1 and M denoting the extreme states. The discretisation equation can be written as

    A(I)*T(I) + B(I)*T(I+1) + C(I)*T(I-1) = D(I),  for I = 1, 2, 3, ..., M.

Thus the data T(I) is related to the neighbouring data values T(I+1) and T(I-1). For the given problem C(1) = 0 and B(M) = 0, as T(1) and T(M) represent boundary states. These conditions say that T(1) is known in terms of T(2). The equation for I = 2 is a relation between T(1), T(2) and T(3); but since T(1) can be expressed in terms of T(2), this reduces to a relation between T(2) and T(3). This process of substitution is continued until T(M-1) is formally expressed in terms of T(M). Since T(M) is known, we can obtain T(M-1); by back substitution T(M-2), T(M-3), ..., T(3), T(2) are obtained. The process is continued until further iterations cease to produce any significant change in the values of T. Finally the data distribution is obtained for all grid points at different times by considering a suitable alpha, which is used as the key.
3.3.4 Results. By considering a suitable key alpha = 4, with delt = 2 and delx = 2, for a total time stamp of 6 units, the data values obtained for delt = 2, time = 2 are: 33 6 7 4 33 8 11 13 32 22 29 20 26 0 18 10 17 11 1 1;
Thus by using the same key, by changing the time stamp values different sequences can be generated which are used as sub keys. These sub keys can be mapped to plain text to generate cipher text.
Table No. 3.3.1: Encryption & Decryption process by the Mathematical Model

Encryption:
Plain Text   Alphanumeric value   Sub key   Total   Mod 36   Cipher Text
a            10                   33        43      07       7
s            28                   6         34      34       y
k            20                   7         27      27       r
s            28                   4         32      32       w

Decryption:
Cipher Text   Alphanumeric value   Add 36 if less than sub key   Sub key   Subtract   Plain Text
7             07                   43                            33        10         a
y             34                   34                            6         28         s
r             27                   27                            7         20         k
w             32                   32                            4         28         s
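The sub-key mapping of Table 3.3.1 uses the same add-mod-36 substitution as the earlier models. A small sketch, assuming digits 0-9 take values 0-9 and letters a-z take values 10-35:

```python
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"

def encrypt(plain, subkey):
    return "".join(ALPHABET[(ALPHABET.index(c) + k) % 36]
                   for c, k in zip(plain, subkey))

def decrypt(cipher, subkey):
    return "".join(ALPHABET[(ALPHABET.index(c) - k) % 36]
                   for c, k in zip(cipher, subkey))

subkey = [33, 6, 7, 4]           # first four data values for time = 2
assert encrypt("asks", subkey) == "7yrw"
assert decrypt("7yrw", subkey) == "asks"
```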
1. Consider the steady state equation. 2. Integrate the steady state equation over the control volume. 3. Represent the above equation in terms of a constant, the time stamps and the nonce. 4. Identify the coefficients A[], B[], C[], D[] in terms of the constant (which is the key), the time stamp and the nonce. 5. Initialize the boundary data values, say T1 and Tn. 6. Represent T2 in terms of T1 and T3, and continue the process till Tn-1. 7. Knowing Tn, use back tracking to calculate Tn-1, Tn-2, ..., T2. 8. Repeat the process for different time stamp steps to generate the sequence. 9. Map the sequence to the plain text to generate the cipher text. Thus the total number of computations to generate cipher text from plain text using the sequence generated from the developed model is 9.
Computing power to generate cipher text from plain text: If we go by the algorithm, the number of computations required to generate the sequence at different grid points by different time stamps depends on the steps of time stamp, nonce considered. In the given problem for a time stamp 10 units & a Nonce of 21, the total number of computation needed to generate 20 character block of cipher text is around 500 computations.
3.3.6 Complexity of the model By Construction: In the given model, the data value depends on a constant value which is the key; it also depends on time stamp and a nonce value. By changing the time stamp different values of the sequence which is to be mapped between plain text & cipher text can be generated. By changing the nonce also, different values can be generated which provides good security against crypto analysis. The complexity of the model by construction is O(nt).
Table No. 3.3.2: Complexity of the Algorithm by its construction

S.No.  Computational details                                                  No. of computations   Complexity
1      Read key, delt, delx & time                                            4
2      Read nonce (n)                                                        1                     O(n)
...
6      ... till the total time is reached                                                          O(nt)
7      Calculate T2 in terms of T1 & T3, continuing the process till          n-1                   O(n)
       Tn-1 is represented in terms of Tn
8      Knowing Tn, calculate Tn-1                                             1
9      Back tracking the process to calculate Tn-2 in terms of Tn-1,          n-1                   O(n)
       and so on till T2
10     Repeat the process for successive delts till the total time            (n-1)*(time/delt)     O(nt)
       is reached
By its strength: since in the given model the present data depends on the key, the time stamps, the nonce and the past data, there is no means by which the key can be guessed other than trying all combinations of keys. Thus the complexity of the model can be considered exponential in nature.
3.3.7 Avalanche effect: It mainly deals with identifying major variations in cipher text by small variations in key values. In the given model it is observed that a small change in key values, Time stamps or the nonce is providing major changes in cipher values.
Case 1: By changing the key by one value: alpha = 5, delt = 2, delx = 2, time = 4, for n = 21, with the initialization constants T1 and Tn taken as 300 and 300 respectively:
12 8 11 19 30 17 7 14 22 9 4 16 28 26 22 9 25 7 10 6 0 12 27 1 35 22 7 15 23 2 19 7 15 8 25 20 31 14 6 2 25 0.
Case 2: alpha = 5, delt = 1, delx = 2, time = 4:
12 6 20 5 9 4 28 22 32 17 5 28 7 29 28 34 22 32 26 20 0 12 31 22 3 24 7 5 23 30 12 27 19 22 0 11 35 15 6 15 30 0 12 12 17 23 1 6 17 3 19 27 11 15 14 24 29 13 6 18 19 28 22 0 12 21 12 4 8 5 25 9 2 21 15 1 25 31 2 1 16 1 18 7.
Case 3: alpha = 5, delt = 2, delx = 1, time = 4:
12 10 9 10 9 10 9 10 9 10 9 10 9 11 6 16 5 6 24 13 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 4 6 7 0.
Case 4: alpha = 3, delt = 2, delx = 2, time = 4:
12 21 12 16 3 0 24 7 3 1 30 18 18 34 9 3 16 9 27 6 0 12 20 4 1 24 28 22 16 28 17 30 3 21 20 23 32 25 8 13 1 0.
3.3.8 Security Analysis: In the given model, the sub key generated depends on a) Time stamp which is total time/del t. b) The considered Nonce value, which gives variable number of sub key values. Since both the time stamp and nonce considered can be dynamic in nature, the number of rounds of the algorithm and the number of values of the sub key generated are variable in nature. These features provide sufficient security to the algorithm. Since the number of rounds is variable in nature and a mod function is applied on the sub key values, it is relatively free from differential crypto analysis. The Nonce value considered in the algorithm is dynamic which gives variable number of sub key values which makes the algorithm free from linear crypto analysis. This model is using dynamic time stamps and nonce values in addition to key values to generate sub keys, it is relatively free from cipher text, known plain text, chosen plain text, chosen cipher text and chosen text attacks
3.3.9 Conclusion:
This encryption mechanism uses an Initialization Vector, a Time Stamp and a Key to generate distributed sequences which are used as sub keys. Since the time stamp is variable in nature, the model provides sufficient security against crypto analysis. The model is free from cipher text only, known plain text and chosen cipher text attacks. In the given model past and present time stamps have been used to generate data. By properly choosing future time stamps, the model can be made still stronger.
Training an encryption model involves a variable length key, a plain text and an algorithm which applies the key to the plain text to generate the cipher text. This allows the data to be transmitted over the network in a form which cannot be read by any intruder. In the first algorithm, the model is trained for different keys and an analysis of the models is made. The keys used for training the model are 3*3 and 4*4 matrix keys. By making small variations in the keys, the model is studied for its performance and working, for its increase in security under slight variations in the key values, and for changes in data overhead under small variations in the key values.
The model will be trained for different input vectors applied on the matrix keys to generate different sequences which will be used to generate basins. The model is trained for different input vectors of 27, 81 values with base 3 and 64 & 100 values with a base 4. For example, if the input vector of 27 values ranging from(0-26) is considered, with a base value of 3 & the number being represented in 3 digit form, then the values will be represented as (0 0 0 to 2 2 2).
For example:
(26)10 in 3-digit base-3 format = 2*3^2 + 2*3^1 + 2*3^0; ternary equivalent (2 2 2)3.
(80)10 in 4-digit base-3 format = 2*3^3 + 2*3^2 + 2*3^1 + 2*3^0; ternary equivalent (2 2 2 2)3.
(100)10 in 4-digit base-4 format = 1*4^3 + 2*4^2 + 1*4^1 + 0*4^0; 4-ary equivalent (1 2 1 0)4.
(255)10 in 4-digit base-4 format = 3*4^3 + 3*4^2 + 3*4^1 + 3*4^0; 4-ary equivalent (3 3 3 3)4.
Case 1: Considering the key A of size (3*3):
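The fixed-width digit expansions above can be sketched with a small helper (an illustrative aid, not part of the thesis algorithms; the function name is ours):

```python
def to_base(n, base, digits):
    """Return n as a fixed-length digit vector, most-significant digit first."""
    out = []
    for _ in range(digits):
        out.append(n % base)
        n //= base
    if n:
        raise ValueError("n does not fit in the given number of digits")
    return out[::-1]

print(to_base(26, 3, 3))   # [2, 2, 2]
print(to_base(80, 3, 4))   # [2, 2, 2, 2]
print(to_base(100, 4, 4))  # [1, 2, 1, 0]
print(to_base(255, 4, 4))  # [3, 3, 3, 3]
```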
A = key =
[ 2  5 -6
  3  1  3
  4 -2 -3 ]
r = 2 0 0 18 0 3 18 18 6 20 2 6 20 13 6 20 24 6 20 8 8 23 26 8 26 26 24.
The basins that can be formed using this sequence are: b(0) = (0,1,2,4,10); b(1) = (3,5,6,7,8,9,11,12,14,15,17,18,19,20,21,23); b(2) = (13); b(3) = (16,22,24,25,26).
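The grouping into basins is consistent with treating the sequence as an iterated map i -> r[i] and collecting, for each terminal cycle, every index whose trajectory reaches it. A sketch of our reading of this construction (not the thesis's own code):

```python
def basins(r):
    """Label each index 0..n-1 with the basin of attraction it belongs to
    under the map i -> r[i]: indices whose trajectories end in the same
    cycle share a basin.  Basin numbers are assigned in order of discovery."""
    n = len(r)
    label = [None] * n
    next_basin = 0
    for start in range(n):
        path, i = [], start
        # walk until we meet an already-labelled index or close a cycle
        while label[i] is None and i not in path:
            path.append(i)
            i = r[i]
        if label[i] is None:          # closed a brand-new cycle
            b = next_basin
            next_basin += 1
        else:                         # joined a known basin
            b = label[i]
        for j in path:
            label[j] = b
    return label

# sequence of Case 1 above
r = [2,0,0,18,0,3,18,18,6,20,2,6,20,13,6,20,24,6,20,8,8,23,26,8,26,26,24]
groups = {}
for i, b in enumerate(basins(r)):
    groups.setdefault(b, []).append(i)
print(groups[0])  # [0, 1, 2, 4, 10]
print(groups[2])  # [13]
print(groups[3])  # [16, 22, 24, 25, 26]
```

Run on the Case 1 sequence, this reproduces the b(0), b(2) and b(3) listed above.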
Case 2:
A = key =
[ 3  5 -6
  4  1  3
  5 -2 -3 ]
r = 1 0 0 18 0 0 18 18 3 20 2 6 20 13 6 20 24 6 23 8 8 26 26 8 26 26 25.
Thus the basins formed are: b(0) = (0,1,2,4,5,10); b(1) = (3,6,7,8,9,11,12,14,15,17,18,19,20,23); b(2) = (13); b(3) = (16,21,22,24,25,26).
Case 3: By decreasing the key values by 1 at each row.
A = key =
[ 1  5 -6
  2  1  3
  2 -2 -3 ]
Sequence generated by the model:
r = 11, 1, 3, 20, 0, 6, 18, 18, 6, 20, 2, 6, 20, 13, 6, 20, 24, 6, 20, 8, 8, 20, 26, 6, 23, 25, 15.
Basins formed: b(0) = (0,11,4); b(1) = (1); b(2) = (2,3,10); b(3) = (5,6,7,8,9,11,12,14,15,17,18,19,20,21,22,23,24,26); b(4) = (13); b(5) = (25).
In Case 3, for the given key, 6 basins have been formed, so the alphabet directly supports 6 characters. By representing each character of the alphabet by a pair of basins, the number of characters of the alphabet can be increased to 36 values, which reduces the data overhead by half.
Case 4: By increasing the key size to [4*4], we can increase the number of basins and also the number of characters of the alphabet that can be supported.
A = key =
[ 1  5  1  2
  2  6  1  3
  4  4  2  4
  3  2  3  3 ]
Sequence generated for n = 0 to 80, values represented to the base 3:
r = 0, 33, 60, 0, 0, 6, 0, 9, 20, 54, 57, 60, 0, 0, 40, 0, 19, 20, 54, 54, 60, 54, 54, 74, 9, 20, 20, 33, 60, 60, 0, 6, 26, 9, 20, 26, 57, 60, 61, 0, 40, 80, 19, 20, 23, 54, 60, 71, 54, 74, 80, 20, 20, 47, 60, 60, 71, 6, 26, 26, 20, 26, 26, 60, 61, 80, 40, 80, 80, 20, 23, 26, 60, 71, 80, 74, 80, 80, 20, 47, 80.
The basins formed by considering similar values:
b(0) = (0,3,4,6,12,13,15,30,39,5,31,57,10,36)
b(1) = (9,54,7,24,33,18,19,21,22,45,48,1,27,16,42)
b(2) = (20,60,8,17,25,26,34,43,51,52,69,78,2,11,28,29,37,46,54,55,63,72,32,35,58,59,61,62,71,9,18,19,21,22,45,48,38,64,47,56,73,7,24,33,16,42,53,79,1,27)
b(3) = (23,74,44,70,49,75)
b(4) = (40,14,66)
b(5) = (80,41,50,65,67,68,76,77)
Case 5:
A = key =
[ 2  5  1  2
  3  6  1  3
  5  4  2  4
  4  2  3  3 ]
Sequence generated for n = 0 to 80, values represented to the base 3:
r = 0, 6, 33, 0, 0, 6, 0, 0, 20, 54, 54, 60, 0, 0, 0, 0, 9, 20, 54, 54, 57, 54, 54, 65, 0, 20, 20, 33, 60, 60, 0, 6, 26, 9, 20, 26, 57, 60, 61, 0, 40, 80, 19, 20, 23, 24, 60, 71, 54, 74, 80, 20, 20, 47, 60, 60, 80, 15, 26, 26, 23, 26, 26, 60, 71, 80, 80, 80, 80, 20, 26, 26, 26, 16, 80, 80, 74, 80, 80, 47, 74, 80.
The basins formed from the generated sequence:
b(0) = (0,3,4,6,7,12,13,14,15,24,30,39,1,5,31,57,45,20,36,8,17,25,26,34,43,44,51,52,69,32,35,58,59,61,62,70,71,38,47,64,53,78)
Case 6:
A = key =
[ 3  5  1  2
  4  6  1  3
  6  4  2  4
  5  2  3  3 ]
Sequence generated for n = 0 to 80, values represented to the base 3:
r = 0, 3, 6, 0, 0, 3, 0, 0, 10, 54, 54, 60, 0, 0, 0, 0, 0, 20, 54, 54, 54, 54, 54, 55, 0, 10, 20, 33, 60, 60, 0, 6, 26, 9, 20, 26, 57, 60, 61, 0, 40, 80, 19, 20, 23, 54, 60, 71, 74, 54, 80, 20, 20, 47, 60, 70, 80, 25, 26, 26, 26, 26, 26, 60, 80, 80, 80, 80, 80, 20, 26, 70, 80, 80, 77, 80, 80, 74, 77, 80.
Basins formed from the generated sequence:
b(0) = (0,3,4,6,7,12,13,14,15,16,22,28,37,79,80,1,5,2,29,39,48,54,62,63,64,65,66,67,71,72,74,75,78,9,10,18,19,20,43,47,45,46,76,31,8,23,40,17,24,32,41,49,50,68,51,42,38)
b(1) = (25,33,55,21,26,60,30,56,57,58,59,69,11,27,35,44,52,61,34,36)
b(2) = (70,53)
b(3) = (77,73)
Case 7:
A = key =
[ 1  5  1  2
  2  6  1  3
  4  4  2  4
  3  2  3  3 ]
Sequence generated for n = 0 to 100 for a base value of 4:
r = 0, 72, 136, 136, 0, 0, 8, 42, 0, 16, 34, 42, 32, 34, 34, 34, 128, 132, 136, 137, 0, 0, 85, 170, 0, 33, 34, 38, 33, 34, 34, 34, 128, 128, 136, 154, 128, 128, 162, 170, 16, 34, 34, 98, 34, 34, 34, 34, 128, 128, 170, 128, 145, 162, 162, 161, 162, 162, 162, 34, 34, 34, 34, 72, 136, 156, 154, 0, 8, 42, 43, 16, 34, 42, 42, 34, 34, 34, 42, 132, 136, 137, 170, 0, 85, 170, 33, 34, 38, 42, 34, 34, 34, 38, 128, 136, 154, 170, 128.
Basins formed from the generated sequence:
b(0) = (0,4,5,8,20,24,21,68,84,6,69)
b(1) = (16,128,9,40,72,32,33,36,37,48,49,52,96,100,1,64,12,25,28,38)
Case 8:
A = key =
[ 2  5  1  2
  3  6  1  3
  5  4  2  4
  4  2  3  1 ]
Sequence generated for n = 0 to 100 for a base value of 4:
r = 0, 8, 72, 136, 0, 0, 8, 26, 0, 0, 34, 42, 16, 34, 34, 128, 128, 136, 136, 0, 0, 0, 106, 0, 16, 34, 34, 32, 34, 34, 34, 128, 128, 132, 138, 128, 128, 146, 166, 0, 34, 34, 34, 34, 34, 34, 34, 128, 128, 128, 154, 128, 128, 162, 162, 144, 162, 162, 162, 34, 34, 34, 34, 72, 136, 136, 154, 0, 8, 42, 42, 16, 34, 42, 92, 34, 34, 34, 42, 132, 136, 137, 170, 0, 85, 170, 170, 33, 34, 38, 42, 34, 34, 34, 38, 128, 136, 154, 170, 128.
Basins formed from the generated sequence:
b(0) = (0,45,8,9,20,21,22,24,40,68,84,1,6,69)
b(1) = (16,128,12,25,72,17,32,33,36,37,48,49,50,52,53,96,100,2,64,28,88)
b(2) = (26,34,7,10,13,14,15,27,29,30,31,41,42,43,44,45,46,47,60,61,62,63,73,76,77,78,89,92,93,94,11,70,71,74,79,91,75)
b(3) = (38,146,90,95)
b(4) = (85)
Case 9:
A = key =
[ 3  5  1  2
  4  6  1  3
  6  4  2  4
  5  2  3  3 ]
Sequence generated for n = 0 to 100 for a base value of 4:
r = 0, 4, 8, 72, 0, 0, 4, 9, 0, 0, 17, 38, 0, 33, 34, 34, 128, 128, 136, 136, 0, 0, 26, 0, 0, 34, 34, 16, 34, 34, 34, 128, 128, 128, 137, 128, 128, 129, 162, 17, 34, 34, 33, 34, 34, 34, 128, 128, 128, 134, 128, 128, 146, 162, 64, 162, 162, 162, 34, 34, 34, 72, 136, 156, 154, 0, 8, 42, 42, 16, 34, 42, 42, 34, 34, 34, 42, 132, 136, 137, 170, 0, 85, 170, 170, 33, 34, 38, 42, 34, 34, 34, 38, 128, 136, 154, 170, 128.
Basins formed from the generated sequence:
b(0) = (0,4,5,8,9,12,20,22,23,38,65,80,97,98,99,100,1,6,2,66,7,11,86,91)
b(1) = (16,128,26,69,15,30,31,32,34,35,46,47,48,50,51,92,96,21,14,24,25,27,28,29,40,41,43,44,45,55,56,60,70,73,74,75,85,88,89,90,81)
b(2) = (17,136,10,39,18,62,63,78,93)
b(3) = (33,137,13,42,84,67,68,71,72,76,87,3,61)
b(4) = (64,154,54,94)
From the above study, it has been observed that the generated model is trained for [3*3] and [4*4] random keys on input vectors of 27 and 81 values with base 3 and 100 values with base 4. An attempt has been made to show a better avalanche effect with respect to the models developed. For the input vectors of 27, 81 and 100 values, the number of basins formed varies between 3 and 6. It has been observed that more values in the input vector do not guarantee an increase in the number of basins, but an increase in the number of values within the basins is observed, which provides more security. It is also observed that for certain variations in the key the model generates more basins. Thus, for a randomly generated key, the number of basins is evaluated, and a better metric can be assigned to a random key that yields more basins. Since the basins form the characters of the alphabet, the more basins there are, the more characters the alphabet supports, and a combination of basins further increases the number of characters. It is also observed that the more basins there are, the less the data overhead. For example, if 6 basins are generated and 2 basins are used per character, the alphabet supports 36 characters; if only 3 basins are formed, each character has to be represented by 3 basin values for the alphabet to support 27 characters. The analysis and results of the training model are illustrated in Tables 4.1 and 4.2.
Table 4.1: Relationship between Random Key considered with the Basins ( Sub Keys) generated
S.No. | Case (input vector) | Random Key | Basins formed | Metric between Key and Basins formed
1 | base 3 | [2 5 -6; 3 1 3; 4 -2 -3] | 4 | Low
2 | base 3 | [3 5 -6; 4 1 3; 5 -2 -3] | 4 | Low
3 | base 3 | [1 5 -6; 2 1 3; 2 -2 -3] | 6 | High
4 | base 3 | key of Case 4 | 6 | High
5 | base 3 | key of Case 5 | 4 | Low
6 | base 3 | key of Case 6 | 4 | Low
7 | base 4 | key of Case 7 | 6 | High
8 | base 4 | [2 5 -6 1; 3 1 3 2; 4 -2 -3 1; 5 2 4 4] | 5 | Medium
9 | base 4 | key of Case 9 | 5 | Medium
Table 4.2: Level of Metric between Key and Basins (Sub Keys) generated for the Sign Function applied on the product of the modified ternary vector and the random matrix key

S.no. | Random Key | Basins formed | Ratio of cipher text to plain text characters | Characters of alphabet that can be mapped | Data overhead | Metric between Key and Basins formed
1 | [2 5 -6; 3 1 3; 4 -2 -3] | 4 | 3 | 64 | 3 | Low
2 | [3 5 -6; 4 1 3; 5 -2 -3] | 4 | 3 | 64 | 3 | Low
3 | [1 5 -6; 2 1 3; 2 -2 -3] | 6 | 2 | 36 | 2 | High
4 | key of Case 4 | 6 | 2 | 36 | 2 | High
5 | key of Case 5 | 4 | 3 | 64 | 3 | Low
6 | key of Case 6 | 4 | 3 | 64 | 3 | Low
7 | key of Case 7 | 6 | 2 | 36 | 2 | High
8 | key of Case 8 | 5 | 3 | 125 | 3 | Medium
9 | key of Case 9 | 5 | 3 | 125 | 3 | Medium
Table 4.3: Level of Metric between Key and Basins (Sub Keys) generated for different functions applied on the product of the modified ternary vector and the random key

S.no. | Random Key | Function applied on the product | Basins formed | Metric between Key and Basins formed
1 | [0 2 3; 2 2 2; 0 2 2] | Sign | 3 | Low
2 |  | Sign |  | Low
3 |  | Sign |  | Low
4 |  | Sign |  | Low
5 | [1 2 3; 3 2 2; 1 2 2] | Mod 3 |  | Low
6 | [1 2 3; 3 2 2; 1 2 2] | Mod 5 |  | High
7 | [1 2 3; 3 2 2; 1 2 2] | Mod 7 |  | High
8 | [11 8 9; 12 13 10; 11 9 9] | Mod 3 |  | High
9 | [11 8 9; 12 13 10; 11 9 9] | Mod 5 |  | High

It is observed that by using different functions, such as modular arithmetic or the sign function, on the product of the modified ternary vector and the matrix key, the relation between key and sub keys remains almost the same, i.e., it varies between 3 and 8 basins (sub keys).
Table 4.4: Possible Relationship between Plain Text and Cipher Text Generated in the Probabilistic Encryption Algorithm (Model 3), Depending on the Random Key Chosen

Case | Ratio of cipher text to plain text characters | Number of basins | Characters of alphabet | Possible characters in the basins | Possible cipher text combinations per plain text character
Case 1 | 3 | 3 (if the single 27-valued basin is neglected) | 27 | 6, 14, 5 | 30-450
Case 2 | 3 | 4 | 64 | 6, 1, 13, 6 | 6-78
Case 3 | 2 | 6 | 36 | 3, 1, 3, 1, 1, 17 | 3-51
Thus, depending on the key considered and the number of sub keys (basins) generated, one plain text character can be replaced by different combinations of cipher texts. From the above tables it is observed that if one plain text character is replaced by 3 basins, then the number of possible cipher text characters for one plain text character ranges over 30 to 450 combinations. For different possible keys, different combinations of cipher texts are possible for one plain text character, varying from small to large counts.
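The substitution step just described can be sketched as follows: each plain text character is assigned a tuple of basins, and one value is drawn at random from each assigned basin. The names and the toy basin contents below are hypothetical, for illustration only, and are not the thesis's actual tables:

```python
import random

# toy basin contents (hypothetical values, for illustration only)
basin_values = {
    0: [3, 9, 14, 21, 40, 52],   # 6 elements
    1: [7],                      # 1 element
    2: [11, 33, 60, 71, 80],     # 5 elements
}

# hypothetical assignment: plain text character index -> basins used
char_to_basins = {0: (0, 1, 2), 1: (2, 0, 1)}

def prob_encrypt_char(c):
    """One plain text character -> a random tuple of basin values, so the
    same character can map to many distinct cipher texts."""
    return tuple(random.choice(basin_values[b]) for b in char_to_basins[c])
```

With basin sizes 6, 1 and 5, character 0 here has 6 * 1 * 5 = 30 possible cipher texts, mirroring the lower end of the 30-450 range quoted above for three-basin encodings.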
Conversion of plain text to cipher text

Table 4.5: Conversion of plain text to cipher text for a 5 character text

Case 1: Plain text (5 characters) | Model 1 | Model 2 | Model 3
The re | Vhesm | Wmkym | Vbp0vvjspvdj0ex
Table 4.6: Conversion of plain text to cipher text for a 10 character text
Case 2: Plain text (10 characters) | Model 1 | Model 2 | Model 3
The require | Vhesmqxjzf | Wmkymrvjjf | Xapapy0izyoedkveypxudqoayabdix
Table 4.7: Conversion of plain text to cipher text for a 15 character text
Case 3: Plain text (15 characters) | Model 1 | Model 2 | Model 3
The requirements | Vhesmqxjzfukptu | Wmkymqfsuz | Ydp0ppmbby00dtpnpyzo0sddvdcjsychrdsyrgyp0vxai
Table 4.8: Conversion of plain text to cipher text for a 20 character text
Case 4: Plain text (20 characters) | Model 1 | Model 2 | Model 3
The reqiurementsof inf | Vhesmqxjzfukptuuhioi | Wmkymrvjjfqfsuzpnjwh | V0pdvyacpybal0yspxzkdlday00dqycchdoyelpyappdolpdbydqaasrxdxa
Thus multiple cipher texts can be formed for one plain text in the probabilistic encryption scheme, which makes the algorithm resistant to chosen cipher text attacks.
Case 1: 12 8 11 19 30 17 7 14 22 9 4 16 28 26 22 9 25 7 10 6 0 12 27 1 35 22 7 15 23 2 19 7 15 8 25 20 31 14 6 2 25 0.
Case 2: = 5, del t = 1, del x = 2, time = 4: 12 6 20 5 9 4 28 22 32 17 5 28 7 29 28 34 22 32 26 20 0 12 31 22 3 24 7 5 23 30 12 27 19 22 0 11 35 15 6 15 30 0 12 12 17 23 1 6 17 3 19 27 11 15 14 24 29 13 6 18 19 28 22 0 12 21 12 4 8 5 25 9 2 21 15 1 25 31 2 1 16 1 18 7.
Case 3: = 5, del t = 2, del x = 1, time = 4: 12 10 9 10 9 10 9 10 9 10 9 10 9 11 6 16 5 6 24 13 0 0 0 0 0 0 0 0 0 0 0 0 0 1 2 4 6 7 0.
Case 4: = 3, del t = 2, del x = 2, time = 4: 12 21 12 16 3 0 24 7 3 1 30 18 18 34 9 3 16 9 27 6 0 12 20 4 1 24 28 22 16 28 17 30 3 21 20 23 32 25 8 13 1 0.
Case 5: = 22, del t = 2, del x = 2, time = 4: 12 6 4 25 26 20 19 22 9 29 13 32 9 33 19 23 32 15 23 12 2 12 26 29 15 30 18 16 28 30 13 16 13 28 26 5 16 12 1 9 6 2.
Thus we can see that by changing the key values slightly, or by changing the time stamp, the model generates variable and distinct values, which provides sufficient strength to the algorithm. The algorithm can be made still stronger by varying the nonce value, which generates variable sub key values.
4.2 Role of Statistical tests on Values generated by the algorithms under study.
In the given study, two algorithms are discussed. Both execute a series of steps and generate a sequence, which is used as a sub key to map plain text to cipher text. The strength of the encryption and decryption process depends on the strength of the sequence generated. In this part of the work, statistical tests such as the uniformity test, the universal test and the repetition test are applied to the generated sequences to assess their strength.
Uniformity test: This test studies the uniformity of a generated sequence. From the sequence, consecutive pairs of points are taken as coordinates of a graph, and the process is repeated for all values of the sequence. For example, if a, b, c, d, ..., n are the values of the sequence, then the coordinates of the graph are (a,b), (c,d), ..., (n-1,n). If the values are dependent, the points form lines or planes. In this study, different cases are tried for both algorithms to examine the distribution of values in the sequence.
Universal test: This test checks whether the data can be compressed. Since the repetition of values in the given sequences is minimal, the developed sequences are relatively free from this test.
Repetition test: This test checks that each character of the sequence is not identical to the preceding character. In the sequence generated by the first algorithm, the values are random and unique in nature, so the probability that a value is repeated is 1/27. In the second algorithm, even though some values of the sequence are repeated, the distribution of values is non-uniform and random in nature, which gives no insight into the pattern of the sequence. Thus the sequences generated by both algorithms are relatively free from the uniformity, universal and repetition tests. The analysis is performed on the sequences generated by both algorithms, presented as case studies in the earlier part of the work, and is represented as graphs for both algorithms.
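The pairing used by the uniformity (serial) test and the adjacency check of the repetition test can be sketched as follows (helper names are ours):

```python
def serial_pairs(seq):
    """(seq[0],seq[1]), (seq[2],seq[3]), ...: plotted as 2-D points, a
    dependent generator shows up as lines or planes."""
    return list(zip(seq[0::2], seq[1::2]))

def repetition_rate(seq):
    """Fraction of adjacent positions holding identical values."""
    rep = sum(a == b for a, b in zip(seq, seq[1:]))
    return rep / (len(seq) - 1)

print(serial_pairs([12, 8, 11, 19, 30, 17]))  # [(12, 8), (11, 19), (30, 17)]
print(repetition_rate([1, 1, 2, 3]))          # 0.3333333333333333
```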
(Scatter plots of consecutive-value pairs for the sequences generated by the two algorithms.)
From the graphs we can see that the distribution of values in the sequences generated by the algorithms is non-uniform and random in nature, which provides sufficient strength against cryptanalysis.
Chapter 5
Key distribution forms an integral part of the encryption-decryption process. The success of symmetric encryption lies in the proper and confidential sharing of the private key among the participating parties. Sometimes keys get garbled in transmission; since a garbled key can mean megabytes of unacceptable cipher text, this is a problem. All keys should therefore be transmitted with some kind of error detection and correction bits. In this part of the work, a new approach is tried to identify whether a garbled form of the key has been transmitted to the participating parties. A hash value is generated from the key to be transmitted. This hash value, appended to the key and encrypted either by the private keys of the participating parties or by the master key they share, provides security and confidentiality to the transmitted key and also identifies any garbling of key values during transmission. In the methodology of the proposed work, two algorithms are presented: the first generates the product of a ternary vector and a random matrix key and applies a sign function on the product to generate a sequence (pp 42); the second uses a mathematical model with a key, time stamp and nonce values to generate a sequence (pp 84).
1. The sequence generated by the earlier methods is considered.
2. The sequence is converted to binary form.
3. A block of bits of the binary sequence, say 6, is considered.
4. Each block is XORed with the next block of data to generate a 6-bit hash value.
5.2 Example:
Considering the first algorithm:
1. Sequence generated from the algorithm: 0 13 9 6 2 12 10 5 1.
2. Binary form of the sequence: 00000000 00001101 00001001 00000110 00000010 00001100 00001010 00000101 00000001.
3. Considering 6 bits as one block of data.
4. Generated hash value = 101111 = 47.
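Steps 2-4 can be sketched as an XOR fold over 6-bit blocks, assuming 8 bits per sequence value and zero-padding when the bit string does not divide evenly (the padding rule is our assumption; the worked example above needs no padding):

```python
def xor_fold_hash(seq, width=8, block=6):
    """Concatenate each value as a width-bit string, split into block-bit
    chunks, and XOR the chunks together into one block-bit hash."""
    bits = "".join(format(v, "0{}b".format(width)) for v in seq)
    if len(bits) % block:
        bits += "0" * (block - len(bits) % block)   # assumed padding rule
    h = 0
    for i in range(0, len(bits), block):
        h ^= int(bits[i:i + block], 2)
    return h

print(xor_fold_hash([0, 13, 9, 6, 2, 12, 10, 5, 1]))  # 47
```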
5.3 Identifying a garbled key during transmission: Appending the hash value to the key: When the key is transmitted to the participating parties, a hash value is generated and appended to the key. At the receiver's side, the hash value is regenerated from the received key using the same algorithm and compared with the received hash value. If the two hash values match, the key has been received correctly and no garbling has taken place during transmission. The advantage of this mechanism is that if any error occurs in the key during transmission, there will be a mismatch between the received and regenerated hash values, and the participating parties can accordingly ask for retransmission of the key.
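The append-and-verify exchange can be sketched as follows (hash_fn stands for whatever hash the parties agree on; the function names and the stand-in hash are ours, for illustration only):

```python
def package_key(key_values, hash_fn):
    """Sender side: transmit the key together with its hash."""
    return {"key": key_values, "hash": hash_fn(key_values)}

def verify_key(package, hash_fn):
    """Receiver side: recompute the hash; a mismatch means the key was
    garbled in transit and a retransmission should be requested."""
    return hash_fn(package["key"]) == package["hash"]

simple_hash = lambda vals: sum(vals) % 64   # stand-in hash for illustration
pkg = package_key([7, 3, 19], simple_hash)
print(verify_key(pkg, simple_hash))         # True
pkg["key"][0] ^= 1                          # simulate garbling in transit
print(verify_key(pkg, simple_hash))         # False
```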
5.4 Conclusion:
This mechanism uses the same algorithm for the generation of sub keys and also for the
generation of the sequence used to produce the hash value. If any error takes place in the key during transmission, there will be a mismatch between the received and regenerated hash values, and the participating parties accordingly ask for retransmission of the key. Thus this model identifies any garbled form of the key transmitted from the Key Distribution Centre.
Chapter 6
Computational overhead (Computing Power) for the developed models
Algorithm 1: Model 1: Computational overhead for a 9 character key
1st computation: 27 calculations; 2nd computation: 27 calculations; 3rd computation: 27*9 calculations; 4th computation: 27 calculations; 5th computation: 27 calculations; 6th computation: 27 calculations; 7th computation: 27 calculations, considering a 27 character plain text; 8th computation: 27 calculations; 9th computation: 27 calculations; 10th computation: 27 calculations. Thus the total computational overhead per block of data by the first model is 486 calculations.
depending on the chosen rule; 10th computation: 03 calculations; 11th computation: 27 calculations; 12th computation: 27 calculations; 13th computation: 27 calculations; 14th computation: 03 calculations, depending on the chosen rule. Thus the total computational overhead per block of data by the second model is 1953 calculations.
8th computation: 27 calculations; 9th computation: 81 calculations; 10th computation: 81 calculations; 11th computation: 81*k calculations (where k refers to the number of computations for a random pick of values). Thus the total computational overhead per character by the fourth model is 2025 + 81k calculations.
Algorithm 2:
Going by the algorithm, the number of computations required to generate the sequence at different grid points for different time stamps depends on the steps of the time stamp and the nonce considered. In the given problem, for a time stamp of 10 units and a nonce of 21, the total number of computations needed to generate a 20 character block of cipher text is around 500.
Table 6.1: Comparative study of the number of computations and computational overhead of the generated models.

Algorithm | Number of computations for one plain text character to one cipher text character | Computational overhead for one plain text character to one cipher text character
Algorithm 1, Model 1 | 10 | 486
Algorithm 1, Model 2 | 14 | 1953
Algorithm 1, Model 3 | 11 | 2025 + 81k
Algorithm 2 | 09 | 500
Algorithm 2: Complexity of the model
By construction: In the given model, the data value depends on a constant value, which is the key; it also depends on a time stamp and a nonce value. By changing the time stamp, different values of the sequence that maps plain text to cipher text can be generated. By changing the nonce also, different values can be generated, which provides good security against cryptanalysis. The complexity of the model by construction is O(nt).
By its strength: Since in the given model the present data depends on the key, time stamps, nonce and past data, there is no means by which the key can be guessed other than trying all combinations of keys. Thus the complexity of the model can be considered exponential in nature.
TABLE 6.2: Comparative study of different generated models in terms of security analysis by construction and cryptanalytical strength

Model No. | Security analysis by construction
Model 2:
By changing the key slightly, there are large variations in the basins generated. Since these basins form the keys for the given model, they provide a maximum avalanche effect for the algorithm. Thus this model provides good variation in the cipher text for slight variations in the key. Depending on the starting value of the plain text, a slight variation in the plain text calls for a different key, so considerable variation in the generated cipher text is also seen for slight variations in the plain text. This maximum avalanche effect provides greater strength and security.
Model 3:
For the same plain text, different cipher texts are formed, which makes cryptanalysis difficult. A large variation in the cipher text is also observed for slight variations in the key and the plain text, providing a maximum avalanche effect. It is further observed that slight variations in the key produce a substantial variation in the number of basins formed, which again improves the avalanche effect and ultimately provides maximum strength and security to the algorithm.
Algorithm 2:
In the given model it is observed that a small change in the key values, time stamps or the nonce produces major changes in the cipher values.
The objective of linear cryptanalysis is to find an effective linear equation of the form P[x1, ..., xn] XOR C[y1, ..., ym] = K[z1, ..., zl], where P and C refer to the plain and cipher texts and K is the key. Once a proposed relation is determined, the procedure is to compute the left-hand side of the equation for a large number of plain text and cipher text pairs. If the result is zero for more than half of the pairs, assume that K[z1, ..., zl] = 0; otherwise assume it is 1. This gives linear equations on key bits, which can be solved to recover them.
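The counting procedure described above can be sketched as follows (toy bit positions and a toy XOR cipher for illustration; this is not an attack on any of the models here):

```python
def parity(word, positions):
    """XOR of the selected bit positions of an integer."""
    return sum((word >> p) & 1 for p in positions) % 2

def guess_key_parity(pairs, p_pos, c_pos):
    """Evaluate P[...] XOR C[...] over many (plaintext, ciphertext) pairs;
    if it is 0 more than half the time, guess K[...] = 0, else 1."""
    zeros = sum(parity(p, p_pos) ^ parity(c, c_pos) == 0 for p, c in pairs)
    return 0 if zeros > len(pairs) // 2 else 1

# toy cipher c = p XOR k, where the relation holds with probability 1
k = 0b1010
pairs = [(p, p ^ k) for p in range(16)]
print(guess_key_parity(pairs, [1], [1]))  # 1 (bit 1 of k is 1)
print(guess_key_parity(pairs, [0], [0]))  # 0 (bit 0 of k is 0)
```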
Algorithm 1: Model 1:
The model uses a sign function on the product of a ternary vector and a matrix key to generate the sequence. The sign function converts all positive values to 1, all negative values to -1, and zero to 0. This sequence is substituted for the plain text to generate the cipher text. Thus it is practically impossible to recover the matrix key from known plain text and cipher texts, and the model is free from differential cryptanalysis. But since this model uses a simple substitution technique to generate the cipher text, it is somewhat susceptible to linear cryptanalysis: the key cannot be obtained, nor the whole of the information, but part of the information can be gained from this model.
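A sketch of such a sign-function sequence generator follows. The encoding of the sign triple into a single value is our assumption (the thesis does not spell it out), so the numbers produced will not reproduce the case studies exactly:

```python
def sign(x):
    return (x > 0) - (x < 0)

def sign_sequence(key, n_values=27, base=3, digits=3):
    """For each n, form its base-b digit vector v, take sign(row . v) for
    each row of the key, shift the signs {-1,0,1} to {0,1,2} and re-encode
    them as a single base-b number (assumed encoding)."""
    seq = []
    for n in range(n_values):
        v, m = [], n
        for _ in range(digits):
            v.append(m % base)
            m //= base
        v.reverse()                               # most-significant first
        signs = [sign(sum(a * x for a, x in zip(row, v))) for row in key]
        val = 0
        for s in signs:
            val = val * base + (s + 1)            # assumed encoding
        seq.append(val)
    return seq

r = sign_sequence([[2, 5, -6], [3, 1, 3], [4, -2, -3]])
print(len(r))  # 27
```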
Model 2:
This model uses a sign function to generate the sequence, and similarity of values in the sequence is used to generate basins that act as variable length keys mapping plain text to cipher text. Since it uses a sign function, it is free from differential cryptanalysis, and since variable length keys are used, it is also free from linear cryptanalysis.
Model 3:
This model uses a sign function to generate the sequence. Similarity of values in the sequence is used to generate basins, the basins are mapped to characters of the plain text, and values from the basins are then chosen randomly to generate the cipher text. Since the model uses a sign function, it is free from differential cryptanalysis, and since basins with different and variable values are mapped to characters of the plain text in a random way to form the cipher text, it is also free from linear cryptanalysis.
Algorithm 2:
Since the time stamp and nonce considered can be dynamic in nature, the number of rounds of the algorithm and the number of sub key values generated are variable in nature. These features provide sufficient security to the algorithm. Since the number of rounds is variable and a mod function is applied to the sub key values, it is relatively free from differential cryptanalysis. The nonce value considered in the algorithm is dynamic, which gives a variable number of sub key values and makes the algorithm free from linear cryptanalysis.
Algorithm 1: Model 1: This model is completely free from the cipher text only type of attack. Under the other attacks, the key may not be retrieved, but a part of the plain text may be.
Model 2: This model is completely free from cipher text only and known plain text attacks, but may not be completely free from chosen plain text and chosen cipher text attacks, where a part of the information may be leaked through cryptanalysis.
Model 3: This model is completely free from cipher text, known plain text, chosen plain text, chosen cipher text and chosen text attacks. This algorithm is also free from public key attacks.
Algorithm 2:
Since this model uses dynamic time stamps and nonce values in addition to key values to generate sub keys, it is relatively free from cipher text, known plain text, chosen plain text, chosen cipher text and chosen text attacks.
Table 6.3: Comparative study of different generated models by avalanche effect, linear & differential cryptanalysis & security analysis

Model No. | Avalanche effect by variations in key | Avalanche effect by variations in plain text | Effect of cryptanalysis | Free from attacks
Algorithm 1, Model 1 | 11 | 1 | Low | Cipher text only
Algorithm 1, Model 2 | 12 | 3 | Low | Cipher text, known plain text
Algorithm 1, Model 3 |  |  | Low | Cipher text, known plain text, chosen plain text & cipher text
Algorithm 2 | 20 | 1 | Low | Cipher text, known plain text, chosen plain text & cipher text
Table 6.4: Summary of the comparative study of some models of the proposed algorithms in terms of number of computations, computational overhead, change in plain text, change in key, complexity of the models by their construction and strength, and security analysis, depending on the chosen key.
Model | Type of cipher | No. of computations | Computational overhead per character | Change in plain text | Change in key | Complexity by its construction | Complexity by its strength | Security analysis
Algorithm 1, Model 1 | Block cipher | 10 | 486 | 1 | 11 | O(n) | Exponential | Cipher text only
Algorithm 1, Model 2 | Block cipher | 15 | 1953 | 3 | 12 | O(n**2) | Exponential | Cipher text, known plain text
Algorithm 1, Model 3 | Stream cipher | 11 | 2025+81k | Indeterminate | Indeterminate | O(n**2) | Exponential | Text only
Algorithm 2 | Stream cipher | 09 | 500 | 1 | 20 | O(nt) | Exponential | Text only
Fig. 6.1: Pictorial representation of the comparative study of some models of the proposed algorithms in terms of computational overhead, change in plain text and change in key, depending on the chosen key. (Bar chart comparing changes in key and changes in plain text for Model 1, Model 2, Model 3 and Algorithm 2.)
Table 6.5.1: Comparative study of the proposed block ciphers (Model 1 & Model 2) with a standard algorithm (DES) in terms of computational overhead, data overhead, complexity and security analysis.

Algorithm | Computational overhead per block of data | Data overhead | Security analysis
DES * | 49392 for a 56-bit key |  |
Model 1 ** | 486 for a 72-bit key | Equal |
Model 2 | 1953 for a 72-bit key | Equal |
Table 6.5.2: Comparative study of the proposed stream ciphers (Algorithm 1, Model 3, the probabilistic encryption algorithm, and Algorithm 2) with a standard algorithm (RC4) in terms of computational overhead, data overhead, complexity and security analysis.

Algorithm | Computational overhead per character | Data overhead | Complexity | Security analysis
RC4 * | 64,000 for a 40-bit key |  |  |
Model 3 | 2106 for a 72-bit key | 40 | Exponential | Chosen text only
Algorithm 2 | 500 for an 8-bit key | 20 | Exponential | Chosen text only
(Bar charts: computational overhead per block of data for Model 1, Model 2 and DES; data overhead for 20-, 40- and 80-character texts.)
Figure 6.2: Pictorial representation of the proposed encryption algorithms (Model 1 & Model 2) against the standard DES algorithm in terms of computational overhead & data overhead.
(Bar chart: computational overhead per character for Model 3, Algorithm 2 and RC4.)
Figure 6.3: Pictorial representation of the proposed encryption algorithms (Model 3 & Algorithm 2) against the standard RC4 algorithm in terms of computational overhead & data overhead.
Thus in this work a comparative study of the three developed models of algorithm 1 and a model of algorithm 2 is presented in terms of computing power, complexity, avalanche effect and security analysis of the encryption process. In the given work the onus is on developing the sub keys, and the encryption and decryption processes are inverses of one another; thus the computing power, complexity, avalanche effect and security analysis are equally valid for both encryption and decryption. Models 1 & 2 are also compared with a standard block cipher, the DES algorithm, and Model 3 of algorithm 1 and Algorithm 2 are compared with a standard stream cipher, RC4. The comparative study is represented both in the form of tables and graphs.
Chapter 7
7. Applications
7.1 Mobile Adhoc Networks:
In applications like mobile ad-hoc networks [39], wireless sensor networks and broadcasting applications, the new symmetric encryption techniques and probabilistic encryption techniques can well be used. They can also be used as a key encapsulation mechanism (KEM) along with public key cryptography, which can be used in secure electronic transactions. A mobile ad-hoc network (MANET) is a type of wireless ad-hoc network of mobile routers. The routers are freely movable and can change from place to place. Such systems can be used independently or may be connected to the internet. They can be used at different places and at different times, usually with all nodes within a few hops of each other and with data transferred at a constant rate. A new substitution algorithm, in particular probabilistic encryption, can well be used in a MANET environment to secure the constant data sent among the different nodes of each hop. The unique characteristics of MANETs pose a number of non-trivial challenges to security design:
3. No clear line of defense.
4. Stringent resource constraints.
5. Highly dynamic network topology.
In this environment, the MANET architecture requires offloading heavy computation to more powerful devices. A further limitation is the vulnerability of routing protocols, which can freeze the whole network. Another requirement of MANETs is combining key pre-distribution with secret sharing to achieve key distribution. Thus, due to the absence of a clear line of defense, a complete security solution for a MANET should integrate both proactive and reactive approaches.
7.2 Wireless Sensor Networks:
A wireless sensor network contains spatially distributed sensor nodes that monitor physical or environmental conditions, such as vibration, pressure, motion or pollutants, at different places. The main application of these sensor networks was originally motivated by military applications. Each node is equipped with sensors, a radio transceiver, a small microcontroller and a battery. The capacity of the sensors considered depends on energy requirements, bandwidth and computational speed. A sensor network is supported by a multi-hop routing algorithm. Since very important applications of wireless sensor networks lie in the military domain, security is of high importance. Some constraints on security in sensors are:
1. Slow computation rate.
2. Battery power: a tradeoff between security and battery life.
3. Limited memory.
4. High latency: radios consume power and are turned on only periodically.
Thus in these environments of MANETs and WSNs there is a high possibility of pattern matching being used to extract unauthorised information from the data being sent. To provide
security in this environment, the developed models can well be used. In the probabilistic model, multiple cipher texts are generated for one plain text and any one of them can be used for transmission. By construction, the model is free from differential and linear cryptanalysis and also from chosen cipher text attacks. In the second algorithm, the encrypted data depends not only on the key but also on a random nonce value and a highly dynamic time stamp. Thus the probabilistic model and the time stamped model avoid statistical attacks and pattern matching attacks on the encrypted data being sent, which makes the developed models suitable in these environments. Moreover, since the developed models require less computing power for the encryption and decryption process, they are well suited to these environments where the volume of data transfer is high.
In a broadcasting environment, all the participants share the same private key for decryption, which defeats the very concept of security. This limitation can be overcome by having a key distribution centre with which all the participants are registered. The key distribution centre, on a request from the sender, generates a session key which is distributed to all the participants of the broadcasting mechanism. Using the session key and the encryption algorithms, data transfer is well secured in a broadcasting environment.
Chapter 8
8.1 Summary
This study demonstrates the importance of encryption of data for storage and transmission. The advantages of encrypting data manifest themselves in the form of security and confidentiality in real time applications. Encryption of data is of particular significance in applications like e-mail, e-commerce and e-cash, where highly vulnerable communication lines are used for the transmission of highly sensitive data.
The study traces the development of various encryption models in a real time environment, in all their diversity and breakthroughs, in Chapter 2. The significance of these advances and their adaptability is measured in terms of the diversity of their applications in the myriad ways that we feel in our daily lives.
Chapter 3 describes the methodology used in the developed work, which is organised as two algorithms. The first algorithm generates a sequence, followed by a model to generate sub keys, and maps the sequence or the sub keys onto plain text to generate cipher text. The second algorithm uses a key, a time stamp and a nonce value to generate sub keys which are mapped onto plain text to generate cipher text.
Section 3.2.1 builds a new block cipher algorithm. The algorithm takes a matrix key and executes a sequence of steps which generates a sequence. A block of plain text whose length equals that of the generated sequence is considered. Each character in the plain text is replaced by its numerical value plus a value from the generated sequence. The cipher text so obtained is difficult to break without knowing the key. The strength of the model is studied in terms of computing power, avalanche effect, and complexity in terms of construction and strength. A cryptanalysis of the model is also carried out to study its security.
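The substitution step described above can be sketched as follows. This is an illustrative sketch only, assuming an uppercase 26-letter alphabet and modular addition; the thesis's actual numeric mapping and the procedure that generates the sequence from the matrix key are not reproduced here.

```python
# Sketch of the Model 1 substitution step: each plaintext character's
# numeric value is shifted by the corresponding value of the key sequence.
def encrypt_block(plaintext, sequence, modulus=26):
    """Add one sequence value to each character's numeric value (mod modulus)."""
    cipher = []
    for ch, s in zip(plaintext, sequence):
        v = (ord(ch) - ord('A') + s) % modulus   # numeric value + key-stream value
        cipher.append(chr(v + ord('A')))
    return ''.join(cipher)

def decrypt_block(ciphertext, sequence, modulus=26):
    """Invert the substitution by subtracting the same sequence values."""
    plain = []
    for ch, s in zip(ciphertext, sequence):
        v = (ord(ch) - ord('A') - s) % modulus
        plain.append(chr(v + ord('A')))
    return ''.join(plain)

seq = [3, 1, 4, 1, 5]              # stands in for the sequence from the matrix key
ct = encrypt_block("HELLO", seq)   # -> "KFPMT"
assert decrypt_block(ct, seq) == "HELLO"
```

Without the matrix key, and hence without the sequence, an intruder sees only the shifted values, which is the sense in which the cipher text is difficult to break.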
Section 3.2.2 builds a new variable length key block cipher algorithm. The algorithm takes a matrix key and executes a sequence of steps which generates a sequence. Based on the similarity of values, this sequence is divided into basins. The basins with the fewest values are eliminated, and each remaining basin represents one block of data. Depending on the starting plain text character, the corresponding basin is used as the key. The procedure is repeated for subsequent plain text depending on the chosen value. The cipher text so obtained is very difficult to break without knowing the key. The strength of the model is studied in terms of computing power, avalanche effect, and complexity in terms of construction and strength. A cryptanalysis of the model is also carried out to study its security.
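The basin-forming idea can be sketched as below. The grouping rule (indices grouped by equal sequence values) and the selection rule (first plaintext character indexing a basin) are hypothetical stand-ins for the thesis's procedure, chosen only to illustrate how basins of unequal length yield variable length sub keys.

```python
# Sketch of Model 2's basins: group positions of the key sequence by value,
# then pick one basin as the variable-length sub key for a block.
from collections import defaultdict

def form_basins(sequence):
    """Group indices of the sequence by value; each group is one 'basin'."""
    basins = defaultdict(list)
    for idx, v in enumerate(sequence):
        basins[v].append(idx)
    return list(basins.values())

def select_basin(basins, first_char):
    # Hypothetical rule: the starting plaintext character indexes a basin.
    return basins[ord(first_char) % len(basins)]

seq = [1, -1, 0, 1, -1, 1, 0, -1, 1]
basins = form_basins(seq)            # three basins of unequal length
sub_key = select_basin(basins, 'A')  # basin chosen by the first character
```

Because the basins have unequal lengths, the effective key length varies from block to block, which is the property the text credits for resistance to linear cryptanalysis.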
Section 3.2.3 builds a new probabilistic stream cipher algorithm. The algorithm takes a matrix key and executes a sequence of steps which generates a sequence. Based on the similarity of values, this sequence is divided into basins. Each basin represents one character. Each character in the plain text is replaced by a set of basins based on a chosen base value, and each basin value is replaced by random values from the basins to generate the cipher text. Thus multiple cipher texts are developed for one plain text, and any one of them can be used for data transmission. The cipher text so obtained is computationally infeasible to break without knowing the key. The strength of the model is studied in terms of computing power, avalanche effect, and complexity in terms of construction and strength. A cryptanalysis of the model is also carried out to study its security. Section 3.3 builds a model which considers not only the key, but also a time stamp and a nonce value, to generate sub keys which are mapped onto plain text to generate cipher text. The strength of this model too is studied in terms of computing power, avalanche effect, complexity of construction and strength, and cryptanalysis.
Training of the developed algorithms is discussed in chapter 4. Training an encryption model involves a variable length key, a plain text and an algorithm which applies the key to the plain text to generate cipher text. This allows the data to be transmitted over the network in a form which cannot be read by any intruder. The first algorithm is trained for different keys and an analysis of the models is made; random matrix keys of size 3×3 and 4×4 are considered in the training process. The second algorithm is trained for different key values, time stamps and nonce values. By making small variations in the keys, the models are studied for their performance and working, for their increase in security, and for changes in data overhead. A model to identify garbled keys in transmission is discussed in chapter 5; it facilitates identifying and rectifying garbled keys during transmission through an unreliable network.
A comparative study of all the developed models is discussed in chapter 6. All the developed models are studied in terms of computing power, avalanche effect and complexity in terms of construction and strength. A cryptanalytic study of all the developed models is also made. Block ciphers like Model 1 and Model 2 of the first algorithm are compared with the standard block cipher DES, while Model 3 of the first algorithm and the second algorithm are compared with the stream cipher RC4.
Data encryption applications to communication networks are discussed in Chapter 7. The applications covered are mobile ad-hoc networks, wireless sensor networks and broadcasting applications. The chapter highlights the significance of data encryption to communication networks in the light of the ever growing user demand for both security and confidentiality in the transmission of data/information. This is particularly important in the face of computer access to financial transactions due to the impact of the Internet.
8.2 Conclusions
The encryption models presented in Sections 3.2.1, 3.2.2, 3.2.3 and 3.3 range from new block cipher techniques to stream cipher techniques. The first algorithm takes a matrix key and executes a sequence of steps which generates a sequence. This sequence is used to build three different encryption models.
Model 1 is a new symmetric encryption technique. It is observed that, for a given key, the total computational overhead of Model 1 is 486 calculations. The complexity of the model by its construction is O(n); by its strength it is exponential in nature. It is also observed that slight variations in the key produce large variations in the cipher text, which adds strength to the model. This algorithm is completely free from cipher text only attacks. Under the other attacks, the key may not be retrieved, but a part of the plain text may be.
Model 2 is a variable length key block cipher technique. It is observed that, for a given key, the total computational overhead of Model 2 is 1953 calculations. The complexity of the model by its construction is O(n²); by its strength it is exponential in nature. It is also observed that slight variations in the key and the plain text produce large variations in the cipher text, which adds strength to the model. This algorithm is completely free from cipher text only and known plain text attacks, but may not be completely free from chosen plain text and chosen cipher text attacks, where a part of the information may be leaked out by cryptanalysis.
Model 3 is a probabilistic stream cipher algorithm. In this model, multiple cipher texts are generated for one plain text, so identifying the relationship between plain text and cipher text is computationally infeasible. It is observed that, for a given key, the total computational overhead of Model 3 is 2025 + 81k calculations (where k is the number of computations for the random pick up of values). The complexity of the model by its construction is O(n²); by its strength it is exponential in nature. It is also observed that slight variations in the key and the plain text produce large variations in the cipher text, which adds strength to the model. This algorithm is completely free from cipher text only, known plain text, chosen plain text and chosen cipher text attacks. The model is also free from public key type attacks.
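The one-to-many property of Model 3 can be sketched as follows. The basin contents below are illustrative placeholders, not the basins the thesis derives from its matrix key; the point of the sketch is only that each plaintext symbol maps to a set of values, one of which is drawn at random, so repeated encryptions of the same plain text generally yield different cipher texts while decryption remains unambiguous.

```python
# Sketch of probabilistic substitution: each character owns a basin of
# values, and encryption draws one member of the basin at random.
import random

BASINS = {'A': [3, 12, 27], 'B': [5, 19], 'C': [8, 14, 22]}   # illustrative only
DECODE = {v: ch for ch, vals in BASINS.items() for v in vals} # basins are disjoint

def p_encrypt(plaintext, rng=random):
    return [rng.choice(BASINS[ch]) for ch in plaintext]

def p_decrypt(ciphertext):
    return ''.join(DECODE[v] for v in ciphertext)

ct1 = p_encrypt("ABC")
ct2 = p_encrypt("ABC")   # usually a different ciphertext for the same plaintext
assert p_decrypt(ct1) == p_decrypt(ct2) == "ABC"
```

Because an attacker who encrypts a guessed plain text obtains only one of many possible cipher texts, matching it against an intercepted cipher text proves nothing, which is the basis of the claimed resistance to chosen plain text probing.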
Algorithm 2 is a new stream cipher algorithm. It is observed that, for a given key, time stamp and nonce value, the total computational overhead is 500 calculations. The complexity of the model by its construction is O(nt); by its strength it is exponential in nature. It is also observed that slight variations in the key, time stamp and nonce values produce large variations in the cipher text, which adds strength to the model. This algorithm is completely free from cipher text only, known plain text and chosen plain text attacks.
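The dependence of the second algorithm's sub keys on all three inputs can be sketched as below. SHA-256 stands in for the thesis's own sub-key derivation, which is not reproduced here; the sketch shows only the consequence that the same key and plain text produce different cipher texts when the time stamp or nonce changes.

```python
# Hedged sketch: sub keys derived from key + time stamp + nonce, then
# applied to the plaintext as an additive key stream.
import hashlib

def sub_keys(key, timestamp, nonce, n):
    """Derive n key-stream bytes from the three inputs (SHA-256 as a stand-in)."""
    material = f"{key}|{timestamp}|{nonce}".encode()
    stream = hashlib.sha256(material).digest()
    while len(stream) < n:                        # extend by re-hashing if needed
        stream += hashlib.sha256(stream).digest()
    return list(stream[:n])

def encrypt(plaintext, key, timestamp, nonce):
    ks = sub_keys(key, timestamp, nonce, len(plaintext))
    return bytes((ord(c) + k) % 256 for c, k in zip(plaintext, ks))

def decrypt(ciphertext, key, timestamp, nonce):
    ks = sub_keys(key, timestamp, nonce, len(ciphertext))
    return ''.join(chr((b - k) % 256) for b, k in zip(ciphertext, ks))
```

Since the key stream changes with every time stamp and nonce, identical messages sent at different times do not produce matching cipher texts, which is what defeats pattern matching on the channel.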
In chapter 4 the developed algorithms are trained for different combinations of keys. The first algorithm is trained with 3×3 and 4×4 keys on input ternary vectors of 27 and 81 values with a base of 3, and on 100 values with a base of 4. For the input vectors of 27, 81 and 100 values, the number of basins formed varies between 3 and 6. It is observed that a larger input vector does not guarantee an increase in the number of basins, although an increase in the number of values per basin is observed. It is also observed that for certain variations in the key the model generates more basins. Thus, for a given randomly generated key, the number of basins is evaluated, and a better metric is assigned to a random key which provides a larger number of basins. Since the basins form the characters of the alphabet, the more basins there are, the more characters the alphabet has, and a combination of basins increases the number of characters further. It is also observed that an increase in the number of basins drastically reduces the data overhead. The analysis and results of the training model are illustrated in Tables 4.1 to 4.3. The second algorithm is trained for different keys, time stamps and nonce values. Large variations are observed on varying these values, which gives sufficient strength to the algorithm. In both algorithms, the strength lies in the non-uniformity of values in the generated sequence. The sequences generated by both algorithms are subjected to various statistical tests, such as uniformity testing, universal testing and repetition tests, and the results prove that they withstand these tests. The results are shown in Figs. 4.1 and 4.2. A model to identify garbled keys in transmission is discussed in chapter 5. This mechanism uses the same algorithm for the generation of sub keys and for the generation of the sequence from which a hash value is computed. This hash value is appended to the key when it is transmitted from the key distribution centre. If any error occurs in the key during transmission, there will be a mismatch between the received hash value and the recomputed hash value, and the participating parties then ask for retransmission of the key. This facilitates identifying and rectifying garbled keys during transmission through an unreliable network.
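The garbled-key check of chapter 5 can be sketched as below, with SHA-256 standing in for the thesis's sequence-based hash: the key distribution centre appends a hash to the key, and each receiver recomputes it to detect transmission errors.

```python
# Sketch of the garbled-key detection mechanism: key || hash(key) is sent,
# and a hash mismatch at the receiver triggers a retransmission request.
import hashlib

def package_key(key: bytes) -> bytes:
    """Append the hash of the key before sending it from the KDC."""
    return key + hashlib.sha256(key).digest()

def verify_key(packet: bytes):
    """Return the key if the appended hash matches, else None (retransmit)."""
    key, received_hash = packet[:-32], packet[-32:]
    if hashlib.sha256(key).digest() != received_hash:
        return None          # mismatch: the key was garbled in transit
    return key
```

Note this detects accidental corruption on an unreliable channel; protecting the hash against deliberate tampering would additionally require a keyed construction.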
A comparative study of all the developed models is presented in chapter 6. All the models are compared in terms of computing power, avalanche effect, and complexity in terms of construction and strength, and a cryptanalysis of the models is also undertaken. The summary of the study is presented in Table 6.4. The models are compared with the standard algorithms DES and RC4, and the results are shown in Tables 6.5.1 and 6.5.2.
In applications like mobile ad-hoc networks, wireless sensor networks and broadcasting applications, the new symmetric encryption models, and in particular the probabilistic encryption mechanism, could well be used. The proposed models can also be used as a key encapsulation mechanism (KEM) along with public key cryptography, which is applicable in secure electronic transactions and digital cash.
The conclusions drawn above, reflecting the overall security and confidentiality of transmitted data, confirm the improvement in the efficiency of data transmission. The methodology used in the dissertation can be used for evaluating new encryption algorithms in terms of multiple parameters. Further, the quantitative data indicates the relationship between the random key considered, the sub keys (basins) generated, the computational power needed by the first algorithm, and the strength and security of the algorithm. It also identifies the importance of the multiple parameters, keys, time stamps and nonce values, used in the second algorithm in terms of its security and strength.
In the case of substitution encryption algorithms, an advantage of the DES algorithm is its low computational power, and this advantage is shared by the developed models, which give almost equal security at a lower computational overhead (computing power). As the security of an encryption model is directly related to the key length [3], the longer the key, the more secure the algorithm; but this parameter also increases the computational overhead (computing power) of the encryption algorithm. The security of the developed models is relatively independent of the key length, which gives more flexibility regarding computing power.
Another conclusion from the above study is freedom from public key attacks. With the probabilistic encryption algorithm (Model 3), a cryptanalyst can no longer encrypt random plain texts looking for the correct cipher text. Since multiple cipher texts are developed for one plain text, even if he decrypts the message to plain text, he does not know how far he has guessed the message correctly. Under this scheme, different cipher texts are formed for one plain text; also, the cipher text will always be larger than the plain text.
Thus the study discusses the distinguishing features of the two encryption algorithms and their models in terms of their applications. The first algorithm takes a random matrix key which is multiplied with a ternary vector; a sign function is applied to the product to generate a sequence. This sequence is used to build three different encryption models, each of which can be used for the encryption of data. Model 1 and Model 2 are based on the block cipher technique, and Model 3 is a stream cipher. In Model 1, the generated sequence is used as a sub key which is mapped onto the alphanumerical equivalent of the plain text to generate cipher text. The computational power of the developed model is low, and the model uses a sign function which makes it relatively free from differential cryptanalysis. In Model 2, the generated sequence is divided into basins based on equal values. These basins contain unequal numbers of values, and based on a chosen value the corresponding basin is used as the sub key. Since these basins can be considered variable length keys, the model is relatively free from linear cryptanalysis as well. Model 3 is based on a probabilistic encryption scheme in which multiple cipher texts are generated for one plain text and any one of them can be used for the transmission of data. Since the model is probabilistic in nature, it is free not only from differential and linear cryptanalysis but also from chosen cipher text attacks; the model is also free from public key attacks. The second algorithm considers not only the key but also time stamps and an initialisation vector to generate a sequence which is used as a sub key to generate cipher text. The study outlines the components of these algorithms and their strength in a real time environment.
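The sequence-generation step common to the first algorithm's three models can be sketched as follows. The 3×3 key below is an arbitrary example, and the enumeration of all 27 ternary vectors is one plausible reading of "an input ternary vector of 27 values"; the thesis's exact iteration rule may differ.

```python
# Illustrative sketch: multiply a matrix key with ternary vectors and apply
# a sign function to each component of the product to build the key sequence.
def sign(x):
    return 1 if x > 0 else (-1 if x < 0 else 0)

def generate_sequence(key, ternary_vectors):
    seq = []
    for vec in ternary_vectors:
        product = [sum(k * v for k, v in zip(row, vec)) for row in key]
        seq.extend(sign(p) for p in product)    # keep only the signs
    return seq

key = [[2, -1, 3], [0, 4, -2], [-3, 1, 1]]      # example 3x3 matrix key
vectors = [(a, b, c) for a in (-1, 0, 1)
                     for b in (-1, 0, 1)
                     for c in (-1, 0, 1)]       # all 27 ternary vectors
sequence = generate_sequence(key, vectors)      # 27 vectors x 3 rows = 81 signs
```

The sign function discards the magnitudes of the products, so the sequence reveals little about the key entries, which is consistent with the resistance to differential cryptanalysis claimed above.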
This process of improvement has been spurred on by the ever-growing user demand for security in the transmission (and storage) of information. The post-World War II period ushered in a new multifaceted era of advances in science and technology, the emergence of multinational corporations, and the globalization of economic activity. This change has put, and will continue to put, heavy user demand on the security of data in communication systems and networks. Increased global competition in e-commerce and other applications underlines the need for secure communication. For data storage and transmission, encryption becomes imperative for security and confidentiality.
Chapter 9
FUTURE WORK
The present work deals with plain text represented by numerals and the characters of the English alphabet. The work can be extended to support the characters of other languages as well, and to support not only text but also other forms of message transmission such as audio, video and images.
REFERENCES
1. Amioy Kumar, Ajay Kumar: Development of a New Cryptographic Construct Using Palmprint-Based Fuzzy Vault, EURASIP Journal on Advances in Signal Processing, Vol 21, pp 234-238, 2009.
2. Baocang Wang, Qianhong Wu, Yupu Hu: A Knapsack Based Probabilistic Encryption Scheme, On Line March 2007, www.citeseer.ist.psu.edu.
3. BlueKrypt 2009: Cryptographic Key Length Recommendations, http://www.keylength.com
4. Blum L., Blum M , Shub M. : A simple unpredictable pseudo random number generator , SIAM J. compute , 1986, 15, (2), pp 364-383.
5. Brics: Universally comparable notions of key exchange and secure channels, Lecture Notes in Computer Science, Springer, Berlin, March 2004.
7. Brassard G.: Modern Cryptology: A Tutorial, Lecture Notes in Computer Science, Vol 325, Springer-Verlag.
8. Bruce Schneier: Applied Cryptography, John Wiley & Sons (Asia) Pvt. Ltd.
9. Caroline Fontaine, Fabien Galand: A Survey of Homomorphic Encryption for Non-specialists, EURASIP Journal on Information Security, Vol 2007, Article 10.
10. Donavan G.Govan, Nathen Lewis: Using Trust for Key Distribution & Route Selection in Wireless Sensor Networks, International Conference on Network Operations & Management, IEEE Symposium 2008, PP 787-790.
11. Dorothy E. Denning et al.: Time Stamps in Key Distribution Protocol, Communication of ACM, Vol 24, Issue 8, Aug 1981, pp 533-536.
12. E.C.Park, I.F.Blake: Reducing communication overhead of Key Distribution Schemes for Wireless Sensor Networks: Computer Communications & Networks, ICCCN 2007, pp 1345-1350.
14. Guo D., Cheng L.M., Cheng L.L.: A New Symmetric Probabilistic Encryption Scheme Based on Chaotic Attractors of Neural Networks, Applied Intelligence, Vol 10, No. 1, Jan 1999, pp 71-84.
15. Hamid Mirvaziri, Kasmiran Jumari, Mahamod Ismail, Zurina Mohd. Hanapi: Message Based Random Variable Length Key Encryption Algorithm, Journal of Computer Science, pp 573-578, 2009.
16. Hianyi Hu, Gufen Znu, Guanning Xu: Secret Scheme for Online Banking based on Secret key Encryption, Second International Workshop on Knowledge Discovery & Data Mining, Jan 23-25 2009.
17. Henry Beker and Fred Piper: Cipher Systems (Northwood Books, London, 1982).
18. William Stallings: Cryptography and Network Security (Pearson Education, Asia, 1998).
19. Kaiping Xue: Study of Improved Key Distribution Mechanisms Based on Two-Length Structure for Wireless Sensor Networks, International Conference on Advanced Information Technology, 2008.
20. Krishna A.V.N.: A New Algorithm in Network Security, International Conference Proc. of CISTM-05, 24-26 July 2005, Gurgaon, India.
21. Krishna A.V.N., Vishnu Vardhan.B.: Utility and Analysis of some Encryption algorithms in E learning environment, International Convention Proc. Of CALIBER 2006, 02-04 Feb. 2006, Gulbarga, India.
22. Krishna A.V.N., S.N.N. Pandit: A New Algorithm in Network Security for Data Transmission, Acharya Nagarjuna International Journal of Mathematics and Information Technology, Vol 1, No. 2, 2004, pp 97-108.
23. Krishna A.V.N., S.N.N. Pandit, A. Vinaya Babu: A Generalized Scheme for Data Encryption Technique Using a Randomized Matrix Key, Journal of Discrete Mathematical Sciences & Cryptography, Vol 10, No. 1, Feb 2007, pp 73-81.
24. Krishna A.V.N., A. Vinaya Babu: Web and Network Communication Security Algorithms, Journal on Software Engineering, Vol 1, No. 1, July 2006, pp 12-14.
25. Krishna A.V.N., A. Vinaya Babu: Pipeline Data Compression & Encryption Techniques in E-learning Environment, Journal of Theoretical and Applied Information Technology, Vol 3, No. 1, Jan 2007, pp 37-43.
26. Krishna A.V.N., A. Vinaya Babu: A Modified Hill Cipher Algorithm for Encryption of Data in Data Transmission, Georgian Electronic Scientific Journal: Computer Science and Telecommunications, 2007, No. 3(14), pp 78-83.
27. Lester S. Hill: Cryptography in an Algebraic Alphabet, The American Mathematical Monthly, Vol 36, June-July 1929, pp 306-312.
28. Lester S. Hill: Concerning Certain Linear Transformation Apparatus of Cryptography, The American Mathematical Monthly, Vol 38, 1931, pp 135-154.
29. Maybee J.S. (1981): Sign Solvability, Proceedings of the First Symposium on Computer Assisted Analysis and Model Simplification, Academic Press, NY.
31. Pandit S.N.N (1963): Some quantitative combinatorial search problems. (Ph.D. Thesis).
32. Pandit S.N.N. (1961): A New Matrix Calculus, J. Soc. Ind. and Appl. Math., pp 632-637.
33. Pei-Yih Ting: A Temporal Order Resolution Algorithm in the Multi-Server Time Stamp Service Framework, International Conference on Advanced Information Networking & Applications, AINA 2005, Vol 2, 28-30 March 2005, pp 445-448.
34. Phillip Rogaway: Nonce-Based Symmetric Encryption, www.cs.ucdavis.edu/rogeway.
35. R.S. Thore, D.B. Talange: Security of Internet to Pager E-mail Messages (Internet for India 1997, IEEE Hyderabad Section), pp 89-94.
37. R.H. Rahnan, N. Nowsheen: A New Symmetric Key Distribution Protocol Using Centralized Approach, Asian Journal of Information Technology, 2007, 6(8), pp 911-915.
38. Suhas V. Patankar: Numerical Heat Transfer and Fluid Flow, pp 11-75 (1991).
39. Terry Ritter: Substitution Cipher with Pseudo-Random Shuffling, Cryptologia, Volume XIV, Issue 4, Oct 1990, www.portal.acm.org.
40. wikipedia.org/wiki/Mobile-adhoc-network
41. wikipedia.org/wiki/sensor_network
42. Wireless Sensor Networks, edited by C.S. Raghavendra, Krishna M. Sivalingam, Taieb Znati.