
International Journal of Emerging Trends & Technology in Computer Science (IJETTCS)

Web Site: www.ijettcs.org, Email: editor@ijettcs.org, editorijettcs@gmail.com, Volume 2, Issue 5, September-October 2013, ISSN 2278-6856

Studies on the advancements of Neural Networks and Neural Network Based Cryptanalytic Works
SambasivaRao Baragada, Member IEEE and P. Satyanarayana Reddy, Member IEEE
Government Degree College, Khairatabad, Hyderabad, India.

Abstract: A neural network can be considered a cryptanalytic tool by treating the task of breaking a cryptosystem as a pattern classification problem. The possibility of employing neural networks to identify cipher systems from ciphertexts has been explored extensively. This paper reports the advancements made in neural networks and outlines the key ideas of some notable works on cryptanalysis using neural networks pursued during the last ten years.

Keywords: Cryptanalysis, Neural networks

1. NEURAL NETWORKS AND THEIR ADVANCEMENTS

An artificial neural network (ANN) mimics some features of a real nervous system and contains a collection of basic computing units called neurons. Such a model shows a strong resemblance to the axons and dendrites of a nervous system. Robustness, flexibility and collective computation are the attractive features of this model, due to its self-organizing and adaptive nature. The model can also deal with a variety of data situations and can be more user-friendly than traditional approaches. The dynamics of the nodes of these networks resemble differential equations. The connections between nodes can either be inter-connections between layers or intra-connections within the same layer. The activation value fed to the nodes of a successive layer is computed from the connection weights and the outputs of the previous layer, and is then passed through a non-linear function. A hard-limiting non-linearity is used if the vectors are binary or bipolar, and a squashing function is chosen if the vectors are analog in nature. Popular squashing functions are the sigmoid (0 to 1), tanh (-1 to +1), Gaussian, logarithmic and exponential functions. A network can be either discrete or analog. A neuron of a discrete network is associated with two states, whereas a neuron of an analog network produces a continuous output. A discrete network is synchronous when the state of every neuron in the network is updated at once, and asynchronous when only one neuron is updated in a given time period. A feed-forward network provides input to the next layer, with no closed chain of dependence among neural states through the set of connection strengths or weights; when such a chain is closed, the network is a feedback network. When the output of the network depends upon the current input,

the network is static (no memory). If the output of the network depends upon past inputs or outputs, the network is dynamic (recurrent). If the interconnections among neurons change with time, the network is adaptive; otherwise it is called non-adaptive. Updating of the connection strengths of a network can be done with fixed-weight association methods, supervised methods, and unsupervised methods. For fixed-weight association networks, the weights are pre-computed. Supervised methods consider both input and output for the weight update, whereas unsupervised methods use only the input. A complete pattern recognition system is composed of an instantiation space, selection of patterns, and training and testing of the network. The development of artificial neural networks was first reported in the early forties by McCulloch and Pitts [1]. A neuron is said to fire if the sum of its excitatory inputs reaches its threshold; this state remains as long as the neuron receives no inhibitory input. The model proposed by McCulloch and Pitts can be used to construct a network with the ability to compute any logical function. Rosenblatt found the McCulloch-Pitts model [1] to be biologically implausible. To overcome its deficiencies, he proposed a new model, the perceptron, which could learn and generalize. Further, he investigated several mathematical models, which included competitive learning or self-organization, and forced learning, which is somewhat similar to reinforcement learning. In addition to these two types of learning, the concept of supervised learning was developed and incorporated in the adaptive linear element model (ADALINE), developed by Widrow et al. [2]. The ADALINE is a single linear neuron that descends the gradient of the error using supervised learning, and it is useful for discriminating patterns that are linearly separable.
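As a minimal sketch of the Widrow-Hoff gradient descent described above (the toy AND task, learning rate and epoch count are our own illustrative choices, not taken from [2]):

```python
# Hypothetical minimal ADALINE trained with the Widrow-Hoff (LMS) rule
# on a linearly separable toy problem (logical AND with bipolar targets).
def train_adaline(samples, targets, lr=0.1, epochs=50):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            y = w[0] * x[0] + w[1] * x[1] + b   # linear output, no squashing
            err = t - y                          # error drives the gradient step
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

samples = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
targets = [-1, -1, -1, 1]                        # bipolar AND
w, b = train_adaline(samples, targets)
# Hard-limit the trained linear output to classify
preds = [1 if w[0] * x0 + w[1] * x1 + b >= 0 else -1 for x0, x1 in samples]
print(preds)
```

Because the task is linearly separable, the signs of the trained linear outputs match the targets.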
The concept of multi-layer ADALINEs, or the multi-layer network, was developed for patterns which are not linearly separable. The training of the multi-layer network was first explained by Werbos [3] as the backpropagation algorithm (BPA) in his Ph.D. dissertation, but his work did not become popular. Rumelhart and his group [4] published Parallel Distributed Processing, a two-volume collection of studies on a broad variety of neural network configurations. Through these books, the backpropagation algorithm became popular for training multi-layer networks. Lippmann [5] summarized the different algorithms in his tutorial paper and made neural networks still more popular. Much work has been carried out with respect to the number of hidden layers, the number of nodes in a hidden layer, methods of presenting the patterns, training the network with initial random weights over different ranges, the types of error criteria used, and the selection of patterns. Even though the training procedure for a neural network is problem-oriented, one hidden layer is sufficient for most of the problems solved by supervised training. Sietsma and Dow [6] analyzed various training strategies with more than one hidden layer and concluded that one hidden layer was sufficient, while Chester [7] claimed better performance for a network with two hidden layers. The number of nodes in a hidden layer should be neither too many nor too few. Too many nodes in the hidden layer will result in oscillation of the mean squared error (MSE) around a particular value without convergence, or the network will sometimes converge to a local minimum. Similarly, too few nodes in the hidden layer may suffice only to learn the training patterns, without the network being able to generalize. A way to find the optimum number of nodes in a hidden layer is therefore needed. Golea and Marchand [8] used a heuristic algorithm to estimate the number of nodes in a hidden layer, and Hirose et al. [9] adopted a different approach, using an algorithm based on the MSE to estimate the same. To side-step the difficulty of choosing the number of hidden nodes, Weymaere et al. [10] used Gaussian functions in the hidden nodes and sigmoid functions in the output nodes. Fujita [11] analyzed the hidden unit function.
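As a hedged sketch of backpropagation training on a one-hidden-layer network (a generic illustration with made-up data and sizes, not any cited paper's configuration), the chain-rule weight updates drive the MSE down:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A 2-input, 2-hidden-node, 1-output network with random initial weights.
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
W2 = [random.uniform(-1, 1) for _ in range(2)]

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    y = sigmoid(sum(w * hi for w, hi in zip(W2, h)))
    return h, y

def mse(data):
    return sum((t - forward(x)[1]) ** 2 for x, t in data) / len(data)

# Toy supervised training set (inputs, target).
data = [((0, 1), 1.0), ((1, 0), 1.0), ((1, 1), 0.0)]
before = mse(data)
lr = 0.3
for _ in range(200):                      # repeated gradient-descent epochs
    for x, t in data:
        h, y = forward(x)
        d_out = (y - t) * y * (1 - y)     # output-layer delta
        for j in range(2):
            d_hid = d_out * W2[j] * h[j] * (1 - h[j])   # hidden delta, chain rule
            for i in range(2):
                W1[j][i] -= lr * d_hid * x[i]
            W2[j] -= lr * d_out * h[j]
after = mse(data)
print(round(before, 4), round(after, 4))
```

After 200 epochs the mean squared error on the toy set is lower than at initialization.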
In most supervised training methods, the patterns are presented in a pre-determined sequence in a cycle, and normally the order of presentation is maintained across all cycles. Ridgway [12] found in his thesis that cyclic presentation of patterns could lead to cyclic adaptation; these cycles would cause the weights of the entire network to cycle, preventing convergence. Various error criteria were tried by Zakai [13] for better convergence of the network. Quantization of the weights and training with the BPA were analyzed by Shoemaker et al. [14], and an analysis of the BPA with respect to mean weight behavior was done by Bershad et al. [15]. In reality, most patterns are not linearly separable, and non-linear classifiers are used for pattern classification in order to achieve good separability. The multi-layer network is a non-linear classifier, since it uses hidden layers. In addition to the multi-layer network, the polynomial discriminant function (PDF) is also a non-linear classifier; in the PDF, the input vector is pre-processed, similar to the suggestions of Specht [16]. Normally, neural networks are used to classify patterns by learning from samples. Different neural network paradigms employ different learning rules, but in some way all these paradigms determine different pattern statistics from a set of training samples; the network then classifies new patterns on the basis of these statistics.
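The cyclic-adaptation issue Ridgway observed is commonly avoided by re-shuffling the presentation order at the start of each epoch; a minimal sketch (the integer "patterns" are stand-ins for real training vectors):

```python
import random

# Rather than presenting patterns in the same fixed cycle every epoch,
# draw a fresh presentation order per epoch so the weight updates do not
# lock into a repeating cycle.
patterns = list(range(8))        # stand-in for 8 training patterns
rng = random.Random(42)          # seeded only for reproducibility

def presentation_orders(epochs):
    orders = []
    for _ in range(epochs):
        order = patterns[:]      # copy, so the dataset itself is untouched
        rng.shuffle(order)       # new order each cycle
        orders.append(order)
    return orders

for epoch, order in enumerate(presentation_orders(3)):
    print(epoch, order)
```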

2. RECENT WORKS OF NEURAL NETWORK BASED CRYPTANALYSIS


Early attempts were made by Liew and De Silva [17] in applying multi-layer perceptron (MLP) neural networks to symmetric block ciphers. In their work, the MLP network decides on the encryption algorithm depending on the secret key employed; the authors focused on developing a novel mutant algorithm, hard to break, for generating block ciphers. The length of each cipher block is fixed at 64 bits, while the length of the secret key can be either fixed or variable. The MLP network adds strength to the secret key material during the rounds of encryption, and the secret key and the embedding algorithm are strongly coupled. The basic building blocks of each cipher are generated by operations such as XOR, addition modulo, multiplication modulo, Feistel swapping and shifting. The designed MLP network consists of three layers, viz. an input layer, a hidden layer and an output layer, each with 64 processing nodes. Encryption is done in two rounds: the secret key is provided as input in the first round, and the iterative second round derives the required cipher blocks. The proposed algorithm resists differential and linear attacks; even so, if the nature of the algorithm is kept open, it can itself be prone to reverse engineering for identifying vulnerabilities. Chengqing Li et al. [18, 19, 20] are considered pioneering researchers in cryptanalysis who critically investigated potential cryptographic works proposed by other researchers of their time. In [18, 19], they performed cryptanalysis of the chaotic neural network (CNN) encryption scheme originally proposed by Yen and Guo to protect images and videos. Yen and Guo had claimed that ciphers generated using the CNN are strong and less prone to cryptanalytic attacks. After rigorous experiments, Chengqing Li et al. identified that the CNN can easily be broken by known/chosen-plaintext attacks, and that the security of the CNN against brute-force attack is over-estimated. Finally, they provided a positive set of suggestions to improve the CNN for better encryption. Later they empirically analyzed the security concerns of the multistage encryption system (MES) [20], which was proposed during ISCAS 2004. Ibrahim and Maarof [21] reviewed significant works available at the time dealing with the biologically inspired computation (BIC) paradigm in the field of cryptology. They classified the BIC works into three classes, viz. genetic algorithm (GA), artificial neural network (ANN),

and artificial immune system (AIS). They stated that ANN can be leveraged for both cryptanalysis and key exchange; the neural key-exchange protocol does not rely on number theory but on mutual learning, that is, the synchronization of neural networks. Chandra and Varghese [22] employed a cascade correlation neural network and a back propagation network to investigate the nature of cipher systems. The back propagation algorithm was chosen because it is simple to construct and train, while cascade correlation neural networks were used to construct a network with a varying number of nodes and layers during the learning phase. Their investigations were pursued with a large collection of ciphertexts obtained from Enhanced RC6 and the stream cipher SEAL. The input ciphertexts were categorized into three datasets: Dataset 1, a single key with different plaintext messages; Dataset 2, the same set of plaintext messages with different sets of keys; and Dataset 3, different sets of plaintext messages with different sets of keys. Even with the increased complexity of the input datasets, they found that the cascade correlation model performed better than similar works, and better than the back propagation network as well. The process of identifying vulnerabilities of significant cryptographic methods is not limited to Chengqing Li et al.; several other researchers, such as Yang et al. [23], have also worked on breaking or investigating potential cryptographic systems for their limitations. In 2006, Yu et al. presented a new cryptographic scheme based on delayed chaotic neural networks. This scheme was successfully attacked by Yang et al., who concluded that the secret key stream can be recovered with a chosen-plaintext attack. Culibrk et al. [24] likewise worked on breaking the non-deterministic symmetric block cipher proposed by Guo, Cheng and Cheng.
This non-determinism was introduced by using a Hopfield neural network, which exhibits stochastic errors in convergence, while generating ciphers. The authors reduced this deciphering task to a simple mathematical problem: finding a matrix which acts as a permutation matrix for a given set of conjugate matrices. In the paper, the authors give both proofs, in the form of theorems, and applicable examples. Their work showed a drastic reduction of the key space during cryptanalysis, which is a serious concern in designing cryptosystems. In 2006, Xiang et al. [25] and Yu et al. [26] proposed cryptosystems with encryption schemes that combine circular bit shift and XOR operations; each cryptosystem is controlled by a chaotic system that generates a pseudo-random bit sequence (PRBS) during encryption. Li et al. [27] pursued cryptanalysis of these two schemes and concluded with three significant findings from their experiments: i) security defects exist in both schemes, such as insufficient randomness, inadequate sub-keys and low sensitivity of the encryption; ii) two chosen plaintexts are enough to reconstruct the PRBS as an equivalent key; and iii) two known plaintexts are adequate to carry out a differential known-plaintext attack recovering most of the chaotic PRBS elements. The authors demonstrated both the chosen-plaintext and the differential known-plaintext attack through theoretical analysis and examples. They also reformulated the works in [25, 26] in a simple manner before pursuing their own work, and presented the relevant properties of the operations employed. Nalini and Raghavendra [28] leveraged a couple of novel cryptanalysis schemes: the application of thermostatistical persistence to simulated annealing, and the particle swarm optimization (PSO) principle. From their experimental findings they identified that these two schemes couple strongly with optimization heuristics when pursuing cryptanalysis. Annealing refers to the process of cooling a heated metal slowly in order to reach a minimum energy state. In the simulated annealing cryptanalysis attack, the secret key is represented as a string of N characters over the given alphabet, and swapping the key elements at two randomly selected positions gives the minimal perturbation. The principle of PSO can be understood through bird flocking: when a group of birds searches for food in an area, the best and simplest strategy is to follow the bird nearest to the food, which is called a particle. In this work, each particle represents a key; since the authors worked on DES, which uses a 7-byte key, the search space is 7-dimensional. The particles move through the search space dynamically in response to external events, and each particle is associated with a fitness value (or function). Such heuristic principles can be applied in various fields such as ANNs, function optimization and fuzzy systems. Dileep et al. [29] explored the use of function approximation with feed-forward multi-layer neural networks (FFMNN) and support vector regression to pursue cryptanalysis of Feistel-type block ciphers. Their main intent was to recover the plaintext, without knowledge of the secret key, through a hetero-association model (HAM). Support vector machines were explored further in the work because the neural networks captured the association between plaintexts and ciphertexts poorly for DES in electronic code book (ECB) mode.
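The simulated-annealing key search of Nalini and Raghavendra [28] described above can be sketched roughly as follows. The fitness function here is a stand-in (Hamming distance to a hidden "true" key, for illustration only); a real attack would instead score candidate decryptions, e.g. by n-gram statistics:

```python
import math
import random

random.seed(1)

# The candidate key is a string of N characters; the minimal perturbation
# swaps two randomly chosen positions, as described in the text above.
ALPHABET = "ABCDEFGH"
TRUE_KEY = list("HGFEDCBA")          # hypothetical target, for illustration

def cost(key):
    # Stand-in fitness: positions where the candidate disagrees with the target.
    return sum(a != b for a, b in zip(key, TRUE_KEY))

def anneal(steps=5000, temp=3.0, cooling=0.999):
    key = list(ALPHABET)             # arbitrary starting permutation
    c = cost(key)
    for _ in range(steps):
        i, j = random.sample(range(len(key)), 2)
        key[i], key[j] = key[j], key[i]          # swap perturbation
        c_new = cost(key)
        # Accept downhill moves always, uphill moves with Boltzmann probability.
        if c_new <= c or random.random() < math.exp((c - c_new) / temp):
            c = c_new
        else:
            key[i], key[j] = key[j], key[i]      # undo the swap
        temp *= cooling                          # slow cooling schedule
    return "".join(key), c

best, final_cost = anneal()
print(best, final_cost)
```

With the slow cooling schedule the search ends essentially greedy, and because any mismatched permutation admits an improving swap, the final cost drops to (or near) zero on this toy landscape.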
From their research findings, Dileep et al. observed that the support-vector regression model successfully recovered the plaintext for ciphertexts obtained using the same key and the same plaintexts used during training of the network; complete decryption was not possible due to the complexity of DES. Khaled et al. [30] developed a mathematical black-box model, the neuro-identifier, to pursue cryptanalysis. The neuro-identifier is constructed with a combination of system identification techniques and adaptive system techniques, aimed at attacking cipher systems. Such neuro-identifiers are typically multi-layer feed-forward neural networks with an input (buffer) layer, hidden (non-linear) layers and an output (linear or non-linear) layer. Using this identifier, the secret key is determined from trained plaintext-ciphertext pairs, and the Levenberg-Marquardt (LM) algorithm is applied to train the network. The authors focused on building a generalized system to attack most cipher systems, though it might still fail on some specific cipher systems due to the lack of adaptive learning. Sivagurunathan et al. [31] employed back-propagation neural networks to classify substitution ciphers, viz. Playfair, Vigenere and Hill ciphers. The ciphertexts were tailored to a fixed size, say 1 KB, during the experimental investigations, and both tan-sigmoid and log-sigmoid transfer functions were used during training. Significant features extracted during training include the number of alphabetic characters in the ciphertext, the number of adjacent duplicates, and the distances between digram duplicates. Two phases of testing were followed: different password lengths with the same plaintext, and the same password length with different plaintexts. The experimental findings reveal that Playfair ciphers are correctly classified irrespective of the length of the password and plaintext, while significant misclassifications occur when classifying Hill ciphers. This may be due to the reduction in the number of repeated adjacent duplicate characters as the password length increases. Alani [32] presented a novel cryptanalytic attack on DES: a known-plaintext attack based on a neural network which recovers the plaintext without estimating the key employed during encryption. Based on his experiments, he estimated that the cryptanalytic attack takes 51 minutes on average using 2^11 plaintext-ciphertext pairs.
His work compares favorably with earlier attacks in terms of the number of known plaintexts required. Fatih et al. [33] successfully revealed the secret parameters by carrying out a detailed investigation of the Cokal and Solak attacks. Their actual intent was to break the cryptographic system proposed by Bigdeli et al., a chaos-based encryption method using a Hopfield neural network. The computational complexity of their attack is O(n), where n = 2 is the number of randomly selected cipher/plain image pairs. Finally, the authors suggest paying close attention to the fundamental criteria in designing cryptosystems. Swetha and Megha [34] applied a back-propagation neural network to attack Feistel ciphers in order to explore the solution space of the ciphers, using substitution during the cryptanalytic attack. The authors computed the key value using the mathematical formulation of the Feistel structure on ciphertext values; this key acts as the weight of the edges in the neural network used to compute the network's output. Their procedure works in the reverse direction, because Feistel ciphers are composed of two blocks (left and right), and this reverse order yields the actual plaintext, the task actually performed by the trained neural network. SambasivaRao Baragada and P. Satyanarayana Reddy [35] have listed the key ideas of high-impact cryptanalytic works combined with machine learning; interested readers can study their work along with this paper for a broad view of the advancements of cryptanalysis in recent times.
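The ciphertext features used by Sivagurunathan et al. [31] above can be sketched roughly as follows (the feature names and the choice of a mean gap as the digram-distance summary are our own illustrative assumptions):

```python
from collections import defaultdict

def extract_features(ciphertext):
    # Number of alphabetic characters in the ciphertext.
    letters = sum(c.isalpha() for c in ciphertext)
    # Number of adjacent duplicate characters (e.g. "XX").
    adjacent_dups = sum(a == b for a, b in zip(ciphertext, ciphertext[1:]))
    # Distances between repeated digrams: record every digram's positions,
    # then take the gaps between consecutive occurrences.
    positions = defaultdict(list)
    for i in range(len(ciphertext) - 1):
        positions[ciphertext[i:i + 2]].append(i)
    digram_gaps = [q - p
                   for idx in positions.values() if len(idx) > 1
                   for p, q in zip(idx, idx[1:])]
    return {"letters": letters,
            "adjacent_duplicates": adjacent_dups,
            "digram_gap_mean": sum(digram_gaps) / len(digram_gaps)
                               if digram_gaps else 0.0}

print(extract_features("ABABXXAB"))
```

For "ABABXXAB" the digram "AB" occurs at positions 0, 2 and 6, giving gaps 2 and 4 and a mean gap of 3.0; such feature vectors would then be fed to the back-propagation classifier.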

3. SUMMARY
Neural networks and their advancements were presented in the first part of the paper. The article then reviewed some notable cryptanalysis works using neural networks from the last ten years. A clear paradigm shift toward neural networks can be observed in the research proposals of various researchers. Chengqing Li, Culibrk et al. and others worked rigorously to investigate the vulnerabilities of various potential encryption schemes. These cryptanalysis attacks using neural networks are not confined to one setting and are continuously being improved.

ACKNOWLEDGEMENTS
The authors wholeheartedly express their thanks to the Principal and staff of Government Degree College, Khairatabad, Andhra Pradesh, India for their continuous support in pursuing this research work.

REFERENCES
[1] F. Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, Washington D.C.: Spartan Books, 1962.
[2] B. Widrow, An Adaptive Adaline Neuron Using Chemical Memistors, Stanford Electronics Laboratories Technical Report 1553-2, October 1960.
[3] P.J. Werbos, Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences, Ph.D. thesis, Harvard University, 1974.
[4] D.E. Rumelhart, G.E. Hinton, and R.J. Williams, "Learning internal representations by error propagation," Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. 1, pp. 318-362, 1986.
[5] W.Y. Huang and R.P. Lippmann, "Comparisons between neural net and conventional classifiers," 1st IEEE International Conference on Neural Networks, San Diego, Vol. IV, pp. 485-493, June 1987.
[6] J. Sietsma and R.J.F. Dow, "Creating artificial neural networks that generalize," Neural Networks, Vol. 4, No. 1, pp. 67-79, January 1991.
[7] D.L. Chester, "Why two hidden layers are better than one," International Joint Conference on Neural Networks, Lawrence Erlbaum, Vol. 1, pp. 265-268, January 1990.
[8] M. Golea and M. Marchand, "A growth algorithm for neural network decision trees," Europhysics Letters, Vol. 12, pp. 205-210, June 1990.
[9] Y. Hirose, K. Yamashita, and S. Hijiya, "Back-propagation algorithm which varies the number of hidden units," Neural Networks, Vol. 4, No. 1, pp. 61-66, January 1991.
[10] N. Weymaere and J.-P. Martens, "A fast and robust learning algorithm for feedforward neural networks," Neural Networks, Vol. 4, No. 3, pp. 361-369, June 1991.
[11] O. Fujita, "Optimization of the hidden unit function in feedforward neural networks," Neural Networks, Vol. 5, No. 5, pp. 755-764, September 1992.
[12] W.C. Ridgway, An Adaptive Logic System with Generalizing Properties, Stanford Electronics Laboratories Technical Report 1556-1, prepared under Air Force Contract AF 33(616)-7726, Stanford University, April 1962.
[13] M. Zakai, "General error criteria," IEEE Transactions on Information Theory, Vol. 10, No. 1, pp. 94-95, January 1964.
[14] P.A. Shoemaker, "A note on least-squares learning procedures and classification by neural network models," IEEE Transactions on Neural Networks, Vol. 2, No. 1, pp. 158-160, January 1991.
[15] J.J. Shynk and N.J. Bershad, "Stationary points of a single-layer perceptron for nonseparable data models," Neural Networks, Vol. 6, No. 2, pp. 189-202, February 1993.
[16] D.F. Specht, "A general regression neural network," IEEE Transactions on Neural Networks, Vol. 2, No. 6, pp. 568-576, November 1991.
[17] Liew Pol Yee and Liyanage C. De Silva, "Application of multilayer perceptron networks in symmetric block ciphers," Proceedings of the 2002 International Joint Conference on Neural Networks, Vol. 2, pp. 1455-1458, 2002.
[18] Chengqing Li, Shujun Li, Dan Zhang, and Guanrong Chen, "Cryptanalysis of a chaotic neural network based multimedia encryption scheme," Advances in Multimedia Information Processing - PCM 2004 Proceedings, Part III, Lecture Notes in Computer Science Vol. 3333, pp. 418-425, Springer-Verlag Berlin Heidelberg, 2004.
[19] Chengqing Li, Shujun Li, Dan Zhang, Xiaofeng Liao, and Guanrong Chen, "Chosen-plaintext cryptanalysis of a clipped-neural-network-based chaotic cipher," International Symposium on Neural Networks (ISNN 2005), 2005.
[20] Chengqing Li, Xinxiao Li, Shujun Li, and Guanrong Chen, "Cryptanalysis of a multistage encryption system," IEEE International Symposium on Circuits and Systems, pp. 880-883, 2005.
[21] S. Ibrahim and M. Maarof, "A review on biological inspired computation in cryptology," Jurnal Teknologi Maklumat, Vol. 17, No. 1, pp. 90-98, 2005.
[22] B. Chandra and P. Paul Varghese, "Applications of cascade correlation neural networks for cipher system identification," World Academy of Science, Engineering and Technology, pp. 311-314, 2007.
[23] Jiyun Yang, Xiaofeng Liao, Wenwu Yu, Kwok-wo Wong, and Jun Wei, "Cryptanalysis of a cryptographic scheme based on delayed chaotic neural networks," Chaos, Solitons & Fractals, Elsevier, 2007.
[24] Dubravko Culibrk, Daniel Socek, and Michal Sramka, "Cryptanalysis of the block cipher based on the Hopfield neural network," Tatra Mountains Mathematical Publications, Vol. 37, pp. 75-91, 2007.
[25] T. Xiang, X. Liao, G. Tang, Y. Chen, and K. Wong, "A novel block cryptosystem based on iterating a chaotic map," Physics Letters A, Vol. 349, No. 1-4, pp. 109-115, 2006.
[26] W. Yu and J. Cao, "Cryptography based on delayed chaotic neural networks," Physics Letters A, Vol. 356, No. 4-5, pp. 333-338, 2006.
[27] Chengqing Li, Shujun Li, Gonzalo Alvarez, Guanrong Chen, and Kwok-Tung Lo, "Cryptanalysis of two chaotic encryption schemes based on circular bit shift and XOR operations," Physics Letters A, Vol. 369, No. 1-2, pp. 23-30, 2007.
[28] N. Nalini and G. Raghavendra Rao, "Cryptanalysis of block ciphers via improvised particle swarm optimization and extended simulated annealing techniques," International Journal of Network Security, Vol. 6, No. 3, pp. 342-353, 2008.
[29] A.D. Dileep, Sammireddy Swapna, C. Chandrasekhar, Kant, and P.K. Saxena, "Decryption of Feistel type block ciphers using hetero-association model," National Conference on Communications (NCC 2008), pp. 74-78, 2008.
[30] Khaled Alallayah, Mohamed Amin, Waiel Abd El-Wahed, and Alaa Alhamami, "Attack and construction of simulator for some of cipher systems using Neuro-Identifier," International Arab Journal of Information Technology, Vol. 7, No. 4, pp. 365-372, 2010.
[31] G. Sivagurunathan, V. Rajendran, and T. Purusothaman, "Classification of substitution ciphers using neural networks," IJCSNS International Journal of Computer Science and Network Security, Vol. 10, No. 3, 2010.
[32] M.M. Alani, "Neuro-cryptanalysis of DES," World Congress on Internet Security, pp. 23-27, 2012.
[33] Fatih Ozkaynak, A. Bedri Ozer, and Sirma Yavuz, "Cryptanalysis of Bigdeli algorithm using Cokal and Solak attack," International Journal of Information Security Science, Vol. 1, No. 3, pp. 79-81, 2012.
[34] Swetha Pandey and Megha Mishra, "Cryptanalysis of Feistel cipher using back propagation neural network," International Journal of Emerging Technology and Advanced Engineering, Vol. 2, Issue 3, pp. 460-462, 2012.
[35] SambasivaRao Baragada and P. Satyanarayana Reddy, "A survey on machine learning approaches to cryptanalysis," International Journal of Emerging Trends & Technology in Computer Science (IJETTCS), Vol. 2, Issue 4, July-August 2013.

AUTHORS

SambasivaRao Baragada received his M.Sc. and M.Phil. in Computer Science in 2001 and 2006 respectively, and was awarded his Ph.D. in Computer Science by Sri Venkateswara University, Tirupati, in 2011. Earlier he was associated as a Scientist with the Satellite Data Acquisition & Processing System (SDAPS), Data and Information Management Group (DMG), Indian National Center for Ocean Information Services (INCOIS), Ministry of Earth Sciences, Govt. of India, Hyderabad. At present he is working as a Lecturer in Computer Science at Govt. Degree College, Khairatabad, Hyderabad. He has published numerous articles in international journals and conferences.

P. Satyanarayana Reddy received his M.Sc. in Mathematics with Computer Science from Osmania University in 2000. He is a recipient of CSIR-NET lectureship eligibility. At present he is working as a Lecturer in Mathematics at Government Degree College, Khairatabad, Hyderabad, and is pursuing his Ph.D. at Osmania University, Hyderabad, India.

