
International Journal of Computer Science Engineering and Information Technology Research (IJCSEITR)
ISSN 2249-6831, Vol. 3, Issue 1, Mar 2013, 49-60
TJPRC Pvt. Ltd.

AMALGAMATION OF CRYPTOGRAPHY WITH NEURO FUZZY SYSTEM


S. SARAVANAKUMAR1, S. SANKAR2, R. SENTHIL KUMAR3, T. A. MOHANAPRAKASH4

1Professor, Department of Information Technology, Panimalar Institute of Technology, Chennai, India
2Professor, Department of EEE, Panimalar Institute of Technology, Chennai, India
3Associate Professor, Department of Information Technology, Panimalar Institute of Technology, Chennai, India
4Assistant Professor, Department of Information Technology, Panimalar Institute of Technology, Chennai, India

ABSTRACT
The security of present-day data transfer is not efficient: because of drawbacks in the file transfer protocols currently in use, both the data and the personal information of the user can be captured by intruders. There is therefore a need for a secure, low-overhead file transfer protocol that uses little bandwidth and a strong protocol to maintain data confidentiality. This project enables data to be sent between two computers across a shared or public network in a manner that emulates the properties of a private link. The basic requirements for a secure File Transfer Protocol (FTP) are user authentication, address management, data compression, data encryption and key management. This paper also aims at providing a very secure, low-overhead file transfer protocol for a small network.

KEYWORDS: EAP-TLS, Neural Network, Fuzzy Theory, Triple-DES, RSA, FTP

INTRODUCTION
In today's environment of increased security concerns, the protection of one's data is becoming increasingly important. One of the most challenging practical aspects of providing end-to-end network security for legacy client-server protocols such as non-anonymous FTP (File Transfer Protocol) is convincing end users to actually use the secure alternatives, rather than abandoning them in favor of simpler, more familiar, or more fully featured insecure clients. A number of secure alternatives to the FTP protocol have been developed, but thus far they have met with only limited success; we feel this is primarily because these solutions almost universally require the end user to learn a new, unfamiliar client interface or tweak complicated settings in order to make the security work. The average end user is interested in maintaining the security of their account, but is unwilling to invest significant effort to set up a complicated system or the time to learn a whole new interface. The project aims at achieving a low-overhead, reliable protocol for file transfer in a small network in which the administrator maintains the authentication information of every client and dispatches this information to servers when they require it. For reliable, sequenced, connection-oriented communication, TLS (Transport Layer Security) is used along with the Extensible Authentication Protocol (EAP). EAP-TLS [1] supports multiple authentication methods over transport layer security, so that both the transport and the authentication functions are provided. EAP-TLS is the original standard LAN EAP authentication protocol [2]. It is considered one of the most secure EAP standards available and is universally supported by all manufacturers of wired LAN hardware and software, including Microsoft. The requirement for a client-side certificate, however unpopular it may be, is what gives EAP-TLS its authentication strength and illustrates the classic convenience vs. security trade-off. A compromised password is not enough to break into EAP-TLS enabled systems, because the attacker still needs the client-side certificate.
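As an illustration of requiring client-side certificates for a file transfer service, in the spirit of EAP-TLS, the following sketch uses Python's standard ssl module; the certificate file names, port and output file are assumptions made for the example, not part of the protocol described in this paper.

import socket
import ssl

# Minimal TLS file-transfer server sketch requiring a client certificate
# (mutual authentication, analogous in spirit to EAP-TLS).
# "server.crt", "server.key", "clients_ca.crt" and port 9021 are assumed names.
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")
context.load_verify_locations(cafile="clients_ca.crt")   # CA that signed the client certificates
context.verify_mode = ssl.CERT_REQUIRED                  # reject clients without a certificate

with socket.create_server(("0.0.0.0", 9021)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()                # TLS handshake happens here
        with conn, open("received.bin", "wb") as out:
            while True:
                chunk = conn.recv(4096)
                if not chunk:
                    break
                out.write(chunk)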


Association rules using support and confidence can be defined as follows. Let I = {i1, ..., im} be a set of items. A database D = {T1, ..., Tn} is a set of transactions, where each transaction Ti ⊆ I (1 ≤ i ≤ n). A transaction T supports X, a set of items in I, if X ⊆ T. An association rule is an implication of the form X → Y, where X ⊆ I, Y ⊆ I and X ∩ Y = ∅. The rule is said to hold with support s and confidence c if |X ∪ Y| / |D| ≥ s and |X ∪ Y| / |X| ≥ c (see the sketch below). To capture interestingness, we consider user-specified thresholds for support and confidence, MST (minimum support threshold) and MCT (minimum confidence threshold). A detailed overview of association rule mining algorithms is presented in [3]. Privacy-preserving association rule mining should achieve the following goals: (1) all the sensitive association rules must be hidden in the sanitized database; (2) all the rules that are not specified as sensitive can still be mined from the sanitized database; (3) no new rule that was not present in the original database can be mined from the sanitized database. The first goal addresses privacy, the second concerns the usefulness of the sanitized dataset, and the third concerns the side effects of the sanitization process. Many approaches have been proposed to preserve privacy for sensitive knowledge or sensitive association rules in databases. They can be classified into the following classes: heuristic-based, border-based, exact, reconstruction-based, and cryptography-based approaches. A detailed overview of these approaches follows.
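The following minimal sketch (with an assumed toy transaction database and threshold values) illustrates how the support and confidence of a candidate rule X → Y are computed and compared against MST and MCT.

# Toy illustration of support and confidence for a rule X -> Y.
# The transaction database and thresholds below are assumed example values.
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
]

def support(itemset, db):
    # Fraction of transactions that contain every item of the itemset.
    return sum(1 for t in db if itemset <= t) / len(db)

def confidence(x, y, db):
    # support(X u Y) / support(X)
    return support(x | y, db) / support(x, db)

X, Y = {"bread"}, {"butter"}
MST, MCT = 0.4, 0.6          # minimum support / confidence thresholds (assumed)
s, c = support(X | Y, transactions), confidence(X, Y, transactions)
print(f"support={s:.2f}, confidence={c:.2f}, rule holds: {s >= MST and c >= MCT}")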

DATA DISTORTION
Heuristic Based Approaches

These approaches can be further divided into two groups based on the data modification technique: data distortion techniques and data blocking techniques. Data distortion techniques try to hide association rules by decreasing or increasing support (or confidence). To increase or decrease support (or confidence), they replace 0s by 1s or vice versa in selected transactions, so they can be used to address the complexity issue. However, they produce undesirable side effects in the new database, which lead to suboptimal solutions. M.Attallah et al. [4] were the first to propose heuristic algorithms. Verykios et al. [5] proposed five algorithms which hide sensitive knowledge in the database by reducing the support or confidence of sensitive rules. Y-H Wu et al. [6] proposed a method to reduce the side effects in the sanitized database that are produced by other approaches [4]. K.Duraiswamy et al. [5] proposed an efficient clustering-based approach to reduce the time complexity of the hiding process. Data blocking techniques replace 0s and 1s by unknowns (?) in selected transactions instead of inserting or deleting items, so it is difficult for an adversary to know the value behind a ?. Y.Saygin et al. [7] were the first to introduce a blocking-based technique for sensitive rule hiding. A safety margin is also introduced in [7] to show how far below the minimum threshold the new support and confidence of a sensitive rule should be. Wang and Jafari proposed approaches that are more efficient than the others presented.

Border Based Approaches

Border-based approaches use the notion of borders. These approaches pre-process the sensitive rules so that a minimum number of rules is given as input to the hiding process. So, they maintain database quality while minimizing


side effects. Sun and Yu were the first to propose the border revision process. The hiding process greedily selects those modifications that lead to minimal side effects. Later authors presented more efficient algorithms than other similar approaches.

Exact Approaches

Exact approaches formulate the hiding problem as a constraint satisfaction problem (CSP) and solve it using binary integer programming (BIP). They provide an exact (optimal) solution that satisfies all the constraints. However, if no exact solution exists for the database, some of the constraints are relaxed. These approaches provide better solutions than other approaches, but they suffer from the high time complexity of solving the CSP. Gkoulalas and Verykios proposed an approach to find an optimal solution for the rule hiding problem, together with a partitioning approach for the scalability of the algorithm.

Reconstruction Based Approaches

Reconstruction-based approaches generate a privacy-aware database by extracting sensitive characteristics from the original database. These approaches generate fewer side effects in the database than heuristic approaches. Mielikainen was the first to analyze the computational complexity of inverse frequent set mining and showed that in many cases the problems are computationally difficult. Y. Guo proposed an FP-tree-based algorithm which reconstructs the original database by using characteristics of the database and efficiently generates a number of secure databases.

Cryptography Based Approaches

Cryptography-based approaches are used in multiparty computation. If the database of one organization is distributed among several sites, then secure computation is needed between them. These approaches encrypt the original database instead of distorting it before sharing, so they provide input privacy. Vaidya and Clifton proposed a secure approach for sharing association rules when data are vertically partitioned. Other authors addressed the secure mining of association rules over horizontally partitioned data.

Triple-DES Encryption

For the last two decades, cryptographic protection of keys and data in financial networks has been provided by the Data Encryption Standard (DES) encryption algorithm. Single-length DES has been shown to be vulnerable to an exhaustive key search attack in as little as 22 hours. The finance industry is moving to the Triple DES algorithm for its presumed increased security. The DES algorithm itself remains secure, but the longer key length of Triple DES is required to adequately secure banking assets. In order to realize the increased security potential of Triple DES, key management will need to assume primary importance. Standards-based Triple DES key storage and key exchange are being implemented insecurely today; perhaps surprisingly to many, such implementations are only slightly more secure than single-length DES. An attack has been reported against a Triple DES mode called Cipher Block Chaining with Output Feedback Masking (CBCM). While of theoretical interest, however, the attack is not practical: Triple DES has not been broken and its security has not been compromised. There are two basic attacks. One requires 2^65 blocks of chosen ciphertext (i.e., you pick the ciphertext and request the plaintext from the person whose messages you are trying to break). Even ignoring the prospects of obtaining the plaintext for chosen ciphertext at all, that is about a billion terabytes of data that must be acquired from a single message; the download time is hard even to imagine.

NEURO FUZZIFICATION SYSTEM


The neural fuzzy approach implements a fuzzy system with a neural network, as illustrated in Figure 1. The connections in this


architecture are weighted with fuzzy sets, and rules with the same antecedents are represented by the drawn ellipses; these assure the integrity of the rule base. The input units take on the function of the fuzzification interface, the logical interface is represented by the propagation function, and the output unit is responsible for the defuzzification interface. Learning in this architecture is based on a mixture of reinforcement learning and the backpropagation algorithm. The architecture can be used to learn the rule base from scratch, if there is no a priori knowledge of the system, or to optimise an initial, manually defined rule base.
Figure 1: System Architecture (Fuzzification and Defuzzification Layers)

EFuNN Architecture

In the Evolving Neural Fuzzy Network (EFuNN), nodes are created during the learning phase. The first layer passes data to the second layer, which calculates the degrees of compatibility with respect to the predefined membership functions. The third layer contains fuzzy rule nodes representing prototypes of input-output data as associations of hyper-spheres from the fuzzy input and fuzzy output spaces. Each rule node is defined by two vectors of connection weights, which are adjusted through a hybrid learning technique. The fourth layer calculates the degree to which the output membership functions are matched by the input data, and the fifth layer carries out the defuzzification and calculates the numerical value of the output variable. The Dynamic Evolving Neural Fuzzy Network is a modified version of this system in which not only the winning rule node's activation is propagated: a group of rule nodes is dynamically selected for every new input vector, and their activation values are used to calculate the dynamic parameters of the output function.

Figure 2: Advanced System Architecture

For a more detailed description of these architectures, beyond the specific references made in this paper, a detailed survey was made by Abraham in 2000, in which descriptions of several well-known neuro-fuzzy architectures and their respective learning algorithms can be found.
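As a minimal illustration of the fuzzification, weighted rule-evaluation and defuzzification layers described above, the following sketch uses assumed triangular membership functions and example rule weights; it is a conceptual sketch, not the EFuNN implementation itself.

# Minimal neuro-fuzzy sketch: fuzzification -> weighted rules -> centroid-style defuzzification.
# Membership functions, rule weights and the crisp input are assumed example values.

def tri(x, a, b, c):
    # Triangular membership function with feet a, c and peak b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(x):
    # Fuzzification layer: degrees of membership in three input fuzzy sets.
    return {"low": tri(x, 0, 0, 5), "medium": tri(x, 2, 5, 8), "high": tri(x, 5, 10, 10)}

# Rule layer: each rule maps an input fuzzy set to an output centre, with a weight
# that a learning procedure (e.g. backpropagation) would normally adjust.
rules = [("low", 1.0, 0.8), ("medium", 5.0, 1.0), ("high", 9.0, 0.9)]  # (antecedent, centre, weight)

def infer(x):
    mu = fuzzify(x)
    num = sum(mu[a] * w * c for a, c, w in rules)   # weighted rule activations
    den = sum(mu[a] * w for a, c, w in rules)
    return num / den if den else 0.0                # centroid-style defuzzification

print(infer(6.0))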


The other attack requires obtaining a known plaintext block encrypted under 2^33 (about 10 billion) variants of one of the three keys. You do not know that key or the others, but you must be able to control exactly how these variants are formed; thus, this can be regarded as a chosen-key attack of sorts (the authors call it a "related-key" attack). That one key is then cracked, the second key is cracked with a chosen-ciphertext attack, and the third key by brute force. The time requirements for the attacks are not much more than for breaking single DES, but the chosen-ciphertext and chosen-key requirements are the show-stoppers. To pull these off, you really must have access to the encryption process, as it is unlikely your adversary will be a willing accomplice; but if you can get that kind of access, you can probably get plaintext and keys by much simpler methods. Even though the attack is not realistic, the ANSI working group pulled that particular CBCM mode from the X9.52 standard because of public perception and the potential loss of confidence in Triple DES.

Algorithm

Triple DES uses a "key bundle" which comprises three DES keys, K1, K2 and K3, each of 56 bits (excluding parity bits). The encryption algorithm is:

Ciphertext = EK3(DK2(EK1(plaintext)))

i.e., DES encrypt with K1, DES decrypt with K2, then DES encrypt with K3. Decryption is the reverse:

Plaintext = DK1(EK2(DK3(ciphertext)))

i.e., decrypt with K3, encrypt with K2, and then decrypt with K1. Each triple encryption encrypts one block of 64 bits of data. In each case the middle operation is the reverse of the first and last. This improves the strength of the algorithm when using keying option 2, and provides backward compatibility with DES with keying option 3.

Keying Options

The standards define three keying options:

Keying option 1: All three keys are independent.
Keying option 2: K1 and K2 are independent, and K3 = K1.
Keying option 3: All three keys are identical, i.e. K1 = K2 = K3.

Keying option 1 is the strongest, with 3 x 56 = 168 independent key bits. Keying option 2 provides less security, with 2 x 56 = 112 key bits. This option is stronger than simply DES encrypting twice, e.g. with K1 and K2, because it protects against meet-in-the-middle attacks. Keying option 3 is no better than DES, with only 56 key bits. This option provides backward compatibility with DES, because the first and second DES operations simply cancel out. The encryption and decryption in Triple-DES are shown in Figure 3.

Figure 3: Triple-DES Operation
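The EDE composition above can be sketched as follows; the single-DES block primitive is assumed to come from the PyCryptodome library, and the key and block values are arbitrary example data.

# Sketch of the Triple-DES (EDE) composition described above, assuming PyCryptodome
# provides the single-DES block primitive (ECB mode, one 8-byte block at a time).
from Crypto.Cipher import DES

def des_e(block, key):
    return DES.new(key, DES.MODE_ECB).encrypt(block)

def des_d(block, key):
    return DES.new(key, DES.MODE_ECB).decrypt(block)

def tdes_encrypt(block, k1, k2, k3):
    # Ciphertext = E_K3(D_K2(E_K1(plaintext)))
    return des_e(des_d(des_e(block, k1), k2), k3)

def tdes_decrypt(block, k1, k2, k3):
    # Plaintext = D_K1(E_K2(D_K3(ciphertext)))
    return des_d(des_e(des_d(block, k3), k2), k1)

# Example with three independent 8-byte keys (keying option 1); values are arbitrary.
k1, k2, k3 = b"key1AAAA", b"key2BBBB", b"key3CCCC"
block = b"8bytes!!"                       # DES operates on 64-bit blocks
ct = tdes_encrypt(block, k1, k2, k3)
assert tdes_decrypt(ct, k1, k2, k3) == block

Keying option 2 and option 3 follow simply by passing the appropriate keys, e.g. tdes_encrypt(block, k1, k2, k1) for option 2.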


Overview of RSA

One of the biggest problems in cryptography is the distribution of keys. Suppose you live in the United States and want to pass information secretly to your friend in Europe. If you truly want to keep the information secret, you need to agree on some sort of key that you and he can use to encode and decode messages. But you do not want to keep using the same key, or you will make it easier and easier for others to crack your cipher. It is also a pain to get keys to your friend: if you mail them, they might be stolen; if you send them cryptographically and someone has broken your code, that person will also have the next key; if you have to go to Europe regularly to hand-deliver the next key, that is expensive; and if you hire a courier to deliver the new key, you have to trust the courier, et cetera. This problem is addressed by RSA encryption, where RSA are the initials of the three creators: Rivest, Shamir, and Adleman [4]. It is based on the idea shown in Figure 4.

Figure 4: Public Key Cryptography

Description of the Algorithm

The RSA algorithm involves three steps: key generation, encryption and decryption. These are explained below.

Key Generation

RSA involves a public key and a private key. The public key can be known to everyone and is used for encrypting messages. Messages encrypted with the public key can only be decrypted using the private key. The keys for the RSA algorithm are generated the following way:

1. Choose two distinct prime numbers p and q.
2. Compute n = p * q.
3. Compute the totient φ(n) = (p − 1)(q − 1).
4. Choose an integer e with 1 < e < φ(n) such that e and φ(n) share no divisors other than 1 (i.e. e and φ(n) are coprime).
5. Determine d (using modular arithmetic) which satisfies the congruence relation d·e ≡ 1 (mod φ(n)).
The public key consists of the modulus n and the public (or encryption) exponent e. The private key consists of the modulus n and the private (or decryption) exponent d which must be kept secret.


Encryption

Alice transmits her public key (n, e) to Bob and keeps the private key secret. Bob then wishes to send a message M to Alice. He first turns M into an integer m with 0 < m < n by using an agreed-upon reversible protocol known as a padding scheme. He then computes the ciphertext C corresponding to:

C = m^e (mod n)
This can be done quickly using the method of exponentiation by squaring. Bob then transmits C to Alice.

Decryption

Alice can recover m from C by using her private key exponent d via the following computation:

m = C^d (mod n)
Given m, she can recover the original message M by reversing the padding scheme. The above decryption procedure works because:

C^d = (m^e)^d = m^(ed) (mod n)

Now, since

ed = 1 + k·φ(n),

m^(ed) = m^(1 + k·φ(n)) = m · (m^(φ(n)))^k ≡ m (mod n)

The last congruence directly follows from Euler's theorem when m is relatively prime to n. Using the Chinese remainder theorem, it can be shown that the equation holds for all m. This shows that we get the original message back:

C^d ≡ m (mod n)
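A toy, end-to-end sketch of the key generation, encryption and decryption steps above, with deliberately small, insecure primes chosen only for illustration:

# Toy RSA sketch following the steps above; the tiny primes are for illustration only.
from math import gcd

p, q = 61, 53                      # 1. two distinct primes (far too small for real use)
n = p * q                          # 2. modulus
phi = (p - 1) * (q - 1)            # 3. totient
e = 17                             # 4. public exponent, coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)                # 5. private exponent: d*e = 1 (mod phi)

m = 42                             # padded message as an integer, 0 < m < n
C = pow(m, e, n)                   # encryption: C = m^e mod n (square-and-multiply)
recovered = pow(C, d, n)           # decryption: m = C^d mod n
assert recovered == m
print("ciphertext:", C, "recovered:", recovered)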
Padding Schemes

When it is used in practice, RSA is generally combined with some padding scheme. The goal of the padding scheme is to prevent a number of attacks that potentially work against RSA without padding.

Hybrid Encryption

Hybrid encryption is a concept in cryptography for improving the security of the data transferred between systems. The hybrid encryption can be arranged in many ways, depending on the user, to make it difficult for an intruder to obtain the original data. In hybrid encryption, the key used by one algorithm can be obtained from the output of another algorithm. In this paper we present a technique in which the data is encrypted twice, by the Triple-DES and RSA algorithms, where the keying of the algorithms is chosen depending on the user's requirements. A minimal sketch of this combination is given below.
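One possible arrangement of the hybrid scheme described above is sketched here, assuming the PyCryptodome library: a random Triple-DES session key encrypts the data, and RSA protects that session key. The exact composition is left to the user in this paper, so this is an illustrative assumption rather than the definitive construction.

# One possible arrangement of the hybrid scheme described above (assumed, PyCryptodome):
# a random Triple-DES session key encrypts the data, and RSA protects that session key.
from Crypto.PublicKey import RSA
from Crypto.Cipher import DES3, PKCS1_OAEP
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

rsa_key = RSA.generate(2048)                       # receiver's key pair
rsa_enc = PKCS1_OAEP.new(rsa_key.publickey())
rsa_dec = PKCS1_OAEP.new(rsa_key)

data = b"file contents to protect"                 # assumed example payload
session_key = DES3.adjust_key_parity(get_random_bytes(24))   # 3-key Triple-DES bundle

# Sender: Triple-DES encrypts the data, RSA encrypts the Triple-DES key.
tdes = DES3.new(session_key, DES3.MODE_CBC)
ciphertext = tdes.encrypt(pad(data, DES3.block_size))
wrapped_key = rsa_enc.encrypt(session_key)
packet = (wrapped_key, tdes.iv, ciphertext)

# Receiver: recover the session key with RSA, then decrypt the data with Triple-DES.
key2 = rsa_dec.decrypt(packet[0])
plain = unpad(DES3.new(key2, DES3.MODE_CBC, packet[1]).decrypt(packet[2]), DES3.block_size)
assert plain == data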


The proposed framework of the DSRRC algorithm is shown in Figure 5. Initially, association rules (AR) are mined from the source database D using an association rule mining algorithm, e.g. the Apriori algorithm in [2]. Then sensitive rules (SR) are specified among the mined rules. The selected rules are clustered based on the common R.H.S. item of the rules; the rule-clusters are denoted RCLs. Then, for each rule-cluster, the sensitive transactions are indexed, and the sensitivity of each item (and each rule) in each rule-cluster is calculated. Rule-clusters are sorted in decreasing order of their sensitivity, and the sensitive transactions supporting the first rule-cluster are sorted in decreasing order of their sensitivity.

Figure 5: Framework of the Proposed Algorithm

After the sorting process, the rule hiding (RH) process hides all the sensitive rules in the sorted transactions of each cluster, using the strategy mentioned in this section, and updates the sensitivity of the sensitive transactions in the other clusters. The hiding process starts from the most sensitive transaction and continues until all the sensitive rules in all clusters are hidden. Finally, the modified transactions are updated in the original database, and the produced database, called the sanitized database D', ensures a certain level of privacy for the specified rules while maintaining data quality. The extended hazard regression model includes the most common hazard models, such as the proportional hazards model, the accelerated failure time model and a proportional hazards/accelerated time hybrid model with constant spread parameter. Although proportional hazards models have been used for some time to model the occurrence of hardware failures, such models for describing the dependence on covariates have not previously been used for modeling software failures. Proportional hazard and accelerated time models may be used to explain the part of the spread of the failure distribution that is due to variation of the covariates. Let h0(.) be a baseline hazard function, θ = (β1, β2, γ1, γ2) be the vector of unknown parameters, and u1(.), u2(.), v1(.), v2(.) be known monotone functions equal to one when their arguments are zero. The extended hazard regression model is given by:
hc(t | z) = T1(t | z) T2(t | z),

where

T1(t | z) = u1(β1^T z) v1(γ1^T z) [u1(β1^T z) t]^(v1(γ1^T z) − 1)

and

T2(t | z) = h0([u2(β2^T z) t]^(v2(γ2^T z))).

Assuming u1(.) = u2(.), v1(.) = v2(.) and

h0(x) = {Γ(k)}^(−1) x^(k−1) e^(−x) / [1 − I(k; x)],

with

I(k; x) = {Γ(k)}^(−1) ∫_0^x t^(k−1) e^(−t) dt,

the above model corresponds to the hazard function of a random variable with a generalized gamma distribution with three parameters, two of them depending on the covariates z. For v1(.) = v2(.) = 1 and k = 1, the exponential model is obtained; when only k = 1, a Weibull model is obtained. For h0(x) = 1/(1 + x), we obtain the log-logistic distribution with two parameters depending on covariates. The baseline hazard function can also be a polynomial approximation or a piecewise polynomial approximation. To estimate the parameters, let:
Hc(t | z) = ∫_0^t hc(u | z) du

and let t1, t2, ... be a time sequence. Each ti has an associated covariate vector zi and an indicator variable δi, defined by δi = 1 if at time ti the system is in failure, and δi = 0 otherwise. The log-likelihood function is:

log L = Σ_{i=1}^{n} (δi log[hc(ti | zi)] − Hc(ti | zi))
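As an illustration of maximizing this log-likelihood, the following sketch fits a Weibull special case (no covariates) by numerical optimization; the failure times, censoring indicators and parameterization are assumed example values, not data from this paper.

# Maximum likelihood fit of a Weibull special case of the hazard model (assumed toy data).
import numpy as np
from scipy.optimize import minimize

t = np.array([12.0, 30.0, 45.0, 80.0, 120.0, 150.0])   # observed times
delta = np.array([1, 1, 1, 1, 0, 0])                    # 1 = failure, 0 = censored

def neg_log_likelihood(params):
    # Weibull hazard h(t) = (k/lam)(t/lam)^(k-1), cumulative hazard H(t) = (t/lam)^k.
    log_k, log_lam = params                  # optimize on the log scale to keep k, lam > 0
    k, lam = np.exp(log_k), np.exp(log_lam)
    h = (k / lam) * (t / lam) ** (k - 1)
    H = (t / lam) ** k
    return -np.sum(delta * np.log(h) - H)    # negative of the log-likelihood above

res = minimize(neg_log_likelihood, x0=[0.0, np.log(t.mean())], method="Nelder-Mead")
k_hat, lam_hat = np.exp(res.x)
print("estimated shape k:", k_hat, "estimated scale:", lam_hat)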

Then θ* = (β1*, β2*, γ1*, γ2*), the maximum likelihood estimate of θ, is obtained by solving the system of nonlinear equations ∂ log L / ∂θ = 0. In the software system under development, the covariates are provided by the MONITOR and describe the software history: the number of previous failures, the time of failure, the number of files under processing, the global memory usage, the number of swapped-out pages, the number of processes in the running queue, etc. The predictive power of the parametric prediction model provided by the extended hazard regression model may be further improved by a neural network. It is, however, well known that the predictive capability of a neural network can be affected by the type of neural network model used to describe the failure data, how the input and output variables are represented to it, the order in which the input and output values are presented during the training process, and the complexity of the network. The software reliability growth model can be expressed in terms of the neural network mapping as μk+h = Mapping((Tk, Fk), tk+h), where Tk is the sequence of cumulative execution times (t1, t2, ..., tk), Fk is the corresponding sequence of observed accumulated failures (μ1, ..., μk) up to time tk used to train the network, tk+h = tk + Δ is the cumulative execution time at the end of a future test session k + h, and μk+h is the prediction of the network. The following attributes are important for the architecture of a neural network: the number of layers in the network and the type of network topology used. A NN can be a single-layer network (no hidden layer, only input and output layers) or a multilayer network (with one or more hidden layers). Based on the connectivity and the direction of the links, the network can employ only feed-forward connections (FFN) or can employ feedback connections (recurrent networks). The predictive ability of a NN can also be affected by the learning process. For the NN module, two training regimes are available: generalized training and prediction training. The generalized training method is the standard for training feed-forward networks; prediction training is appropriate for training recurrent networks.
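A minimal sketch of such a mapping follows, using an assumed feed-forward regressor from scikit-learn and toy failure data (not the project's data) to predict the accumulated failures at a future cumulative execution time.

# Toy sketch: feed-forward NN mapping scaled cumulative execution time to scaled
# cumulative failures, then querying a future time t_{k+h}. Data values are assumed.
import numpy as np
from sklearn.neural_network import MLPRegressor

exec_time = np.array([5.0, 10.0, 18.0, 27.0, 40.0, 55.0, 73.0])   # cumulative hours
failures  = np.array([2.0,  5.0,  9.0, 12.0, 16.0, 19.0, 21.0])   # accumulated failures

# Represent inputs/outputs on a [0, 1] scale against expected maxima (see text).
t_max, f_max = 100.0, 30.0
X = (exec_time / t_max).reshape(-1, 1)
y = failures / f_max

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X, y)                                   # "generalized" training on the history

t_future = 90.0                                 # cumulative time at a future session
pred = net.predict(np.array([[t_future / t_max]]))[0] * f_max
print(f"predicted accumulated failures at t={t_future}: {pred:.1f}")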


The data for training at time ti consist of the complete failure history of the system since the beginning of execution. The representation method for the input and output variables is based on scaling to the interval [0, 1]. The number of possible values (an equidistant division) can be selected depending on the software project under study. The expected maximum values for both the cumulative faults and the cumulative execution time must be translated to a positive value less than 1. In the training process, initially, at least three observed data points must be used. In practice, at the end of each training, the network is fed with future inputs to measure its prediction of the total number of defects. A substantial body of work has been published regarding fault tolerance, and we should find the best algorithm for the dependability and performance requirements of each specific application. On the other hand, environmental parameters directly influence the choice of the fault-tolerance policy. For instance, if the failure rate becomes low and the recovery delay is not critical, we can reasonably choose an optimistic approach such as checkpointing. According to the framework for hiding association rules in a database presented above, the proposed DSRRC algorithm is shown in Figure 2. Using the given minimum support threshold (MST) and minimum confidence threshold (MCT), the algorithm first generates the possible association rules from the source database D. Some of the generated association rules are then selected as the sensitive rule set (set RH) by the database owner; rules with only a single R.H.S. item are specified as sensitive. The algorithm then finds C clusters based on the common R.H.S. item in the sensitive rule set RH and calculates the sensitivity of each cluster. After that, it indexes the sensitive transactions for each cluster and sorts all the clusters in decreasing order of their sensitivities. For the most sensitive cluster, the algorithm sorts the sensitive transactions in decreasing order of their sensitivities, as sketched below.
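The following simplified sketch illustrates the clustering and sorting steps just described, with assumed toy rules and transactions; the item sensitivity used here (frequency of the rules' items in the supporting transactions) is an illustrative assumption, not the paper's exact measure.

# Simplified sketch of DSRRC-style clustering and sorting (toy data, assumed sensitivity measure).
from collections import defaultdict

transactions = [{"a", "b", "c"}, {"a", "c"}, {"b", "c"}, {"a", "b"}, {"a", "b", "c"}]
sensitive_rules = [({"a"}, "c"), ({"b"}, "c"), ({"a"}, "b")]   # (L.H.S. itemset, single R.H.S. item)

# 1. Cluster sensitive rules by their common R.H.S. item.
clusters = defaultdict(list)
for lhs, rhs in sensitive_rules:
    clusters[rhs].append((lhs, rhs))

def rule_items(rule):
    return rule[0] | {rule[1]}

# 2. Index sensitive transactions and compute a sensitivity score per cluster.
cluster_info = []
for rhs, rules in clusters.items():
    supporting = [t for t in transactions if any(rule_items(r) <= t for r in rules)]
    # Assumed sensitivity: total occurrences of the rules' items in supporting transactions.
    sensitivity = sum(len(rule_items(r) & t) for t in supporting for r in rules)
    cluster_info.append((rhs, rules, supporting, sensitivity))

# 3. Sort clusters, and the transactions of the most sensitive cluster, by decreasing sensitivity.
cluster_info.sort(key=lambda c: c[3], reverse=True)
top_rhs, top_rules, top_txns, _ = cluster_info[0]
top_txns.sort(key=lambda t: sum(len(rule_items(r) & t) for r in top_rules), reverse=True)
print("most sensitive cluster R.H.S.:", top_rhs, "transactions to modify first:", top_txns)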

CONCLUSIONS
The new file transfer protocol using hybrid encryption and EAP-TLS combines many features of recent cryptographic technology. The module under discussion allows filtering of the data supplied by the monitoring structure and produces a model that can be used for prediction. The main focus of this paper is the implementation of the EAP-TLS protocol for the file transfer process and the use of the hybrid cryptographic technique in a manner suited to the security of the data. It also covers the implementation of the Triple DES and RSA algorithms for encryption under the hybrid cryptographic technique.

REFERENCES
1. Burtschy, B., Albeanu, G., Boros, D.N. & Popentiu, Fl., Improving Software Reliability Forecasting, Microelectronics and Reliability, 37, 6(2008), 901-907.
2. Sitte, Renate, Comparison of Software-Reliability-Growth Predictions: Neural Networks vs Parametric-Recalibration, IEEE Transactions on Reliability, 48, 3(2010), 285-291.
3. Popentiu-Vladicescu, Fl. & Sens, P., A Software Architecture for Monitoring the Reliability in Distributed Systems, ESREL'99, September 13-17, TUM Munich-Garching, Germany, 615-620, 2009.
4. Musa, J.D., Software Reliability Engineering, McGraw-Hill, New York, 2010.
5. Smidts, C., Stutzke, M. & Stoddard, R.W., Software Reliability Modeling: An Approach to Early Reliability Prediction, IEEE Transactions on Reliability, 47, 3(2011), 268-278.
6. Jelinski, Z. & Moranda, P.B., Software Reliability Research, in Statistical Computer Performance Analysis, Academic Press, N.Y., 465-484, 2011.


7. Goel, A.L. & Okumoto, K., Time-dependent Error Detection Rate Model for Software Reliability and Other Performance Measures, IEEE Transactions on Reliability, R-28, 2011.

BIOGRAPHY

Dr. S. SARAVANAKUMAR has more than 11 years of teaching and research experience. He did his postgraduate M.E. in Computer Science and Engineering at Bharath Engineering College, Anna University, Chennai, and his Ph.D. in Computer Science and Engineering at Bharath University, Chennai. He is guiding a number of research scholars in the areas of ad hoc networks, ANN, security in sensor networks, mobile databases and data mining.

Dr. S. SANKAR obtained his B.E. degree in Electrical & Electronics Engineering at Sri Venkateswara College of Engineering, Madras University, and his M.E. (Power Systems) degree from Annamalai University, Chidambaram. He completed his Ph.D. in the area of FACTS controllers in 2011. His research interests are in the areas of FACTS, electrical machines, voltage stability, power quality, power system security and power system analysis.

Mr. R. SENTHILKUMAR, currently working as an Associate Professor at Panimalar Institute of Technology, has more than 9 years of teaching and research experience. He graduated with his Bachelor's in Computer Science and Engineering from Anna University and his Master's in Computer Science and Engineering from Dr. M.G.R. Educational and Research Institute. He has served in several positions as Lecturer, Senior Lecturer and Assistant Professor at various institutions. He has guided more than 30 undergraduate students in research and is most interested in the area of ad hoc networks.

T.A. MOHANAPRAKASH has more than 7 years and 6 months of teaching and research experience. He did his postgraduate M.Tech. in Information Technology at Sathyabama University, Chennai. He has occupied various positions as Lecturer, Senior Lecturer and Assistant Professor. He has published 8 research papers in international journals, international conferences and national conferences.
