
Applied Soft Computing 7 (2007) 722-727
www.elsevier.com/locate/asoc

Artificial neural network model for voltage security based contingency ranking
D. Devaraj *, J. Preetha Roselyn, R. Uma Rani
Department of Electrical & Electronics Engineering, A.K. College of Engineering, Krishnankoil-626190, Tamil Nadu, India
Received 3 August 2004; received in revised form 14 July 2005; accepted 9 November 2005
Available online 29 March 2006

Abstract

The continual increase in demand for electrical energy and the tendency towards maximizing economic benefits in the power transmission system have made real-time voltage security analysis an important issue in the operation of power systems. The most important task in real-time security analysis is identifying the critical contingencies from a large list of credible contingencies and ranking them according to their severity. This paper presents an artificial neural network (ANN)-based approach for contingency ranking. A set of feed forward neural networks is developed to estimate the voltage stability level at different load conditions for the selected contingencies. The maximum L-index of the load buses in the system is taken as the indicator of voltage instability. A mutual information-based method is proposed to select the input features of the neural network. The effectiveness of the proposed method has been demonstrated through contingency ranking in the IEEE 30-bus system. The performance of the developed model is compared with a unified neural network trained with the full feature set. Simulation results show that the proposed method takes less time for training and has good generalization abilities.
© 2006 Elsevier B.V. All rights reserved.
Keywords: Voltage security; Contingency ranking; Artificial neural network; Feature selection

1. Introduction

The intensive loading of existing generation and transmission facilities, due to difficulties in building new generation in load areas and drawing power from remotely located generation, has resulted in voltage-related problems in many power systems. Moreover, lavish use of shunt capacitor banks, while extending transfer limits, moves the power system closer to the voltage instability point. A system enters a state of voltage instability when a disturbance (an increase in load or a change in system conditions) causes a progressive and uncontrollable deterioration in the voltage profile. Voltage instability has been studied with both static and dynamic approaches [1]. Traditional methods of voltage stability investigation have relied on static analysis using the conventional power flow method. This approach has been practically viable because voltage collapse is a relatively slow process. Computed PV/VQ curves are the most widely used method for evaluating the

* Corresponding author. E-mail address: deva230@yahoo.com (D. Devaraj).
1568-4946/$ - see front matter © 2006 Elsevier B.V. All rights reserved.
doi:10.1016/j.asoc.2005.11.010

voltage stability of a power system. Kessel and Glavitsch [2] developed a voltage stability index called the L-index, based on the power flow solution. This index ranges from 0 (no load) to 1 (voltage collapse); the bus with the highest L-index is the most vulnerable bus in the system. Tiranuchit and Thomas [3] proposed the minimum singular value of the Jacobian of the load flow equations as a voltage stability index. In [4], the continuation power flow method was applied to compute the exact collapse point and the voltage stability margin. Gao et al. [5] used the modal analysis technique to compute the voltage stability level of the system. The aforementioned techniques require large computations and are not efficient for on-line applications. On-line voltage security analysis requires evaluating the effects of all possible contingency cases and ordering them by severity: the most severe contingency is ranked first and the least severe is ranked last. Recently, artificial neural networks (ANNs) have been proposed as a tool for contingency ranking [6-8]. In most of the published literature on ANN-based contingency ranking, a single large network is trained to map the system operating state to the post-contingency voltage stability level for all the contingencies in the contingency list. The problem with this approach is that, as the size of the system grows, the number of variables to be considered and the number


of contingencies to look at to estimate the voltage stability will also increase. This may lead to difficulty in training the network in a limited amount of time. In this paper, separate neural networks dedicated to handling specific contingencies are developed. By selecting only the relevant attributes of the data as input features and excluding redundant ones while training the neural network, higher performance is expected with smaller computational effort. In this work, we propose the mutual information [9] between the input variables and the output as the criterion for selecting the input features of the networks. The motivation for considering mutual information is its capability to measure a general dependence between two variables. The effectiveness of the proposed method is demonstrated through contingency ranking in the IEEE 30-bus test system.

The remainder of this paper is organized as follows: Section 2 reviews the use of the L-index in voltage stability analysis. Section 3 gives the details of artificial neural networks. Section 4 explains the methodology followed to configure the ANN from the input-output data. Section 5 gives the details of the application of the proposed model for contingency ranking in the IEEE 30-bus test system.

2. Voltage stability index

The static approach to voltage stability analysis involves determining an index known as the voltage collapse proximity indicator. This index is an approximate measure of the closeness of the system operating point to voltage collapse. There are various methods of determining the voltage collapse proximity indicator; one such method is the L-index of the load buses in the system, proposed in [2]. It is based on load flow analysis, and its value ranges from 0 (no-load condition) to 1 (voltage collapse). The bus with the highest L-index value is the most vulnerable bus in the system.
The L-index calculation for a power system is briefly discussed below. Consider an N-bus system containing Ng generators. The relationship between currents and voltages can be expressed as

\[ \begin{bmatrix} I_G \\ I_L \end{bmatrix} = \begin{bmatrix} Y_{GG} & Y_{GL} \\ Y_{LG} & Y_{LL} \end{bmatrix} \begin{bmatrix} V_G \\ V_L \end{bmatrix}, \qquad (1) \]

where I_G, I_L and V_G, V_L represent the currents and voltages at the generator buses and load buses. Rearranging the above equation gives

\[ \begin{bmatrix} V_L \\ I_G \end{bmatrix} = \begin{bmatrix} Z_{LL} & F_{LG} \\ K_{GL} & Y_{GG} \end{bmatrix} \begin{bmatrix} I_L \\ V_G \end{bmatrix}, \qquad (2) \]

where

\[ F_{LG} = -Y_{LL}^{-1}\, Y_{LG}. \qquad (3) \]

The L-index of the jth node is given by

\[ L_j = \left| 1 - \sum_{i=1}^{N_g} F_{ji}\, \frac{V_i}{V_j}\, \angle\!\left(\theta_{ji} + \delta_i - \delta_j\right) \right|, \qquad (4) \]

where V_i is the voltage magnitude of the ith generator bus, V_j the voltage magnitude of the jth load bus, θ_ji the phase angle of the term F_ji, δ_i the voltage phase angle of the ith generator unit, δ_j the voltage phase angle of the jth load bus, and N_g the number of generating units. The values of F_ji are obtained from the matrix F_LG. It has been demonstrated that when a load bus approaches a voltage collapse situation, its L-index approaches one. Hence, for a system-wide voltage stability assessment, the L-index is evaluated at all load buses, and the maximum value of the L-index indicates how far the system is from voltage collapse.

Contingencies such as transmission line or generator outages often result in voltage instability in a power system. The system is said to be secure if none of the contingencies causes voltage instability. The maximum L-index of the system under a contingency gives a measure of the severity of that contingency. In this work, artificial neural networks are used to estimate the maximum value of the L-index under the contingency state.

3. Review of artificial neural network

Artificial neural networks [10,11] can be viewed as parallel and distributed processing systems consisting of a large number of simple, massively connected processors. A number of architectures have been proposed to solve different pattern recognition problems. A multilayer feed forward network trained by back propagation is the most popular and versatile form of neural network for pattern mapping and function approximation problems. The structure of a multilayer feed forward network is shown in Fig. 1. The input vector representing the pattern to be recognized is presented to the input layer and distributed to subsequent hidden layers and finally to the output layer via weighted connections. Each neuron in the network operates by taking the sum of its weighted inputs and passing the result through a nonlinear activation function.
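This weighted-sum-and-squash operation can be sketched in a few lines of numpy. The layer sizes, weights, and gain below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def sigmoid(net, a=1.0):
    # Logistic activation; the gain 'a' controls the steepness of the curve
    return 1.0 / (1.0 + np.exp(-a * net))

def layer_forward(x, W, b, a=1.0):
    # Each neuron forms the weighted sum of its inputs plus a bias,
    # then passes the result through the nonlinear activation
    net = W @ x + b
    return sigmoid(net, a)

# Illustrative 3-input, 2-neuron layer with made-up weights
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))
b = np.zeros(2)
x = np.array([0.5, -1.0, 0.25])
out = layer_forward(x, W, b)
print(out)  # two activations, each strictly between 0 and 1
```

Stacking such layers, with the outputs of one layer feeding the weighted sums of the next, gives the multilayer feed forward structure of Fig. 1.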
This is mathematically represented as

\[ \mathrm{out}_i = f(\mathrm{net}_i) = f\!\left(\sum_{j=1}^{n} w_{ij}\,\mathrm{out}_j + b_i\right), \qquad (5) \]

where out_i is the output of the ith neuron in the layer under consideration, out_j the output of the jth neuron in the preceding layer, w_ij the connection weight between the ith neuron and its jth input, and b_i a constant called the bias. One of the most frequently employed activation functions for neural networks is the sigmoid:

\[ f(\mathrm{net}_i) = \frac{1}{1 + e^{-a\,\mathrm{net}_i}}, \qquad (6) \]


where a is the activation gain, which controls the slope of the sigmoid function. The knowledge required to map the input patterns to the outputs is embodied in the weights. Initially, the weights appropriate to a given problem domain are unknown, and until a set of applicable weights is found the network has no ability to deal


Fig. 1. Architecture of feed forward neural network.

with the problem to be solved. The process of finding a useful set of weights is called training. Training begins with a training set consisting of specimen inputs with associated outputs. Training the network involves adjusting the connection weights to map the training-set vectors correctly, at least to within some defined error limit; in effect, the network learns what the training set has to teach it. If the training set is good and the training algorithm is effective, the network should then be able to estimate the output correctly even for inputs not belonging to the training set. This phenomenon is termed generalization. Thus the application of a neural network to a recognition problem involves two distinct phases: a training phase and an operational phase. During the training phase, the network weights are adapted to reflect the problem domain. In the second, operational phase, the weights are frozen, and the network, when presented with test data, predicts the output.

4. Development of neural network model

The proposed methodology for contingency ranking is based on feed forward neural networks that estimate the L-index for different operating conditions. The neural network approach has two phases: training and exploitation. During the training phase, a set of neural networks, each dedicated to one contingency, is trained to capture the underlying relationship between the pre-contingency system state and the post-contingency L-index value. This is done in view of the diversified nature of the different contingencies; it also helps to exploit the local nature of many contingencies in model development. Fig. 2 shows a schematic representation of the steps involved in training the networks. First, a large amount of training data is generated through an off-line simulation process. A feature selection algorithm is then applied to select the relevant features for each network.
The selected features, after normalization, are used to train the feed forward neural network with the back-propagation algorithm. After training, the networks are evaluated on a different set of input-output data. The above steps are repeated for every selected

Fig. 2. Schematic representation of learning stage.

contingency. Once the networks are trained and tested, they are ready for estimating the L-index values at different operating conditions. These estimated L-index values for the different contingencies are ordered from highest to lowest for the purpose of contingency ranking. The details of the various stages of ANN model development are presented below.

4.1. Training data generation

In machine learning approaches, the training data is the only information available for building the model, so it should represent the complete operating conditions of the system. For contingency ranking model development, input-output patterns are generated as per the following procedure:

- First, a range of situations is generated by randomly perturbing the load at all buses between 70% and 140% of the base-case value and by adjusting the generator outputs in proportion to their base-case outputs.
- For each load-generation pattern, the pre-contingency line flows are obtained by solving the load flow equations using the Newton-Raphson algorithm.
- Also for each load-generation pattern, the single line outages specified in the contingency list are simulated sequentially, and the L-index values are evaluated by conducting an AC load flow.

4.2. Feature selection

Practical power systems have thousands of variables at the system level. If all the measured variables are used as inputs to


the neural network, the result is a large network and hence a long training time. To make the neural network approach applicable to large-scale power system problems, some dimensionality reduction is mandatory; moreover, networks with too many input variables suffer from the curse of dimensionality. In this work, the mutual information between the input variables and the output provides the basis for feature selection.

(i) Definition of mutual information

Consider a stochastic system with input X and output Y, where the discrete variable X has Nx possible values and Y has Ny possible values. The initial uncertainty about Y is given by the entropy H(Y), defined as

\[ H(Y) = -\sum_{j=1}^{N_y} P(y_j) \log P(y_j), \qquad (7) \]

where P(y_j) are the probabilities of the different values of Y. The amount of uncertainty remaining about the output Y after the input X is known is given by the conditional entropy H(Y|X), defined as

\[ H(Y|X) = -\sum_{i=1}^{N_x} P(x_i) \sum_{j=1}^{N_y} P(y_j|x_i) \log P(y_j|x_i), \qquad (8) \]

where P(y_j|x_i) is the conditional probability of output y_j given the input x_i. The difference H(Y) - H(Y|X) represents the uncertainty about the system output that is resolved by knowing the input. This quantity is called the mutual information between the random variables X and Y. Denoting it by I(Y;X), we may write

\[ I(Y;X) = H(Y) - H(Y|X). \qquad (9) \]

The mutual information is therefore the amount by which the knowledge provided by X decreases the average uncertainty about the random experiment represented by the variable Y. Mutual information is a symmetrical measure: the amount of information gained about Y after observing X equals the amount of information gained about X after observing Y. For the contingency selection problem under consideration, X corresponds to the pre-contingency line flows and Y corresponds to the post-contingency voltage stability index.

(ii) Mutual information for feature selection

For feature selection, the mutual information between each input variable and the model output is first calculated using Eqs. (7)-(9). A variable with a high mutual information value has a significant effect on the output quantity to be estimated, and is therefore selected as a feature for the neural network. Variables with low mutual information values are regarded as having only minor effects on the output and are not selected for network training. Once the mutual information values of the input variables have been evaluated, the variables are ranked, with the variable having the highest mutual information value at the top. The optimum number of features can then be selected by training the neural networks with a progressively increasing number of features until the minimum required accuracy is obtained.

4.3. Data normalization

During training of the neural network, input variables with large values may tend to suppress the influence of smaller ones. Also, if raw data is applied directly to the network, the simulated neurons risk reaching saturation; a saturated neuron produces very little or no change in output for a change in input, which affects training to a great extent. To avoid this, the raw data is normalized before being presented to the neural network. One way to normalize a variable x is by the expression

\[ x_n = \frac{x - x_{\min}}{x_{\max} - x_{\min}} \times \mathrm{range} + \mathrm{starting\ value}, \qquad (10) \]

where x_n is the normalized value and x_min and x_max are the minimum and maximum values of the variable x.

4.4. Training and testing of neural network

The neural network for each contingency consists of three layers, with one hidden layer. The input layer has as many neurons as features selected for that contingency, and the output layer has one neuron. The hidden-layer neurons use the hyperbolic tangent activation function, and the output neuron uses a linear activation function. A trial-and-error procedure is followed to select a suitable number of hidden neurons. The networks are trained on the training data set using the Levenberg-Marquardt algorithm. The generalization capability of each network is analyzed using a standard statistical method called independent validation. The method involves dividing the available data into a training set and a test set, after first randomizing the entire data set. The training data is then split into two partitions: the first is used to update the network weights, and the second is used to assess training performance periodically so as to avoid overtraining. The test data is then used to assess how well the network has generalized.

5. Simulation results

This section presents the details of the simulation carried out on the IEEE 30-bus system for contingency ranking using the proposed approach. The IEEE 30-bus system consists of 6 generators, 24 load buses and 41 transmission lines. The transmission line parameters and the generator cost coefficients are given in [12]. Thirty-eight single line outages were considered for voltage stability-based contingency ranking. Based on the procedure given in Section 4.1, a total of 1000


Fig. 3. Mutual information for the input variables in model 10-20.

input-output pairs were generated, with 750 used for training and 250 for testing. The pre-contingency line flows are taken as the input, and the post-contingency Lmax value is taken as the output of each network. As mentioned in Section 1, separate networks are developed for estimating the Lmax corresponding to each contingency. The Neural Network Toolbox in MATLAB was used to develop the ANN models. The details and performance of the ANN models developed for the ten most severe contingencies alone are presented here. For selecting the input features, the training data set is arranged in ascending order of the Lmax value. The output quantity is then divided into three groups, and the initial entropy is calculated using Eq. (7). The input variables are divided into five levels, and their conditional entropies are evaluated using Eq. (8). Next, the mutual information of each variable with respect to the output is computed using Eq. (9). The same procedure is repeated for all the models. For illustration, the mutual information between the input variables and the output quantity for the contingency (10-20) is shown in Fig. 3. The figure shows that only a few variables carry significant information about the output quantity, while the remaining variables carry very little. To select the optimum number of features for each network, the input variables are ranked by their mutual information value; the top three features are used to train the network, and this number is increased progressively until the minimum required accuracy is reached. The selected features for the 10 models are given in the third column of Table 1.
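The binning scheme just described (the output divided into three groups, each input into five levels) gives a concrete way to evaluate Eqs. (7)-(9). A minimal sketch follows; the data is synthetic and the helper function is an illustrative assumption, not code from the paper:

```python
import numpy as np

def mutual_information(x, y, x_bins=5, y_bins=3):
    """Estimate I(Y;X) = H(Y) - H(Y|X) (Eqs. (7)-(9)) by equal-width binning."""
    xi = np.digitize(x, np.histogram_bin_edges(x, bins=x_bins)[1:-1])
    yi = np.digitize(y, np.histogram_bin_edges(y, bins=y_bins)[1:-1])
    joint = np.zeros((x_bins, y_bins))
    for a, b in zip(xi, yi):
        joint[a, b] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1)  # P(x_i)
    py = joint.sum(axis=0)  # P(y_j)
    h_y = -np.sum(py[py > 0] * np.log(py[py > 0]))  # Eq. (7)
    h_y_given_x = 0.0
    for i in range(x_bins):
        if px[i] == 0:
            continue
        p_cond = joint[i] / px[i]  # P(y_j | x_i)
        p_cond = p_cond[p_cond > 0]
        h_y_given_x -= px[i] * np.sum(p_cond * np.log(p_cond))  # Eq. (8)
    return h_y - h_y_given_x  # Eq. (9)

rng = np.random.default_rng(1)
flow = rng.normal(size=2000)                              # synthetic line flow
lmax = 0.2 + 0.05 * flow + 0.01 * rng.normal(size=2000)   # correlated L-index
noise = rng.normal(size=2000)                             # irrelevant variable
print(mutual_information(flow, lmax) > mutual_information(noise, lmax))  # True
```

Ranking the candidate inputs by this score and keeping the top few, as done for the 10 models above, is exactly the selection step of Section 4.2.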
Table 1
Training and testing performance of the networks

S. no.  Line outage  Selected features                            Hidden nodes  Training time (s)  Testing error (mse)
1       1-2          Sl = 1, 4, 2, 7                              8             0.4740             4.0813 x 10^-4
2       1-3          Sl = 2, 4, 1                                 6             0.5107             3.3884 x 10^-4
3       4-12         Sl = 15, 12, 11, 14, 18, 28, 3, 27           2             2.7397             4.6548 x 10^-4
4       6-7          Sl = 9, 5, 8                                 3             0.6670             5.5749 x 10^-4
5       9-10         Sl = 14, 12, 27, 28, 15                      4             3.0573             5.0437 x 10^-4
6       10-20        Sl = 25, 24, 22, 18, 23, 15, 12              5             0.6980             3.8101 x 10^-4
7       28-27        Sl = 36, 41, 37, 38, 12, 14                  4             2.2447             4.6308 x 10^-4
8       4-6          Sl = 14, 3, 28, 6, 12, 18, 27, 15, 13, 41    8             3.1927             5.2365 x 10^-4
9       10-21        Sl = 14, 12, 28, 27, 11, 15, 18, 3, 13, 30   5             2.2863             4.5658 x 10^-4
10      19-20        Sl = 25, 24, 18, 15, 22, 12, 14, 11          5             2.4220             4.4423 x 10^-4

Table 2
Comparison of results

Line outage  ANN ranking  ANN Lmax  Load flow ranking  Load flow Lmax
1-2          1            0.2986    1                  0.2958
1-3          5            0.1766    5                  0.1751
4-12         2            0.2050    2                  0.2069
6-7          8            0.1597    8                  0.1599
9-10         3            0.1907    3                  0.1899
10-20        4            0.1745    4                  0.1755
28-27        6            0.1635    6                  0.1655
4-6          9            0.1553    9                  0.1546
10-21        10           0.1375    10                 0.1361
19-20        7            0.1621    7                  0.1623

Table 3
Performance of the unified network

Number of inputs: 41
Number of hidden nodes: 80
Training time (s): 359.78
Testing error (mse): 7.5226 x 10^-4

The selected features, after normalization, along with the output are used to train each network. The networks are trained with the Levenberg-Marquardt algorithm until the mean square error reaches 5 x 10^-3. After training, the generalization capability of each network is evaluated on the test data. The performance of all 10 networks during the training and testing phases is presented in Table 1. The table shows that all the networks have learned the input-output relationship with the reduced input features, and their generalization capability is also satisfactory. To evaluate the performance of the developed networks in contingency ranking, the Lmax values estimated by the networks for one particular load condition, along with the resulting contingency ranks, are presented in Table 2. For comparison, the actual Lmax values calculated by the Newton-Raphson load flow algorithm are also presented. The results show agreement between the actual contingency ranking and the ranking based on the ANN outputs. The 10 ANN models took 0.15 s to estimate the Lmax values, whereas the Newton-Raphson load flow algorithm took 2.3 s. This


shows that neural networks can be used for fast contingency ranking in real-time applications. For comparison, a unified neural network model with the pre-contingency power flows as input and the Lmax values of all 38 contingencies as outputs was developed. The network was trained and tested with the data set used in the previous case, and its performance is given in Table 3. Comparing Tables 1 and 3, the individual networks take less time to train than the unified network, and their generalization capability is also better.

6. Conclusion

In this paper, an artificial neural network-based approach is presented for voltage stability-based contingency ranking. A set of feed forward neural networks has been trained to map the non-linear relationship between the pre-contingency operating conditions and the post-contingency stability index. The problem of feature selection is addressed through the mutual information between the input variables and the output stability index. With the incorporation of the feature selection method, accurate ANN models can be developed in a short period of time, even for large-scale power systems. The effectiveness of the proposed method has been demonstrated through contingency ranking in the IEEE 30-bus system. Test results show that the several small networks trained with the selected features are

better in performance in contingency ranking than the single large network trained with all the features.

References
[1] C. Taylor, Power Systems Voltage Stability, McGraw-Hill, New York, 1993.
[2] P. Kessel, H. Glavitsch, Estimating the voltage stability of power systems, IEEE Trans. Power Syst. 1 (3) (1986) 346-354.
[3] A. Tiranuchit, R.J. Thomas, A posturing strategy against voltage instability in electric power systems, IEEE Trans. Power Syst. 3 (1) (1988) 87-93.
[4] V. Ajjarapu, C. Christy, The continuation power flow: a tool for steady state voltage stability analysis, IEEE Trans. Power Syst. 7 (1) (1992) 416-423.
[5] B. Gao, G.K. Morison, P. Kundur, Voltage stability evaluation using modal analysis, IEEE Trans. Power Syst. 7 (4) (1992) 1529-1542.
[6] A.A. El-Keib, X. Ma, Applications of artificial neural networks in voltage stability assessment, IEEE Trans. Power Syst. 10 (4) (1995) 1890-1896.
[7] H.P. Schmidt, Application of artificial neural networks to the dynamic analysis of the voltage stability problem, IEE Proc. Gener. Transm. Distrib. 144 (4) (1997) 371-376.
[8] H.B. Wan, A.O. Ekwue, Artificial neural network based contingency ranking method for voltage collapse, Electr. Power Energy Syst. 22 (2000) 344-354.
[9] D. Devaraj, Computational intelligent techniques in power system security analysis, Ph.D. dissertation, Department of Electrical Engineering, I.I.T., Chennai, India, July 2001.
[10] P. Vas, Artificial-Intelligence-Based Electrical Machines and Drives, Oxford University Press, 1999.
[11] F.M. Ham, I. Kostanic, Principles of Neurocomputing for Science and Engineering, McGraw-Hill International Edition, 2001.
[12] O. Alsac, B. Stott, Optimal load flow with steady-state security, IEEE Trans. Power App. Syst. PAS-93 (1974) 745-751.
