
IEEE Transactions on Power Systems, Vol. 9, No. 1, February 1994


APPLICATION OF ARTIFICIAL NEURAL NETWORKS
IN POWER SYSTEM SECURITY AND VULNERABILITY ASSESSMENT
Qin Zhou, Student Member          Jennifer Davidson, Member          A. A. Fouad, Fellow

Abstract: In a companion paper the concept of system vulnerability is introduced as a new framework for power system dynamic security assessment. Using the TEF method of transient stability analysis, the energy margin ΔV is used as an indicator of the level of security, and its sensitivity to a changing system parameter p (∂ΔV/∂p) as an indicator of its trend with changing system conditions. These two indicators are combined to determine the degree of system vulnerability to contingent disturbances in a stability-limited power system. Thresholds for acceptable levels of the security indicator and its trend are related to the stability limits of a critical system parameter (plant generation limits). Operating practices and policies are used to determine these thresholds.

In this paper the artificial neural network (ANN) technique is applied to the concept of system vulnerability within the recently developed framework, for fast pattern recognition and classification of system dynamic security status. A suitable topology for the neural network is developed, and the appropriate training method and input and output signals are selected.

The procedure developed is successfully applied to the IEEE 50-generator test system. Data previously obtained by heuristic techniques are used for training the ANN.

Key Words: Artificial neural networks, system security, system vulnerability, dynamic security assessment, transient stability.

1. INTRODUCTION

In recent years artificial neural networks (ANNs) have been proposed as an alternative method for solving certain difficult power system problems where the conventional techniques have not achieved the desired speed, accuracy, or efficiency [1]. An ANN is taught by example, as opposed, for example, to an expert system, which is taught by rules. ANN methodology allows complex relationships between an initial state and a final state to be determined by an iterative mathematical algorithm, instead of by experts.

In a stability-limited power system, security determination requires analysis of the dynamic system behavior under prescribed sequences of events, known as contingencies. These contingencies, which the power system must withstand, are specified by the reliability councils under whose jurisdiction the power system is operated. The outcome of the system dynamic analysis depends on many factors: the type, location, and severity of the disturbance, and the "robustness" of the post-disturbance network. In other words, stability analysis involves the analysis of complex patterns of system behavior. This is the motivation for applying the artificial neural network technique to dynamic security assessment of a stability-limited system.

In a companion paper [2] the concept of system vulnerability is presented as a new framework for power system dynamic security assessment. This new concept combines the level of security and its trend with system condition (or with a critical system parameter) into one indicator of dynamic security called "system vulnerability." Using the transient energy function (TEF) method [3] as the tool for transient stability analysis, the level of security is indicated by the transient energy margin (ΔV), and its trend with a changing system parameter p is indicated by the sensitivity of the energy margin to changes in that parameter (∂ΔV/∂p). Vulnerability assessment depends on the values of ΔV and ∂ΔV/∂p. While a very low (or negative) value of ΔV, indicating a critically stable (or unstable) condition, may be considered unacceptable, a relatively high positive value of ΔV does not necessarily represent a secure condition if the trend in critical system parameters is such that, under expected changing system conditions, the system may become insecure. System vulnerability offers a framework for assessing system dynamic security according to the levels of both ΔV and ∂ΔV/∂p.

The framework for dynamic security assessment through the concept of system vulnerability is implemented by establishing thresholds for acceptable levels of the security indicator (ΔV) and its trend (∂ΔV/∂p), and by relating these thresholds to the stability limits of a critical system parameter. In the work reported in this paper the critical system parameter is the plant generation limit.

The ANN technique is used for fast pattern recognition and classification of dynamic system security status. The topology of the proposed neural network, the method of training, and the selection of the output and input signals are discussed. The multi-layered perceptron with the back-propagation algorithm is chosen as our ANN since it has been effective in solving many practical problems [4].

The proposed technique is successfully applied to the IEEE 50-generator test system.
93 WM 183-4 PWRS. A paper recommended and approved by the IEEE Power System Engineering Committee of the IEEE Power Engineering Society for presentation at the IEEE/PES 1993 Winter Meeting, Columbus, OH, January 31 - February 5, 1993. Manuscript submitted August 24, 1992; made available for printing November 4, 1992.

2. ARTIFICIAL NEURAL NETWORK MODEL

2.1 Basic Elements

In general, ANNs require three main functions [4-5]: (1) an organized topology of interconnected processing elements, (2) a suitable training or learning algorithm, and (3) a method of recalling information. The following elements are key to the operation of the ANN.

Processing elements (PEs):

Processing elements, often called nodes or neurons, are where most of the computing is done. A typical configuration of a processing element is shown in Fig. 1. The input signals a_i, i = 1, 2, ..., n, are weighted by the weights ω_ij and operated upon by the threshold function f(x) to produce the output b_j given by

    b_j = f\left( \sum_{i=1}^{n} \omega_{ij}\, a_i + \theta_j \right)        (1)

where θ_j is a bias factor (taken as 1.0), and

    f(x) = \left( 1 + e^{-x} \right)^{-1}        (2)

Fig. 1. Schematic diagram of the j-th processing element

Architecture:

ANN architectures are formed by connecting the PEs into layers and linking them with weighted interconnections. The architecture used here is the multi-layered perceptron (see below).

Learning:

Learning is accomplished by changing the values of the weights to achieve the desired results, i.e., the correct classification. The learning process adopted is supervised learning, in which the desired output is known. The training algorithm used is the back-propagation algorithm. A description of the ANN model is given below.

2.2 The Layered Perceptron Model

Fig. 2. Multi-layered perceptron

Figure 2 is a schematic diagram of the multi-layered perceptron used as the ANN model for power system dynamic security and vulnerability classification. It is trained by numerical data. It operates in two modes: training and test. In the training mode, a set of representative training data is used to adjust the weights of the neural network. Once these weights have been determined, the neural network is said to be trained. In the test mode, the trained neural network is stimulated by test data. Usually the training and test data are different sets. The response of the perceptron should then be representative of the data by which it was trained.

2.3 Back-Propagation Algorithm [4-5]

The basic idea of this algorithm is to use the sensitivity of the error with respect to a weight to modify that weight. For a weight ω_ij(ℓ) in the ℓ-th layer, this can be written as

    \Delta\omega_{ij}(\ell) = -\eta \, \frac{\partial E}{\partial \omega_{ij}(\ell)}        (3)

where η is the step size, and E is the total error, given by

    E = \sum_{m} E^{m}        (4)

E^m is the mean square error corresponding to the m-th data pair, given by

    E^{m} = \frac{1}{2} \sum_{i} \left( t_i^{m} - s_i^{m} \right)^{2}        (5)

where t_i^m and s_i^m are the desired and computed outputs of the i-th node in the output layer.

Adjustment to the weights is made in successive steps in response to the training data pairs, using equation (3). When all the training data have been used, the cycle is repeated starting from the first training data pair. This process is repeated until an acceptably low error results in the output.

The mathematical model of back-propagation is illustrated on the basis of the chain rule of partial derivatives. For the ℓ-th layer, if the output of the i-th node is s_i(ℓ) and the sum of the inputs to the same node is u_i(ℓ), then

    s_i(\ell) = f\big(u_i(\ell)\big)        (6)

where f is given by (2). The derivative of E^m with respect to ω_ij(ℓ) is given by

    \frac{\partial E^{m}}{\partial \omega_{ij}(\ell)} = \frac{\partial E^{m}}{\partial s_i(\ell)} \, \frac{\partial s_i(\ell)}{\partial u_i(\ell)} \, \frac{\partial u_i(\ell)}{\partial \omega_{ij}(\ell)}        (7)

Defining δ_i(ℓ) by

    \delta_i(\ell) = \frac{\partial E^{m}}{\partial s_i(\ell)}        (8)

we can show that [1]

    \frac{\partial E^{m}}{\partial \omega_{ij}(\ell)} = \delta_i(\ell) \, f'\big(u_i(\ell)\big) \, s_j(\ell - 1)        (9)

If the layered perceptron has L layers, then

    \delta_i(L) = \frac{\partial E^{m}}{\partial s_i(L)} = s_i^{m} - t_i^{m}        (10)
or simply the difference between the desired and computed outputs of the neural network. For an intermediate layer ℓ we have

    \delta_i(\ell) = \sum_{k} \delta_k(\ell + 1) \, f'\big(u_k(\ell + 1)\big) \, \omega_{ki}(\ell + 1)        (11)

Thus δ_i(L-1) can be evaluated from δ_i(L), δ_i(L-2) from δ_i(L-1), and so on, all the way to the input.
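As an illustration of equations (1)-(11), the following sketch implements a multi-layered perceptron with sigmoid nodes and the back-propagation weight adjustment. It is not the implementation used in this study (the reported results were obtained with the NeuralWorks Professional II package, Section 4.2.2); the layer sizes, step size, weight initialization, and bias handling shown here are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    # Eq. (2): f(x) = (1 + exp(-x))^(-1)
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    """Multi-layered perceptron trained with the back-propagation rule of
    Section 2.3 (illustrative sketch only, not the NeuralWorks implementation)."""

    def __init__(self, sizes, step=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.step = step                                   # eta in Eq. (3)
        self.W = [rng.normal(0.0, 0.5, (n_out, n_in))      # one weight matrix per layer
                  for n_in, n_out in zip(sizes[:-1], sizes[1:])]
        self.b = [np.zeros(n_out) for n_out in sizes[1:]]  # bias terms (trained here; a simplification)

    def forward(self, a):
        """Eqs. (1), (2), (6): propagate one input pattern through all layers."""
        s = [np.asarray(a, dtype=float)]
        for W, b in zip(self.W, self.b):
            u = W @ s[-1] + b            # u_i(l): weighted sum of the node inputs
            s.append(sigmoid(u))         # s_i(l) = f(u_i(l))
        return s

    def train_pair(self, x, t):
        """One weight adjustment for a single training pair (x, t), Eqs. (3)-(11)."""
        s = self.forward(x)
        delta = s[-1] - t                                  # Eq. (10): delta at the output layer
        for l in reversed(range(len(self.W))):
            fprime = s[l + 1] * (1.0 - s[l + 1])           # f'(u) = f(u)(1 - f(u)) for the sigmoid
            delta_prev = self.W[l].T @ (delta * fprime)    # Eq. (11): delta for the preceding layer
            self.W[l] -= self.step * np.outer(delta * fprime, s[l])   # Eqs. (3), (9)
            self.b[l] -= self.step * (delta * fprime)
            delta = delta_prev
        return 0.5 * np.sum((t - s[-1]) ** 2)              # Eq. (5): error for this pair
```

For the sigmoid of equation (2), f'(u) = f(u)[1 - f(u)], so the derivative in the sketch is computed directly from the stored node outputs; repeated sweeps of train_pair over the training data correspond to increasing the training times discussed in Section 4.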
3. SELECTION OF ARTIFICIAL NEURAL NETWORK

3.1 ANN Architecture

As pointed out in Section 2, the ANN model used is the multi-layered perceptron with the back-propagation algorithm. The adopted architecture comprises the following layers: input layer, hidden layer (or layers), and output layer. A description of these layers is given below; an illustrative instantiation is sketched after the list.

Input layer: Includes as many neurons as needed for the desired input information, i.e., for ΔV and for ∂ΔV/∂p (or related information - see Section 3.2 below); for the test system used in this study the input layer is made up of about 30 neurons.

Hidden layer(s): The number of hidden layers and the number of neurons in each layer depend on the complexity of the classification pattern in the problem at hand. Initial investigations used one hidden layer with two neurons. However, after some experimentation, two hidden layers, with six nodes in the first and two nodes in the second, produced better results, and it is this latter architecture that was used to produce the final results given in Tables 5 and 6.

Output layer: Has one neuron; its (desired) output is either 1 (vulnerable) or 0 (not vulnerable); acceptable classifier results are ≥ 0.8 and ≤ 0.2, respectively.
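Using the sketch class from Section 2.3, this topology and the output-interpretation rule could be set up as follows. The 31-entry input layer (one ΔV entry plus up to 30 UEP-angle entries, see Section 3.2) is an assumption, since the paper states only that the input layer contains about 30 neurons.

```python
# Illustrative instantiation of the adopted topology (assumes the MLP sketch of Section 2.3).
# Input size 31 (1 dV entry + 30 UEP-angle slots) is an assumption.
net = MLP([31, 6, 2, 1])

def classify(ann_output):
    """Interpret the single output neuron using the acceptance thresholds above."""
    if ann_output >= 0.8:
        return 1          # vulnerable
    if ann_output <= 0.2:
        return 0          # not vulnerable
    return None           # borderline output; classification is inconclusive
```

A borderline output between 0.2 and 0.8, such as the one discussed in Section 4.4.1, is simply left unclassified in this sketch.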
3.2 Input Signals

The framework for vulnerability assessment requires information on the security indicator, which is the energy margin ΔV, and its sensitivity to changes in a critical system parameter p [2]. Thresholds for acceptable levels of these indicators are developed, and the system vulnerability status depends on whether the values of ΔV and ∂ΔV/∂p, for a given contingency, are above or below these thresholds.

When an ANN is used for classifying complex patterns, a considerable amount of data is used in the training and testing of the network. This provides an incentive to simplify the computations involved, especially for on-line dynamic security assessment applications. When the TEF method is used for transient stability assessment, a considerable amount of additional computation is needed to obtain the sensitivity information. The question arises: can the sensitivity data be substituted for by other information obtained in the basic TEF procedure?

It has been observed by the authors that there is a strong correlation between the angles of the advanced generators in the UEP*, and the values of the sensitivity of those generators to changes in plant generation, the critical system parameter used in this investigation. This correlation was investigated in [6] for two operating conditions of the IEEE 50-generator test system (see Section 4). Tables 1 and 2, taken from this reference, show the UEP angles and the sensitivity matrix, respectively, for the stressed operating condition, for nine fault locations (shown in the first row).

Comparing the data in Tables 1 and 2, we note that there is strong correlation between the UEP angles of the advanced generators and their sensitivities to plant generation changes. The severely disturbed generators, shown in Table 1 as having UEP angles ≥ 1.57 rad, also have negative sensitivities with substantial magnitudes.

From these results, the authors have decided to use the UEP angles of the advanced generators as input signals to the ANN, instead of the sensitivity information. Therefore, the input neurons receive signals from ΔV and the UEP angles of the advanced generators. It should be remembered, however, that in the training process the ANN accomplishes the additional task of finding the complex relationship between the UEP angle inputs and the output results based on the sensitivity information.

The selected ANN is, therefore, as shown schematically in Fig. 2, with the following features. The input signals for a given training data pair include one value of ΔV and as many UEP angles as there are advanced generators in the UEP. The output is the computed vulnerability assessment (ideally close to 1.0 or 0.0). The hidden layer(s) and the number of neurons per layer are determined by experience.

4. VALIDATION STUDIES

4.1 Test System

The test system used for this study is a 50-generator IEEE test system. This system is characterized by a large block of generation delivered from power station A. Figure 3 is a one-line diagram of the area of power stations A and B of this system. Data for this system are given in [7].

Fig. 3. 50-generator IEEE system - power stations A & B area

* In the TEF method the UEP is the "controlling unstable equilibrium point" for the specific disturbance (e.g., fault) under investigation. The potential energy at that point gives the value of the critical energy against which the system transient energy at the end of the disturbance is compared for stability assessment. At the UEP, the severely disturbed generators (also called critical generators) have UEP angles greater than 90°. For many systems and for most operating conditions only a small number of generators have advanced UEP angles; this situation is usually called "plant mode." In some heavily loaded, or stressed, conditions the disturbance (if severe enough) may lead to separation of a large number of generators from the system; in this system condition a large number of generators will have advanced UEP angles.
Table 1. UEP Angles (Stressed Case)
Station A Generation = 2 x 1300 MW
(UEP angles, in rad, of generators 1-50 for nine fault locations - buses 6, 12, 1, 2, 10, 25, 61, 63, and 7; data taken from [6].)

Table 2. Energy Margin Sensitivity (∂ΔV/∂p)
Station A Generation = 2 x 1300 MW
(Sensitivity values of generators 1-50 for the same nine fault locations; data taken from [6].)

For the network conditions given in [7], except for the Station A load variation given below, the system dynamic security and vulnerability are investigated for faults in the HV network and for changes in plant generation; the latter is used as the critical system parameter. The data for system vulnerability, used for training the neural network, are obtained by the procedure developed in [2].

4.2 Training of the ANN

Using the test system shown in Fig. 3, the neural network discussed in Section 3 was trained to classify the system vulnerability status for seven operating conditions. In all these operating conditions the total system load is held constant, and the generation at Station B is held at 4000 MW. The generation at Station A (generators 9 and 25) is varied from 2 x 700 MW to 2 x 1300 MW in steps of 2 x 100 MW, with the total generation held (nearly) constant by adjustment of remote generation. However, it is to be noted that the change in Station A generation considerably changes the flow in the transmission network in the area shown in Fig. 3. This alters the degree of stress experienced and is reflected in the system response to disturbances. In the study reported here the critical system parameter is the plant generation change.

4.2.1 Vulnerability classification

For the system vulnerability classification, the neural network works as a classifier. The training set includes the following data (an illustrative encoding of one training pair is sketched after the list).

1. Desired output: The actual system vulnerability status as obtained by the framework and procedure developed in [2]. The results are obtained using ΔV and ∂ΔV/∂p information and translated into stability limits of plant generation. Classification is based on whether the actual data are above or below derived thresholds of acceptable values. Based on this information, the system vulnerability status is designated by
       1 = vulnerable
       0 = not vulnerable

2. Inputs:
   a. ΔV value (one entry for each training pair)
   b. UEP angles of advanced generators (up to 30 entries for each training pair)
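A minimal sketch of how one such (input, desired output) pair might be assembled is given below. The zero-padding of unused angle slots and the helper name make_training_pair are assumptions, since the paper does not state how a variable number of advanced-generator angles is mapped onto the fixed input layer.

```python
import numpy as np

MAX_ANGLES = 30   # up to 30 UEP-angle entries per training pair

def make_training_pair(delta_v, uep_angles_rad, vulnerable):
    """Assemble one (input, desired-output) pair for the vulnerability classifier.

    delta_v        -- energy margin for the contingency
    uep_angles_rad -- UEP angles of the advanced generators (variable length)
    vulnerable     -- vulnerability status obtained by the procedure of [2]
    """
    angles = np.zeros(MAX_ANGLES)                  # assumption: unused slots padded with 0.0
    angles[:len(uep_angles_rad)] = uep_angles_rad[:MAX_ANGLES]
    x = np.concatenate(([delta_v], angles))        # 1 + 30 = 31 input entries
    t = np.array([1.0 if vulnerable else 0.0])     # 1 = vulnerable, 0 = not vulnerable
    return x, t
```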
4.2.2 Computer package and training process

The multi-layered perceptron is trained for classification of system vulnerability using the computer package NeuralWorks Professional II [8]. The equations used in this software are slight modifications of the equations presented in Section 2.3.

The training process uses a random ordering of the data.

4.3 Effect of Training Times

The effect of the number of training times is illustrated for the case of Station A generation of 2 x 700 MW. The results are shown in Table 3.

Table 3. Results of Training the ANN versus the Training Times
Station A Generation = 2 x 700 MW
(ANN output for each of the nine fault buses, at three increasing numbers of training times, compared with the desired output; for the fault at bus 7, with desired output 1.0, the ANN output rises from about 0.60 to 0.94, while for the remaining buses, with desired output 0.0, it falls from about 0.10 to 0.03.)

The results in Table 3 show that as we increase the training times we get better training results, and when the training times reach N = 180 we get the correct classification, i.e., an output of ≥ 0.8 when the desired output is 1.0 and ≤ 0.2 when the desired output is 0.0.

4.4 ANN Training Results

There are seven operating conditions and nine contingencies (faults) used in the training and testing procedure. Thus, a total of 63 pairs of training data are used.

The system vulnerability status, which gives the desired output of the ANN, is given in Table 4. Again we point out that these results are obtained using the procedure developed in [2].

Table 4. System Vulnerability Status Matrix*
(Desired output for each Station A generation level and each fault location; * 1 = vulnerable, 0 = not vulnerable.)

For the large number of operating conditions used in this test, it is felt that a large number of training times is needed to ensure correct system vulnerability classification.

4.4.1 Same training and test sets

For this test N = 1000 is used. A sample of the ANN results for four operating conditions and nine fault locations is shown in Table 5. The results for the remaining operating conditions are similar to those shown.

Noting that in the training of the neural network an output of less than 0.2 is considered 0.0, and an output equal to or greater than 0.8 is considered 1.0, the results in Table 5 show that the ANN output correctly predicts the system vulnerability in all but one of the cases. For Station A generation of 2 x 700 MW and a fault at bus 7, the ANN output is 0.773662 instead of ≥ 0.8. The value of 0.773662 indicates that the classification lies close to the border between the two classes. It is possible that with additional training data, which is "close" to this particular case, the ANN could be trained to classify this data point correctly (see below).

4.4.2 Different training and test sets

In this series of tests the 63 training data pairs are divided into two sets: a training set and a test set. The idea is to test the ANN's ability to classify data which it had not seen before.

a) Training set: In this set Station A generation is scheduled at 1400, 1600, 2000, 2200, and 2600 MW, respectively; thus this set includes 45 data pairs.

b) Test set: In this set Station A generation is scheduled at 1800 and 2400 MW; thus this set includes 18 data pairs.

For this test the number of training times is increased to N = 12000.

The results are shown in Table 6.

The results clearly show that the training of the neural network has been successful for a variety of operating conditions and disturbances for the IEEE 50-generator test system. When the training times are sufficiently large, correct classification of system vulnerability has been achieved by the multi-layered perceptron even for previously unseen data.
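The split described in Section 4.4.2 can be sketched as follows; pairs_by_station_a_mw is a hypothetical container holding the nine training pairs of each operating condition, keyed by the per-generator Station A schedule, and is not a structure defined in the paper.

```python
# Training conditions: Station A at 2 x 700, 800, 1000, 1100, 1300 MW -> 5 x 9 = 45 pairs
# Test conditions:     Station A at 2 x 900 and 2 x 1200 MW           -> 2 x 9 = 18 pairs
TRAIN_MW_PER_UNIT = [700, 800, 1000, 1100, 1300]
TEST_MW_PER_UNIT = [900, 1200]

def split_pairs(pairs_by_station_a_mw):
    """Separate the 63 (input, desired-output) pairs into training and test sets."""
    train = [p for mw in TRAIN_MW_PER_UNIT for p in pairs_by_station_a_mw[mw]]
    test = [p for mw in TEST_MW_PER_UNIT for p in pairs_by_station_a_mw[mw]]
    return train, test
```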
Table 5. ANN Results - Same Training & Test Sets
Training Times N = 1000
(Desired output and ANN output for each fault location (bus no.) at Station A generation levels of 2 x 700, 2 x 800, 2 x 1100, and 2 x 1200 MW; the only case not classified within the thresholds is the fault at bus 7 with Station A generation of 2 x 700 MW, where the ANN output is 0.773662.)

Table 6. ANN Results - Different Training & Test Sets
Training Times N = 12000
(Desired output and ANN output for each fault location (bus no.) at the training-set Station A generation levels of 2 x 700, 2 x 800, 2 x 1000, 2 x 1100, and 2 x 1300 MW.)
5. DISCUSSION AND CONCLUSIONS

In this paper the artificial neural network (ANN) technique is applied to the concept of system vulnerability within the recently developed framework, for fast pattern recognition and classification of system dynamic security status. A suitable topology for the neural network, based on the multi-layered perceptron, is developed, and an appropriate training method based on the back-propagation algorithm is used. The input signals for the ANN are the energy margin ΔV and the UEP angles of the advanced generators; the latter are used instead of the energy margin sensitivities to reduce the computation burden. The ANN output is the vulnerability status: 1 for vulnerable and 0 for nonvulnerable. For the training set the output information, obtained by the technique developed in [2], is provided.

The proposed ANN technique is applied to the IEEE 50-generator test system. Seven operating conditions, ranging from unstressed to very stressed network conditions, and nine fault locations were analyzed. In all the cases the correct vulnerability classification is obtained, even for previously unseen cases.

In the work reported in this paper the emphasis has been on investigating whether the complex dynamic behavior of a stability-limited power system can be captured, for classification purposes, by the ANN technique. No attempt was made to optimize the performance of a given architecture as exhibited in such factors as the number of layers, the number of nodes per layer, the number of training times required, etc. In addition, the authors have not explored whether, in the presence of more than one changing system parameter (i.e., in addition to the change in generation), other ANN architectures or other ANNs (e.g., the Boltzmann machine) may be better suited to the problem at hand. These questions will be addressed in future investigations.

From the results presented in this paper the following conclusions can be made.

1. The multi-layered perceptron with the back-propagation algorithm is successful in correctly classifying the complex patterns of system dynamic security and vulnerability based on the TEF method of transient stability analysis.

2. There is good correlation between the UEP angles of the advanced generators and the energy margin sensitivities. Thus, the use of the UEP angles as input signals to the ANN has been successful in reducing the computation burden without sacrificing the accuracy of the results.

3. The data presented seem to indicate that the prospect for ANN use in on-line power system dynamic security and vulnerability assessment is quite realistic.

Acknowledgment

This work was supported in part by the Iowa State University Electric Power Research Center.

References

[1] El-Sharkawi, M. A., et al. "Neural Networks and Their Application to Power Engineering." Control and Dynamic Systems 41, Academic Press, 1991.

[2] Fouad, A. A., Qin Zhou, and V. Vittal. "System Vulnerability as a Concept to Assess Power System Dynamic Security." Submitted to the IEEE Power Engineering Society; in the review process.

[3] Fouad, A. A., and V. Vittal. Power System Transient Stability Analysis Using the Transient Energy Function Method. Prentice Hall, 1992.

[4] Simpson, P. K. Artificial Neural Systems. Pergamon Press, New York, 1990.

[5] Lippmann, R. P. "An Introduction to Computing with Neural Nets." IEEE ASSP Magazine, April 1987: 4-22.

[6] Zhou, Qin. "Sensitivity and UEP Analysis of the Transient Energy Function Method." Proceedings of the First Midwest Electro-Technology Conference, Ames, Iowa, April 10-11, 1992: 3-6.

[7] Vittal, V. "Transient Stability Test Systems for Direct Stability Methods." IEEE Committee Report, IEEE Transactions on Power Systems (Feb. 1992): 37-42.

[8] NeuralWorks Professional II. NeuralWare, Inc., Pittsburgh, Penn., 1989: (a) Neural Computing, (b) User's Guide.

Biographies

Qin Zhou is currently working on his Ph.D. in the Department of Electrical Engineering and Computer Engineering at Iowa State University. He received his B.S. in 1983 and his M.S. in 1986 from Tsinghua University, Beijing, People's Republic of China. From 1986 to 1989 he was a Lecturer of Electrical Engineering at Tsinghua University. His research areas include power system security assessment and the application of artificial neural networks in power systems. He is a member of the Tau Beta Pi Honor Society.

Jennifer Davidson (M '89) is an Assistant Professor in the Department of Electrical Engineering and Computer Engineering at Iowa State University. Dr. Davidson received her B.A. degree in Physics from Mount Holyoke College, and her M.S. and Ph.D. degrees in Mathematics from the University of Florida. Her research interests include image processing, image algebra, neural networks, and computer vision.

Abdel-Aziz A. Fouad, Professor of the Electrical Engineering and Computer Engineering Department at Iowa State University, received the B.S. degree in Electrical Engineering (1950) from the University of Cairo, the M.S. (1953) from the University of Iowa, and the Ph.D. (1956) from Iowa State University. He is a Fellow of the Institute of Electrical and Electronics Engineers, and is the 1990 Anson Marston Distinguished Professor of Engineering at Iowa State University.
DISCUSSION

DILEEP K. JAIN, D. P. KOTHARI (Centre for Energy Studies, IIT Delhi, India), and P. S. SATSANGI (Electrical Engineering Department, IIT Delhi, India): We are happy to note the application of artificial neural networks to power system disciplines. While working on a comprehensive model of an urban energy system, we noticed that the large volume of outputs generated from our system dynamics model could be utilised to develop an intelligent quick-response model through the use of artificial neural networks. Our ANN model also uses the multi-layered perceptron with the back-propagation learning algorithm. Actual values of all input and output variables were normalized to be within 0 and 1. We observed the following behaviour with ANN simulations: (1) ANN results were, in some cases, three to four times the actual outputs when the latter had values close to 0 or 1. (2) Most of the ANN results varied by 5-10% when the actual outputs were in the range of 0.15 to 0.85. Like econometric models, we feel that one cannot prove the existence of any causality between input and output variables based on successful matching of actual outputs with ANN outputs.

In the light of the above observations, we would like to seek clarification on the following issues: (1) Since many input values lie within ±3, what range of whole numbers is acceptable to NeuralWorks Professional II? (2) In case the desired output has to be exact, near 0 or 1 (instead of > 0.8 and < 0.2), is it possible to achieve that? If yes, how much additional time is needed? (3) From the ANN results, how could the authors claim that UEP angles are better suited than the sensitivity of the energy margin? This conclusion seems to be made outside of the ANN analysis.

We look forward to sharing the authors' experience of the power-system application of the multi-layered perceptron in energy policy analysis, and congratulate them on a nice piece of work.

Manuscript received March 10, 1993.

QIN ZHOU, JENNIFER DAVIDSON, and A. A. FOUAD: We wish to thank the discussors for their interest in the paper and for the points they raised. We would like to offer the following comments in response.

Regarding NeuralWorks Professional II, it takes input data between -9999 and +9999, and the software accepts the numeric inputs as given, i.e., without normalization. The UEP values used as input are (coincidentally) given in radians. The angles > 1.57 radians (> 90°) represent critical or severely disturbed generators; similarly, generators with angles < 1.57 radians are not severely disturbed. We could (and, if it is more desirable, would) use angles in degrees instead.

Concerning the discussors' second question, we wish to point out that the activation function for the ANN is a sigmoid function. It maps all the real numbers onto a range strictly between 0 and 1. We are interested in an output which is classified near either end of the 0/1 scale. The ≥ 0.8 and ≤ 0.2 thresholds were chosen empirically based on previous experience with this ANN model.

The discussors seem to think that our use of the UEP angles instead of the actual sensitivity information was based on ANN design considerations. Actually this choice was made because of the relative ease with which UEP angles are obtained in the TEF method of transient stability analysis. Our experience with the TEF method shows that the UEP angle information gives an indication of the relative stress on the system comparable to that given by the energy margin sensitivity, which involves a great deal more computation. This relative degree of stress is implied in the system vulnerability concept.

Finally we wish to stress that in the work reported upon in the paper the authors have primarily attempted to demonstrate a concept. No attempt was made to optimize the ANN parameters or architecture.

Manuscript received March 29, 1993.
