
Induction Motor Speed Control System using

Artificial Neural Network Based PID Controller


Ananda Kumar A
Lecturer, Electrical & Electronics Department
BITS Pilani
Pilani, Rajasthan, India - 333031
aak@pilani.bits-pilani.ac.in

Ravi Teja Nidumolu
Student, Electrical & Electronics Department
BITS Pilani
Pilani, Rajasthan, India - 333031
rtnidumolu@gmail.com

Utsuk Sharma
Student, Electronics & Instrumentation Department
BITS Pilani
Pilani, Rajasthan, India - 333031
utsuksharma@yahoo.com

Abstract: PID control is a well-established technology with extensive application in many fields. The PID controller is simple in structure, robust, and easy to understand, and PID control schemes have been widely used for industrial processes that can be represented by nonlinear systems. Neural networks have great capability in solving complex mathematical problems, since they have been proven able to approximate any continuous function to arbitrary accuracy; hence they have received considerable attention in the field of process control. Due to the complexity of modern industrial processes and the increasing nonlinearity, time variation and uncertainty of practical production processes, the conventional PID controller can no longer meet every requirement. This paper briefly introduces the theoretical foundation and learning algorithm of the neural network, and designs a PID-based indirect vector speed control system for an induction motor together with a simulation model based on an artificial neural network. Finally, a new scheme for neural-network based PID controllers is presented. The effectiveness of the proposed control scheme for nonlinear systems is numerically evaluated using a simulation example.

Keywords: PID controller; Artificial Neural Network; Back Propagation Algorithm; two-layer feed forward network; induction motor; indirect vector speed control; Ziegler-Nichols law.

I. INTRODUCTION

A. Induction Motor

Induction motors are the most commonly used motors in many kinds of industry and in household appliances such as refrigerators and water pumps. They are simple and robust and can operate in almost any environmental condition. Induction motors are cheaper and, unlike DC motors and synchronous motors, practically maintenance free owing to the absence of brushes, commutators and slip rings, which makes them operable even in polluted and explosive environments. One of their main disadvantages, however, is that speed control of induction motors is difficult.

In a high performance system, the motor speed should closely follow a specified reference trajectory regardless of load disturbances, parameter variations and model uncertainties. To achieve high performance, field-oriented control of the induction motor drive is employed [1]. However, the control design of such a system plays a decisive role in its performance. The decoupling characteristics of a vector-controlled induction motor are adversely affected by parameter changes in the motor. The speed control issues of the induction motor are traditionally handled by fixed-gain PI and PID controllers, but fixed-gain controllers are very sensitive to parameter variations, load disturbances, etc. This particular problem is dealt with in this paper, and a non-traditional method, a neural network based speed control system, is proposed.

B. Indirect Field-Oriented Induction Motor Drive

The indirect vector control method is essentially the same as direct vector control, except that the unit vector is generated in an indirect manner using the measured rotor speed \omega_r and the slip speed \omega_{sl}. The following dynamic equations are taken into consideration to implement the indirect vector control strategy [2]:

\theta_e = \int \omega_e \, dt = \int (\omega_r + \omega_{sl}) \, dt = \theta_r + \theta_{sl}    (1)

The rotor circuit equations are

\frac{d\psi_{dr}}{dt} + \frac{R_r}{L_r} \psi_{dr} - \frac{L_m}{L_r} R_r i_{ds} - \omega_{sl} \psi_{qr} = 0    (2)

\frac{d\psi_{qr}}{dt} + \frac{R_r}{L_r} \psi_{qr} - \frac{L_m}{L_r} R_r i_{qs} + \omega_{sl} \psi_{dr} = 0    (3)

For decoupling control, \psi_{qr} = 0, so the total rotor flux \psi_r directs along the d^e axis:

\frac{L_r}{R_r} \frac{d\psi_r}{dt} + \psi_r = L_m i_{ds}    (4)

The slip frequency can be calculated as

\omega_{sl} = \frac{L_m R_r}{\psi_r L_r} i_{qs}    (5)

and the electromechanical torque developed as

T_e = \frac{3}{2} \frac{P}{2} \frac{L_m}{L_r} \psi_r i_{qs}    (6)
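As a numerical illustration (not part of the original paper), the short Python sketch below evaluates the steady-state quantities implied by Eqs. (4)-(6) for given d-q current commands, using the motor data listed in the Appendix; the current values themselves are arbitrary.

import math

# Motor constants from the Appendix; Lr is assumed to be Lm + rotor leakage (0.2037 + 0.0006 H)
Rr = 1.083      # rotor resistance [ohm]
Lr = 0.2043     # rotor inductance [H] (assumed)
Lm = 0.2037     # magnetising inductance [H]
P  = 4          # number of poles (2 pole pairs)

def indirect_vector_quantities(i_ds, i_qs):
    """Steady-state relations from Eqs. (4)-(6).

    In steady state d(psi_r)/dt = 0, so Eq. (4) gives psi_r = Lm * i_ds.
    Eq. (5) then yields the slip frequency and Eq. (6) the torque.
    """
    psi_r = Lm * i_ds                                   # rotor flux (Eq. 4, steady state)
    w_sl  = (Lm * Rr) / (psi_r * Lr) * i_qs             # slip frequency [rad/s] (Eq. 5)
    T_e   = 1.5 * (P / 2) * (Lm / Lr) * psi_r * i_qs    # developed torque [N*m] (Eq. 6)
    return psi_r, w_sl, T_e

psi_r, w_sl, T_e = indirect_vector_quantities(i_ds=5.0, i_qs=10.0)   # illustrative commands
print(f"psi_r = {psi_r:.3f} Wb, w_sl = {w_sl:.2f} rad/s, T_e = {T_e:.2f} N*m")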
C. Artificial Neural Networks

A human body, or any living species, is an example of a complex biological neural network. Humans have a highly interconnected set of biological neurons that facilitate reading, breathing, motion, thinking and practically everything that we do or can do. Each biological neuron, a rich assembly of tissue and chemistry, has the complexity, if not the speed, of any man-made system. Some of our neural structure is formed at birth; the rest is learnt by exposure to the surroundings and by specific responses to specific signals such as smell and taste, i.e., by training or learning [3]. It is known that all biological neural functions, including memory, are stored in the neurons and their interconnections. Learning, training or experience can therefore be viewed as the establishment of new connections, or the modification of existing connections, between neurons, evolving a being better capable of responding to a situation.

Theorizing about and implementing such a complex biological neural network (humans have ~10^11 neurons) artificially is next to impossible, but it is possible to construct a small set of simple artificial neurons. Networks of these artificial neurons can be trained to perform useful functions; hence the commonly used name Artificial Neural Networks (ANN).

In this paper the focus is on a two-layer perceptron model of a Back Propagation Neural Network (BPNN) used to predict and control the speed of an induction motor.

D. PID Controller

The PID controller, Fig 1, consists of three separate constant parameters and is accordingly sometimes called three-term control: the proportional, integral and derivative values, denoted P, I and D. These values can be interpreted in terms of time: P depends on the present error, I on the accumulation of past errors, and D is a prediction of future errors based on the current rate of change. The weighted sum of these three actions is used to adjust the process via a control element, in this case to regulate the speed of the induction motor.

Fig 1: Traditional PID Controller
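For reference, the standard continuous-time form of this three-term law, which the paper does not write out explicitly, is

u(t) = K_p \left( e(t) + \frac{1}{T_i} \int_0^{t} e(\tau)\, d\tau + T_d \frac{de(t)}{dt} \right)

where e(t) is the error between the reference and the measured speed, and K_p, T_i and T_d are the proportional gain, integral time and derivative time whose discretised counterparts appear in Eq. (10) below.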

E. Discrete PID

The analysis for designing a digital implementation of a PID controller in an ANN requires the standard form of the PID controller to be discretised. Approximations of the first-order derivative are made by backward finite differences, and the integral term is discretised with a sampling time \Delta t, which leads to the incremental form

u(t_k) = u(t_{k-1}) + K_p \left[ \left( 1 + \frac{\Delta t}{T_i} + \frac{T_d}{\Delta t} \right) e(t_k) - \left( 1 + \frac{2 T_d}{\Delta t} \right) e(t_{k-1}) + \frac{T_d}{\Delta t} e(t_{k-2}) \right]    (10)

As discussed above, and following this discrete PID equation, we feed u and e (which is r - y), with the respective delays, into the ANN and train it with the BP algorithm against target states known from experience. The k_p, k_i and k_d values are thereby determined, and the ANN so built can replace the traditional PID block and mimic the functionality of a normal PID.

Fig 2: ANN Based PID Controller
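A minimal sketch of this incremental (velocity-form) discrete PID update follows; the gains and sampling time are illustrative assumptions, since the paper does not list its numeric controller settings.

# Incremental discrete PID following Eq. (10): the controller stores the two
# previous errors and the previous control output.
class DiscretePID:
    def __init__(self, kp, ti, td, dt, u0=0.0):
        self.kp, self.ti, self.td, self.dt = kp, ti, td, dt
        self.u = u0           # previous control output u(t_{k-1})
        self.e1 = 0.0         # e(t_{k-1})
        self.e2 = 0.0         # e(t_{k-2})

    def step(self, r, y):
        e = r - y             # current error e(t_k)
        du = self.kp * (
            (1.0 + self.dt / self.ti + self.td / self.dt) * e
            - (1.0 + 2.0 * self.td / self.dt) * self.e1
            + (self.td / self.dt) * self.e2
        )
        self.u += du          # u(t_k) = u(t_{k-1}) + increment
        self.e2, self.e1 = self.e1, e
        return self.u

# Example usage with assumed tuning values and sampling time:
pid = DiscretePID(kp=1.2, ti=0.5, td=0.05, dt=0.001)
u = pid.step(r=150.0, y=148.7)   # reference speed vs. measured speed [rad/s]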

F. PID Tuning

Tuning the PID requires the application of the Ziegler-Nichols rule, a heuristic PID tuning rule that attempts to produce good values for the three PID gain parameters. In this method, the I and D terms are first set to zero and the proportional gain is increased until the output of the loop oscillates. As the proportional gain is increased the system becomes faster, but care must be taken not to make it unstable. Once P has been set to obtain a desirably fast response, the integral term is increased to stop the oscillations. The integral term reduces the steady state error but increases overshoot; some amount of overshoot is always necessary for a fast system so that it can respond to changes immediately. The integral term is tweaked to achieve a minimal steady state error. Once P and I have been set to give a fast control system with minimal steady state error, the derivative term is increased until the loop is acceptably quick in reaching its set point. Increasing the derivative term decreases overshoot and yields higher gain with stability, but makes the system highly sensitive to noise. A code sketch of the resulting closed-loop tuning table is given below.
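The following short function is a sketch of the classical Ziegler-Nichols closed-loop (ultimate-cycle) table, assuming the ultimate gain Ku and oscillation period Tu found by the experiment described above; the paper does not report its measured Ku and Tu, so the example values are purely illustrative.

def ziegler_nichols_pid(ku, tu):
    """Classical Ziegler-Nichols closed-loop rule for a full PID:
    Kp = 0.6*Ku, Ti = Tu/2, Td = Tu/8."""
    kp = 0.6 * ku
    ti = tu / 2.0
    td = tu / 8.0
    return kp, ti, td

# Illustrative ultimate gain and period obtained from the oscillation test:
kp, ti, td = ziegler_nichols_pid(ku=4.0, tu=0.08)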

II. THE TWO-LAYER PERCEPTRON MODEL

The neural network used has the structure sketched in Fig 1.3. The d input units feed the network with the signals p_i, and each of the M hidden units receives all the input signals, each multiplied by a weight. The summed input a_j of the j-th hidden unit is calculated from the input signals as

a_j = \sum_{i=0}^{d} w_{ji} p_i    (7)

where p_i, 1 \le i \le d, are the inputs and p_0 := 1, so that w_{j0} is the bias of the j-th hidden unit; in this way the bias is absorbed into the weights. The activation of the k-th output unit is

x_k = g\left( \sum_{j=0}^{M} w_{kj} \, g(a_j) \right)    (8)

Here too the bias is absorbed into the weights, by setting g(a_0) = 1, which results in w_{k0} being the bias of the k-th output unit.

Fig 1.3: The Two Layer Perceptron

Fig 1.4: Neural Network Structure
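A compact sketch of this forward pass (Eqs. (7)-(8)) is shown below, using NumPy and the hyperbolic tangent activation (Fig 6), with the biases absorbed into the weight matrices as described above; the array shapes are assumptions chosen only for illustration.

import numpy as np

def forward(p, W_hidden, W_output):
    """Two-layer perceptron forward pass with absorbed biases.

    p        : input vector of length d
    W_hidden : (M, d+1) weight matrix, column 0 holding the hidden biases
    W_output : (K, M+1) weight matrix, column 0 holding the output biases
    """
    p_ext = np.concatenate(([1.0], p))           # p_0 := 1  (Eq. 7)
    a = W_hidden @ p_ext                         # summed hidden inputs a_j
    z = np.tanh(a)                               # hidden activations g(a_j)
    z_ext = np.concatenate(([1.0], z))           # g(a_0) := 1 for the output bias
    x = np.tanh(W_output @ z_ext)                # output activations x_k (Eq. 8)
    return x

# Example: d = 6 delayed signals, M = 10 hidden units, a single control output
rng = np.random.default_rng(0)
x = forward(rng.normal(size=6), rng.normal(size=(10, 7)), rng.normal(size=(1, 11)))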

III. TRAINING OF A NEURAL NETWORK

The process of tuning the neural network weights in order to achieve a certain kind of performance of the neural network is called training. In order to implement a function using an ANN we have to train it with input data and corresponding target data. An error function E, or cost function J (such as the mean squared error, mse), is specified, and the training algorithm searches for the weights that result in a minimum of the error function. Training thus refers to the adjustment of weights and biases in order to implement a function while minimising, or eliminating, the error.

Fig 5: Neural Network Training Tool

Training of the above neural network, Fig 1.3, is done by iteratively simulating the system consisting of the controller and the plant, and then evaluating the resulting time series of plant outputs (y_1, y_2, \ldots, y_N) and control signals (u_1, u_2, \ldots, u_N) with respect to a given cost function

J = f(y_1, y_2, \ldots, y_N, u_1, u_2, \ldots, u_N)    (9)
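One common concrete choice for this cost function, stated here only as an assumption since the paper does not write out its exact J, penalises the squared tracking error and, optionally, the control effort.

import numpy as np

def cost(y, u, r, rho=0.0):
    """Illustrative cost of Eq. (9): mean squared tracking error
    plus an optional control-effort penalty weighted by rho."""
    y, u, r = map(np.asarray, (y, u, r))
    return np.mean((r - y) ** 2) + rho * np.mean(u ** 2)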

An optimisation algorithm is used to minimise the cost function through this iterative process; one such algorithm, the Back Propagation Algorithm, is used in this study. The generic outline of any such optimisation algorithm is the following (a code sketch of this loop is given after Fig 7):
1. run a simulation of the system
2. evaluate the result with respect to the given cost function
3. adjust the weights according to the optimisation algorithm
4. go back to step 1 as long as the stop criterion is not met

Fig 6: Hyperbolic tangent used as the activation function for the hidden units

Fig 7: Performance of Neural Network during training phase
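A minimal sketch of this train-by-simulation loop follows, assuming hypothetical helpers simulate_system (returning the y and u series produced with the current weights) and backprop_update (returning adjusted weights); neither name comes from the paper.

def train(weights, reference, cost, max_iter=500, tol=1e-4):
    """Generic optimisation loop: simulate, evaluate the cost,
    adjust the weights, and repeat until the stop criterion is met."""
    for _ in range(max_iter):
        y, u = simulate_system(weights, reference)            # step 1: run simulation (hypothetical helper)
        J = cost(y, u, reference)                             # step 2: evaluate cost function
        if J < tol:                                           # step 4: stop criterion
            break
        weights = backprop_update(weights, y, u, reference)   # step 3: adjust weights (hypothetical helper)
    return weights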

IV. BACK PROPAGATION ALGORITHM

The back propagation algorithm is a supervised learning algorithm for multilayer feed forward neural networks. Since it is a supervised learning algorithm, both input and target output vectors are provided for training the network. The error at the output layer is calculated from the network output and the target output, and is then back-propagated to the intermediate layers, allowing the incoming weights of these layers to be updated [4]. The algorithm is based on the error-correction learning rule.

Basically, the error back-propagation process consists of two passes through the different layers of the network: a forward pass and a backward pass. In the forward pass, an input vector is applied to the network and its effect propagates through the network, layer by layer [5]; finally, a set of outputs is produced as the actual response of the network. During the forward pass the synaptic weights of the network are all fixed. During the backward pass, on the other hand, the synaptic weights are all adjusted in accordance with the error-correction rule: the actual response of the network is subtracted from the desired target response to produce an error signal, and this error signal is then propagated backward through the network, against the direction of the synaptic connections, hence the name error back-propagation. The synaptic weights are adjusted so as to move the actual response of the network closer to the desired response [6].
V. CONSTRUCTING AN ARTIFICIAL NEURAL NETWORK

The number of neural network outputs is fixed for this controller, as the only output is the control signal. The tuneable parameters are the number of hidden units and the number of input units.

The neural network controller used here takes as inputs the outputs of a lag network, i.e. delayed signals. The number of input units therefore divides into three distinct groups (r, u and y): we can adjust the number of inputs originating from the reference signal, the plant output and the control signal respectively. The neural network is trained with the goal of reducing the mean squared error (mse) below a certain level.

Fig 8: Interconnections in ANN
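As an illustration of this input layout, the sketch below assembles the network input vector from the most recent samples of the reference r, the control signal u and the plant output y; the choice of two delays per signal is an assumption, not a value taken from the paper.

import numpy as np

def controller_input(r_hist, u_hist, y_hist, n_delays=2):
    """Build the ANN input vector from delayed reference, control and output
    samples, e.g. [r(k), r(k-1), u(k-1), u(k-2), y(k), y(k-1)] when u_hist ends at u(k-1)."""
    return np.concatenate([
        r_hist[-n_delays:][::-1],   # newest reference samples first
        u_hist[-n_delays:][::-1],   # past control signals
        y_hist[-n_delays:][::-1],   # measured plant outputs
    ])

p = controller_input(np.array([1.0, 1.0, 1.2]),
                     np.array([0.0, 0.4, 0.5]),
                     np.array([0.0, 0.9, 1.1]))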

VI. RESULTS

Many controller design schemes using neural networks have been proposed in the past, because the neural network is an effective technique for nonlinear approximation and enables us to deal with nonlinear control systems [7] - [9]. The schemes for neural-net based controllers fall into two main groups. In one group, the control input is directly generated from the output signals of the neural network. In the other, which was discussed here, the control parameters are tuned by the neural network, that is, the PID gains are generated in the output layer.

Fig 9: Circuit Diagram of Indirect Vector Speed Control based Induction Motor

Fig 10: Expected output vs. actual output

Both controller types require a proper choice of model order, and for the direct neural network controller the choice of training scenarios is a key factor in obtaining a good controller. The proposed controller is trained with a load that varies slowly over time, increasing and then decreasing steadily. The controller output is close to ideal in such cases, but requires a little more time to settle when the load is applied suddenly; the steady state error is almost negligible.

VII. CONCLUSIONS

The BP neural network is a commonly used neural network structure. It can approximate any nonlinear function with arbitrary precision, has good approximation performance and a simple structure, and is thus an excellent neural network. Therefore a BP neural network algorithm is proposed, and based on it an implementation scheme for a PID control system is presented. The results show that the scheme can improve the convergence speed, and the trained BP neural network is also strongly adaptive, which improves the performance of the PID controller. The rise time and fall time, as can be seen from Fig. 10 - Fig. 12, have dropped considerably, and the steady state error is zero.

Fig 11: Output comparison between ref signal, ANN output and PID output during rise in ref signal

Fig 12: Output comparison between ref signal, ANN output and PID output during fall in ref signal

VIII. APPENDIX

A. Specification of the squirrel cage induction motor (SCIM): 3 phase, 2 pole pairs, 5 HP, 1750 rpm, 460 V, 60 Hz.
B. Stator: Rs = 1.115 Ω, Ls = 0.6 mH
C. Rotor: Rr = 1.083 Ω, Lr = 0.6 mH
D. Lm = 0.2037 H, J = 0.02 kg m²

REFERENCES

[1] Martin T. Hagan, Howard B. Demuth, Mark Beale, Neural Network Design, China Machine Press, 2002.
[2] F. Blaschke, "The principle of field orientation as applied to the new trans-vector closed-loop control system for rotating-field machines," Siemens Rev., vol. 34, no. 3, pp. 217-220, May 1972.
[3] B. K. Bose, Modern Power Electronics and AC Drives, Prentice-Hall, Englewood Cliffs, New Jersey, 1986.
[4] Martin T. Hagan, Howard B. Demuth, Mark Beale, Neural Network Design, China Machine Press, 2002.
[5] Himavathi, Anitha, Muthuramalingam, "Feedforward Neural Network Implementation in FPGA Using Layer Multiplexing for Effective Resource Utilization," IEEE Transactions on Neural Networks, 2007.
[6] M. Hajek, Neural Networks, 2005.
[7] R. Rojas, Neural Networks, Springer-Verlag, Berlin, 1996.
[8] Daniel Eggert, Neural Network Control, Technical University of Denmark, Informatics and Mathematical Modelling, 2003, pp. 1-100.
[9] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamic systems using neural networks," IEEE Trans. Neural Networks, vol. 1, no. 2, pp. 1-27, 1990.
[10] W. T. Miller III, R. S. Sutton, and P. J. Werbos, Neural Networks for Control, MIT Press, Cambridge, 1990.
[11] M. M. Gupta and D. H. Rao, Neuro-Control Systems: Theory and Applications, IEEE Press, New York, 1990.