DECEMBER 2012
I declare that this thesis entitled “Multi-objective Optimization PID Controller
Parameters using Genetic Algorithm” is the result of my own research except as
cited in the references. This thesis has not been accepted for any degree and is not
concurrently submitted in candidature of any other degree.
Signature : ……………………………………………
Name : MOHD RAHAIRI BIN RANI
Date : 15 DECEMBER 2012
To my late father
To my beloved family
ACKNOWLEDGEMENTS
All praise and thanks to Allah, who by His will has given me the opportunity to complete this thesis. I am heartily thankful to my supervisor, Dr. Hazlina binti Selamat, and my co-supervisor, Dr. Hairi bin Zamzuri, for their support and guidance from the initial to the final stage of my research. Only Allah can repay their kindness. Thanks also to the folks at CAIRO for their teamwork and good company; they kept me in good spirits throughout my study.
DECLARATION
DEDICATION
ACKNOWLEDGEMENTS
ABSTRACT
ABSTRAK
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
LIST OF ABBREVIATIONS
LIST OF APPENDICES
1 INTRODUCTION
1.1 Background of the Problem
1.2 Importance of the Work
1.3 Research Objectives
1.4 Scope of Work
1.5 Research Contribution
1.6 Thesis Outline
2 PID CONTROLLER
2.1 Introduction
2.2 Conventional Tuning Approaches
2.3 Stochastic Tuning Approaches
2.4 Multi-objective PID Controller Tuning
2.5 Summary
3 MULTI-OBJECTIVE EVOLUTIONARY ALGORITHMS
3.1 Introduction
3.2 Basic Evolutionary Algorithm
3.2.1 Initialization
3.2.2 Objective Function and Fitness Assignment
3.2.3 Selection
3.2.4 Crossover
3.2.5 Mutation
3.2.6 Elitism
3.2.7 Termination
3.2.8 GA Loop
3.3 Studies of Multi-objective Evolutionary Algorithm
3.3.1 Weighted Sum Approaches
3.3.2 Population Based Approaches
3.3.3 Pareto Based Approaches
3.3.4 Review of Diversity Preservation in MOEA
3.4 Elitist Non-dominated Sorting Genetic Algorithm (NSGA-II)
3.5 Summary
4 METHODOLOGY
4.1 Introduction
4.2 The Global Criterion Genetic Algorithm (GCGA)
4.2.1 Global Ranking Fitness Assignment
4.2.2 Binary Tournament Selection
4.2.3 Simulated Binary Crossover
4.2.4 Polynomial Mutation
4.2.5 Elitism through Non-dominated Sorting, Crowding Distance and k-Nearest Neighbour (k-NN)
4.2.6 Complete Loop
4.3 Rotational Inverted Pendulum (RIP)
4.3.1 Mathematical Representation of RIP
4.3.2 Integration of GCGA in the PID Tuning of RIP
4.4 Summary
5 RESULT AND DISCUSSION
5.1 Introduction
5.2 Performance Evaluation of GCGA
5.2.1 ZDT and DTLZ Test Problems
5.2.2 Convergence Metric
5.2.3 Diversity Metric
5.2.4 Comparison of GCGA and NSGA-II in the ZDT4 Test Problem
5.3 Procedure of Choosing the Desired Solution from the Final Pareto Front from GCGA Optimization
5.4 Simulation Results
5.5 Real RIP Results
5.6 Summary
REFERENCES
APPENDIX A-D
LIST OF TABLES
5.8 PID parameters and their objective values for the three extreme solutions from GCGA
INTRODUCTION
Figure 1.1: Block diagram of the controller, actuator, plant and sensor in a
feedback or closed-loop system
The signal value sent by the controller depends entirely on the parameters in the controller. The adjustment of the controller parameters, often called controller tuning, is a critical element in the controller design process.
Despite the simplicity of its structure and its status as the most popular type of controller employed, the difficulty of tuning a PID controller depends mainly on the plant's behaviour (Åström and Hägglund, 2001). Nonlinearity, open-loop instability, under-actuation and the system's order all contribute to the difficulty of the tuning process (Zhuang and Atherton, 1993). Therefore, this research uses a rotational inverted pendulum (RIP) to demonstrate the difficulty of tuning PID control parameters for a highly nonlinear and under-actuated system. The under-actuated property of the RIP (two degrees of freedom, one actuator) also requires tuning two PID controllers simultaneously, which adds to the difficulty of the task.
Under such conditions, the existing PID tuning methods are incapable of tuning the combination of PID parameters. This research therefore proposes an algorithm that automatically gives the user PID parameters optimized for objectives such as steady-state error, settling time and overshoot.
Despite the popularity of PID controllers as the most practical controllers for control engineers, Ender (1993) reports that 30% of installed PID controllers operate in manual mode and 65% of the automatic controllers are poorly tuned. Moreover, a study by Van Overschee et al. (1997) shows that 80% of PID controllers are badly tuned and 25% operate under default factory settings, meaning they are not tuned at all. More recently, O'Dwyer (2009) states that the tuning methods proposed in the literature have not had a significant impact on industrial practice. These findings imply that tuning PID controllers remains a vexing problem for operators, perhaps because the available tuning rules are poorly matched to their tuning problems in industry.
Hence, this research provides an alternative approach to tuning PID controllers. The algorithm developed in this research automatically provides designers with optimized PID parameters while requiring fewer tuning rules.
This research comprises several focus areas in order to achieve its objectives.
i. Developing a multi-objective optimization algorithm, based on the multi-objective genetic algorithm (MOGA) approach, to optimally tune PID controller performance measures such as settling time, steady-state error and overshoot.
ii. Analysing the optimization algorithm using several test problems borrowed from the literature and comparing it against a well-known algorithm.
iii. Applying the optimized PID controller from simulation to the real plant in order to validate the algorithm in a real implementation.
Chapter 3 discusses the literature review for evolutionary algorithm (EA), the
application of EA in the controller tuning problem and the multi-objective genetic
algorithms (MOGAs). Previous work done by the researchers in the area of MOGAs
will be used as the basis for the proposed algorithm in Chapter 4.
Chapter 5 analyzes the GCGA through several popular test problems and
compares its performances with the well known Non-dominated Sorting Genetic
Algorithm II (NSGA-II). This chapter also shows the optimization work of the PID
controller using GCGA in the simulation and real RIP.
PID CONTROLLER
2.1 Introduction
The parallel architecture of the PID controller (hereafter simply the PID controller) sums the error signal e(t) after multiplication by the PID gains Kp, Ki and Kd to produce the input signal u(t).
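The parallel PID law described here can be sketched in discrete time. The class below is an illustrative sketch, not the thesis's MATLAB/Simulink implementation; the class name and any gain values used with it are mine.

```python
class PID:
    """Parallel-form PID: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # accumulated integral of the error
        self.prev_error = 0.0    # previous error, for the derivative term

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)
```

In a control loop, one would call `u = controller.update(r - y)` once per sampling period, where r is the reference and y the measured output.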
Table 2.1: PID parameter equations for Ziegler and Nichols tuning.
Kp = 0.6 Ku
Ki = 2 Kp / Tu
Kd = Kp Tu / 8
where Ku is the ultimate gain and Tu the ultimate oscillation period.
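The rules in Table 2.1 translate directly into code. A small sketch; the Ku and Tu values in the usage line are made up for illustration:

```python
def ziegler_nichols_pid(ku, tu):
    """Classic Ziegler-Nichols ultimate-cycle PID rules (Table 2.1):
    parallel-form gains from the ultimate gain Ku and ultimate period Tu."""
    kp = 0.6 * ku
    ki = 2.0 * kp / tu
    kd = kp * tu / 8.0
    return kp, ki, kd

# Hypothetical plant: sustained oscillation at Ku = 10, Tu = 2 s.
kp, ki, kd = ziegler_nichols_pid(ku=10.0, tu=2.0)
```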
The same limitations apply to another popular PID tuning approach, the Cohen and Coon (1953) method. This method requires the user to model the plant as a first-order-plus-dead-time process. The steps to perform Cohen and Coon tuning are as follows:
i. Wait until the process reaches steady state.
ii. Introduce a step change and wait until the process settles at a new value.
iii. Calculate the process gain, K, based on the slope of the change made in Step 2.
iv. Based on Step 2, fit an approximate first-order process with a time constant τ and a delay τdead.
v. Calculate the PID controller gains using Table 2.2.
Table 2.2: PID parameter equations for Cohen and Coon tuning.
Kp = (τ / (K τdead)) (4/3 + τdead / (4τ))
Ki = (13 + 8 τdead/τ) / (τdead (32 + 6 τdead/τ))
Kd = 4 τdead / (11 + 2 τdead/τ)
s² + s(1/T1 + K̄p Kp/T1) + K̄p Kp Ki/T1 = 0,  (2.2)

where K̄p denotes the process gain. This can then be related to the general second-order model

s² + 2ζωs + ω² = 0.  (2.3)

And thus we can obtain

Kp = (2ζωT1 − 1)/K̄p,  Ki = ω²T1/(2ζωT1 − 1).  (2.4)
A PID controller of the form

Gp(s) = Kp(1 + s/Ki + s²Kd/Ki) / (s/Ki),  (2.6)

can arbitrarily place all closed-loop poles. The system characteristic equation becomes

s³ + s²(1/T1 + 1/T2 + K̄p Kp Kd/(T1T2)) + s(1/(T1T2) + K̄p Kp/(T1T2)) + K̄p Kp Ki/(T1T2) = 0.  (2.7)

This can be compared with the following general third-order characteristic equation (Åström and Hägglund, 1988)

(s + αω)(s² + 2ζωs + ω²) = 0  (2.8)

to get the PID parameters, as for the PI case earlier in Equations (2.1) to (2.4).
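Equation (2.4) gives the PI gains directly from a chosen damping ratio ζ and natural frequency ω for a first-order plant with gain K̄p and time constant T1. A sketch of the computation (the numerical inputs in the test are illustrative, not from the thesis):

```python
def pi_pole_placement(k_plant, t1, zeta, omega):
    """PI gains placing the closed-loop poles of a first-order plant
    k_plant/(1 + s*T1) at the roots of s^2 + 2*zeta*omega*s + omega^2
    (Eq. 2.4)."""
    kp = (2.0 * zeta * omega * t1 - 1.0) / k_plant
    ki = omega**2 * t1 / (2.0 * zeta * omega * t1 - 1.0)
    return kp, ki
```

Note that the formula requires 2ζωT1 > 1; otherwise the computed Kp is non-positive, i.e. the desired poles are too slow for this plant.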
Tuning PID controllers for high-order and complex plants is difficult with conventional approaches. The control community has therefore shifted its attention to stochastic approaches, which provide a heuristic search process for the tuning problem.
Figure 2.2: The structure of stochastic optimization techniques for PID controller
MSE = ∫₀^∞ (r − y(t))² dt  (2.10)

ITAE = ∫₀^∞ t |r − y(t)| dt  (2.11)
The IAE is the most commonly used objective function in PID controller optimization work. However, control design requirements include several performance measures besides error-based indices, such as settling time and overshoot.
Huang and Lam (1997) reported a multiple-performance-measure approach, combining the overshoot (OS), settling time (ts) and mean square error (MSE) of the time-domain response into a single equation. The objective function f used to evaluate the performance is given by

f = α₁ OS + α₂ ts + α₃ MSE,  (2.12)

where α₁, α₂ and α₃ are the weights for each performance element in the objective function. In this work, they employed a GA as the optimizer to tune the PID parameters of a Heating, Ventilating and Air Conditioning (HVAC) system.
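A scalarized objective like (2.12) can be evaluated from a sampled step response. The sketch below is illustrative only: the default weights, the 2% settling band and the settling-time convention (last sample outside the band) are my assumptions, not Huang and Lam's exact choices.

```python
import numpy as np

def weighted_objective(t, y, r, a1=1.0, a2=1.0, a3=1.0, band=0.02):
    """f = a1*OS + a2*ts + a3*MSE, in the spirit of Eq. (2.12).
    t, y: sampled time and output arrays; r: constant setpoint."""
    e = r - y
    os_ = max(0.0, y.max() - r)                 # overshoot above the setpoint
    mse = np.mean(e**2)                         # mean square error
    outside = np.abs(e) > band * abs(r)         # samples outside the band
    ts = t[outside][-1] if outside.any() else t[0]
    return a1 * os_ + a2 * ts + a3 * mse
```

A GA would minimize this single scalar; the Pareto-based methods of Chapter 3 remove the need to pick the weights a1, a2, a3 in advance.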
2.5 Summary
3.1 Introduction
In this research, the proposed algorithm is developed based on the GA; thus most of the literature review in this chapter focuses on the GA in detail.
Over the last decades, the GA has been used as a search and optimization tool in various fields such as science, economics and engineering. The main factors behind this success are its broad applicability, ease of use and global perspective (Goldberg, 1989).
3.2.1 Initialization
3.2.3 Selection
'Selection' is the process by which individual chromosomes are chosen for the genetic operations that form the offspring population. The objective of the selection operator is to make copies of good solutions and remove bad solutions while keeping the population size constant.
In proportionate (roulette wheel) selection, the probability of selecting individual i with fitness fi is

pi = fi / fsum,  (3.1)
where fsum is the total fitness of all individuals in the population. In other words, the
individual with the higher fitness has the higher chance to be selected.
The selection operator cannot create any new solutions in the population.
Once the mating pool is filled with selected individuals, the genetic operators
(crossover and mutation) will take place to produce the new solutions.
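Selection by Equation (3.1) can be sketched as a roulette wheel: a random pointer falls on individual i with probability fi/fsum. A minimal sketch (maximization convention, fitness values non-negative):

```python
import random

def roulette_select(population, fitnesses):
    """Pick one individual with probability fitness_i / fitness_sum (Eq. 3.1)."""
    f_sum = sum(fitnesses)
    pick = random.uniform(0.0, f_sum)   # random pointer on the wheel
    running = 0.0
    for individual, f in zip(population, fitnesses):
        running += f
        if running >= pick:
            return individual
    return population[-1]               # guard against floating-point round-off
```

Filling the mating pool is then just repeated calls to this function.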
3.2.4 Crossover
In order to preserve some good solutions during the evolution process, not all solutions in the mating pool undergo crossover. If pc is the crossover probability, then 100pc% of the solutions in the mating pool are used in the crossover operation and the remaining 100(1 − pc)% are simply copied to the next operation.
3.2.5 Mutation
Like the crossover point, the mutation point is chosen randomly, and the allele associated with the mutation point is changed. Not every allele is mutated; whether one is depends on the mutation probability, pm. The mutation operation alters the strings (solutions) in the hope of creating better ones; since the operation is stochastic, improvement is not guaranteed.
3.2.6 Elitism
One of the features of a good search algorithm is the ability to retain the best solutions found during its run. Since the GA works with a population of solutions, it must maintain a number of the best solutions in the process. This mechanism, sometimes called elitism, is an important component of GAs and can be implemented in various ways. In a steady-state EA, elitism can be introduced through a simple mechanism: after two offspring are created using the genetic operators, they are compared with both of their parents, and the best two of these four solutions are carried into the next generation.
Elitism can also be implemented globally in the generation sense. Once the
offspring population is generated, it will be combined with the current population.
Thereafter, the N best solutions are selected as the next generation. This type of
elitism will be used in our proposed algorithm.
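The generational elitism just described — merge the current population with its offspring and keep the N best — is compact in code. A single-objective sketch; the multi-objective version used later replaces the fitness sort by non-dominated sorting and crowding:

```python
def elitist_survival(parents, offspring, fitness, n):
    """Combine the current population with its offspring and keep the
    n best individuals (minimization: smaller fitness is better)."""
    combined = parents + offspring
    combined.sort(key=fitness)
    return combined[:n]
```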
3.2.7 Termination
3.2.8 GA Loop
With the main components of the GA explained, we are now ready to review MOGAs in the literature.
Typically, a multi-objective optimization problem (MOP) has no single optimal solution; instead, a set of optimal solutions arises from the optimization of conflicting objectives. These 'trade-off' solutions are known as Pareto optimal solutions. A set of decision variables x* ∈ ℱ is said to be Pareto optimal if there does not exist another x ∈ ℱ such that fi(x) ≤ fi(x*) for all i = 1, 2, ..., k and fj(x) < fj(x*) for at least one j. Here all the objectives are to be minimized and ℱ denotes the feasible region of the problem (the region where all constraints are satisfied).
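The definition above rests on the dominance relation between objective vectors; a minimal sketch for the minimization convention used throughout:

```python
def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb (minimization):
    fa is no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))
```

A point is Pareto optimal exactly when no feasible point dominates it under this test.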
f = Σ_{k=1}^{M} wk fk,  (3.5)

where

Σ_{k=1}^{M} wk = 1,  (3.6)

M is the number of objectives, wk is the weight for the kth objective and fk is the fitness value for the kth objective.
Hajela and Lin (1992) introduced an algorithm based on this approach in which the weights are not fixed; instead, they are encoded together with the chromosomes. Hence the individuals (chromosomes) are evaluated under potentially different weight combinations. Figure 3.7 shows the selection mechanism of the Hajela and Lin Genetic Algorithm (HLGA) in a two-dimensional objective space.
The two areas represent different selection priorities over the objectives. If f1 is the selection priority, the selected solution is the one with a low value of f1 (in the minimization case), and so on.
The population-based approaches have largely been criticised for their biased behaviour toward particular objectives. Nevertheless, these methods have been found useful in some applications for handling constraints in the optimization (Coello et al., 2007).
The right and upper dotted lines from an individual represent the boundary of the domination area for that individual. The other individuals located in this domination area are said to be dominated by it. For the sake of clarification, consider three individuals a, b and c, where individual a dominates individuals b and c, and individual b dominates individual c. The MOGA fitness assignment is given by

fitness = 1 + number of superiors,  (3.7)

where the term 'superiors' refers to the other individuals that dominate the particular individual. Therefore, the fitness of individual a is '1' because no other solution dominates it. Individual b has a fitness of '2' (dominated only by individual a) and individual c has a fitness of '3' (dominated by individuals a and b).
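The fitness rule (3.7) only needs a dominance count per individual. A sketch for minimization; the three-point test mirrors individuals a, b and c above (the tie-handling for duplicate vectors is my simplification):

```python
def moga_fitness(objectives):
    """fitness = 1 + number of superiors (Eq. 3.7), minimization convention.
    `objectives` is a list of objective-value tuples, one per individual."""
    def dominates(fa, fb):
        # weak dominance with inequality somewhere (fa != fb excludes ties)
        return all(x <= y for x, y in zip(fa, fb)) and fa != fb

    return [1 + sum(dominates(other, f) for other in objectives if other != f)
            for f in objectives]
```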
Srinivas and Deb (1994) realized Goldberg's suggestion most directly in the Non-dominated Sorting Genetic Algorithm (NSGA), where every non-dominated front in the current population is evaluated and individuals in the same front receive the same fitness value. To maintain diversity among the Pareto solutions, NSGA introduced the 'niche count', a count of nearby individuals within a specified region around a particular individual. The selection mechanism of NSGA is shown in Figure 3.10.
The offshoot of this approach, known as NSGA-II, uses the same fitness assignment strategy but adds an elitism mechanism and a crowded-comparison operator, based on a normalized distance between an individual's two neighbours, to preserve diversity (Deb et al., 2002). NSGA-II is widely considered a state-of-the-art algorithm by MOEA researchers (Coello and Lamont, 2004). However, the non-dominated sorting in NSGA is computationally expensive, although a fast version of non-dominated sorting is suggested in NSGA-II.
There are several more MOGAs, such as the Strength Pareto Evolutionary Algorithm (SPEA) (Zitzler and Thiele, 1999), the Micro Genetic Algorithm (MicroGA) (Coello Coello and Toscano Pulido, 2001) and the Multi-objective Covariance Matrix Adaptation Evolution Strategy (MO-CMAES) (Igel et al., 2007), which have received considerable attention from the MOGA community. However, owing to the impact and fame of NSGA-II, we use it as the basis of the proposed algorithm and as the comparison in the test problems.
Since the grid sizes must be specified by the designer, it is difficult to estimate a proper grid size across all generations of individuals. This kind of required user interaction is normally not favoured by users.
In NSGA-II, the drawbacks of the grid and niche-counting techniques are overcome by the 'crowding distance' technique. The crowding distance of an individual is calculated from the distances to its two neighbours in each objective within its non-dominated front. Figure 3.13 shows the individuals in a non-dominated front and the components of the crowding distance calculation.
From Figure 3.13, the crowding distance cd for the considered individual in two objectives is given by

cd = (d1/dx)² + (d2/dy)².  (3.8)

The detailed procedure of the crowding distance calculation is explained in Section 4.2.5.
3.5 Summary
This chapter has reviewed the basic principles of EAs and the popular MOEAs in the literature. Pareto-based approaches have been the favourite of researchers developing new optimization algorithms. Among diversity preservation techniques, methods that require less designer input, such as the crowding distance, are favoured.
With all these basic understandings in EAs and MOEAs, we are now ready to
introduce our proposed algorithm, Global Criterion Genetic Algorithm (GCGA) in
the next chapter.
CHAPTER 4
METHODOLOGY
4.1 Introduction
This chapter gives the methodology of the thesis, in which each part of the proposed algorithm, the Global Criterion Genetic Algorithm (GCGA), is explained. Moreover, the mathematical representation of the rotational inverted pendulum (RIP) is derived as the target plant of the proposed algorithm.
This section describes the details of the proposed algorithm of the thesis, called the Global Criterion Genetic Algorithm (GCGA). The GCGA is constructed from two different types of fitness assignment: the proposed global ranking fitness assignment and the popular non-dominated sorting procedure. In this chapter, the global ranking fitness assignment and the non-dominated sorting procedure are described before the proposed algorithm is introduced.
The purpose of global ranking is to rank an individual's fitness based on the sum of its ranks in each objective. The global ranking value G for an individual Xi is given by

G_Xi = Σ_{j=1}^{M} r_{i,j},  (4.1)

where M is the number of objectives and r_{i,j} is the rank of Xi in the jth objective.
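Equation (4.1) is straightforward to sketch: rank the population separately in every objective, then sum each individual's ranks. A minimization sketch (the tie-breaking among equal objective values follows the sort order, an implementation detail the thesis does not specify):

```python
def global_ranking(objectives):
    """Global ranking fitness (Eq. 4.1): G(Xi) = sum over objectives of
    the rank of Xi in that objective (rank 1 = smallest value)."""
    n = len(objectives)
    m = len(objectives[0])
    g = [0] * n
    for j in range(m):
        order = sorted(range(n), key=lambda i: objectives[i][j])
        for rank, i in enumerate(order, start=1):
            g[i] += rank
    return g
```

Lower G indicates an individual that ranks well across all objectives simultaneously.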
Definitions 4.1 and 4.2 define the binary tournament selection employed in the GCGA.
Definition 4.1
Binary Tournament Selection without the Constraint: A solution i wins a tournament
with another solution j if any of the following conditions are true (Deb, 2001):
i. If solution i has a better rank, that is ri < rj.
ii. If they have the same rank but solution i has better crowding distance
than solution j, that is ri = rj and di > dj.
Definition 4.2
Binary Tournament Selection with the Constraint: A solution i wins a tournament
with another solution j if any of the following conditions are true:
i. If solution i is valid but solution j violates the constraint, that is vi = 0 and
vj > 0.
ii. If both solutions are valid but solution i has a better rank, that is vi = vj =
0 and ri < rj.
iii. If both solutions are valid and they have the same rank but solution i has
better crowding distance than solution j, that is vi = vj = 0, ri = rj and di >
dj.
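Definitions 4.1 and 4.2 reduce to one comparison routine: feasibility first, then rank, then crowding distance. A sketch where each solution is a dict with rank r, crowding distance d and constraint violation v (the field names are mine, not the thesis's):

```python
def tournament_winner(i, j):
    """Binary tournament per Definitions 4.1/4.2: a feasible solution beats
    an infeasible one; otherwise smaller rank wins; ties are broken by the
    larger crowding distance."""
    if i["v"] == 0 and j["v"] > 0:
        return i
    if j["v"] == 0 and i["v"] > 0:
        return j
    if i["r"] != j["r"]:
        return i if i["r"] < j["r"] else j
    return i if i["d"] > j["d"] else j
```

With v = 0 for every solution, the routine collapses to Definition 4.1.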
Binary crossovers like the one-point and two-point crossover have a successful history in binary-coded GAs. Motivated by this success, Deb and Agrawal (1995) introduced a real-coded crossover, inspired by the binary-coded one-point crossover, for use in real-coded GAs. Simulated binary crossover (SBX) produces two offspring (c1 and c2) by recombining two parents (p1 and p2) based on the user-defined crossover distribution index, nc:

c1 = (1/2)[(1 − B)p1 + (1 + B)p2],
c2 = (1/2)[(1 + B)p1 + (1 − B)p2],  (4.2)

where the spread factor B for the ith gene is computed using

B = (2u)^(1/(nc+1))              if u ≤ 0.5,
B = [1/(2(1 − u))]^(1/(nc+1))    if u > 0.5.  (4.3)
The value of u is randomly generated between 0 and 1. Note that not every parent chosen in the selection process undergoes the crossover operation unless the crossover probability is 1; whether two chromosomes are crossed over depends on the crossover probability. Importantly, the SBX operator has two main properties:
i. The spread of the offspring is in proportion to that of the parent solutions.
ii. Solutions near the parents are monotonically more likely to be chosen as offspring than solutions distant from the parents (Deb, 2001).
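Equations (4.2)–(4.3) apply per gene; a sketch for a single gene (the default nc is illustrative):

```python
import random

def sbx(p1, p2, nc=2.0):
    """Simulated binary crossover for one gene (Eqs. 4.2-4.3).
    nc is the crossover distribution index: larger nc keeps the
    offspring closer to the parents."""
    u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (nc + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (nc + 1.0))
    c1 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    c2 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    return c1, c2
```

One useful property visible from (4.2): the offspring mean always equals the parent mean, whatever value B takes.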
Like the SBX operator, polynomial mutation changes the chromosome values based on a user-defined mutation index, nm. For the ith gene, with a random number r drawn uniformly from [0, 1], a parameter δi is given by

δi = (2r)^(1/(nm+1)) − 1            if r ≤ 0.5,
δi = 1 − [2(1 − r)]^(1/(nm+1))      if r > 0.5,  (4.4)

and the mutated value is

yi = xi + (xi^U − xi^L)δi,  (4.5)

where xi^L and xi^U are the lower and upper bounds of xi, respectively.
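A per-gene sketch of Equations (4.4)–(4.5) (the default nm is illustrative; clamping the result back into the bounds, common in practice, is omitted for clarity):

```python
import random

def polynomial_mutation(x, x_low, x_high, nm=20.0):
    """Polynomial mutation of one gene (Eqs. 4.4-4.5).
    delta lies in [-1, 1] and concentrates near 0 for large nm."""
    r = random.random()
    if r <= 0.5:
        delta = (2.0 * r) ** (1.0 / (nm + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0 * (1.0 - r)) ** (1.0 / (nm + 1.0))
    return x + (x_high - x_low) * delta
```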
di = Σ_{m=1}^{M} (f_m(i+1) − f_m(i−1)) / (f_m^max − f_m^min),  (4.6)

where M is the number of objectives, f_m(i+1) and f_m(i−1) are the mth objective values of the two solutions neighbouring individual i when sorted along the mth objective, and f_m^max and f_m^min are the maximum and minimum values of the mth objective. The larger the value of the crowding distance, the less crowded (better) the individual.
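The crowding distance (4.6) is computed per non-dominated front; a sketch in which, as in NSGA-II, the boundary solutions of each objective receive an infinite distance:

```python
def crowding_distance(front):
    """Crowding distance (Eq. 4.6) for a front given as a list of
    objective-value tuples. Returns one distance per individual."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for j in range(m):
        order = sorted(range(n), key=lambda i: front[i][j])
        f_min, f_max = front[order[0]][j], front[order[-1]][j]
        dist[order[0]] = dist[order[-1]] = float("inf")  # boundary points
        span = (f_max - f_min) or 1.0                    # avoid divide-by-zero
        for k in range(1, n - 1):
            i = order[k]
            dist[i] += (front[order[k + 1]][j] - front[order[k - 1]][j]) / span
    return dist
```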
k = round(√(2P*)),  (4.7)

where P* refers to the number of non-dominated individuals in the combination of the current population and the newly generated population. The boundary individuals (those with the largest and smallest objective values) are assigned an infinite value, while the intermediate individuals are assigned distances by the k-NN technique. The distance of individual i to its k nearest individuals is given by

di = Σ_{j=1}^{k} Σ_{m=1}^{M} [(f_m^i − f_m^{j,near}) / (f_m^max − f_m^min)]²,  (4.8)

where f_m^i is the mth objective value of individual i, f_m^{j,near} is that of its jth nearest neighbour, and f_m^max and f_m^min are the maximum and minimum values of the mth objective. As with the crowding distance, the larger the k-NN distance, the less crowded (better) the individual.
The offspring population and the current population are combined and sorted according to the non-dominated sorting and the crowding distance in the elitism step. This combined population has size 2N. If the number of non-dominated individuals is less than N, the crowding distance values are used as the indicator of rank among individuals in the same non-dominated front. In other words, individuals within the same non-dominated front are sorted in descending order of crowding distance.
The complete flow chart of the GCGA mechanism is shown in Figure 4.2. In the GCGA, the objective values of every chromosome are converted into global ranking values, and the binary tournament selects the potential parents to be bred.
After the parents undergo the genetic operations (SBX and polynomial mutation), the current population and the newly generated population are combined in the elitism mechanism. As described before, the survivors of the combined population are decided by the non-dominated sorting, the crowding distance and the k-NN techniques.
Figure 4.3: The rotational inverted pendulum at a) downward equilibrium and b)
upward equilibrium.
The RIP operates with several hardware components: a Microbox as the controller/processor, a power supply and a driver circuit. The configuration of this hardware is shown in Figure 4.4.
Figure 4.4: The configuration of RIP, Microbox, power supply and driver circuit.
In order to operate the RIP in a real-time environment, several software packages are needed:
i. MathWorks MATLAB and Simulink,
ii. the xPC Target Simulink library,
iii. Microsoft Visual Studio C++.
The integration of all the software and the PC is illustrated in Figure 4.5. The controller built in Simulink is uploaded to the Microbox using the xPC Target library, which converts the Simulink model into C++ code. The Microbox then acts as the controller of the RIP, with the complement of sensors.
The equations of motion of the RIP can be written in the standard manipulator form

D(q)q̈ + C(q, q̇)q̇ + G(q) = [τ1, 0]ᵀ,  (4.9)

where

q = [θ1, θ2]ᵀ  (4.11)

collects the arm angle θ1 and the pendulum angle θ2, and τ1 is the torque applied to the arm by the DC motor. Let

K1 = J1 + m2 L1²,
K2 = m2 l2²,
K3 = m2 L1 l2,
K4 = J2 + m2 l2²,
K6 = m2 l2 g,
K7 = Kt²/Ra,
K8 = Kt/Ra.

Then the inertia matrix is

D(q) = [ K1 + K2 sin²θ2    K3 cosθ2
         K3 cosθ2          K4 ],  (4.16)

the gravity vector is

G(q) = [ 0
         −K6 sinθ2 ],  (4.19)

and the motor torque in terms of the input voltage V is

τ1 = K8 V − K7 θ̇1.  (4.18)

Rearranging (4.9), the joint accelerations follow from

q̈ = D(q)⁻¹([τ1, 0]ᵀ − G(q) − C(q, q̇)q̇).  (4.20)

Let b = (J1 + m2 L1²)(J2 + m2 l2²) − (m2 L1 l2)². Linearizing about the upward equilibrium, the products of velocities vanish, i.e.

θ̇2 θ̇1 sin(2θ2) ≈ 0,  θ̇2² sinθ2 ≈ 0  and  θ̇1² cos(2θ2) ≈ 0.
The numerical values of the mechanical and electrical parameters are given in Table 4.2. Let x1 = θ1, x2 = θ2, x3 = θ̇1 and x4 = θ̇2. With the physical parameters given in Table 4.2, the state-space representation of the linearized system is given by

ẋ1     0    0         1        0     x1      0
ẋ2  =  0    0         0        1     x2  +   0        V  (4.37)
ẋ3     0   −4.4256   −0.0390   0     x3      2.1334
ẋ4     0   42.6498    0.0334   0     x4     −1.8286
In controlling the RIP, the modes of control can be divided into two categories:
i. The swing-up mode, which brings the pendulum from the lower equilibrium to positions near the upper equilibrium.
ii. The balancing mode, which takes over from the swing-up mode to control the arm at the desired position while the pendulum stands still at the upper equilibrium.
The design of the controller in the swing-up mode is based on an estimate of the energy in the pendulum, so it is not suitable for the optimization methods used here. Hence, this research focuses only on optimizing the PID controller parameters for the balancing mode. However, the swing-up mode is still required in the real implementation; thus, we design a swing-up controller based on the work of Acosta (2010).
we evaluated them empirically from the simulated responses. Say the arm response θ1 and the pendulum response θ2 are both desired at 0 radian. The ITAE function is given in Equation (4.38),

(4.38)

where T is the simulation period, 20 s in our case, and N is the number of samples. The OS value is then the maximum value of the arm response θ1, as in Equation (4.39),

OS = max_t(θ1(t)).  (4.39)

The third performance measure, the settling time ts, is evaluated programmatically from the θ1 and θ2 responses. In our case, ts is defined as the time after which the response stays within 5% of its desired position. If the settling time of θ1, tsθ1, is larger than the settling time of θ2, tsθ2, then tsθ1 is taken as ts, and vice versa.
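The settling-time rule above can be sketched as follows. Since the desired value is 0 rad, the 5% band here is interpreted as an absolute band on the deviation, which is my assumption rather than the thesis's exact definition:

```python
import numpy as np

def settling_time(t, y, band):
    """Last instant at which |y(t)| is outside the band around the
    desired value 0; the response stays inside the band afterwards."""
    outside = np.where(np.abs(np.asarray(y)) > band)[0]
    return t[outside[-1]] if outside.size else t[0]

def overall_settling_time(t, theta1, theta2, band=0.05):
    """ts = max(ts_theta1, ts_theta2), as described in the text."""
    return max(settling_time(t, theta1, band),
               settling_time(t, theta2, band))
```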
Table 4.3 shows the parameter settings of the GCGA used to optimize the gains of the two PID controllers. The comparison for this section is as shown in the table below.
Since the simulations are done under the assumption that the pendulum has already been brought near the upper equilibrium (0 radian) by the swing-up controller, we set the initial position of the arm, θ1, at 0.29 radian and the pendulum position, θ2, at 0 radian, with the desired position for both at 0 radian.
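Putting the pieces together, a candidate pair of PID gain sets can be scored by simulating the linearized model (4.37) in closed loop from the initial condition just described. This is only an illustrative sketch of the evaluation step: the forward-Euler integration, the way the two PID outputs are summed into the single voltage, and the use of the measured rates for the derivative terms are my assumptions, not the thesis's exact scheme.

```python
import numpy as np

# Linearized RIP model (Eq. 4.37)
A = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, -4.4256, -0.0390, 0.0],
              [0.0, 42.6498, 0.0334, 0.0]])
B = np.array([0.0, 0.0, 2.1334, -1.8286])

def evaluate(pid_arm, pid_pend, dt=0.001, t_end=20.0):
    """Simulate the balancing loop from theta1(0)=0.29 rad, theta2(0)=0
    and return (ITAE, overshoot of theta1). Each pid_* is (Kp, Ki, Kd);
    both loops regulate to 0 rad and their outputs sum into V."""
    x = np.array([0.29, 0.0, 0.0, 0.0])
    int1 = int2 = 0.0
    itae, os_ = 0.0, x[0]
    n = round(t_end / dt)
    for k in range(n):
        e1, e2 = -x[0], -x[1]            # errors w.r.t. desired 0 rad
        int1 += e1 * dt
        int2 += e2 * dt
        # derivative terms act on the measured rates x[2], x[3]
        u = (pid_arm[0]*e1 + pid_arm[1]*int1 - pid_arm[2]*x[2]
             + pid_pend[0]*e2 + pid_pend[1]*int2 - pid_pend[2]*x[3])
        x = x + dt * (A @ x + B * u)     # forward-Euler step
        itae += (k * dt) * (abs(x[0]) + abs(x[1])) * dt
        os_ = max(os_, x[0])
    return itae, os_
```

In the GCGA, each chromosome encodes the six gains, and `evaluate` supplies the objective values to be minimized.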
4.4 Summary
5.1 Introduction
In this section, the GCGA is compared with one of the best-known MOEAs, the Non-dominated Sorting Genetic Algorithm II (NSGA-II), using the ZDT and DTLZ series of test functions, described in Table 5.1. The test series covers various problem properties, such as convex, non-convex, disconnected and non-uniformly spaced fronts. All the symbols and descriptions for the test problems are given in Table 5.2.
Table 5.1: ZDT and DTLZ test problems used to evaluate the performance of GCGA and NSGA-II.

ZDT1 (convex), n = 30, xi ∈ [0, 1]:
    f1 = x1
    f2 = g(x)[1 − √(x1/g(x))]
    g(x) = 1 + 9(Σ_{i=2}^{n} xi)/(n − 1)
    Optimal solutions: x1 ∈ [0, 1], xi = 0 for i = 2, ..., n

ZDT3 (convex, disconnected), n = 30, xi ∈ [0, 1]:
    f1 = x1
    f2 = g(x)[1 − √(x1/g(x)) − (x1/g(x)) sin(10πx1)]
    g(x) = 1 + 9(Σ_{i=2}^{n} xi)/(n − 1)
    Optimal solutions: x1 ∈ [0, 1], xi = 0 for i = 2, ..., n

ZDT2 (non-convex), n = 30, xi ∈ [0, 1]:
    f1 = x1
    f2 = g(x)[1 − (x1/g(x))²]
    g(x) = 1 + 9(Σ_{i=2}^{n} xi)/(n − 1)
    Optimal solutions: x1 ∈ [0, 1], xi = 0 for i = 2, ..., n

ZDT4 (non-convex), n = 10, x1 ∈ [0, 1], xi ∈ [−5, 5] for i = 2, ..., n:
    f1 = x1
    f2 = g(x)[1 − √(x1/g(x))]
    g(x) = 1 + 10(n − 1) + Σ_{i=2}^{n} [xi² − 10 cos(4πxi)]
    Optimal solutions: x1 ∈ [0, 1], xi = 0 for i = 2, ..., n

ZDT6 (non-convex, non-uniformly spaced), n = 10, xi ∈ [0, 1]:
    f1 = 1 − exp(−4x1) sin⁶(6πx1)
    f2 = g(x)[1 − (f1/g(x))²]
    g(x) = 1 + 9[(Σ_{i=2}^{n} xi)/(n − 1)]^0.25
    Optimal solutions: x1 ∈ [0, 1], xi = 0 for i = 2, ..., n

DTLZ1, n = 10, xi ∈ [0, 1]:
    f1(x) = (1/2)(1 + g(xM)) x1 x2 ... x_{m−1}
    f2(x) = (1/2)(1 + g(xM)) x1 x2 ... (1 − x_{m−1})
    ...
    fm(x) = (1/2)(1 + g(xM))(1 − x1)
    g(xM) = 100(|xM| + Σ_{xi∈xM} [(xi − 0.5)² − cos(20π(xi − 0.5))])
    Optimal solutions: xi = 0.5 ∀ xi ∈ xM

DTLZ2, n = 10, xi ∈ [0, 1]:
    f1(x) = (1 + g(xM)) cos(x1π/2) cos(x2π/2) ... cos(x_{m−1}π/2)
    f2(x) = (1 + g(xM)) cos(x1π/2) cos(x2π/2) ... sin(x_{m−1}π/2)
    ...
    fm(x) = (1 + g(xM)) sin(x1π/2)
    g(xM) = Σ_{xi∈xM} (xi − 0.5)²
    Optimal solutions: xi = 0.5 ∀ xi ∈ xM

DTLZ3, n = 10, xi ∈ [0, 1]: as DTLZ2, but with
    g(xM) = 100(|xM| + Σ_{xi∈xM} [(xi − 0.5)² − cos(20π(xi − 0.5))])
    Optimal solutions: xi = 0.5 ∀ xi ∈ xM
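As a concrete instance of Table 5.1, ZDT1 is a few lines of code. At any Pareto-optimal point (xi = 0 for i ≥ 2), g = 1 and hence f2 = 1 − √f1:

```python
import math

def zdt1(x):
    """ZDT1 test problem (Table 5.1): n = 30 decision variables in [0, 1],
    two objectives to be minimized."""
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (len(x) - 1)
    f2 = g * (1.0 - math.sqrt(f1 / g))
    return f1, f2
```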
di = Min_{j=1}^{P*} Σ_{k=1}^{M} [(fk(i) − fk(j)) / (fk^max − fk^min)]²,  (5.1)

where fk^max and fk^min are the maximum and minimum values in the kth objective. Note that for every test problem we generate about 500 points along the Pareto front.
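Averaging the per-solution distance (5.1) over the obtained set gives a single convergence score; a sketch (the averaging step is my reading of the usual convention, the per-solution distance follows (5.1)):

```python
def convergence_metric(obtained, reference):
    """Mean, over the obtained solutions, of the minimum normalized
    squared distance (Eq. 5.1) to the reference Pareto-front points."""
    m = len(reference[0])
    f_min = [min(r[k] for r in reference) for k in range(m)]
    f_max = [max(r[k] for r in reference) for k in range(m)]
    span = [(f_max[k] - f_min[k]) or 1.0 for k in range(m)]

    def d(p, q):
        return sum(((p[k] - q[k]) / span[k]) ** 2 for k in range(m))

    return sum(min(d(p, q) for q in reference) for p in obtained) / len(obtained)
```

A value of 0 means every obtained solution coincides with a reference point; smaller is better.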
From Table 5.4, GCGA has better convergence performance than NSGA-II on all the test problems. The average percentage convergence improvement of GCGA with respect to NSGA-II over all the test problems is 35.57%. The critical ranking procedure in the global ranking fitness assignment has improved the convergence properties in approximating the true Pareto front.
Although the convergence metric itself can provide some information about the diversity of the solutions, here we choose another metric to represent diversity. The diversity metric ∆ measures the diversity among the obtained non-dominated solutions P* with respect to a reference set (RS). In calculating diversity, P* is projected onto a hyperplane, thereby losing one dimension of the points. For the test problems in this research, the values of f1 form the RS, which is discretized into a number of grids of width 0.01. The more grids that contain both a point of P* and a point of the RS, the higher the metric value.
H(i) and h(i) as presented in Equation 5.2 – Equation 5.3
H(i) = 1 if grid i contains a point of the RS, and 0 otherwise          (5.2)

h(i) = 1 if grid i contains a point of the RS and a point of P*, and 0 otherwise          (5.3)

Δ = Σ_i h(i) / Σ_i H(i)          (5.4)
For the diversity metric, under the assumption that the final non-dominated solutions approach the global Pareto front, a higher metric value indicates better diversity properties in the solutions.
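As an illustration only, assuming the simple reading of Equations 5.2 to 5.4 in which Δ is the fraction of reference-occupied grids that also contain an obtained solution, the metric might be computed as follows (names and grid handling are assumptions of this sketch):

```python
def diversity_metric(front_f1, reference_f1, grid=0.01):
    """Grid-based diversity: fraction of grids occupied by the reference
    set that also contain a point of the obtained front (Equations 5.2-5.4)."""
    def to_cell(v):
        return int(v // grid)          # index of the grid cell containing v
    ref_cells = {to_cell(v) for v in reference_f1}   # cells with H(i) = 1
    hit_cells = {to_cell(v) for v in front_f1} & ref_cells  # cells with h(i) = 1
    return len(hit_cells) / len(ref_cells)
```

The value lies in [0, 1]; it reaches 1 only when every reference-occupied grid along f_1 also contains an obtained solution, i.e. the front is spread over the whole reference range.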
From Table 5.6, GCGA has better (larger) diversity metrics on all test problems except ZDT3, ZDT6 and DTLZ2. This is not surprising, because the crowding distance technique in NSGA-II still works well on two- and three-objective optimization problems. However, the crowding distance technique has shown degradation in diversity, as proved by di Pierro et al. (2007).
Table 5.7: The transition of solutions at the Kth generation for GCGA and
NSGA-II on the ZDT4 problem.

[Population plots for GCGA and NSGA-II at K = 50, 100, 150, 200 and 250.]
In Table 5.7, the continuous lines in all the plots represent the true Pareto front of ZDT4. At the 50th generation, GCGA shows better convergence, with its solutions scattered in the range f_1 = [1, 3.5], compared to NSGA-II, whose solutions lie in the region f_1 = [3, 5.5]. However, the solutions of NSGA-II have a better distribution, whereas the solutions of GCGA are slightly crowded in a few places. At the 100th generation, the solutions of GCGA start to form the Pareto front, while the solutions of NSGA-II are still struggling both to form the Pareto front and to maintain their diversity. At the 150th generation, the solutions from GCGA cover almost all the points on the Pareto front, but for NSGA-II there are still some dominated solutions quite far from the front. At the 200th generation, almost all the solutions of GCGA are non-dominated and cover almost the entire Pareto front, compared to those of NSGA-II. Finally, at the 250th generation, both algorithms converge and cover approximately all the points on the Pareto front. It is interesting to note that the convergence of GCGA not only surpasses NSGA-II when approaching the true Pareto front, as in the convergence metrics of Table 5.7, but that this advantage appears from the earlier generations. However, GCGA tends to lose diversity in the early generations (as shown at K = 50 in Table 5.7) before recovering as it approaches the Pareto front. This inconsistency in diversity may be caused by the critical ranking procedure in the global ranking mechanism. However, the introduction of the K-NN technique at the final stage of optimization helps the solutions to become well distributed.
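The K-NN crowding estimate used in this final stage can be sketched as follows. This is a hypothetical Python reading of the appendix listing update_pop.m: objectives are standardized, each point's crowding value is the summed distance to its k nearest neighbours (larger means less crowded), and the extreme point in each objective is protected from pruning; the exact choice of k in the thesis differs.

```python
import math
import statistics

def knn_crowding(f, k=3):
    """Crowding estimate for diversity preservation: summed distance to the
    k nearest neighbours in standardized objective space; extreme points
    in every objective are marked with infinity so they are never pruned."""
    n, m = len(f), len(f[0])
    mean = [statistics.mean(col) for col in zip(*f)]
    std = [statistics.pstdev(col) or 1.0 for col in zip(*f)]  # guard zero spread
    z = [[(p[j] - mean[j]) / std[j] for j in range(m)] for p in f]
    crowd = []
    for i in range(n):
        d = sorted(math.dist(z[i], z[p]) for p in range(n) if p != i)
        crowd.append(sum(d[:k]))
    for j in range(m):   # keep the boundary solutions of every objective
        order = sorted(range(n), key=lambda i: f[i][j])
        crowd[order[0]] = crowd[order[-1]] = math.inf
    return crowd
```

Pruning then repeatedly removes the point with the smallest crowding value, which thins out clustered regions while preserving the spread of the front.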
5.3 Procedure for choosing the desired solution from the final Pareto front of
the GCGA optimization

All of the final 100 individuals in the GCGA simulation lie on the Pareto front, and every individual obeys the closed-loop stability constraint. This property is a credit to the GCGA optimization, since it offers the designer a wide choice of solutions rather than a single closed-loop-stable solution. Figure 5.1 shows the final Pareto front from the GCGA optimization in the three-objective space.
Figure 5.1: The final Pareto front from the GCGA optimization in the three-objective space (axes: ITAE, OS and t_s (s))
Figures 5.2 and 5.3 show the responses of all the final solutions on the Pareto front from GCGA, for the arm and the pendulum respectively.
Figure 5.2: The arm responses for individuals in the final Pareto front
from GCGA

Figure 5.3: The pendulum responses for individuals in the final Pareto front from
GCGA
The trade-off between t_s, OS and ITAE can be seen in both figures; for instance, the solution with the minimum OS has a longer t_s. This variety of solutions provided by GCGA helps designers to select a suitable solution according to their performance specifications. This is what makes the GCGA optimization unique compared to conventional methods: instead of providing a single absolute solution, it gives the designer a set of optimized solutions to choose from for their application.
The PID parameters and objective values for all the solutions on the Pareto front are provided in Appendix A. For clarity of investigation, we choose the three individuals out of the 100 on the Pareto front that have the minimum value of each objective. These individuals, sometimes called the extreme solutions, have the minimum ITAE, t_s and OS respectively. Since it is difficult to identify these individuals in the 3-D plot of Figure 5.1, they are illustrated in the 2-D plots of Figures 5.4 and 5.5.
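Selecting these extreme solutions from a stored front is straightforward; the following small sketch (names assumed, not thesis code) returns, for each objective, the index of the individual that minimizes it:

```python
def extreme_solutions(front):
    """Indices of the extreme solutions: for each objective k, the index
    of the individual with the minimum value of objective k."""
    m = len(front[0])
    return [min(range(len(front)), key=lambda i: front[i][k])
            for k in range(m)]
```

For a front of 100 individuals with objectives (ITAE, t_s, OS), the returned triple identifies the minimum-ITAE, minimum-t_s and minimum-OS individuals respectively.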
Figure 5.4: The minimum t_s and minimum OS solutions in the Pareto front (OS versus t_s (s))
Figure 5.5: The minimum OS and minimum ITAE solutions in the Pareto front (ITAE versus OS)
The PID parameters and the objective values for the three extreme solutions
are provided in Table 5.8.
Table 5.8: PID Parameters and their objective values for the three extreme
solutions from GCGA
From Table 5.8, the integral gains K_i of the PID controllers for the arm are close to zero, while the controllers for the pendulum require only a PD control type. This shows that the GCGA optimization not only optimizes the objective functions but also selects the most suitable PID type for the application. The responses of the arm and pendulum for these three individuals are shown in Figures 5.6 and 5.7.
Figure 5.6: The arm responses of the three individuals with the minimum
objective values

Figure 5.7: The pendulum responses of the three individuals with the minimum
objective values
The solution with the minimum OS exhibits a response shape very different from the other two. Such a result cannot be produced by single-objective optimization; it is the diversity preservation technique in GCGA that makes it possible. It also shows that optimization using GCGA not only provides a good solution, but also gives designers a variety of options to choose from, depending on their application.

The PID optimization with GCGA has shown success in the simulation results. However, it has yet to be proven applicable to the real target plant. Hence, the PID parameters obtained from the simulations are fed into the PID controllers of the real plant. The experiments begin by swinging up the pendulum with the swing-up controller; the pendulum is then balanced with the optimized controllers for the three extreme solutions. Figure 5.8 shows the arm responses and Figure 5.9 the pendulum responses for the three solutions.
Figure 5.8: Arm responses for the three extreme solutions from the final Pareto
front in GCGA
Figure 5.9: Pendulum responses for the three extreme solutions from the final
Pareto front in GCGA
Unlike the simulation responses, the real-plant responses exhibit small-angle oscillations in both the arm and the pendulum. These oscillations are caused by uncertainties in the mathematical model. This is a drawback of the GCGA-based optimization: the model needs to be well estimated.
5.6 Summary
The results in this chapter have shown that GCGA has better convergence properties than NSGA-II on the test problems. The wide choice of solutions on the resulting Pareto front gives designers the flexibility to select suitable PID parameters for their application. Moreover, the guarantee of closed-loop stability for the presented parameters makes tuning with GCGA reliable. In comparison with conventional PID tuning approaches, GCGA has several advantages:
i. The variety of solutions provided by GCGA is more reliable, because the closed-loop stability requirement is included in the optimization.
ii. Plant complexity does not make it more difficult to tune a PID controller using GCGA.
However, PID tuning using GCGA faces several drawbacks:
i. The mathematical/dynamic model of the plant has to be precisely estimated, as a less precise model may result in less optimal PID parameters.
ii. GCGA requires a more complex coding structure than a simple GA and is less accessible to users who are not familiar with the fundamental principles of GA.
CHAPTER 6
6.1 Conclusion
In the application of tuning PID parameters, GCGA has successfully provided reliable, optimized PID parameters in both the simulation and real-plant results. However, in the real-plant results the arm and pendulum responses exhibit small-angle oscillations, which are caused by uncertainties in the mathematical model of the RIP.
Although this research manages to tune the PID parameters well for a complex plant like the rotary inverted pendulum (RIP), there are several suggestions for further research on the algorithm design and on the RIP implementation. Some suggestions for improvement of the system are:
i. Improving the K-NN diversity preservation technique in the GCGA design. This can be done in two ways:
(a). Pruning with repeated K-NN evaluation to estimate the crowdedness of the solutions more accurately. This means the K-NN calculation is redone each time a crowded individual is removed from the non-dominated front, until the desired number of non-dominated individuals is met. In the original design of GCGA, the crowded individuals are removed simultaneously, which may cause slightly inaccurate estimates.
(b). The K-NN technique can be replaced by the hypervolume measure, which works very efficiently for large numbers of objectives.
ii. In the derivation of the RIP model, some very small parameters were neglected (assumed to be zero) in order to simplify the mathematics. However, if these parameters are well estimated, the small oscillations of the arm and pendulum in the real-time implementation can be reduced. Parameters such as the friction torque, Coulomb friction torque, static friction torque and Stribeck velocity can be estimated empirically by performing a few experiments.
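Suggestion i(a) can be illustrated with a short sketch (hypothetical, not the thesis implementation): the single most crowded point is removed and the k-NN distances are recomputed before the next removal, rather than deleting all crowded individuals at once.

```python
import math

def prune_one_at_a_time(points, target, k=2):
    """Iterative pruning: remove the most crowded point, recompute the
    k-NN crowding values, and repeat until `target` points remain."""
    pts = list(points)
    while len(pts) > target:
        crowd = []
        for i, p in enumerate(pts):
            d = sorted(math.dist(p, q) for j, q in enumerate(pts) if j != i)
            crowd.append(sum(d[:k]))        # small sum = crowded region
        pts.pop(crowd.index(min(crowd)))    # drop the most crowded point
    return pts
```

Because the crowding values are refreshed after every removal, deleting one member of a tight cluster immediately raises the crowding value of its neighbours, avoiding the over-pruning that simultaneous removal can cause.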
REFERENCES
Johnson, M. A., Moradi, M. H. and Crowe, J. (2005). PID control: new identification
and design methods, Springer Verlag.
Kim, D. H. and Cho, J. H. (2005). Adaptive tuning of PID controller for
multivariable system using bacterial foraging based optimization. Advances
in Web Intelligence, 231-235.
Koza, J. R. (1992). Genetic programming: On the programming of computers by
natural selection. MIT Press, Cambridge, MA, USA.
Kukkonen, S. and Deb, K. (2006). Improved pruning of non-dominated solutions
based on crowding distance for bi-objective optimization problems. In, 2006.
Proceedings of the World Congress on Computational Intelligence (WCCI-
2006)(IEEE Press). Vancouver, Canada, 1179-1186.
Kursawe, F. (1991). A variant of evolution strategies for vector optimization.
Parallel Problem Solving from Nature, 193-197.
McMillan, G. K. (1983). Tuning and control loop performance. Instrument Society of
America, Research Triangle Park, NC.
Murata, T. and Ishibuchi, H. (1995). MOGA: Multi-objective genetic algorithms. In,
1995. IEEE, 289.
O'Dwyer, A. (2009). Handbook of PI and PID controller tuning rules. Imperial
College Press.
Osyczka, A. (1985). Multicriteria optimization for engineering design. Design
optimization, 1, 193-227.
Pedersen, G. K. M. and Yang, Z. (2006). Multi-objective PID-controller tuning for a
magnetic levitation system using NSGA-II. In: Proceedings of the 8th annual
conference on Genetic and evolutionary computation, 2006. ACM, 1737-
1744.
Polyak, B. and Tempo, R. (2001). Probabilistic robust design with linear quadratic
regulators. Systems & Control Letters, 43, 343-353.
Popov, A., Farag, A. and Werner, H. (2005). Tuning of a PID controller using a
multi-objective optimization technique applied to a neutralization plant. In:
Decision and Control, 2005 and 2005 European Control Conference. CDC-
ECC'05. 44th IEEE Conference on, 2005. IEEE, 7139-7143.
Porter, B. and Jones, A. (1992). Genetic tuning of digital PID controllers. Electronics
Letters, 28, 843-844.
Rechenberg, I. (1973). Evolutionsstrategie, Frommann-Holzboog.
1. GCGA_PID.m
generasi = input('No of Generation = ');                %User-defined no. of generations
populasi = input('No of Population = ');                %User-defined population size
M = input('No of Objectives = ');                       %User-defined no. of objectives
V = input('No of Decision Variables = ');               %User-defined no. of PID gains
xoverprob = input('Probability of Crossover = ');       %User-defined crossover probability
mutaprob = input('Probability of Mutation = ');         %User-defined mutation probability
mu = input('Distribution Index for Crossover = ');      %User-defined crossover index
mum = input('Distribution Index for Mutation = ');      %User-defined mutation index
end
%offspring initialization
new_f = zeros(populasi,M);
best_f = zeros(generasi,M);
for i = 1 : generasi;
clc
fprintf('\nThe Current Generations = %d \n',i);            %display the current generation
fprintf('\nObjective Values = %d %d %d %d \n',f(1,1 : M)); %display the current objective values
%Elitism mechanism
[total_x,total_f,total_rank,total_cd] = non_dom_sort(total_x,total_f);
[x,f,indx] = update_pop(total_x,total_f,total_rank,total_cd);
best_f(i,:) = [min(f(:,1)) min(f(:,2)) min(f(:,3))];
%plotting the pareto front
if indx <= populasi
plot3(f(:,1),f(:,2),f(:,3),'linestyle','none','marker','O');
else
plot3(f(:,1),f(:,2),f(:,3),'linestyle','none','marker','O','color','red');
end
grid
F(i) = getframe;
end
toc
2. multi_ob.m
function [ts,os,ess,cs] = multi_ob(t,y1,y2,y3)
else
susun = sort(y1);
os = abs(max(susun));
end
3. stability_measure.m
function stability = stability_measure(x)
stability = zeros(size(x,1),1);
for m = 1 : size(x,1)
Kp1 = x(m,1);Ki1 = x(m,2); Kd1 = x(m,3); Kp2 = x(m,4); Ki2 =
x(m,5); Kd2 = x(m,6);
A = [0 0 1 0;0 0 0 1; 0 -4.425626674046159 -0.038954340258659 0;
0 42.649809572260509 0.033389434507422 0];
B = [0; 0; 2.133378267409323; -1.828609943493705];
C = [1 0 0 0; 0 1 0 0];
D = [0; 0];
real_pole = real(pole(sys));
im_pole = imag(pole(sys));
stab = 0;
for n = 1 : size(real_pole,1)
if real_pole(n) > 0
stab = stab + 1 + real_pole(n);
elseif (real_pole(n) == 0) && (im_pole(n)~=0);
stab = stab + 1;
end
end
stability(m) = stab;
end
4. global_ranking.m
function sum_f = global_ranking(f)
[populasi,M] = size(f);
if populasi == 0;
sum_f = [];
else
f_sort = zeros(populasi,M);
indx = zeros(populasi,M);
f_sorted = zeros(populasi,M);
for i = 1 : M
[f_sort(:,i),indx(:,i)] = sort(f(:,i));
end
for i = 1 : M
for j = 1 : populasi
f_sorted(indx(j,i),i) = j;
end
end
f_rank = zeros(populasi,M);
f_rank2 = zeros(populasi,M);
5. binary_tournament.m
function [mating_pool] = binary_tournament(x,ranking,stability)
full_pool = x;
full_rank = ranking;
full_stab = stability;
num = size(x,1);
mating_pool =[];
i = 1;
a = num;
while size(mating_pool,1) <= num/2 - 1;
indx_11 = ceil(1 + (a - 1).*rand);
indx_12 = ceil(1 + (a - 1).*rand);
while indx_11 == indx_12
indx_12 = ceil(1 + (a - 1).*rand);
end
if full_stab(indx_11) > full_stab(indx_12)
indx1 = indx_12;
elseif full_stab(indx_11) == full_stab(indx_12)
if full_rank(indx_11) > full_rank(indx_12)
indx1 = indx_12;
else
indx1 = indx_11;
end
else
indx1 = indx_11;
end
mating_pool(i,:) = full_pool(indx1,:);
i = i + 1;
full_pool(indx1,:) = [];
full_rank(indx1) = [];
full_stab(indx1) = [];
a = a - 1;
end
6. genetic.m
function new_pop =
genetic(mating_pool,mu,mum,min,max,xoverprob,mutaprob)
[a,V] = size(mating_pool);
new_pop = zeros(2*a,V);
for i = 1 : a;
indx_11 = ceil(1 + (a - 1).*rand);
indx_12 = ceil(1 + (a - 1).*rand);
while indx_11 == indx_12
indx_12 = ceil(1 + (a - 1).*rand);
end
parent1 = mating_pool(indx_11,:);
parent2 = mating_pool(indx_12,:);
[y1,y2,flag1,flag2] =
new_sbx(parent1,parent2,mu,min,max,xoverprob);
new_pop((2.*(i - 1) + 1),:) =
new_polynomial_mutation(y1,mum,max,min,mutaprob,flag1);
new_pop((2.*(i)),:) =
new_polynomial_mutation(y2,mum,max,min,mutaprob,flag2);
end
7. new_sbx.m
function [y1,y2,flag1,flag2] =
new_sbx(parent1,parent2,mu,x_min,x_max,xoverprob)
V = size(parent1,2);
y1 = zeros(1,V);
y2 = zeros(1,V);
xrand = rand;
if (xrand > xoverprob)
y1 = parent1;
y2 = parent2;
flag1 = 0;
flag2 = 0;
else
for n = 1 : V;
u = rand;
if u >= 0.5
if parent1(n) > parent2(n)
c1 = parent2(n);
c2 = parent1(n);
else
c1 = parent1(n);
c2 = parent2(n);
end
if (c1 - x_min(n)) > (x_max(n) - c2)
dum = (x_max(n) - c2);
else
dum = (c1 - x_min(n));
end
if ((c2 - c1) > 0.000000001)
B = 1 + (2./(c2 - c1))*dum;
alpha = 2 - B.^(mu+1);
z = rand;
if z <= 1/alpha
yy = (z*alpha).^(1./(mu + 1));
else
yy = (1./(2 - z*alpha)).^(1/(mu + 1));
end
else
yy = 1;
end
y1(n) = 0.5*((c1 + c2) - yy*(c2 - c1));
y2(n) = 0.5*((c1 + c2) + yy*(c2 - c1));
if y1(n) < x_min(n)
y1(n) = x_min(n);
elseif y1(n) > x_max(n)
y1(n) = x_max(n);
else
y1(n) = y1(n);
end
if y2(n) < x_min(n)
y2(n) = x_min(n);
elseif y2(n)> x_max(n)
y2(n) = x_max(n);
else
y2(n) = y2(n);
end
else
y1(n) = parent1(n);
y2(n) = parent2(n);
end
end
flag1 = 1;
flag2 = 1;
end
8. new_polynomial_mutation.m
function child =
new_polynomial_mutation(x,mum,x_max,x_min,mutaprob,flag)
V = size(x,2);
child = x;
flag_m = zeros(1,V);
for j = 1 : V
mutrand = rand;
if mutrand > mutaprob;
child(j) = x(j);
flag_m(j) = 0;
else
if ((x(j) - x_min(j))/(x_max(j) - x_min(j))) < ((x_max(j) -
x(j))/(x_max(j) - x_min(j)))
alpha = (x(j) - x_min(j))./(x_max(j) - x_min(j));
else
alpha = (x_max(j) - x(j))./(x_max(j) - x_min(j));
end
r = rand;
if r <= 0.5
delta = ((2*r) + (1-2*r)*(1 -alpha).^((mum +
1))).^(1./(mum + 1)) - 1;
else
delta = 1 - (2*((1 -r)+ 2*(r - 0.5)*(1 - alpha).^(mum +
1))).^(1/(mum + 1));
end
child(j) = x(j) + delta.*(x_max(j) - x_min(j));
if child(j) > x_max(j)
child(j) = x_max(j);
elseif child(j) < x_min(j)
child(j) = x_min(j);
end
flag_m(j) = 1;
end
end
t_flag_m = sum(flag_m);
if (t_flag_m == 0)&&(flag == 0)
j = ceil(rand*V);
if ((x(j) - x_min(j))/(x_max(j) - x_min(j))) < ((x_max(j) -
x(j))/(x_max(j) - x_min(j)))
alpha = (x(j) - x_min(j))./(x_max(j) - x_min(j));
else
alpha = (x_max(j) - x(j))./(x_max(j) - x_min(j));
end
r = rand;
if r <= 0.5
delta = ((2*r) + (1-2*r)*(1 -alpha).^((mum + 1))).^(1./(mum
+ 1)) - 1;
else
delta = 1 - (2*((1 -r)+ 2*(r - 0.5)*(1 - alpha).^(mum +
1))).^(1/(mum + 1));
1))).^(1/(mum + 1));
end
child(j) = x(j) + delta.*(x_max(j) - x_min(j));
if child(j) > x_max(j)
child(j) = x_max(j);
elseif child(j) < x_min(j)
child(j) = x_min(j);
end
end
9. non_dom_sort.m
function [x,f,ranking,cd] = non_dom_sort(x,f)
[N,M] = size(f);
V = size(x,2);
front = 1;
F(front).f = [];
individual = [];
ranking = zeros(N,1);
for i = 1 : N
individual(i).n = 0;
individual(i).p = [];
for j = 1 : N
dom_less = 0;
dom_equal = 0;
dom_more = 0;
for k = 1 : M
if f(i,k) < f(j,k)
dom_less = dom_less + 1;
elseif f(i,k) == f(j,k)
dom_equal = dom_equal + 1;
else
dom_more = dom_more + 1;
end
end
if (dom_less == 0) && (dom_equal ~= M)
individual(i).n = individual(i).n + 1;
elseif (dom_more == 0) && (dom_equal ~= M)
individual(i).p = [individual(i).p j];
end
end
if individual(i).n == 0;
ranking(i) = 1;
F(front).f = [F(front).f i];
end
end
while ~isempty(F(front).f)
Q = [];
for i = 1 : length(F(front).f)
if ~isempty(individual(F(front).f(i)).p)
for j = 1 : length(individual(F(front).f(i)).p)
individual(individual(F(front).f(i)).p(j)).n =
individual(individual(F(front).f(i)).p(j)).n - 1;
if individual(individual(F(front).f(i)).p(j)).n == 0
ranking(individual(F(front).f(i)).p(j)) = front
+ 1;
Q = [Q individual(F(front).f(i)).p(j)];
end
end
end
end
front = front + 1;
F(front).f = Q;
end
x = horzcat(x,f,ranking);
[temp,index_of_fronts] = sort(ranking);
for i = 1 : length(index_of_fronts)
sorted_based_on_front(i,:) = x(index_of_fronts(i),:);
end
current_index = 0;
for front = 1 : (length(F) - 1)
distance = 0;
y = [];
previous_index = current_index + 1;
for i = 1 : length(F(front).f)
y(i,:) = sorted_based_on_front(current_index + i,:);
end
current_index = current_index + i;
% Sort each individual based on the objective
sorted_based_on_objective = [];
for i = 1 : M
[sorted_based_on_objective, index_of_objectives] = ...
sort(y(:,V + i));
sorted_based_on_objective = [];
for j = 1 : length(index_of_objectives)
sorted_based_on_objective(j,:) =
y(index_of_objectives(j),:);
end
f_max = ...
sorted_based_on_objective(length(index_of_objectives), V
+ i);
f_min = sorted_based_on_objective(1, V + i);
y(index_of_objectives(length(index_of_objectives)),M + V + 1
+ i)...
= Inf;
y(index_of_objectives(1),M + V + 1 + i) = Inf;
for j = 2 : length(index_of_objectives) - 1
next_obj = sorted_based_on_objective(j + 1,V + i);
previous_obj = sorted_based_on_objective(j - 1,V + i);
if (f_max - f_min == 0)
y(index_of_objectives(j),M + V + 1 + i) = Inf;
else
y(index_of_objectives(j),M + V + 1 + i) = ...
(next_obj - previous_obj)/(f_max - f_min);
end
end
end
distance = [];
distance(:,1) = zeros(length(F(front).f),1);
for i = 1 : M
distance(:,1) = distance(:,1) + y(:,M + V + 1 + i);
end
y(:,M + V + 2) = distance;
y = y(:,1 : M + V + 2);
z(previous_index:current_index,:) = y;
end
x = z( : ,1 : V );
f = z(:, V + 1 : V + M);
ranking = z(:,V + M + 1);
cd = z(:,V + M + 2);
10. update_pop
function [x_new,f_new,indx] =
update_pop(total_x,total_f,total_rank,total_cd)
TN = size(total_x,1);
a = 1;
while a < size(total_x,1)
b = a + 1;
while b <= size(total_x,1)
if total_x(a,:) == total_x(b,:)
total_x(b,:) = [];
total_f(b,:) = [];
total_rank(b) = [];
total_cd(b) = [];
end
b = b + 1;
end
a = a + 1;
end
[DN,V] = size(total_x);
M = size(total_f,2);
x_sorted1 = zeros(DN , V);
f_sorted1 = zeros(DN , M);
rank_sorted1 = zeros(DN,1);
[total_sort5,indx5] = sort(total_cd,'descend');
for j = 1 : DN
x_sorted5(j,:) = total_x(indx5(j),:);
f_sorted5(j,:) = total_f(indx5(j),:);
rank_sorted5(j) = total_rank(indx5(j));
cd_sorted5(j) = total_cd(indx5(j));
end
[total_sort1,indx1] = sort(rank_sorted5);
for j = 1 : DN
x_sorted1(j,:) = x_sorted5(indx1(j),:);
f_sorted1(j,:) = f_sorted5(indx1(j),:);
rank_sorted1(j) = rank_sorted5(indx1(j));
cd_sorted1(j) = cd_sorted5(indx1(j));
end
indx = sum(rank_sorted1 == 1);        %no. of non-dominated individuals
x_non_dom = x_sorted1(1:indx,:);
f_non_dom = f_sorted1(1:indx,:);
K = ceil(sqrt(indx - TN./2)) + 1;
for m = 1 : M
mean1(m) = mean(f_non_dom(:,m));
std1(m) = std(f_non_dom(:,m));
end
for n = 1 : indx
for m = 1 : M
f2(n,m) = (f_non_dom(n,m) - mean1(m))./std1(m);
end
end
for n = 1 : indx
for p = 1 : indx
for m = 1 : M
d1(m) = (f2(n,m) - f2(p,m)).^2;
end
d2(p) = sqrt(sum(d1));
end
[d3,indx3] = sort(d2);
crowd1(n) = sum(d3(1:K));
end
for m = 1 : M
[ff,ind] = sort(f_non_dom(:,m));
crowd1(ind(1)) = inf;
crowd1(ind(indx)) = inf;
end
sum_crowd = zeros(indx,1);
for n = 1 : indx
% sum_crowd(n) = sum(crowd1(n,:));
sum_crowd(n) = crowd1(n);
end
[total_sort2,indx2] = sort(sum_crowd,'descend');
for j = 1 : indx
x_sorted2(j,:) = x_non_dom(indx2(j),:);
f_sorted2(j,:) = f_non_dom(indx2(j),:);
end
x_new = x_sorted2(1 : TN/2, :);
f_new = f_sorted2(1 : TN/2, :);
end
11. nonlineal_model.mdl
i. Subsystem
ii. delta
iii. 3-4-1
iv. 3-4-2
v. Output
APPENDIX B
The PID gains and objectives for all points in the Pareto front.
57.2318
25.0241 1.11E-14 17.3544 351.5758 0 2.229 0.2354 2.7172