MULTI-OBJECTIVE OPTIMIZATION OF PID CONTROLLER PARAMETERS

USING GENETIC ALGORITHM

MOHD RAHAIRI BIN RANI

A thesis submitted in fulfilment of the
requirements for the award of the degree of
Master of Engineering (Electrical)

Faculty of Electrical Engineering


Universiti Teknologi Malaysia

DECEMBER 2012
I declare that this thesis entitled “Multi-objective Optimization of PID Controller
Parameters using Genetic Algorithm” is the result of my own research except as
cited in the references. This thesis has not been accepted for any degree and is not
concurrently submitted in candidature of any other degree.

Signature : ……………………………………………
Name : MOHD RAHAIRI BIN RANI
Date : 15 DECEMBER 2012
To my late father
To my beloved family
ACKNOWLEDGEMENTS

In the name of Allah, Most Gracious, Most Merciful,

All praise and thanks to Allah, who willingly gave me the opportunity to
complete this thesis. I am heartily thankful to my supervisor, Dr. Hazlina binti
Selamat, and co-supervisor, Dr. Hairi bin Zamzuri, for their support and guidance
from the initial to the final stage of my research. Only Allah can repay their
kindness. Thanks also to the folks at CAIRO for working together and being fun to
work with. They have kept me in good spirits throughout my study.

I would like to express my deepest thanks and appreciation to my late
father, my mother, family and friends for their encouragement, cooperation and
support along my journey to complete this project. May Allah bless them all.
ABSTRACT

The Proportional-Integral-Derivative (PID) controller is one of the most popular
controllers applied in industry. However, despite the simplicity of its structure, PID
parameter tuning for high-order, unstable and complex plants is difficult. When
dealing with such plants, empirical tuning methods become ineffective, while
analytical approaches require tedious mathematical work. As a result, the control
community has shifted its attention to stochastic optimisation techniques that require
less interaction from the controller designer. Although these approaches manage to
optimise the PID parameters, the combination of multiple objectives in a single
objective function is not straightforward. This work presents the development of a
multi-objective genetic algorithm to optimise the PID controller parameters for a
complex and unstable system. A new genetic algorithm, called the Global Criterion
Genetic Algorithm (GCGA), is proposed in this work and is compared with
the state-of-the-art Non-dominated Sorting Genetic Algorithm (NSGA-II) on several
standard test problems. The results show that the GCGA converges, on average,
35.57% better than NSGA-II across all problems. To test its effectiveness, the
proposed algorithm has been implemented on a rotary inverted pendulum, a
nonlinear and under-actuated plant suitable for representing a complex, unstable,
high-order system. The set of Pareto solutions for the PID parameters generated by
the GCGA gives good control performance (settling time, overshoot and integral of
time-weighted absolute error) with closed-loop stability.
ABSTRAK

The Proportional-Integral-Derivative (PID) controller is one of the controllers
most widely used in industry. However, despite having a simple structure, the tuning
of the PID parameters for unstable, complex and high-order systems is difficult to
accomplish. When facing such systems, empirical methods become ineffective and
analytical methods require complicated mathematical working. As a result, the
control community has turned its attention to stochastic methods that require less
interaction from the engineer. Although these methods succeed in tuning the PID
parameters, the combination of multiple objectives into a single objective function is
still not straightforward. This thesis details the development of a multi-objective
evolutionary algorithm for optimizing the PID parameters of a complex and unstable
system. The proposed algorithm, the Global Criterion Genetic Algorithm (GCGA),
is compared with a popular algorithm, the Non-dominated Sorting Genetic
Algorithm (NSGA-II), on several problems. The results show that the GCGA has, on
average, a 35.57% better convergence rate than NSGA-II across all tests. The
proposed algorithm has been applied to a rotary inverted pendulum, a nonlinear
system suitable for representing a complex, high-order and unstable system. The set
of Pareto solutions obtained through the GCGA has good control properties (settling
time, overshoot and integral of time-weighted absolute error) while satisfying
closed-loop stability.
TABLE OF CONTENTS

CHAPTER TITLE PAGE

DECLARATION ii
DEDICATION iii
ACKNOWLEDGEMENTS iv
ABSTRACT v
ABSTRAK vi
TABLE OF CONTENTS vii
LIST OF TABLES x
LIST OF FIGURES xi
LIST OF ABBREVIATIONS xiii
LIST OF APPENDICES xv

1 INTRODUCTION 1
1.1 Background of the Problem 1
1.2 Importance of the Work 3
1.3 Research Objectives 3
1.4 Scope of Work 4
1.5 Research Contribution 4
1.6 Thesis Outline 4

2 PID CONTROLLER 6
2.1 Introduction 6
2.2 Conventional Tuning Approaches 7
2.3 Stochastic Tuning Approaches 10
2.4 Multi-objective PID Controller Tuning 13
2.5 Summary 14
3 MULTI-OBJECTIVE EVOLUTIONARY ALGORITHMS 15
3.1 Introduction 15
3.2 Basic Evolutionary Algorithm 15
3.2.1 Initialization 16
3.2.2 Objective Function and Fitness Assignment 18
3.2.3 Selection 18
3.2.4 Crossover 20
3.2.5 Mutation 21
3.2.6 Elitism 22
3.2.7 Termination 22
3.2.8 GA Loop 23
3.3 Studies of Multi-objective Evolutionary Algorithm 24
3.3.1 Weighted Sum Approaches 25
3.3.2 Population Based Approaches 27
3.3.3 Pareto Based Approaches 28
3.3.4 Review of diversity preservation in MOEA 31
3.4 Elitist Non-dominated Sorting Genetic Algorithm
(NSGA-II) 33
3.5 Summary 33

4 METHODOLOGY 35
4.1 Introduction 35
4.2 The Global Criterion Genetic Algorithm (GCGA) 35
4.2.1 Global Ranking Fitness Assignment 36
4.2.2 Binary Tournament Selection 37
4.2.3 Simulated Binary Crossover 39
4.2.4 Polynomial Mutation 40
4.2.5 Elitism through Non-dominated Sorting, Crowding
Distance and k-Nearest Neighbour (k-NN) 40
4.2.6 Complete Loop 43
4.3 Rotational Inverted Pendulum (RIP) 44
4.3.1 Mathematical representation of RIP 46
4.3.2 Integration of GCGA in the PID Tuning of RIP 50
4.4 Summary 52
5 RESULT AND DISCUSSION 53
5.1 Introduction 53
5.2 Performance Evaluation of GCGA 53
5.2.1 ZDT and DTLZ Test Problems 54
5.2.2 Convergence Metric 56
5.2.3 Diversity Metric 57
5.2.4 Comparison GCGA and NSGA-II in ZDT4 Test
Problem 59
5.3 Procedure of choosing the desired solution from final
pareto front from GCGA optimization 62
5.4 Simulation Results 63
5.5 Real RIP Results 67
5.6 Summary 69

6 CONCLUSION AND FUTURE WORK 70


6.1 Conclusion 70
6.2 Future Works 71

REFERENCES 72
APPENDIX A-D 76-95
LIST OF TABLES

TABLE NO. TITLE PAGE

2.1 PID Parameters Equation for Ziegler and Nichols Tuning 8


2.2 PID Parameters Equation for Cohen and Coon Tuning 9
4.1 Example of the global ranking assignment 36
4.2 The mechanical and electrical parameters of the RIP 57
4.3 Parameters of the PID controller optimization 60
5.1 ZDT and DTLZ test problems 54
5.2 Symbols and description for the test problems 56
5.3 GCGA and NSGA-II user defined parameters 56
5.4 Convergence metric for GCGA and NSGA-II 57
5.5 Neighbouring scheme for the diversity metric 58
5.6 The diversity metric for GCGA and NSGA-II 59
5.7 The transition of solutions at the K-th generation for GCGA and
NSGA-II in the ZDT4 problem 60

5.8 PID Parameters and their objective values for the three
extreme solutions from GCGA 66
LIST OF FIGURES

FIGURE NO. TITLE PAGE

1.1 Block diagram of the controller, actuator, plant and sensor 1


2.1 Parallel architecture of PID controller 6
2.2 The structure of stochastic optimization techniques for
PID controller 11
3.1 Structure of a chromosome in real representation 17
3.2 Structure of chromosome in binary representation 17
3.3 Pseudo-code for the binary tournament 19
3.4 One-point crossover 21
3.5 Bitwise mutation 22
3.6 Pseudo-code for GA 23
3.7 HLGA’s selection mechanism 26
3.8 VEGA’s selection mechanism 27
3.9 MOGA’s selection mechanism 28
3.10 NSGA’s selection mechanism 29
3.11 Diversity preservation by grid technique 31
3.12 Niche counting 32
3.13 The crowding distance calculation 32
4.1 The elitism mechanism in GCGA 42
4.2 The proposed flow chart of GCGA 43
4.3 The rotational inverted pendulum at a) downward
equilibrium and b) upward equilibrium. 44
4.4 The configuration of RIP, Microbox, power supply and
driver circuit. 45
4.5 The configuration of RIP, Micro-box and Matlab Simulink 45
4.6 Rotational Inverted Pendulum (RIP) 46
5.1 The final pareto front from GCGA optimization in the
three objective space 63
5.2 The arm responses for individuals in the final pareto front
from GCGA 63
5.3 The pendulum responses for individuals in pareto front
from GCGA 64
5.4 The most minimum ts and OS in the pareto front 65
5.5 The most minimum OS and ITAE in the pareto front 65
5.6 The arm responses of the three individuals with the most
minimum objective. 66
5.7 The pendulum responses of the three individuals with the
most minimum objective 67
5.8 Arm responses for the three extreme solutions from the
final pareto front in GCGA 68
5.9 Pendulum responses for the three extreme solutions from
the final pareto front in GCGA 68
LIST OF ABBREVIATIONS

c1 - Distance to arm centre of mass


c2 - Distance to pendulum centre of mass
DTLZ - Deb-Thiele-Laumans-Zitzler
EA - Evolutionary Algorithm
ES - Evolution Strategy
GA - Genetic Algorithm
GCGA - Global Criterion Genetic Algorithm
GP - Genetic Programming
HLGA - Hajela and Lin Genetic Algorithm
J1 - Inertia of arm
J2 - Inertia of pendulum
Kb - Back EMF constant
Kd - PID derivative gain
Ki - PID integral gain
Kp - PID proportional gain
Kt - Torque Constant
l1 - Length of the arm
l2 - Length of the pendulum
LQR - Linear Quadratic Regulator
M - No. of objectives
m1 - Mass of the arm
m2 - Mass of the pendulum
MOEA - Multi-objective Evolutionary Algorithm
N - No. of individual in population
NSGA - Non-dominated Sorting Genetic Algorithm
NSGA-II - Elitism Non-dominated Sorting Genetic Algorithm
pc - Crossover probability
PID - Proportional-Integration-Derivative
pm - Mutation probability
PSO - Particle Swarm Optimization
RIP - Rotational Inverted Pendulum
Rm - Armature resistance
SPEA - Strength Pareto Evolutionary Algorithm
ZDT - Zitzler-Deb-Thiele
LIST OF APPENDICES

APPENDIX TITLE PAGE

A Matlab and Simulink code for RIP optimization using


GCGA 77
B The PID Gains and objectives for all points in the pareto
front. 92
C Published paper in International Journal of Innovative
Computing, Information and Control (IJICIC) 95
D Published paper in International Conference on
Modelling, Simulation and Optimization, Kuala Lumpur 111
CHAPTER 1

INTRODUCTION

1.1 Background of the Problem

Controller design is an essential aspect of control engineering, ensuring that a
controlled plant performs well. The controller, or control law, describes the
algorithm or signal processing employed by the control processor to generate the
actuator signal from the sensor and command signals it receives (Chen, 1992).
Figure 1.1 shows the configuration of the controller, actuator, plant and sensor in a
feedback, or closed-loop, system. The controller receives the command signal and
compares it with the present output measured by the sensor. The controller then
sends the appropriate signal to the actuator to ensure that the plant produces the
same output as the command signal.

Figure 1.1: Block diagram of the controller, actuator, plant and sensor in a
feedback or closed-loop system

The signal value sent by the controller depends completely on the parameters
in the controller. The adjustment of the controller parameters, sometimes called
controller tuning, is a critical element in the controller design process. Simple
controllers like the Proportional-Integral-Derivative (PID) controller require only a
few parameters to be tuned, but complicated controllers like the Linear Quadratic
Regulator (LQR) and Linear Quadratic Gaussian (LQG) have more parameters to
deal with because they consider more states in their designs (Bemporad et al., 2002).
These complicated controllers, however, are developed in such a way that they
produce an optimal control signal (Polyak and Tempo, 2001); the controller designer
only has to decide on the values of the weights associated with the various signals in
the system. This research, on the other hand, aims to find an approach to optimize
the performance of PID controllers.

Despite the simplicity of its structure and being the most popular type of
controller employed, the level of difficulty in PID controller tuning depends mainly
on the plant behaviour (Åström and Hägglund, 2001). Nonlinearity, open-loop
instability, under-actuation and the system's order are the elements that contribute to
the difficulty of the tuning process (Zhuang and Atherton, 1993). This research
therefore uses a rotational inverted pendulum (RIP) to demonstrate the difficulty of
tuning the PID control parameters for a highly nonlinear and under-actuated system.
The under-actuated property of the RIP (two degrees of freedom, one actuator) also
requires two PID controllers to be tuned simultaneously, which adds to the difficulty
of PID tuning.

Under the above conditions, the existing PID tuning methods are not capable
of tuning the combination of PID parameters. This research therefore proposes an
algorithm that automatically gives the user optimized PID parameters with respect to
objectives such as steady-state error, settling time and overshoot.

1.2 Importance of the Work

Despite the popularity of PID controllers as the most practical controller for
control engineers, Ender (1993) reports that 30% of installed PID controllers operate
in manual mode and 65% of the automatic controllers are poorly tuned. Moreover, a
study by Van Overschee et al. (1997) shows that 80% of PID controllers are badly
tuned and 25% operate under default factory settings, meaning the controllers are not
tuned at all. More recently, O'Dwyer (2009) states that the tuning methods proposed
in the literature have not had a significant impact on industrial practice. These
findings imply that tuning PID controllers remains a vexing problem for operators,
perhaps because the available tuning rules are not well suited to their tuning
problems in industry.

Hence this research provides an alternative approach for tuning PID
controllers. The algorithm developed in this research automatically provides the
designer with optimized PID parameters using fewer tuning rules.

1.3 Research Objectives

The main objectives of this research are:


i. To develop a multi-objective optimization algorithm based on evolutionary
techniques for tuning PID controller parameters.
ii. To compare the proposed algorithm with a well-known multi-objective GA.
iii. To apply the optimized PID controller to an under-actuated plant, the rotational
inverted pendulum (RIP), in simulation and on the real plant.

1.4 Scope of Work

To achieve its objectives, this research focuses on the following work.
i. Developing a multi-objective optimization algorithm to optimally tune
PID controller performance measures such as settling time, steady-state error and
overshoot using a multi-objective genetic algorithm (MOGA) approach.
ii. Analysing the optimization algorithm on several test problems borrowed
from the literature and comparing it with a well-known algorithm.
iii. Applying the optimized PID controller from simulation to the real plant
in order to validate the algorithm in a real implementation.

1.5 Research Contribution

The main contributions of this research are:


i. Introduction of a variant of MOGAs called Global Criterion Genetic
Algorithm (GCGA).
ii. Optimization of PID controller tuning using GCGA.
iii. Simulation and experimental validation of optimized PID controller tuning.

1.6 Thesis Outline

This thesis consists of six chapters. Chapter 2 discusses the fundamentals of
the PID controller and a number of popular tuning methods for it. Both conventional
and alternative approaches are covered in that chapter.

Chapter 3 reviews the literature on evolutionary algorithms (EAs), the
application of EAs to controller tuning problems and multi-objective genetic
algorithms (MOGAs). Previous work in the area of MOGAs is used as the basis for
the algorithm proposed in Chapter 4.

Chapter 4 presents the detailed methodology of the proposed algorithm called


Global Criterion Genetic Algorithm (GCGA). Moreover, the modelling of the
rotational inverted pendulum through derivation from the equations of motion is
presented.

Chapter 5 analyzes the GCGA on several popular test problems and
compares its performance with the well-known Non-dominated Sorting Genetic
Algorithm II (NSGA-II). This chapter also presents the optimization of the PID
controller using the GCGA in simulation and on the real RIP.

Chapter 6 concludes the thesis and suggests several further investigations of
the optimization work.
CHAPTER 2

PID CONTROLLER

2.1 Introduction

This chapter reviews the Proportional-Integral-Derivative (PID) controller
and its main tuning approaches. Both deterministic and stochastic tuning methods
are discussed. PID controllers are widely chosen as the control strategy due to their
design simplicity and reliable operation. A simple PID structure consists of three
terms, Kp, Ki and Kd, referring to the proportional, integral and derivative gains
respectively. The parallel architecture of Figure 2.1 is the most common PID
controller structure, although a few different structures exist, such as the series
architecture (McMillan, 1983) and modifications of the series and parallel
architectures (Johnson et al., 2005).

Figure 2.1 Parallel architecture of PID controller

The parallel architecture of the PID controller (hereafter referred to simply as the
PID controller) multiplies the error signal e(t), its integral and its derivative by the
PID gains Kp, Ki and Kd respectively, and sums the results to produce the control
signal u(t).
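As an illustration, the parallel PID law above can be sketched in discrete time. This is a generic sketch in Python (the thesis's own implementation is in Matlab/Simulink); the gain values, time step and first-order plant below are arbitrary choices for demonstration only:

```python
class PID:
    """Parallel-form PID: u(t) = Kp*e(t) + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                    # rectangular integration
        derivative = (error - self.prev_error) / self.dt    # backward difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: regulate a first-order plant dy/dt = (u - y)/tau towards r = 1
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
y, tau = 0.0, 0.5
for _ in range(2000):                                       # 20 s of simulated time
    u = pid.update(1.0, y)
    y += 0.01 * (u - y) / tau
```

After the loop the output has settled close to the set point; the integral term removes the steady-state error that a pure proportional controller would leave.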

The process of adjusting the values of Kp, Ki and Kd is called the 'tuning' or
'design' of the PID controller. Tuning approaches can be divided into two categories:
conventional approaches and alternative approaches. The conventional approaches
include the empirical and analytical methods widely used by control designers. The
alternative approaches are limited to methods that employ a stochastic process in the
tuning rules. A stochastic process is one whose behaviour is non-deterministic: each
of its sub-systems is determined by deterministic actions combined with random
behaviour. The details of the stochastic techniques are described in Section 2.3.

2.2 Conventional Tuning Approaches

Most conventional PID tuning methods are empirical, while only a few
analytical tuning approaches have been reported (Cominos and Munro, 2002). The
most popular empirical PID tuning method is the classical Ziegler and Nichols
(1942) method, where the PID parameters are tuned experimentally to get the best
outcome. To perform this method, the gains Ki and Kd are set to zero while the gain
Kp is increased until it reaches the ultimate gain value, Ku. Ku is the gain at which the
output response oscillates with constant amplitude at the ultimate period, Tu. The
gains of the PID controller are then given in Table 2.1.
Table 2.1: PID Parameters Equation for Ziegler and Nichols Tuning.

PID Gain    Equation
Kp          0.6 Ku
Ki          2 Kp / Tu
Kd          Kp Tu / 8
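The rules in Table 2.1 are easy to mechanize once Ku and Tu have been measured. A small sketch (the Ku and Tu values in the example are made up for illustration):

```python
def ziegler_nichols_pid(ku, tu):
    """Classical Ziegler-Nichols PID gains from the ultimate gain Ku
    and ultimate period Tu, per Table 2.1."""
    kp = 0.6 * ku
    ki = 2.0 * kp / tu
    kd = kp * tu / 8.0
    return kp, ki, kd

# Example: a measured Ku = 10 and Tu = 2 s give Kp = 6.0, Ki = 6.0, Kd = 1.5
kp, ki, kd = ziegler_nichols_pid(10.0, 2.0)
```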

Although this approach seems promising, it is only applicable to single-input
single-output (SISO), open-loop stable systems. Since the objective of this research
is to tune a highly unstable plant such as an under-actuated system, the Ziegler and
Nichols method may not be suitable.

The same limitations apply to another popular PID tuning approach, the
Cohen and Coon (1953) method. This method requires the user to model the plant as
a first-order-plus-dead-time process. The steps to perform the Cohen and Coon
method are as follows:
i. Wait until the process reaches steady state.
ii. Introduce a step change and wait until the process settles at a new value.
iii. Calculate the process gain, K, based on the slope of the change made in
Step ii.
iv. Based on Step ii, fit an approximate first-order process with time constant τ
and dead time τdead.
v. Calculate the gains of the PID controller from Table 2.2.
Table 2.2: PID Parameters Equation for Cohen and Coon Tuning.

PID Gain    Equation
Kp          (τ / (K τdead)) (4/3 + τdead / (4τ))
Ki          (13 + 8 τdead/τ) / (τdead (32 + 6 τdead/τ))
Kd          4 τdead / (11 + 2 τdead/τ)
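The Cohen and Coon rules can likewise be computed directly once the first-order-plus-dead-time model (K, τ, τdead) has been identified in Steps i–iv. The sketch below simply transcribes Table 2.2 as reconstructed here from the original layout; the example numbers are arbitrary:

```python
def cohen_coon_pid(k, tau, tau_dead):
    """Cohen-Coon PID gains (Table 2.2) from the process gain K, time
    constant tau and dead time tau_dead of a first-order-plus-dead-time model."""
    r = tau_dead / tau                               # normalised dead time
    kp = (tau / (k * tau_dead)) * (4.0 / 3.0 + r / 4.0)
    ki = (13.0 + 8.0 * r) / (tau_dead * (32.0 + 6.0 * r))
    kd = 4.0 * tau_dead / (11.0 + 2.0 * r)
    return kp, ki, kd

# Example: K = 1, tau = 1 s, tau_dead = 0.1 s
kp, ki, kd = cohen_coon_pid(1.0, 1.0, 0.1)
```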

In addition to the limitations of the Ziegler and Nichols method, this method
requires the plant to be modelled as a first-order-plus-dead-time process, which
poorly fits our target plant (an under-actuated system), which has a high system
order.

Despite the popularity of the empirical tuning approaches, analytical
techniques can avoid the tedious experimental work of the empirical methods. The
pole placement technique, for instance, is mostly used when the plants under
consideration are low-order systems. For example, consider a general first-order
system model in the continuous s-domain,

G_p(s) = \frac{K_p}{1 + sT_1}, (2.1)
where Kp is the proportional gain and T1 represents the filter time constant. Using a
PI controller, the closed-loop characteristic equation becomes

s^2 + s\left(\frac{1}{T_1} + \frac{K_p K_p}{T_1}\right) + \frac{K_p K_p K_i}{T_1} = 0, (2.2)

where the Kp and Ki multiplying the plant gain are the controller gains.
This can then be matched with the general second-order characteristic polynomial

s^2 + 2\zeta\omega s + \omega^2 = 0, (2.3)

and thus we obtain

K_p = \frac{2\zeta\omega T_1 - 1}{K_p}, \qquad K_i = \frac{\omega^2 T_1}{2\zeta\omega T_1 - 1}, (2.4)

where the Kp in the denominator of the first expression is the plant gain.
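Equations (2.1)–(2.4) reduce PI design to plugging the desired damping ζ and natural frequency ω into closed-form expressions. A brief sketch (the plant values and pole specification are invented for the example; k_plant denotes the plant gain to avoid overloading the symbol Kp):

```python
def pi_pole_placement(k_plant, t1, zeta, omega):
    """Controller gains Kp, Ki that place the closed-loop poles of
    G(s) = k_plant/(1 + s*T1) under PI control at the roots of
    s^2 + 2*zeta*omega*s + omega^2, per Equation (2.4)."""
    kp = (2.0 * zeta * omega * t1 - 1.0) / k_plant
    ki = omega**2 * t1 / (2.0 * zeta * omega * t1 - 1.0)
    return kp, ki

# Example: plant gain 2, time constant 1 s; ask for zeta = 0.7, omega = 3 rad/s
kp, ki = pi_pole_placement(2.0, 1.0, 0.7, 3.0)   # kp = 1.6, ki = 2.8125
```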

In the case where a second-order system of the following form is used,

G_p(s) = \frac{K_p}{(1 + sT_1)(1 + sT_2)}, (2.5)

a PID controller of the form

G_c(s) = \frac{K_p\left(1 + \dfrac{s}{K_i} + \dfrac{s^2 K_d}{K_i}\right)}{\dfrac{s}{K_i}} (2.6)

can arbitrarily place all the closed-loop poles. The closed-loop characteristic
equation becomes

s^3 + s^2\left(\frac{1}{T_1} + \frac{1}{T_2} + \frac{K_p K_p K_d}{T_1 T_2}\right) + s\left(\frac{1}{T_1 T_2} + \frac{K_p K_p}{T_1 T_2}\right) + \frac{K_p K_p K_i}{T_1 T_2} = 0. (2.7)

This can be compared with the following general third-order characteristic equation
(Åström and Hägglund, 1988),

(s + \alpha\omega)(s^2 + 2\zeta\omega s + \omega^2) = 0, (2.8)

to obtain the PID parameters, as in the PI case of Equations (2.1) to (2.4). Note that
(2.6) is the controller transfer function, denoted G_c(s) here to distinguish it from
the plant G_p(s).

It is clear that the pole placement technique requires tedious mathematical
derivation to obtain the PID gain equations. Since our target is an under-actuated
plant with two degrees of freedom, the closed-loop system is of up to fourth order,
so this method can hardly be implemented in our case. The conventional tuning
approaches may be good choices for solving simple, low-order systems, but when
facing complex, high-order systems one has to look to other approaches.

2.3 Stochastic Tuning Approaches

Tuning PID controllers for high-order and complex plants is difficult when
using conventional approaches. The control community has therefore shifted its
attention to stochastic approaches, which provide a heuristic search process for the
tuning mechanism. In general, the structure of the optimization of a PID controller is
illustrated in Figure 2.2.

Figure 2.2: The structure of stochastic optimization techniques for PID controller

The parameters are stochastically provided by the optimizer to the PID
controller, and the closed-loop system is simulated to obtain the system responses.
From the responses, the objective of the optimization is evaluated and the objective
value is processed by the optimizer. Several stochastic techniques have been
employed as the optimizer for PID controller parameter tuning, such as the genetic
algorithm (GA) (Sadasivarao and Chidambaram, 2006; Porter and Jones, 1992),
particle swarm optimization (PSO) (Gaing, 2004; Hu et al., 2005; Zhao et al., 2005),
ant colony optimization (Duan et al., 2006), bacterial foraging optimization (Kim
and Cho, 2005) and simulated annealing (Ho et al., 2006).

Regardless of the optimization technique, the formulation of the objective
function is the critical part of the optimization. The most commonly employed
objective function is a summation of the errors between the reference set point and
the actual output (Sadasivarao and Chidambaram, 2006). The three commonly used
error measures are the integrated absolute error (IAE), the mean square error (MSE)
and the integrated time-weighted absolute error (ITAE), as in Equations (2.9) to
(2.11).

IAE = \int_{t=0}^{\infty} \left| r - y(t) \right| dt (2.9)

MSE = \int_{t=0}^{\infty} \left( r - y(t) \right)^2 dt (2.10)

ITAE = \int_{t=0}^{\infty} t \left| r - y(t) \right| dt (2.11)
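In practice these integrals are evaluated numerically from a sampled closed-loop response. A minimal sketch using rectangular integration (the sampling grid and reference r below are illustrative assumptions):

```python
def error_criteria(t, y, r=1.0):
    """Approximate IAE, the integral of squared error (as the MSE measure is
    used in Eq. 2.10) and ITAE from sampled arrays t, y by rectangular
    integration."""
    iae = mse = itae = 0.0
    for i in range(len(t) - 1):
        dt = t[i + 1] - t[i]
        e = r - y[i]
        iae += abs(e) * dt
        mse += e * e * dt
        itae += t[i] * abs(e) * dt
    return iae, mse, itae

# Example: a perfect response y = r accumulates no error at all,
# while a response stuck at y = 0 accumulates error over the whole horizon.
t = [i * 0.01 for i in range(100)]
perfect = error_criteria(t, [1.0] * 100)   # (0.0, 0.0, 0.0)
```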

From this investigation, the IAE is the most commonly used objective function in
PID controller optimization work. However, control design requirements include
several performance measures besides error-based ones, such as settling time and
overshoot.

Huang and Lam (1997) reported a multiple-performance measure in which
overshoot (OS), settling time (ts) and mean square error (MSE) of the time-domain
response are combined into a single equation. The objective function f used to
evaluate the performance is given by

f = \alpha_1 OS + \alpha_2 t_s + \alpha_3 MSE, (2.12)

where α1, α2 and α3 are the weights for each performance element in the objective
function. In that work, they employed a GA as the optimizer to tune the PID
parameters of a Heating, Ventilating and Air Conditioning (HVAC) system.
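The drawback of Equation (2.12) is that the weights must be chosen by the designer before optimization. A sketch of how such a scalar objective could be assembled from a sampled step response (the ±2% settling band, weight values and example data are assumptions for illustration):

```python
def step_metrics(t, y, r=1.0, band=0.02):
    """Percent overshoot and an approximate settling time (last instant the
    response lies outside a +/- band around r) from a sampled step response."""
    overshoot = max(0.0, (max(y) - r) / r * 100.0)
    settling_time = 0.0
    for ti, yi in zip(t, y):
        if abs(yi - r) > band * r:
            settling_time = ti
    return overshoot, settling_time

def weighted_objective(os_, ts_, mse, a1=1.0, a2=0.5, a3=10.0):
    """Equation (2.12): f = a1*OS + a2*ts + a3*MSE with designer-chosen weights."""
    return a1 * os_ + a2 * ts_ + a3 * mse

# Example response: rises, overshoots to 1.1, then settles
os_, ts_ = step_metrics([0, 1, 2, 3, 4], [0.0, 0.5, 1.1, 1.0, 1.0])
```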

Another method of evaluating controller performance is to assess it in the
frequency domain. Scott et al. (1999) used a GA to optimize an anti-resonant
electromechanical controller for thrust vector control. The fitness function, f(x), is
given by

f(x) = \sum_{\omega=\omega_s}^{\omega_{bw}} K_{band}\, Mag + 2 \sum_{\omega=\omega_{bw}}^{\omega_e} f_1(\omega), (2.13)

where

f_1(\omega) =
\begin{cases}
K_{slope} \left( \dfrac{d^2 Mag}{d\omega^2} + \dfrac{dMag}{d\omega} - \left( \left( \dfrac{dMag}{d\omega} \right)_{avg} - \dfrac{dMag}{d\omega} \right)^2 \right), & \dfrac{dMag}{d\omega} < 0, \\[1ex]
K_{zero}, & \dfrac{dMag}{d\omega} = 0, \\[1ex]
K_{peak} \left( \dfrac{dMag}{d\omega} \right)^2, & \dfrac{dMag}{d\omega} > 0,
\end{cases} (2.14)

where ωs is the starting frequency, ωbw is the controller bandwidth frequency, ωe is
the ending frequency, Kband is the bandwidth gain, Mag is the magnitude of the
response, Kslope is the slope gain, Kzero is the zero gain for the onset of resonance
reduction and Kpeak is the resonance peak gain for resonance reduction.

Both of the previous methods combine multiple controller performance
criteria into a single fitness function. These works show that controller tuning
optimization is a multi-objective optimization problem, whether specified in the
time domain or the frequency domain. Combining multiple objectives in one fitness
function not only requires the weight values to be specified, but can also lead to
premature convergence of the search process when the weights are not precisely
assigned.

Recently, multi-objective genetic algorithms (MOGAs) have gained
considerable attention in the EA community, having been rated as one of the three
fastest-growing fields in computational intelligence (Guliashki et al., 2009). The
capability of MOGAs to handle multi-objective problems efficiently may be the
right solution to the multi-objective optimization problem of PID controller tuning.

2.4 Multi-objective PID Controller Tuning

Multi-objective PID controller tuning is basically similar to single-objective
PID tuning, except that more objectives are evaluated from the system responses
and the optimization is solved by MOGAs. In this study, we add two objectives,
settling time and overshoot, to the steady-state error. In our investigation,
multi-objective PID tuning problems are mostly solved using the well-known
Non-dominated Sorting Genetic Algorithm (NSGA-II) (Pedersen and Yang, 2006;
Popov et al., 2005; Wang et al., 2006).
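Algorithms such as NSGA-II rank candidate PID gain sets by Pareto dominance over the objective vector rather than by a weighted sum. A minimal dominance check (minimization of all objectives is assumed; the objective tuples are invented examples of (ts, OS, ITAE)):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b when all objectives are
    minimised: a is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# (ts, OS, ITAE) tuples: the first solution dominates the second,
# but neither of the last two dominates the other (a trade-off pair)
assert dominates((1.0, 2.0, 3.0), (1.5, 2.0, 3.5))
assert not dominates((1.0, 5.0, 3.0), (1.5, 2.0, 3.5))
assert not dominates((1.5, 2.0, 3.5), (1.0, 5.0, 3.0))
```

Solutions that no other solution dominates form the Pareto front from which the designer ultimately picks a gain set.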

2.5 Summary

The conventional PID controller tuning rules have been shown to require
tedious experimental work or mathematical derivations, which is why the control
community has shifted its attention to stochastic tuning approaches when facing
high-order and complex plants. Even so, standard stochastic optimization techniques
operate with a single objective, while controller tuning problems have been shown
to be multi-objective optimization problems. The ability of MOGAs to handle the
multi-objective requirements of the optimization makes them well suited to the PID
controller tuning problem.
CHAPTER 3

MULTI-OBJECTIVE GENETIC ALGORITHMS (MOGAs)

3.1 Introduction

This chapter gives an overview of the current multi-objective genetic
algorithms (MOGAs) in the literature. First, the basic principles of evolutionary
algorithms (EAs) are covered to give the fundamental ideas behind them. We then
review a number of current MOGA works from the literature, especially those that
have received wide attention from researchers.

3.2 Basic Evolutionary Algorithm

Evolutionary algorithms (EAs) are stochastic optimization algorithms
inspired by the metaphor of natural biological evolution. EA is a subset of
Evolutionary Computation (EC) and has four main approaches: the genetic
algorithm (GA) (Holland, 1975), evolution strategies (ES) (Schwefel, 1965;
Rechenberg, 1973), genetic programming (GP) (Koza, 1992) and evolutionary
programming (EP) (Fogel et al., 1966). GA and GP share similar genetic concepts
but differ in their coding structure: GA codes a solution into a chromosome, while
GP represents its solution via a tree structure. Unlike GP, with its flexible solution
structure, EP has a fixed solution structure with mutation as its main operator.
Finally, ES has a similar concept to EP but an adaptive mutation operator.
Nowadays, GA and ES dominate most EA work in the literature due to their faster
convergence. Since the algorithm proposed in this research is developed based on
GA, most of the literature review in this chapter focuses on GA in detail.

Over the last decades, GA has been used as a search and optimization tool in
various fields such as science, economics and engineering. The main factors behind
this success are its broad applicability, ease of use and global perspective
(Goldberg, 1989).

In general, GA consists of several important parts: initialization, the
objective function, fitness assignment, genetic operators such as crossover and
mutation, elitism and termination. Terms like population, generation, individual and
chromosome are used repeatedly in the next sections. Note that the three terms
solution, individual and chromosome are used interchangeably in the next sections
and represent the same element.

3.2.1 Initialization

In optimization, it is typical to start a solution from a random initial value.
However, unlike the single-point solutions of deterministic optimization, GA works
with a population of solutions in order to mimic the principle of survival of the
fittest in nature. Each individual or solution in the population is called a
'chromosome'. A chromosome consists of genomes, which refer to the parameters to
be optimized. Depending on the type of data representation, a genome can be a one
or a zero in the binary representation, or an integer or floating-point number in the
real representation. The value of a genome is called an allele. This research employs
the real representation of chromosomes because PID gains are normally real
numbers. For instance, suppose the values of Kp, Ki and Kd are 1, 7 and 3
respectively. Figures 3.1 and 3.2 show how this solution is coded into a
chromosome containing three genomes in the real and binary representations,
respectively.

Figure 3.1: Structure of a chromosome in real representation

Figure 3.2: Structure of a chromosome in binary representation

The binary representation is structurally more complex than the real
representation, which allows simpler formulations of the genetic operators. For the
sake of clear visualization of the operations in GA, the examples in the next
sections use the binary representation of chromosomes.

A population consists of N chromosomes, where N is the population size, and
the initial population (the allele values) is normally randomly generated. The
transition from the current population to a new population is called a 'generation'.
In other words, the generations represent the iterations of the population.
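As an illustration of the initialization step, a real-coded population for the three PID gains can be generated as in the following Python sketch; the population size and gain bounds are arbitrary illustrative choices, not values prescribed by the thesis.

```python
import random

def init_population(pop_size, lower, upper):
    """Randomly generate a population of real-coded chromosomes.

    Each chromosome is a list of genomes (here Kp, Ki and Kd) whose
    alleles are drawn uniformly between the given bounds."""
    return [[random.uniform(lo, hi) for lo, hi in zip(lower, upper)]
            for _ in range(pop_size)]

# Example: population of 20 chromosomes, each gain bounded in [0, 10]
population = init_population(20, lower=[0, 0, 0], upper=[10, 10, 10])
```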

3.2.2 Objective Function and Fitness Assignment

A function of the optimization problem that is intended to be optimized,
either maximized or minimized, is called the 'objective function'. In GA, the
objective function can be formulated in a variety of forms: it can be a mathematical
equation, or the objective values can be obtained from a dynamic process model in
simulation. GA is only interested in the inputs and outputs of the system,
regardless of how the system works. This black-box property is attractive because
traditional optimization methods rely heavily on the mathematical structure of the
problem. Gradient-based optimization methods, for instance, require the objective
function to be convex and the solution to be near the initial estimate, while GA
does not require these conditions. The objective function is related to the fitness
function through the fitness assignment procedure.

'Fitness assignment' is the procedure of assigning a fitness value to every
chromosome based on the objective function. These fitness values control the search
direction of the GA. Normally in GA, the lower the fitness value of a particular
chromosome, the better the chromosome is in terms of the desired solution. In
single-objective optimization, the fitness assignment is directly derived from the
objective value; in multi-objective optimization, a more complex procedure is
needed. The fitness value obtained through the fitness assignment works as an
indicator in the selection process.

3.2.3 Selection

'Selection' is the process by which individual chromosomes are chosen for the
genetic processes in order to form a population of offspring. The objective of the
selection operator is to make copies of good solutions and remove bad solutions,
while keeping the population size constant.

The selection process is typically a combination of deterministic and
random mechanisms. Relying too much on the deterministic mechanism leads to
premature convergence, while too much randomness slows down the convergence. Thus
a proper balance between the random and deterministic mechanisms is required to
produce a good stochastic search algorithm. The parents chosen in the selection
process fill the mating pool, where the parents are stored. There are a few methods
for this selection process, for instance the roulette wheel selection and the
binary tournament selection. The pseudo-code in Figure 3.3 shows the mechanism of
the binary tournament selection.

Binary tournament selection ()


{ While the mating pool is not full
{
Randomly select two chromosomes from the current
population (randomization);
Compare the fitness value between the two
chromosomes;
Choose the fitter chromosome as the parent
(deterministic)
}
}
Figure 3.3: Pseudo-code for binary tournament selection

In the binary tournament selection, tournaments are played between two
individuals and the better one is chosen and placed in the mating pool. Another two
individuals are then picked and the next slot in the mating pool is filled by the
better of the two.
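The mechanism of Figure 3.3 can be sketched in Python as follows; minimisation of the fitness value is assumed, and `fitness` is a hypothetical list aligned with the population.

```python
import random

def binary_tournament(population, fitness, pool_size):
    """Fill the mating pool by repeated two-individual tournaments.

    Two chromosomes are picked at random (randomization) and the one
    with the lower fitness value wins (deterministic), since a lower
    fitness is taken to be better."""
    pool = []
    while len(pool) < pool_size:
        i = random.randrange(len(population))
        j = random.randrange(len(population))
        winner = i if fitness[i] < fitness[j] else j
        pool.append(population[winner])
    return pool
```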

There exist a number of other selection types, such as the roulette-wheel
selection. In the roulette-wheel mechanism, the probability pi of an individual
being selected is proportional to its fitness. For example, in a maximization
problem, the selection probability pi of an individual xi with fitness fi is

pi = fi / fsum ,   (3.1)

where fsum is the total fitness of all individuals in the population. In other words,
an individual with a higher fitness has a higher chance of being selected.
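Equation (3.1) can be sketched as follows for a maximisation problem with non-negative fitness values; the spin of the wheel is simulated by a uniform random number on [0, fsum].

```python
import random

def roulette_wheel_select(population, fitness):
    """Select one individual with probability proportional to its
    fitness (Equation 3.1), pi = fi / fsum."""
    f_sum = sum(fitness)
    pick = random.uniform(0, f_sum)
    running = 0.0
    for individual, f in zip(population, fitness):
        running += f
        if running >= pick:
            return individual
    return population[-1]  # guard against floating-point round-off
```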

The selection operator cannot create any new solutions in the population.
Once the mating pool is filled with the selected individuals, the genetic operators
(crossover and mutation) take place to produce new solutions.

3.2.4 Crossover

The 'crossover' process is a trademark of GA and GP, as it differentiates
them from the other search algorithms in EA. Crossover combines portions of two
good chromosomes (parents) to form a new chromosome (offspring). Like the selection
operators, a number of crossover types exist in the literature (Spears, 2000). In
almost all crossover operators, two strings (individuals) are picked from the mating
pool at random and some portions of the strings are exchanged between them to
create two new strings.

Typical crossover types in the binary representation are the one-point and
two-point crossovers. Figure 3.4 shows the operation of a one-point crossover
producing two new offspring from two parents.

Figure 3.4: One-point crossover

A crossover point in the parent chromosomes is randomly chosen. Then the
portions of the two chromosomes on one side of this point are swapped to form two
new chromosomes.

In order to preserve some good solutions during the evolution process, not all
solutions in the mating pool are used in a crossover. If pc is the crossover
probability, then 100pc% of the solutions in the mating pool are used in the
crossover operation and 100(1 − pc)% of the solutions are simply copied to the next
operation.
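The one-point crossover of Figure 3.4 can be sketched as follows for two equal-length bit strings represented as Python lists.

```python
import random

def one_point_crossover(parent1, parent2):
    """Swap the tails of two equal-length strings at a random
    crossover point, producing two offspring (Figure 3.4)."""
    point = random.randrange(1, len(parent1))  # keep at least one gene per side
    child1 = parent1[:point] + parent2[point:]
    child2 = parent2[:point] + parent1[point:]
    return child1, child2

# Example with two five-bit parents
c1, c2 = one_point_crossover([1, 1, 1, 1, 1], [0, 0, 0, 0, 0])
```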

3.2.5 Mutation

'Mutation' is an important operation in GA. It is used to improve the overall
performance of the chromosomes and to avoid premature convergence in the search
process. Mutation also keeps the diversity in the population. It is done by randomly
making a slight change to the value of an allele. Figure 3.5 shows the bitwise
mutation operation in the binary data representation.

Figure 3.5: Bitwise mutation

Like the crossover point, the mutation point is randomly chosen and the
allele associated with the mutation point is changed. Not all alleles are mutated;
whether an allele is mutated depends on the mutation probability, pm. The mutation
operation alters the strings (solutions) in the hope of creating a better string,
but since the operation is stochastic, improvement is not guaranteed.
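The bitwise mutation of Figure 3.5 can be sketched as follows; each bit is flipped independently with the mutation probability pm.

```python
import random

def bitwise_mutation(chromosome, pm):
    """Flip each bit of a binary chromosome independently with
    probability pm (Figure 3.5); returns a new chromosome."""
    return [1 - bit if random.random() < pm else bit for bit in chromosome]
```

With pm = 0 the chromosome is returned unchanged, and with pm = 1 every bit is flipped.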

3.2.6 Elitism

One of the features of a good search algorithm is the capability to store the
best solutions found during its process. Since GA works with a population of
solutions, it needs to maintain a number of the best solutions throughout the
process. This mechanism, sometimes called elitism, is an important mechanism in GAs
and can be done in various ways. In a steady-state EA, elitism can be introduced
with a simple mechanism: after two offspring are created using the genetic
operators, they are compared with both of their parents, and the best two of these
four solutions are selected for the next generation.

Elitism can also be implemented globally, in the generational sense. Once the
offspring population is generated, it is combined with the current population, and
the N best solutions are then selected as the next generation. This type of elitism
is used in our proposed algorithm.
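This generational elitism can be sketched as follows, assuming minimisation; `fitness_fn` is a hypothetical user-supplied objective function.

```python
def elitism(parents, offspring, fitness_fn, n):
    """Combine the current population with its offspring and keep the
    n fittest solutions (smallest fitness) as the next generation."""
    combined = parents + offspring
    return sorted(combined, key=fitness_fn)[:n]

# Example with scalar "chromosomes" and the identity as fitness
next_gen = elitism([5, 2, 9], [1, 7, 4], fitness_fn=lambda x: x, n=3)
# next_gen == [1, 2, 4]
```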

3.2.7 Termination

In GA, 'termination' is usually based on two conditions: the maximum number
of generations is reached, or the fitness value of the best solution meets the
desired fitness (specified by the user). The second condition is rarely implemented,
especially for problems where the desired fitness is difficult to predict.

3.2.8 GA Loop

Like conventional optimization algorithms, the iteration process, called a
generation in GA, evolves the chromosomes in search of better solutions. Figure 3.6
shows the pseudo-code for the complete GA loop. Beginning with initialization, the
randomly generated chromosomes are evaluated with the objective function. The
objective value of each chromosome is used as an indicator in the selection process
to choose the parent individuals.

Simple Genetic Algorithm ()


{
Initialization;
Objective Evaluation;
While termination criterion has not been reached
{
Selection;
Crossover;
Mutation;
Objective Evaluation for offspring
Elitism
}
}

Figure 3.6: Pseudo-code for the GA loop



After gathering the parent chromosomes, they undergo the genetic operations
(crossover and mutation) to generate the offspring. Depending on the type of
elitism, the current population and the offspring are then processed to form the
next generation.

With all the main components of the fundamentals of GA explained, the
overview of MOGAs in the literature can now be discussed.

3.3 Studies of Multi-objective Genetic Algorithms (MOGAs)

Multi-objective optimization, also called multi-criteria, multi-performance
or multi-vector optimization, can be defined as the problem of finding
(Osyczka, 1985):
“a vector of decision variables which satisfies constraints and optimizes a vector
function whose elements represent the objective functions. These functions form a
mathematical description of performance criteria which are usually in conflict with
each other. Hence, the term ‘optimize’ means finding such a solution which would
give the values of the entire objective functions acceptable to the decision maker.”

Osyczka (1985) also formally states the multi-objective optimization problem
(MOP) as follows: find the vector x* = [x1*, x2*, ..., xn*]T which satisfies the m
inequality constraints
gi(x) ≥ 0,  i = 1, 2, ..., m,   (3.2)
and the p equality constraints
hi(x) = 0,  i = 1, 2, ..., p,   (3.3)
and optimizes the vector function
f(x) = [f1(x), f2(x), ..., fk(x)]T,   (3.4)
where x = [x1, x2, ..., xn]T is the vector of decision variables.

Typically in a MOP there is no single optimal solution; instead, a set of
optimal solutions is required in the optimization of the conflicting objectives.
These 'trade-off' solutions are known as pareto optimal solutions. A vector of
decision variables x* ∈ ℱ is said to be pareto optimal if there does not exist
another x ∈ ℱ such that fi(x) ≤ fi(x*) for all i = 1, 2, ..., k and fj(x) < fj(x*)
for at least one j. Here all the objectives are to be minimized and ℱ denotes the
feasible region of the problem (the region where all the constraints are satisfied).
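The dominance relation in this definition translates directly into code; the sketch below assumes all objectives are minimised.

```python
def dominates(fa, fb):
    """True if objective vector fa pareto-dominates fb: fa is no worse
    in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(fa, fb))
            and any(a < b for a, b in zip(fa, fb)))

def pareto_front(objectives):
    """Keep only the objective vectors not dominated by any other."""
    return [f for f in objectives
            if not any(dominates(g, f) for g in objectives if g is not f)]
```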

Multi-objective Genetic Algorithms (MOGAs) are developed from
single-objective Genetic Algorithms (GAs), extended to handle multi-objective
optimization problems. The critical part of MOGAs lies in the fitness assignment
procedure when multiple objectives need to be considered while maintaining the
diversity of the surviving solutions. Hence this chapter reviews several fitness
assignment and diversity preservation methods for handling multi-objective
problems.

3.3.1 Weighted Sum Approaches

The classical method handles the multiple-objective fitness assignment by
combining all the objective functions into a single function. The fitness f for the
weighted sum approach is typically formulated as in Equations (3.5) and (3.6):

f = Σ_{k=1}^{M} wk fk ,   (3.5)

where

Σ_{k=1}^{M} wk = 1,   (3.6)

and M is the number of objectives, wk is the weight for the kth objective and fk is
the fitness value for the kth objective.
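Equations (3.5) and (3.6) can be sketched as follows; the objective values and weights used in the example are illustrative.

```python
def weighted_sum_fitness(objective_values, weights):
    """Aggregate M objective values into one fitness value
    (Equation 3.5); the weights must sum to one (Equation 3.6)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must satisfy Equation (3.6)"
    return sum(w * f for w, f in zip(weights, objective_values))

# Example: two objectives with equal weights
f = weighted_sum_fitness([2.0, 4.0], [0.5, 0.5])  # f == 3.0
```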

Hajela and Lin (1992) introduced an algorithm based on this approach in which
the weights are not fixed; instead, they are encoded together in the chromosomes.
Hence the individuals (chromosomes) are evaluated based on potentially different
weight combinations. Figure 3.7 shows the selection mechanism of the Hajela and Lin
Genetic Algorithm (HLGA) in a two-dimensional objective space.

Figure 3.7: HLGA’s selection mechanism

The 10 individuals are represented in a two-dimensional objective space.
Since the weight values are also coded in the chromosomes, the dotted lines
represent the slopes of the different weight combinations of each individual.

Murata and Ishibuchi (1995) suggested a random weighted GA (RWGA) similar to
HLGA but with randomly normalized weights. Like other weight-based approaches,
RWGA is unlikely to be able to find the true pareto front in non-convex problems.

These aggregating approaches have been largely underestimated by MOGA
researchers due to their high possibility of converging to a local optimum and the
need for knowledge of every objective in order to assign an appropriate weight to
each. However, due to its simplicity, the weighted-sum aggregation approach appears
to still be in widespread use (Coello and Lamont, 2004).

3.3.2 Population Based Approach

In contrast, the population-based approaches aim to diversify the search
based on the fitness information of each objective of the individuals. The most
popular algorithm in this class of MOGAs is the Vector Evaluated Genetic Algorithm
(VEGA) (Schaffer, 1985). VEGA is a simple genetic algorithm (GA) with a modified
selection mechanism: it uses a switching-objectives mechanism to generate the
offspring population by reproducing N/M of the parents for each objective, where N
is the population size and M is the number of objectives. The weakness of this
approach is that the concept of pareto dominance is not incorporated directly in the
selection process. Figure 3.8 shows the selection mechanism in VEGA, where N/M of
the parents are chosen for reproduction.

Figure 3.8: VEGA’s selection mechanism

The two areas represent the different selection priorities based on the
objectives. If f1 is the selection priority, the selected solution is the one with a
low value of f1 (in the case of minimization), and so on.

Another method inspired by the population-based approach is the Vector
Optimization Evolution Strategy (VOES) proposed by Kursawe (1991). VOES is based on
ES, with a fitness evaluation procedure similar to VEGA's. In VOES, the
non-dominated solutions are preserved through elitism.

The population-based approaches have largely been criticised for their biased
behaviour toward a particular objective. Nevertheless, these methods have been found
useful in some applications for handling constraints in the optimization (Coello et
al., 2007).

3.3.3 Pareto Based Approaches

According to Coello and Lamont (2004), the pareto-based approach is
perhaps the most successful approach for guiding the search toward the true pareto
front. Goldberg (1989) suggested that every MOGA should have a tendency to select
the individuals that are non-dominated with respect to the other individuals in the
current population. It was also suggested that 'fitness sharing' and a 'niche
parameter' are required as a second sorting procedure to preserve the diversity of
solutions. Four years later, Fonseca and Fleming (1993) proposed a fitness
assignment strategy in the Multi-objective Genetic Algorithm (MOGA), slightly
different from Goldberg's suggestion, that takes as the fitness value the number of
other individuals that dominate the respective individual. Thus all the
non-dominated individuals have the same fitness value and an equal probability of
being selected. The detail of this mechanism is shown in Figure 3.9.

Figure 3.9: MOGA’s selection mechanism



The dotted lines to the right of and above an individual represent the
boundary of the domination area for that particular individual. The other
individuals located in this domination area are said to be dominated by that
individual. For the sake of clarification, three individuals a, b and c are
considered. Say individual a dominates individuals b and c, and individual b
dominates individual c. The MOGA fitness assignment is given by:

fitness = 1 + number of superiors,   (3.7)

where the term 'superiors' refers to the other individuals that dominate the
particular individual. Therefore the fitness of individual a is '1' because no other
solution dominates it. Individual b has a fitness of '2' (dominated only by
individual a) and individual c has a fitness of '3' (dominated by individuals a
and b).
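Equation (3.7) can be sketched as follows, reusing a dominance test for minimised objectives; the three-individual example from the text is reproduced with illustrative objective vectors.

```python
def dominates(fa, fb):
    """fa dominates fb: no worse everywhere, strictly better somewhere."""
    return (all(x <= y for x, y in zip(fa, fb))
            and any(x < y for x, y in zip(fa, fb)))

def moga_fitness(objectives):
    """MOGA fitness (Equation 3.7): 1 + number of superiors, i.e. the
    individuals that dominate the considered individual."""
    return [1 + sum(dominates(other, f) for other in objectives if other is not f)
            for f in objectives]

# a dominates b and c, and b dominates c  ->  fitness values [1, 2, 3]
fitness = moga_fitness([(1, 1), (2, 2), (3, 3)])
```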

Srinivas and Deb (1994) realized Goldberg's suggestion most directly in the
Non-dominated Sorting Genetic Algorithm (NSGA), where every non-dominated front in
the current population is evaluated and the individuals in the same front share the
same fitness value. To maintain the diversity of the pareto solutions, NSGA
introduced the 'niche count', a count of the nearby individuals within a specified
region around a particular individual. The selection mechanism of NSGA is shown in
Figure 3.10.

Figure 3.10: NSGA’s selection mechanism



To perform the non-dominated sorting procedure, the individuals in the first
non-dominated front are identified and assigned the value '1'. These individuals
are then removed from the population. The new non-dominated individuals are
identified in the reduced population, this time assigned a fitness of '2'. These
steps are repeated until no individual remains in the successively reduced
population.
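The peel-off procedure described above can be sketched directly; this is the straightforward version that repeatedly extracts the current non-dominated set, not the fast sorting used in NSGA-II.

```python
def dominates(fa, fb):
    return (all(x <= y for x, y in zip(fa, fb))
            and any(x < y for x, y in zip(fa, fb)))

def non_dominated_sort(objectives):
    """Assign each objective vector its front number: the first
    non-dominated front gets rank 1 and is removed, and the process
    repeats on the reduced population until nothing remains."""
    remaining = list(range(len(objectives)))
    rank = [0] * len(objectives)
    front = 1
    while remaining:
        current = [i for i in remaining
                   if not any(dominates(objectives[j], objectives[i])
                              for j in remaining if j != i)]
        for i in current:
            rank[i] = front
        remaining = [i for i in remaining if i not in current]
        front += 1
    return rank
```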

The offshoot of this approach, known as NSGA-II, uses the same fitness
assignment strategy but adds an elitism mechanism and a crowded-comparison operator,
a normalized distance between the two neighbours, to preserve the diversity (Deb et
al., 2002). NSGA-II has long been considered a state-of-the-art algorithm by MOEA
researchers (Coello and Lamont, 2004). However, the non-dominated sorting in NSGA is
computationally expensive, although a fast version of the non-dominated sorting is
suggested in NSGA-II.

There are several more MOGAs, such as the Strength Pareto Evolutionary
Algorithm (SPEA) (Zitzler and Thiele, 1999), the Micro Genetic Algorithm (MicroGA)
(Coello Coello and Toscano Pulido, 2001) and the Multi-objective Covariance Matrix
Adaptation Evolution Strategy (MO-CMAES) (Igel et al., 2007), which have received
considerable attention from the MOGA community. However, due to the big impact and
fame of NSGA-II, it is used as the main concept of the proposed algorithm and as the
comparison in the test problems.

3.3.4 Review of Diversity Preservation in MOEA

The earliest technique to preserve the diversity of solutions is to maintain
only one individual in each specified grid. The division of individuals into grids
is shown in Figure 3.11.

Figure 3.11: Diversity preservation by grid technique

Since the grid sizes must be specified by the designer, it is difficult to
estimate the proper grid size across all generations of individuals. This kind of
user interaction is normally not favoured by users.

The second diversity preservation technique, introduced in NSGA, is 'niche
counting'. Within a specified region, the neighbouring individuals of the considered
individual are counted. These counts are employed in the binary tournament when the
non-dominated rank cannot decide the preferred individual (i.e. both have the same
rank). Figure 3.12 illustrates the niche counting technique for 12 individuals.

Figure 3.12: Niche counting

Like the grid technique, this technique has two drawbacks:
i. The user is required to specify the niche region.
ii. The technique is difficult to implement when there are more than two
objectives.

In NSGA-II, the drawbacks of the grid and niche counting techniques are
overcome by the 'crowding distance' technique. The crowding distance of an
individual is calculated from the distances to its two neighbours in each objective
within its non-dominated front. Figure 3.13 shows the individuals in a
non-dominated front and the components of the crowding distance calculation.

Figure 3.13: The crowding distance calculation



From Figure 3.13, the crowding distance cd for the considered individual in
two objectives is given by,

cd = (d1/dx)² + (d2/dy)².   (3.8)

The detailed procedure of the crowding distance is explained in Section 4.2.5.

3.4 Elitist Non-dominated Sorting Genetic Algorithm (NSGA-II)

Since the proposed algorithm, GCGA, is compared to the elitist Non-dominated
Sorting Genetic Algorithm (NSGA-II), the algorithm is explained in further detail.
Like its predecessor, NSGA-II maintains the same fitness assignment, the
non-dominated sorting procedure. However, the complexity of the sorting procedure
has been reduced from O(MN³) to O(MN²). The fitness sharing in NSGA, which had been
largely criticized, was removed and replaced with the crowding distance as the
diversity preservation technique. The crowding distance works as the second sorting
procedure for individuals that have the same non-dominated rank. Furthermore, the
global elitism technique introduced in NSGA-II has significantly sped up the
convergence to the true pareto front.

3.5 Summary

This chapter has reviewed the basic principles of EA and the popular MOEAs
in the literature. The pareto-based approaches have been the favourite approach of
researchers developing new optimization algorithms. Among the diversity preservation
techniques, methods that require fewer designer inputs, such as the crowding
distance, are favoured.

With all these basic understandings of EAs and MOEAs, we are now ready to
introduce the proposed algorithm, the Global Criterion Genetic Algorithm (GCGA), in
the next chapter.
CHAPTER 4

METHODOLOGY

4.1 Introduction

This chapter gives the methodology of the thesis, where each part of the
proposed algorithm, the Global Criterion Genetic Algorithm (GCGA), is explained.
Moreover, the mathematical representation of the rotational inverted pendulum (RIP)
is derived as the target plant of the proposed algorithm.

4.2 The Proposed Global Criterion Genetic Algorithm

This section describes the details of the proposed algorithm in the thesis,
called the Global Criterion Genetic Algorithm (GCGA). GCGA is constructed based on
two different types of fitness assignment: the proposed global ranking fitness
assignment and the popular non-dominated sorting procedure. In this chapter, the
global ranking fitness assignment and the non-dominated sorting procedure are first
described before the proposed algorithm is introduced.

4.2.1 Global Ranking Fitness Assignment

The proposed global ranking ranks an individual's fitness based on the
summation of its ranks in each objective. The global ranking value G for an
individual Xi is given by,

G(Xi) = Σ_{j=1}^{M} r_{i,j},   (4.1)

where M is the number of objectives and r_{i,j} is the rank of Xi in the jth
objective.

Table 4.1 shows an example of the global ranking fitness assignment for 5
individuals (X1 – X5) with objective values for three objectives (f1 – f3), where
minimization of all objectives is assumed. For the first objective f1, X2 has the
smallest objective value, followed in ascending order by X3, X4, X1 and X5.
Therefore X2 is sub-ranked '1' in f1, giving r1(X2) = 1, followed by r1(X3) = 2,
r1(X4) = 3, r1(X1) = 4 and r1(X5) = 5. The sub-ranks r2 and r3 are assigned to all
the solutions in the same manner. Finally, the global rank value G for each solution
is calculated by summing all its sub-rank values.

Table 4.1: An example of the global ranking assignment


Solution/       Objective                 Sub-rank        Global rank,
Individual      f1       f2      f3       r1   r2   r3    G
X1              1.1566   0.56    0.20     4    5    1     10
X2              0.9656   0.50    0.56     1    3    2     6
X3              0.9823   0.48    0.86     2    1    4     7
X4              1.1456   0.51    0.75     3    4    3     10
X5              1.2566   0.49    0.89     5    2    5     12

The objectives of introducing the global ranking fitness assignment are:
i. To reduce the selection pressure typically faced by pareto-based approaches
when the number of non-dominated individuals becomes vast.
ii. To reduce the complexity of sorting individuals compared to pareto-based
approaches.
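The global ranking of Equation (4.1) can be sketched as follows for minimised objectives; the objective vectors in the example are illustrative, not the values of Table 4.1.

```python
def global_ranking(objectives):
    """Global rank G (Equation 4.1): for each objective, rank all
    individuals in ascending order of that objective (best = 1), then
    sum each individual's sub-ranks over the M objectives."""
    n = len(objectives)
    m = len(objectives[0])
    ranks = [0] * n
    for j in range(m):
        order = sorted(range(n), key=lambda i: objectives[i][j])
        for sub_rank, i in enumerate(order, start=1):
            ranks[i] += sub_rank
    return ranks

# Three individuals, two objectives: G = [1+3, 2+1, 3+2] = [4, 3, 5]
g = global_ranking([(1.0, 3.0), (2.0, 1.0), (3.0, 2.0)])
```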

4.2.2 Binary Tournament Selection

Among the selection techniques in GA, GCGA uses binary tournament selection
because its procedure is easy to modify (Deb, 2000) to handle constraints in the
case of constrained optimization problems. Because controller optimization problems
always involve constraints that need to be satisfied (e.g. closed-loop stability),
the design of GCGA should include a constraint handling technique.

The binary tournament selection takes two random individuals, compares their
fitness, and selects the fitter one for reproduction. When constraints are
introduced in the optimization problem, the preferred individual of the two is
selected based on three conditions of feasibility. A feasible individual is one that
obeys all the constraints, while an infeasible individual violates at least one of
them. In summary, a selection is made based on these three conditions:
i. A feasible individual is always preferred over an infeasible one.
ii. If both individuals are feasible, the fitter individual is selected.
iii. If both individuals are infeasible, the individual with the smaller violation
value is preferred.

Definitions 4.1 and 4.2 define the binary tournament selection employed in
the GCGA.

Definition 4.1
Binary Tournament Selection without Constraints: A solution i wins a tournament
against another solution j if any of the following conditions is true (Deb, 2001):
i. Solution i has a better rank, that is ri < rj.
ii. They have the same rank but solution i has a better crowding distance
than solution j, that is ri = rj and di > dj.

In the controller parameter optimization, the obvious constraint is the
closed-loop stability for the given parameters in the case of a linear
time-invariant (LTI) system. The violation v in this case is the summation of the
right-half-plane (unstable) poles of the system. This value is then used in the
tournament with constraints as in Definition 4.2.

Definition 4.2
Binary Tournament Selection with Constraints: A solution i wins a tournament
against another solution j if any of the following conditions is true:
i. Solution i is feasible but solution j violates the constraint, that is vi = 0
and vj > 0.
ii. Both solutions are feasible but solution i has a better rank, that is
vi = vj = 0 and ri < rj.
iii. Both solutions are feasible and have the same rank, but solution i has a
better crowding distance than solution j, that is vi = vj = 0, ri = rj and
di > dj.
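Definition 4.2 can be sketched as a comparison function; the keys v, r and d are illustrative names for the violation value, the rank and the crowding distance of a solution.

```python
def tournament_winner(a, b):
    """Binary tournament with constraint handling (Definition 4.2).

    Each solution is a dict with keys:
      'v' - constraint violation (0 means feasible),
      'r' - rank (smaller is better),
      'd' - crowding distance (larger is better)."""
    if a['v'] == 0 and b['v'] > 0:        # (i) feasibility wins first
        return a
    if b['v'] == 0 and a['v'] > 0:
        return b
    if a['v'] > 0 and b['v'] > 0:         # both infeasible: smaller violation
        return a if a['v'] < b['v'] else b
    if a['r'] != b['r']:                  # (ii) both feasible: better rank
        return a if a['r'] < b['r'] else b
    return a if a['d'] > b['d'] else b    # (iii) same rank: better spread
```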

4.2.3 Simulated Binary Crossover (SBX)

Binary crossovers like the one-point or two-point crossover have a
successful history in binary-coded GA. Motivated by this success, Deb and Agrawal
(1995) introduced a real-coded crossover, inspired by the binary one-point
crossover, to be employed in real-coded GAs. Simulated binary crossover (SBX)
produces two offspring (c1 and c2) by recombining two parents (p1 and p2) based on
the user-defined crossover distribution index, nc:
c1 = ½[(1 − B)p1 + (1 + B)p2],
c2 = ½[(1 + B)p1 + (1 − B)p2],   (4.2)

In this case, the value of the spread factor B for the ith gene is computed using,

B = (2u)^(1/(nc+1))              for u ≤ 0.5,
B = [1/(2(1 − u))]^(1/(nc+1))    for u > 0.5.   (4.3)

The value of u is randomly generated between 0 and 1. Note that not every
parent chosen in the selection process undergoes the crossover operation unless the
crossover probability is 1; whether two chromosomes are crossed over depends on the
crossover probability. Importantly, the SBX operator has two main properties:
i. The spread of the offspring is proportional to that of the parent solutions.
ii. Solutions near the parents are monotonically more likely to be created as
offspring than solutions distant from the parents (Deb, 2001).
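Equations (4.2) and (4.3) can be sketched for a single gene pair as follows; note that the two offspring always preserve the mean of the parents, c1 + c2 = p1 + p2.

```python
import random

def sbx(p1, p2, nc):
    """Simulated binary crossover (Equations 4.2-4.3) for one gene.

    nc is the user-defined distribution index: a large nc keeps the
    offspring close to the parents, a small nc spreads them out."""
    u = random.random()
    if u <= 0.5:
        beta = (2 * u) ** (1.0 / (nc + 1))
    else:
        beta = (1.0 / (2 * (1 - u))) ** (1.0 / (nc + 1))
    c1 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    c2 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    return c1, c2
```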

4.2.4 Polynomial Mutation

Like the SBX operator, the polynomial mutation changes the chromosome values
based on the user-defined mutation index, nm. For the ith gene, a parameter δi is
given by

δi = (2r)^(1/(nm+1)) − 1           for r ≤ 0.5,
δi = 1 − [2(1 − r)]^(1/(nm+1))     for r > 0.5,   (4.4)

where r is a random number between 0 and 1.

The mutated offspring y is produced from the chromosome x resulting from the
crossover operation. The ith gene yi is given by

yi = xi + (xiU − xiL)δi,   (4.5)

where xiL and xiU are the lower and upper bounds of xi respectively.

Like crossover, not every gene in a chromosome undergoes the mutation
operation unless the mutation probability is 1. Whether a particular gene is
mutated depends on the mutation probability, pµ.
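Equations (4.4) and (4.5) can be sketched for a single gene as follows; the perturbation δ lies in [−1, 1], so the mutant stays within one variable-range of the original value.

```python
import random

def polynomial_mutation(x, x_low, x_up, nm):
    """Polynomial mutation of one gene (Equations 4.4-4.5).

    nm is the user-defined mutation index; the perturbation delta is
    scaled by the variable's range (x_up - x_low)."""
    r = random.random()
    if r <= 0.5:
        delta = (2 * r) ** (1.0 / (nm + 1)) - 1.0
    else:
        delta = 1.0 - (2 * (1 - r)) ** (1.0 / (nm + 1))
    return x + (x_up - x_low) * delta
```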

4.2.5 Elitism through Non-dominated Sorting, Crowding Distance and Nearest
Neighbour (k-NN)

The elitism mechanism and the non-dominated sorting procedure were explained
in Chapter 3. Here, the crowding distance and the k-nearest neighbours (k-NN)
technique are first described before the elitism mechanism is explained. The
crowding distance was introduced by Deb (2000) in NSGA-II as an improvement over
the niche counting used in NSGA. The crowding distance calculation requires sorting
the population in ascending order of each objective. Consider a population of N
individuals with M objective values. The smallest and largest values (the
boundaries) are assigned an infinite distance value. For the other, intermediate
individuals, the distance di is calculated based on Equation (4.6):

di = Σ_{m=1}^{M} (f_{i+1,m} − f_{i−1,m}) / (f_{max,m} − f_{min,m}),   (4.6)

where M is the number of objectives, f_{i+1,m} and f_{i−1,m} are the mth objective
values of the two neighbouring individuals, and f_{max,m} and f_{min,m} are the
maximum and minimum values of the mth objective respectively. The larger the
crowding distance, the less crowded (better) the individual.
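Equation (4.6) can be sketched as follows; the boundary individuals in each objective receive an infinite distance and each intermediate individual sums the normalised gap between its two neighbours over all objectives.

```python
def crowding_distance(objectives):
    """Crowding distance (Equation 4.6) for a list of objective vectors.

    For each objective the population is sorted ascendingly; boundary
    solutions get infinity and every intermediate solution accumulates
    the normalised distance between its two neighbours."""
    n = len(objectives)
    m = len(objectives[0])
    distance = [0.0] * n
    for j in range(m):
        order = sorted(range(n), key=lambda i: objectives[i][j])
        f_min = objectives[order[0]][j]
        f_max = objectives[order[-1]][j]
        span = (f_max - f_min) or 1.0  # avoid division by zero
        distance[order[0]] = distance[order[-1]] = float('inf')
        for pos in range(1, n - 1):
            i = order[pos]
            distance[i] += (objectives[order[pos + 1]][j]
                            - objectives[order[pos - 1]][j]) / span
    return distance
```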

The crowding distance requires O(MN log N) computational complexity while
k-NN requires O(MN³). In the k-NN technique, for each solution the summation of the
Euclidean distances to its k nearest neighbours serves as the indicator of its
crowdedness relative to its neighbours. The integer k is calculated based on
Equation (4.7):

k = round(√(2P*)),   (4.7)
where P* refers to the number of non-dominated individuals in the combination of the
current population and the newly generated population. The boundary individuals
(the largest and smallest values) are assigned an infinite value, while the
intermediate individuals are assigned distances by the k-NN technique. The distance
of individual i to its k nearest individuals is given by

di = Σ_{j=1}^{k} √[ Σ_{m=1}^{M} ((f_{i,m} − f^near_{j,m}) / (f_{max,m} − f_{min,m}))² ],   (4.8)
where f_{i,m} is the mth objective value of individual i, f^near_{j,m} is the mth
objective value of its jth nearest neighbour, and f_{max,m} and f_{min,m} are the
maximum and minimum values of the mth objective. Like the crowding distance, the
larger the k-NN distance, the less crowded (better) the individual.

In the elitism stage, the offspring population and the current population are
combined and sorted according to the non-dominated sorting and the crowding
distance. This combined population has a size of 2N. If the number of non-dominated
individuals is less than N, the crowding distance values are used to rank the
individuals within the same non-dominated front; in other words, the individuals in
the same front are sorted in descending order of crowding distance.

When the number of non-dominated individuals is more than N, the dominated
individuals are automatically rejected. At this stage, the k-NN values take the
place of the crowding distance to sort the non-dominated individuals in descending
order. Figure 4.1 shows the proposed elitism mechanism.

Figure 4.1: The elitism mechanism in GCGA

The two combined populations are sorted using the above procedure to separate the survivors from the rejected individuals. We developed this mechanism based on two arguments:
i. The k-NN technique gives better information about the crowdedness of individuals than the crowding distance technique (Deb et al., 2005). According to Kukkonen and Deb (2006), the crowding distance fails to precisely estimate the crowdedness when there are more than two objective functions.
ii. Employing the k-NN technique from the start is computationally expensive, O(MN³), compared to the crowding distance, O(MNlogN). Hence it is better to use it only at the stage when the crowding distance technique becomes less effective (di Pierro et al., 2007). When the number of non-dominated individuals in the combined population (parents and offspring) becomes large, the number of non-dominated individuals that need to be removed also becomes large. When these individuals are removed simultaneously from the combined population, the crowdedness estimate of an individual relative to its neighbours becomes incorrect. The correct approach is to remove these individuals one by one, updating the crowdedness estimates after every removal; however, this is computationally expensive. That is why the k-NN method is more suitable for this stage.
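The survivor-selection stage above can be sketched as follows. This is a heavily simplified illustration, not the thesis code: in particular, when the non-dominated individuals do not fill the N slots, the thesis ranks the remainder by successive non-dominated fronts and crowding distance, whereas this sketch reuses a single density measure for brevity:

```python
import numpy as np

def dominates(a, b):
    """a Pareto-dominates b (minimisation of every objective)."""
    return bool(np.all(a <= b) and np.any(a < b))

def knn_density(F, k):
    span = np.where(np.ptp(F, axis=0) > 0, np.ptp(F, axis=0), 1.0)
    Fn = F / span
    d = np.zeros(len(F))
    for i in range(len(F)):
        dist = np.sqrt(((Fn - Fn[i]) ** 2).sum(axis=1))
        dist[i] = np.inf
        d[i] = np.sort(dist)[:k].sum()
    return d

def select_survivors(F, N):
    """Keep N of the combined individuals: all non-dominated ones first; if
    they overflow N, truncate by k-NN distance (largest, least crowded, kept)."""
    idx = range(len(F))
    nd = [i for i in idx if not any(dominates(F[j], F[i]) for j in idx if j != i)]
    if len(nd) >= N:
        d = knn_density(F[nd], k=max(1, round((2 * len(nd)) ** 0.5)))
        return sorted(nd[i] for i in np.argsort(-d)[:N])
    # simplification: rank the dominated remainder by the same density measure
    dom = [i for i in idx if i not in nd]
    d = knn_density(F[dom], k=1)
    return sorted(nd + [dom[i] for i in np.argsort(-d)[:N - len(nd)]])
```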

4.2.6 Complete Loop

The complete flow chart of the GCGA mechanism is shown in Figure 4.2. In GCGA, the objective values of every chromosome are converted into global ranking values, and binary tournament selection chooses the potential parents to be bred.

Figure 4.2: The proposed flow chart of GCGA



After the parents undergo the genetic operations (SBX and polynomial mutation), the current population and the newly generated population are combined in the elitism mechanism. As described before, the survivors of the combined population are decided by the non-dominated sorting, the crowding distance and the k-NN techniques.
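The binary tournament step in the loop can be illustrated with a minimal sketch (not the thesis code); a lower global ranking value is assumed to be fitter:

```python
import random

def binary_tournament(ranks):
    """Binary tournament selection on global ranking values:
    two individuals are drawn at random and the better-ranked one wins."""
    i, j = random.sample(range(len(ranks)), 2)
    return i if ranks[i] <= ranks[j] else j
```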

4.3 Rotational Inverted Pendulum

In this section, the plant to be controlled by the optimized PID controller is described. The complex, non-linear and under-actuated plant used in this research is a rotational inverted pendulum (RIP). The RIP consists of a driven arm, which rotates in the horizontal plane, and a pendulum attached to the arm, which is free to rotate in the vertical plane. The plant also includes a DC servo motor that acts as an actuator, providing a torque to the pendulum arm. Figure 4.3 shows the RIP at the downward equilibrium and the upward equilibrium.

Figure 4.3: The rotational inverted pendulum at a) downward equilibrium and b)
upward equilibrium.

The RIP operates with several hardware components: a Microbox as the controller/processor, a power supply and a driver circuit. The configuration of this hardware is shown in Figure 4.4.

Figure 4.4: The configuration of RIP, Microbox, power supply and driver circuit.

In order to operate the RIP in a real-time environment, several software packages are needed:
i. MathWorks Matlab and Simulink,
ii. the xPC Target Simulink library,
iii. Microsoft Visual Studio C++.

The integration of all the software and the PC is illustrated in Figure 4.5.

Figure 4.5: The configuration of the RIP, Microbox and Matlab Simulink



The controller built in Simulink is uploaded to the Microbox using the xPC Target library, which converts the Simulink model into C++ code. The Microbox then acts as the controller of the RIP, complemented by the sensors.

4.3.1 Mathematical representation of RIP

Due to the uncontrolled factors in the swing-up mode, it is impossible to run the GCGA optimization on the real-time RIP. Hence, the GCGA optimization is performed on a simulation of the RIP, and the optimized PID parameters from the simulation are then implemented in real time. To achieve this, a mathematical representation relating the input and outputs of the RIP has to be derived. Consider the RIP shown in Figure 4.6; its equation of motion is given by Equation (4.9).

Figure 4.6: Rotational Inverted Pendulum (RIP)

    [J1 + m2 l1² + m2 c2² sin²θ2,   m2 l1 c2 cosθ2;
     m2 l1 c2 cosθ2,                J2 + m2 c2²] [θ1''; θ2'']
    + [(1/2) m2 c2² θ2' sin(2θ2),   (1/2) m2 c2² θ1' sin(2θ2) − m2 l1 c2 θ2' sinθ2;
     −(1/2) m2 c2² θ1' sin(2θ2),    0] [θ1'; θ2']
    + [0; −m2 g c2 sinθ2] = [τ1; 0].                                                 (4.9)

The equation of motion of the pendulum can be written compactly as

    M(q)q'' + Vm(q, q')q' + G(q) = [τ1; 0],                                          (4.10)

where

    q = [θ1; θ2],                                                                    (4.11)

    M(q) = [J1 + m2 l1² + m2 c2² sin²θ2,   m2 l1 c2 cosθ2;
            m2 l1 c2 cosθ2,                J2 + m2 c2²],                             (4.12)

    Vm(q, q') = [(1/2) m2 c2² θ2' sin(2θ2),   (1/2) m2 c2² θ1' sin(2θ2) − m2 l1 c2 θ2' sinθ2;
                −(1/2) m2 c2² θ1' sin(2θ2),   0],                                    (4.13)

    G(q) = [0; −m2 g c2 sinθ2],                                                      (4.14)

and the torque supplied by the DC motor with input voltage V is

    τ1 = (Kt/Rm)V − (Kt Kb/Rm)θ1'.                                                   (4.15)

To compact the notation, let

    a1 = J1 + m2 l1²,   a2 = m2 c2²,     a3 = m2 l1 c2,   a4 = J2 + m2 c2²,
    a5 = m2 c2 g,       a6 = Kt Kb/Rm,   a7 = Kt/Rm.

Then

    M(q) = [a1 + a2 sin²θ2,   a3 cosθ2;   a3 cosθ2,   a4],                           (4.16)

    Vm(q, q') = [(1/2) a2 θ2' sin(2θ2),   (1/2) a2 θ1' sin(2θ2) − a3 θ2' sinθ2;
                −(1/2) a2 θ1' sin(2θ2),   0],                                        (4.17)

    τ1 = a7 V − a6 θ1',                                                              (4.18)

    G(q) = [0; −a5 sinθ2],                                                           (4.19)

    M(q)q'' = [τ1; 0] − G(q) − Vm(q, q')q'.                                          (4.20)

The inverse of the inertia matrix is

    M⁻¹(q) = (1/Δ) [a4,   −a3 cosθ2;   −a3 cosθ2,   a1 + a2 sin²θ2],                 (4.21)

with Δ = a4(a1 + a2 sin²θ2) − a3² cos²θ2, so that

    M⁻¹(q) = [a4/Δ,   −(a3 cosθ2)/Δ;   −(a3 cosθ2)/Δ,   (a1 + a2 sin²θ2)/Δ].         (4.22)

Let

    Fm = [τ1; 0] − G(q)                                                              (4.23)
       = [a7 V − a6 θ1';   a5 sinθ2],                                                (4.24)

so that

    q'' = M⁻¹(q)(Fm − Vm(q, q')q').                                                  (4.25)

The Coriolis/centrifugal term expands as

    Vm(q, q')q' = [(1/2) a2 θ2' sin(2θ2),   (1/2) a2 θ1' sin(2θ2) − a3 θ2' sinθ2;
                  −(1/2) a2 θ1' sin(2θ2),   0] [θ1'; θ2'],                           (4.26)

    Vm(q, q')q' = [a2 θ1' θ2' sin(2θ2) − a3 (θ2')² sinθ2;
                  −(1/2) a2 (θ1')² sin(2θ2)],                                        (4.27)

which gives

    Fm − Vm(q, q')q' = [a7 V − a6 θ1' − a2 θ1' θ2' sin(2θ2) + a3 (θ2')² sinθ2;
                        a5 sinθ2 + (1/2) a2 (θ1')² sin(2θ2)],                        (4.28)

    [θ1''; θ2''] = M⁻¹(q)(Fm − Vm(q, q')q')                                          (4.29)

    = [a4/Δ,   −(a3 cosθ2)/Δ;   −(a3 cosθ2)/Δ,   (a1 + a2 sin²θ2)/Δ]
      [a7 V − a6 θ1' − a2 θ1' θ2' sin(2θ2) + a3 (θ2')² sinθ2;
       a5 sinθ2 + (1/2) a2 (θ1')² sin(2θ2)],                                         (4.30)

and hence the nonlinear acceleration equations

    θ1'' = (a4/Δ)[a7 V − a6 θ1' − a2 θ1' θ2' sin(2θ2) + a3 (θ2')² sinθ2]
           − ((a3 cosθ2)/Δ)[a5 sinθ2 + (1/2) a2 (θ1')² sin(2θ2)],                    (4.31)

    θ2'' = −((a3 cosθ2)/Δ)[a7 V − a6 θ1' − a2 θ1' θ2' sin(2θ2) + a3 (θ2')² sinθ2]
           + ((a1 + a2 sin²θ2)/Δ)[a5 sinθ2 + (1/2) a2 (θ1')² sin(2θ2)].              (4.32)

Linearizing about the upright equilibrium θ2 ≈ 0 gives cosθ2 ≈ 1 and sinθ2 ≈ θ2, and neglecting products of the small velocities gives θ1' θ2' sin(2θ2) ≈ 0, (θ2')² sinθ2 ≈ 0 and (θ1')² sin(2θ2) ≈ 0. The determinant then reduces to the constant

    Δ = (J1 + m2 l1²)(J2 + m2 c2²) − (m2 l1 c2)² = a1 a4 − a3²,

and Equations (4.31) and (4.32) reduce to

    θ1'' = (a4/Δ)(a7 V − a6 θ1') − (a3/Δ)(a5 θ2),                                    (4.33)

or, in terms of the physical parameters,

    θ1'' = [(J2 + m2 c2²)/Δ][(Kt/Rm)V − (Kt Kb/Rm)θ1'] − [(m2 l1 c2)/Δ](m2 c2 g θ2), (4.34)

    θ2'' = −(a3/Δ)(a7 V − a6 θ1') + (a1/Δ)(a5 θ2),                                   (4.35)

    θ2'' = −[(m2 l1 c2)/Δ][(Kt/Rm)V − (Kt Kb/Rm)θ1'] + [(J1 + m2 l1²)/Δ](m2 c2 g θ2). (4.36)

The numerical values of the mechanical and electrical parameters are given in Table 4.2.

Table 4.2: The mechanical and electrical parameters of the RIP

Parameter   Description                           Value (SI unit)
m1          Mass of the arm                       0.056 kg
m2          Mass of the pendulum                  0.022 kg
l1          Length of the arm                     0.16 m
l2          Length of the pendulum                0.16 m
c1          Distance to arm centre of mass        0.08 m
c2          Distance to pendulum centre of mass   0.08 m
J1          Inertia of the arm                    0.00215058 kg·m²
J2          Inertia of the pendulum               0.00018773 kg·m²
Rm          Armature resistance                   2.5604 Ω
Kb          Back-EMF constant                     0.01826 V·s/rad
Kt          Torque constant                       0.01826 N·m/A

Let x1 = θ1, x2 = θ2, x3 = θ1' and x4 = θ2'. With the physical parameters given in Table 4.2, the state-space representation of the linearized system is

    [x1'; x2'; x3'; x4'] = [0, 0, 1, 0;
                            0, 0, 0, 1;
                            0, −4.4256, −0.0390, 0;
                            0, 42.6498, 0.0334, 0] [x1; x2; x3; x4]
                           + [0; 0; 2.1334; −1.8286] V.                              (4.37)

The nonlinear representation in Equations (4.31) and (4.32) is required in the simulation of the RIP to evaluate the performances, and the linearized representation in Equation (4.37) is used in evaluating the closed-loop stability.
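As a quick illustrative check of Equation (4.37), the eigenvalues of A confirm that the upright RIP is open-loop unstable, with exactly one right-half-plane pole (this snippet is a sketch, not part of the thesis toolchain):

```python
import numpy as np

# Linearized RIP of Eq. (4.37); state x = [theta1, theta2, theta1', theta2']
A = np.array([[0.0, 0.0,      1.0,     0.0],
              [0.0, 0.0,      0.0,     1.0],
              [0.0, -4.4256, -0.0390,  0.0],
              [0.0, 42.6498,  0.0334,  0.0]])
B = np.array([[0.0], [0.0], [2.1334], [-1.8286]])

poles = np.linalg.eigvals(A)
unstable = poles.real > 1e-6     # strictly right-half-plane eigenvalues
```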

4.3.2 Integration of GCGA in the PID Tuning of RIP

In controlling the RIP, the modes of control can be divided into two categories:
i. The swing-up mode - bringing the pendulum from the lower equilibrium to positions near the upper equilibrium.
ii. The balancing mode - taking over from the swing-up mode to control the arm at the desired position while the pendulum stands still at the upper equilibrium.

The design of the controller in the swing-up mode is based on an estimate of the energy in the pendulum, so it is not suitable for the optimization method. Hence, this research focuses only on optimizing the PID controller parameters for the balancing mode. However, the swing-up mode is still required in the real implementation, so we designed a swing-up controller based on the work of Acosta (2010).

In this PID controller optimization, three time-domain performance measures are employed as the objective functions: the integrated time-weighted absolute error (ITAE), the overshoot (OS) and the settling time (ts). Because the closed-loop system is of sixth order, these performance measures cannot be evaluated analytically, so we evaluate them empirically from the simulated responses. Both the arm response θ1 and the pendulum response θ2 are desired at 0 radian. The ITAE function is given in Equation (4.38),

    ITAE = ( Σ_{t=0..T} |θ1(t)·t| + Σ_{t=0..T} |θ2(t)·t| ) / N,                      (4.38)

where T is the simulation period (20 s in our case) and N is the number of samples. The OS value is the maximum value of the arm response θ1, as in Equation (4.39),

    OS = max(θ1(t)).                                                                 (4.39)

The third performance measure, the settling time ts, is evaluated programmatically from the θ1 and θ2 responses. In our case, ts is defined as the time at which the response settles to within 5% of its desired position. If the settling time of θ1, tsθ1, is larger than the settling time of θ2, tsθ2, then ts = tsθ1, and vice versa.
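The three measures can be evaluated empirically from sampled responses along the following lines. This is an illustrative sketch, not the thesis code; in particular, defining the 5% settling band relative to the peak magnitude is an assumption here, since the desired position is 0 rad:

```python
import numpy as np

def objectives(t, th1, th2, band=0.05):
    """ITAE, OS and ts per Eqs. (4.38)-(4.39) from sampled responses;
    `band` is the settling band (assumed relative to the peak |response|)."""
    N = len(t)
    itae = (np.sum(np.abs(th1 * t)) + np.sum(np.abs(th2 * t))) / N   # Eq. (4.38)
    os_ = np.max(th1)                                                # Eq. (4.39)
    def settling(y):
        tol = band * np.max(np.abs(y))
        outside = np.where(np.abs(y) > tol)[0]     # last sample outside the band
        return t[outside[-1]] if len(outside) else t[0]
    ts = max(settling(th1), settling(th2))         # the larger of the two, as above
    return itae, os_, ts
```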

A good controller tuner provides optimized controller parameters that guarantee closed-loop stability. To realize this property, closed-loop stability is implemented as a constraint in our optimization problem. The closed-loop stability constraint cs is defined in Equation (4.40),

    cs = Σ_{i=1..N} Re(p*_i),                                                        (4.40)

where N is the number of poles lying in the right-hand side of the s-plane and p*_i are those poles. The larger the value of cs, the greater the constraint violation.
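Equation (4.40) translates directly into a small routine (an illustrative sketch):

```python
import numpy as np

def stability_violation(poles):
    """Closed-loop stability constraint per Eq. (4.40): the sum of the real
    parts of the right-half-plane poles; 0 means the constraint is satisfied."""
    re = np.real(np.asarray(poles))
    return float(re[re > 0].sum())
```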

Table 4.3 shows the GCGA parameter settings used to optimize the gains of the two PID controllers.

Table 4.3: Parameters of the PID controller optimization

Parameters                                  Values
No. of generations                          250
Population size                             100
Probability of crossover                    0.8
Probability of mutation                     0.1666
Distribution index in SBX                   20
Distribution index in polynomial mutation   20
Variable bounds, PID 1: Kp1 ∈ [0, 100], Ki1 ∈ [0, 50], Kd1 ∈ [0, 50]
Variable bounds, PID 2: Kp2 ∈ [0, 500], Ki2 ∈ [0, 300], Kd2 ∈ [0, 300]

Since the simulations are done with the assumption that the pendulum has already been brought near the upper equilibrium (0 radian) by the swing-up controller, we set the initial arm position θ1 at 0.29 radian and the pendulum position θ2 at 0 radian, with the desired position for both set at 0 rad.
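A minimal sketch of the two optimized PID loops is shown below, using the first row of gains later reported in Table 5.8. Both the discrete PID form and the summing of the two controller outputs into a single motor voltage are assumptions made for illustration; the thesis excerpt does not show the exact interconnection:

```python
class PID:
    """Minimal discrete PID (rectangular integration, backward-difference derivative)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i = 0.0       # integral state
        self.prev = 0.0    # previous error, for the derivative term
    def step(self, e):
        self.i += e * self.dt
        d = (e - self.prev) / self.dt
        self.prev = e
        return self.kp * e + self.ki * self.i + self.kd * d

# Gains from the first extreme solution in Table 5.8; dt is an assumed sample time.
pid_arm = PID(26.6439, 0.00418, 16.3553, dt=0.001)
pid_pend = PID(356.959, 0.0, 52.9887, dt=0.001)

# Hypothetical wiring: one control voltage from the sum of both loop outputs,
# with errors taken against the 0 rad references and theta1(0) = 0.29 rad.
u = pid_arm.step(0.0 - 0.29) + pid_pend.step(0.0 - 0.0)
```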

4.4 Summary

In this chapter, the basics of the evolutionary algorithm were explained and our MOGA, the Global Criterion Genetic Algorithm (GCGA), was introduced. Furthermore, the target plant, the rotational inverted pendulum (RIP), was introduced through its mathematical representation and the proposed control strategy, in preparation for the application of GCGA in the next chapter.
CHAPTER 5

RESULTS AND DISCUSSION

5.1 Introduction

In this chapter, the proposed GCGA is compared with NSGA-II on several test problems, and GCGA is applied to the optimization of the PID controllers for balancing the RIP. Both simulation and real-environment results of the PID gains produced by GCGA are covered.

5.2 Performance Evaluation of GCGA

In order to benchmark the proposed algorithm against NSGA-II, series of test problems called the ZDT and DTLZ test problems are chosen. The ZDT test set contains five problems, each posing a different type of search difficulty, while the DTLZ test set contains three problems with three objectives each. Moreover, two performance metrics are employed to evaluate the two distinct goals in multi-objective optimization: a convergence metric and a diversity metric.

5.2.1 ZDT and DTLZ Test Problems

In this section, GCGA is compared with one of the most well-known MOEAs, the Non-dominated Sorting Genetic Algorithm II (NSGA-II), using the ZDT and DTLZ test functions described in Table 5.1. The test series covers problems with various properties, such as convex, non-convex, disconnected and non-uniformly spaced fronts. All the symbols and descriptions for the test problems are given in Table 5.2.

Table 5.1: ZDT and DTLZ test problems used to evaluate the performance of GCGA and NSGA-II

ZDT1 (convex), n = 30, xi ∈ [0, 1]:
    f1(x) = x1
    f2(x) = g(x)[1 − √(x1/g(x))]
    g(x) = 1 + 9(Σ_{i=2..n} xi)/(n − 1)
    Optimal solutions: x1 ∈ [0, 1], xi = 0 for i = 2, …, n

ZDT2 (non-convex), n = 30, xi ∈ [0, 1]:
    f1(x) = x1
    f2(x) = g(x)[1 − (x1/g(x))²]
    g(x) = 1 + 9(Σ_{i=2..n} xi)/(n − 1)
    Optimal solutions: x1 ∈ [0, 1], xi = 0 for i = 2, …, n

ZDT3 (convex, disconnected), n = 30, xi ∈ [0, 1]:
    f1(x) = x1
    f2(x) = g(x)[1 − √(x1/g(x)) − (x1/g(x)) sin(10π x1)]
    g(x) = 1 + 9(Σ_{i=2..n} xi)/(n − 1)
    Optimal solutions: x1 ∈ [0, 1], xi = 0 for i = 2, …, n

ZDT4 (non-convex), n = 10, x1 ∈ [0, 1], xi ∈ [−5, 5] for i = 2, …, n:
    f1(x) = x1
    f2(x) = g(x)[1 − √(x1/g(x))]
    g(x) = 1 + 10(n − 1) + Σ_{i=2..n} [xi² − 10 cos(4π xi)]
    Optimal solutions: x1 ∈ [0, 1], xi = 0 for i = 2, …, n

ZDT6 (non-convex, non-uniformly spaced), n = 10, xi ∈ [0, 1]:
    f1(x) = 1 − exp(−4x1) sin⁶(6π x1)
    f2(x) = g(x)[1 − (f1(x)/g(x))²]
    g(x) = 1 + 9[(Σ_{i=2..n} xi)/(n − 1)]^0.25
    Optimal solutions: x1 ∈ [0, 1], xi = 0 for i = 2, …, n

DTLZ1, n = 10, xi ∈ [0, 1]:
    f1(x) = (1/2)(1 + g(xM)) x1 x2 … x_{m−1}
    f2(x) = (1/2)(1 + g(xM)) x1 x2 … (1 − x_{m−1})
    ⋮
    fm(x) = (1/2)(1 + g(xM))(1 − x1)
    g(xM) = 100[|xM| + Σ_{xi∈xM} ((xi − 0.5)² − cos(20π(xi − 0.5)))]
    Optimal solutions: xi = 0.5 ∀ xi ∈ xM

DTLZ2, n = 10, xi ∈ [0, 1]:
    f1(x) = (1 + g(xM)) cos(x1 π/2) cos(x2 π/2) … cos(x_{m−1} π/2)
    f2(x) = (1 + g(xM)) cos(x1 π/2) cos(x2 π/2) … sin(x_{m−1} π/2)
    ⋮
    fm(x) = (1 + g(xM)) sin(x1 π/2)
    g(xM) = Σ_{xi∈xM} (xi − 0.5)²
    Optimal solutions: xi = 0.5 ∀ xi ∈ xM

DTLZ3, n = 10, xi ∈ [0, 1]: as DTLZ2 but with
    g(xM) = 100[|xM| + Σ_{xi∈xM} ((xi − 0.5)² − cos(20π(xi − 0.5)))]
    Optimal solutions: xi = 0.5 ∀ xi ∈ xM
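As a concrete example of the test functions, ZDT1 from Table 5.1 can be written as follows (an illustrative sketch):

```python
import numpy as np

def zdt1(x):
    """ZDT1 from Table 5.1 (n = 30, x_i in [0, 1]); returns (f1, f2)."""
    x = np.asarray(x, dtype=float)
    f1 = x[0]
    g = 1.0 + 9.0 * x[1:].sum() / (len(x) - 1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return f1, f2
```

On the optimal front (xi = 0 for i ≥ 2) this reduces to f2 = 1 − √f1, the convex curve listed in the table.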

Table 5.2: Symbols and description for the test problems


Symbol Description
n Number of decision variables
m Number of objective functions
xM {xi ∀i = n − m + 1, n − m + 2,..., n}
M n – m +1

In order to ensure consistency in the results, each algorithm is run 10 times for each test problem. The user-defined parameters for both GCGA and NSGA-II in all the test functions are summarized in Table 5.3.

Table 5.3: GCGA and NSGA-II user defined parameters


Parameters setting Values
Number of generation 250
Population Size 100
Probability of Crossover 0.8
Probability of Mutation 1/n
Distribution index in SBX 20
Distribution index in polynomial mutation 20

5.2.2 Convergence Metric, Y

Convergence metric, Y, measures the extent of the convergence of the obtained solutions to a known set of Pareto-optimal solutions. For this metric to be computed, the set of Pareto-optimal solutions must be known. There are a few ways of calculating it, but here we present the most direct one: averaging the minimum normalised distances between the final non-dominated solutions f and the true Pareto front P*. These minimum normalised distances d can be computed using

    d_i = min_{j=1..|P*|} √( Σ_{k=1..M} [ (f_k(i) − f_k(j)) / (f_kmax − f_kmin) ]² ),    (5.1)

where fkmax and fkmin are the maximum and minimum values of the kth objective. Note that for every test problem we generate about 500 points along the Pareto front.
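Averaging Equation (5.1) over the obtained front can be sketched as follows (illustrative; `P` stands for a sampled true Pareto front):

```python
import numpy as np

def convergence_metric(F, P):
    """Convergence metric per Eq. (5.1): mean of each obtained solution's
    minimum normalised Euclidean distance to the sampled true front P."""
    fmin, fmax = P.min(axis=0), P.max(axis=0)
    span = np.where(fmax > fmin, fmax - fmin, 1.0)
    d = np.array([np.sqrt((((P - f) / span) ** 2).sum(axis=1)).min() for f in F])
    return d.mean()
```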

Table 5.4: Convergence metric for GCGA and NSGA-II

Test Problem   Algorithm   Mean          Variance
ZDT1           GCGA        0.001235      3.4890×10⁻⁸
               NSGA-II     0.001460      4.9333×10⁻⁸
ZDT2           GCGA        7.1039×10⁻⁴   3.3496×10⁻⁹
               NSGA-II     7.5031×10⁻⁴   5.3433×10⁻¹⁰
ZDT3           GCGA        0.003700      8.0004×10⁻⁸
               NSGA-II     0.003940      4.2666×10⁻⁸
ZDT4           GCGA        0.005120      3.4844×10⁻⁷
               NSGA-II     0.009020      1.2477×10⁻⁵
ZDT6           GCGA        0.028300      4.2333×10⁻⁸
               NSGA-II     0.033130      0.001426
DTLZ1          GCGA        0.035280      2.5748×10⁻⁵
               NSGA-II     0.173390      0.024348
DTLZ2          GCGA        0.025171      2.4158×10⁻⁵
               NSGA-II     0.030260      2.8371×10⁻⁵
DTLZ3          GCGA        0.063520      5.5314×10⁻⁴
               NSGA-II     0.128090      0.003411

From Table 5.4, GCGA has better convergence performance than NSGA-II on all the test problems. The average percentage improvement in convergence for GCGA with respect to NSGA-II over all the test problems is 35.57%. The critical ranking procedure in the global ranking fitness assignment has improved the convergence towards the true Pareto front.

5.2.3 Diversity Metric

Although the convergence metric itself provides some information about the diversity of the solutions, we choose another metric to represent diversity. The diversity metric, ∆, measures the diversity among the obtained non-dominated solutions P* with respect to a reference set (RS). In calculating diversity, P* is projected onto a hyper-plane, thereby losing one dimension of the points. For the test problems in this research, the values of f1 form the RS, discretized into a number of grids of width 0.01. The more grids that contain both a point of P* and a point of RS, the higher the metric value. The metric defines two arrays, H(i) and h(i), as presented in Equations (5.2) and (5.3):

    H(i) = 1 if grid i contains a point of the RS, and 0 otherwise,                  (5.2)

    h(i) = 1 if grid i contains a point of P* and H(i) = 1, and 0 otherwise.         (5.3)

Then the array m(h(i)) is calculated based on the neighbouring scheme in Table 5.5.

Table 5.5 Neighbouring scheme for m(h(i))


h(i - 1) h(i) h(i + 1) m(h(i))
0 0 0 0
0 0 1 0.5
1 0 0 0.5
0 1 1 0.67
1 1 0 0.67
1 0 1 0.75
0 1 0 0.75
1 1 1 1

Finally, the diversity metric, ∆, is computed using

    ∆ = [Σ_i m(h(i))] / [Σ_i m(H(i))].                                               (5.4)

For the diversity metric, under the assumption that the obtained final non-dominated solutions approach the global Pareto front, a higher metric value indicates better diversity among the solutions.
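The grid-based metric of Equations (5.2)-(5.4) can be sketched as below. This is an illustration, not the thesis code; the handling of the grid boundaries and the zero-padding of the neighbouring scheme at the ends are assumptions:

```python
import numpy as np

# Neighbouring scheme of Table 5.5: (h(i-1), h(i), h(i+1)) -> m(h(i))
NEIGHBOUR_VALUE = {(0, 0, 0): 0.0, (0, 0, 1): 0.5, (1, 0, 0): 0.5,
                   (0, 1, 1): 0.67, (1, 1, 0): 0.67, (1, 0, 1): 0.75,
                   (0, 1, 0): 0.75, (1, 1, 1): 1.0}

def diversity_metric(f1_obtained, f1_reference, width=0.01):
    """Diversity metric per Eqs. (5.2)-(5.4): grid occupancy of the obtained
    front versus the reference set along f1, scored by Table 5.5's scheme."""
    lo = f1_reference.min()
    n = int(np.ceil((f1_reference.max() - lo) / width)) + 1
    H = np.zeros(n, dtype=int)
    h = np.zeros(n, dtype=int)
    H[np.minimum(((f1_reference - lo) / width).astype(int), n - 1)] = 1
    cells = np.minimum(((f1_obtained - lo) / width).astype(int), n - 1)
    cells = cells[(f1_obtained >= lo) & (f1_obtained <= f1_reference.max())]
    h[cells] = 1
    h &= H                      # a cell counts only where the reference set lies
    def score(a):
        pad = np.concatenate(([0], a, [0]))
        return sum(NEIGHBOUR_VALUE[(pad[i - 1], pad[i], pad[i + 1])]
                   for i in range(1, len(pad) - 1))
    return score(h) / score(H)
```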

Table 5.6: The diversity metric for GCGA and NSGA-II

Test Problem   Algorithm   Mean       Variance
ZDT1           GCGA        0.590980   0.001357
               NSGA-II     0.543000   8.8838×10⁻⁴
ZDT2           GCGA        0.688790   0.001123
               NSGA-II     0.650970   6.0583×10⁻⁴
ZDT3           GCGA        0.484710   3.8253×10⁻⁴
               NSGA-II     0.573120   1.7533×10⁻⁴
ZDT4           GCGA        0.704550   7.3213×10⁻⁵
               NSGA-II     0.694380   0.001970
ZDT6           GCGA        0.616080   2.204951
               NSGA-II     0.678460   4.0070×10⁻⁴
DTLZ1          GCGA        0.591840   2.8918×10⁻⁴
               NSGA-II     0.527270   0.011351
DTLZ2          GCGA        0.638700   7.7315×10⁻⁴
               NSGA-II     0.659310   2.3672×10⁻⁴
DTLZ3          GCGA        0.568300   9.0227×10⁻⁴
               NSGA-II     0.509750   0.029022

From Table 5.6, GCGA has better (larger) diversity metrics on all test problems except ZDT3, ZDT6 and DTLZ2. This is not surprising, because the crowding distance technique in NSGA-II still works well for two- or three-objective optimization problems. However, the crowding distance technique has been shown to degrade diversity for larger numbers of objectives, as proved by di Pierro et al. (2007).

5.2.4 Comparison GCGA and NSGA-II in ZDT4 Test Problem

In order to further investigate the convergence and diversity properties, we take a closer look at the non-convex and challenging ZDT4 test function. The ZDT4 problem has 21⁹ local Pareto fronts, and this property makes it difficult for both algorithms to converge globally. Here, the performance of GCGA and NSGA-II from the initial population to the 250th generation was observed. The investigation started from the same initial population and used the same parameters as shown in Table 5.3. The figures in Table 5.7 show the transitions of the solutions at generations 1, 50, 100, 150, 200 and 250 for GCGA and NSGA-II. The figures visualize the distribution of the solutions in the two-objective space, f1 and f2.

Table 5.7: The transition of solutions at the Kth generation for GCGA and NSGA-II on the ZDT4 problem (plots of the objective space at K = 1, 50, 100, 150, 200 and 250 for each algorithm).

From Table 5.7, the continuous lines in all the plots represent the true Pareto front of ZDT4. At the 50th generation, GCGA shows a better convergence property, with its solutions scattered in the range f1 = [1, 3.5], compared to NSGA-II, whose solutions lie in the region f1 = [3, 5.5]. However, the solutions of NSGA-II are better distributed, whereas the solutions of GCGA are slightly crowded in a few places. At the 100th generation, the solutions of GCGA start to form the Pareto front, but the solutions of NSGA-II are still struggling to form the front as well as to maintain their diversity. At the 150th generation, the solutions from GCGA cover almost all the points on the Pareto front, while for NSGA-II there are still some dominated solutions quite far from it. At the 200th generation, almost all the solutions of GCGA have become non-dominated and cover nearly the entire Pareto front, compared to those of NSGA-II. Finally, at the 250th generation, both algorithms converge and cover approximately all the points on the Pareto front. It is interesting to note that the convergence of GCGA not only surpasses NSGA-II in approaching the true Pareto front, as shown by the convergence metrics in Table 5.4, but that this advantage actually starts in the earlier generations. However, GCGA tends to lose its diversity in the earlier generations (as shown at K = 50 in Table 5.7) before recovering as it approaches the Pareto front. This inconsistency in diversity may be caused by the critical ranking procedure in the global ranking mechanism. However, the introduction of the k-NN technique at the final stage of the optimization helps the solutions to become well distributed.

5.3 Procedure for choosing the desired solution from the final Pareto front of the GCGA optimization

Since GCGA optimizes the multiple control performance objectives simultaneously, the final Pareto front contains a variety of solutions in terms of performance. In this research, we do not intend to restrict users to a specific solution as the absolute one; instead, the following steps guide users in choosing a suitable solution for their application:
i. Determine the minimum requirements for each objective.
ii. If one or more solutions from the Pareto front satisfy the requirements, choose any of the satisfying solutions.
iii. If no solution from the Pareto front satisfies the requirements, prioritize the requirements and determine the desired solution based on that priority. The chosen solution may then fail to satisfy the less prioritized requirements.
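The three selection steps can be sketched as follows; `limits` and `priority` are hypothetical names for the user's minimum requirements and their ordering (minimisation of every objective is assumed):

```python
import numpy as np

def choose_solution(F, limits, priority):
    """Sketch of the selection steps above: `limits` holds the maximum
    acceptable value of each objective, `priority` orders the objectives
    from most to least important (both names are illustrative)."""
    F = np.asarray(F)
    ok = np.all(F <= limits, axis=1)
    if ok.any():
        return int(np.flatnonzero(ok)[0])     # step ii: any satisfying solution
    # step iii: lexicographic fallback in priority order
    keys = tuple(F[:, p] for p in reversed(priority))
    return int(np.lexsort(keys)[0])
```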

5.4 Simulation Results

All of the final 100 individuals in the GCGA simulation form the Pareto front, with every individual obeying the closed-loop stability constraint. This property is a credit to the GCGA optimization, since it offers the designer a wide choice of solutions instead of a single closed-loop-stable solution. Figure 5.1 shows the final Pareto front from the GCGA optimization in the three-objective space.

Figure 5.1: The final Pareto front from the GCGA optimization in the three-objective space (ITAE, OS and ts)

Figures 5.2 and 5.3 show the arm and pendulum responses of all the final solutions in the GCGA Pareto front.

Figure 5.2: The arm responses for individuals in the final pareto front
from GCGA

Figure 5.3: The pendulum responses for individuals in the final pareto front from
GCGA

The trade-off between ts, OS and ITAE can be seen in both figures; for instance, the solution with the minimum OS has a longer ts. This variety of solutions provided by GCGA helps designers analyse which solution suits their performance specifications. This is what makes the GCGA optimization unique compared to conventional methods: it does not provide one absolute solution to the designer but instead gives a set of possible optimized solutions to choose from for the application.

The PID parameters and the objective values of all the solutions in the Pareto front are provided in Appendix A. For clarity of investigation, we choose the three individuals that attain the minimum value of each objective among the 100 individuals in the Pareto front. These individuals, sometimes called the extreme solutions, have the minimum ITAE, ts and OS respectively. Since it is difficult to identify these individuals in the 3D plot of Figure 5.1, they are illustrated in the 2D plots of Figures 5.4 and 5.5.

Figure 5.4: The solutions with the minimum ts and the minimum OS in the Pareto front (OS versus ts)

Figure 5.5: The solutions with the minimum OS and the minimum ITAE in the Pareto front (ITAE versus OS)

The PID parameters and the objective values of the three extreme solutions are provided in Table 5.8.

Table 5.8: PID parameters and their objective values for the three extreme solutions from GCGA

Kp1       Ki1         Kd1       Kp2        Ki2   Kd2       ts (s)   OS       ITAE
26.6439   0.00418     16.3553   356.9590   0     52.9887   1.536    0.1018   2.8598
25.0361   1.67×10⁻¹⁴  17.3544   351.4830   0     57.3089   2.737    0.0717   2.8590
26.6439   2.78×10⁻¹⁴  16.3553   356.9591   0     52.9887   1.538    0.1011   2.8485

From Table 5.8, the integral gains Ki of the PID controllers for the arm are close to zero, while the controller for the pendulum requires only PD-type control. This shows that the GCGA optimization not only optimizes the objective functions but also selects the best PID type for the application. The arm and pendulum responses of these three individuals are shown in Figures 5.6 and 5.7.

Figure 5.6: The arm responses of the three individuals with the minimum objective values.

Figure 5.7: The pendulum responses of the three individuals with the minimum objective values.

The solution with the minimum OS exhibits a very different response shape from the other two. Such a result cannot be produced by single-objective optimization; it is the diversity preservation technique in GCGA that makes it possible. This also demonstrates that the GCGA optimization not only provides a good solution, but also gives designers a variety of options to choose from, depending on their application.

5.5 Real RIP Results

The PID optimization with GCGA has shown success in the simulation results. However, it remains to be proven that the optimization can be applied to the real target plant. Hence, the optimized PID parameters from the simulations are fed into the PID controllers of the real plant. The experiments begin by swinging up the pendulum with the swing-up controller; the pendulum is then balanced using the optimized controllers of the three extreme solutions. Figure 5.8 shows the arm responses and Figure 5.9 shows the pendulum responses for the three solutions.

Figure 5.8: Arm responses (in degrees, over 60 s) for the three extreme solutions (minimum ITAE, minimum OS and minimum ts) from the final Pareto front of GCGA

Figure 5.9: Pendulum responses (in degrees, over 60 s) for the three extreme solutions (minimum ITAE, minimum OS and minimum ts) from the final Pareto front of GCGA

Unlike the simulation responses, the real-plant responses exhibit small-angle oscillations of both the arm and the pendulum. These oscillations occur due to uncertainties in the mathematical model. This is a drawback of the GCGA-based optimization: the model needs to be well estimated.

5.6 Summary

The results in this chapter have shown that GCGA has a better convergence property than NSGA-II on the test problems. The wide choice of solutions in the resulting Pareto front gives designers the flexibility to select suitable PID parameters for their application. Moreover, the guarantee of closed-loop stability in the presented parameters makes the GCGA tuning reliable. In comparison with conventional PID tuning approaches, GCGA has several advantages:
i. The variety of solutions provided by GCGA is more reliable, because the closed-loop stability requirement is added to the optimization simultaneously.
ii. Plant complexity is not a factor in the difficulty of tuning a PID controller using GCGA.

However, PID tuning using GCGA faces several drawbacks:
i. The mathematical/dynamic model of the plant has to be precisely estimated, as a less precise model may result in less optimal PID parameters.
ii. GCGA requires a more complex coding structure than a simple GA and is less accessible to users who are not familiar with the fundamental principles of GA.
CHAPTER 6

CONCLUSION AND FUTURE RECOMMENDATIONS

6.1 Conclusion

A MOEA based on a combination of population-based and Pareto-based fitness assignment methods with a good convergence property has been proposed in this research. On the five ZDT and three DTLZ test problems, the proposed GCGA was able to converge faster than NSGA-II. The introduction of the critical global ranking fitness assignment in GCGA has sped up the convergence. However, in the early generations, GCGA tends to lose the diversity of its solutions. Hence, the diversity preservation technique in GCGA should be improved further to overcome this problem. Recommendations for improving this drawback are discussed in Section 6.2.

In the application to PID parameter tuning, GCGA successfully provided reliable and optimized PID parameters in both the simulation and the real-plant results. However, the real-plant results exhibit small-angle oscillations in the arm and pendulum responses; the uncertainties in the mathematical model of the RIP cause these oscillations.

Nevertheless, the capability of GCGA in tuning PID controller parameters, especially for high-order and unstable systems, overcomes the problems and restrictions of the conventional tuning approaches. The GCGA optimization was also found useful in determining the best PID type (PID, PD or PI). Hence, the optimization of controller tuning parameters using GCGA should find attention in the near future.

6.2 Future Recommendations

Although this research managed to tune the PID parameters well for a
complex plant such as the rotary inverted pendulum (RIP), there are several
suggestions for further research on the algorithm design and its
implementation on the RIP. Some suggestions for improving the system are:
i. Improving the K-NN diversity preservation technique in the GCGA
design. This can be done in two ways:
(a) Recomputing the K-NN distances each time a crowded individual is
removed from the non-dominated front, until the desired number of
non-dominated individuals is reached, so that the crowdedness of the
solutions is estimated accurately. In the original design of GCGA, the
crowded individuals are removed simultaneously, which may cause a slightly
inaccurate estimate.
(b) Replacing the K-NN technique with the hypervolume measure, which
works efficiently for a large number of objectives.
ii. In the derivation of the RIP model, several very small parameters were
neglected (assumed to be zero) in order to simplify the mathematical work.
If these parameters are well estimated, the small oscillations of the arm
and pendulum in the real-time implementation can be reduced. Parameters
such as the friction torque, Coulomb friction torque, static friction
torque and the Stribeck velocity can be estimated empirically by performing
a few experiments.
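Suggestion i(a) above amounts to removing the most crowded member one at a time and recomputing the K-NN crowding estimate after every removal, instead of pruning all crowded individuals in a single pass. A minimal Python sketch of this iterative pruning (function names and the small test front are illustrative; the thesis code additionally protects the extreme points of each objective):

```python
import math

def knn_crowding(points, k):
    """Crowding estimate of each point: sum of distances to its k
    nearest neighbours (larger value = less crowded)."""
    crowd = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        crowd.append(sum(dists[:k]))
    return crowd

def prune_iteratively(points, target, k=2):
    """Remove the most crowded point one at a time, recomputing the
    K-NN estimate after each removal, until `target` points remain."""
    pts = list(points)
    while len(pts) > target:
        crowd = knn_crowding(pts, k)
        pts.pop(crowd.index(min(crowd)))  # drop the most crowded point
    return pts

front = [(0.0, 1.0), (0.1, 0.9), (0.12, 0.88), (0.5, 0.5), (1.0, 0.0)]
print(prune_iteratively(front, target=3))
# [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
```

Recomputing after each removal costs more distance evaluations, but it avoids the situation where two neighbouring points mark each other as crowded and are both deleted at once, which is the source of the inaccuracy noted above.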

REFERENCES

Acosta, J. (2010). Furuta's Pendulum: A Conservative Nonlinear Model for Theory
and Practise. Mathematical Problems in Engineering, 29.
Astrom, K. J. and Hagglund, T. (1988). Automatic tuning of PID controllers.
Instrument Society of America, 7-8.
Åström, K. J. and Hägglund, T. (2001). The future of PID control. Control
Engineering Practice, 9, 1163-1175.
Bemporad, A., Morari, M., Dua, V. and Pistikopoulos, E. N. (2002). The explicit
linear quadratic regulator for constrained systems. Automatica, 38, 3-20.
Chen, G. 1992. Linear Controller Design: Limits of Performance. JSTOR.
Coello, C. A. C. and Lamont, G. B. (2004). Applications of multi-objective
evolutionary algorithms, World Scientific Pub Co Inc.
Coello, C. A. C., Lamont, G. B. and Van Veldhuizen, D. A. (2007). Evolutionary
algorithms for solving multi-objective problems, Springer-Verlag New York
Inc.
Coello, C. A. C. and Toscano Pulido, G. (2001). A micro-genetic algorithm for
multiobjective optimization. Springer, 126-140.
Cohen, G. H. and Coon, G. (1953). Theoretical consideration of retarded control.
Trans. Asme, 75, 827–834.
Cominos, P. and Munro, N. (2002). PID controllers: recent tuning methods and
design to specification. In, 2002. IET, 46-53.
Deb, K. (2000). An efficient constraint handling method for genetic algorithms.
Computer methods in applied mechanics and engineering, 186, 311-338.
Deb, K. (2001). Multi-objective optimization using evolutionary algorithms, Wiley.
Deb, K. and Agrawal, R. B. (1995). Simulated binary crossover for continuous
search space. Complex systems, 9, 115-148.
Deb, K., Pratap, A., Agarwal, S. and Meyarivan, T. (2002). A fast and elitist
multiobjective genetic algorithm: NSGA-II. Evolutionary Computation, IEEE
Transactions on, 6, 182-197.
Deb, K., Thiele, L., Laumanns, M. and Zitzler, E. (2005). Scalable test problems for
evolutionary multiobjective optimization. Evolutionary Multiobjective
Optimization, 105-145.

Di Pierro, F., Khu, S. T. and Savic, D. A. (2007). An investigation on preference
order ranking scheme for multiobjective evolutionary optimization.
Evolutionary Computation, IEEE Transactions on, 11, 17-45.
Duan, H., Wang, D. and Yu, X. (2006). Novel approach to nonlinear PID parameter
optimization using ant colony optimization algorithm. Journal of Bionic
Engineering, 3, 73-78.
Ender, D. B. (1993). Process control performance: Not as good as you think. Control
Engineering, 40, 180-190.
Fogel, L. J., Owens, A. J. and Walsh, M. J. (1966). Artificial intelligence through
simulated evolution.
Fonseca, C. M. and Fleming, P. J. (1993). Multiobjective genetic algorithms. In,
1993. IET, 6/1-6/5.
Gaing, Z. L. (2004). A particle swarm optimization approach for optimum design of
PID controller in AVR system. Energy Conversion, IEEE Transactions on,
19, 384-391.
Goldberg, D. E. and Holland, J. H. (1988). Genetic algorithms and machine learning.
Machine Learning, 3, 95-99.
Guliashki, V., Toshev, H. and Korsemov, C. (2009). Survey of evolutionary
algorithms used in multiobjective optimization. Problems of Engineering
Cybernetics and Robotics, Bulgarian Academy of Sciences.
Hajela, P. and Lin, C. Y. (1992). Genetic search strategies in multicriterion optimal
design. Structural and Multidisciplinary Optimization, 4, 99-107.
Ho, S. J., Shu, L. S. and Ho, S. Y. (2006). Optimizing fuzzy neural networks for
tuning PID controllers using an orthogonal simulated annealing algorithm
OSA. Fuzzy Systems, IEEE Transactions on, 14, 421-434.
Holland, J. (1975). Adaptation in natural and artificial systems. Ann Arbor,
MI: University of Michigan Press.
Hu, H., Hu, Q., Lu, Z. and Xu, D. (2005). Optimal pid controller design in pmsm
servo system via particle swarm optimization. In, 2005. Ieee, 5 pp.
Huang, W. and Lam, H. (1997). Using genetic algorithms to optimize controller
parameters for HVAC systems. Energy and Buildings, 26, 277-282.
Igel, C., Hansen, N. and Roth, S. (2007). Covariance matrix adaptation for multi-
objective optimization. Evolutionary Computation, 15, 1-28.

Johnson, M. A., Moradi, M. H. and Crowe, J. (2005). PID control: new identification
and design methods, Springer Verlag.
Kim, D. H. and Cho, J. H. (2005). Adaptive tuning of PID controller for
multivariable system using bacterial foraging based optimization. Advances
in Web Intelligence, 231-235.
Koza, J. R. (1992). Genetic programming: On the programming of computers by
natural selection. MIT Press, Cambridge, MA, USA.
Kukkonen, S. and Deb, K. (2006). Improved pruning of non-dominated solutions
based on crowding distance for bi-objective optimization problems. In, 2006.
Proceedings of the World Congress on Computational Intelligence (WCCI-
2006)(IEEE Press). Vancouver, Canada, 1179-1186.
Kursawe, F. (1991). A variant of evolution strategies for vector optimization.
Parallel Problem Solving from Nature, 193-197.
Mcmillan, G. K. (1983). Tuning and control loop performance. Instrument Society of
America, Research Triangle Park, NC.
Murata, T. and Ishibuchi, H. (1995). MOGA: Multi-objective genetic algorithms. In,
1995. IEEE, 289.
O'dwyer, A. (2009). Handbook of PI and PID controller tuning rules, Imperial
College Press.
Osyczka, A. (1985). Multicriteria optimization for engineering design. Design
optimization, 1, 193-227.
Pedersen, G. K. M. and Yang, Z. (2006). Multi-objective PID-controller tuning for a
magnetic levitation system using NSGA-II. In: Proceedings of the 8th annual
conference on Genetic and evolutionary computation, 2006. ACM, 1737-
1744.
Polyak, B. and Tempo, R. (2001). Probabilistic robust design with linear quadratic
regulators. Systems & Control Letters, 43, 343-353.
Popov, A., Farag, A. and Werner, H. (2005). Tuning of a PID controller using a
multi-objective optimization technique applied to a neutralization plant. In:
Decision and Control, 2005 and 2005 European Control Conference. CDC-
ECC'05. 44th IEEE Conference on, 2005. IEEE, 7139-7143.
Porter, B. and Jones, A. (1992). Genetic tuning of digital PID controllers. Electronics
Letters, 28, 843-844.
Rechenberg, I. (1973). Evolutionsstrategie, Frommann-Holzboog.

Sadasivarao, M. and Chidambaram, M. (2006). PID controller tuning of cascade
control systems using genetic algorithm. Journal of Indian Institute of
Science, 86, 343-354.
Schaffer, J. D. (1985). Some experiments in machine learning using vector evaluated
genetic algorithms. Vanderbilt Univ., Nashville, TN (USA).
Schwefel, H. P. (1965). Kybernetische Evolution als Strategie der experimentellen
Forschung in der Strömungstechnik. Master's thesis, Technical University of
Berlin.
Scott, D. A., Karr, C. L. and Schinstock, D. E. (1999). Genetic algorithm frequency-
domain optimization of an anti-resonant electromechanical controller.
Engineering Applications of Artificial Intelligence, 12, 201-211.
Spears, W. M. (2000). Evolutionary algorithms: the role of mutation and
recombination, Springer-Verlag New York Inc.
Srinivas, N. and Deb, K. (1994). Muiltiobjective optimization using nondominated
sorting in genetic algorithms. Evolutionary computation, 2, 221-248.
Van Overschee, P., Moons, C., Van Brempt, W., Vanvuchelen, P. and De Moor, B.
(1997). RAPID: the end of heuristic PID tuning. JOURNAL A, 38, 6-10.
Wang, X., Li, S., Chen, H. and Mei, Y. (2006). Multi-objective and Multi-district
Transmission Planning Based on NSGA-II and Cooperative Co-evolutionary
Algorithm. In: Zhongguo Dianji Gongcheng Xuebao(Proceedings of the
Chinese Society of Electrical Engineering), 2006. 11-15.
Zhao, J., Li, T. and Qian, J. (2005). Application of particle swarm optimization
algorithm on robust PID controller tuning. Advances in Natural Computation,
444-444.
Zhuang, M. and Atherton, D. (1993). Automatic tuning of optimum PID controllers.
In: Control Theory and Applications, IEE Proceedings D, 1993. IET, 216-
224.
Ziegler, J. and Nichols, N. (1942). Optimum settings for automatic controllers.
Transactions of the ASME, 64, 759-768.
Zitzler, E. and Thiele, L. (1999). Multiobjective evolutionary algorithms: A
comparative case study and the strength pareto approach. Evolutionary
Computation, IEEE Transactions on, 3, 257-271.
APPENDIX A
Matlab and Simulink code for RIP optimization using GCGA

1. GCGA_PID.m
generasi = input('No of Generation = ');           %User-defined no. of generations
populasi = input('No of Population = ');           %User-defined population size
M = input('No of Objectives = ');                  %User-defined no. of objectives
V = input('No of Decision Variables = ');          %User-defined no. of PID gains
xoverprob = input('Probability of Crossover = ');  %User-defined crossover probability
mutaprob = input('Probability of Mutation = ');    %User-defined mutation probability
mu = input('Distribution Index for Crossover = '); %User-defined crossover distribution index
mum = input('Distribution Index for Mutation = '); %User-defined mutation distribution index

%PID gains initialization%


x_min = zeros(1,V);
x_max = zeros(1,V);
%User defined the maximum and minimum of PID gains%
for m = 1 : V
a = sprintf('\nMinimum Value for Variable %d = ',m );
x_min(m) = input(a);
b = sprintf('\nMaximum Value for Variable %d = ',m);
x_max(m) = input(b);
end
tic
% initial population for PID gains
x = zeros(populasi,V); f = zeros(populasi,M);
for n1 = 1 : populasi
for n2 = 1 : V
x(n1,n2) = x_min(n2) + (x_max(n2) - x_min(n2))*rand;
end
%Simulation of pid gains to the nonlinear model of RIP
Kp1 = x(n1,1); Ki1 = x(n1,2); Kd1 = x(n1,3); Kp2 = x(n1,4); Ki2 = x(n1,5); Kd2 = x(n1,6);
[t,x1,y1,y2,y3] = sim('nonlinear_model',20);
[ts,os,ess,cs] = multi_ob(t,y1,y2,y3); %objective calculation%
f(n1,:) = [ts ess os];

end

%offspring initialization
new_f = zeros(populasi,M);
best_f = zeros(generasi,M);
for i = 1 : generasi;
clc
fprintf('\nThe Current Generations = %d \n',i);            %display the current generation
fprintf('\nObjective Values = %d %d %d %d \n',f(1,1 : M)); %display the current objective values

stability = stability_measure(x);                     %closed-loop stability calculation
sum_f = global_ranking(f);                            %global ranking fitness assignment
[mating_pool] = binary_tournament(x,sum_f,stability); %selection process to mating pool
new_pop = genetic(mating_pool,mu,mum,x_min,x_max,xoverprob,mutaprob); %genetic process

%simulation of RIP for the offspring


for n = 1 : populasi;
Kp1 = new_pop(n,1); Ki1 = new_pop(n,2); Kd1 = new_pop(n,3);
Kp2 = new_pop(n,4); Ki2 = new_pop(n,5); Kd2 = new_pop(n,6);
[t,x1,y1,y2,y3] = sim('nonlinear_model',20);
[ts,os,ess,cs] = multi_ob(t,y1,y2,y3);
new_f(n,:) = [ts ess os];
end
total_x = vertcat(x,new_pop);
total_f = vertcat(f,new_f);

%Elitism mechanism
[total_x,total_f,total_rank,total_cd] = non_dom_sort(total_x,total_f);
[x,f,indx] = update_pop(total_x,total_f,total_rank,total_cd);
best_f(i,:) = [min(f(:,1)) min(f(:,2)) min(f(:,3))];
%ploting the pareto front
if indx <= populasi
    plot3(f(:,1),f(:,2),f(:,3),'linestyle','none','marker','O');
else
    plot3(f(:,1),f(:,2),f(:,3),'linestyle','none','marker','O','color','red');
end
grid

F(i) = getframe;
end
toc

2. multi_ob.m
function [ts,os,ess,cs] = multi_ob(t,y1,y2,y3)

%%settling time calculation%%


indts1 = find(y1 < -0.05 | y1 > 0.05,1,'last');
indts2 = find(y2 < -0.05 | y2 > 0.05,1,'last');
if isempty(indts1) || isempty(indts2)
ts = 20;
else
if t(indts1) > t(indts2)
ts = t(indts1);
else
ts = t(indts2);
end
end
%%overshoot calculation%%
if ts == 20;
os = 5;

else
susun = sort(y1);
os = abs(max(susun));
end

%%ITAE (time-weighted absolute error) calculation%%
for i = 1 : size(y1,1)
er(i) = (t(i))*abs(y1(i));
end
ess = sum(er)./size(t,1);

%%control signal error%%


for i = 1 : size(y1,1)
es(i) = (t(i))*abs(y3(i));
end
cs = sum(es)./size(t,1);

3. stability_measure.m
function stability = stability_measure(x)

stability = zeros(size(x,1),1);
for m = 1 : size(x,1)
Kp1 = x(m,1); Ki1 = x(m,2); Kd1 = x(m,3); Kp2 = x(m,4); Ki2 = x(m,5); Kd2 = x(m,6);
A = [0 0 1 0;0 0 0 1; 0 -4.425626674046159 -0.038954340258659 0;
0 42.649809572260509 0.033389434507422 0];
B = [0; 0; 2.133378267409323; -1.828609943493705];
C = [1 0 0 0; 0 1 0 0];
D = [0; 0];

plant = ss(A,B,C,D,'inputname','e','outputname',{'y1' 'y2'});

PID1 = Kp1 + tf(Ki1,[1,0]) + tf([Kd1,0],[0,1]);
set(PID1,'inputname','i2','outputname','i3');
PID2 = Kp2 + tf(Ki2,[1,0]) + tf([Kd2,0],[0,1]);
set(PID2,'inputname','y2','outputname','i4');
sum1 = ss([-1 1],'inputname',{'thetad','y1'},'outputname','i2');
sum2 = ss([1 1],'inputname',{'i3','i4'},'outputname','e');
controller = connect(PID1,PID2,sum2,{'i2' 'y2'},'e');
sys = connect(PID1,plant,PID2,sum1,sum2,'thetad',{'y1' 'y2'});

real_pole = real(pole(sys));
im_pole = imag(pole(sys));
stab = 0;
for n = 1 : size(real_pole,1)
if real_pole(n) > 0
stab = stab + 1 + real_pole(n);
elseif (real_pole(n) == 0) && (im_pole(n)~=0);
stab = stab + 1;
end
end
stability(m) = stab;
end

4. global_ranking.m
function sum_f = global_ranking(f)
[populasi,M] = size(f);
if populasi == 0;
sum_f = [];
else
f_sort = zeros(populasi,M);
indx = zeros(populasi,M);
f_sorted = zeros(populasi,M);

for i = 1 : M
[f_sort(:,i),indx(:,i)] = sort(f(:,i));
end

for i = 1 : M
for j = 1 : populasi
f_sorted(indx(j,i),i) = j;
end
end
f_rank = zeros(populasi,M);
f_rank2 = zeros(populasi,M);

%%for every objective, sort the values in ascending order%%


for m = 1 : M
[ff_sort(:,m),indx2(:,m)] = sort(f_sorted(:,m));
f_rank(:,m) = 1 : populasi;
i = 1;
%%to set the same ranks to the same objective value%%
while i <= populasi - 1
a = 1;
j = i + 1;
while j <= populasi
if ff_sort(i,m) == ff_sort(j,m)
f_rank(j,m) = f_rank(i,m);
a = a + 1;
end
j = j + 1;
end
i = i + a;
f_rank(i : populasi,m) = f_rank(i:populasi,m)-(a - 1);
end
%%to sort back to initial indices
for n = 1 : populasi
f_rank2(indx2(n,m),m)= f_rank(n,m);
end
end
sum_f = zeros(populasi,1);
%%summation of ranks%%
for i = 1 : populasi
sum_f(i) = sum(f_rank2(i,:));
end
end

5. binary_tournament.m
function [mating_pool] = binary_tournament(x,ranking,stability)

full_pool = x;
full_rank = ranking;
full_stab = stability;
num = size(x,1);
mating_pool =[];
i = 1;
a = num;
while size(mating_pool,1) <= num/2 - 1;
indx_11 = ceil(1 + (a - 1).*rand);
indx_12 = ceil(1 + (a - 1).*rand);
while indx_11 == indx_12
indx_12 = ceil(1 + (a - 1).*rand);
end
if full_stab(indx_11) > full_stab(indx_12)
indx1 = indx_12;
elseif full_stab(indx_11) == full_stab(indx_12)
if full_rank(indx_11) > full_rank(indx_12)
indx1 = indx_12;
else
indx1 = indx_11;
end
else
indx1 = indx_11;
end
mating_pool(i,:) = full_pool(indx1,:);
i = i + 1;
full_pool(indx1,:) = [];
full_rank(indx1) = [];
full_stab(indx1) = [];
a = a - 1;
end

6. genetic.m
function new_pop = genetic(mating_pool,mu,mum,min,max,xoverprob,mutaprob)

[a,V] = size(mating_pool);
new_pop = zeros(2*a,V);
for i = 1 : a;
indx_11 = ceil(1 + (a - 1).*rand);
indx_12 = ceil(1 + (a - 1).*rand);
while indx_11 == indx_12
indx_12 = ceil(1 + (a - 1).*rand);
end
parent1 = mating_pool(indx_11,:);
parent2 = mating_pool(indx_12,:);
[y1,y2,flag1,flag2] = new_sbx(parent1,parent2,mu,min,max,xoverprob);
new_pop((2.*(i - 1) + 1),:) = new_polynomial_mutation(y1,mum,max,min,mutaprob,flag1);
new_pop((2.*(i)),:) = new_polynomial_mutation(y2,mum,max,min,mutaprob,flag2);
end

7. new_sbx.m
function [y1,y2,flag1,flag2] = new_sbx(parent1,parent2,mu,x_min,x_max,xoverprob)
V = size(parent1,2);
y1 = zeros(1,V);
y2 = zeros(1,V);
xrand = rand;
if (xrand > xoverprob)
y1 = parent1;
y2 = parent2;
flag1 = 0;
flag2 = 0;
else
for n = 1 : V;
u = rand;
if u >= 0.5
            %c1 = smaller parent value, c2 = larger parent value
            if parent1(n) > parent2(n)
                c1 = parent2(n);
                c2 = parent1(n);
            else
                c1 = parent1(n);
                c2 = parent2(n);
            end
            %the spread is limited by the distance to the nearest bound
            if (c1 - x_min(n)) > (x_max(n) - c2)
                dum = (x_max(n) - c2);
            else
                dum = (c1 - x_min(n));
            end
            if ((c2 - c1) > 0.000000001)
                B = 1 + (2./(c2 - c1))*dum;
alpha = 2 - B.^(mu+1);
z = rand;
if z <= 1/alpha
yy = (z*alpha).^(1./(mu + 1));
else
yy = (1./(2 - z*alpha)).^(1/(mu + 1));
end
else
yy = 1;
end
y1(n) = 0.5*((c1 + c2) - yy*(c2 - c1));
y2(n) = 0.5*((c1 + c2) + yy*(c2 - c1));
if y1(n) < x_min(n)
y1(n) = x_min(n);
elseif y1(n) > x_max(n)
y1(n) = x_max(n);
else
y1(n) = y1(n);
end
if y2(n) < x_min(n)
y2(n) = x_min(n);
elseif y2(n)> x_max(n)
y2(n) = x_max(n);
else
y2(n) = y2(n);
end
else
y1(n) = parent1(n);
y2(n) = parent2(n);

end
end
flag1 = 1;
flag2 = 1;
end

8. new_polynomial_mutation.m
function child = new_polynomial_mutation(x,mum,x_max,x_min,mutaprob,flag)
V = size(x,2);
child = x;
flag_m = zeros(1,V);
for j = 1 : V
mutrand = rand;
if mutrand > mutaprob;
child(j) = x(j);
flag_m(j) = 0;
else
        if ((x(j) - x_min(j))/(x_max(j) - x_min(j))) < ((x_max(j) - x(j))/(x_max(j) - x_min(j)))
            alpha = (x(j) - x_min(j))./(x_max(j) - x_min(j));
else
alpha = (x_max(j) - x(j))./(x_max(j) - x_min(j));
end
r = rand;
        if r <= 0.5
            delta = ((2*r) + (1 - 2*r)*(1 - alpha).^(mum + 1)).^(1./(mum + 1)) - 1;
        else
            delta = 1 - (2*((1 - r) + 2*(r - 0.5)*(1 - alpha).^(mum + 1))).^(1/(mum + 1));
        end
child(j) = x(j) + delta.*(x_max(j) - x_min(j));
if child(j) > x_max(j)
child(j) = x_max(j);
elseif child(j) < x_min(j)
child(j) = x_min(j);
end
flag_m(j) = 1;
end
end
t_flag_m = sum(flag_m);

if (t_flag_m == 0)&&(flag == 0)
j = ceil(rand*V);
    if ((x(j) - x_min(j))/(x_max(j) - x_min(j))) < ((x_max(j) - x(j))/(x_max(j) - x_min(j)))
        alpha = (x(j) - x_min(j))./(x_max(j) - x_min(j));
else
alpha = (x_max(j) - x(j))./(x_max(j) - x_min(j));
end
r = rand;
    if r <= 0.5
        delta = ((2*r) + (1 - 2*r)*(1 - alpha).^(mum + 1)).^(1./(mum + 1)) - 1;
    else
        delta = 1 - (2*((1 - r) + 2*(r - 0.5)*(1 - alpha).^(mum + 1))).^(1/(mum + 1));
    end
child(j) = x(j) + delta.*(x_max(j) - x_min(j));
if child(j) > x_max(j)
child(j) = x_max(j);
elseif child(j) < x_min(j)
child(j) = x_min(j);
end
end

9. non_dom_sort.m
function [x,f,ranking,cd] = non_dom_sort(x,f)

[N,M] = size(f);
V = size(x,2);
front = 1;
F(front).f = [];
individual = [];
ranking = zeros(N,1);
for i = 1 : N
individual(i).n = 0;
individual(i).p = [];
for j = 1 : N
dom_less = 0;
dom_equal = 0;
dom_more = 0;
for k = 1 : M
if f(i,k) < f(j,k)
dom_less = dom_less + 1;
elseif f(i,k) == f(j,k)
dom_equal = dom_equal + 1;
else
dom_more = dom_more + 1;
end
end
if (dom_less == 0) && (dom_equal ~= M)
individual(i).n = individual(i).n + 1;
elseif (dom_more == 0) && (dom_equal ~= M)
individual(i).p = [individual(i).p j];
end
end
if individual(i).n == 0;
ranking(i) = 1;
F(front).f = [F(front).f i];
end
end

while ~isempty(F(front).f)
Q = [];
for i = 1 : length(F(front).f)
if ~isempty(individual(F(front).f(i)).p)
            for j = 1 : length(individual(F(front).f(i)).p)
                individual(individual(F(front).f(i)).p(j)).n = individual(individual(F(front).f(i)).p(j)).n - 1;
                if individual(individual(F(front).f(i)).p(j)).n == 0
                    ranking(individual(F(front).f(i)).p(j)) = front + 1;
                    Q = [Q individual(F(front).f(i)).p(j)];

end
end
end
end
front = front + 1;
F(front).f = Q;
end

x = horzcat(x,f,ranking);

[temp,index_of_fronts] = sort(ranking);
for i = 1 : length(index_of_fronts)
sorted_based_on_front(i,:) = x(index_of_fronts(i),:);
end
current_index = 0;
for front = 1 : (length(F) - 1)
distance = 0;
y = [];
previous_index = current_index + 1;
for i = 1 : length(F(front).f)
y(i,:) = sorted_based_on_front(current_index + i,:);
end
current_index = current_index + i;
% Sort each individual based on the objective
sorted_based_on_objective = [];
for i = 1 : M
[sorted_based_on_objective, index_of_objectives] = ...
sort(y(:,V + i));
sorted_based_on_objective = [];
for j = 1 : length(index_of_objectives)
            sorted_based_on_objective(j,:) = y(index_of_objectives(j),:);
end
        f_max = sorted_based_on_objective(length(index_of_objectives), V + i);
        f_min = sorted_based_on_objective(1, V + i);
        y(index_of_objectives(length(index_of_objectives)),M + V + 1 + i) = Inf;
        y(index_of_objectives(1),M + V + 1 + i) = Inf;
for j = 2 : length(index_of_objectives) - 1
next_obj = sorted_based_on_objective(j + 1,V + i);
previous_obj = sorted_based_on_objective(j - 1,V + i);
if (f_max - f_min == 0)
y(index_of_objectives(j),M + V + 1 + i) = Inf;
else
y(index_of_objectives(j),M + V + 1 + i) = ...
(next_obj - previous_obj)/(f_max - f_min);
end
end
end
distance = [];
distance(:,1) = zeros(length(F(front).f),1);
for i = 1 : M
distance(:,1) = distance(:,1) + y(:,M + V + 1 + i);
end
y(:,M + V + 2) = distance;
y = y(:,1 : M + V + 2);
z(previous_index:current_index,:) = y;
end

x = z( : ,1 : V );
f = z(:, V + 1 : V + M);
ranking = z(:,V + M + 1);
cd = z(:,V + M + 2);

10. update_pop.m
function [x_new,f_new,indx] = update_pop(total_x,total_f,total_rank,total_cd)

TN = size(total_x,1);

a = 1;
while a < size(total_x,1)
b = a + 1;
    while b <= size(total_x,1)
        if isequal(total_x(a,:),total_x(b,:))
            %remove the duplicate without advancing b, since the next
            %row shifts into position b after deletion
            total_x(b,:) = [];
            total_f(b,:) = [];
            total_rank(b) = [];
            total_cd(b) = [];
        else
            b = b + 1;
        end
    end
a = a + 1;
end

[DN,V] = size(total_x);
M = size(total_f,2);
x_sorted1 = zeros(DN , V);
f_sorted1 = zeros(DN , M);
rank_sorted1 = zeros(DN,1);

[total_sort5,indx5] = sort(total_cd,'descend');
for j = 1 : DN
x_sorted5(j,:) = total_x(indx5(j),:);
f_sorted5(j,:) = total_f(indx5(j),:);
rank_sorted5(j) = total_rank(indx5(j));
cd_sorted5(j) = total_cd(indx5(j));
end

[total_sort1,indx1] = sort(rank_sorted5);
for j = 1 : DN
x_sorted1(j,:) = x_sorted5(indx1(j),:);
f_sorted1(j,:) = f_sorted5(indx1(j),:);
rank_sorted1(j) = rank_sorted5(indx1(j));
cd_sorted1(j) = cd_sorted5(indx1(j));
end

indx = find(rank_sorted1 == 1,1,'last');

if indx <= TN./2


x_new = x_sorted1(1 : TN/2, :);
f_new = f_sorted1(1 : TN/2, :);
else
x_non_dom = x_sorted1(1:indx,:);

f_non_dom = f_sorted1(1:indx,:);
K = ceil(sqrt(indx - TN./2)) + 1;
for m = 1 : M
mean1(m) = mean(f_non_dom(:,m));
std1(m) = std(f_non_dom(:,m));
end

for n = 1 : indx
for m = 1 : M
f2(n,m) = (f_non_dom(n,m) - mean1(m))./std1(m);
end
end

for n = 1 : indx
for p = 1 : indx
for m = 1 : M
d1(m) = (f2(n,m) - f2(p,m)).^2;
end
d2(p) = sqrt(sum(d1));
end
[d3,indx3] = sort(d2);
crowd1(n) = sum(d3(1:K));
end
for m = 1 : M
[ff,ind] = sort(f_non_dom(:,m));
crowd1(ind(1)) = inf;
crowd1(ind(indx)) = inf;
end

sum_crowd = zeros(indx,1);
for n = 1 : indx
% sum_crowd(n) = sum(crowd1(n,:));
sum_crowd(n) = crowd1(n);
end
[total_sort2,indx2] = sort(sum_crowd,'descend');
for j = 1 : indx
x_sorted2(j,:) = x_non_dom(indx2(j),:);
f_sorted2(j,:) = f_non_dom(indx2(j),:);
end
x_new = x_sorted2(1 : TN/2, :);
f_new = f_sorted2(1 : TN/2, :);
end

11. nonlinear_model.mdl

[Simulink block diagrams of the nonlinear RIP model: (i) Subsystem, (ii) delta, (iii) 3-4-1, (iv) 3-4-2 and (v) Output. Diagrams not reproduced in this text version.]
APPENDIX B

The PID gains and objective values for all points in the Pareto front (ts = settling time, OS = overshoot, ITAE = integral of time-weighted absolute error).

Kp1 Ki1 Kd1 Kp2 Ki2 Kd2 ts OS ITAE

26.6439 0.00418 16.3553 356.959 0 52.9887 1.536 0.0918 2.8598

25.0361 1.67E-14 17.3544 351.483 0 57.3089 1.737 0.0717 2.8590

26.6439 2.78E-14 16.3553 356.9591 0 52.9887 1.53 0.1011 2.8485

25.0241 2.78E-14 17.3544 351.5173 0 57.2571 3.902 0.1885 2.6764

25.0241 1.89E-13 17.3544 351.4904 0 57.2318 3.872 0.3356 2.6645

25.0241 1.67E-14 17.3544 351.3896 0 57.3089 2.789 0.3318 2.6760

25.0361 0 17.3544 351.3896 0 57.3089 2.699 0.0822 2.8522

25.1950 0 17.2537 350.7637 0 57.2571 2.419 0.0737 2.8538

25.0241 2.78E-14 17.3544 351.4323 0 57.2571 3.031 0.1793 2.6959

25.0241 5.55E-14 17.3544 351.4832 0 57.2571 2.199 0.2511 2.7376

25.1950 0.000264 17.2537 350.7637 0.00382 57.2571 2.665 0.2267 2.6796

25.0241 5.55E-15 17.3640 351.5173 0 57.3132 3.549 0.2138 2.6761

25.0241 5.55E-15 17.3544 351.5758 0 57.2318 1.765 0.0843 2.7957

25.0241 7.22E-14 17.3544 351.4904 0 57.2571 2.584 0.1740 2.7103

25.0361 1.11E-14 17.3544 351.5758 0 57.2571 2.968 0.2242 2.6735

25.0361 1.89E-13 17.3544 351.5758 0 57.2318 2.237 0.1168 2.7608



25.0241 6.11E-14 17.3640 351.4323 0 57.3132 2.502 0.0843 2.7945

25.0241 4.44E-14 17.3640 351.4832 0 57.3132 2.562 0.2642 2.6987

25.0241 5.55E-14 17.3640 351.5758 0 57.3089 2.510 0.0846 2.7945

25.0241 4.44E-14 17.3640 351.4904 0 57.3089 2.376 0.2572 2.7127

25.0241 6.11E-14 17.3640 351.4323 0 57.3132 1.698 0.1064 2.8446

26.6439 8.33E-14 16.3553 356.9591 0 52.9887 2.575 0.1931 2.7022

25.0361 5.55E-15 17.3544 351.4832 0 57.3089 1.728 0.1362 2.8011

25.0241 3.33E-14 17.3544 351.5173 0 57.2571 3.227 0.1349 2.7184

25.0361 7.77E-14 17.3544 351.3896 0 57.3089 1.958 0.0888 2.7833

25.0241 7.77E-14 17.3544 351.4904 0 57.2318 2.216 0.1925 2.7266

25.0241 6.11E-14 17.3544 351.4323 0 57.2571 3.777 0.1716 2.6936

25.0241 1.89E-13 17.3544 351.4832 0 57.2571 3.688 0.1771 2.6961

25.1950 4.63E-05 17.2537 350.7637 0.005846 57.2571 2.585 0.2077 2.7006

25.0241 0 17.3640 351.5173 0 57.3132 2.699 0.2547 2.6765

25.0241 0 17.3544 351.3896 0 57.3089 2.698 0.2546 2.6765

25.1950 5.55E-15 17.2537 350.7637 0 57.2571 2.328 0.1606 2.7358

25.0241 0 17.3544 351.5758 0 57.2318 2.02 0.0984 2.7768

25.0241 1.89E-13 17.3544 351.4904 0 57.2571 2.753 0.0914 2.7823

25.0241 1.67E-14 17.3640 351.4904 0 57.3089 2.128 0.1305 2.7580



25.0361 5.55E-14 17.3544 351.5758 0 57.2571 2.548 0.2316 2.7020

25.0241 7.22E-14 17.3640 351.4323 0 57.3132 1.738 0.1081 2.7931

25.0241 0 17.3640 351.4832 0 57.3132 2.979 0.1404 2.7174

25.0361 5.55E-14 17.3544 351.5758 0 57.2318 2.324 0.1943 2.7155

25.0241 1.11E-14 17.3640 351.5758 0 57.3089 2.097 0.1155 2.7678

25.0241 1.11E-14 17.3640 351.5758 0 57.3089 2.775 0.2079 2.6946

25.0241 0 17.3544 351.3896 0 57.3089 2.938 0.2142 2.6743

25.0361 0 17.3544 351.4832 0 57.3089 3.845 0.1863 2.6817

26.6439 1.11E-14 16.3553 356.9591 0 52.9887 2.817 0.1307 2.7512

25.1950 1.11E-14 17.2537 350.7637 0 57.2571 2.212 0.1462 2.7372

25.0241 2.22E-14 17.3544 351.5173 0 57.2571 2.207 0.2152 2.7413

24.9814 0 17.3640 351.4378 0 57.227 3.061 0.1124 2.7631

25.0241 8.33E-14 17.3544 351.4904 0 57.2313 3.015 0.1248 2.7572

25.0241 3.33E-14 17.3544 351.4832 0 57.2571 2.167 0.1819 2.7415

25.0241 5.55E-15 17.3640 351.4832 0 57.3132 1.839 0.1088 2.7833

25.0241 6.11E-14 17.3544 351.4378 0 57.2571 2.679 0.1133 2.7730

25.0241 0 17.3640 351.4323 0 57.3132 2.059 0.1465 2.7734

25.0241 7.22E-14 17.3640 351.5173 0 57.3132 2.376 0.1461 2.7241

25.0241 1.67E-14 17.3544 351.3866 0 57.3089 1.854 0.1579 2.7736



25.0241 1.11E-14 17.3544 351.5758 0 57.2318 2.229 0.2354 2.7172

25.0241 8.33E-14 17.3544 351.4904 0 57.2571 2.479 0.1522 2.7164

25.0361 5.55E-14 17.3544 351.5758 0 57.2318 3.629 0.1972 2.6856

25.0241 4.44E-14 17.3640 351.5758 0 57.3089 3.00 0.1402 2.7310

25.0241 1.89E-13 17.3544 351.4323 0 57.2571 3.127 0.1401 2.7161

25.0241 3.89E-14 17.3640 351.4904 0 57.3089 3.634 0.1968 2.6843

25.0241 3.89E-14 17.3640 351.4904 0 57.3089 2.839 0.1972 2.6859

25.0361 5.55E-15 17.3544 351.5758 0 57.2571 1.782 0.0725 2.8579

25.0361 2.22E-14 17.3544 351.4832 0 57.3089 1.935 0.1216 2.7748

26.6439 5.55E-15 16.3553 356.9591 0 52.9887 2.375 0.1754 2.7233

25.1950 1.67E-14 17.2537 350.7637 0 57.2571 2.858 0.1411 2.7464

25.0241 8.33E-14 17.3640 351.4832 0 57.3089 2.136 0.1621 2.7495

25.0241 4.44E-14 17.3544 351.4904 0 57.2318 2.821 0.2008 2.6808

25.0241 5E-14 17.3544 351.3866 0 57.3089 2.870 0.1410 2.7379

25.0241 4.44E-14 17.3544 351.4832 0 57.2571 2.375 0.1746 2.7250

25.0241 5.55E-14 17.3640 351.4323 0 57.3132 1.766 0.1228 2.7869

25.0241 7.77E-14 17.3640 351.5173 0 57.3132 2.475 0.1547 2.7228

25.0241 3.33E-14 17.3640 351.4904 0 57.3089 2.944 0.0998 2.7714

25.0241 7.77E-14 17.3544 351.5758 0 57.2318 2.239 0.2241 2.7263



25.0241 1.17E-13 17.3544 351.4904 0 57.2571 2.182 0.1488 2.7471

25.0241 1.11E-14 17.3640 351.4832 0 57.3132 1.54 0.0817 2.8597

25.0241 4.44E-14 17.3544 351.4378 0 57.2571 2.497 0.2500 2.7026

25.0241 8.33E-14 17.3544 351.5173 0 57.2571 2.798 0.0999 2.7739

25.0241 0 17.3544 351.4323 0 57.2571 2.500 0.2429 2.7040

25.0361 0 17.3544 351.5758 0 57.2571 2.308 0.2105 2.7150

25.0361 0 17.3544 351.5758 0 57.2318 2.309 0.2149 2.7147

25.0241 2.78E-14 17.3640 351.5758 0 57.3089 1.743 0.1331 2.7885

25.0361 2.78E-14 17.3544 351.4832 0 57.3089 2.117 0.2184 2.7474

26.6439 0 16.3553 356.9591 0 52.9887 2.007 0.1807 2.7665

25.0241 5.55E-14 17.3544 351.5173 0 57.2571 1.918 0.1278 2.7778

25.1950 6.11E-14 17.2537 350.7637 0 57.2571 2.137 0.1795 2.7471

25.0241 0 17.3640 351.4832 0 57.3089 2.677 0.1058 2.7744

25.0241 3.89E-14 17.3544 351.4904 0 57.2318 2.915 0.1071 2.7696

25.0241 4.44E-14 17.3544 351.3866 0 57.3089 2.107 0.1496 2.7590

25.0241 5.55E-15 17.3544 351.4323 0 57.2571 1.621 0.0759 2.8594

25.0241 1.11E-14 17.3544 351.4832 0 57.2571 2.291 0.2307 2.7234

25.0241 7.77E-14 17.3640 351.4832 0 57.3132 3.153 0.1236 2.7369

25.0241 5E-14 17.3544 351.4378 0 57.2571 2.024 0.2209 2.7603



25.0241 0 17.3640 351.4323 0 57.3132 2.088 0.1716 2.7570

25.0241 8.33E-14 17.3640 351.5173 0 57.3132 2.069 0.2065 2.7522

25.0241 2.22E-14 17.3640 351.4904 0 57.3089 1.537 0.1005 2.8483

25.0241 7.77E-14 17.3544 351.4904 0 57.2571 1.838 0.1517 2.7830

25.0361 1.67E-14 17.3544 351.5758 0 57.2571 1.85 0.1401 2.7816

25.0361 5.55E-15 17.3544 351.5758 0 57.2318 2.036 0.1741 2.7697

25.0241 0 17.3640 351.5758 0 57.3089 1.896 0.1453 2.7802

25.0241 5.55E-14 17.3544 351.5758 0 57.2318 2.066 0.2090 2.7578
