Introduction to Nature-Inspired Optimization
About this ebook

Introduction to Nature-Inspired Optimization brings together many of the innovative mathematical methods for non-linear optimization that have their origins in the way various species behave in order to optimize their chances of survival. The book describes each method, examines their strengths and weaknesses, and where appropriate, provides the MATLAB code to give practical insight into the detailed structure of these methods and how they work.

Nature-inspired algorithms emulate processes found in the natural world, and this has spurred interest in their use for optimization. Lindfield and Penny provide concise coverage of all the major algorithms, including genetic algorithms, artificial bee colony algorithms, ant colony optimization and the cuckoo search algorithm, among others. This book provides a quick reference for practicing engineers, researchers and graduate students who work in the field of optimization.

  • Applies concepts in nature and biology to develop new algorithms for nonlinear optimization
  • Offers working MATLAB® programs for the major algorithms described, applying them to a range of problems
  • Provides useful comparative studies of the algorithms, highlighting their strengths and weaknesses
  • Discusses the current state-of-the-field and indicates possible areas of future development
Language: English
Release date: August 10, 2017
ISBN: 9780128036662
Author

George Lindfield

George Lindfield is a former lecturer in Mathematics and Computing at the School of Engineering and Applied Science, Aston University in the United Kingdom.


    Book preview

    Introduction to Nature-Inspired Optimization - George Lindfield


    Chapter 1

    An Introduction to Optimization

    Abstract

    In this chapter we introduce the main classes of optimization problems and briefly describe some of the classical methods used to solve such problems. We discuss some of the limitations of classical methods and explain why methods of solution inspired by nature are attractive in solving global optimization problems.

    Keywords

    Linear programming; Non-linear optimization; Lévy distribution; Lagrange multiplier; Gradient search

    1.1 Introduction

    Optimization is the task of finding the best solutions to particular problems. These best solutions are found by adjusting the parameters of the problem to give either a maximum or a minimum value for the solution. For example, in a mathematical model of a manufacturing process we might wish to adjust the process parameters in order to maximize profit. In contrast, in the design of a steel structure required to carry a particular load, we might adjust the design parameters to minimize the weight of steel used, thereby minimizing the material cost. In both examples we are seeking to find an optimum solution. The function for which the optimum value is being sought is called the objective function, fitness function or cost function. The latter name arises because the purpose of optimization is often to reduce costs. Note that optimizing a function means finding the values of the function parameters to give the optimal or best value of the function.

    In the following sections we discuss some of the classes of optimization problems and methods of solution.

    1.2 Classes of Optimization Problems

    Optimization problems can usefully be divided into two broad classes: linear and non-linear optimization. We begin by discussing linear optimization. As the name implies, both the objective function and the constraints are linear functions. Linear optimization problems are also referred to as linear programming problems. (Here programming does not refer to computer programming.) The general form of this problem is given by (1.1), thus

    Minimize f = cᵀx
    subject to Ax = b and x ≥ 0    (1.1)

    where x is a column vector of the n parameters that we wish to determine in order to minimize f. The constants of the problem are given by an m component column vector b, an m × n array A and an n component column vector c. Note that each element of x is required to be non-negative, and this requirement is itself a constraint.

    Note that these equations assume that the m constraints on x are equalities. If the jth constraint, formed from the jth row of A, is instead an inequality of the ≥ type, it can be converted to the standard form by subtracting a slack variable, sometimes called a surplus variable, from that equation. The slack variables are added to the vector x; that is, they become extra degrees of freedom, or variables, in the problem.

    All linear optimization problems are constrained; if there were no constraints the problem would become trivial. For example, suppose we wish to find the maximum value of an unconstrained linear function of the two variables x and y with positive coefficients. Obviously the maximum value of the function is obtained when x and y tend to infinity. Similarly, the minimum value of the function is obtained when x and y tend to minus infinity.

    In this simple example the unknown variables are x and y, but real linear optimization problems may have hundreds or even thousands of variables and an equally large number of constraints. Powerful algorithms have been developed to solve these problems, for example Karmarkar (1984). However, linear programming is not a major area of application for nature-inspired optimization methods.
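
    For readers who want to experiment, a small linear programming problem can be solved in MATLAB with the linprog function from the Optimization Toolbox; here the inequality constraints are passed to linprog directly rather than being converted to the standard form with slack variables. The sketch below is illustrative only: the objective coefficients and constraints are invented for the example and do not come from the text.

        % Illustrative linear program (hypothetical data):
        %   maximize 3*x1 + 2*x2, i.e. minimize f = -3*x1 - 2*x2,
        %   subject to x1 + x2 <= 4, x1 + 3*x2 <= 6 and x1, x2 >= 0.
        c  = [-3; -2];                % objective coefficients
        A  = [1 1; 1 3];              % inequality constraint matrix
        b  = [4; 6];                  % right-hand sides
        lb = [0; 0];                  % non-negativity constraints on x
        [x, fval] = linprog(c, A, b, [], [], lb);   % requires the Optimization Toolbox
        disp(x)        % optimal values of x1 and x2
        disp(-fval)    % the corresponding maximum of 3*x1 + 2*x2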

    We now consider non-linear optimization problems. Here the objective function is non-linear and the constraints, if they exist at all, may be linear or non-linear. The general statement of a non-linear optimization problem is

    Minimize f(x)
    subject to gᵢ(x) ≤ 0,  i = 1, 2, …, m    (1.2)

    where x is a vector of the n parameters of the problem and the gᵢ are the constraint functions. Note that constraints of the ≥ type can be converted to this form by multiplying the constraint equations by −1. If the problem has no constraints it is called an unconstrained optimization problem. Non-linear problems may have many local optimum solutions, which are optimum in a specific sub-region of the solution space. However, the optimum over the whole region for which the problem is defined is called the global optimum. We can draw an analogy with a mountainous region of land: there are many peaks and valleys in the region, but only one highest peak and one lowest valley. This is an important issue in optimization, and we will return to it frequently.

    In some optimization problems the parameters can take any real values to give an optimal solution, but in others the parameters can take only positive integer values. For example, the optimal number of people involved in completing a task can only be a positive integer. This class of problem is important. It is very challenging and is often called a combinatorial problem. It includes, for example, job scheduling and the traveling salesman problem.

    We now briefly describe some of the possible ways in which non-linear optimization problems can be solved.

    1.3 Using Calculus to Optimize a Function

    It might be thought that calculus could be employed effectively to find minimum or maximum values of a function. This is true, but the number of differential coefficients required, and the complexity of the resulting algebra for anything other than simple problems, rules out this approach for most practical applications.

    We illustrate the general approach by considering a simple function of the two variables x and y. Differentiating this function with respect to x and y gives the two first-order partial derivatives ∂f/∂x and ∂f/∂y. For a maximum or minimum value these partial derivatives must equal zero; solving the resulting pair of equations, by substituting for x in terms of y, gives the values of x and y at the stationary point. In the classical calculus approach we determine whether this solution is a maximum or a minimum of the function by considering the second derivatives of the function. Using these differential coefficients we can determine the parameter D thus:

    D = (∂²f/∂x²)(∂²f/∂y²) − (∂²f/∂x∂y)²

    Substituting the values of x and y at the stationary point, if D > 0 and ∂²f/∂x² > 0 the solution is a minimum, if D > 0 and ∂²f/∂x² < 0 it is a maximum, and if D < 0 the stationary point of the function is a saddle point. In this case D and ∂²f/∂x² are both positive, and thus this solution is a minimum.
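
    A minimal sketch of this procedure, using the MATLAB Symbolic Math Toolbox, is given below. The function f = x² + y² + xy is chosen purely for illustration and is not the example used in the text; any smooth function of two variables could be substituted.

        % Locate a stationary point and classify it with the second-derivative
        % test. Requires the Symbolic Math Toolbox. The objective function is
        % an illustrative choice only.
        syms x y
        f = x^2 + y^2 + x*y;                               % illustrative objective
        g = gradient(f, [x, y]);                           % first partial derivatives
        [xs, ys] = solve([g(1) == 0, g(2) == 0], [x, y]);  % stationary point(s)
        H = hessian(f, [x, y]);                            % second derivatives
        D   = double(det(subs(H, [x, y], [xs(1), ys(1)])));   % fxx*fyy - fxy^2
        fxx = double(subs(H(1,1), [x, y], [xs(1), ys(1)]));
        if D > 0 && fxx > 0
            disp('minimum')
        elseif D > 0 && fxx < 0
            disp('maximum')
        elseif D < 0
            disp('saddle point')
        end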

    Consider a second example, that of finding the minimum value of the function

    f(x, y) = (x⁴ − 16x² + 5x + y⁴ − 16y² + 5y)/2

    This function is a famous standard test function called the Styblinski-Tang function; see Appendix A for information on this function. Differentiating this function with respect to x and y and setting each derivative to zero gives

    4x³ − 32x + 5 = 0 and 4y³ − 32y + 5 = 0

    These cubic equations are uncoupled, since each contains only one of the variables, and they may be solved independently of each other by any appropriate numerical procedure. Solving the first cubic equation in x gives the following three solutions for x: −2.9035, 0.1567 and 2.7468.

    Obviously, by symmetry, the second equation gives identical values for y. At x = −2.9035 and x = 2.7468 the second derivative of the function is positive, so these values, taken in x and y pairs, must give minimum values of the function. Substituting these x and y values in the function gives f(−2.9035, −2.9035) ≈ −78.33, f(−2.9035, 2.7468) = f(2.7468, −2.9035) ≈ −64.20 and f(2.7468, 2.7468) ≈ −50.06.

    The solution x = y = −2.9035, giving f ≈ −78.33, is the global minimum; the other three minima are local minima. Combinations of x and y that include the root 0.1567, at which the corresponding second derivative is negative, are saddle points, except for x = y = 0.1567, which is a maximum. In this problem there are nine stationary points, as shown in Figure 1.1, and each one has to be considered as a possible global minimum, with eight being rejected. In more complex problems this requirement could be onerous and the process inefficient.

    Figure 1.1 Contour plot of the Styblinski-Tang function showing a maximum, one global minimum, three local minima and four saddle points of the function.
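
    The stationary points described above can be checked numerically in a few lines of MATLAB. The sketch below uses the built-in roots function to solve the cubic and then evaluates the Styblinski-Tang function at all nine combinations of the roots; the smallest value, about −78.33, occurs at x = y = −2.9035.

        % Stationary points of the two-variable Styblinski-Tang function.
        st = @(x, y) (x.^4 - 16*x.^2 + 5*x + y.^4 - 16*y.^2 + 5*y)/2;
        r = sort(real(roots([4 0 -32 5])));   % roots of 4x^3 - 32x + 5 = 0
        [X, Y] = meshgrid(r, r);              % all nine stationary points
        F = st(X, Y);                         % function value at each point
        disp([X(:), Y(:), F(:)])              % the global minimum is the smallest F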

    Calculus methods can be extended to include the influence of constraints using, for example, the Lagrange multiplier method described in Chapter 9. However, the basic problem of the calculus approach remains that for any problem with more than a small number of variables, the algebraic manipulation and analysis becomes too complicated and inefficient.

    1.4 A Brute Force Method!

    A simple method of finding the optimal value of a function is to systematically evaluate the function for a large number of values of its parameters. Suppose, for example, we wish to determine the optima of a two-variable function to an accuracy of one part in one hundred in both x and y; this requires of the order of 10⁴ function evaluations. Table 1.1 provides some computing times required to evaluate Rastrigin's function (defined in Appendix A) in 2, 3 and 4 dimensions with a search range of −2.4 to 2.4. The computation times shown in Table 1.1 were obtained using a modest laptop computer running Matlab. For Rastrigin's function in 4 dimensions, or variables, where 1001 divisions of the search range are required, no data is provided because a rough estimate suggests that the time taken to complete the process would be in excess of 10 hours!

    Table 1.1

    Time taken (in seconds) to evaluate Rastrigin's function

    Suppose that a more powerful computer were 1000 times faster. This would be of little significance in a problem of 10 variables where, with 1000 divisions of each search range, 10³⁰ evaluations are required! Clearly, this is not a realistic way to find the optimum value of functions of many variables.

    This approach vividly illustrates the so-called curse of dimensionality: what seems an efficient method of solution for a problem in only two or three dimensions becomes extremely difficult, if not impossible, for a high-dimensional problem.
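
    A minimal sketch of the kind of timing experiment summarized in Table 1.1 is shown below for the two-dimensional case; the grid resolution and the timing code are illustrative choices rather than the exact script used to produce the table.

        % Brute-force grid evaluation of Rastrigin's function in two dimensions.
        ras = @(x, y) 20 + x.^2 + y.^2 - 10*(cos(2*pi*x) + cos(2*pi*y));
        v = linspace(-2.4, 2.4, 1001);   % 1001 divisions of the search range
        [X, Y] = meshgrid(v, v);
        tic
        F = ras(X, Y);                   % roughly 10^6 function evaluations
        t = toc;
        [fmin, k] = min(F(:));           % smallest sampled value and its index
        fprintf('min F = %.4f at (%.3f, %.3f), time = %.3f s\n', ...
                fmin, X(k), Y(k), t)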

    1.5 Gradient Methods

    Sections 1.3 and 1.4 of this chapter show that a calculus approach can only solve small problems, and that exhaustive evaluation of the function to be optimized becomes impossible when the number of variables, and possibly the required accuracy, is increased. The methods described here, generally called gradient methods, have proved effective in finding optimal values of functions, but not necessarily global optimal values. If there are many local optima then the likelihood that the algorithm will find the global optimum is reduced.
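
    The sketch below shows a basic steepest-descent iteration applied to the two-variable Styblinski-Tang function discussed in Section 1.3; the starting point, fixed step size and stopping tolerance are illustrative choices, not values taken from the text.

        % A minimal steepest-descent sketch for the Styblinski-Tang function.
        grad = @(p) [2*p(1)^3 - 16*p(1) + 2.5;   % df/dx
                     2*p(2)^3 - 16*p(2) + 2.5];  % df/dy
        p = [1; 1];                  % illustrative starting point
        alpha = 0.01;                % fixed step size
        for k = 1:1000
            g = grad(p);
            if norm(g) < 1e-6        % stop when the gradient is nearly zero
                break
            end
            p = p - alpha*g;         % step downhill along the negative gradient
        end
        disp(p)   % from this start the method converges to the local minimum
                  % near (2.7468, 2.7468), not to the global minimum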
