
STRUCTURAL OPTIMISATION OF TRUSSES

– AN OVERVIEW OF DIFFERENT METHODS

Lt Cdr V S Swaminathan
Dept of Ocean Engineering & Naval Architecture, IIT Kharagpur
Abstract

This paper presents an overview of the structural optimisation strategies used in truss problems, their merits, demerits and future trends. Comparisons between the different methods are also discussed, keeping in mind the accuracy, efficiency and convergence rate of each method. Over the years many new methods have been developed in the field of structural optimisation. They differ from the classical methods in many ways; most of them are meta-heuristic in nature, and the common factor in meta-heuristic algorithms is that they combine rules and randomness to imitate natural phenomena. The complexity of a problem often prevents the structural designer from using the classical methods for analysis and optimisation. However, one has to have a clear understanding of the new methods, and exercise caution, as any error in formulation can lead to wrong results. Even now, the goal of a common structural optimisation problem remains the minimisation of the weight of the structure. In this paper one classical case that has been researched quite well is also discussed.

1. Introduction

Optimisation of structures has gained a lot of significance in the last decade. Much of this progress was made possible by impressive developments in the field of digital computer technology. The escalation of cost and time has forced the structural designer to deviate from the classical methods of design, and structural optimisation itself has undergone an evolution from the classical methods to the meta-heuristic methods. The classical methods such as linear, nonlinear, and dynamic programming are efficient for a certain range of problems. However, they have limitations with respect to complex problems. Some techniques, including the penalty-function, augmented Lagrangian, and conjugate gradient methods, search for a local optimum by moving in a direction related to the local gradient. These methods become inefficient when searching for the optimum design of large structures, due to the large number of gradient calculations that are required.

Structural optimisation problems are generally characterized by a large number of design variables, a simple objective function and indirect but well-behaved constraint functions. The constraint functions are indirect in the sense that they cannot be expressed explicitly as functions of the design variables. Attempts to optimize structures by nonlinear programming methods have met with varying degrees of success; these methods are extremely useful in defining the design problem in proper mathematical terms. This paper compares most of the methods, classical and meta-heuristic, for their accuracy, merits, de-merits, computational time and ease, and rate of convergence. Since material cost is one of the major factors, minimisation of weight has become one of the important goals.
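The weight objective that recurs throughout the paper, W(X) = Σ li Ai ρi over the k truss members (formally set out in section 2), is cheap to evaluate; a minimal sketch follows. The three-bar geometry, lengths and areas are purely illustrative; only the density matches the 25-bar studies cited later.

```python
# Evaluate the truss weight objective W(X) = sum(l_i * A_i * rho_i).
# Lengths (in), areas (in^2) and density (lb/in^3) are illustrative values.

def truss_weight(lengths, areas, density):
    """Total weight of a truss with one material density for all members."""
    if len(lengths) != len(areas):
        raise ValueError("need one area per member")
    return sum(l * a * density for l, a in zip(lengths, areas))

lengths = [100.0, 100.0, 141.42]   # hypothetical 3-bar truss
areas   = [2.0, 2.0, 1.0]          # design variables X = (A1, A2, A3)
rho     = 0.1                      # lb/in^3, as in the 25-bar studies

W = truss_weight(lengths, areas, rho)
print(round(W, 3))   # 54.142
```

The objective is linear in the areas; all the difficulty of the problem lies in the stress, displacement and side constraints that follow.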

2. Problem formulation

In general the structural optimisation problem can be categorised into different groups, viz. shape optimisation, weight optimisation, topology optimisation, reliability, dynamic characteristics, etc. However, this paper is limited to weight optimisation only. The optimisation problem of the structure is defined as:

Minimise ƒ(X)

ƒ(X) — the objective function

subject to the constraints

Gi(X) ≤ 0,  i = 1, 2, …, n
Hj(X) = 0,  j = 1, 2, …, m

X — the design vector

As far as weight optimisation is concerned, the objective function is defined as:

Minimise W(X) = Σ (i = 1 to k) li Ai ρi

k — total number of members in the truss
li, Ai, ρi — length, area and density of the ith member

A typical structural optimisation problem is restricted by a number of constraints controlling the structural response. Constraints can be stress constraints, displacement constraints, frequency constraints, etc., with bounds on the design variables:

σl ≤ σi ≤ σu
Δl ≤ Δi ≤ Δu
Al ≤ Ai ≤ Au

l, u — lower and upper bounds

3. Overview of methods

The methods reviewed here are the modified double-cuts approach, sequential quadratic programming, the genetic algorithm, the harmony search method, the improved move limit method of SLP and the strain energy method.

3.1 Modified double-cuts approach

The work of C J Shih & H W Lee [1] is presented here. The general form of a nonlinear programming problem with fuzzy inequality constraints (FNLP) can be stated as:

Find X = [x1, x2, …, xn]

Min ƒ(X)                                            (1)
s.t. gi(X) ≤ b̃i,  i = 1, 2, …, m                    (2)

The fuzzy numbers b̃i, ∀i, lie in the fuzzy region [bi, bi + pi] with a given fuzzy tolerance pi. Assuming that the fuzzy tolerance pi for the ith fuzzy constraint is known, b̃i is equivalent to (bi + θ pi), ∀i, where θ is in [0, 1]. Eq. (2) can then be expressed in the following formulation, used for solving a FNLP problem:

s.t. gi(X) ≤ bi + (1 − α) pi,  i = 1, 2, …, m       (3)

where α ∈ [0, 1]. Thus, the FNLP problem containing the constraint given in Eq. (3) is equivalent to a parametric programming problem with α = 1 − θ. For
each α, there is an optimal solution; therefore, the solution with a grade of membership function is fuzzy. One can consequently apply the max–min operator to obtain the optimal decision. Then, the problem of Eqs. (1) and (2) can be solved by the max-λ strategy, where

λ = min[µf(X), µg1(X), µg2(X), …, µgm(X)]

Max λ                                               (4)
s.t. λ ≤ µf(X)                                      (5)
     λ ≤ µgi(X),  i = 1, 2, …, m                    (6)

where µf(X) and µgi(X) represent the membership functions corresponding to the objective function and the ith fuzzy constraint, respectively.

In the conventional α-cuts method, the value of gi(X) goes to bi + pi as αgi(X) approaches zero; simultaneously, the value of f(X) goes toward fmin (the minimum value of the objective function) as αf (the level-cut value corresponding to the objective function) goes to one. To maintain the equilibrium between αf and αgi in the optimization process, there is a tendency to pull αf toward a lower limit and to push αgi toward an upper limit so as to achieve a balanced state. The resultant optimal design eventually converges to two different compromise values α*f and α*gi, where each αgi in the formulation uses the same level of α. This consideration leads to the double-cuts approach for FNLP; the formulation can be written as

Find [X, αf, α]T

Min f(X) · αf / α                                   (7)

s.t. f(X) − [fmax − αf (fmax − fmin)] = 0   (for linear µf(X))       (8)

     f(X) − (fmin / αf) = 0                 (for nonlinear µf(X))    (9)

     gi(X) ≤ bi + (1 − α) pi,  i = 1, 2, …, m                        (10)

where αf ∈ [0, 1] and 0.01 ≤ α ≤ 1. Eq. (7) for nonlinear µf(X) can be viewed as minimising f(X) and minimising αf/α simultaneously, which in turn implies that Eq. (7) can be replaced by another utility function to be minimised, written as

Min [(f(X) − fmin)/fmin]² + [(αf/α − fmin/fmax)/(fmin/fmax)]²        (11)

An alternate function for Eq. (7) in the case of linear µf(X) can be written as

Min [(f(X) − fmin)/fmin]² + [(αf/α − αp)/αp]²                        (12)

where αp is an additional design variable with range 10⁻⁴ ≤ αp ≤ 1. The inadequacy of Eq. (7) is that it can yield a local optimum, because αf numerically goes to its minimum while α goes to its maximum. In particular, the additional variable αp in Eq. (12) tends to be useless, since it simply goes to its upper bound. Therefore, Eqs. (11) & (12) are modified into Eqs. (13) & (14) to avoid these computational difficulties.

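The single-cut (max–min) resolution behind Eqs. (4)–(6) can be illustrated numerically on a hypothetical one-variable problem: min f(x) = x with the fuzzy constraint x ≥ 4, relaxable to x ≥ 3 (tolerance p = 1). This is only a sketch of the α-parametric search with linear membership functions, not Shih & Lee's algorithm:

```python
# Max-min (max-lambda) resolution of a toy fuzzy problem:
#   min f(x) = x   s.t.  g(x) = 4 - x <= 0 (fuzzy, tolerance p = 1)
# For each level cut alpha the constraint relaxes to x >= 4 - (1 - alpha)*p.
# f_min = 3 (fully relaxed), f_max = 4 (crisp), so mu_f is linear between them.

P = 1.0
F_MIN, F_MAX = 3.0, 4.0

def best_x(alpha):
    """Minimiser of f(x) = x under the alpha-relaxed constraint."""
    return 4.0 - (1.0 - alpha) * P  # the constraint is active at the optimum

def mu_f(x):
    """Linear objective membership: 1 at f_min, 0 at f_max."""
    return (F_MAX - x) / (F_MAX - F_MIN)

# Sweep alpha and keep the design with the largest lambda = min(mu_f, mu_g),
# where mu_g equals alpha by construction of the level cut.
lam, alpha_star, x_star = max(
    (min(mu_f(best_x(a)), a), a, best_x(a))
    for a in (i / 1000.0 for i in range(1001))
)
print(round(lam, 3), round(alpha_star, 3), round(x_star, 3))  # 0.5 0.5 3.5
```

The compromise lands where the two memberships balance (λ = 0.5), which is exactly the pull between αf and αgi that motivates the double-cuts formulation above.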
Min [(f(X) − fmin)/(fmax − fmin)]² + [(αf/α − fmin/fmax)/(0.0001 − fmin/fmax)]²      (13)

Min [(f(X) − fmin)/(fmax − fmin)]² + [(αf/α − 0.0001)/(1 − 0.0001)]²                 (14)

Eq. (13) is used with nonlinear µf(X) and Eq. (14) with linear µf(X). The formulations of Eqs. (13) & (14) are developed from the idea of quadratic normalization, in which the parameter αf/α goes to a small value. Eq. (14) also shows the advantage of neglecting the additional design variable αp.

3.2 Sequential quadratic programming (SQP)

The work of R. Sedaghati & E. Esmailzadeh [2] is presented here. The optimisation problem is defined as the minimisation of the weight of the structure, subject to stress and displacement constraints and a lower bound on the cross-sectional areas. SQP is an iterative method; at iterate k it needs to solve the QP sub-problem

Min gkT d + ½ dT Bk d                               (15)
s.t. ci(xk) + ∇ci(xk)T d ≥ 0,  i = 1, 2, …, m

where xk is the current iterate, gk = ∇f(xk) and Bk is an n×n matrix. Bk is often required to be positive definite and is supposed to be an approximate Hessian of the Lagrangian

l(x, λ) = f(x) − λT c(x)                            (16)

at the current iterate xk. A merit function, normally a penalty function such as the l1 exact penalty function, is used to carry out line searches. It has been proved that SQP is globally convergent. One of the resulting difficulties of SQP for large-scale problems is that the memory required for each QP sub-problem may be very large if the original problem is large.

3.3 Genetic algorithm

The work of Kalyanmoy Deb & Surendra Gulati [3] is presented here. In this method, simulated binary crossover (SBX) and a parameter-based mutation operator are used.

3.3.1 Simulated binary crossover (SBX)

A probability distribution is used around the parent solutions to create two children solutions. In the proposed SBX operator, this probability distribution is not chosen arbitrarily. Instead, it is first calculated for the single-point crossover operator in binary-coded GAs and then adapted for real-parameter GAs. The chosen probability distribution is as follows:

P(β) = 0.5 (ηc + 1) β^ηc            if β ≤ 1
P(β) = 0.5 (ηc + 1) / β^(ηc + 2)    otherwise       (17)

where ηc is a parameter which controls the extent of spread in the children solutions. A small value of ηc allows solutions far away from the parents to be created as children, while a large value restricts the children to near-parent solutions. The procedure for computing children solutions y(1) and y(2) from two parent solutions x(1) and x(2) is as follows:

1) Create a random number u between 0 and 1.
2) Find a parameter β using the polynomial probability distribution, derived from a schema-processing point of view, as follows:

β = (2u)^(1/(ηc+1))               if u ≤ 0.5
β = (1/(2(1 − u)))^(1/(ηc+1))     otherwise         (18)

3) The children solutions are then calculated as

y(1) = 0.5[(x(1) + x(2)) − β |x(2) − x(1)|]
y(2) = 0.5[(x(1) + x(2)) + β |x(2) − x(1)|]

The above procedure is used for variables with no lower and upper bounds specified; the children solutions can then lie anywhere in the real space (−∞, ∞) with varying probability. For calculating the children solutions where lower and upper bounds (xl and xu) of a variable are specified, Eq. (18) needs to be changed as follows:

β = (αu)^(1/(ηc+1))               if u ≤ 1/α
β = (1/(2 − αu))^(1/(ηc+1))       otherwise         (19)

where α = 2 − β̄^−(ηc+1) and

β̄ = 1 + [2/(x(2) − x(1))] min[(x(1) − xl), (xu − x(2))]

3.3.2 Parameter-based mutation operator

A polynomial probability distribution is used to create a solution y in the vicinity of a parent solution x. The following procedure is used for variables whose lower and upper boundaries are not specified:

1) Create a random number u between 0 and 1.
2) Calculate the parameter δ as follows:

δ = (2u)^(1/(ηm+1)) − 1           if u ≤ 0.5
δ = 1 − [2(1 − u)]^(1/(ηm+1))     otherwise

where ηm is the distribution index for mutation and takes any non-negative value.

3) Calculate the mutated child as

y = x + δ Δmax

where Δmax is the maximum perturbation allowed in the parent solution. For variables with specified lower and upper boundaries (xl and xu), δ is instead calculated as

δ = [2u + (1 − 2u)(1 − δ̄)^(ηm+1)]^(1/(ηm+1)) − 1                if u ≤ 0.5
δ = 1 − [2(1 − u) + 2(u − 0.5)(1 − δ̄)^(ηm+1)]^(1/(ηm+1))        otherwise

where δ̄ = min[(x − xl), (xu − x)]/(xu − xl) is the normalized distance of the parent from the nearer of its bounds. Thus, in order to get a mutation effect of a 1% perturbation in the solutions, ηm ≈ 100 should be set. A GA simulation is terminated when a pre-specified number of generations has elapsed.

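The SBX and mutation operators above can be sketched in a few lines (unbounded variants of Eqs. (17)–(18); the η values in the demonstration are illustrative):

```python
import random

def sbx_children(x1, x2, eta_c, u=None):
    """Simulated binary crossover (unbounded): two children from two parents.

    beta follows Eq. (18); the children straddle the parents' midpoint
    symmetrically, so the mean of the parents is preserved.
    """
    if u is None:
        u = random.random()
    if u <= 0.5:
        beta = (2.0 * u) ** (1.0 / (eta_c + 1.0))
    else:
        beta = (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0))
    mid, spread = 0.5 * (x1 + x2), 0.5 * beta * abs(x2 - x1)
    return mid - spread, mid + spread

def polynomial_mutation(x, delta_max, eta_m, u=None):
    """Parameter-based (polynomial) mutation, unbounded variant."""
    if u is None:
        u = random.random()
    if u <= 0.5:
        delta = (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0
    else:
        delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0))
    return x + delta * delta_max

# With u = 0.5, beta = 1 and the children coincide with the parents;
# likewise delta = 0 and mutation leaves the parent unchanged.
y1, y2 = sbx_children(1.0, 3.0, eta_c=2, u=0.5)
print(y1, y2)                                            # 1.0 3.0
print(polynomial_mutation(2.0, 0.5, eta_m=100, u=0.5))   # 2.0
```

Note how a large u spreads the children far beyond the parents when ηc is small, which is exactly the exploration behaviour described above.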
3.4 Harmony search method

This section is based on the work by Kang Seok Lee & Zong Woo Geem [4]. The harmony search (HS) method was conceptualized using the musical process of searching for a perfect state of harmony. The HS algorithm does not require initial values and uses a random search instead of a gradient search, hence derivative information is unnecessary. The different steps involved in the HS method are given below.

Step 1: Initialize the optimization problem and algorithm parameters

First, the optimization problem is defined as the minimisation of the weight of a truss subject to stress and displacement constraints. The HS algorithm parameters required to solve the optimization problem are also specified in this step: harmony memory size (number of solution vectors, HMS), harmony memory considering rate (HMCR), pitch adjusting rate (PAR), and termination criterion (maximum number of searches). Here, HMCR and PAR are the parameters used to improve the solution vector.

Step 2: Initialize the harmony memory (HM)

In this step, the harmony memory (HM) matrix, whose HMS rows are the solution vectors x1, x2, x3, …, x^HMS, is filled with as many randomly generated solution vectors as the size of the HM (i.e., HMS) and sorted by the values of the objective function, f(x).

Step 3: Improvise a new harmony from the HM

A new harmony vector x′ = (x′1, x′2, …, x′N) is generated from the HM based on memory considerations, pitch adjustments, and randomization. For instance, the value of the first decision variable (x′1) for the new vector can be chosen from any value in the specified HM range (x1^1 ~ x1^HMS). Values of the other decision variables (x′i) can be chosen in the same manner. The new value is chosen using the HMCR parameter, which varies between 0 and 1, as follows:

x′i ← x′i ∈ {xi^1, xi^2, …, xi^HMS}    with probability HMCR
x′i ← x′i ∈ Xi                         with probability (1 − HMCR)

The HMCR sets the rate of choosing one value from the historic values stored in the HM, and (1 − HMCR) sets the rate of randomly choosing one value from the possible range of values. For example, an HMCR of 0.95 indicates that the HS algorithm will choose the decision variable value from the historically stored values in the HM with a 95% probability, and from the entire possible range with a 5% probability. An HMCR value of 1.0 is not
recommended, because of the chance that the solution may be improved by values not stored in the HM. This is similar to the reason why genetic algorithms use a mutation rate in the selection process. Next, every component of the new harmony vector, x′ = (x′1, x′2, …, x′N), is examined to determine whether it should be pitch-adjusted. This procedure uses the PAR parameter, which sets the rate of adjustment for the pitch chosen from the HM, as follows:

Pitch adjusting decision for x′i:  Yes with probability PAR;  No with probability (1 − PAR)

The pitch adjusting process is performed only after a value has been chosen from the HM; the value (1 − PAR) sets the rate of doing nothing. A PAR of 0.1 indicates that the algorithm will choose a neighboring value with probability 0.1 × HMCR. If the pitch adjustment decision for x′i is yes, and x′i is assumed to be xi(k), i.e., the kth element in Xi, the pitch-adjusted value of xi(k) is

x′i ← xi(k + m)     for discrete decision variables
x′i ← x′i + α       for continuous decision variables

where m is the neighboring index, m ∈ {…, −2, −1, 1, 2, …}; α = bw · u(−1, 1); bw is an arbitrary distance bandwidth for the continuous variable; and u(−1, 1) is a uniform distribution between −1 and 1. The HMCR and PAR parameters introduced in the harmony search help the algorithm find globally and locally improved solutions, respectively.

Step 4: Update the HM

If the new harmony vector is better than the worst harmony in the HM, judged in terms of the objective function value, the new harmony is included in the HM and the existing worst harmony is excluded from it. The HM is then sorted by the objective function value.

Step 5: Repeat Steps 3 and 4

The computations are terminated when the termination criterion is satisfied; if not, Steps 3 and 4 are repeated.

3.5 Improved move limit method

This section is based on the work by K V John, C V Ramakrishnan & K G Sharma [5]. A brief resume of the improved move limit method of sequential linear programming at a particular design point is given below.

Let X^P be the current design point. The linear programming problem referred to above is solved by the simplex algorithm, leading to the point X^(P+1). If X^(P+1) is a feasible point, the objective function is checked for improvement [W(X^(P+1)) < W(X^P)]. The sequence of linear programming is continued from X^(P+1) if improvement is found, taking care to compute the constraint values corresponding to this point. Otherwise the new design point is selected by quadratic interpolation between the design points X^(P+1) and X^P.

Thus if

S = X^(P+1) − X^P

X⁺ = X^P + α⁺ S                                     (20)

where α⁺ is the optimal step length along the direction S. The move limit is also modified as

M⁺ = α⁺ M^P                                         (21)

If the linear programming solution enters an infeasible region, it is steered back to the feasible domain by moving in the gradient direction of the most violated constraint. Thus the new design point is given by

X⁺ = X^(P+1) + β ∇gq(X^(P+1))                       (22)

where ∇gq is the gradient of the most violated constraint q and

β = − gq(X^(P+1)) / [∇gq(X^(P+1))T ∇gq(X^(P+1))]    (23)

It may be necessary in some problems to use this technique of steering to the feasible region repeatedly. If, after steering the design vector to the feasible region, no improvement is found in the objective function, the usability of the direction

S* = X* − X^P

is checked. The quadratic interpolation is resorted to only if this direction is usable; otherwise quadratic interpolation erroneously leads to X^P as the optimum. In such cases the quadratic interpolation is to be done between X^P and X^(P+1) and the optimization then continued.

3.6 Strain energy criterion method

The strain energy criterion method explained here is based on the work by V B Venkayya [6]. A structure with m structural elements and a specified configuration is subjected to a generalized force vector R. The problem is to obtain optimum sizes for the elements such that the weight of the structure is a minimum. A design that has the same average strain energy density in all its elements is a lower-weight design than one in which this condition is not satisfied. The statement of the optimality criterion is as follows:

"The optimum structure is the one in which the average strain energy density is the same in all its elements."

If the change in potential of the applied forces is considered as a measure of the stiffness of the structure, then it can be shown that the structure satisfying the above strain energy criterion will also be the stiffest structure for that loading condition.

The optimality criterion modified for the general case is as follows:

"The optimum design is the one in which the strain energy of each element bears a constant ratio to its energy capacity."

The energy capacity is defined as the total strain energy stored if the entire element is stressed to its limiting normal stress. The limiting normal stress can be different from the actual stress limit as long as it does not
exceed it. The step size in the iteration, based on the energy criterion, can be altered by varying the magnitude of the limiting normal stress. It should be pointed out that the definition of energy capacity is independent of the actual state of stress in the element and depends only on the volume and the limiting normal stress of the element.

The expression for the energy capacity of the ith element is given by

τi = ½ σi(U) εi(U) Vi                               (24)

Assuming the material to be linearly elastic, the relation between the limiting normal stress and strain is written as

σi(U) = Ei εi(U)                                    (25)

Substitution of equation (25) in (24) gives the expression for the energy capacity in the form

τi = ½ (εi(U))² Λ αi li                             (26)

The quantity li is defined as

li = Vi Ei / (Λ αi)                                 (27)

The scalar Λ is the base parameter for all the elements and αi is the relative value of the ith design variable. The actual design variable vector may be written as Λα. The vector α alone will be referred to as the relative design variable vector, or normalized vector; the scalar Λ is then the normalizing factor. The strain energy in the element is written in terms of its internal forces and displacements as

ui = (1/2Λ) siT vi′                                 (28)

where si is the generalized force vector of the ith element and vi′ is the corresponding relative displacement vector. It is related to the actual displacement vector vi by

vi′ = Λ vi                                          (29)

According to the modified optimality criterion, the strain energy of each element should bear a constant ratio to its energy capacity. Equations (26) and (28) together with the optimality criterion yield

Λ² = C² (ui′ / τi′)                                 (30)

where C is the constant of proportionality, and ui′ and τi′ are given by

ui′ = ½ siT vi′                                     (31)

τi′ = ½ (εi(U))² αi li                              (32)

Multiplying both sides of equation (30) by αi² and taking the square root yields

αi Λ = C αi (ui′ / τi′)^(1/2)                       (33)

where αiΛ is the ith design variable. The form of equation (33) suggests the following recursion relation for determining the design variables in each cycle:

(αi Λ)ν+1 = C (αi)ν [(ui′ / τi′)ν]^(1/2)            (34)

where the subscripts ν and ν+1 refer to the cycles of iteration. When the design conditions include multiple loading cases, equation (34) is modified to read

(αi Λ)ν+1 = C (αi)ν [(ui′max / τi′)ν]^(1/2)         (35)

where ui′max is a measure of the maximum strain energy of the ith element due to any of the loading conditions. In the case of a single loading condition with stress constraints only, the design that satisfies the optimality criterion is the lowest-weight design, and iteration using equation (35) produces such a design. When there are constraints on the sizes of the elements and multiple loading conditions, the design satisfying the optimality criterion may not be the lowest-weight design. In such cases equation (35) is used simply to generate design lines; in conjunction with a scaling procedure, a directed search for the lowest-weight design can then be made.

4. Case study of a classical problem

The 25-bar space truss problem has been considered for the case study. The 25-bar transmission tower space truss, shown in Fig. 1, has been size-optimized by many researchers. These include Schmit and Farshi [7], Schmit and Miura [8], Gellatly and Berke [9] and Rizzi [10], apart from the authors already mentioned in section 3. In these studies, the material density was 0.1 lb/in³ and the modulus of elasticity was 10,000 ksi.

This space truss was subjected to the two loading conditions shown in Table 1. The structure was required to be doubly symmetric about the x- and y-axes; this condition grouped the truss members as follows: (1) A1, (2) A2 ~ A5, (3) A6 ~ A9, (4) A10 ~ A11, (5) A12 ~ A13, (6) A14 ~ A17, (7) A18 ~ A21 and (8) A22 ~ A25. The truss members were subjected to the compressive and tensile stress limitations shown in Table 2. In addition, maximum displacement limitations of ±0.35 in. were imposed on every node in every direction. The minimum cross-sectional area of all members was 0.01 in². The comparison of results is given in Table 3.

                  Condition 1               Condition 2
Node No.      Px      Py      Pz        Px      Py      Pz
    1         0.0    20.0    -5.0       1.0    10.0    -5.0
    2         0.0   -20.0    -5.0       0.0    10.0    -5.0
    3         0.0     0.0     0.0       0.5     0.0     0.0
    6         0.0     0.0     0.0       0.5     0.0     0.0

Table 1 – Loading conditions for the 25-bar space truss (loads in kips)

Fig. 1 – 25-bar space truss

Variables    Members       Compressive stress     Tensile stress
                           limitations (ksi)      limitations (ksi)
    1        A1                 35.092                 40.0
    2        A2 ~ A5            11.590                 40.0
    3        A6 ~ A9            17.305                 40.0
    4        A10 ~ A11          25.092                 40.0
    5        A12 ~ A13          35.092                 40.0
    6        A14 ~ A17           6.759                 40.0
    7        A18 ~ A21           6.959                 40.0
    8        A22 ~ A25          11.082                 40.0

Table 2 – Member stress limitations for the 25-bar space truss
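The limits of Table 2, together with the ±0.35 in. displacement bound, enter the optimisation as the inequality constraints of section 2; a small sketch of the feasibility check (the trial stress and displacement values are illustrative):

```python
# Feasibility check against the 25-bar limits: per-group compressive limits
# (Table 2), 40 ksi in tension for all groups, and +/-0.35 in on displacements.

COMPRESSIVE_LIMIT = [35.092, 11.590, 17.305, 25.092,
                     35.092, 6.759, 6.959, 11.082]   # ksi, member groups 1..8
TENSILE_LIMIT = 40.0    # ksi
DISP_LIMIT = 0.35       # in

def is_feasible(group_stresses, displacements):
    """group_stresses[g]: signed stress of group g+1 (negative = compression)."""
    for g, s in enumerate(group_stresses):
        if s < -COMPRESSIVE_LIMIT[g] or s > TENSILE_LIMIT:
            return False
    return all(abs(d) <= DISP_LIMIT for d in displacements)

print(is_feasible([10.0] * 8, [0.1, -0.2]))           # True
print(is_feasible([-12.0] + [0.0] * 7, [0.0]))        # True  (group 1: 35.092)
print(is_feasible([0.0, -12.0] + [0.0] * 6, [0.0]))   # False (group 2: 11.590)
```

In a real run the stresses and displacements would of course come from a finite element analysis of the candidate design, not from hand-typed values.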


Members       Schmit &    Rizzi     K Deb &     K S Lee &    V B         K V John, C V
              Farshi      [10]      S Gulati    Z W Geem     Venkayya    Ramakrishnan &
              [7]                   [3]         [4]          [6]         K G Sharma [5]
A1              0.010     0.010      0.060       0.047       0.01         0.028
A2 ~ A5         1.964     1.988      2.092       2.022       1.604        1.942
A6 ~ A9         3.033     2.991      2.884       2.950       3.540        3.081
A10 ~ A11       0.010     0.010      0.010       0.010       0.01         0.01
A12 ~ A13       0.010     0.010      0.010       0.014       0.01         0.01
A14 ~ A17       0.670     0.684      0.690       0.688       0.652        0.693
A18 ~ A21       1.680     1.677      1.640       1.657       1.761        1.678
A22 ~ A25       2.670     2.663      2.691       2.663       2.446        2.627
Weight (lb)   545.22    545.16     544.984     544.38      548.07       545.49

Table 3 – Comparison of results for the 25-bar space truss (areas in in²)

5. Merits and de-merits

Compared to gradient-based mathematical optimization algorithms, the HS algorithm imposes fewer mathematical requirements and does not require initial starting values for the decision variables. The HS algorithm generates a new vector after considering all of the existing vectors, based on the HMCR and the PAR, rather than considering only two (parents) as in genetic algorithms. These features increase the flexibility of the HS algorithm and produce better solutions. However, the method has a few shortcomings: the selection of HMCR and PAR plays a vital role in the results, and the number of iterations required to converge to the optimum is very high.

The modified double-cuts approach for solving nonlinear engineering design problems is very effective for fuzzy structural optimization. It is recommended for its simple algorithm and for neglecting the redundant design variable. It also improves the final design as compared with the single-cut approach and the more sophisticated multiple-cut approach, and it is guaranteed to obtain a unique compromise final design.

It is also found that an optimization technique based on the force method is computationally far more efficient than one based on the displacement method.
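The HS improvisation step discussed above (Step 3 of section 3.4) can be sketched as follows (continuous variables only; the memory contents, ranges and parameter values are illustrative):

```python
import random

def improvise(hm, ranges, hmcr=0.95, par=0.1, bw=0.1):
    """One HS improvisation: memory consideration, pitch adjustment, randomization.

    hm     : harmony memory, a list of solution vectors
    ranges : (lower, upper) bound per decision variable
    """
    new = []
    for i, (lo, hi) in enumerate(ranges):
        if random.random() < hmcr:                 # pick from memory...
            x = random.choice(hm)[i]
            if random.random() < par:              # ...optionally pitch-adjust
                x += bw * random.uniform(-1.0, 1.0)
        else:                                      # ...or sample the full range
            x = random.uniform(lo, hi)
        new.append(min(max(x, lo), hi))            # keep within bounds
    return new

random.seed(0)
hm = [[1.0, 2.0], [1.5, 2.5], [0.5, 1.0]]          # HMS = 3, N = 2
ranges = [(0.0, 5.0), (0.0, 5.0)]
x_new = improvise(hm, ranges)
print(len(x_new))   # 2
```

This makes the point of the comparison above concrete: the new vector draws on every stored harmony component-wise, not on just two parents.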

The improved move limit method of sequential linear programming is naturally suited to this class of problems and works very well for all trial problems. The number of design iterations for convergence is nearly the same as, or even less than, that for other well-established methods. However, the optimisation of complex space truss problems using this method still needs to be verified.

The strain energy criterion method charts an efficient path to the optimum, and this makes it attractive for the optimization of structures with a large number of variables. However, the validity of the optimality criteria approaches has been established only for restrictive cases and needs to be extended to more general design conditions.

One of the resulting difficulties of SQP for large-scale problems is that the memory required for each QP sub-problem may be very large if the original problem is large.

6. Way ahead & Conclusion

Most of the new methods developed in the field of structural optimisation deviate from the classical approach. As the complexity of structures has increased, the efficiency of the method plays a vital role as far as the rate of convergence and the accuracy of the results are concerned. Most of the gradient-based methods are being replaced by meta-heuristic methods, which promise to reduce the computational difficulties. The common factor in meta-heuristic algorithms is that they combine rules and randomness to imitate natural phenomena. However, most of these methods have not yet been tested on a wide range of structural optimisation problems.

One has to exercise extreme caution while dealing with such methods, as errors in formulation can lead to wrong results. An approach that mixes different methods, exploiting the advantages of each of them, looks quite promising for complex problems.

References

[1] C.J. Shih, H.W. Lee. Modified double-cuts approach in 25-bar and 72-bar fuzzy truss optimization. Computers and Structures 84 (2006) 2100–2104.

[2] R. Sedaghati, E. Esmailzadeh. Optimum design of structures with stress and displacement constraints using the force method. International Journal of Mechanical Sciences 45 (2003) 1369–1389.

[3] Kalyanmoy Deb, Surendra Gulati. Design of truss-structures for minimum weight using genetic algorithms. Finite Elements in Analysis and Design 37 (2001) 447–465.

[4] Kang Seok Lee, Zong Woo Geem. A new structural optimization method based on the harmony search algorithm. Computers and Structures 82 (2004) 781–798.

[5] K V John, C V Ramakrishnan, K G Sharma. Minimum weight design of trusses using improved move limit method of sequential linear programming. Computers & Structures 27(5) (1987) 583–591.

[6] V B Venkayya. Design of optimum structures. Computers & Structures 1 (1971) 265–309.

[7] Schmit Jr LA, Farshi B. Some approximation concepts for structural synthesis. AIAA Journal 1974;12(5):692–699.

[8] Schmit Jr LA, Miura H. Approximation concepts for efficient structural synthesis. NASA CR-2552, Washington, DC: NASA; 1976.

[9] Gellatly RA, Berke L. Optimal structural design. AFFDL-TR-70-165, Air Force Flight Dynamics Lab., Wright-Patterson AFB, OH; 1971.

[10] Rizzi P. Optimization of multi-constrained structures based on optimality criteria. AIAA/ASME/SAE 17th Structures, Structural Dynamics, and Materials Conference, King of Prussia, PA; 1976.

